The AWS Certified Solutions Architect – Associate (SAA-C03) examination offers a comprehensive set of questions, drawing from a wide spectrum of topics. During my multiple attempts at the examination, I discerned that the questions presented weren’t merely repetitive or overly familiar. Instead, they challenged candidates with multi-faceted scenarios, often demanding the selection of multiple correct responses from a diverse set of options. These scenarios were intricately detailed, paired with answer choices that went beyond mere service names. The answers were often elaborate statements, interweaving various AWS features or services.
A web application hosted on AWS uses an EC2 instance to serve content and an RDS MySQL instance for database needs. During a performance audit, you notice frequent read operations are causing performance bottlenecks. To optimize the read performance, which of the following strategies should you implement? (Select TWO.)
A. Deploy an ElastiCache cluster to cache common queries and reduce the load on the RDS instance.
B. Convert the RDS instance to a Multi-AZ deployment for improved read performance.
C. Use RDS Read Replicas to offload read requests from the primary RDS instance.
D. Increase the instance size of the RDS database to a larger instance type with more CPU and RAM.
E. Implement Amazon Redshift to replace RDS for improved read and write operation performance.
Correct Answer:
A. Deploy an ElastiCache cluster to cache common queries and reduce the load on the RDS instance.
C. Use RDS Read Replicas to offload read requests from the primary RDS instance.
Explanation:
The correct answers are A and C, and here’s why:
A. Deploy an ElastiCache cluster to cache common queries and reduce the load on the RDS instance.
Using Amazon ElastiCache is a common strategy to enhance the performance of a database-driven application by caching the results of frequent queries. When your application queries the database, it first checks the cache to see if the result is available, which reduces the number of direct read requests to the database and improves response times for your end-users.
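To make the caching strategy concrete, here is a minimal sketch of the cache-aside pattern in Python, assuming a Redis-based ElastiCache cluster in front of the RDS MySQL instance. The endpoints, table, and key names are illustrative, and the `redis` and `pymysql` client libraries stand in for whatever data access layer the application already uses.

```python
import json

import pymysql  # MySQL client (assumed data access layer)
import redis    # client for the Redis-based ElastiCache endpoint

# Hypothetical endpoints -- substitute your own ElastiCache and RDS hosts.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)
db = pymysql.connect(host="mydb.abc123.us-east-1.rds.amazonaws.com",
                     user="app", password="...", database="appdb")

def get_user(user_id: int):
    """Cache-aside read: check ElastiCache first, fall back to RDS on a miss."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached:  # cache hit: no RDS read needed
        return json.loads(cached)
    with db.cursor(pymysql.cursors.DictCursor) as cur:  # cache miss: query RDS
        cur.execute("SELECT id, name, email FROM users WHERE id = %s", (user_id,))
        row = cur.fetchone()
    if row:
        cache.setex(key, 300, json.dumps(row))  # cache the result with a 5-minute TTL
    return row
```

The short TTL keeps cached results from going stale while still absorbing the bulk of repeated reads.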
C. Use RDS Read Replicas to offload read requests from the primary RDS instance.
Amazon RDS Read Replicas provide a way to scale out beyond the capacity of a single database deployment for read-heavy database workloads. You can create one or more replicas of a source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.
Reference: Amazon RDS Read Replicas
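As a rough illustration of option C, a read replica can be created with a single API call; the instance identifiers below are hypothetical. Once the replica reports as available, the application sends read-only queries to the replica's endpoint while writes continue to go to the primary.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica of a hypothetical primary instance.
response = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-1",      # name for the new replica
    SourceDBInstanceIdentifier="mydb-primary",  # existing primary instance
    DBInstanceClass="db.r6g.large",
)
print(response["DBInstance"]["DBInstanceStatus"])  # e.g. "creating"
```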
As for the other options:
B. Convert the RDS instance to a Multi-AZ deployment for improved read performance.
Multi-AZ deployments for Amazon RDS are designed to provide enhanced availability and durability for Database (DB) Instances, making them well-suited for production workloads. However, they do not inherently improve read performance, as the standby instance in a Multi-AZ deployment is not used to serve read traffic.
D. Increase the instance size of the RDS database to a larger instance type with more CPU and RAM.
While increasing the size of the RDS instance can improve overall performance, it is not the most cost-effective strategy for optimizing read performance specifically. This approach increases the capacity of the database to handle a larger load, but it does not address the read load issue as efficiently as caching or using read replicas.
E. Implement Amazon Redshift to replace RDS for improved read and write operation performance.
Amazon Redshift is a data warehousing service and is used for complex queries on large sets of data. It’s not a direct replacement for a transactional database like MySQL and is typically used for different types of workloads that involve analytics and data warehousing operations. Redshift is optimized for high-performance analysis and reporting on large datasets, not for transactional web application data patterns.
In a landscape where adherence to regulatory standards is paramount, a business ventures to confirm that their AWS services are compliant. A Solutions Architect is tasked with provisioning the audit team an arsenal of compliance documents to assess the services’ conformity to industry standards.
Which tool should the Architect leverage to provide comprehensive access to these vital documents?
A. Engage with AWS Artifact for immediate access to AWS compliance documents.
B. Retrieve compliance documents directly from the AWS Security Hub.
C. Deploy Amazon Inspector to collect compliance data.
D. Operate Amazon Macie for a detailed compliance report review.
Correct Answer: A. Engage with AWS Artifact for immediate access to AWS compliance documents.
Here’s the detailed explanation and reference link for the answer provided:
Engage with AWS Artifact for immediate access to AWS compliance documents.
AWS Artifact is a self-service portal that provides on-demand access to AWS security and compliance reports, such as Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies, as well as select online agreements such as the Business Associate Addendum (BAA) and the Nondisclosure Agreement (NDA). Because the audit team needs comprehensive access to documents that attest to the compliance of AWS services, AWS Artifact is the purpose-built tool for the task.
As for the other options: AWS Security Hub aggregates and prioritizes security findings across accounts rather than distributing compliance reports, Amazon Inspector runs automated vulnerability assessments on workloads, and Amazon Macie discovers and protects sensitive data stored in Amazon S3. None of these provides the audit documentation the team requires.
A corporation endeavors to migrate their web application, undergirded by IIS for Windows Server, alongside a network-attached file share, to AWS. The goal is to achieve a resilient and accessible system post-migration. The Architect is charged with the migration of the file share to a cloud service that supports Windows file storage conventions.
Which service should the Architect employ to migrate and integrate the file share seamlessly?
A. Migrate the network file share to Amazon FSx for Windows File Server.
B. Transfer the file storage to Amazon EBS.
C. Implement AWS Storage Gateway for the file share transition.
D. Opt for Amazon EFS for file storage solutions.
Correct Answer: A. Migrate the network file share to Amazon FSx for Windows File Server.
Here’s the detailed explanation and reference link for the answer provided:
Migrate the network file share to Amazon FSx for Windows File Server.
Amazon FSx for Windows File Server provides fully managed Microsoft Windows file storage and is built on Windows Server. It’s designed to be compatible with the SMB protocol and Windows NTFS, and it supports features like Active Directory integration and DFS namespaces. FSx for Windows File Server is a cloud-compatible service that makes it easy for enterprises to migrate and integrate existing Windows-based applications that require file storage.
Using FSx for Windows File Server, the company can lift and shift their existing file shares to AWS without needing to modify their applications or file management tools, maintaining the same file storage conventions they currently use.
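For illustration only, the boto3 sketch below provisions a Multi-AZ FSx for Windows File Server file system joined to an existing AWS Managed Microsoft AD. Every ID is a placeholder, and the capacity and throughput values would be sized to the workload being migrated.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Hypothetical subnet, security group, and directory IDs.
response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,  # GiB
    SubnetIds=["subnet-0abc1234", "subnet-0def5678"],
    SecurityGroupIds=["sg-0abc1234"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",  # existing managed AD for SMB auth
        "DeploymentType": "MULTI_AZ_1",       # resilient, multi-AZ deployment
        "PreferredSubnetId": "subnet-0abc1234",
        "ThroughputCapacity": 32,             # MB/s
    },
)
print(response["FileSystem"]["FileSystemId"])
```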
A tech firm’s CRM application, hosted on a fleet of on-demand EC2 instances, suffers from initial performance dips as work commences. The Architect must devise a solution to bolster application readiness and maintain peak performance from the onset of business hours.
What scaling policy should the Architect enforce to anticipate and address the morning performance surge?
A. Initiate a CPU utilization-based dynamic scaling policy.
B. Implement a timed scaling policy to augment instances prior to peak usage hours.
C. Base scaling on memory usage metrics.
D. Predictive scaling to forecast and scale for expected traffic increases.
Correct Answer: B. Implement a timed scaling policy to augment instances prior to peak usage hours.
Here’s the detailed explanation and reference link for the answer provided:
Implement a timed scaling policy to augment instances prior to peak usage hours.
Scheduled scaling allows you to set up scaling actions to start at specific times, which is useful when you can predict changes in load. For the tech firm’s CRM application, which experiences known performance dips at the beginning of the business day, implementing a scheduled scaling policy enables the system to prepare for the influx of users by increasing the number of EC2 instances before they log in. This preemptive approach ensures that the CRM application is scaled up and ready to handle requests, maintaining consistent performance levels during peak operating times.
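As a sketch of what this looks like in practice, the boto3 calls below register two recurring scheduled actions on a hypothetical Auto Scaling group: one scales out before business hours, the other scales back in afterwards to control cost. The group name, sizes, and cron times (UTC) are illustrative.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Scale out every weekday morning before users log in.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="crm-app-asg",
    ScheduledActionName="scale-out-before-business-hours",
    Recurrence="30 7 * * MON-FRI",  # 07:30 UTC, ahead of the morning surge
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)

# Scale back in after business hours to control cost.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="crm-app-asg",
    ScheduledActionName="scale-in-after-business-hours",
    Recurrence="0 18 * * MON-FRI",
    MinSize=2,
    MaxSize=12,
    DesiredCapacity=2,
)
```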
A software development entity utilizes AWS Lambda for serverless application deployment. They employ Lambda functions that integrate with MongoDB Atlas and utilize third-party APIs, necessitating the storage of sensitive credentials across development, staging, and production environments. These credentials must be obfuscated to avert unauthorized access by team members or external entities.
How should the environment variables be safeguarded to ensure maximum confidentiality and security?
A. Assume default AWS Lambda encryption is sufficient for the task.
B. Implement SSL encryption through AWS CloudHSM for enhanced security measures.
C. Resort to EC2 instance deployment for storing environment variables.
D. Encrypt the sensitive data using AWS KMS with environment variable encryption helpers.
Correct Answer: D. Encrypt the sensitive data using AWS KMS with environment variable encryption helpers.
Here’s the detailed explanation and reference link for the answer provided:
Encrypt the sensitive data using AWS KMS with environment variable encryption helpers.
AWS Lambda supports environment variables for storing configuration settings that control the behavior of your Lambda function. For sensitive information such as database credentials or API keys, AWS recommends encrypting the environment variables using AWS Key Management Service (KMS). The Lambda service integrates with KMS to automatically encrypt and decrypt these environment variables. When you create or update a Lambda function and its environment variables, you can specify a KMS key and use the Lambda encryption helpers to handle the encryption and decryption of this data.
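The snippet below sketches the decryption side of this pattern inside a Lambda function, following the helper code the Lambda console generates: the environment variable holds base64-encoded KMS ciphertext, and the function name serves as encryption context so the value can only be decrypted by this function. The variable name is a hypothetical example.

```python
import base64
import os

import boto3

kms = boto3.client("kms")

def decrypt_env(name: str) -> str:
    """Decrypt an environment variable encrypted with the Lambda encryption helpers."""
    ciphertext = base64.b64decode(os.environ[name])
    plaintext = kms.decrypt(
        CiphertextBlob=ciphertext,
        EncryptionContext={
            "LambdaFunctionName": os.environ["AWS_LAMBDA_FUNCTION_NAME"]
        },
    )["Plaintext"]
    return plaintext.decode("utf-8")

def handler(event, context):
    # Hypothetical variable holding the encrypted MongoDB Atlas connection string.
    mongo_uri = decrypt_env("MONGO_ATLAS_URI")
    # ... connect to MongoDB Atlas and the third-party APIs here ...
```

Decrypting once outside the handler (and caching the plaintext in memory) would avoid a KMS call on every invocation.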
While the official SAA-C03 exam guide does provide substantial coverage, it’s crucial to recognize its limitations. There were myriad topics, technologies, and services beyond its scope, underscoring the necessity for holistic preparation. To give potential candidates a glimpse, here are some focal areas from my exam experience:
Apache Technologies: The exam delved deep into Apache’s suite, covering technologies like Apache Spark, Apache Parquet, Apache Kafka, and more.
Disaster Recovery: There was a pronounced focus on disaster recovery, encompassing key concepts such as RTO (Recovery Time Objective), RPO (Recovery Point Objective), and the relevant AWS tools to address them.
Kubernetes: The test touched upon various Kubernetes-centric technologies, notably the Kubernetes Metrics Server and Kubernetes Cluster Autoscaler.
Amazon S3 Features: Questions around Amazon’s Simple Storage Service (S3) and its nuanced features like S3 Access Point and S3 Lifecycle Policy were prevalent.
Machine Learning: The exam presented scenarios centered on machine learning, spotlighting AWS offerings like Amazon SageMaker and Amazon Transcribe.
Emerging AWS Offerings: The test also introduced queries on newer AWS services, such as the Lambda function URL feature and the AWS Elastic Disaster Recovery service.
These insights emphasize the significance of adopting an expansive and detailed preparation methodology for the SAA-C03 exam, ensuring a firm grasp on both mainstream and niche topics for a triumphant outcome.
As I initially ventured into the SAA-C03 online exam through Pearson Vue in early 2023, my feelings oscillated between sheer enthusiasm and palpable apprehension. Weeks of meticulous preparation had gone into mastering the extensive AWS services, architectures, and best practices. Yet, the intricacy of the SAA-C03 exam surpassed my expectations, confronting me with nuanced questions that demanded a profound grasp of AWS functionalities and discernment amidst closely related choices.
The swift progression of time during the exam was a testament to its rigorousness; it wasn’t just about technical acumen but also about making swift, informed decisions. Much to my chagrin, my initial attempt didn’t culminate in a passing score. While the initial sting of disappointment was potent, I chose resilience over resignation, using this setback as a catalyst for deeper introspection and redoubled effort.
Having previously navigated the simpler waters of the CLF-C01 exam, the SAA-C03 felt like uncharted territory with its heightened complexity. While I had immersed myself in the SAA-C03 video course lessons, I acknowledged the oversight in not dedicating adequate time to practice tests, which likely played a role in my initial stumble. Undeterred, I fortified my resolve for the subsequent attempt.
My Nuggets of Wisdom for the SAA-C03 Exam:
Thorough Preparation: The bedrock of SAA-C03 success lies in an in-depth understanding of AWS services in their myriad applications. A multifaceted approach to preparation, embracing official documentation, practice exams, and real-world application, is non-negotiable. Take the time to deconstruct and revisit practice exam explanations to ensure a comprehensive grasp of all exam facets.
Mastering Time: The exam’s temporal constraints necessitate strategic agility. Cultivate techniques to swiftly discern question types, prune out incorrect alternatives, and optimize the accuracy-speed equilibrium.
Hands-on Exploration: Theoretical knowledge finds its true potency when applied. Engaging directly with AWS services crystallizes understanding and anchors memory. Incorporating hands-on exercises, such as those from the PlayCloud labs in the SAA-C03 course, is a prudent strategy.
Growth in Adversity: An unsuccessful exam attempt is not a cul-de-sac but a detour signpost, guiding towards areas needing more attention. Embrace this feedback, solicit expert counsel, and perhaps consider amplifying your repository of study resources.
Relentless Tenacity: Triumph often lies just beyond adversity. Foster a mindset of unyielding persistence, viewing challenges as milestones en route to the pinnacles of certification success.
Welcome to the “Djamgatech Education” podcast – your ultimate educational hub where we dive deep into an ocean of knowledge, covering a wide range of topics from cutting-edge Artificial Intelligence to fundamental subjects like Mathematics, History, and Science. But that’s not all – our platform is tailored for learners of all ages and stages, from child education to continuing education across a multitude of subjects. So join us on this enlightening journey as we break down complex topics into digestible, engaging conversations. Stay curious, stay informed, and stay tuned with Djamgatech Education! In today’s episode, we’ll cover the importance of the SAA-C03 certification for IT professionals, the wide range of topics covered in the SAA-C03 exam, the challenges and insights gained from the initial exam attempt, the keys to success in the SAA-C03 exam, and the availability of Etienne Noumen’s book for comprehensive study material and practice tests.
Becoming certified is a big deal for IT professionals nowadays. It’s a key milestone that opens doors for career growth in the highly competitive industry. One certification that stands out is the AWS Certified Solutions Architect – Associate, also known as SAA-C03. In this article, I’ll take you through my personal journey with the SAA-C03 exam.
Let’s talk about the challenges I faced. First off, the exam is no walk in the park. It tests your ability to design cost-effective, scalable, high-performing, and resilient cloud solutions within the AWS platform. So you need to be well-prepared and have a solid understanding of the AWS Well-Architected Framework.
Overcoming setbacks was tough, but perseverance pays off. When I encountered difficulties, I sought out additional resources, such as online forums and practice exams. These helped me fill any knowledge gaps and gain more confidence in my abilities.
Throughout this process, I learned some valuable lessons. One important insight was that the SAA-C03 exam covers a range of topics, including architecture, security, and deployment strategies. So, brushing up on these areas is essential for success.
Being AWS Certified Solutions Architect – Associate not only boosts your career prospects but also enhances your credibility. It demonstrates your expertise in AWS services and shows that you can design robust cloud solutions. This certification gives you confidence when interacting with stakeholders and customers, as they know you have the skills to meet their needs.
So, if you’re an IT professional looking to take your career to the next level, consider becoming an AWS Certified Solutions Architect – Associate. The SAA-C03 exam may be challenging, but with dedication and the right resources, you can achieve success. Good luck on your certification journey!
The AWS Certified Solutions Architect – Associate (SAA-C03) exam is no walk in the park. It covers a wide range of topics and poses challenging questions that demand in-depth knowledge and critical thinking. Having attempted the exam multiple times, I can testify to the complexity and depth of the questions.
What sets this exam apart is the way it challenges candidates with multi-faceted scenarios. It’s not just about regurgitating information or selecting the obvious answers. Instead, you are presented with intricately detailed scenarios and asked to choose multiple correct responses from a diverse set of options. This requires a deep understanding of the subject matter and the ability to apply your knowledge in practical scenarios.
The official SAA-C03 exam guide does provide a solid foundation, but it is important to recognize its limitations. The scope of the exam is vast, and there are many topics, technologies, and services that go beyond what is covered in the guide. To succeed in the exam, you need to take a holistic approach to your preparation.
Based on my own exam experience, there are several focal areas that you should pay special attention to. One such area is Apache technologies. The exam delves deep into Apache’s suite of technologies, including Apache Spark, Apache Parquet, and Apache Kafka. Make sure you have a good understanding of these technologies and how they are used in AWS environments.
Disaster recovery is another important topic that you should be well-versed in. The exam places a lot of emphasis on concepts such as Recovery Time Objective (RTO) and Recovery Point Objective (RPO), as well as the AWS tools and services that can help you achieve these objectives.
Kubernetes is also a key area that you should focus on. The exam touches upon various Kubernetes-centric technologies, such as the Kubernetes Metrics Server and Kubernetes Cluster Autoscaler. Understanding how these technologies work and how they integrate with AWS services is crucial.
Amazon S3 features are another recurring theme in the exam. You can expect questions on features like S3 Access Point and S3 Lifecycle Policy. Familiarize yourself with these features and know how to use them effectively in different scenarios.
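As a quick refresher on what an S3 Lifecycle Policy actually looks like, here is a minimal boto3 sketch that transitions objects to cheaper storage classes and eventually expires them. The bucket name, prefix, and day thresholds are made up for illustration.

```python
import boto3

s3 = boto3.client("s3")

# Tier ageing log objects down to cheaper storage, then delete them after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-logs",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```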
Machine learning is a hot topic in today’s technology landscape, and the SAA-C03 exam reflects that. You can expect scenarios that center around machine learning and AWS offerings like Amazon SageMaker and Amazon Transcribe. Make sure you understand the core concepts of machine learning and how these AWS services fit into the big picture.
Lastly, be prepared for questions on emerging AWS offerings. The exam may introduce queries on newer services that are not covered in traditional study materials. Examples of these could be the Lambda function URL feature or the AWS Elastic Disaster Recovery service. Stay up to date with the latest AWS announcements and familiarize yourself with these new offerings.
In conclusion, the SAA-C03 exam demands a comprehensive and detailed preparation methodology. You need to have a solid grasp on both mainstream and niche topics to succeed. Study the official exam guide but go beyond it. Explore additional resources, practice with hands-on labs, and stay updated with the latest AWS developments. By adopting this approach, you will be well-prepared for the challenges that await you in the exam room. Good luck!
So, let’s talk about my SAA-C03 exam journey. It was quite a rollercoaster ride, to say the least. When I first signed up for the online exam through Pearson Vue, I was filled with excitement and a bit of nervousness. I had spent weeks preparing for this moment, diving deep into the world of AWS services, architectures, and best practices. But little did I know what I was getting myself into.
The SAA-C03 exam proved to be more challenging than I had anticipated. The questions were not just about regurgitating information, but rather required a profound understanding of AWS functionalities and the ability to make informed decisions. Time seemed to fly by during the exam, a clear indication of its rigour. It was not just about technical know-how, but also about being able to think on your feet and make quick choices.
Unfortunately, my first attempt did not end in the passing score I had hoped for. It was a tough pill to swallow, the disappointment was real. However, I made a conscious decision not to let this setback define me. Instead, I chose to channel my disappointment into introspection and double down on my efforts.
I realized that one of my mistakes was not dedicating enough time to practice tests. I had focused primarily on the SAA-C03 video course lessons, neglecting the importance of practicing with sample questions. In hindsight, it was a crucial oversight. But I refused to let it discourage me. I took it as a lesson learned and a motivation to do better in my next attempt.
The SAA-C03 exam felt like uncharted territory. It was a significant step up from the CLF-C01 exam that I had previously conquered. The complexity was on a whole new level. But I was determined to rise to the challenge. I knew that I had to be better prepared this time around.
So, armed with renewed determination, I dove back into my studies. I made sure to not only review the course material but also to dedicate ample time to practice tests. I wanted to familiarize myself with the types of questions I might encounter and train my mind to think critically.
And guess what? The second time was the charm! I walked into the exam room with more confidence, armed with the lessons I had learned from my previous attempt. I felt better equipped to tackle the challenges the SAA-C03 exam threw at me. And it paid off. When I saw that passing score on the screen, it was pure elation.
Looking back on my SAA-C03 exam journey, I can’t help but feel proud of how far I’ve come. Yes, there were setbacks and moments of doubt, but I didn’t let them define me. Instead, I used them as stepping stones towards my success. The SAA-C03 exam was a true test of my knowledge and resilience, and I emerged stronger because of it. Now, I can confidently say that I am an AWS Certified Solutions Architect and ready to take on new challenges in the world of cloud computing.
When it comes to preparing for the SAA-C03 exam, I’ve got some valuable nuggets of wisdom to share with you. The key to success lies in thoroughly understanding the various AWS services and how they can be applied in different scenarios. So, make sure you take a multifaceted approach to your preparation. Dive into the official documentation, take practice exams, and don’t forget to apply what you’ve learned in real-world situations. It’s important to deconstruct and revisit the explanations for practice exam questions to ensure you have a comprehensive grasp of all the exam facets.
Another essential aspect of exam success is mastering your time. The SAA-C03 exam has time constraints, so you’ll need to develop techniques to quickly identify question types, eliminate incorrect options, and strike the right balance between accuracy and speed. It may take some practice, but with strategic agility, you can optimize your performance.
Theory alone won’t cut it. To truly solidify your understanding and enhance your memory, you need to get hands-on with AWS services. This means engaging directly with the tools and applications. There are plenty of hands-on exercises available, such as those offered in the SAA-C03 course, like the PlayCloud labs. By incorporating these exercises into your study routine, you’ll gain practical experience and a deeper understanding of how things work.
Remember, even if you experience setbacks along the way, they shouldn’t be viewed as dead ends. An unsuccessful attempt at the exam is more like a detour signpost, guiding you towards areas that need more attention. Embrace the feedback, seek advice from experts, and consider expanding your study resources. Sometimes, a fresh perspective and additional resources can make all the difference.
Lastly, keep in mind that success often lies just beyond adversity. Cultivate a mindset of relentless tenacity, where challenges are seen as stepping stones to your certification goals. With persistence and determination, you can overcome any obstacle that comes your way.
So, to summarize, thorough preparation, mastering your time, hands-on exploration, growth through adversity, and relentless tenacity are the key elements that will help you succeed in the SAA-C03 exam. Good luck on your journey to certification success!
Hey there, tech enthusiasts and future solution architects! We’ve got something exciting just for you. If you’re gearing up to take on the AWS Solutions Architect Associates SAA Certification, then you absolutely need to check out Etienne Noumen’s fantastic book called “Latest AWS Solutions Architect Associates SAA Certification Practice Tests and Quizzes Illustrated”. This book is seriously packed with amazing resources that’ll give you an edge on the SAA-C03 exam.
Inside, you’ll find over 250 quizzes, flashcards, practice exams, and cheat sheets specifically tailored for this certification. It’s the ultimate guide to help you master AWS, boost your confidence, and ace the exam. But that’s not all! The book also includes uplifting testimonials from people who have successfully used it to pass their exams with flying colors.
On this episode, we discussed the importance of the SAA-C03 certification for IT professionals, covering topics such as Apache technologies, disaster recovery, Kubernetes, Amazon S3 features, machine learning, and emerging AWS offerings. We shared insights from a challenging first exam attempt, emphasizing the value of thorough preparation, time management, hands-on exploration, growth in adversity, and tenacity, and highlighted Etienne Noumen’s comprehensive study material and practice tests for the SAA-C03 certification exam. Thank you for joining us on the “Djamgatech Education” podcast, where we strive to ignite curiosity, foster lifelong learning, and keep you at the forefront of educational trends – so stay curious, stay informed, and stay tuned with Djamgatech Education!
I took the AWS SAA-C03 exam this morning and received an email notification from Credly just two hours after the end of the exam: badge received, exam passed. Phew.
Started the Adrian Cantrill course almost exactly two months ago. Created a lot of notes with video screenshots and my own comments. Went through all 6 TD exams in review mode… that was a shocker: so many details and services that I’m pretty sure weren’t mentioned in the course videos. About half of my scores were just above 70%, the other half just below. In any case, the practice exams were extremely helpful and probably essential for passing the exam.
I felt confident before the exam as I had memorized my notes quite well. Nevertheless, I found the exam pretty hard and often wasn’t really sure about my choices. Still, it was enough for 793 points…
A few questions/topics that came up in the exam:
– Aurora Auto Scaling
– MySQL how to do encryption in transit
– EKS, a lot of questions!
– Windows Server File Share
– EFS read only implementation (POSIX)
– MongoDB
– EventBridge / Scheduled
– SQS Cross-Account access
- CloudTrail in combination with AWS Organizations
Read more testimonials and practice Tutorial Dojo-style exams in the eBook below:
AWS uses these to trial new questions, to my knowledge. They aren’t scored, but you don’t know which 15 they are.
If we do those questions and get them wrong, do we lose marks? “Unscored” means they don’t count at all. It makes no difference whether they are all right or wrong.
So basically we get marked out of 50, not 65, is that correct? That is correct. Your score will be based on the 50 graded questions.
Treat the test as 50 questions, but really there are 65; just hope that the ones you get wrong are among the unscored 15, and that you smash the 50 scored questions.
The 15 are new questions Amazon is trialing to assess their level of difficulty, based on the percentage of people who get them right. Questions with a relatively low percentage may be classified as difficult, or conversely rated as easy. Amazon may also eventually decide to discard a question and not include it in the bank of graded questions.
Welcome to the “Djamgatech Education” podcast and blog – your ultimate educational hub. Get ready to dive deep into an ocean of knowledge as we explore a wide range of topics, from cutting-edge Artificial Intelligence and expansive Cloud technologies, to fundamental subjects like Mathematics, History, Geography, Economics, and Science. But that’s not all – our platform is designed for learners of all ages and stages, making us your go-to resource for child education, extracurricular activities, and continuing education across a multitude of subjects. Our mission is to ignite your curiosity, foster lifelong learning, and keep you up to date with the latest trends in education. So, stay curious, stay informed, and tune in to Djamgatech Education for enlightening conversations that break down complex topics into easily digestible discussions. In today’s episode, we’ll cover the foundational AWS Certified Cloud Practitioner certification, testimonials from recent exam passers, tips for studying and passing the exam, changes in the exam structure and content, and a comprehensive guide for preparing for the 2023 AWS CCP exam.
The AWS Certified Cloud Practitioner is a great starting point for individuals with no prior IT or cloud experience who are looking to switch to a career in the cloud or for those line-of-business employees who want to gain foundational cloud literacy. It validates your foundational, high-level understanding of AWS Cloud, services, and terminology. The exam is 90 minutes long and consists of 65 questions that are either multiple choice or multiple response.
The exam fee is $100, and it is offered in multiple languages including English, Japanese, Korean, Simplified Chinese, Traditional Chinese, Bahasa (Indonesian), Spanish (Spain), Spanish (Latin America), French (France), German, Italian, and Portuguese (Brazil).
There are no prerequisites to prepare for and take the AWS Certified Cloud Practitioner exam. The content outline is designed for candidates new to Cloud who may not have an IT background. While having up to 6 months of exposure to AWS Cloud can be helpful, it is not required.
Earning this certification can greatly benefit your career. It serves as an entry point to a cloud career for candidates from non-IT backgrounds, and job listings requiring AWS Certified Cloud Practitioner have increased by 84%.
After obtaining the AWS Certified Cloud Practitioner certification, you can consider taking the AWS Certified Solutions Architect – Associate, AWS Certified Developer – Associate, or AWS Certified SysOps Administrator – Associate certifications to further advance your career in roles such as cloud architect, cloud engineer, developer, and systems administrator.
The AWS Certified Cloud Practitioner certification is valid for 3 years. Before it expires, you can recertify by retaking the latest version of the exam or by upgrading to any of the Associate or Professional-level certifications.
I recently came across some testimonials, tips, and key resources from individuals who have recently passed the AWS Certified Cloud Practitioner (CCP) exam. It seems like a lot of people found success in their preparation and were able to pass with varying levels of prior experience.
One person mentioned that they prepared hard for the exam despite never having any AWS Cloud experience. They studied intensively for 15 days, with intermittent preparation over the course of 3-6 months, and found resources like Stephane Maarek’s Udemy course, Tutorials Dojo’s Udemy practice sets, Tutorials Dojo cheat sheets, and their own notes to be helpful. They advised focusing on storage classes, VPC, and practical applications of the CAF. The exam covered topics like Kendra, carbon footprint, and instance types.
Another individual passed the CCP exam without any prior AWS or cloud experience. Their preparation involved repeatedly reading the relevant product information on the AWS website and matching keywords in the exam questions to the closest available product. They emphasized the importance of memorizing the exam objectives to pass.
Another testimonial shared an interesting experience where they accidentally rescheduled their exam and ended up with only 2 days to cram. Despite this, they managed to pass. They had about a month of previous experience with AWS in a non-professional setting. They purchased Neal Davis’ CCP course and worked through 6 practice exams from Stephane Maarek. Although they initially scored lower on the practice exams, they were able to answer a few questions from the practice exams on the real exam. They also noted that the exam covered some questions on the Cloud Adoption Framework.
Another successful exam taker mentioned resources like Tutorial Dojo’s CCP practice exams, Digital Cloud’s CCP practice exams, and Stephane Maarek’s videos. They mentioned that due to time constraints (working full-time and having kids), they were unable to finish all of the videos but found them helpful. They wrote hand notes on services, mainly focusing on areas where they struggled, and combined it with cheat sheets and slides.
In summary, it seems that a combination of studying resources like Udemy courses, practice exams, reading AWS documentation, and taking notes on important concepts helped these individuals pass the CCP exam. Despite varying levels of experience, they all highlighted the importance of understanding the baseline knowledge required for this exam.
Now, let me tell you something. I work in tech, but I had absolutely zero experience with AWS or IT in general, so everything was completely new to me. I decided to start off by taking the “AWS for non-engineers” course on LinkedIn Learning. It was an alright introduction, but honestly, it didn’t cover everything I needed to know. There was a lot of filler content that didn’t hit the mark.
After that, I tried out Stephane Maarek’s first practice exam, and let’s just say I scored a whopping 46%. Yeah, not so great. But I didn’t give up. I scheduled the actual exam for two weeks later and signed up for Stephane’s full Udemy CCP course. I then managed to get through the first 11 sections, doing about one to two sections per day after work. After each section, I made sure to do all the section summary quizzes multiple times and reviewed all the wrong answers.
I also took all six of Stephane’s practice tests, consistently scoring anywhere from the low 60s to mid 70s. I was prepared to fail and reschedule the exam for a later date, but guess what? The actual exam questions were way easier than I expected. I might have even gotten a little lucky, but Stephane’s practice tests were definitely harder. There were some questions about the well-architected framework that I found quite easy, but I did stumble a bit on a few AWS Outposts questions.
Overall, the exam was foundational, with a mix of tricky and easy questions. But here’s the interesting part – I actually had some time left over. That’s pretty cool, right?
Now, let me share with you the resources that really helped me out. First, I made use of the AWS training and AWS Skill Builder, as well as watching some helpful videos on AWS Twitch. I also purchased Adrian Cantrill’s SAA and Developer Associate courses, since I already had some of his other courses. I revisited some sections that I needed to brush up on.
To further enhance my knowledge, I dived into the AWS white papers on the six pillars of the Well-Architected Framework and Billing and Pricing. And let’s not forget about ACloudGuru. My work actually had a business plan subscription, so I had access to their CCP and practice exams. Talk about winning, right?
So there you have it. I passed my AWS CCP exam and I couldn’t be happier. It was definitely a journey, but with the right resources and a bit of perseverance, it’s definitely doable.
Hey there! Have you heard the news? AWS has just announced a new version of the AWS Certified Cloud Practitioner exam, called CLF-C02. In this podcast, we’ll dive into the changes and discuss what topics are covered in the updated exam, along with tips on how to prepare for success.
Let’s start with some quick facts. The CLF-C02 exam is replacing the previous CLF-C01 exam, and the last day to take the old exam is September 18th, 2023. The new exam will be available from September 19th, 2023, and registration opens on August 22nd.
So, what’s different about the new exam? Well, it now includes new AWS services and features, keeping you up to date with the latest advancements in cloud computing.
Now, let’s talk about the exam structure. The AWS Certified Cloud Practitioner exam consists of 65 multiple-choice and multiple-response questions. Out of these, only 50 will be graded, while the remaining 15 will be used for data collection purposes. Unfortunately, you won’t know which questions are graded or ungraded.
You’ll have 90 minutes to complete the exam, and a passing score of 700 out of 1000 is required. The exam fee is $100 USD.
Moving on to the exam changes, the new CLF-C02 exam focuses on various areas, including threat detection and incident response, security logging and monitoring, identity and access management, and data protection.
There have been some adjustments in domain percentages as well. The Cloud Concepts domain has decreased from 26% to 24%, while Security and Compliance have increased from 25% to 30%. Cloud Technology and Services have gone up from 33% to 34%, and Billing, Pricing, and Support have decreased from 16% to 12%.
CLF-C01 vs CLF-C02: Exam Topics
Keep in mind that “Migration” and “Business applications” are no longer out-of-scope in this version of the exam. Also, the new exam places greater emphasis on understanding Cloud Design principles within the context of the AWS Well-Architected Framework.
There have been several additions to the exam, such as migration strategies, AWS IAM Identity Center, AWS Wavelength Zones, database migration, edge services like CloudFront and Global Accelerator, storage classes, AI/ML services, and more. However, it’s important to note that this exam focuses on general concept knowledge of AWS services and their functionalities, rather than the design and implementation aspects.
So, if you’re planning to take the AWS Certified Cloud Practitioner exam, make sure to understand these changes, study the updated topics, and utilize the suggested resources for preparation. Good luck on your cloud journey!
Before we move forward, I want to take a minute to give a shout-out to our amazing sponsor for today’s episode. If you’re on the path to becoming an AWS Cloud Practitioner and need a solid study resource, you’ve come to the right place. Introducing Etienne Noumen’s comprehensive guide, ‘AWS Cloud Practitioner Certification Practice Exam Prep‘.
Now, what makes this resource truly special is that it’s tailor-made for the 2023 AWS CCP exam. You won’t find any outdated information here! This guide is jam-packed with practice tests that closely resemble the current format and content of the exam. So, when test day arrives, you’ll be ready for anything that comes your way.
Etienne Noumen, our expert in all things cloud computing, has poured his heart and soul into creating this book. He understands the ins and outs of the AWS ecosystem like nobody else. And his dedication to making complex concepts easy to grasp truly shines through in his explanations and walkthroughs.
Each chapter of this guide delves deep into the key concepts and principles that are essential for the AWS Cloud Practitioner exam. It’s more than just memorization, though. Etienne emphasizes understanding the ‘why’ behind each concept, which will set you apart from the rest of the pack.
Whether you’re a beginner just dipping your toes into the world of cloud computing, or a professional looking to expand your knowledge, this practice exam prep has got you covered. No more scrambling to gather resources from different places – everything you need is right here within this comprehensive guide.
And the best part? You can find it conveniently on platforms like Amazon, Apple, Google, Barnes & Noble, and Shopify. So, no matter which platform you prefer, you can easily access this valuable resource.
Ready to take your AWS Cloud Practitioner journey to the next level? Simply click the link in our show notes and make your preparation more effective and less stressful.
In this episode, we covered the foundational AWS Certified Cloud Practitioner certification, heard testimonials from recent exam passers, shared tips for studying, discussed upcoming changes to the exam, and recommended a comprehensive exam prep guide – thanks for listening to today’s episode, I’ll see you guys at the next one and don’t forget to subscribe!
What are the Top 10 AWS jobs you can get with an AWS certification in 2022 plus AWS Interview Questions
AWS certifications are becoming increasingly popular as the demand for AWS-skilled workers continues to grow. AWS certifications show that an individual has the necessary skills to work with AWS technologies, which can be beneficial for both job seekers and employers. AWS-certified individuals can often command higher salaries and are more likely to be hired for AWS-related positions. So, what are the top 10 AWS jobs that you can get with an AWS certification?
1. AWS Solutions Architect:
AWS solutions architects are responsible for designing, implementing, and managing AWS solutions. They work closely with other teams to ensure that AWS solutions are designed and implemented correctly.
AWS Architects, AWS Cloud Architects, and AWS solutions architects spend their time architecting, building, and maintaining highly available, cost-efficient, and scalable AWS cloud environments. They also make recommendations regarding AWS toolsets and keep up with the latest in cloud computing.
Professional AWS cloud architects deliver technical architectures and lead implementation efforts, ensuring new technologies are successfully integrated into customer environments. This role works directly with customers and engineers, providing both technical leadership and an interface with client-side stakeholders.
2. AWS SysOps Administrator:
AWS SysOps administrators are responsible for managing and operating AWS systems. They work closely with AWS developers to ensure that systems are running smoothly and efficiently.
A Cloud Systems Administrator, or AWS SysOps administrator, is responsible for the effective provisioning, installation/configuration, operation, and maintenance of virtual systems, software, and related infrastructures. They also maintain analytics software and build dashboards for reporting.
3. AWS DevOps Engineer:
AWS DevOps engineers are responsible for designing and implementing automated processes for Amazon Web Services. They work closely with other teams to ensure that processes are efficient and effective.
AWS DevOps engineers design AWS cloud solutions that impact and improve the business. They also perform server maintenance and implement any debugging or patching that may be necessary. Among other DevOps things!
4. AWS Cloud Engineer:
AWS cloud engineers are responsible for designing, implementing, and managing cloud-based solutions using AWS technologies. They work closely with other teams to ensure that solutions are designed and implemented correctly.
5. AWS Network Engineer:
AWS network engineers are responsible for designing, implementing, and managing networking solutions using AWS technologies. They work closely with other teams to ensure that networking solutions are designed and implemented correctly.
Cloud network specialists, engineers, and architects help organizations successfully design, build, and maintain cloud-native and hybrid networking infrastructures, including integrating existing networks with AWS cloud resources.
6. AWS Security Engineer:
AWS security engineers are responsible for ensuring the security of Amazon Web Services environments. They work closely with other teams to identify security risks and implement controls to mitigate those risks.
Cloud security engineers provide security for AWS systems, protect sensitive and confidential data, and ensure regulatory compliance by designing and implementing security controls according to the latest security best practices.
7. AWS Database Administrator:
As a database administrator on Amazon Web Services (AWS), you’ll be responsible for setting up, maintaining, and securing databases hosted on the Amazon cloud platform. You’ll work closely with other teams to ensure that databases are properly configured and secured.
8. Cloud Support Engineer:
Support engineers are responsible for providing technical support to AWS customers. They work closely with customers to troubleshoot problems and provide resolution within agreed-upon SLAs.
9. Sales Engineer:
Sales engineers are responsible for working with sales teams to generate new business opportunities through the use of AWS products and services. They must have a deep understanding of AWS products and of how potential customers can use them to solve their business problems.
10. Cloud Developer
An AWS Developer builds software services and enterprise-level applications. Generally, previous experience working as a software developer and a working knowledge of the most common cloud orchestration tools are required to get, and succeed in, an AWS cloud developer job.
Cloud consultants provide organizations with technical expertise and strategy in designing and deploying AWS cloud solutions or in consulting on specific issues such as performance, security, or data migration.
AWS certified professionals are in high demand across a variety of industries. AWS certs can open the door to a number of AWS jobs, including cloud engineer, solutions architect, and DevOps engineer.
Through studying and practice, any of the listed jobs could become available to you once you pass your AWS certification exams. Educating yourself on AWS concepts plays a key role in furthering your career and earning not only a higher salary but also a more engaging position.
AWS Azure Google Cloud Certifications Testimonials and Dumps
Do you want to become a Professional DevOps Engineer, a Cloud Solutions Architect, a Cloud Engineer, a modern Developer or IT Professional, a versatile Product Manager, or a hip Project Manager? Cloud skills and certifications can be just the thing you need to make the move into the cloud or to level up and advance your career.
85% of hiring managers say cloud certifications make a candidate more attractive.
Build the skills that’ll drive your career into six figures.
In this blog, we are going to feed you AWS, Azure, and GCP cloud certification testimonials and a dump of frequently asked questions and answers.
Went through the entire CloudAcademy course. Most of the info went out the other ear. Got a 67% on their final exam. Took the ExamPro free exam, got 69%.
Was going to take it last Saturday, but I bought Tutorials Dojo’s exams on Udemy. Did one Friday night, got a 50%, and rescheduled the exam for a week later, to today, Sunday.
Took 4 total TD exams. Got a 50%, 54%, 67%, and 64%. Even up until last night I hated the TD exams with a passion; I thought they covered way too much stuff that didn’t even pop up in the study guides I read. Their wording for some problems was also atrocious. But looking back, the bulk of my “studying” was going through their pretty well-written explanations, and their links to the white papers let me know what and where to read.
Not sure what score I got yet on the exam. As someone who always hated testing, I’m pretty proud of myself. I also had to take a dump really bad starting at around question 25. Thanks to Tutorials Dojo’s Jon Bonso for completely destroying my confidence before the exam, forcing me to up my game. It’s better to walk in way over-prepared than under-prepared.
Just Passed My CCP exam today (within 2 weeks)
I would like to thank this community for the recommendations about exam preparation. It was wayyyy easier than I expected (also way easier than the TD practice exams’ scenario-based questions; the real exam is a lot less wordy). I felt so unready before the exam that I rescheduled it twice. Quick tip: if you have limited time to prepare for this exam, I would recommend scheduling the exam beforehand so that you don’t procrastinate.
Resources:
-Stephane’s course on Udemy (I have seen people saying to skip hands-on videos but I found them extremely helpful to understand most of the concepts-so try to not skip those hands-on)
-Tutorials Dojo practice exams (I did only 3.5 practice tests out of 5 and already got 8-10 EXACTLY worded questions on my real exam)
-Very little to no experience (deployed my group’s app to the cloud via Elastic Beanstalk in college; had zero clue at the time about what I was doing; had clear guidelines)
I used Stephane Maarek on Udemy. Purchased his course and the 6 Practice Exams. Also got Neal Davis’ 500 practice questions on Udemy. I took Stephane’s class over 2 days, then spent the next 2 weeks going over the tests (3~4 per day) till I was constantly getting over 80% – passed my exam with a 882.
What an adventure. I’ve never really given thought to getting a cert until one day it just dawned on me that it’s one of the few credentials that are globally accepted. So you can approach any company and basically prove you know what’s up on AWS 😀
Passed with two weeks of prep (after work and weekends)
This was just a nice structured presentation that also gives you the powerpoint slides plus cheatsheets and a nice overview of what is said in each video lecture.
Udemy – AWS Certified Cloud Practitioner Practice Exams, created by Jon Bonso, Tutorials Dojo
These are some good prep exams; they ask the questions in a way that actually makes you think about the related AWS service, with only a few “Bullshit! That was asked in a confusing way” questions popping up.
Passed AWS CCP. The score was beyond what I expected.
I took the CCP 2 days ago and got the pass notification right after submitting the answers. Within about 3 hours I got an email from Credly with the badge. This morning I got an official email from AWS congratulating me on passing; the score was much higher than I expected. I took Stephane Maarek’s CCP course and his 6 demo exams, then Neal Davis’ 500 questions as well. On the demo exams, I failed one and passed the rest with about 700-800. But in the real exam, I got 860. The questions in the real exam are somewhat less verbose IMO, but I don’t truly agree with some people I see on this sub saying that they are easier. Just a little bit of sharing, now I’ll find something to continue ^^
– Took 450 screenshots of practice questions and technology/service descriptions as reference notes to quickly swift through on my phone and computer for review. Screenshots were of questions that I either didn’t know, knew but was iffy on, or those I believed I’d easily forget.
– Made 15-20 pages of notes. Chill. Nothing crazy. This is on A4 paper. Free-form note taking. With big diagrams. Around 60-80 words per page.
– I was getting low-to-mid 70%s on Neal Davis’s and Stephane Maarek’s practice exams. Highest score I got was an 80%.
– I got a 67(?)% on one of Stephane Maarek’s exams. The only sub-70% I ever got on any practice test. I got slightly anxious. But given how much harder Maarek’s exams are compared to the actual exam, the anxiety was undue.
- Certified Cloud Practitioner Course by Exam Pro (Paid Version)*
– One or two free practice exams found by a quick Google search
*Regarding Exam Pro: I went through about 40% of the video lectures. I went through all the videos in the first few sections but felt that watching the lectures was too slow and laborious even at 1.5-2x speed. (The creator, for the most part, reads off of the slides, adding brief comments here and there.) So, I decided to only watch the video lectures for sections I didn’t have a good grasp on. (I believe the video lectures provided in the course are just split versions of the full-length course available for free on YouTube under the freeCodeCamp channel, here.) The online course provides five practice exams. I did not take any of them.
**Regarding Stephane Maarek: I only took his practice exams. I did not take his study guide course.
Notes
- My study regimen (i.e., an hour or two every day for three weeks) was overkill.
– The questions on the practice exams created by Neal Davis and Stephane Maarek were significantly harder than those on the actual exam. I believe I could’ve passed without touching any of these resources.
– I retook one or two practice exams out of the 10+ I’ve taken. I don’t think there’s a need to retake the exams as long as you are diligent about studying the questions and underlying concepts you got wrong. I reviewed all the questions I missed on every practice exam the day before.
What would I do differently?
– Focus on practice tests only. No video lectures.
– Focus on the technologies domain. You can intuit your way through questions in the other domains.
Lots of the comments here about networking / VPC questions being prevalent are true. Also so many damn Aurora questions, it was like a presales chat.
The questions are actually quite detailed, as some have already mentioned, so pay close attention to the minute details. Some questions you definitely have to flag for re-review.
It is by far harder than the Developer Associate exam, despite it having a broader scope. The DVA-C02 exam was like doing a speedrun but this felt like finishing off Sigrun on GoW. Ya gotta take your time.
I took the TD practice exams. They somewhat helped, but having intimate knowledge of VPC and DB concepts would help more.
Passed SAA-C03 – Feedback
Just passed the SAA-C03 exam (864) and wanted to provide some feedback since that was helpful for me when I was browsing here before the exam.
I come from an IT background and have strong knowledge of the VPC portion, so that section was a breeze for me in the preparation process (I had never used AWS before this, so everything else was new, but the concepts were somewhat familiar considering my background). I started my preparation about a month ago and used the Maarek class on Udemy. Once I finished the class and reviewed my notes, I moved to Maarek’s 6 practice exams (on Udemy). I wasn’t doing extremely well on the PEs (I passed 4/6 of the exams with grades in the 70s). I reviewed the exam questions after each exam and moved on to the next. I also purchased Tutorials Dojo’s 6-exam set but only ended up taking one of the 6 (which I passed).
Overall the practice exams ended up being a lot harder than the real exam which had mostly the regular/base topics: a LOT of S3 stuff and storage in general, a decent amount of migration questions, only a couple questions on VPCs and no ML/AI stuff.
Sharing the study guide that I followed when I prepared for the AWS Certified Solutions Architect Associate SAA-C03 exam. I passed the test and thought I’d share a real exam experience of taking this challenging test.
First off, my background: I have 8 years of development experience and have been doing AWS for several projects, both personally and at work. I studied for a total of 2 months, focused on the official Exam Guide, and carefully studied the Task Statements and related AWS services.
SAA-C03 Exam Prep
For my exam prep, I bought the Adrian Cantrill video course and the Tutorials Dojo (TD) video course and practice exams. Adrian’s course is just right and highly educational, but like others have said, the content is long and covers more than just the exam. I did all of the hands-on labs too and played around with some machine learning services in my AWS account.
The TD video course is short and a good overall summary of the topics you’ve just learned. One TD lesson covers multiple topics, so the content is highly concise. After I completed Adrian’s video course, I used TD’s video course as a refresher, did a couple of their hands-on labs, then headed on to their practice exams.
For the TD practice exams, I took the exams chronologically and didn’t jump back and forth until I completed all the tests. I first tried all of the 7 timed-mode tests, reviewing every wrong answer after each attempt, then the 6 review-mode tests and the section/topic-based tests. I took the final-test mode roughly 3 times, and this is by far one of the most helpful features of the website IMO. The final-test mode generates a unique set from the whole TD question bank, so every attempt was challenging for me. I also noticed that the course progress doesn’t move if you fail a specific test, so I retook any test I failed.
The Actual SAA-C03 Exam
The actual AWS exam is almost the same as the TD tests:
All of the questions are scenario-based
There are two (or more) valid solutions in the question, e.g.:
Need SSL: options are ACM and a self-signed certificate
Need to store DB credentials: options are SSM Parameter Store and Secrets Manager (a minimal sketch contrasting the two follows this list)
The scenarios are long-winded and ask for:
MOST Operationally efficient solution
MOST cost-effective
LEAST amount of overhead
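To make that second contrast concrete, here is a minimal boto3 sketch, with made-up names, of the two ways to store a database credential; Parameter Store SecureStrings are the lower-overhead option, while Secrets Manager adds managed rotation:

```python
import boto3

ssm = boto3.client("ssm")
secrets = boto3.client("secretsmanager")

# Option 1: SSM Parameter Store SecureString (no built-in rotation)
ssm.put_parameter(
    Name="/prod/db-password",  # hypothetical parameter name
    Value="s3cr3t-passw0rd",
    Type="SecureString",
    Overwrite=True,
)

# Option 2: Secrets Manager (supports managed rotation via Lambda)
secrets.create_secret(
    Name="prod/db-password",  # hypothetical secret name
    SecretString="s3cr3t-passw0rd",
)
```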
Overall, I enjoyed the exam and felt fully prepared while taking the test, thanks to Adrian and TD, but that doesn’t mean the whole darn thing is easy. You really need to put in some elbow grease and keep your headlights on when preparing for this exam. Good luck to all, and I hope my study guide helps out anyone who is struggling.
Another Passed SAA-C03?
Just another thread about passing the exam? I passed SAA-C03 yesterday and would like to share my experience of how I earned the certification.
Background:
– a graduate with a networking background
– work experience in on-premises infrastructure automation, mainly using Ansible, Python, Zabbix, etc.
– cloud experience: a short period, around 3-6 months, with practice
– provisioned cloud applications using Terraform in Azure and AWS
Cantrill’s course has depth and a lot of practical knowledge, like email aliases and so on; check it out to learn more.
The Tutorials Dojo practice exams helped me filter the answers and guided me to the correct ones. If I was wrong on a specific topic, I rewatched the Cantrill video. However, there are some topics not covered by Cantrill, but the guideline/review in the practice exams provides pretty much all the detail. I did all the other modes before the timed-based one; after that I averaged 850 on the timed-based exams, and scored 63/65 on the final practice exam. However, the real examination is harder than the practice exams, in my opinion.
Udemy course and practice exams: I went through some of them, but I think the practice exams are quite hard compared to Tutorials Dojo.
Labs – just get your hands dirty; they will drive the knowledge deep into your brain. My advice is to not only do copy-and-paste labs, but to really read the description of each parameter in the AWS console.
Advice:
You need to know some general exam topics, like:
– S3 private access (a minimal sketch follows this list)
– EC2 availability
– Kinesis products, including Firehose, Data Streams, etc.
– IAM
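For the S3 item, here is a minimal boto3 sketch, assuming a hypothetical bucket name, of locking down public access at the bucket level:

```python
import boto3

s3 = boto3.client("s3")

# Block every form of public access on the bucket
s3.put_public_access_block(
    Bucket="my-example-bucket",  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```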
My next target will be the AWS SAP and CKA. I am still searching for suitable material for AWS SAP, but I plan mainly to use the A Cloud Guru sandbox and a home lab to learn the subject, and practice with Adrian Cantrill’s labs on GitHub.
Good luck anyone!
Passed SAA
I wanted to give my personal experience. I have a background in IT, but I had never worked in AWS until 5 weeks ago. I got my Cloud Practitioner in a week and the SAA after another 4 weeks of studying (2-4 hours a day). I used Cantrill’s course and Tutorials Dojo practice exams. I highly, highly recommend this combo. I don’t think I would have passed without the practice exams, as they are quite difficult. In my opinion, they are much more difficult than the actual exam. They really hit the mark on what kind of content you will see. I got a 777, and that’s with getting 70-80%s on the practice exams. I probably could have done better, but I had a really rough night of sleep and I came down with a cold. I was really on the struggle bus halfway through the test.
I only had a couple of questions on ML/AI, so make sure you know the differences between them all. Lots of S3 and EC2. You really need to know these in and out.
My company is offering stipends for each certification, so I’m going straight to Developer next.
Recently passed SAA-C03
Just passed my SAA-C03 yesterday with 961 points. My first time doing an AWS certification. I used Cantrill’s course. Went through the course materials twice, and took around 6 months to study, but that’s mostly due to my busy schedule. I found his materials very detailed; they probably go beyond what you’d need for the actual exam.
I also used Stephane’s practice exams on Udemy. Doing these was instrumental in my passing: they got me used to the type of questions in the actual exam and helped me review missing knowledge. I would not have passed otherwise.
Just a heads-up: a few things popped up that I did not see in the course materials or practice exams:
* Lake Formation: question about pooling data from RDS and S3, as well as controlling access.
* S3 Requester Pays: question about minimizing S3 data cost when sharing with a partner (see the sketch after this list).
* Pinpoint journey: question about customers replying to an SMS send-out and then storing their feedback.
Not sure if they are graded or Amazon testing out new parts.
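For anyone unfamiliar with Requester Pays, here is a minimal boto3 sketch (the bucket name is made up) of enabling it, which shifts request and data transfer costs to the requesting account:

```python
import boto3

s3 = boto3.client("s3")

# Make requesters, not the bucket owner, pay for requests and transfer
s3.put_bucket_request_payment(
    Bucket="shared-data-bucket",  # hypothetical bucket name
    RequestPaymentConfiguration={"Payer": "Requester"},
)
```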
Cheers.
Another SAP-C01 Pass
Received my notification this morning that I passed with an 811.
Prep Time: 10 weeks, 2 hrs a day
Materials: Neal Davis videos/practice exams, Jon Bonso practice exams, white papers, misc YouTube videos, some hands-on
Prof Experience: 4 years AWS using main services as architect
AWS Certs: CCP, SAA, DVA, SAP (now)
Thoughts: The exam was way more familiar to me than the Developer exam. I use very few AWS developer tools but mainly use core AWS services. Neal’s videos were very straightforward, easy to digest, and on point. I was able to watch most of the videos on a plane flight to Vegas.
After the video series I started to hit his section-based exams, main exam, and notes, and followed up with some hands-on. I was getting destroyed on some of the exams early on and had to rewatch and research the topics, writing notes. There is a lot of nuance and fine detail in the topics; you’ll see this when you take the practice exams. These little details matter.
Bonso’s exams were nothing less than awesome, as per usual: the same difficulty and quality as Neal Davis’s. I followed the same routine of section-based exams followed by the final exam. I believe Neal said to aim for 80s on his final exams before sitting the real exam. I’d agree, because that’s where I was a week before the exam (mid 80s). Both Neal’s and Jon’s exams were on par with the real exam’s difficulty, if not a shade more difficult.
The exam itself was very straightforward. My experience is the questions were not overly verbose and were straight to the point as compared to the practice exams I took. I was able to quickly narrow down the questions and make a selection. Flagged 8 questions along the way and had 30min to review all my answers. Unlike some people, I didn’t feel like it was a brain melter and actually enjoyed the challenge. Maybe I’m a sadist who knows.
Advice: Follow Neal’s plan, bone up on weak areas, and be confident. These questions have a pattern based on the domain. Doing the practice exams enough will allow you to see the pattern, and then research will confirm your suspicions. You can pass this exam!
Passed the certified developer associate this week.
Primary study was Stephane Maarek’s course on Udemy.
I also used the Practice Exams by Stephane Maarek and Abhishek Singh.
I used Stephane’s course and practice exams for the Solutions Architect Associate as well, and find his course does a good job preparing you to pass the exams.
The practice exams were more challenging than the actual exam, so they are a good gauge to see if you are ready for the exam.
Haven’t decided if I’ll do another associate level certification next or try for the solutions architect professional.
I cleared the Developer Associate exam yesterday. I scored 873. Actual exam experience: most questions focused mainly on Lambda, API Gateway, DynamoDB, CloudFront, and Cognito (you must know the proper difference between user pools and identity pools). I found 3 questions just on Redis vs. Memcached (so maybe focus more here too, to know the exact use cases and differences). Other topics were CloudFormation, Elastic Beanstalk, STS, and EC2. The exam was a mix of too easy and too tough for me. Some questions were one-liners and some were too long.
Resources: The main resource I used was Udemy: the course by Stéphane Maarek and the practice exams by Neal Davis and Stéphane Maarek. These exams proved really good and even helped me focus on the areas where I was lacking. They are up to the level of the actual exam; I found 3-4 of the exact same questions in the actual exam (this might be just luck!). So I feel the course by Stéphane is more than sufficient, and you can trust it. I had achieved the Solutions Architect Associate previously, so I knew the basics; I took around 2 weeks for preparation and revised Stéphane’s course as much as possible. In parallel I took the exams mentioned above, which guided me on where to focus more.
Thanks to all of you, and feel free to comment/DM me if you think I can help you in any way with achieving the same.
Another Passed Associate Developer Exam (DVA-C01)
Having already passed the Associate Architect exam (SAA-C03) 3 months ago, I was much more relaxed about this exam. I did the exam with Pearson Vue at home with no problems. Used Adrian Cantrill for the course together with the TD exams.
Studied 2 weeks at 1-2 hours a day, since there is a big overlap with the Associate Architect course, even though the exam has a different approach, more focused on the serverless side of AWS. Lots of DynamoDB, Lambda, API Gateway, KMS, CloudFormation, SAM, SSO, Cognito (user pool and identity pool), and IAM role/credentials best practices.
I do think in terms of difficulty it was a bit easier than the Associate Architect, though maybe that’s made up in my mind, as it was my second exam so I went in a bit more relaxed.
Next step is going for the SysOps Associate. I will use the Adrian Cantrill and Stephane Maarek courses, as it is said to be the most difficult associate exam.
Passed the SCS-C01 Security Specialty
A mixture of Tutorials Dojo practice exams, the A Cloud Guru course, and Neal Davis’s course & exams helped a lot. Some unexpected questions caught me off guard, but with educated guessing, thanks to the material I studied, I was able to overcome them. It’s important to understand:
KMS Keys
AWS Owned Keys
AWS Managed KMS keys
Customer Managed Keys
asymmetrical
symmetrical
Imported key material
What services can use AWS Managed Keys
KMS Rotation Policies (a boto3 sketch of rotation and grants follows this list)
The rotation that can be applied (if rotation is possible at all) depends on the kind of key
Key Policies
Grants (temporary access)
Cross-account grants
Permanent policies
How permissions are distributed depending on the assigned principal
IAM Policy format
Principals (supported principals)
Conditions
Actions
Allow to a service (ARN or public AWS URL)
Roles
Secrets Management
Credential Rotation
Secure String types
Parameter Store
AWS Secrets Manager
Route 53
DNSSEC
DNS Logging
Network
AWS Network Firewall
AWS WAF (some questions try to trick you into thinking AWS Shield is needed instead)
AWS Shield
Security Groups (Stateful)
NACL (Stateless)
Ephemeral Ports
VPC FlowLogs
AWS Config
Rules
Remediation (custom or AWS managed)
AWS CloudTrail
AWS Organization Trails
Multi-Region Trails
Centralized S3 Bucket for multi-account log aggregation
AWS GuardDuty vs AWS Macie vs AWS Inspector vs AWS Detective vs AWS Security Hub
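As a concrete illustration of the rotation and grants items above, here is a minimal boto3 sketch; the key ID and role ARN are hypothetical:

```python
import boto3

kms = boto3.client("kms")
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # hypothetical key ID

# Enable automatic annual rotation on a customer managed key
kms.enable_key_rotation(KeyId=key_id)

# Grants give temporary, revocable access without editing the key policy
grant = kms.create_grant(
    KeyId=key_id,
    GranteePrincipal="arn:aws:iam::111122223333:role/app-role",  # hypothetical
    Operations=["Encrypt", "Decrypt"],
)

# Retire the grant once the temporary access is no longer needed
kms.retire_grant(GrantToken=grant["GrantToken"])
```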
It gets more in depth; I’m willing to help anyone out that has questions. If you don’t mind joining my Discord to discuss with others and help each other out, that would be great. A study group community. Thanks.
Exam guide book by Kam Agahian and a group of authors – this just got released and has all you need in a concise manual. It also includes 3 practice exams. This is a must-buy for future reference and covers ALL current exam topics including container networking, SD-WAN, etc.
Stephane Maarek’s Udemy course – it is mostly up to date with the main exam topics, including TGW, Network Firewall, etc. To-the-point lectures with lots of hands-on demos give you just what you need; highly recommended as well!
Tutorials Dojo practice tests to drive it home – these helped me get an idea of the question wording, so I could train myself to read fast, pick out key words, compare similar answers, and build confidence in my knowledge.
Crammed daily for 4 weeks (after work; I have a full-time job + family) and went in and nailed it. I do have a networking background (15+ years) and I am currently working as a cloud security engineer, working with AWS daily, especially EKS, TGW, GWLB, etc.
For those not from a networking background – it would definitely take longer to prep.
What an exciting journey. I think AZ-900 was the hardest, probably because it was my first Microsoft certification. Afterwards, the others were fair enough. AI-900 was the easiest.
I generally used Microsoft Virtual Training Day, Cloud Ready Skills, MeasureUp, and John Savill’s videos. Having built fundamental knowledge of the cloud, I am planning to do the AWS CCP next. Wish me luck!
Passed Azure Fundamentals
Learning Material
Hi all,
I passed my Azure fundamentals exam a couple of days ago, with a score of 900/1000. Been meaning to take the exam for a few months but I kept putting it off for various reasons. The exam was a lot easier than I thought and easier than the official Microsoft practice exams.
Study materials:
A Cloud Guru AZ-900 fundamentals course with practice exams
I am pretty proud of this one. Databases are an area of IT where I haven’t spent a lot of time, and what time I have spent has been with SQL or MySQL and old-school relational databases. NoSQL was kinda breaking my brain for a while.
Study Materials:
Microsoft Virtual Training Day, got the voucher for the free exam. I know several people on here said that was enough for them to pass the test, but that most certainly was not enough for me.
Exampro.co DP-900 course and practice test. They include virtual flashcards which I really liked.
Whizlabs.com practice tests. I also used the course to fill in gaps I found while testing.
Passed AI-900! Tips & Resources Included!!
Achievement Celebration
Huge thanks to this subreddit for helping me kick start my Azure journey. I have over 2 decades of experience in IT and this is my 3rd Azure certification as I already have AZ-900 and DP-900.
Here’s the order in which I passed my AWS and Azure certifications:
SAA>DVA>SOA>DOP>SAP>CLF|AZ-900>DP-900>AI-900
I had no plans to take this certification now, but had to as the free voucher was expiring in a couple of days. So I started preparing on Friday and took the exam on Sunday. But give it more time if you can.
Here’s my study plan for AZ-900 and DP-900 exams:
finish a popular video course aimed at the cert
watch John Savill’s study/exam cram
take multiple practice exams scoring in 90s
This is what I used for AI-900:
Alan Rodrigues’ video course (includes 2 practice exams) 👌
John Savill’s study cram 💪
practice exams by Scott Duffy and in 28Minutes Official 👍
knowledge checks in AI modules from MS learn docs 🙌
I also found the notes below to be extremely useful as a refresher. They can be played multiple times throughout your preparation, as the exam cram part is just around 20 minutes.
Just be clear on the topics explained in the above video and you’ll pass AI-900. I advise you to watch this video at the start, middle, and end of your preparation. All the best in your exam!
Just passed AZ-104
Achievement Celebration
I recommend studying networking, as almost all of the questions are related to this topic. Also, AAD is a big one. Lots of load balancers, VNets, NSGs.
Very little of this came up:
Containers
Storage
Monitoring
I passed with a 710 but a pass is a pass haha.
Used Tutorials Dojo, but the closest questions I found were in the Udemy practice exams.
Regards,
Passed GCP Professional Cloud Architect
First of all, I would like to start with the fact that I already have around 1 year of in-depth experience with GCP, working on GKE, IAM, storage, and so on. I also obtained the GCP Associate Cloud Engineer certification back in June, which helped with the preparation.
I started with Dan Sullivan’s Udemy course for Professional Cloud Architect and did some refreshers on the topics I was not familiar with, such as Bigtable, BigQuery, Dataflow, and all that. His videos on the case studies help a lot in understanding what each case study scenario requires for designing the best cost-effective architecture.
In order to understand the services in depth, I also went through the GCP documentation for each service at least once. It’s quite useful for knowing the syntax of the GCP commands and some miscellaneous information.
As for practice exams, I definitely recommend Whizlabs. It helped me prepare for the areas I was weak in and helped me grasp the topics a lot faster than reading through the documentation. It will also help you understand what kind of questions will appear on the exam.
I used TutorialsDojo (Jon Bonso) for preparation for Associate Cloud Engineer before and I can attest that Whizlabs is not that good. However, Whizlabs still helps a lot in tackling the tough questions that you will come across during the examination.
One thing to note is that, there wasn’t even a single question that was similar to the ones from Whizlabs practice tests. I am saying this from the perspective of the content of the questions. I got totally different scenarios for both case study and non case study questions. Many questions focused on App Engine, Data analytics and networking. There were some Kubernetes questions based on Anthos, and cluster networking. I got a tough question regarding storage as well.
I initially thought I would fail, but I pushed on and started tackling the multiple choices by process of elimination, using the keywords in the questions. 50 questions in 2 hours is tough, especially due to the lengthy questions and multiple choices. I do not know how this compares to the AWS Solutions Architect Professional exam in toughness, but some people do say the GCP Professional is tougher than AWS.
All in all, I still recommend this certification to people who are working with GCP. It’s a tough one to crack and could be useful for future prospects. It’s a bummer that it’s only valid for 2 years.
Passed GCP: Cloud Digital Leader
Hi everyone,
First, thanks for all the posts people share. It helps me prep for my own exam. I passed the GCP: Cloud Digital Leader exam today and wanted to share a few things about my experience.
Preparation
I have access to A Cloud Guru (ACG) and Udemy through work. I started one of the Udemy courses first, but it was clear the course went beyond the scope of the Cloud Digital Leader certification. I switched over to ACG and enjoyed the content a lot more. The videos were short and the instructor hit all the topics on the Google exam requirements sheet.
ACG also has three 50-question practice tests. The practice tests are harder than the actual exam (and the practice tests aren’t that hard).
I don’t know if someone could pass the test if they just watched the videos on Google Cloud’s certification site, especially if you had no experience with GCP.
Overall, I would say I spent 20 hours preparing for the exam. I have my CISSP and I’m working on my CCSP. After taking the test, I realized I had way over-prepared.
Exam Center
It was my first time at this testing center and I wasn’t happy with the experience. A few of the issues I had:
– My personal items (phone, keys) were placed in an unlocked filing cabinet
– My desk area was dirty. There were eraser shreds (or something similar), and I had to move the keyboard and mouse and brush all the debris out of my workspace
– The laminated sheet they gave me looked like someone had spilled Kool-Aid on it
– They only offered earplugs, instead of noise cancelling headphones
Exam
My recommendation for the exam is to know the Digital Transformation piece as well as you know all the GCP services and what they do.
I wish you all luck on your future exams. Onto GCP: Associate Cloud Engineer.
Passed the Google Cloud: Associate Cloud Engineer
Hey all, I was able to pass the Google Cloud: Associate Cloud Engineer exam in 27 days.
I studied about 3-5 hours every single day.
I created this note to share the resources I used to pass the exam.
Happy studying!
GCP ACE Exam Aced
Hi folks,
I am glad to share with you that I cleared my GCP ACE exam today, and I would like to share my preparation with you:
1) I completed these courses from Coursera:
1.1 Google Cloud Platform Fundamentals – Core Infrastructure
1.2 Essential Cloud Infrastructure: Foundation
1.3 Essential Cloud Infrastructure: Core Services
1.4 Elastic Google Cloud Infrastructure: Scaling and Automation
After these courses, I did a couple of Qwiklabs quests, in the following order:
2 Getting Started: Create and Manage Cloud Resources (Qwiklabs Quest)
2.1 A Tour of Qwiklabs and Google Cloud
2.2 Creating a Virtual Machine
2.3 Compute Engine: Qwik Start – Windows
2.4 Getting Started with Cloud Shell and gcloud
2.5 Kubernetes Engine: Qwik Start
2.6 Set Up Network and HTTP Load Balancers
2.7 Create and Manage Cloud Resources: Challenge Lab
3 Set up and Configure a Cloud Environment in Google Cloud (Qwiklabs Quest)
3.1 Cloud IAM: Qwik Start
3.2 Introduction to SQL for BigQuery and Cloud SQL
3.3 Multiple VPC Networks
3.4 Cloud Monitoring: Qwik Start
3.5 Deployment Manager – Full Production [ACE]
3.6 Managing Deployments Using Kubernetes Engine
3.7 Set Up and Configure a Cloud Environment in Google Cloud: Challenge Lab
4 Kubernetes in Google Cloud (Qwiklabs Quest)
4.1 Introduction to Docker
4.2 Kubernetes Engine: Qwik Start
4.3 Orchestrating the Cloud with Kubernetes
4.4 Managing Deployments Using Kubernetes Engine
4.5 Continuous Delivery with Jenkins in Kubernetes Engine
After these courses, I did the following for mock exam preparation:
Cloud computing has revolutionized the way companies develop applications. Most of the modern applications are now cloud native. Undoubtedly, the cloud offers immense benefits like reduced infrastructure maintenance, increased availability, cost reduction, and many others.
However, which cloud vendor to choose is a challenge in itself. If we look at the horizon of cloud computing, the three main providers that come to mind are AWS, Azure, and Google Cloud. Today, we will compare the top three cloud giants and see how they differ. We will compare their services, specialties, and pros and cons. After reading this article, you will be able to decide which cloud vendor is best suited to your needs and why.
History and establishment
AWS
AWS is the oldest player in the market, operating since 2006. Here’s a brief history of AWS and how computing has changed. Being the first in the cloud industry, it has gained a particular advantage over its competitors. It offers more than 200+ services to its users. Some of its notable clients include:
Netflix
Expedia
Airbnb
Coursera
FDA
Coca Cola
Azure
Azure by Microsoft started in 2010. Although it started four years later than AWS, it is catching up quite fast. Azure is Microsoft’s public cloud platform which is why many companies prefer to use Azure for their Microsoft-based applications. It also offers more than 200 services and products. Some of its prominent clients include:
HP
Asus
Mitsubishi
3M
Starbucks
CDC (Centers for Disease Control and Prevention), USA
National Health Service (NHS), UK
Google
Google Cloud also started in 2010. Its arsenal of cloud services is relatively smaller compared to AWS or Azure. It offers around 100+ services. However, its services are robust, and many companies embrace Google cloud for its specialty services. Some of its noteworthy clients include:
PayPal
UPS
Toyota
Twitter
Spotify
Unilever
Market share & growth rate
If you look at the market share and growth chart below, you will notice that AWS has been leading for more than four years. Azure is also expanding fast, but it still has a long way to go to catch up with AWS.
However, in terms of revenue, Azure is ahead of AWS. In Q1 2022, AWS revenue was $18.44 billion; Azure earned $23.4 billion, while Google Cloud earned $5.8 billion.
Availability Zones (Data Centers)
When comparing cloud vendors, it is essential to see how many regions and availability zones are offered. Here is a quick comparison between all three cloud vendors in terms of regions and data centers:
AWS
AWS operates in 25 regions and 81 availability zones. It offers 218+ edge locations and 12 regional edge caches as well. You can utilize the edge location and edge caches in services like AWS Cloudfront and global accelerator, etc.
Azure
Azure has 66 regions worldwide and a minimum of three availability zones in each region. It also offers more than 116 edge locations.
Google
Google has a presence in 27 regions and 82 availability zones. It also offers 146 edge locations.
Although all three cloud giants are continuously expanding, both AWS and Azure offer data centers in China to specifically cater to Chinese consumers. At the same time, Azure seems to have broader coverage than its competitors.
Comparison of common cloud services
Let’s look at the standard cloud services offered by these vendors.
Compute
Amazon’s primary compute offering is EC2 instances, which are very easy to operate. Amazon also provides a low-cost option called Amazon Lightsail, which is a perfect fit for those who are new to computing and have a limited budget. AWS charges for EC2 instances only when you are using them. Azure’s compute offering is also based on virtual machines. Google is no different and offers virtual machines in Google’s data centers. Here’s a brief comparison of the compute offerings of all three vendors:
Storage
All three vendors offer various forms of storage, including object-based storage, cold storage, file-based storage, and block-based storage. Here’s a brief comparison of all three:
Database
All three vendors support managed services for databases. They also offer NoSQL as well as document-based databases. AWS also provides a proprietary RDBMS named Aurora, a highly scalable and fast database offering compatible with both MySQL and PostgreSQL. Here’s a brief comparison of all three vendors:
Comparison of Specialized services
All three major cloud providers are competing with each other in the latest technologies. Some notable areas of competition include ML/AI, robotics, DevOps, IoT, VR/Gaming, etc. Here are some of the key specialties of all three vendors.
AWS
Being first in the cloud market has many benefits, and Amazon has certainly taken advantage of that. Amazon has advanced specifically in AI and machine learning related tools. AWS DeepLens is an AI-powered camera that you can use to develop and deploy machine learning algorithms; it helps with OCR and image recognition. Similarly, Amazon has launched an open-source library called “Gluon” which helps with deep learning and neural networks. You can use this library to learn how neural networks work, even if you lack any technical background. Another service that Amazon offers is SageMaker, which you can use to train and deploy your machine learning models. There is also Lex, the conversational interface that powers Alexa, along with Lambda and the Greengrass IoT messaging service.
Another unique (and recent) offering from AWS is IoT TwinMaker. This service can create digital twins of real-world systems like factories, buildings, production lines, etc.
AWS is even providing a service for quantum computing called Amazon Braket.
Azure
Azure excels where you are already using some Microsoft products, especially on-premises Microsoft products. Organizations already using Microsoft products prefer to use Azure instead of other cloud vendors because Azure offers a better and more robust integration with Microsoft products.
Azure has excellent services related to ML/AI and cognitive services. Some notable services include the Bing Web Search API, Face API, Computer Vision API, Text Analytics API, etc.
Google
Google is the current leader among cloud providers in AI. This is because of its open-source library TensorFlow, the most popular framework for developing machine learning applications. Vertex AI and BigQuery Omni are also beneficial services offered lately. Similarly, Google offers rich services for NLP, translation, speech, etc.
Pros and Cons
Let’s summarize the pros and cons for all three cloud vendors:
AWS
Pros:
An extensive list of services
Huge market share
Support for large businesses
Global reach
Cons:
Pricing model: many companies struggle to understand the cost structure. Although AWS has improved the UX of its cost-related reporting in the AWS console, many companies still hesitate to use AWS because of a perceived lack of cost transparency
Azure
Pros:
Excellent integration with Microsoft tools and software
Broader feature set
Support for open source
Cons:
Geared towards enterprise customers
Google
Pros:
Strong integration with open source tools
Flexible contracts
Good DevOps services
The most cost-efficient
The preferred choice for startups
Good ML/AI-based services
Cons:
A limited number of services as compared to AWS and Azure
As mentioned earlier, AWS has the largest market share compared to other cloud vendors. That means more companies are using AWS, and there are more vacancies in the market for AWS-certified professionals. Here are the main reasons why you would choose to learn AWS:
Azure is the second largest cloud service provider. It is ideal for companies that are already using Microsoft products. Here are the top reasons why you would choose to learn Azure:
Ideal for experienced user of Microsoft services
Azure certifications rank among the top paying IT certifications
If you’re applying for a company that primarily uses Microsoft Services
Google
Although Google is considered an underdog in the cloud market, it is slowly catching up. Here’s why you may choose to learn GCP:
While there are fewer job postings, there is also less competition in the market
GCP certifications rank among the top paying IT certifications
Most valuable IT Certifications
Keen to learn about the top-paying cloud certifications and jobs? If you look at the annual salary figures below, you can see the average salary for different cloud vendors and IT companies; no wonder AWS is on top. A GCP Cloud Architect is also one of the top five. The Azure Architect comes in at #9.
Which cloud certification to choose depends mainly on your career goals and what type of organization you want to work for. No cloud certification path is better than the other. What matters most is getting started and making progress towards your career goals. Even if you decide at a later point in time to switch to a different cloud provider, you’ll still benefit from what you previously learned.
Over time, you may decide to get certified in all three – so you can provide solutions that vary from one cloud service provider to the next.
Don’t get stuck in analysis-paralysis! If in doubt, simply get started with AWS certifications that are the most sought-after in the market – especially if you are at the very beginning of your cloud journey. The good news is that you can become an AWS expert when enrolling in our value-packed training.
Further Reading
You may also be interested in the following articles:
Hi everyone, I wanted to share the path I took to obtain the DP-203 certification with an 854. This might help those who are preparing or considering it. Here’s how I went about it:
1. Starting with AZ-900 and DP-900: Before diving into the DP-203 preparation, I first completed the AZ-900 and DP-900 certifications. This gave me a solid foundation in Azure and the fundamental data concepts.
2. Deep dive into DP-203: For DP-203, I started with Piotr’s video series, which I found extremely interesting and detailed. I followed the entire playlist while practicing simultaneously on my own Azure account, which helped me understand and apply the concepts in real time. Here’s the link to the playlist: Piotr’s Playlist.
3. Strengthening with Microsoft Learn: Next, to solidify my knowledge, I went through all the content offered by Microsoft Learn for the DP-203 certification. The material provided by Microsoft is well-structured and covers all the necessary areas for the exam.
4. Practice with test questions: For the practical part, I focused on a series of questions available on YouTube. These questions helped me get familiar with the exam format and identify the areas where I needed to improve. Here’s the link to the practice questions video: Practice Questions Series.
This whole process took me about a month and a week of full concentration. The exam questions weren’t too difficult, but it’s crucial to have a solid understanding of important concepts like partitioning, distribution, indexing, streaming, and a good knowledge of T-SQL in the context of Dedicated SQL Pool. Feel free to ask if you have any questions or need further advice! Good luck to everyone preparing for this certification!
Finally! 863 🙂 Completed Windows Server Hybrid Administrator Associate. Got only 44 questions (including a 9-question case study) and 100 minutes. The exam was way easier than AZ-800: no unknown/unexpected things/services, and plenty of time for browsing MS documentation. Material used to prepare:
- AZ-801 courses on LinkedIn (there are 5 of them there)
- MS on-demand instructor-led training (15 videos): https://learn.microsoft.com/en-us/shows/on-demand-instructor-led-training-series/?terms=Az-801&source=docs
- About half of Dan Zabinski’s videos on YouTube: https://youtube.com/playlist?list=PLf4LHvX8--d9OHjQOs5Mnk1nNE0BTD488&si=-CraNAitWsYWury4
- And of course MS Learn
Are there any resources similar to Stéphane Maarek's slides for AWS, but focused on Azure? I found his slides perfect for quickly reviewing concepts during learning and exam preparation. Thank you!
Free AZ-500 Practice Exams, coupon valid only for 100 redeems. https://www.udemy.com/course/practice-exams-certified-azure-security-engineer-associate/?couponCode=0CFD219DCDB761C927CA
57 questions, 10 of which came in the form of a case study. No simulations, but a lot of drag-and-drop and multiple-answer type questions. The case study is sandboxed entirely away from the rest of the exam. It counts towards your total exam time, but when you mark the case study as complete, you cannot go back and review it again. You do have the opportunity to review it before you mark it as complete. I used the following resources:
YouTube:
- Exam AZ-800 Administering Windows Server Hybrid Core Infrastructure Full Course (Geekdom Academy)
- AZ-800 – Administering Windows Server Hybrid Core Infrastructure (BurningIceTech)
Microsoft (MS Learn learning paths):
- Deploy and manage identity infrastructure (6 modules)
- Manage Windows Servers and workloads in a hybrid environment (7 modules)
- Manage virtualization and containers in a hybrid networking environment (8 modules)
- Implement and operate an on-premises and hybrid networking infrastructure (7 modules)
- Configure storage and file services (6 modules)
Hey everyone, back again. So I got the AZ-900 and DP-900 and wanted to get the PL-900, but after much reading, a lot of people are saying that if you have a lot of experience doing this then it’s not worth it and you should just jump to the PL-400.
Some context: I have been a BI analyst since April of 2023. I do a lot of data work using Power BI and some Azure, and I also customize and deploy solutions for the CRM (CE). This can be anything from simple field customizations to creating complex Power Automate flows and, every once in a while, implementing JS as needed. I started taking the fundamental Azure certs because recently there was a need to move some data from AWS to Azure for simpler data processing and better integration with other MS products. I am the only one who can do this or has the aptitude to do this (building and managing Azure pipelines, analysis, etc.).
So my question ultimately is: should I also pursue the PL-900 or just get into PL-400? The certs I am currently thinking of taking are: AZ-104, DP-203, AZ-305, PL-400. Thank you, and if anyone has any other suggestions let me know. I am trying to increase my opportunities in the market and eventually get into freelance work in the future.
Hi guys, I made a post a few weeks back when I took my exam and failed with a 687. I recently tried again after spending hours every day during the week and the weekend (+- 4 hours daily) since my last attempt, this time focusing extensively on practice tests and learning to navigate MS Learn effectively.
This time around I made sure not to repeat my biggest mistake from my first attempt, which took up so much of my time: relying on MS Learn whenever I got stuck on a question. Doing this on my first attempt left me with 15 minutes or so and I still had to do my case study, while having a few questions marked for review. So definitely be careful and watch the timer. On my second attempt I marked each question I was unsure about for review (while still trying to choose the most appropriate answers in case I did not have time to go back and review it); this left me with 25-30 minutes to review 15 questions and make use of MS Learn to find the answers. This worked much better for me, especially since I have used MS Learn a lot while studying; I knew where to find most of the information I needed, but still had to trust my instinct for the others. I also found out that I actually skipped one Yes/No question when I went back to review; this was because of the small screen I was on that cut off the last Yes/No question.
I had questions based on pretty much all the topics this time, but I had many more networking and compute related questions. So I would definitely make sure to cover all the topics of the exam and also try to solidify your weak points; for me those were Entra ID and DNS questions, since there’s normally a lot of reading for DNS-related questions where you need to create a mental mind map of the scenario.
I am currently a developer and have just under 2 years of professional experience. I don’t really use Azure at all in my day to day (just basic App Service setup once in a while), although I’m trying to land a job in DevOps/Cloud, so I had to grind to learn most of what Azure offers within about 7-8 weeks, spending hours daily.
If I had any advice for someone taking the AZ-104, I would highly recommend practice tests. I’ve only used Tutorials Dojo; using the timed mode, you get to experience how it will feel on exam day, where your biggest constraint is the time. Another resource I really grinded since my initial attempt was YouTube playlists of practice exams; this helped a lot since they explain everything, tell you why an answer is incorrect, and show where it can be found on MS Learn for you to go through yourself. Channels like TechwithJaspal and TheTechBlackboard are what I mainly used.
Regarding case studies, I don’t believe you should read the entire passage of text they give you; there are multiple sections that you need to switch between to comprehend all the information. You should just head to the questions immediately, then head to the case study afterwards to look for the appropriate sections that are applicable (technical requirements, user requirements, existing environment). This way you avoid wasting time reading information that doesn’t apply to the questions. There are also all the labs from Microsoft that show you how to do everything in the Portal (compute, networking, storage, etc.), which I went through a couple of times. The AZ-104 path on MS Learn is decent, but does not cover close to the amount of content you need to pass, so do not rely on that alone.
I was scoring about 85-90% consistently on Tutorials Dojo practice tests and the AZ-104 practice test by Microsoft, but that is not really a good indicator since the questions become familiar after a while. This is my first certification and I’m glad it’s done. I scored high 800s this time; now I’m just hoping to get any entry-level role in the space since I enjoy Cloud and DevOps.
Hi, I come from the Azure side, and Microsoft offers vouchers for enterprise employees via the so-called Enterprise Skills Initiative program. I am curious if Google has a similar thing? Thanks for any hint!
Less than a week ago I passed AZ-204, and a few weeks before that I did my first Azure certification by passing AZ-900. The AZ-204 felt like quite an effort, so I planned to take it easy for a while when it comes to studying. However, after reading about AI-900 in this forum, I suddenly felt inspired to try one more Azure certification. Since I have an A Cloud Guru subscription from my employer, I watched the corresponding course from there. I really liked the course, and it prepared me well for the exam. Well, except it was slightly outdated, since I think a module about generative AI may have been missing from it. I have almost no prior experience with AI development, and I studied maybe 5-6 hours for this exam (not counting waiting time for machine learning model training 😊). The exam still felt super easy. I still think the certification is useful, since I got acquainted with the AI service offerings available from Azure and got to work with them as part of the studying plan. I must say it was actually fun to work with AI Vision, and also the Azure Machine Learning Designer.
Pretty confusing, especially since I confused the naming conventions between the service names on GCP and Azure.
I scored 90% on the Microsoft practice test. Am I prepared to take the real test, or is the real exam much harder?
I have set Azure Solutions Architect (AZ-305) as my ultimate goal, so I want to get the Azure Administrator (AZ-104) certification. I found that I need to have at least 6 months of hands-on experience in Azure administration. Will it prevent me from applying for AZ-104 if I have no work experience? Or would it be a better option to take the Azure Fundamentals (AZ-900) exam?
Failed with a 609. I got the chance to read every question, and I didn’t freak out. I got through the exam with about 40 minutes to spare. I did this on purpose because I wanted to review everything with MS Learn. Midway through my exam it froze, and while I was using MS Learn it would take 1-2 minutes to load.
Because I made a chart and knew I needed to put 120 hours in, and I only did 47 hours, I know I didn’t do my best. I know that if I had drilled more activities, if I had worked things forwards and backwards more, I would have been able to read the questions with fewer of those 50/50 “need to check that at the end” moments, and I would have been successful. That’s on me.
I would say my prep made me less confused by the answers, unlike AZ-700, which was truly outside of my comfort zone but led to a strong networking component in my exam results on the 104. I prepped well, but if I wanted to pass with no excuses I should have worked harder.
Not really sure what to do now… I don’t work in the space, so I’ll decompress for a bit and figure something out. Good luck everyone!
The deal ends on December 9th, if you are not ready for the exam yet you can purchase the test and activate it later (I believe you can activate it within a month of the purchase date). Upon checking out, use this code AWEBZGSX to get another 10% off the discounted price. Good luck!
I am studying to get the Azure 204 certification. I am checking exams from Whizlabs and TeacherSet. On Whizlabs I get 75-80%, but on TeacherSet I get 60%. I am not sure which resource has more realistic questions, and I cannot tell when I am ready to take the exam. Does anyone have experience with these? Additionally, any other advice? Thanks
Hello, I am going to take the SC-300 exam in 3 weeks. Currently I am learning with Microsoft Learn and bought the MeasureUp practice exam. Are those questions similar to the questions in the Pearson Vue exam?
Hello everyone, thank you very much for the collaboration here. I am thinking about taking the AZ-900 exam. What kind of study plan do you suggest, and what approved sources of learning material? I would consider some video material first and then practice questions. Thanks in advance!
Top-paying Cloud certifications:
– Google Certified Professional Cloud Architect — $175,761/year
– AWS Certified Solutions Architect – Associate — $149,446/year
– Azure/Microsoft Cloud Solution Architect — $141,748/year
– Google Cloud Associate Engineer — $145,769/year
– AWS Certified Cloud Practitioner — $131,465/year
– Microsoft Certified: Azure Fundamentals — $126,653/year
– Microsoft Certified: Azure Administrator Associate — $125,993/year
Djamgatech – Multilingual and Platform Independent Cloud Certification and Education App for AWS Azure Google Cloud
Djamgatech is the ultimate Cloud Education Certification App. It is an EduFlix App for AWS, Azure, Google Cloud Certification Prep, School Subjects, Python, Math, SAT, etc.[Android, iOS]
Technology is changing and moving towards the cloud. The cloud will power most businesses in the coming years, and it is not taught in schools. How do we ensure that our kids, our youth, and ourselves are best prepared for this challenge?
Building mobile educational apps that work offline and on any device can help greatly in that sense.
The ability to tap on a button to learn cloud fundamentals and take quizzes is a great opportunity to help our children and youth boost their job prospects and be more productive at work.
The App covers the following certifications : AWS Cloud Practitioner Exam Prep CCP CLF-C01, Azure Fundamentals AZ 900 Exam Prep, AWS Certified Solution Architect Associate SAA-C02 Exam Prep, AWS Certified Developer Associate DVA-C01 Exam Prep, Azure Administrator AZ 104 Exam Prep, Google Associate Cloud Engineer Exam Prep, Data Analytics for AWS DAS-C01, Machine Learning for AWS and Google, AWS Certified Security – Specialty (SCS-C01), AWS Certified Machine Learning – Specialty (MLS-C01), Google Cloud Professional Machine Learning Engineer and more… [Android, iOS]
Features:
– Practice exams – 1000+ Q&A updated frequently
– 3+ practice exams per certification
– Scorecard / scoreboard to track your progress
– Quizzes with score tracking, progress bar, countdown timer (the scoreboard is visible only after completing the quiz)
– FAQs for the most popular cloud services
– Cheat sheets
– Flashcards
– Works offline
Note and disclaimer: We are not affiliated with AWS, Azure, Microsoft or Google. The questions are put together based on the certification study guide and materials available online. The questions in this app should help you pass the exam but it is not guaranteed. We are not responsible for any exam you did not pass.
Important: To succeed with the real exam, do not memorize the answers in this app. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
CyberSecurity 101 and Top 25 AWS Certified Security Specialty Questions and Answers Dumps
Almost 4.57 billion people were active internet users as of July 2020, encompassing 59 percent of the global population. 94% of enterprises use the cloud, and 77% of organizations worldwide have at least one application running in the cloud. This results in exponential growth of cyber attacks. Therefore, cybersecurity is one of the biggest challenges for individuals and organizations worldwide: 158,727 cyber attacks per hour, 2,645 per minute, and 44 every second of every day.
I- The AWS Certified Security – Specialty (SCS-C01) examination is intended for individuals who perform a security role. This exam validates an examinee’s ability to effectively demonstrate knowledge about securing the AWS platform.
It validates an examinee’s ability to demonstrate:
An understanding of specialized data classifications and AWS data protection mechanisms.
An understanding of data-encryption methods and AWS mechanisms to implement them.
An understanding of secure Internet protocols and AWS mechanisms to implement them.
Question 2: A company has AWS workloads in multiple geographical locations. A Developer has created an Amazon Aurora database in the us-west-1 Region. The database is encrypted using a customer-managed AWS KMS key. Now the Developer wants to create the same encrypted database in the us-east-1 Region. Which approach should the Developer take to accomplish this task?
A) Create a snapshot of the database in the us-west-1 Region. Copy the snapshot to the us-east-1 Region and specify a KMS key in the us-east-1 Region. Restore the database from the copied snapshot.
B) Create an unencrypted snapshot of the database in the us-west-1 Region. Copy the snapshot to the us-east-1 Region. Restore the database from the copied snapshot and enable encryption using the KMS key from the us-east-1 Region.
C) Disable encryption on the database. Create a snapshot of the database in the us-west-1 Region. Copy the snapshot to the us-east-1 Region. Restore the database from the copied snapshot.
D) In the us-east-1 Region, choose to restore the latest automated backup of the database from the us-west-1 Region. Enable encryption using a KMS key in the us-east-1 Region.
ANSWER2:
A
Notes/Hint2:
If a user copies an encrypted snapshot, the copy of the snapshot must also be encrypted. If a user copies an encrypted snapshot across Regions, users cannot use the same AWS KMS encryption key for the copy as used for the source snapshot, because KMS keys are Region specific. Instead, users must specify a KMS key that is valid in the destination Region
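A minimal boto3 sketch of this flow, with hypothetical identifiers and key ARN, could look like the following; note that a KMS key from the destination Region is passed to the copy call:

```python
import boto3

# Work in the destination Region (us-east-1)
rds = boto3.client("rds", region_name="us-east-1")

# Copy the encrypted Aurora snapshot, re-encrypting with a us-east-1 key
rds.copy_db_cluster_snapshot(
    SourceDBClusterSnapshotIdentifier=(
        "arn:aws:rds:us-west-1:111122223333:cluster-snapshot:my-snapshot"
    ),
    TargetDBClusterSnapshotIdentifier="my-snapshot-copy",
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    SourceRegion="us-west-1",  # boto3 presigns the cross-Region copy request
)

# Restore an encrypted cluster in us-east-1 from the copied snapshot
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="my-restored-cluster",
    SnapshotIdentifier="my-snapshot-copy",
    Engine="aurora-mysql",
)
```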
Question 3: A corporate cloud security policy states that communication between the company’s VPC and KMS must travel entirely within the AWS network and not use public service endpoints. Which combination of the following actions MOST satisfies this requirement? (Select TWO.)
A) Add the aws:sourceVpce condition to the AWS KMS key policy referencing the company’s VPC endpoint ID.
B) Remove the VPC internet gateway from the VPC and add a virtual private gateway to the VPC to prevent direct, public internet connectivity.
C) Create a VPC endpoint for AWS KMS with private DNS enabled.
D) Use the KMS Import Key feature to securely transfer the AWS KMS key over a VPN.
E) Add the following condition to the AWS KMS key policy: “aws:SourceIp”: “10.0.0.0/16”.
Question 4: An application team is designing a solution with two applications. The security team wants the applications’ logs to be captured in two different places, because one of the applications produces logs with sensitive data. Which solution meets the requirement with the LEAST risk and effort?
A) Use Amazon CloudWatch Logs to capture all logs, write an AWS Lambda function that parses the log file, and move sensitive data to a different log.
B) Use Amazon CloudWatch Logs with two log groups, with one for each application, and use an AWS IAM policy to control access to the log groups, as required.
C) Aggregate logs into one file, then use Amazon CloudWatch Logs, and then design two CloudWatch metric filters to filter sensitive data from the logs.
D) Add logic to the application that saves sensitive data logs on the Amazon EC2 instances’ local storage, and write a batch script that logs into the Amazon EC2 instances and moves sensitive logs to a secure location.
In an n-tier architecture, each tier’s security group allows traffic only from the security group of the tier that sends it traffic. The presentation tier opens traffic for HTTP and HTTPS from the internet. Since security groups are stateful, only inbound rules are required.
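As a rough sketch of that pattern in boto3 (the group IDs are hypothetical), an app-tier rule admits traffic only from the web tier’s security group rather than from a CIDR range:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow the app tier to accept traffic only from the web tier's SG
ec2.authorize_security_group_ingress(
    GroupId="sg-apptier00000001",  # hypothetical app-tier SG
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": "sg-webtier00000001"}],  # hypothetical
    }],
)
# Security groups are stateful, so no outbound rule is needed for replies
```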
Question 6: A security engineer is working with a product team building a web application on AWS. The application uses Amazon S3 to host the static content, Amazon API Gateway to provide RESTful services, and Amazon DynamoDB as the backend data store. The users already exist in a directory that is exposed through a SAML identity provider. Which combination of the following actions should the engineer take to enable users to be authenticated into the web application and call APIs? (Select THREE).
A) Create a custom authorization service using AWS Lambda.
B) Configure a SAML identity provider in Amazon Cognito to map attributes to the Amazon Cognito user pool attributes.
C) Configure the SAML identity provider to add the Amazon Cognito user pool as a relying party.
D) Configure an Amazon Cognito identity pool to integrate with social login providers.
E) Update DynamoDB to store the user email addresses and passwords.
F) Update API Gateway to use an Amazon Cognito user pool authorizer.
ANSWER6:
B, C and F
Notes/Hint6:
When Amazon Cognito receives a SAML assertion, it needs to be able to map SAML attributes to user pool attributes. When configuring Amazon Cognito to receive SAML assertions from an identity provider, you need to ensure that the identity provider is configured to have Amazon Cognito as a relying party. Amazon API Gateway will need to be able to understand the authorization being passed from Amazon Cognito, which is a configuration step.
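As a rough illustration of that mapping step, here is a minimal boto3 sketch; the user pool ID, provider name, and metadata URL are hypothetical:

```python
import boto3

cognito = boto3.client("cognito-idp")

# Register a SAML IdP on a user pool and map the SAML email claim
# to the user pool's email attribute
cognito.create_identity_provider(
    UserPoolId="us-east-1_EXAMPLE",  # hypothetical user pool ID
    ProviderName="CorporateSAML",    # hypothetical provider name
    ProviderType="SAML",
    ProviderDetails={"MetadataURL": "https://idp.example.com/metadata.xml"},
    AttributeMapping={
        "email": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"
    },
)
```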
Question 7: A company is hosting a web application on AWS and is using an Amazon S3 bucket to store images. Users should have the ability to read objects in the bucket. A security engineer has written the following bucket policy to grant public read access:
Attempts to read an object, however, receive the error: “Action does not apply to any resource(s) in statement.” What should the engineer do to fix the error?
A) Change the IAM permissions by applying PutBucketPolicy permissions.
B) Verify that the policy has the same name as the bucket name. If not, make it the same.
C) Change the resource section to “arn:aws:s3:::appbucket/*”.
D) Add an s3:ListBucket action.
ANSWER7:
C
Notes/Hint7:
The resource section should match the type of operation. Change the ARN to include /* at the end, as s3:GetObject is an object-level operation.
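A minimal sketch of the corrected policy applied with boto3, reusing the bucket name from the question:

```python
import json
import boto3

s3 = boto3.client("s3")

# Public-read policy: the Resource ends in /* so it matches objects,
# which is what the object-level s3:GetObject action requires
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::appbucket/*",
    }],
}
s3.put_bucket_policy(Bucket="appbucket", Policy=json.dumps(policy))
```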
Question 8: A company decides to place database hosts in its own VPC, and to set up VPC peering to different VPCs containing the application and web tiers. The application servers are unable to connect to the database. Which network troubleshooting steps should be taken to resolve the issue? (Select TWO.)
A) Check to see if the application servers are in a private subnet or public subnet.
B) Check the route tables for the application server subnets for routes to the VPC peering connection.
C) Check the NACLs for the database subnets for rules that allow traffic from the internet.
D) Check the database security groups for rules that allow traffic from the application servers.
E) Check to see if the database VPC has an internet gateway.
ANSWER8:
B and D
Notes/Hint8:
Traffic across a VPC peering connection requires routes to the peering connection in the subnets’ route tables, and the database security groups must allow traffic from the application servers. Internet gateways and internet-facing NACL rules are irrelevant to private traffic between peered VPCs.
Question 9: A company is building a data lake on Amazon S3. The data consists of millions of small files containing sensitive information. The security team has the following requirements for the architecture:
Data must be encrypted in transit.
Data must be encrypted at rest.
The bucket must be private, but if the bucket is accidentally made public, the data must remain confidential.
Which combination of steps would meet the requirements? (Select TWO.)
A) Enable AES-256 encryption using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) on the S3 bucket.
B) Enable default encryption with server-side encryption with AWS KMS-managed keys (SSE-KMS) on the S3 bucket.
C) Add a bucket policy that includes a deny if a PutObject request does not include aws:SecureTransport.
D) Add a bucket policy with aws:SourceIp to allow uploads and downloads from the corporate intranet only.
E) Enable Amazon Macie to monitor and act on changes to the data lake’s S3 bucket.
ANSWER9:
B and C
Notes/Hint9:
SSE-KMS encrypts the data at rest, and because a reader must also have permission to use the KMS key, the objects stay confidential even if the bucket is accidentally made public. A bucket policy that denies requests where aws:SecureTransport is false enforces encryption in transit. SSE-S3 alone would not keep a public object confidential, and Amazon Macie detects exposure rather than preventing it.
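As an illustration of the transit-encryption guardrail, here is a hedged sketch of a deny-if-not-TLS bucket policy; the bucket name is a placeholder:

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-data-lake-bucket"  # placeholder name

# Deny every request that arrives over plain HTTP; aws:SecureTransport is
# "false" whenever TLS is not used, which enforces encryption in transit.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{bucket}",
            f"arn:aws:s3:::{bucket}/*",
        ],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```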
Question 10: A security engineer must ensure that all API calls are collected across all company accounts, and that they are preserved online and are instantly available for analysis for 90 days. For compliance reasons, this data must be restorable for 7 years. Which steps must be taken to meet the retention needs in a scalable, cost-effective way?
A) Enable AWS CloudTrail logging across all accounts to a centralized Amazon S3 bucket with versioning enabled. Set a lifecycle policy to move the data to Amazon Glacier daily, and expire the data after 90 days.
B) Enable AWS CloudTrail logging across all accounts to S3 buckets. Set a lifecycle policy to expire the data in each bucket after 7 years.
C) Enable AWS CloudTrail logging across all accounts to Amazon Glacier. Set a lifecycle policy to expire the data after 7 years.
D) Enable AWS CloudTrail logging across all accounts to a centralized Amazon S3 bucket. Set a lifecycle policy to move the data to Amazon Glacier after 90 days, and expire the data after 7 years.
ANSWER10:
D
Notes/Hint10:
Option D meets all requirements and is cost-effective: a centralized S3 bucket collects CloudTrail logs from every account and keeps them instantly available for the first 90 days, and a lifecycle policy transitions the data to Amazon Glacier afterwards and expires it after 7 years.
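A minimal sketch of such a lifecycle rule with boto3 (the bucket name is a placeholder, and 7 years is approximated as 2,555 days):

```python
import boto3

s3 = boto3.client("s3")
bucket = "central-cloudtrail-logs"  # placeholder bucket name

# Keep objects in S3 for 90 days for instant analysis, then archive to
# Glacier; expire them after roughly 7 years (2,555 days) for compliance.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},
        }],
    },
)
```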
Question 11: A security engineer has been informed that a user’s access key has been found on GitHub. The engineer must ensure that this access key cannot continue to be used, and must assess whether the access key was used to perform any unauthorized activities. Which steps must be taken to perform these tasks?
A) Review the user’s IAM permissions and delete any unrecognized or unauthorized resources.
B) Delete the user, review Amazon CloudWatch Logs in all regions, and report the abuse.
C) Delete or rotate the user’s key, review the AWS CloudTrail logs in all regions, and delete any unrecognized or unauthorized resources.
D) Instruct the user to remove the key from the GitHub submission, rotate keys, and re-deploy any instances that were launched.
ANSWER11:
C
Notes/Hint11:
Deleting or rotating the key immediately prevents further use, and AWS CloudTrail records every API call made with it, so reviewing the CloudTrail logs in all regions reveals any unauthorized activity, after which unrecognized resources can be deleted.
Question 12: You have a CloudFront distribution configured with the following path patterns: When users request objects that start with ‘static2/’, they are receiving 404 response codes. What might be the problem?
A) CloudFront distributions cannot have multiple different origin types
B) The ‘*’ path pattern must appear after the ‘static2/*’ path
C) CloudFront distributions cannot have origins in different AWS regions
D) The ‘*’ path pattern must appear before ‘static1/*’ path
ANSWER12:
B
Notes/Hint12:
CloudFront evaluates path patterns in the order they are listed, so the catch-all ‘*’ pattern must appear after the more specific ‘static2/*’ pattern; otherwise requests for ‘static2/’ objects match ‘*’ first, are routed to the wrong origin, and return 404s. (CloudFront distributions can, in fact, use multiple origin types and origins in different AWS regions.)
Question 13: An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk. Which solution will resolve the security concern?
A) Access the data through an Internet Gateway.
B) Access the data through a VPN connection.
C) Access the data through a NAT Gateway.
D) Access the data through a VPC endpoint for Amazon S3.
ANSWER13:
D
Notes/Hint13:
VPC endpoints for Amazon S3 provide secure connections to S3 buckets without requiring an internet gateway or NAT. NAT gateways and internet gateways still route traffic over the internet to the public Amazon S3 endpoint, and a VPN only connects the corporate network to the VPC, so it does not provide a private path to Amazon S3.
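A hedged sketch of creating a gateway endpoint for S3 with boto3 (the region, VPC ID, and route table ID are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical IDs for illustration.
vpc_id = "vpc-0123456789abcdef0"
route_table_id = "rtb-0123456789abcdef0"

# A gateway endpoint adds a route for S3's prefix list to the route table,
# so instances reach S3 over the AWS network instead of the internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=vpc_id,
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=[route_table_id],
)
```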
Question 14: An organization is building an Amazon Redshift cluster in their shared services VPC. The cluster will host sensitive data. How can the organization control which networks can access the cluster?
A) Run the cluster in a different VPC and connect through VPC peering
B) Create a database user inside the Amazon Redshift cluster only for users on the network
C) Define a cluster security group for the cluster that allows access from the allowed networks
D) Only allow access to networks that connect with the shared services network via VPN
ANSWER14:
C
Notes/Hint14:
A security group can grant access to traffic from the allowed networks via the CIDR range for each network. VPC peering and VPN are connectivity services and cannot control traffic for security. Amazon Redshift user accounts address authentication and authorization at the user level and have no control over network traffic.
Question 15: From a security perspective, what is a principal?
A) An identity
B) An anonymous user
C) An authenticated user
D) A resource
ANSWER15:
B and C
Notes/Hint15:
An anonymous user falls under the definition of a principal, and so does an authenticated user: a principal can be an anonymous or an authenticated user acting on a system.
Question 16: A company is storing an access key (access key ID and secret access key) in a text file on a custom AMI. The company uses the access key to access DynamoDB tables from instances created from the AMI. The security team has mandated a more secure solution. Which solution will meet the security team’s mandate?
A) Put the access key in an S3 bucket, and retrieve the access key on boot from the instance.
B) Pass the access key to the instances through instance user data.
C) Obtain the access key from a key server launched in a private subnet.
D) Create an IAM role with permissions to access the table, and launch all instances with the new role.
ANSWER16:
D
Notes/Hint16:
IAM roles for EC2 instances allow applications running on the instance to access AWS resources without having to create and store any access keys. Any solution that involves creating an access key introduces the added complexity of managing that secret.
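Assuming the instance was launched with such a role (and the table name and key below are hypothetical), a minimal boto3 sketch showing why no stored secret is needed:

```python
import boto3

# With an IAM role attached to the EC2 instance (an instance profile),
# boto3 automatically obtains temporary credentials from the instance
# metadata service; no access key is stored in the AMI or on disk.
dynamodb = boto3.resource("dynamodb")

table = dynamodb.Table("Orders")  # hypothetical table name
item = table.get_item(Key={"order_id": "12345"})
print(item.get("Item"))
```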
Question 17: While signing REST/Query requests, for additional security, you should transmit your requests using Secure Sockets Layer (SSL) by using ____.
Question 18: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the followings is True with regards to Envelope Encryption?
A) Data is encrypted by an encrypted Data key, which is further encrypted using an encrypted Master Key.
B) Data is encrypted by a plaintext Data key, which is further encrypted using an encrypted Master Key.
C) Data is encrypted by an encrypted Data key, which is further encrypted using a plaintext Master Key.
D) Data is encrypted by a plaintext Data key, which is further encrypted using a plaintext Master Key.
ANSWER18:
D
Notes/Hint18:
With envelope encryption, unencrypted data is encrypted using a plaintext data key, and that data key is in turn encrypted using a plaintext master key. The master key is stored securely in AWS KMS and is known as a Customer Master Key (CMK).
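A minimal envelope-encryption sketch using AWS KMS for the data key and the third-party cryptography package for the local encryption step (the key alias is a placeholder):

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")
key_id = "alias/my-master-key"  # hypothetical CMK alias

# KMS returns the data key twice: once in plaintext (for local encryption)
# and once encrypted under the master key (for storage next to the data).
data_key = kms.generate_data_key(KeyId=key_id, KeySpec="AES_256")

plaintext_key = data_key["Plaintext"]
encrypted_key = data_key["CiphertextBlob"]  # store this with the ciphertext

# Encrypt the payload locally with the plaintext data key, then discard it.
aesgcm = AESGCM(plaintext_key)
nonce = os.urandom(12)
ciphertext = aesgcm.encrypt(nonce, b"sensitive data", None)

# To decrypt later: kms.decrypt(CiphertextBlob=encrypted_key) returns the
# plaintext data key, which then decrypts the ciphertext.
```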
Question 19: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?
A) Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
B) Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access.
C) Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
D) Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.
ANSWER19:
C
Notes/Hint19:
With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be embedded in or exposed by the application.
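A hedged sketch of the token exchange with boto3; the role ARN and IdP token are placeholders:

```python
import boto3

# AssumeRoleWithWebIdentity needs no AWS credentials of its own: the IdP
# token itself authenticates the caller.
sts = boto3.client("sts")

response = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::111122223333:role/WebAppDynamoDBRole",
    RoleSessionName="browser-session",
    WebIdentityToken="<token returned by Google/Facebook after login>",
)

# The temporary credentials map to the IAM role's permissions and expire
# automatically, so nothing long-lived ships with the application.
creds = response["Credentials"]
dynamodb = boto3.resource(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```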
Question 20: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?
A) Cognito Data
B) Cognito Events
C) Cognito Streams
D) Cognito Callbacks
ANSWER20:
C
Notes/Hint20:
Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can now configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams
Question 22: Which of the following statements are correct? (Choose 2)
A) The Customer Master Key is used to encrypt and decrypt the Envelope Key or Data Key.
B) The Envelope Key or Data Key is used to encrypt and decrypt plain text files.
C) The Envelope Key or Data Key is used to encrypt and decrypt the Customer Master Key.
D) The Customer Master Key is used to encrypt and decrypt plain text files.
ANSWER22:
A and B
Notes/Hint22:
AWS Key Management Service Concepts: The Customer Master Key is used to encrypt and decrypt the Envelope Key or Data Key, The Envelope Key or Data Key is used to encrypt and decrypt plain text files.
Question 23: Which of the following is an encrypted key used by KMS to encrypt your data?
A) Customer Managed Key
B) Encryption Key
C) Envelope Key
D) Customer Master Key
ANSWER23:
C
Notes/Hint23:
Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption: the practice of encrypting plaintext data with a data key and then encrypting the data key under another key.
Question 26: A Security engineer must develop an AWS Identity and Access Management (IAM) strategy for a company’s organization in AWS Organizations. The company needs to give developers autonomy to develop and test their applications on AWS, but the company also needs to implement security guardrails to help protect itself. The company creates and distributes applications with different levels of data classification and types. The solution must maximize scalability.
Which combination of steps should the security engineer take to meet these requirements? (Choose three.)
A) Create an SCP to restrict access to highly privileged or unauthorized actions to specific IAM principals. Assign the SCP to the appropriate AWS accounts.
B) Create an IAM permissions boundary to allow access to specific actions and IAM principals. Assign the IAM permissions boundary to all IAM principals within the organization.
C) Create a delegated IAM role that has capabilities to create other IAM roles. Use the delegated IAM role to provision IAM principals by following the principle of least privilege.
D) Create OUs based on data classification and type. Add the AWS accounts to the appropriate OU. Provide developers access to the AWS accounts based on business need.
E) Create IAM groups based on data classification and type. Add only the required developers’ IAM role to the IAM groups within each AWS account.
F) Create IAM policies based on data classification and type. Add the minimum required IAM policies to the developers’ IAM role within each AWS account.
ANSWER26:
A, B and C
Notes/Hint26:
Of the choices, three relate to SCPs, which control which services are available, and three relate to IAM permissions and permissions boundaries. Limiting services alone doesn’t address data classification; combining an SCP guardrail with permissions boundaries and delegated, least-privilege role provisioning provides the scalability the company needs.
Question 27: A Network Load Balancer (NLB) target instance is not entering the InService state. A security engineer determines that health checks are failing.
Which factors could cause the health check failures? (Choose three.)
A) The target instance’s security group does not allow traffic from the NLB.
B) The target instance’s security group is not attached to the NLB
C) The NLB’s security group is not attached to the target instance.
D) The target instance’s subnet network ACL does not allow traffic from the NLB.
E) The target instance’s security group is not using IP addresses to allow traffic from the NLB.
F) The target network ACL is not attached to the NLB.
ANSWER27:
A, D and E
Notes/Hint27:
A Network Load Balancer does not have a security group of its own, so options B, C, and F cannot apply. Health checks fail when the target instance’s security group does not allow traffic from the NLB, when that rule does not use IP addresses (there is no NLB security group to reference), or when the subnet’s network ACL blocks the health-check traffic.
Cryptography: Practice and study of techniques for secure communication in the presence of third parties called adversaries.
Hacking: Catch-all term for any type of misuse of a computer to break the security of another computing system to steal data, corrupt systems or files, commandeer the environment, or disrupt data-related activities in any way.
Cyberwarfare: Use of technology to attack a nation, causing comparable harm to actual warfare. There is significant debate among experts regarding the definition of cyberwarfare, and even whether such a thing exists.
Penetration testing: Colloquially known as a pen test, pentest or ethical hacking, is an authorized simulated cyberattack on a computer system, performed to evaluate the security of the system. Not to be confused with a vulnerability assessment.
Malware: Any software intentionally designed to cause damage to a computer, server, client, or computer network. A wide variety of malware types exist, including computer viruses, worms, Trojan horses, ransomware, spyware, adware, rogue software, and scareware.
Malware analysis tools: Any.Run – malware hunting with live access to the heart of an incident: https://any.run/
VirusTotal – analyze suspicious files and URLs to detect types of malware and automatically share them with the security community: https://www.virustotal.com/gui/
VPN: A virtual private network (VPN) extends a private network across a public network and enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network. Applications running across a VPN may therefore benefit from the functionality, security, and management of the private network. Encryption is a common, although not an inherent, part of a VPN connection.
Antivirus: Antivirus software, or anti-virus software (abbreviated to AV software), also known as anti-malware, is a computer program used to prevent, detect, and remove malware.
DDoS: A distributed denial-of-service (DDoS) attack is one of the most powerful weapons on the internet. When you hear about a website being “brought down by hackers,” it generally means it has become a victim of a DDoS attack.
Fraud Detection: Set of activities undertaken to prevent money or property from being obtained through false pretenses. Fraud detection is applied to many industries such as banking or insurance. In banking, fraud may include forging checks or using stolen credit cards.
Spyware: Software with malicious behavior that aims to gather information about a person or organization and send such information to another entity in a way that harms the user; for example, by violating their privacy or endangering their device’s security.
Spoofing: Disguising a communication from an unknown source as being from a known, trusted source
Pharming: Malicious websites that look legitimate and are used to gather usernames and passwords.
Catfishing: Creating a fake profile for fraudulent or deceptive purposes
SSL: Stands for secure sockets layer. Protocol for web browsers and servers that allows for the authentication, encryption and decryption of data sent over the Internet.
Phishing emails: Disguised as trustworthy entity to lure someone into providing sensitive information
Intrusion detection System: Device or software application that monitors a network or systems for malicious activity or policy violations. Any intrusion activity or violation is typically reported either to an administrator or collected centrally using a security information and event management system.
Encryption: Encryption is the method by which information is converted into secret code that hides the information’s true meaning. The science of encrypting and decrypting information is called cryptography. In computing, unencrypted data is also known as plaintext, and encrypted data is called ciphertext.
MFA: Multi-factor authentication (MFA) is defined as a security mechanism that requires an individual to provide two or more credentials in order to authenticate their identity. In IT, these credentials take the form of passwords, hardware tokens, numerical codes, biometrics, time, and location.
Vulnerabilities: A vulnerability is a hole or a weakness in the application, which can be a design flaw or an implementation bug, that allows an attacker to cause harm to the stakeholders of an application. Stakeholders include the application owner, application users, and other entities that rely on the application.
SQL injections: SQL injection is a code injection technique, used to attack data-driven applications, in which malicious SQL statements are inserted into an entry field for execution.
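A minimal Python illustration of the vulnerability and its standard fix, parameterized queries, using the built-in sqlite3 module (the table and payload are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'a1'), ('bob', 'b2')")

user_input = "nobody' OR '1'='1"  # classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the WHERE
# clause, returning every row even though no user is named "nobody".
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print(len(unsafe))  # 2 -- the injection succeeded

# SAFE: a parameterized query treats the whole input strictly as data.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(safe))  # 0 -- no user has that literal name
```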
Cyber attacks: In computers and computer networks an attack is any attempt to expose, alter, disable, destroy, steal or gain unauthorized access to or make unauthorized use of an asset.
Confidentiality: Confidentiality involves a set of rules or a promise usually executed through confidentiality agreements that limits access or places restrictions on certain types of information.
Secure channel: In cryptography, a secure channel is a way of transferring data that is resistant to overhearing and tampering. A confidential channel is a way of transferring data that is resistant to overhearing, but not necessarily resistant to tampering.
Tunneling: Communications protocol that allows for the movement of data from one network to another. It involves allowing private network communications to be sent across a public network through a process called encapsulation.
SSH: Secure Shell is a cryptographic network protocol for operating network services securely over an unsecured network. Typical applications include remote command-line, login, and remote command execution, but any network service can be secured with SSH.
SSL Certificates: SSL certificates are what enable websites to move from HTTP to HTTPS, which is more secure. An SSL certificate is a data file hosted in a website’s origin server. SSL certificates make SSL/TLS encryption possible, and they contain the website’s public key and the website’s identity, along with related information.
Phishing: Phishing is a cybercrime in which a target or targets are contacted by email, telephone or text message by someone posing as a legitimate institution to lure individuals into providing sensitive data such as personally identifiable information, banking and credit card details, and passwords.
Cybercrime: Cybercrime, or computer-oriented crime, is a crime that involves a computer and a network. The computer may have been used in the commission of a crime, or it may be the target. Cybercrime may threaten a person, company or a nation’s security and financial health.
Backdoor: A backdoor is a means to access a computer system or encrypted data that bypasses the system’s customary security mechanisms. A developer may create a backdoor so that an application or operating system can be accessed for troubleshooting or other purposes.
Salt and Hash: A cryptographic salt is made up of random bits added to each password instance before its hashing. Salts create unique passwords even in the instance of two users choosing the same passwords. Salts help us mitigate rainbow table attacks by forcing attackers to re-compute them using the salts.
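A minimal sketch of salted password hashing with Python's standard library (the iteration count is an illustrative choice, not a mandated value):

```python
import hashlib
import os

password = b"correct horse battery staple"

# A fresh random salt per password makes identical passwords hash
# differently and forces attackers to recompute rainbow tables per salt.
salt = os.urandom(16)

# PBKDF2 applies the hash many times to slow down brute-force attempts.
digest = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

# Store salt + digest together; verify a login attempt by re-deriving
# the digest with the stored salt and comparing.
stored = salt + digest
```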
Password: A password, sometimes called a passcode, is a memorized secret, typically a string of characters, usually used to confirm the identity of a user. Using the terminology of the NIST Digital Identity Guidelines, the secret is memorized by a party called the claimant, while the party verifying the identity of the claimant is called the verifier. When the claimant successfully demonstrates knowledge of the password to the verifier through an established authentication protocol, the verifier is able to infer the claimant’s identity.
Fingerprint: A fingerprint is an impression left by the friction ridges of a human finger. The recovery of partial fingerprints from a crime scene is an important method of forensic science. Moisture and grease on a finger result in fingerprints on surfaces such as glass or metal.
Facial recognition: Facial recognition works better for a person as compared to fingerprint detection. It releases the person from the hassle of moving their thumb or index finger to a particular place on their mobile phone. A user would just have to bring their phone in level with their eye.
Asymmetric key ciphers versus symmetric key ciphers (Difference between symmetric and Asymmetric encryption): The basic difference between these two types of encryption is that symmetric encryption uses one key for both encryption and decryption, and the asymmetric encryption uses public key for encryption and a private key for decryption.
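A short illustration of both models using the third-party cryptography package: Fernet for symmetric encryption (one shared key) and RSA with OAEP padding for asymmetric encryption (public key encrypts, private key decrypts):

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Symmetric: the same key encrypts and decrypts.
key = Fernet.generate_key()
f = Fernet(key)
token = f.encrypt(b"secret message")
assert f.decrypt(token) == b"secret message"

# Asymmetric: the public key encrypts, only the private key decrypts.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"secret message", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"secret message"
```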
Decryption: The conversion of encrypted data into its original form is called Decryption. It is generally a reverse process of encryption. It decodes the encrypted information so that an authorized user can only decrypt the data because decryption requires a secret key or password.
Algorithms: Finite sequence of well-defined, computer-implementable instructions, typically to solve a class of problems or to perform a computation.
Authentication: The act of proving an assertion, such as the identity of a computer system user. In contrast with identification, the act of indicating a person or thing’s identity, authentication is the process of verifying that identity. It might involve validating personal identity documents, verifying the authenticity of a website with a digital certificate, determining the age of an artifact by carbon dating, or ensuring that a product or document is not counterfeit.
DFIR: Digital forensics and incident response: Multidisciplinary profession that focuses on identifying, investigating, and remediating computer network exploitation. This can take varied forms and involves a wide variety of skills, kinds of attackers, and kinds of targets.
OTP: One Time Password: A one-time password, also known as one-time PIN or dynamic password, is a password that is valid for only one login session or transaction, on a computer system or other digital device
Proxy Server and Reverse Proxy Server: A proxy server is a go-between or intermediary server that forwards requests for content from multiple clients to different servers across the Internet. A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server.
Exploit Database – Maintained by Offensive Security, an information security training company that provides various information security certifications as well as high-end penetration testing services. https://www.exploit-db.com/
Dark Reading – Cybersecurity’s comprehensive news site is now an online community for security professionals. https://www.darkreading.com/
The Hacker News – The Hacker News (THN) is a leading, trusted, widely-acknowledged dedicated cybersecurity news platform, attracting over 8 million monthly readers including IT professionals, researchers, hackers, technologists, and enthusiasts. https://thehackernews.com
SecuriTeam – A free and independent source of vulnerability information. https://securiteam.com/
SANS NewsBites – “A semiweekly high-level executive summary of the most important news articles that have been published on computer security during the last week. Each news item is very briefly summarized and includes a reference on the web for detailed information, if possible.” Published for free on Tuesdays and Fridays. https://www.sans.org/newsletters/newsbites
SimplyCyber – Weekly videos; Simply Cyber brings information security related content to help IT or information security professionals take their career further, faster. Current cybersecurity industry topics and techniques are explored to promote a career in the field. Topics cover offense, defense, governance, risk, compliance, privacy, education, certification, and conferences, all with the intent of professional development. https://www.youtube.com/c/GeraldAuger
HackADay – Hackaday serves up Fresh Hacks Every Day from around the Internet. https://hackaday.com/
TheCyberMentor – Heath Adams uploads regular videos related to various facets of cyber security, from bug bounty hunts to specific pentest methodologies like API, buffer overflows, networking. https://www.youtube.com/c/TheCyberMentor/
Grant Collins – Grant uploads videos regarding breaking into cybersecurity, various cybersecurity projects, building up a home lab amongst many others. Also has a companion discord channel and a resource website. https://www.youtube.com/channel/UCTLUi3oc1-a7dS-2-YgEKmA/featured
Risky Business – Published weekly, the Risky Business podcast features news and in-depth commentary from security industry luminaries. Hosted by award-winning journalist Patrick Gray, Risky Business has become a must-listen digest for information security professionals. https://risky.biz/
Paul’s Security Weekly – This show features interviews with folks in the security community; technical segments, which are just that, very technical; and security news, which is an open discussion forum for the hosts to express their opinions about the latest security headlines, breaches, new exploits and vulnerabilities, “not” politics, “cyber” policies and more. https://securityweekly.com/category-shows/paul-security-weekly/
Security Now – Steve Gibson, the man who coined the term spyware and created the first anti-spyware program, creator of Spinrite and ShieldsUP, discusses the hot topics in security today with Leo Laporte. https://twit.tv/shows/security-now
Daily Information Security Podcast (“StormCast”) – Stormcasts are daily 5-10 minute information security threat updates. The podcast is produced each work day, and typically released late in the day to be ready for your morning commute. https://isc.sans.edu/podcast.html
ShadowTalk – Threat intelligence podcast by Digital Shadows. The weekly podcast highlights key findings of primary-source research their Intelligence Team is conducting, along with guest speakers discussing the latest threat actors, campaigns, security events and industry news. https://resources.digitalshadows.com/threat-intelligence-podcast-shadowtalk
Don’t Panic – The Unit 42 Podcast Don’t Panic! is the official podcast from Unit 42 at Palo Alto Networks. We find the big issues that are frustrating cyber security practitioners and help simplify them so they don’t need to panic. https://unit42.libsyn.com/
Recorded Future – Recorded Future takes you inside the world of cyber threat intelligence, sharing stories from the trenches and the operations floor as well as giving you the skinny on established and emerging adversaries. The show also covers current events, technical tradecraft, and insights on the big-picture issues in the industry. https://www.recordedfuture.com/resources/podcast/
The Cybrary Podcast – Listen in as they discuss a range of topics from DevSecOps and ransomware attacks to diversity and how to retain talent. Entrepreneurs at all stages of their startup companies join to share their stories and experience, including how to get funding, hiring the best talent, driving sales, and choosing where to base your business. https://www.cybrary.it/info/cybrary-podcast/
Cyber Life – The Cyber Life podcast is for cyber security (InfoSec) professionals, people trying to break into the industry, or business owners looking to learn how to secure their data. Topics include how to get jobs, breakdowns of hot topics, and special guest interviews with the men and women “in the trenches” of the industry. https://redcircle.com/shows/cyber-life
Career Notes – Cybersecurity professionals share their personal career journeys and offer tips and advice in this brief, weekly podcast from The CyberWire. https://www.thecyberwire.com/podcasts/career-notes
Down the Security Rabbithole – http://podcast.wh1t3rabbit.net/ – Hosted by Rafal Los and James Jardine, who discuss, by means of interviews or news analysis, everything about cybersecurity, including cybercrime, cyber law, cyber risk, enterprise risk & security and more. If you want to hear issues that are relevant to your organization, subscribe and tune in to this podcast.
The Privacy, Security, & OSINT Show – https://podcasts.apple.com/us/podcast/the-privacy-security-osint-show/id1165843330 – Hosted by Michael Bazzell, this is your weekly dose of digital security, privacy, and Open Source Intelligence (OSINT) opinion and news. The podcast helps listeners learn ideas on how to stay secure from cyber-attacks and become “digitally invisible”.
Defensive Security Podcast – https://defensivesecurity.org/ – Hosted by Andrew Kalat (@lerg) and Jerry Bell (@maliciouslink), the Defensive Security Podcast looks at the latest security news happening around the world and picks out the lessons that can be applied to keeping organizations secured. As of today, they have more than 200 episodes, covering topics such as forensics, penetration testing, incident response, malware analysis, vulnerabilities and more.
Darknet Diaries – https://darknetdiaries.com/episode/ – Hosted and produced by Jack Rhysider, Darknet Diaries discusses topics related to information security and features true stories from hackers who attacked or have been attacked. If you’re a fan of the show, you might consider buying some of their souvenirs (https://shop.darknetdiaries.com/).
Brakeing Down Security – https://www.brakeingsecurity.com/ – Started in 2014 and hosted by Bryan Brake, Brian Boettcher, and Amanda Berlin, this podcast discusses everything about the cybersecurity world, compliance, privacy, and regulatory issues that arise in today’s organizations. The hosts teach concepts that information security professionals need to know and discuss topics that will refresh the memories of seasoned veterans.
Open Source Security Podcast – https://www.opensourcesecuritypodcast.com/ – A podcast that discusses security with an open-source slant. The show started in 2016 and is hosted by Josh Bressers and Kurt Siefried. As of this writing, they have posted around 190+ episodes.
Cyber (Motherboard) – https://podcasts.apple.com/us/podcast/cyber/id1441708044 – Ben Makuch hosts the podcast CYBER and talks weekly to Motherboard reporters Lorenzo Franceschi-Bicchierai and Joseph Cox about famous hackers, researchers, and the biggest news in cybersecurity. The cyber stuff gets complicated really fast, but Motherboard spends its time fixed in the infosec world so we don’t have to.
Hak5 – https://shop.hak5.org/pages/videos – Hak5 is a brand created by a group of security professionals, hardcore gamers and “IT ninjas”. Their podcast, mostly uploaded on YouTube, discusses everything from open-source software to penetration testing and network infrastructure. Their channel currently has 590,000 subscribers and is one of the most viewed shows for learning about security networks.
Threatpost Podcast Series – https://threatpost.com/category/podcasts/ – Threatpost is an independent news site and a leading source of information about IT and business security for hundreds of thousands of professionals worldwide. Its award-winning editorial team produces unique and high-impact content including security news, videos, feature reports and more, with global editorial activities driven by industry-leading journalist Tom Spring, editor-in-chief.
CISO/Security Vendor Relationship Podcast – https://cisoseries.com – Co-hosted by David Spark, creator of the CISO/Security Vendor Relationship Series, and Mike Johnson. In 30 minutes, this weekly program challenges the co-hosts, guests, and listeners to critique and share true stories. The podcast aims to enlighten and educate listeners on improving security buyer and seller relationships.
Getting Into Infosec Podcast Stories of how Infosec and Cybersecurity pros got jobs in the field so you can be inspired, motivated, and educated on your journey. – https://gettingintoinfosec.com/
Unsupervised Learning Weekly podcasts and biweekly newsletters as a curated summary intersection of security, technology, and humans, or a standalone idea to provoke thought, by Daniel Miessler. https://danielmiessler.com/podcast/
SECURITY BOOKS:
Building Secure & Reliable Systems Best Practices for Designing, Implementing and Maintaining Systems (O’Reilly) By Heather Adkins, Betsy Beyer, Paul Blankinship, Ana Oprea, Piotr Lewandowski, Adam Stubblefield https://landing.google.com/sre/books/
Security Engineering By Ross Anderson – A guide to building dependable distributed systems (and Ross Anderson is brilliant). https://www.cl.cam.ac.uk/~rja14/book.html
The Cyber Skill Gap By Vagner Nunes – The Cyber Skill Gap: How To Become A Highly Paid And Sought After Information Security Specialist! (Use COUPON CODE: W4VSPTW8G7 to make it free) https://payhip.com/b/PdkW
Texas A&M Security Courses The web-based courses are designed to ensure that the privacy, reliability, and integrity of the information systems that power the global economy remain intact and secure. The web-based courses are offered through three discipline-specific tracks: general, non-technical computer users; technical IT professionals; and business managers and professionals. https://teex.org/program/dhs-cybersecurity/
AWS Cloud Certified Get skills in AWS to be more marketable. Training is quality and free. https://www.youtube.com/watch?v=3hLmDS179YE Have to create an AWS account, Exam is $100.
“Using ATT&CK for Cyber Threat Intelligence Training” – 4-hour training. The goal of this training is for students to understand how to apply ATT&CK to cyber threat intelligence: https://attack.mitre.org/resources/training/cti/
Chief Information Security Officer (CISO) Workshop Training – The Chief Information Security Office (CISO) workshop contains a collection of security learnings, principles, and recommendations for modernizing security in your organization. This training workshop is a combination of experiences from Microsoft security teams and learnings from customers. – https://docs.microsoft.com/en-us/security/ciso-workshop/ciso-workshop
CLARK Center Plan C – Free cybersecurity curriculum that is primarily video-based or provides online assignments that can be easily integrated into a virtual learning environment. https://clark.center/home
Hack.me is a FREE, community based project powered by eLearnSecurity. The community can build, host and share vulnerable web application code for educational and research purposes. It aims to be the largest collection of “runnable” vulnerable web applications, code samples and CMS’s online. The platform is available without any restriction to any party interested in Web Application Security. https://hack.me/
M.E. Kabay – Free industry courses and course materials that students, teachers, and others are welcome to use for free courses and lectures. http://www.mekabay.com/courses/index.htm
Enroll Now Free: PCAP Programming Essentials in Python – https://www.netacad.com/courses/programming/pcap-programming-essentials-python – Python is the very versatile, object-oriented programming language used by startups and tech giants such as Google, Facebook, Dropbox and IBM. Python is also recommended for aspiring young developers who are interested in pursuing careers in security, networking and the Internet of Things. Once you complete this course, you are ready to take the PCAP – Certified Associate in Python Programming exam. No prior knowledge of programming is required.
Stanford University Webinar – Hacked! Security Lessons from Big Name Breaches. 50-minute cyber lecture from Stanford. You will learn: the root cause of key breaches and how to prevent them; how to measure your organization’s external security posture; and how the attacker lifecycle should influence the way you allocate resources. https://www.youtube.com/watch?v=V9agUAz0DwI
Stanford University Webinar – Hash, Hack, Code: Emerging Trends in Cyber Security Join Professor Dan Boneh as he shares new approaches to these emerging trends and dives deeper into how you can protect networks and prevent harmful viruses and threats. 50 minute cyber lecture from Stanford. https://www.youtube.com/watch?v=544rhbcDtc8
Kill Chain: The Cyber War on America’s Elections (Documentary) (Referenced at GRIMMCON), In advance of the 2020 Presidential Election, Kill Chain: The Cyber War on America’s Elections takes a deep dive into the weaknesses of today’s election technology, an issue that is little understood by the public or even lawmakers. https://www.hbo.com/documentaries/kill-chain-the-cyber-war-on-americas-elections
Intro to Cybersecurity Course (15 hours) Learn how to protect your personal data and privacy online and in social media, and why more and more IT jobs require cybersecurity awareness and understanding. Receive a certificate of completion. https://www.netacad.com/portal/web/self-enroll/c/course-1003729
Cybersecurity Essentials (30 hours) Foundational knowledge and essential skills for all cybersecurity domains, including info security, systems sec, network sec, ethics and laws, and defense and mitigation techniques used in protecting businesses. https://www.netacad.com/portal/web/self-enroll/c/course-1003733
Pluralsight and Microsoft Partnership to help you become an expert in Azure. With skill assessments and over 200+ courses, 40+ Skill IQs and 8 Role IQs, you can focus your time on understanding your strengths and skill gaps and learn Azure as quickly as possible. https://www.pluralsight.com/partners/microsoft/azure
Blackhat Webcast Series Monthly webcast of varying cyber topics. I will post specific ones in the training section below sometimes, but this is worth bookmarking and checking back. They always have top tier speakers on relevant, current topics. https://www.blackhat.com/html/webcast/webcast-home.html
Federal Virtual Training Environment – US Govt sponsored free courses. There are 6 available, no login required. They are 101 Coding for the Public, 101 Critical Infrastructure Protection for the Public, Cryptocurrency for Law Enforcement for the Public, Cyber Supply Chain Risk Management for the Public, 101 Reverse Engineering for the Public, Fundamentals of Cyber Risk Management. https://fedvte.usalearning.gov/public_fedvte.php
Harrisburg University CyberSecurity Collection of 18 curated talks. Scroll down to CYBER SECURITY section. You will see there are 4 categories Resource Sharing, Tools & Techniques, Red Team (Offensive Security) and Blue Teaming (Defensive Security). Lot of content in here; something for everyone. https://professionaled.harrisburgu.edu/online-content/
OnRamp 101-Level ICS Security Workshop Starts this 4/28. 10 videos, Q&A / discussion, bonus audio, great links. Get up to speed fast on ICS security. It runs for 5 weeks. 2 videos per week. Then we keep it open for another 3 weeks for 8 in total. https://onramp-3.s4xevents.com
HackXOR WebApp CTF Hackxor is a realistic web application hacking game, designed to help players of all abilities develop their skills. All the missions are based on real vulnerabilities I’ve personally found while doing pentests, bug bounty hunting, and research. https://hackxor.net/
flAWS System Through a series of levels you’ll learn about common mistakes and gotchas when using Amazon Web Services (AWS). Multiple levels, “Buckets” of fun. http://flaws.cloud/
Stanford CS 253 Web Security A free course from Stanford providing a comprehensive overview of web security. The course begins with an introduction to the fundamentals of web security and proceeds to discuss the most common methods for web attacks and their countermeasures. The course includes video lectures, slides, and links to online reading assignments. https://web.stanford.edu/class/cs253
Linux Journey A free, handy guide for learning Linux. Coverage begins with the fundamentals of command line navigation and basic text manipulation. It then extends to more advanced topics, such as file systems and networking. The site is well organized and includes many examples along with code snippets. Exercises and quizzes are provided as well. https://linuxjourney.com
Ryan’s Tutorials A collection of free, introductory tutorials on several technology topics including: Linux command line, Bash scripting, creating and styling webpages with HTML and CSS, counting and converting between different number systems, and writing regular expressions. https://ryanstutorials.net
CYBER INTELLIGENCE ANALYTICS AND OPERATIONS – Learn: the ins and outs of all stages of the intelligence cycle, from collection to analysis, from seasoned intel professionals; how to employ threat intelligence to conduct comprehensive defense strategies to mitigate potential compromise; how to use TI to respond to and minimize the impact of cyber incidents; and how to generate comprehensive and actionable reports to communicate gaps in defenses and intelligence findings to decision makers. https://www.shadowscape.io/cyber-intelligence-analytics-operat
Linux Command Line for Beginners – 25 hours of training. In this course, you’ll learn from one of Fullstack’s top instructors, Corey Greenwald, as he guides you through the basics of the command line in short, digestible video lectures. Then you’ll use Fullstack’s CyberLab platform to hone your new technical skills while working through a Capture the Flag game, a special kind of cybersecurity game designed to challenge participants to solve computer security problems by solving puzzles. Finally, through a series of carefully curated resources, we’ll introduce you to some important cybersecurity topics so that you can understand some of the common language, concepts and tools used in the industry. https://prep.fullstackacademy.com/
Hacking 101 6 hours of free training – First, you’ll take a tour of the world and watch videos of hackers in action across various platforms (including computers, smartphones, and the power grid). You may be shocked to learn what techniques the good guys are using to fight the bad guys (and which side is winning). Then you’ll learn what it’s like to work in this world, as we show you the different career paths open to you and the (significant) income you could make as a cybersecurity professional. https://cyber.fullstackacademy.com/prepare/hacking-101
Choose Your Own Cyber Adventure Series: Entry Level Cyber Jobs Explained YouTube Playlist (videos from my channel #simplyCyber) This playlist is a collection of various roles within the information security field, mostly entry level, so folks can understand what different opportunities are out there. https://www.youtube.com/playlist?list=PL4Q-ttyNIRAqog96mt8C8lKWzTjW6f38F
NETINSTRUCT.COM Free Cybersecurity, IT and Leadership Courses – Includes OS and networking basics. Critical to any Cyber job. https://netinstruct.com/courses
HackerSploit – HackerSploit is the leading provider of free and open-source Infosec and cybersecurity training. https://hackersploit.org/
Computer Science courses with video lectures Intent of this list is to act as Online bookmarks/lookup table for freely available online video courses. Focus would be to keep the list concise so that it is easy to browse. It would be easier to skim through 15 page list, find the course and start learning than having to read 60 pages of text. If you are student or from non-CS background, please try few courses to decide for yourself as to which course suits your learning curve best. https://github.com/Developer-Y/cs-video-courses?utm_campaign=meetedgar&utm_medium=social&utm_source=meetedgar.com
Cryptography I -offered by Stanford University – Rolling enrollment – Cryptography is an indispensable tool for protecting information in computer systems. In this course you will learn the inner workings of cryptographic systems and how to correctly use them in real-world applications. The course begins with a detailed discussion of how two parties who have a shared secret key can communicate securely when a powerful adversary eavesdrops and tampers with traffic. We will examine many deployed protocols and analyze mistakes in existing systems. The second half of the course discusses public-key techniques that let two parties generate a shared secret key. https://www.coursera.org/learn/crypto
Software Security Rolling enrollment -offered by University of Maryland, College Park via Coursera – This course we will explore the foundations of software security. We will consider important software vulnerabilities and attacks that exploit them — such as buffer overflows, SQL injection, and session hijacking — and we will consider defenses that prevent or mitigate these attacks, including advanced testing and program analysis techniques. Importantly, we take a “build security in” mentality, considering techniques at each phase of the development cycle that can be used to strengthen the security of software systems. https://www.coursera.org/learn/software-security
Intro to Information Security Georgia Institute of Technology via Udacity – Rolling Enrollment. This course provides a one-semester overview of information security. It is designed to help students with prior computer and programming knowledge — both undergraduate and graduate — understand this important priority in society today. Offered at Georgia Tech as CS 6035 https://www.udacity.com/course/intro-to-information-security–ud459
Cyber-Physical Systems Security Georgia Institute of Technology via Udacity – This course provides an introduction to security issues relating to various cyber-physical systems including industrial control systems and those considered critical infrastructure systems. 16 week course – Offered at Georgia Tech as CS 8803 https://www.udacity.com/course/cyber-physical-systems-security–ud279
Finding Your Cybersecurity Career Path – University of Washington via edX – 4 weeks long – self paced – In this course, you will focus on the pathways to cybersecurity career success. You will determine your own incoming skills, talent, and deep interests to apply toward a meaningful and informed exploration of 32 Digital Pathways of Cybersecurity. https://www.edx.org/course/finding-your-cybersecurity-career-path
Building a Cybersecurity Toolkit – University of Washington via edX – 4 weeks self-paced The purpose of this course is to give learners insight into these type of characteristics and skills needed for cybersecurity jobs and to provide a realistic outlook on what they really need to add to their “toolkits” – a set of skills that is constantly evolving, not all technical, but fundamentally rooted in problem-solving. https://www.edx.org/course/building-a-cybersecurity-toolkit
Cybersecurity: The CISO’s View – University of Washington via edX – 4 weeks long self-paced – This course delves into the role that the CISO plays in cybersecurity operations. Throughout the lessons, learners will explore answers to the following questions: How does cybersecurity work across industries? What is the professionals’ point of view? How do we keep information secure https://www.edx.org/course/cybersecurity-the-cisos-view
Introduction to Cybersecurity – University of Washington via edX – In this course, you will gain an overview of the cybersecurity landscape as well as national (USA) and international perspectives on the field. We will cover the legal environment that impacts cybersecurity as well as predominant threat actors. – https://www.edx.org/course/introduction-to-cybersecurity
Cyber Attack Countermeasures New York University (NYU) via Coursera – This course introduces the basics of cyber defense starting with foundational models such as Bell-LaPadula and information flow frameworks. These underlying policy enforcements mechanisms help introduce basic functional protections, starting with authentication methods. Learners will be introduced to a series of different authentication solutions and protocols, including RSA SecureID and Kerberos, in the context of a canonical schema. – https://www.coursera.org/learn/cyber-attack-countermeasures
Introduction to Cyber Attacks New York University (NYU) via Coursera – This course provides learners with a baseline understanding of common cyber security threats, vulnerabilities, and risks. An overview of how basic cyber attacks are constructed and applied to real systems is also included. Examples include simple Unix kernel hacks, Internet worms, and Trojan horses in software utilities. Network attacks such as distributed denial of service (DDoS) and botnet attacks are also described and illustrated using real examples from the past couple of decades. https://www.coursera.org/learn/intro-cyber-attacks
Enterprise and Infrastructure Security New York University (NYU) via Coursera – This course introduces a series of advanced and current topics in cyber security, many of which are especially relevant in modern enterprise and infrastructure settings. The basics of enterprise compliance frameworks are provided with introduction to NIST and PCI. Hybrid cloud architectures are shown to provide an opportunity to fix many of the security weaknesses in modern perimeter local area networks. https://www.coursera.org/learn/enterprise-infrastructure-security
Network Security Georgia Institute of Technology via Udacity – This course provides an introduction to computer and network security. Students successfully completing this class will be able to evaluate works in academic and commercial security, and will have rudimentary skills in security research. The course begins with a tutorial of the basic elements of cryptography, cryptanalysis, and systems security, and continues by covering a number of seminal papers and monographs in a wide range of security areas. – https://www.udacity.com/course/network-security–ud199
Real-Time Cyber Threat Detection and Mitigation – New York University (NYU) via Coursera This course introduces real-time cyber security techniques and methods in the context of the TCP/IP protocol suites. Explanation of some basic TCP/IP security hacks is used to introduce the need for network security solutions such as stateless and stateful firewalls. Learners will be introduced to the techniques used to design and configure firewall solutions such as packet filters and proxies to protect enterprise assets. https://www.coursera.org/learn/real-time-cyber-threat-detection
A browser or server attempts to connect to a website (i.e. a web server) secured with SSL. The browser/server requests that the web server identify itself.
The web server sends the browser/server a copy of its SSL certificate.
The browser/server checks to see whether or not it trusts the SSL certificate. If so, it sends a message to the web server.
The web server sends back a digitally signed acknowledgement to start an SSL encrypted session.
Encrypted data is shared between the browser/server and the web server.
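A minimal Python sketch of a client performing this handshake with the standard library's ssl module (example.com is a placeholder host):

```python
import socket
import ssl

hostname = "example.com"
context = ssl.create_default_context()  # verifies the server certificate

# wrap_socket performs the handshake described above: the server presents
# its certificate, the client validates it, and an encrypted session starts.
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())                 # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])  # the server's certificate identity
```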
There are many benefits to using SSL certificates. Namely, SSL customers can:
Utilize HTTPS, which earns a stronger Google ranking
Create safer experiences for your customers
Build customer trust and improve conversions
Protect both customer and internal data
Encrypt browser-to-server and server-to-server communication
Authentication — The process of checking if a user is allowed to gain access to a system. eg. Login forms with username and password.
Authorization — Checking if the authenticated user has access to perform an action. eg. user, admin, super admin roles.
Audit — Conduct a complete inspection of an organization’s network to find vulnerable endpoints or malicious software.
Access Control List — A list that contains users and their level of access to a system.
Aircrack-ng — Wifi penetration testing software suite. Contains sniffing, password cracking, and general wireless attacking tools.
Backdoor — A piece of code that lets hackers get into the system easily after it has been compromised.
Burp Suite — Web application security software, helps test web apps for vulnerabilities. Used in bug bounty hunting.
Banner Grabbing — Capturing basic information about a server like the type of web server software (eg. apache) and services running on it.
Botnet — A network of computers controlled by a hacker to perform attacks such as Distributed Denial of Service.
Brute-Force Attack — An attack where the hacker tries different login combinations to gain access. eg. trying to crack a 9-digit numeric password by trying all the numbers from 000000000 to 999999999
Buffer Overflow — When a program tries to store more information than it is allowed to, it overflows into other buffers (memory partitions) corrupting existing data.
Cache — Storing the response to a particular operation in temporary high-speed storage to serve other incoming requests better. eg. you can store a database response in a cache until it is updated, to avoid calling the database again for the same query.
Cipher — Cryptographic algorithm for encrypting and decrypting data.
Code Injection — Injecting malicious code into a system by exploiting a bug or vulnerability.
Cross-Site Scripting — Executing a script on the client-side through a legitimate website. This can be prevented if the website sanitizes user input.
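A minimal Python sketch of the standard mitigation, escaping user input before rendering it (the payload below is illustrative):

```python
import html

user_comment = '<script>alert(document.cookie)</script>'

# Escaping converts markup characters to entities, so the browser renders
# the payload as harmless text instead of executing it as a script.
safe_comment = html.escape(user_comment)
print(safe_comment)  # &lt;script&gt;alert(document.cookie)&lt;/script&gt;
```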
Compliance — A set of rules defined by the government or other authorities on how to protect your customer’s data. Common ones include HIPAA, PCI-DSS, and FISMA.
Dictionary Attack — Attacking a system with a pre-defined list of usernames and passwords. eg. admin/admin is a common username/password combination used by amateur sysadmins.
Dumpster Diving — Looking into a company’s trash cans for useful information.
Denial of Service & Distributed Denial of Service — Exhausting a server’s resources by sending too many requests is Denial of Service. If a botnet is used to do the same, it’s called Distributed Denial of Service.
DevSecOps — Combination of development, security, and operations that treats security as a key ingredient from the initial system design.
Directory Traversal — Vulnerability that lets attackers list all the files and folders within a server. This can include system configuration and password files.
Domain Name System (DNS) — Helps convert domain names into server IP addresses. eg. Google.com -> 216.58.200.142
DNS Spoofing — Tricking a system’s DNS to point to a malicious server. eg. when you enter ‘facebook.com’, you might be redirected to the attacker’s website that looks like Facebook.
Encryption — Encoding a message with a key so that only the parties with the key can read the message.
Exploit — A piece of code that takes advantage of a vulnerability in the target system. eg. a buffer overflow exploit can get you root access to a system.
Enumeration — Mapping out all the components of a network by gaining access to a single system.
Footprinting — Gathering information about a target using active methods such as scanning and enumeration.
Flooding — Sending too many packets of data to a target system to exhaust its resources and cause a Denial of Service or similar attacks.
Firewall — A software or hardware filter that can be configured to prevent common types of attacks.
Fork Bomb — Forking a process indefinitely to exhaust system resources. Related to a Denial of Service attack.
Fuzzing — Sending automated random input to a software program to test its exception handling capacity.
Hardening — Securing a system against attacks, eg. by closing unused ports. Usually done using scripts for servers.
Hash Function — Mapping a piece of data into a fixed value string. Hashes are used to confirm data integrity.
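A quick sketch of hashing for integrity checking, using Python's standard `hashlib`:

```python
import hashlib

# Hashing maps data of any size to a fixed-length value; comparing
# digests is a common way to confirm integrity.
message = b"transfer $100 to account 42"
digest = hashlib.sha256(message).hexdigest()
print(digest)  # 64 hex characters, always the same for the same input

# Any tampering produces a completely different digest.
tampered = hashlib.sha256(b"transfer $900 to account 42").hexdigest()
print(digest == tampered)  # False
```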
Honey Pot — An intentionally vulnerable system used to lure attackers. This is then used to understand the attacker’s strategies.
HIPAA — The Health Insurance Portability and Accountability Act. If you are working with healthcare data, you need to make sure you are HIPAA compliant. This is to protect the customer’s privacy.
Input Validation — Checking user inputs before sending them to the database. eg. sanitizing form input to prevent SQL injection attacks.
Integrity — Making sure the data that was sent from the server is the same that was received by the client. This ensures there was no tampering; integrity is usually verified using hashing and encryption.
Intrusion Detection System — Software similar to a firewall but with advanced features. Helps in defending against Nmap scans, DDoS attacks, etc.
IP Spoofing — Changing the source IP address of a packet to fool the target into thinking a request is coming from a legitimate server.
John The Ripper — Brilliant password cracking tool, runs on all major platforms.
Kerberos — Default authentication protocol used by Microsoft Windows; uses strong symmetric-key cryptography.
KeyLogger — A software program that captures all keystrokes that a user performs on the system.
Logic Bombs — A piece of code (usually malicious) that runs when a condition is satisfied.
Lightweight Directory Access Protocol (LDAP) — Lightweight client-server directory protocol, commonly used on Windows networks as a central place for authentication. Stores usernames and passwords to validate users on a network.
Malware — Short for “Malicious Software”. Everything from viruses to backdoors is malware.
MAC Address — Unique address assigned to a Network Interface Card and is used as an identifier for local area networks. Easy to spoof.
Multi-factor Authentication — Using more than one method of authentication to access a service. eg. username/password with mobile OTP to access a bank account (two-factor authentication)
MD5 — Widely used hashing algorithm. Once a favorite, it has many vulnerabilities.
Meterpreter — An advanced Metasploit payload that lives in memory and is hard to trace.
Null-Byte Injection — An older exploit that appends null bytes (i.e. %00, or 0x00 in hexadecimal) to URLs. This makes web servers return random/unwanted data, which might be useful for the attacker. Easily prevented by doing sanity checks.
Network Interface Card(NIC) — Hardware that helps a device connect to a network.
Network Address Translation — Utility that translates your local IP address into a global IP address. eg. your local IP might be 192.168.1.4 but to access the internet, you need a global IP address (from your router).
Nmap — Popular network scanning tool that gives information about systems, open ports, services, and operating system versions.
Netcat — Simple but powerful tool that can view and record data on TCP or UDP network connections. Since it is not actively maintained, Ncat is preferred.
Nikto — A popular web application scanner, helps to find over 6700 vulnerabilities including server configurations and installed web server software.
Nessus — Commercial vulnerability scanner; provides a detailed list of vulnerabilities based on scan results.
Packet — Data is sent and received by systems via packets. Contains information like source IP, destination IP, protocol, and other information.
Password Cracking — Cracking an encrypted password using tools like John the Ripper when you don’t have access to the key.
Password Sniffing — Performing man-in-the-middle attacks using tools like Wireshark to find password hashes.
Patch — A software update released by a vendor to fix a bug or vulnerability in a software system.
Phishing — Building fake web sites that look remarkably similar to legitimate websites (like Facebook) to capture sensitive information.
Ping Sweep — A technique that tries to ping a system to see if it is alive on the network.
Public Key Cryptography — Encryption mechanism that uses a pair of keys, one private and one public. The sender encrypts a message using your public key, which you can then decrypt using your private key.
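A minimal sketch of this encrypt-with-public/decrypt-with-private flow, assuming the third-party `cryptography` package is installed (`pip install cryptography`):

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone holding the public key can encrypt ...
ciphertext = public_key.encrypt(b"meet at noon", oaep)
# ... but only the holder of the private key can decrypt.
plaintext = private_key.decrypt(ciphertext, oaep)
print(plaintext)  # b'meet at noon'
```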
Public Key Infrastructure — A public key infrastructure (PKI) is a system to create, store, and distribute digital certificates. This helps sysadmins verify that a particular public key belongs to a certain authorized entity.
Personally Identifiable Information (PII) — Any information that identifies a user. eg. address, phone number, etc.
Payload — A piece of code (usually malicious) that performs a specific function. eg. Keylogger.
PCI-DSS — Payment Card Industry Data Security Standard. If you are working with customer credit cards, you should be PCI-DSS compliant.
Ransomware — Malware that locks your system using encryption and asks you to pay a price to get the key to unlock it.
Rainbow Table — Precomputed password hashes that help crack the target’s password hashes quickly.
Reconnaissance — Finding data about the target using methods such as google search, social media, and other publicly available information.
Reverse Engineering — Rebuilding a piece of software based on its functions.
Role-Based Access — Providing a set of authorizations for a role other than a user. eg. “Managers” role will have a set of permissions while the “developers” role will have a different set of permissions.
Rootkit — A rootkit is a malware that provides unauthorized users admin privileges. Rootkits include keyloggers, password sniffers, etc.
Scanning — Sending packets to a system and gaining information about the target system from the packets received. This typically involves the TCP three-way handshake.
Secure Shell (SSH) — Protocol that establishes an encrypted communication channel between a client and a server. You can use ssh to login to remote servers and perform system administration.
Session — A session is a duration in which a communication channel is open between a client and a server. eg. the time between logging into a website and logging out is a session.
Session Hijacking — Taking over someone else’s session by pretending to be the client. This is achieved by stealing cookies and session tokens. eg. after you authenticate with your bank, an attacker can steal your session to perform financial transactions on your behalf.
Social Engineering — The art of tricking people into making them do something that is not in their best interest. eg. convincing someone to provide their password over the phone.
Secure Hashing Algorithm (SHA) — Widely used family of hashing algorithms. SHA-256 is considered highly secure compared to earlier versions like SHA-1. It is also one-way, unlike an encryption algorithm that you can decrypt: once you hash a message, you can only compare it with another hash, you cannot reverse it to recover the original message.
Sniffing — Performing man-in-the-middle attacks on networks. Includes wired and wireless networks.
Spam — Unwanted digital communication, including email, social media messages, etc. Usually tries to get you into a malicious website.
Syslog — System logging protocol, used by system administrators to capture all activity on a server. Usually stored on a separate server to retain logs in the event of an attack.
Secure Sockets Layer (SSL) — Establishes an encrypted tunnel between the client and server. eg. when you submit passwords on Facebook, only the encrypted text will be visible for sniffers and not your original password.
Snort — Lightweight open-source Intrusion Detection System for Windows and Linux.
SQL Injection — A type of attack that can be performed on web applications using SQL databases. Happens when the site does not validate user input.
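As a quick illustration, here is a minimal sketch of why parameterized queries prevent injection; Python's built-in `sqlite3` stands in for the web application's database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# VULNERABLE: concatenating input into the SQL string lets the payload
# rewrite the query logic and dump every row:
# conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

# SAFE: a parameterized query treats the input as data, not SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] — the payload matches no user instead of dumping the table
```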
Trojan — A malware hidden within useful software. eg. a pirated version of MS office can contain trojans that will execute when you install and run the software.
Traceroute — Tool that maps the route a packet takes between the source and destination.
Tunnel — Creating a private encrypted channel between two or more computers. Only allowed devices on the network can communicate through this tunnel.
Virtual Private Network — A subnetwork created within a network, mainly to encrypt traffic. eg. connecting to a VPN to access a blocked third-party site.
Virus — A piece of code that is created to perform a specific action on the target systems. A virus has to be triggered to execute eg. autoplaying a USB drive.
Vulnerability — A weakness caused by a bug or poor system design. eg. lack of input validation lets attackers perform SQL injection attacks on a website.
War Driving — Travelling through a neighborhood looking for unprotected wifi networks to attack.
WHOIS — Helps to find information about IP addresses, their owners, DNS records, etc.
Wireshark — Open source program to analyze network traffic and filter requests and responses for network debugging.
Worm — A malware program capable of replicating itself and spreading to other connected systems. eg. a worm used to build a botnet. Unlike viruses, worms don’t need a trigger.
Wireless Application Protocol (WAP) — Protocol that helps mobile devices connect to the internet.
Web Application Firewall (WAF) — Firewall for web applications that helps defend against attacks such as cross-site scripting and Denial of Service.
Zero-Day — A newly discovered vulnerability in a system for which there is no patch yet. Zero-day vulnerabilities are the most dangerous type, since no vendor fix yet exists to protect against them.
Zombie — A compromised computer, controlled by an attacker. A group of zombies is called a Botnet.
Increased distributed working: With organizations embracing work from home, incremental risks have emerged from the surge in Bring Your Own Device (BYOD), Virtual Private Network (VPN), Software as a Service (SaaS), O365, and Shadow IT, all of which can be exploited by various Man-in-the-Middle (MITM) attack vectors.
Reimagine Business Models: Envisioning new business opportunities, modes of working, and renewed investment priorities. With reduced workforce capacity, compounded by skill shortages, staff focused on business-as-usual tasks can be victimized via social engineering.
Digital Transformation and new digital infrastructure: As organizations across the industrial and supply chain sectors change how they operate, security is often deprioritized. Hardening industrial systems and cloud-based infrastructure is crucial, as attackers exploit vulnerabilities in unpatched systems.
With an extreme volume of digital communication, security awareness drops and susceptibility rises. Malicious actors use phishing techniques to exploit such situations.
Re-evaluate your approach to cyber
Which cyber scenarios does your organization appear to be preparing for, or is already prepared for?
Is there a security scenario that your organization is currently ignoring – but shouldn’t be?
What would your organization need to do differently in order to win in each of the identified cyber scenarios?
What capabilities, cyber security partnerships, and workforce strategies do you need to strengthen?
To tackle the outcomes of the above scenarios, the following measures are key:
Inoculation through education: Educate and / or remind your employees about –
Your organization’s defense – remote work cyber security policies and best practices
Potential threats to your organization and how they attack – with a specific focus on social engineering scams and identifying COVID-19 phishing campaigns
Assisting remote employees with enabling MFA across the organization assets
Adjust your defenses: Gather cyber threat intelligence and execute a patching sprint:
Set intelligence collection priorities
Share threat intelligence with other organizations
Use intelligence to move at the speed of the threat
Focus on known tactics, such as phishing and C-suite fraud.
Prioritize unpatched critical systems and common vulnerabilities.
Enterprise recovery: If the worst happens and an attack is successful, follow a staged approach to recovering critical business operations which may include tactical items such as:
Protect key systems through isolation
Fully understand and contain the incident
Eradicate any malware
Implement appropriate protection measures to improve overall system posture
Identify and prioritize the recovery of key business processes to deliver operations
Implement a prioritized recovery plan
Cyber Preparedness and Response: It is critical to optimize detection capability, so re-evaluating the detection strategy against the changing threat landscape is crucial. Some key trends include:
Secure and monitor your cloud environments and remote working applications
Increase monitoring to identify threats from shadow IT
Analyze behavior patterns to improve detection content
Finding the right cyber security partner: To be ready to respond identify the right partner with experience and skillset in Social Engineering, Cyber Response, Cloud Security, and Data Security.
Critical actions to address
At this point, as the organizations are setting the direction towards the social enterprise, it is an unprecedented opportunity to lead with cyber discussions and initiatives. Organizations should immediately gain an understanding of newly introduced risks and relevant controls by:
Getting a seat at the table
Understanding the risk prioritization:
Remote workforce/technology performance
Operational and financial implications
Emerging insider and external threats
Business continuity capabilities
Assessing cyber governance and security awareness in the new operating environment
Assessing the highest areas of risk and recommending practical mitigation strategies that minimize the impact on constrained resources.
Keeping leadership and the Board apprised of ever-changing risk profile
Given the complexity of the pandemic and associated cyber challenges, there is reason to believe that the recovery phase post-COVID-19 will require unprecedented levels of cyber orchestration, communication, and changing of existing configurations across the organization.
CyberSecurity: Protect Yourself on Internet
Use two-factor authentication when possible. If not possible, use strong unique passwords that are difficult to guess or crack. This means avoiding passwords that use common words, your birthdate, your SSN, or names and birthdays of close associates.
Make sure the devices you are using are up-to-date and have some form of reputable anti-virus/malware software installed.
Never open emails, attachments, programs unless they are from a trusted source (i.e., a source that can be verified). Also disregard email or web requests that ask you to share your personal or account information unless you are sure the request and requestor are legitimate.
Try to only use websites that are encrypted. To do this, look for the security lock symbol before the website address and/or the extra “s” after “http” in the URL address bar.
Avoid using an administrator level account when using the internet.
Only enable cookies when absolutely required by a website.
Make social media accounts private or don’t use social media at all.
Consider using VPNs and encrypting any folders/data that contains sensitive data.
Stay away from using unprotected public Wi-Fi networks.
Social media is genetically engineered in Area 51 to harvest as much data from you as possible. Far beyond just having your name and age and photograph.
Never use the same username twice anywhere, or the same password twice anywhere.
Use Tor/Tor Browser whenever possible. It’s not perfect, but it is a decent default attempt at anonymity.
Use a VPN. Using VPN and Tor can be even better.
Search engines like DuckDuckGo offer better privacy (assuming they’re honest, which you can never be certain of) than Google which, like social media, works extremely hard to harvest every bit of data from you that they can.
Never give your real details anywhere. Certainly not things like your name or pictures of yourself, but even less obvious things like your age or country of origin. Even things like how you spell words and grammatical quirks can reveal where you’re from.
Erase your comments from websites after a few days/weeks. It might not erase them from the website’s servers, but it will at least remove them from public view. If you don’t, you can forget they exist and you never know how or when they can and will be used against you.
With Reddit, you can create an account fairly easily over Tor using no real information. Also, regularly nuke your accounts in case Reddit or some crazy stalker is monitoring your posts to build a profile of who you might be. Source: Reddit
Notable Hackers
Adrian Lamo – gained media attention for breaking into several high-profile computer networks, including those of The New York Times, Yahoo!, and Microsoft, culminating in his 2003 arrest. Lamo was best known for reporting U.S. soldier Chelsea Manning to Army criminal investigators in 2010 for leaking hundreds of thousands of sensitive U.S. government documents to WikiLeaks.
Albert Gonzales – an American computer hacker and computer criminal who is accused of masterminding the combined credit card theft and subsequent reselling of more than 170 million card and ATM numbers from 2005 to 2007: the biggest such fraud in history.
Andrew Auernheimer (known as Weev) – Went to jail for using math against AT&T’s website.
Barnaby Jack – was a New Zealand hacker, programmer and computer security expert. He was known for his presentation at the Black Hat computer security conference in 2010, during which he exploited two ATMs and made them dispense fake paper currency on the stage. Among his other most notable works were the exploitation of various medical devices, including pacemakers and insulin pumps.
Gary McKinnon – a Scottish systems administrator and hacker who was accused in 2002 of perpetrating the “biggest military computer hack of all time,” although McKinnon himself states that he was merely looking for evidence of free energy suppression and a cover-up of UFO activity and other technologies potentially useful to the public. 👽🛸
George Hotz aka geohot – “The former Facebook engineer took on the giants of the tech world by developing the first iPhone carrier-unlock techniques,” says Mark Greenwood, head of data science at Netacea, “followed a few years later by reverse engineering Sony’s PlayStation 3, clearing the way for users to run their own code on locked-down hardware. George sparked an interest in a younger generation frustrated with hardware and software restrictions being imposed on them and led to a new scene of opening up devices, ultimately leading to better security and more openness.”
Guccifer 2.0 – a persona which claimed to be the hacker(s) that hacked into the Democratic National Committee (DNC) computer network and then leaked its documents to the media, the website WikiLeaks, and a conference event.
Hector Monsegur (known as Sabu) – an American computer hacker and co-founder of the hacking group LulzSec. Monsegur became an informant for the FBI, working with the agency for over ten months to aid them in identifying the other hackers from LulzSec and related groups.
Jacob Appelbaum – an American independent journalist, computer security researcher, artist, and hacker. He has been employed by the University of Washington, and was a core member of the Tor project, a free software network designed to provide online anonymity.
James Forshaw – one of the world’s foremost bug bounty hunters
Jeanson James Ancheta – On May 9, 2006, Jeanson James Ancheta (born 1985) became the first person to be charged for controlling large numbers of hijacked computers or botnets.
Jeremy Hammond – He was convicted of computer fraud in 2013 for hacking the private intelligence firm Stratfor and releasing data to the whistle-blowing website WikiLeaks, and sentenced to 10 years in prison.
John Draper – also known as Captain Crunch, Crunch or Crunchman (after the Cap’n Crunch breakfast cereal mascot), is an American computer programmer and former legendary phone phreak.
Kimberley Vanvaeck (known as Gigabyte) – a virus writer from Belgium known for a long-standing dispute which involved the internet security firm Sophos and one of its employees, Graham Cluley. Vanvaeck wrote several viruses, including Quis, Coconut and YahaSux (also called Sahay). She also created a Sharp virus (also called “Sharpei”), credited as being the first virus to be written in C#.
Lauri Love – a British activist charged with stealing data from United States Government computers including the United States Army, Missile Defense Agency, and NASA via computer intrusion.
Michael Calce (known as MafiaBoy) – a security expert from Île Bizard, Quebec who launched a series of highly publicized denial-of-service attacks in February 2000 against large commercial websites, including Yahoo!, Fifa.com, Amazon.com, Dell, Inc., E*TRADE, eBay, and CNN.
Mudge – Peiter C. Zatko, better known as Mudge, is a network security expert, open source programmer, writer, and a hacker. He was the most prominent member of the high-profile hacker think tank the L0pht as well as the long-lived computer and culture hacking cooperative the Cult of the Dead Cow.
PRAGMA – Also known as Impragma or PHOENiX, PRAGMA is the author of Snipr, one of the most prolific credential stuffing tools available online.
The 414s – The 414s were a group of computer hackers who broke into dozens of high-profile computer systems, including ones at Los Alamos National Laboratory, Sloan-Kettering Cancer Center, and Security Pacific Bank, in 1982 and 1983.
The Shadow Brokers – is a hacker group who first appeared in the summer of 2016. They published several leaks containing hacking tools from the National Security Agency (NSA), including several zero-day exploits. Specifically, these exploits and vulnerabilities targeted enterprise firewalls, antivirus software, and Microsoft products. The Shadow Brokers originally attributed the leaks to the Equation Group threat actor, which has been tied to the NSA’s Tailored Access Operations unit.
The Strange History of Ransomware
The first ransomware virus predates e-mail, even the Internet as we know it, and was distributed on floppy disk by the postal service. It sounds quaint, but in some ways this horse-and-buggy version was even more insidious than its modern descendants. Contemporary ransomware tends to bait victims using legitimate-looking email attachments — a fake invoice from UPS, or a receipt from Delta airlines. But the 20,000 disks dispatched to 90 countries in December of 1989 were masquerading as something far more evil: AIDS education software.
How to protect sensitive data for its entire lifecycle in AWS
You can protect data in-transit over individual communications channels using transport layer security (TLS), and at-rest in individual storage silos using volume encryption, object encryption or database table encryption. However, if you have sensitive workloads, you might need additional protection that can follow the data as it moves through the application stack. Fine-grained data protection techniques such as field-level encryption allow for the protection of sensitive data fields in larger application payloads while leaving non-sensitive fields in plaintext. This approach lets an application perform business functions on non-sensitive fields without the overhead of encryption, and allows fine-grained control over what fields can be accessed by what parts of the application. Read more here…
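A minimal sketch of the field-level idea, assuming the third-party `cryptography` package; in a real AWS stack the data key would typically come from AWS KMS rather than being generated locally:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # stand-in for a KMS-managed data key
fernet = Fernet(key)

order = {
    "order_id": "12345",               # non-sensitive, stays in plaintext
    "item": "book",                    # non-sensitive, stays queryable
    "card_number": "4111111111111111", # sensitive field
}

# Encrypt only the sensitive field; the rest of the payload is untouched.
order["card_number"] = fernet.encrypt(order["card_number"].encode()).decode()
print(order)

# Only components holding the key can recover the original value.
card = fernet.decrypt(order["card_number"].encode()).decode()
```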
I Passed AWS Security Specialty SCS-C01 Testimonials
Passing the SCS-C01 AWS Certified Security Specialty exam
I’ve been studying for both the DevOps DOP-C01 and Security Specialty SCS-C01 tests but opted to just focus on SCS-C01 since the DevOps exam seems like a tough one to pass. I’m planning to take the DevOps one next, but I read that a new DOP-C02 version just came out, so I might postpone it for a couple of months.
This AWS Certified Security Specialty exam is easier than the SAA exam since the main focus is all about security. The official Exam Guide has been my ultimate guide in knowing the particular AWS services to focus on for the test. Once I got 90% on all my practice test attempts from TD, I went ahead and booked my exam.
Here’s a compilation of all the helpful SCS-C01 posts that helped me:
The Exam Readiness: AWS Certified Security Specialty course provides a good summary of all the relevant topics that are about to be asked in the exam. Prepare to see topics in Key Management Infrastructure, IPS/IDS, network security, EKS/ECS container security and many more.
As the title says, Has anyone played BAE Systems legend of the cyber king / hacker quest games? I don’t get them! submitted by /u/DamageInteresting873 [link] [comments]
As I understand it, a good and effective process for defenders is to find the threat actors (TAs) that commonly attack your industry and work to setup defenses against their TTPs. Do TAs really not shift after finding that their TTPs have been categorized and published? Is there a long lead time for a TA to develop a new way to attack? submitted by /u/LethargicEscapist [link] [comments]
Hi everyone, I’m looking to set up security monitoring for my company and would really appreciate advice from the community. One challenge I’m facing is identifying what use cases are worth monitoring. From what I understand, there are some common baseline use cases that are applicable to most companies, but there are also more specific ones that depend on the unique needs and risks of a particular organization. Here are some questions I’m grappling with: What are some essential use cases that every organization should monitor (e.g., unusual login activity, data exfiltration attempts, etc.)? How do you identify and prioritize custom use cases that align with your company’s unique risks or business processes? Are there any tools or frameworks that can help streamline this process? Additionally, I’d love recommendations on books, blogs, or resources about security monitoring and the blue team discipline in general. If anyone here has personal experience or lessons learned from setting up security monitoring, I’d love to hear your story! From my side, I already parsed this book: Blue Team Handbook SOC, SIEM, and Threat Hunting, 2019. There is a lot of interesting information but it seems to be a legacy for the current time. Thanks in advance for your help and advice! submitted by /u/athanielx [link] [comments]
Hello Good People, I have 20 years of windows server experience and would consider myself knowledgeable in AD, DNS, ADFS, GPOs and PKI to name some key tech areas. I would also consider myself to be network savvy and familiar with it security having worked on hardening servers to CIS standards and working with pen testers. So I would like to go into this field and would like some suggestions on what certs I should study if necessary or what SIEM tool to look at or maybe some framework? For tooling it seems splunk, huntsman and log rhythm are popular where i have been looking. Thanks submitted by /u/AdIndependent6883 [link] [comments]
Hi, I’m writing my thesis about threat detection with Elastic SIEM and want to check how I can build the SIEM to detect threats from different MITRE phases. I want to set up rules to detect threats and mitigate compromises. Therefore I want to plant malware on VMs I set up to analyze the recognition. I would like to use common malware because of its relevance today. Which malware/threats would you recommend for me to use? I thought about Mimikatz and correlation detection across several critical Windows event IDs. Edit: to specify – I’m looking for relevant malware that’s common in the business world and not too hard to protect against. Malware that also is not too dangerous, since there is always a risk working with stuff like that. Thanks in advance. submitted by /u/sw4gyJ0hnson [link] [comments]
Anyone else getting insane numbers of Threatlocker advertisements in their feed? I haven’t counted but it feels like every second ad is for TL. I assume being in this sub is part of the demographic targeting, and I’m in Australia FWIW. submitted by /u/CeleryMan20 [link] [comments]
I run a small, independent cybersecurity consulting firm in the EU and am exploring the idea of forming a small alliance (about 4-6 firms) with like-minded founders across Europe. No legal entity, no complex contracts—just a trusted circle committed to raising collective credibility and visibility. What’s in it for us?: - Present a pan-EU alliance badge on each member’s website, signaling vetted expertise and a broader reach to clients. - Co-author thought leadership pieces (e.g., whitepapers on NIS2 or compliance best practices) to position ourselves as a knowledgeable, reliable EU network. - Exchange market insights, share regulatory knowledge, and strengthen each firm’s value proposition—without selling products or merging brands. The aim is to remain fully independent but gain the benefit of “looking bigger” and more connected. Over time, we might host joint webinars, share event spots, or simply reinforce each other’s reputations. Interested?: If you’re a founder of a small EU-based cybersecurity consulting/integration firm (no product vendors), please DM me. Include a short intro, your country, company website, and any relevant details. Let’s see if we can form a meaningful, trust-driven alliance that enhances all our market positions. submitted by /u/No_Jellyfish3707 [link] [comments]
TLDR: After over 10 years working in the industry and leaving my failed startup, I'm worried about getting a job that will mean I'm throwing away years of experience across varied roles and putting myself in a pigeon hole I can't progress to leadership in or get out of. I'm worried I'll hate what's next. I've been in the industry over 10 years having started out as a pentest and later red team consultant. I transitioned to an R&D focused role building out SOC capabilities and incident management platforms, working closely with a large SOC team, while building tooling for a red team capability, and providing SME support for everything from incidents to red team engagements. I even worked in and around threat intel, solution engineering, and sales, it was a fantastic role. Then I left to start my own business and after completely burning out of both energy and cash, I'm looking for employment again. I'm extremely concerned that I haven't really found my "thing" yet as far as a typical career person goes, I guess I'm not one of those people but now I feel I need to be. I'm getting interviewed for roles from pentesting to security engineering management, but nothing appeals to me except the money. I'm also interviewing for a role as a high level software engineer in a company that effectively wants me to be the tech lead doing the very thing I've failed to do in my business. This one appeals to me the most albeit I've not had an offer so it could simply not be a real option. I'm completely exhausted mentally, and I'd love to hear what people think about my concerns. Basically, I worry that I won't have adequate career progression in some of these roles, or I won't like them at all, but end up stuck because my most recent experience will make it such that I can't transition into something senior enough in a role I prefer to match my expectations (salary and influence - I love autonomy). My skillset is diverse, and I guess I've gravitated towards this sort of builder of security solutions type role without thinking about it, but now I feel like I need to see a reasonably clear path for progression into more senior / leadership roles, as my adventure into entrepreneurship has failed. This one worries me the most, as it feels like I'm really venturing into software engineering in a specific security niche, I worry this pigeon hole will be incredibly hard to get out of if I find it's a mistake. I'd like to scratch some technical and leadership itches now, but I imagine the latter will appeal to me the older I get. What are the prospects here for me? I've also thought about advisory roles but they seem so GRC focused that I'd probably be considered under qualified. As a side note / tangent, for me the industry seems broken. So many vendors and consultancies have been consolidated through acquisitions, leaving just a few huge vendors as viable options to land a job that would appeal to me, and they're either not hiring or have extremely high expectations of candidate experience in very specific areas. Thus going internal seems like the only viable way of getting paid in a way that reflects my experience. I'm truly just shocked as to how limited the job market has become for senior roles. I know numerous industry veterans that are in a similar position of not knowing what's next, these are people with more experience than I in varied roles. 
The industry used to be one of opportunities, now it feels like you're left to choose between low grade vendors who can't pay decent salaries but will offer big job titles, or super vendors that only hire once a year and / or have no senior level opportunities, or ones that have incredibly high and unrealistic expectations. If you didn't get into the right role at the right company about 5 years ago, it feels like your options are significantly limited. submitted by /u/Regular_Lie906 [link] [comments]
In a historic decision, Romania's constitutional court has annulled the result of the first round of voting in the presidential election amid allegations of Russian interference.
As a result, the second round vote, which was scheduled for December 8, 2024, will no longer take place. Călin Georgescu, who won the first round, denounced the verdict as an "officialized coup" and an attack on
If you have systems that aren't able to do MFA (or even with it), this is a quick reminder to train your users not to make their password SeasonYear!! Like Spring2025! Might have seen something like this cause a compromise recently. submitted by /u/Bright-Ice-8802 [link] [comments]
🎤 Calling All Cybersecurity Enthusiasts! 🚀 Do you have insights, expertise, or groundbreaking research to share? Submit your talk or workshop proposal for BSidesSLC 2025 before the CFP closes on January 10th, 2025! 🎯 What We’re Looking For: From cutting-edge tech to real-world lessons, BSidesSLC is all about diverse perspectives and innovative ideas. First-time speakers and seasoned pros are all welcome! 🌟 Why Present at BSidesSLC? Share your knowledge with Utah’s vibrant cybersecurity community. Network with industry leaders and like-minded professionals. Build your reputation as a thought leader in InfoSec. 📅 Conference Date: April 10th & 11th, 2025 📍 Location: Salt Lake City, UT 💻 Submit your proposal here: sessionize.com/bsidesslc-2025 🌐 Learn more about the event: bsidesslc.org Tag your friends, share this post, and help us make BSidesSLC 2025 the best yet! #Cybersecurity #BSidesSLC #CFP #CallForPapers #InfoSec submitted by /u/PilotSmooth9439 [link] [comments]
Hey everyone, I'm working at a company that provides IT services like Microsoft 365, Active Directory, Veem Backup, Nakivo, and Rapid7 vulnerability scanning. We're also exploring Proxmox virtualization. We're interested in expanding our service offerings to include DevOps and DevSecOps. I'm wondering if anyone has experience integrating these practices with similar tools. Specifically, I'm curious about: Automation: How can we automate tasks using Proxmox and tools like Rapid7 or Veem/Nakivo? Security: What are the best practices for integrating DevSecOps into our existing security framework? Other Ideas: Are there any other technologies or approaches that could enhance our DevOps and DevSecOps capabilities? What are the most cutting-edge tools and technologies in this field? Our goal is to: Learn and Grow: Explore new technical areas and strengthen our skills. Stay Competitive: Keep up with industry trends and seize new opportunities. Any advice, suggestions, or shared experiences would be greatly appreciated! Relevant Tags: #devops #devsecops #proxmox #microsoft365 #activedirectory #veem #nakivo #rapid7 #automation #security submitted by /u/rached2023 [link] [comments]
Top 100 AWS Certified Data Analytics Specialty Certification Questions and Answers Dumps
If you’re looking to take your data analytics career to the next level, then this AWS Data Analytics Specialty Certification Exam Preparation blog is a must-read! With over 100 exam questions and answers, plus data science and data analytics interview questions, cheat sheets and more, you’ll be fully prepared to ace the DAS-C01 exam.
In this blog, we talk about big data and data analytics; we also give you the last updated top 100 AWS Certified Data Analytics – Specialty Questions and Answers Dumps
The AWS Certified Data Analytics – Specialty (DAS-C01) examination is intended for individuals who perform in a data analytics-focused role. This exam validates an examinee’s comprehensive understanding of using AWS services to design, build, secure, and maintain analytics solutions that provide insight from data.
Question 1: What combination of services do you need for the following requirements: accelerate petabyte-scale data transfers, load streaming data, and the ability to create scalable, private connections. Select the correct answer order.
A) Snowball, Kinesis Firehose, Direct Connect
B) Data Migration Services, Kinesis Firehose, Direct Connect
ANSWER1:
A
Notes/Hint1:
AWS has many options to help get data into the cloud, including secure devices like AWS Import/Export Snowball to accelerate petabyte-scale data transfers, Amazon Kinesis Firehose to load streaming data, and scalable private connections through AWS Direct Connect.
AWS Data Analytics Specialty Certification Exam Preparation App is a great way to prepare for your upcoming AWS Data Analytics Specialty Certification Exam. The app provides you with over 300 questions and answers, detailed explanations of each answer, a scorecard to track your progress, and a countdown timer to help keep you on track. You can also find data science and data analytics interview questions and detailed answers, cheat sheets, and flashcards to help you study. The app is very similar to the real exam, so you will be well-prepared when it comes time to take the test.
Question 3: There is a five-day car rally race across Europe. The race coordinators are using a Kinesis stream and IoT sensors to monitor the movement of the cars. Each car has a sensor and data is getting back to the stream with the default stream settings. On the last day of the rally, data is sent to S3. When you go to interpret the data in S3, there is only data for the last day and nothing for the first 4 days. Which of the following is the most probable cause of this?
A) You did not have versioning enabled and would need to create individual buckets to prevent the data from being overwritten.
B) Data records are only accessible for a default of 24 hours from the time they are added to a stream.
C) One of the sensors failed, so there was no data to record.
D) You needed to use EMR to send the data to S3; Kinesis Streams are only compatible with DynamoDB.
ANSWER3:
B
Notes/Hint3:
Kinesis streams support changes to the data record retention period of your stream. An Amazon Kinesis stream is an ordered sequence of data records, meant to be written to and read from in real time. Data records are therefore stored in shards in your stream temporarily. The period from when a record is added to when it is no longer accessible is called the retention period. An Amazon Kinesis stream stores records for 24 hours by default, up to 168 hours.
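For reference, here is a sketch of how the retention period could have been raised ahead of the rally, assuming boto3 with configured credentials and a placeholder stream name:

```python
import boto3

# Raise the stream's retention above the 24-hour default so all five days
# of sensor data remain readable (168 hours was the maximum at the time
# this question was written).
kinesis = boto3.client("kinesis")
kinesis.increase_stream_retention_period(
    StreamName="rally-stream",      # placeholder stream name
    RetentionPeriodHours=168,       # keep records for 7 days
)
```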
Question 4: A publisher website captures user activity and sends clickstream data to Amazon Kinesis Data Streams. The publisher wants to design a cost-effective solution to process the data to create a timeline of user activity within a session. The solution must be able to scale depending on the number of active sessions. Which solution meets these requirements?
A) Include a variable in the clickstream data from the publisher website to maintain a counter for the number of active user sessions. Use a timestamp for the partition key for the stream. Configure the consumer application to read the data from the stream and change the number of processor threads based upon the counter. Deploy the consumer application on Amazon EC2 instances in an EC2 Auto Scaling group.
B) Include a variable in the clickstream to maintain a counter for each user action during their session. Use the action type as the partition key for the stream. Use the Kinesis Client Library (KCL) in the consumer application to retrieve the data from the stream and perform the processing. Configure the consumer application to read the data from the stream and change the number of processor threads based upon the counter. Deploy the consumer application on AWS Lambda.
C) Include a session identifier in the clickstream data from the publisher website and use as the partition key for the stream. Use the Kinesis Client Library (KCL) in the consumer application to retrieve the data from the stream and perform the processing. Deploy the consumer application on Amazon EC2 instances in an EC2 Auto Scaling group. Use an AWS Lambda function to reshard the stream based upon Amazon CloudWatch alarms.
D) Include a variable in the clickstream data from the publisher website to maintain a counter for the number of active user sessions. Use a timestamp for the partition key for the stream. Configure the consumer application to read the data from the stream and change the number of processor threads based upon the counter. Deploy the consumer application on AWS Lambda.
ANSWER4:
C
Notes/Hint4:
Partitioning by the session ID will allow a single processor to process all the actions for a user session in order. An AWS Lambda function can call the UpdateShardCount API action to change the number of shards in the stream. The KCL will automatically manage the number of processors to match the number of shards. Amazon EC2 Auto Scaling will assure the correct number of instances are running to meet the processing load.
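A sketch of the resharding piece of option C, assuming boto3; the stream name and target shard count are illustrative:

```python
import boto3

kinesis = boto3.client("kinesis")

def handler(event, context):
    # A Lambda handler triggered by a CloudWatch alarm: scale the stream's
    # shard count to match the number of active sessions.
    kinesis.update_shard_count(
        StreamName="clickstream",        # placeholder stream name
        TargetShardCount=4,              # illustrative target
        ScalingType="UNIFORM_SCALING",   # the only supported scaling type
    )
```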
Question 5: Your company has two batch processing applications that consume financial data about the day’s stock transactions. Each transaction needs to be stored durably and guarantee that a record of each application is delivered so the audit and billing batch processing applications can process the data. However, the two applications run separately and several hours apart and need access to the same transaction information. After reviewing the transaction information for the day, the information no longer needs to be stored. What is the best way to architect this application?
A) Use SQS for storing the transaction messages; when the billing batch process performs first and consumes the message, write the code in a way that does not remove the message after consumed, so it is available for the audit application several hours later. The audit application can consume the SQS message and remove it from the queue when completed.
B) Use Kinesis to store the transaction information. The billing application will consume data from the stream and the audit application can consume the same data several hours later.
C) Store the transaction information in a DynamoDB table. The billing application can read the rows while the audit application will read the rows then remove the data.
D) Use SQS for storing the transaction messages. When the billing batch process consumes each message, have the application create an identical message and place it in a different SQS for the audit application to use several hours later.
ANSWER5:
B
Notes/Hint5:
Kinesis appears to be the best solution because it allows multiple consumers to read the same records independently. SQS would make this more difficult, since a message consumed by the billing application would normally be deleted from the queue before the audit application could read it, and the data does not need to persist after a full day.
Question 6: A company is currently using Amazon DynamoDB as the database for a user support application. The company is developing a new version of the application that will store a PDF file for each support case ranging in size from 1–10 MB. The file should be retrievable whenever the case is accessed in the application. How can the company store the file in the MOST cost-effective manner?
A) Store the file in Amazon DocumentDB and the document ID as an attribute in the DynamoDB table.
B) Store the file in Amazon S3 and the object key as an attribute in the DynamoDB table.
C) Split the file into smaller parts and store the parts as multiple items in a separate DynamoDB table.
D) Store the file as an attribute in the DynamoDB table using Base64 encoding.
ANSWER6:
B
Notes/Hint6:
Use Amazon S3 to store large attribute values that cannot fit in an Amazon DynamoDB item. Store each file as an object in Amazon S3 and then store the object path in the DynamoDB item.
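A sketch of this store-in-S3, point-from-DynamoDB pattern, assuming boto3 and placeholder bucket, table, and file names:

```python
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

case_id = "case-001"
object_key = f"support-cases/{case_id}.pdf"

# Store the large PDF as an S3 object ...
with open("case.pdf", "rb") as pdf:
    s3.put_object(Bucket="support-files", Key=object_key, Body=pdf)

# ... and keep only its object key on the DynamoDB item.
dynamodb.Table("SupportCases").put_item(
    Item={"case_id": case_id, "pdf_key": object_key}  # a pointer, not the file
)
```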
Question 7: Your client has a web app that emits multiple events to Amazon Kinesis Streams for reporting purposes. Critical events need to be immediately captured before processing can continue, but informational events do not need to delay processing. What solution should your client use to record these types of events without unnecessarily slowing the application?
A) Log all events using the Kinesis Producer Library.
B) Log critical events using the Kinesis Producer Library, and log informational events using the PutRecords API method.
C) Log critical events using the PutRecords API method, and log informational events using the Kinesis Producer Library.
D) Log all events using the PutRecords API method.
ANSWER7:
C
Notes/Hint7:
The PutRecords API can be used in code to be synchronous; it will wait for the API request to complete before the application continues. This means you can use it when you need to wait for the critical events to finish logging before continuing. The Kinesis Producer Library is asynchronous and can send many messages without needing to slow down your application. This makes the KPL ideal for the sending of many non-critical alerts asynchronously.
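A sketch of the synchronous PutRecords path for critical events, assuming boto3; the stream name and payload are placeholders:

```python
import boto3

kinesis = boto3.client("kinesis")

# PutRecords blocks until Kinesis acknowledges the write, so the
# application only continues once the critical event is durably logged.
response = kinesis.put_records(
    StreamName="app-events",  # placeholder stream name
    Records=[
        {
            "Data": b'{"type": "critical", "msg": "payment failed"}',
            "PartitionKey": "session-42",
        },
    ],
)
# Check for partial failures before moving on.
assert response["FailedRecordCount"] == 0
```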
Question 8: You work for a start-up that tracks commercial delivery trucks via GPS. You receive coordinates that are transmitted from each delivery truck once every 6 seconds. You need to process these coordinates in near real-time from multiple sources and load them into Elasticsearch without significant technical overhead to maintain. Which tool should you use to ingest the data?
A) Amazon SQS
B) Amazon EMR
C) AWS Data Pipeline
D) Amazon Kinesis Firehose
ANSWER8:
D
Notes/Hint8:
Amazon Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service, enabling near real-time analytics with existing business intelligence tools and dashboards.
Question 9: A company needs to implement a near-real-time fraud prevention feature for its ecommerce site. User and order details need to be delivered to an Amazon SageMaker endpoint to flag suspected fraud. The amount of input data needed for the inference could be as much as 1.5 MB. Which solution meets the requirements with the LOWEST overall latency?
A) Create an Amazon Managed Streaming for Kafka cluster and ingest the data for each order into a topic. Use a Kafka consumer running on Amazon EC2 instances to read these messages and invoke the Amazon SageMaker endpoint.
B) Create an Amazon Kinesis Data Streams stream and ingest the data for each order into the stream. Create an AWS Lambda function to read these messages and invoke the Amazon SageMaker endpoint.
C) Create an Amazon Kinesis Data Firehose delivery stream and ingest the data for each order into the stream. Configure Kinesis Data Firehose to deliver the data to an Amazon S3 bucket. Trigger an AWS Lambda function with an S3 event notification to read the data and invoke the Amazon SageMaker endpoint.
D) Create an Amazon SNS topic and publish the data for each order to the topic. Subscribe the Amazon SageMaker endpoint to the SNS topic.
ANSWER9:
A
Notes/Hint9:
An Amazon Managed Streaming for Kafka cluster can be used to deliver the messages with very low latency. It has a configurable message size that can handle the 1.5 MB payload.
Question 10: You need to filter and transform incoming messages coming from a smart sensor you have connected with AWS. Once messages are received, you need to store them as time series data in DynamoDB. Which AWS service can you use?
A) IoT Device Shadow Service
B) Redshift
C) Kinesis
D) IoT Rules Engine
ANSWER10:
D
Notes/Hint10:
The IoT rules engine will allow you to send sensor data over to AWS services like DynamoDB.
Question 11: A media company is migrating its on-premises legacy Hadoop cluster with its associated data processing scripts and workflow to an Amazon EMR environment running the latest Hadoop release. The developers want to reuse the Java code that was written for data processing jobs for the on-premises cluster. Which approach meets these requirements?
A) Deploy the existing Oracle Java Archive as a custom bootstrap action and run the job on the EMR cluster.
B) Compile the Java program for the desired Hadoop version and run it using a CUSTOM_JAR step on the EMR cluster.
C) Submit the Java program as an Apache Hive or Apache Spark step for the EMR cluster.
D) Use SSH to connect the master node of the EMR cluster and submit the Java program using the AWS CLI.
ANSWER11:
B
Notes/Hint11:
A CUSTOM_JAR step can be configured to download a JAR file from an Amazon S3 bucket and execute it. Since the Hadoop versions are different, the Java application has to be recompiled.
Question 12: You currently have databases running on-site and in another data center off-site. What service allows you to consolidate to one database in Amazon?
A) AWS Kinesis
B) AWS Database Migration Service
C) AWS Data Pipeline
D) AWS RDS Aurora
ANSWER12:
B
Notes/Hint12:
AWS Database Migration Service can migrate your data to and from most of the widely used commercial and open source databases. It supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora. Migrations can be from on-premises databases to Amazon RDS or Amazon EC2, databases running on EC2 to RDS, or vice versa, as well as from one RDS database to another RDS database.
Question 13: An online retail company wants to perform analytics on data in large Amazon S3 objects using Amazon EMR. An Apache Spark job repeatedly queries the same data to populate an analytics dashboard. The analytics team wants to minimize the time to load the data and create the dashboard. Which approaches could improve the performance? (Select TWO.)
A) Copy the source data into Amazon Redshift and rewrite the Apache Spark code to create analytical reports by querying Amazon Redshift.
B) Copy the source data from Amazon S3 into Hadoop Distributed File System (HDFS) using s3distcp.
C) Load the data into Spark DataFrames.
D) Stream the data into Amazon Kinesis and use the Kinesis Connector Library (KCL) in multiple Spark jobs to perform analytical jobs.
E) Use Amazon S3 Select to retrieve the data necessary for the dashboards from the S3 objects.
ANSWER13:
C and E
Notes/Hint13:
One of the speed advantages of Apache Spark comes from loading data into immutable dataframes, which can be accessed repeatedly in memory. Spark DataFrames organizes distributed data into columns. This makes summaries and aggregates much quicker to calculate. Also, instead of loading an entire large Amazon S3 object, load only what is needed using Amazon S3 Select. Keeping the data in Amazon S3 avoids loading the large dataset into HDFS.
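A sketch of option E, assuming boto3 and placeholder bucket/key names; S3 Select returns only the matching rows and columns as an event stream instead of the whole object:

```python
import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="retail-data",                 # placeholder bucket
    Key="orders/2023.csv",                # placeholder large CSV object
    ExpressionType="SQL",
    Expression="SELECT s.order_id, s.total FROM s3object s WHERE s.region = 'EU'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# Results stream back as events; only the selected data crosses the wire.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
```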
Question 14: You have been hired as a consultant to provide a solution to integrate a client’s on-premises data center to AWS. The customer requires a 300 Mbps dedicated, private connection to their VPC. Which AWS tool do you need?
A) VPC peering
B) Data Pipeline
C) Direct Connect
D) EMR
ANSWER14:
C
Notes/Hint14:
Direct Connect will provide a dedicated and private connection to an AWS VPC.
Question 15: Your organization has a variety of different services deployed on EC2 and needs to efficiently send application logs over to a central system for processing and analysis. They’ve determined it is best to use a managed AWS service to transfer their data from the EC2 instances into Amazon S3 and they’ve decided to use a solution that will do what?
A) Installs the AWS Direct Connect client on all EC2 instances and uses it to stream the data directly to S3.
B) Leverages the Kinesis Agent to send data to Kinesis Data Streams and output that data in S3.
C) Ingests the data directly from S3 by configuring regular Amazon Snowball transactions.
D) Leverages the Kinesis Agent to send data to Kinesis Firehose and output that data in S3.
ANSWER15:
D
Notes/Hint15:
Kinesis Firehose is a managed solution, and log files can be sent from EC2 to Firehose to S3 using the Kinesis agent.
Question 16: A data engineer needs to create a dashboard to display social media trends during the last hour of a large company event. The dashboard needs to display the associated metrics with a latency of less than 1 minute. Which solution meets these requirements?
A) Publish the raw social media data to an Amazon Kinesis Data Firehose delivery stream. Use Kinesis Data Analytics for SQL Applications to perform a sliding window analysis to compute the metrics and output the results to a Kinesis Data Streams data stream. Configure an AWS Lambda function to save the stream data to an Amazon DynamoDB table. Deploy a real-time dashboard hosted in an Amazon S3 bucket to read and display the metrics data stored in the DynamoDB table.
B) Publish the raw social media data to an Amazon Kinesis Data Firehose delivery stream. Configure the stream to deliver the data to an Amazon Elasticsearch Service cluster with a buffer interval of 0 seconds. Use Kibana to perform the analysis and display the results.
C) Publish the raw social media data to an Amazon Kinesis Data Streams data stream. Configure an AWS Lambda function to compute the metrics on the stream data and save the results in an Amazon S3 bucket. Configure a dashboard in Amazon QuickSight to query the data using Amazon Athena and display the results.
D) Publish the raw social media data to an Amazon SNS topic. Subscribe an Amazon SQS queue to the topic. Configure Amazon EC2 instances as workers to poll the queue, compute the metrics, and save the results to an Amazon Aurora MySQL database. Configure a dashboard in Amazon QuickSight to query the data in Aurora and display the results.
ANSWER16:
A
Notes/Hint16:
Amazon Kinesis Data Analytics can query data in a Kinesis Data Firehose delivery stream in near-real time using SQL. A sliding window analysis is appropriate for determining trends in the stream. Amazon S3 can host a static webpage that includes JavaScript that reads the data in Amazon DynamoDB and refreshes the dashboard.
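A sketch of the Lambda function from option A, assuming the table name and the field names emitted by the Kinesis Data Analytics application; it decodes each stream record and persists the metric to DynamoDB for the dashboard to read.

```python
import base64
import json
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("social-media-metrics")  # assumed table name

def handler(event, context):
    # Each record is a base64-encoded JSON document produced by the
    # Kinesis Data Analytics sliding-window query.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(Item={
            "metric_name": payload["METRIC_NAME"],        # assumed fields
            "window_end": payload["WINDOW_END"],
            "metric_value": Decimal(str(payload["METRIC_VALUE"])),
        })
```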
Question 17: A real estate company is receiving new property listing data from its agents through .csv files every day and storing these files in Amazon S3. The data analytics team created an Amazon QuickSight visualization report that uses a dataset imported from the S3 files. The data analytics team wants the visualization report to reflect the current data up to the previous day. How can a data analyst meet these requirements?
A) Schedule an AWS Lambda function to drop and re-create the dataset daily.
B) Configure the visualization to query the data in Amazon S3 directly without loading the data into SPICE.
C) Schedule the dataset to refresh daily.
D) Close and open the Amazon QuickSight visualization.
ANSWER17:
C
Notes/Hint17:
Datasets created using Amazon S3 as the data source are automatically imported into SPICE, so the visualization cannot query the S3 data directly. The Amazon QuickSight console allows you to schedule a daily refresh of the SPICE dataset, which keeps the report current up to the previous day.
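For reference, scheduled refreshes are configured in the QuickSight console, but the same SPICE refresh can also be triggered programmatically; the account and dataset IDs below are placeholders.

```python
import uuid

import boto3

quicksight = boto3.client("quicksight")

# Kick off a SPICE ingestion (refresh) for the dataset backing the report.
quicksight.create_ingestion(
    AwsAccountId="111122223333",        # placeholder account ID
    DataSetId="property-listings",      # placeholder dataset ID
    IngestionId=str(uuid.uuid4()),      # each refresh needs a unique ID
)
```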
Question 18: You need to migrate data to AWS. It is estimated that the data transfer will take over a month via the current AWS Direct Connect connection your company has set up. Which AWS tool should you use?
A) Establish additional Direct Connect connections.
B) Use Data Pipeline to migrate the data in bulk to S3.
C) Use Kinesis Firehose to stream all new and existing data into S3.
D) Snowball
ANSWER18:
D
Notes/Hint18:
As a general rule, if it takes more than one week to upload your data to AWS using the spare capacity of your existing internet connection, you should consider using Snowball. For example, if you have a 100 Mbps connection that you can dedicate solely to transferring your data and you need to transfer 100 TB, the transfer takes more than 100 days to complete over that connection. You can make the same transfer with multiple Snowballs in about a week.
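The arithmetic behind that rule of thumb, as a quick sanity check:

```python
# 100 TB over a dedicated 100 Mbps link, using decimal units.
data_bits = 100 * 10**12 * 8          # 100 TB expressed in bits
link_bps = 100 * 10**6                # 100 Mbps in bits per second

days = data_bits / link_bps / 86400
print(f"{days:.0f} days")             # ~93 days at perfect utilization;
                                      # real-world overhead pushes it past 100
```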
Question 19: You currently have an on-premises Oracle database and have decided to leverage AWS and use Aurora. You need to do this as quickly as possible. How do you achieve this?
A) It is not possible to migrate an on-premises database to AWS at this time.
B) Use AWS Data Pipeline to create a target database, migrate the database schema, set up the data replication process, initiate the full load and a subsequent change data capture and apply, and conclude with a switchover of your production environment to the new database once the target database is caught up with the source database.
C) Use AWS Database Migration Services and create a target database, migrate the database schema, set up the data replication process, initiate the full load and a subsequent change data capture and apply, and conclude with a switch-over of your production environment to the new database once the target database is caught up with the source database.
D) Use AWS Glue to crawl the on-premises database schemas and then migrate them into AWS with Data Pipeline jobs.
ANSWER19:
C
Notes/Hint19:
AWS DMS can efficiently support this sort of migration using the steps outlined. While AWS Glue can help you crawl schemas and store metadata about them for later use, it isn't the best tool for actually transitioning a database to AWS. Similarly, while Data Pipeline is great for ETL and ELT jobs, it isn't the best option for migrating a database to AWS.
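A boto3 sketch of the full-load-plus-CDC task at the heart of option C. The endpoint and replication instance ARNs are placeholders; in practice you create them first with create_endpoint and create_replication_instance, and migrate the schema with the AWS Schema Conversion Tool.

```python
import boto3

dms = boto3.client("dms")

# Create a task that performs a full load, then applies ongoing changes
# (change data capture) until you switch production over to Aurora.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INST",
    MigrationType="full-load-and-cdc",
    TableMappings='{"rules":[{"rule-type":"selection","rule-id":"1",'
                  '"rule-name":"1","object-locator":{"schema-name":"%",'
                  '"table-name":"%"},"rule-action":"include"}]}',
)

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```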
Question 20: A financial company uses Amazon EMR for its analytics workloads. During the company’s annual security audit, the security team determined that none of the EMR clusters’ root volumes are encrypted. The security team recommends the company encrypt its EMR clusters’ root volume as soon as possible. Which solution would meet these requirements?
A) Enable at-rest encryption for EMR File System (EMRFS) data in Amazon S3 in a security configuration. Re-create the cluster using the newly created security configuration.
B) Specify local disk encryption in a security configuration. Re-create the cluster using the newly created security configuration.
C) Detach the Amazon EBS volumes from the master node. Encrypt the EBS volume and attach it back to the master node.
D) Re-create the EMR cluster with LZO encryption enabled on all volumes.
ANSWER20:
B
Notes/Hint20:
Local disk encryption can be enabled as part of a security configuration to encrypt root and storage volumes.
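A sketch of option B using boto3, assuming a placeholder KMS key ARN; on recent EMR releases, EnableEbsEncryption extends local disk encryption to the root volumes as well.

```python
import json

import boto3

emr = boto3.client("emr")

# Security configuration enabling local disk encryption.
security_config = {
    "EncryptionConfiguration": {
        "EnableAtRestEncryption": True,
        "EnableInTransitEncryption": False,
        "AtRestEncryptionConfiguration": {
            "LocalDiskEncryptionConfiguration": {
                "EncryptionKeyProviderType": "AwsKms",
                "AwsKmsKey": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
                "EnableEbsEncryption": True,
            }
        },
    }
}

emr.create_security_configuration(
    Name="local-disk-encryption",
    SecurityConfiguration=json.dumps(security_config),
)
# Then re-create the cluster with
# run_job_flow(..., SecurityConfiguration="local-disk-encryption").
```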
Question 21: A company has a clickstream analytics solution using Amazon Elasticsearch Service. The solution ingests 2 TB of data from Amazon Kinesis Data Firehose and stores the latest data collected within 24 hours in an Amazon ES cluster. The cluster runs a single index on 12 data nodes and 3 dedicated master nodes. The cluster is configured with 3,000 shards, and each node has 3 TB of EBS storage attached. The data analyst noticed that query performance in Elasticsearch is sluggish and that Kinesis Data Firehose produces intermittent errors when it tries to write to the index. Upon further investigation, occasional JVMMemoryPressure errors were found in the Amazon ES logs.
What should be done to improve the performance of the Amazon Elasticsearch Service cluster?
A) Improve the cluster performance by increasing the number of master nodes of Amazon Elasticsearch.
B) Improve the cluster performance by increasing the number of shards of the Amazon Elasticsearch index.
C) Improve the cluster performance by decreasing the number of data nodes of Amazon Elasticsearch.
D) Improve the cluster performance by decreasing the number of shards of the Amazon Elasticsearch index.
ANSWER21:
D
Notes/Hint21:
Amazon Elasticsearch Service (Amazon ES) is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analysis. With Amazon ES, you get direct access to the Elasticsearch APIs; existing code and applications work seamlessly with the service.
Each Elasticsearch index is split into some number of shards. You should decide the shard count before indexing your first document. The overarching goal of choosing a number of shards is to distribute an index evenly across all data nodes in the cluster. However, these shards shouldn't be too large or too numerous.
A good rule of thumb is to keep shard size between 10 and 50 GiB. Large shards can make it difficult for Elasticsearch to recover from failure, but because each shard uses some amount of CPU and memory, having too many small shards can cause performance issues and out-of-memory errors. In other words, shards should be small enough that the underlying Amazon ES instance can handle them, but not so small that they place needless strain on the hardware. Therefore the correct answer is: Improve the cluster performance by decreasing the number of shards of the Amazon Elasticsearch index.
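Plugging the numbers from this scenario into that rule of thumb shows why the cluster struggles:

```python
index_size_gib = 2 * 1024            # ~2 TB of data kept in the index
shards = 3000

print(index_size_gib / shards)       # ~0.68 GiB per shard: far below 10 GiB

# Re-sizing toward ~30 GiB per shard suggests a much smaller shard count:
print(round(index_size_gib / 30))    # ~68 shards
```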
Question 26: Which service uses continuous data replication with high availability to consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3?
ANSWER26:
AWS Database Migration Service (AWS DMS)
Notes/Hint26:
AWS DMS continuously replicates data with high availability and can consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3.
Question 29: During your data preparation stage, the raw data has been enriched to support additional insights. You need to improve query performance and reduce the costs of the final analytics solution. Which data formats meet these requirements? (SELECT TWO)
ANSWER29:
Apache Parquet and Apache ORC
Notes/Hint29:
Parquet and ORC are compressed, columnar formats. Because analytics engines such as Amazon Athena and Amazon Redshift Spectrum scan only the columns a query needs, these formats reduce both query time and the amount of data scanned, which lowers cost.
Question 30: Your small start-up company is developing a data analytics solution. You need to clean and normalize large datasets, but you do not have developers with the skill set to write custom scripts. Which tool will help you efficiently design and run the data preparation activities?
ANSWER30:
AWS Glue DataBrew
Notes/Hint30:
AWS Glue DataBrew
To be able to run analytics, build reports, or apply machine learning, you need to be sure the data you’re using is clean and in the right format. This data preparation step requires data analysts and data scientists to write custom code and perform many manual activities. When cleaning and normalizing data, it is helpful to first review the dataset to understand which possible values are present. Simple visualizations are helpful for determining whether correlations exist between the columns.
AWS Glue DataBrew is a visual data preparation tool that helps you clean and normalize data up to 80% faster, so you can focus on extracting business value. DataBrew provides a visual interface that quickly connects to your data stored in Amazon S3, Amazon Redshift, Amazon Relational Database Service (RDS), any JDBC-accessible data store, or data indexed by the AWS Glue Data Catalog. You can then explore the data, look for patterns, and apply transformations. For example, you can apply joins and pivots, merge different datasets, or use functions to manipulate data.
Question 30: In which scenario would you use AWS Glue jobs?
A) Analyze data in real-time as data comes into the data lake
B) Transform data in real-time as data comes into the data lake
C) Analyze data in batches on schedule or on demand
D) Transform data in batches on schedule or on demand.
ANSWER30:
D
Notes/Hint30:
An AWS Glue job encapsulates a script that connects to your source data, processes it, and then writes it out to your data target. Typically, a job runs extract, transform, and load (ETL) scripts. Jobs can also run general-purpose Python scripts (Python shell jobs). AWS Glue triggers can start jobs based on a schedule or event, or on demand. You can monitor job runs to understand runtime metrics such as completion status, duration, and start time.
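A minimal boto3 sketch of starting and monitoring such a batch job on demand; the job name and target path are assumptions, and the same job could instead be started by a Glue trigger on a schedule or an event.

```python
import boto3

glue = boto3.client("glue")

# Start a (hypothetical) batch ETL job on demand.
run = glue.start_job_run(
    JobName="nightly-clickstream-etl",           # assumed job name
    Arguments={"--target_path": "s3://example-bucket/curated/"},
)

# Check the run's status to monitor completion.
status = glue.get_job_run(JobName="nightly-clickstream-etl",
                          RunId=run["JobRunId"])
print(status["JobRun"]["JobRunState"])           # e.g. RUNNING, SUCCEEDED
```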
Question 31: Your data resides in multiple data stores, including Amazon S3, Amazon RDS, and Amazon DynamoDB. You need to efficiently query the combined datasets.
Which tool can achieve this, using a single query, without moving data?
A) Amazon Athena Federated Query
B) Amazon Redshift Query Editor
C) SQL Workbench
D) AWS Glue DataBrew
ANSWER31:
A
Notes/Hint31:
With Amazon Athena Federated Query, you can run SQL queries across a variety of relational, non-relational, and custom data sources. You get a unified way to run SQL queries across various data stores.
Athena uses data source connectors that run on AWS Lambda to run federated queries. A data source connector is a piece of code that can translate between your target data source and Athena. You can think of a connector as an extension of Athena's query engine. Pre-built Athena data source connectors exist for data sources like Amazon CloudWatch Logs, Amazon DynamoDB, Amazon DocumentDB, Amazon RDS, and JDBC-compliant relational data sources such as MySQL and PostgreSQL (under the Apache 2.0 license). You can also use the Athena Query Federation SDK to write custom connectors. To choose, configure, and deploy a data source connector to your account, you can use the Athena and Lambda consoles or the AWS Serverless Application Repository. After you deploy data source connectors, each connector is associated with a catalog that you can specify in SQL queries. You can combine SQL statements from multiple catalogs and span multiple data sources with a single query.
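A sketch of what such a query looks like when submitted through boto3, assuming a DynamoDB connector already deployed and registered as a catalog; the catalog, database, table, and bucket names are all placeholders.

```python
import boto3

athena = boto3.client("athena")

# Federated query joining S3 data (default awsdatacatalog) with a
# DynamoDB table exposed through a deployed data source connector.
query = """
SELECT o.order_id, o.total, c.segment
FROM   awsdatacatalog.sales.orders AS o
JOIN   "ddbcatalog"."default"."customers" AS c
       ON o.customer_id = c.customer_id
"""

resp = athena.start_query_execution(
    QueryString=query,
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(resp["QueryExecutionId"])
```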
Question 32: Which benefit do you achieve by using AWS Lake Formation to build data lakes?
A) Build data lakes quickly
B) Simplify security management
C) Provide self-service access to data
D) All of the above
ANSWER32:
D
Notes/Hint32:
Build data lakes quickly
With Lake Formation, you can move, store, catalog, and clean your data faster. You simply point Lake Formation at your data sources, and Lake Formation crawls those sources and moves the data into your new Amazon S3 data lake. Lake Formation organizes data in S3 around frequently used query terms and into right-sized chunks to increase efficiency. Lake Formation also converts data into formats like Apache Parquet and ORC for faster analytics. In addition, Lake Formation has built-in machine learning to deduplicate and find matching records (two entries that refer to the same thing) to increase data quality.
Simplify security management
You can use Lake Formation to centrally define security, governance, and auditing policies in one place, versus doing these tasks per service. You can then enforce those policies for your users across their analytics applications. Your policies are consistently implemented, eliminating the need to manually configure them across security services like AWS Identity and Access Management (AWS IAM) and AWS Key Management Service (AWS KMS), storage services like Amazon S3, and analytics and machine learning services like Amazon Redshift, Amazon Athena, and (in beta) Amazon EMR for Apache Spark. This reduces the effort in configuring policies across services and provides consistent enforcement and compliance.
Provide self-service access to data
With Lake Formation, you build a data catalog that describes the different available datasets along with which groups of users have access to each. This makes your users more productive by helping them find the right dataset to analyze. By providing a catalog of your data with consistent security enforcement, Lake Formation makes it easier for your analysts and data scientists to use their preferred analytics service. They can use Amazon EMR for Apache Spark (in beta), Amazon Redshift, or Amazon Athena on diverse datasets that are now housed in a single data lake. Users can also combine these services without having to move data between silos.
Question 33: What are the three stages to set up a data lake using AWS Lake Formation? (SELECT THREE)
A) Register the storage location
B) Create a database
C) Populate the database
D) Grant permissions
ANSWER33:
A B and D
Notes/Hint33:
Register the storage location
Lake Formation manages access to designated storage locations within Amazon S3. Register the storage locations that you want to be part of the data lake.
Create a database
Lake Formation organizes data into a catalog of logical databases and tables. Create one or more databases and then automatically generate tables during data ingestion for common workflows.
Grant permissions
Lake Formation manages access for IAM users, roles, and Active Directory users and groups via flexible database, table, and column permissions. Grant permissions to one or more resources for your selected users.
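A boto3 sketch of the three stages, with the bucket name, database name, and role ARNs as placeholders:

```python
import boto3

lf = boto3.client("lakeformation")
glue = boto3.client("glue")

# 1) Register the storage location (placeholder bucket ARN).
lf.register_resource(
    ResourceArn="arn:aws:s3:::example-data-lake",
    UseServiceLinkedRole=True,
)

# 2) Create a database in the Data Catalog.
glue.create_database(DatabaseInput={"Name": "sales"})

# 3) Grant permissions on the database to an analyst role (assumed ARN).
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier":
               "arn:aws:iam::111122223333:role/analyst"},
    Resource={"Database": {"Name": "sales"}},
    Permissions=["DESCRIBE"],
)
```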
Question 34: Which of the following AWS Lake Formation tasks are performed by the AWS Glue service? (SELECT THREE)
A) ETL code creation and job monitoring
B) Blueprints to create workflows
C) Data catalog and serverless architecture
D) Simplify security management
ANSWER34:
A B and C
Notes/Hint34:
Lake Formation leverages a shared infrastructure with AWS Glue, including console controls, ETL code creation and job monitoring, blueprints to create workflows for data ingestion, the same data catalog, and a serverless architecture. While AWS Glue focuses on these types of functions, Lake Formation encompasses all AWS Glue features AND provides additional capabilities designed to help build, secure, and manage a data lake. See the AWS Glue features page for more details.
Question 35: A digital media customer needs to quickly build a data lake solution for the data housed in a PostgreSQL database. As a solutions architect, what service and feature would meet this requirement?
A) Copy PostgreSQL data to an Amazon S3 bucket and build a data lake using AWS Lake Formation
B) Use AWS Lake Formation blueprints
C) Build a data lake manually
D) Build an analytics solution by directly accessing the database.
ANSWER35:
B
Notes/Hint35:
A blueprint is a data management template that enables you to easily ingest data into a data lake. Lake Formation provides several blueprints, each for a predefined source type, such as a relational database or AWS CloudTrail logs. From a blueprint, you can create a workflow. Workflows consist of AWS Glue crawlers, jobs, and triggers that are generated to orchestrate the loading and update of data. Blueprints take the data source, data target, and schedule as input to configure the workflow.
Question 36: AWS Lake Formation has a set of suggested personas and IAM permissions. Which is a required persona?
A) Data lake administrator
B) Data engineer
C) Data analyst
D) Business analyst
ANSWER36:
A
Notes/Hint36:
Data lake administrator (Required) – A user who can register Amazon S3 locations, access the Data Catalog, create databases, create and run workflows, grant Lake Formation permissions to other users, and view AWS CloudTrail logs. This user has fewer IAM permissions than the IAM administrator but enough to administer the data lake; a data lake administrator cannot add other data lake administrators.
Data engineer (Optional) – A user who can create and run crawlers and workflows and grant Lake Formation permissions on the Data Catalog tables that the crawlers and workflows create.
Data analyst (Optional) – A user who can run queries against the data lake using, for example, Amazon Athena. The user has only enough permissions to run queries.
Business analyst (Optional) – Generally, an end-user, application-specific persona that queries data and resources using a workflow role.
Question 37: Which three types of blueprints does AWS Lake Formation support? (SELECT THREE)
ANSWER37:
Database snapshot, Incremental database, and Log file
Notes/Hint37:
AWS Lake Formation blueprints simplify and automate creating workflows. Lake Formation provides the following types of blueprints:
• Database snapshot – Loads or reloads data from all tables into the data lake from a JDBC source. You can exclude some data from the source based on an exclude pattern.
• Incremental database – Loads only new data into the data lake from a JDBC source, based on previously set bookmarks. You specify the individual tables in the JDBC source database to include. For each table, you choose the bookmark columns and bookmark sort order to keep track of data that has previously been loaded. The first time that you run an incremental database blueprint against a set of tables, the workflow loads all data from the tables and sets bookmarks for the next incremental database blueprint run. You can therefore use an incremental database blueprint instead of the database snapshot blueprint to load all data, provided that you specify each table in the data source as a parameter.
• Log file – Bulk loads data from log file sources, including AWS CloudTrail, Elastic Load Balancing logs, and Application Load Balancer logs.
Question 38: Which one of the following is the best description of the capabilities of Amazon QuickSight?
A) Automated configuration service build on AWS Glue
B) Fast, serverless business intelligence service
C) Fast, simple, cost-effective data warehousing
D) Simple, scalable, and serverless data integration
ANSWER38:
B
Notes/Hint38:
B. Scalable, serverless business intelligence service is the correct choice.
See the brief descriptions of several AWS Analytics services below:
AWS Lake Formation – Build a secure data lake in days using Glue blueprints and workflows
Amazon QuickSight – Scalable, serverless, embeddable, ML-powered BI service built for the cloud
Amazon Redshift – Analyze all of your data with the fastest and most widely used cloud data warehouse
AWS Glue – Simple, scalable, and serverless data integration
Question 39: Which benefits are provided by Amazon Redshift? (Select TWO)
A) Analyze Data stored in your data lake
B) Maintain performance at scale
C) Focus effort on Data warehouse administration
D) Store all the data to meet analytics need
E) Amazon Redshift includes enterprise-level security and compliance features.
ANSWER39:
A and B
Notes/Hint39:
• A is correct – With Amazon Redshift, you can analyze all your data, including exabytes of data stored in your Amazon S3 data lake.
• B is correct – Amazon Redshift provides consistent performance at scale.
• C is incorrect – Amazon Redshift is a fully managed data warehouse solution. It includes automation that reduces the administrative overhead traditionally associated with data warehouses, so you can focus your development effort on strategic data analytics solutions rather than on administration.
• D is incorrect – With Amazon Redshift features such as Amazon Redshift Spectrum, materialized views, and federated query, you can analyze data where it is stored in your data lake or AWS databases. This flexibility lets you meet new analytics requirements without the cost, time, or complexity of moving large volumes of data between solutions.
• E is incorrect – Although it is true that Amazon Redshift includes enterprise-level security and compliance features, that is not one of the two benefits this question asks about.
Djamga Data Sciences Big Data – Data Analytics YouTube Playlist