Understand bastion hosts, and which subnet one might live on. Bastion hosts are instances that sit within your public subnet and are typically accessed using SSH or RDP. Once remote connectivity has been established with the bastion host, it acts as a ‘jump’ server, allowing you to use SSH or RDP to log in to other instances (within private subnets) deeper within your network. When properly configured through the use of security groups and network ACLs, the bastion essentially acts as a bridge to your private instances via the Internet. Bastion Hosts
Know the difference between Directory Service’s AD Connector and Simple AD. Use Simple AD if you need an inexpensive Active Directory–compatible service with the common directory features. AD Connector lets you simply connect your existing on-premises Active Directory to AWS. AD Connector and Simple AD
Know how to enable cross-account access with IAM: To delegate permission to access a resource, you create an IAM role that has two policies attached. The permissions policy grants the user of the role the needed permissions to carry out the desired tasks on the resource. The trust policy specifies which trusted accounts are allowed to grant its users permissions to assume the role. The trust policy on the role in the trusting account is one-half of the permissions. The other half is a permissions policy attached to the user in the trusted account that allows that user to switch to, or assume the role. Enable cross-account access with IAM
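The two policy documents described above can be sketched as plain IAM policy JSON; the account IDs and the role name `CrossAccountRole` here are hypothetical placeholders, not values from any real setup:

```python
import json

# Hypothetical account IDs for illustration only.
TRUSTING_ACCOUNT = "111111111111"   # owns the resource and the role
TRUSTED_ACCOUNT = "222222222222"    # owns the user who assumes the role

# Trust policy attached to the role in the trusting account:
# it specifies which trusted account may assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{TRUSTED_ACCOUNT}:root"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy attached to the user in the trusted account:
# it allows that user to switch to (assume) the role.
assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": f"arn:aws:iam::{TRUSTING_ACCOUNT}:role/CrossAccountRole",
    }],
}

print(json.dumps(trust_policy, indent=2))
```

Together these form the two halves of the permission: the trust policy on the role, and the assume-role permission on the user.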
Know which services allow you to retain full admin privileges of the underlying EC2 instances. EC2 Full admin privilege
Know When Elastic IPs are free or not: If you associate additional EIPs with that instance, you will be charged for each additional EIP associated with that instance per hour on a pro rata basis. Additional EIPs are only available in Amazon VPC. To ensure efficient use of Elastic IP addresses, we impose a small hourly charge when these IP addresses are not associated with a running instance or when they are associated with a stopped instance or unattached network interface. When are AWS Elastic IPs Free or not?
Know the four high-level categories of information Trusted Advisor supplies: cost optimization, security, fault tolerance, and performance. #AWS Trusted Advisor
Know how to troubleshoot a connection time out error when trying to connect to an instance in your VPC. You need a security group rule that allows inbound traffic from your public IP address on the proper port, you need a route that sends all traffic destined outside the VPC (0.0.0.0/0) to the Internet gateway for the VPC, the network ACLs must allow inbound and outbound traffic from your public IP address on the proper port, etc. #AWS Connection time out error
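The security-group part of that checklist can be illustrated with a small, self-contained rule matcher; the rule format here is a simplified stand-in for the real API shape, used only for illustration:

```python
from ipaddress import ip_address, ip_network

def inbound_allowed(rules, src_ip, port):
    """Return True if any security-group-style inbound rule permits
    traffic from src_ip on the given port.

    Each rule is a simplified dict such as
    {"cidr": "203.0.113.0/24", "from_port": 22, "to_port": 22}.
    Security groups are allow-only, so one matching rule is enough.
    """
    for rule in rules:
        if (rule["from_port"] <= port <= rule["to_port"]
                and ip_address(src_ip) in ip_network(rule["cidr"])):
            return True
    return False

rules = [{"cidr": "203.0.113.0/24", "from_port": 22, "to_port": 22}]
print(inbound_allowed(rules, "203.0.113.10", 22))  # True: SSH from your IP is allowed
print(inbound_allowed(rules, "198.51.100.1", 22))  # False: a likely cause of a timeout
```

If the source IP or port fails every rule, the connection times out silently, which is exactly the symptom the tip describes.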
Be able to identify multiple possible use cases and eliminate non-use cases for SWF. #AWS
Understand how you might set up consolidated billing and cross-account access such that individual divisions’ resources are isolated from each other, but corporate IT can oversee all of them. #AWS Set up consolidated billing
Know how you would go about making changes to an Auto Scaling group, fully understanding what you can and can’t change. “You can only specify one launch configuration for an Auto Scaling group at a time, and you can’t modify a launch configuration after you’ve created it. Therefore, if you want to change the launch configuration for your Auto Scaling group, you must create a launch configuration and then update your Auto Scaling group with the new launch configuration. When you change the launch configuration for your Auto Scaling group, any new instances are launched using the new configuration parameters, but existing instances are not affected.” #AWS Make Change to Auto Scaling group
Know which field you use to run a script upon launching your instance. #AWS User data script
Know how DynamoDB (durable, and you can pay for strong consistency), Elasticache (great for speed, not so durable), and S3 (eventual consistency results in lower latency) compare to each other in terms of durability and low latency. #AWS DynamoDB consistency
Know the difference between bucket policies, IAM policies, and ACLs for use with S3, and examples of when you would use each. “With IAM policies, companies can grant IAM users fine-grained control to their Amazon S3 bucket or objects while also retaining full control over everything the users do. With bucket policies, companies can define rules which apply broadly across all requests to their Amazon S3 resources, such as granting write privileges to a subset of Amazon S3 resources. Customers can also restrict access based on an aspect of the request, such as HTTP referrer and IP address. With ACLs, customers can grant specific permissions (e.g. READ, WRITE, FULL_CONTROL) to specific users for an individual bucket or object.” #AWS Difference between bucket policies
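A sketch of a bucket policy restricting access by request IP, the kind of broad, request-level rule bucket policies are suited for; the bucket name and CIDR range are hypothetical:

```python
# Illustrative bucket policy (hypothetical bucket name "example-bucket"):
# grants read access broadly, but only to requests arriving from one
# office IP range via the aws:SourceIp condition key.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowReadFromOfficeOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}

print(bucket_policy["Statement"][0]["Sid"])
```

An equivalent rule could not be expressed with an ACL, which only grants fixed permissions to specific grantees; that contrast is worth remembering for the exam.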
Understand how you can use ELB cross-zone load balancing to ensure even distribution of traffic to EC2 instances in multiple AZs registered with a load balancer. #AWS ELB cross-zone load balancing
Spot instances are good for cost optimization, even if it seems you might need to fall back to On-Demand instances if you wind up getting kicked off them and the timeline grows tighter. The primary (but still not only) factor is whether you can gracefully handle instances that die on you, which is pretty much how you should always design everything anyway! #AWS Spot instances
The term “use case” is not the same as “function” or “capability”. A use case is something that your app/system will need to accomplish, not just behaviour that you will get from that service. In particular, a use case doesn’t require that the service be a 100% turnkey solution for that situation, just that the service plays a valuable role in enabling it. #AWS use case
There might be extra, unnecessary information in some of the questions (red herrings), so try not to get thrown off by them. Understand what services can and can’t do, but don’t ignore “obvious”-but-still-correct answers in favour of super-tricky ones. #AWS Exam Answers: Distractors
If you don’t know what a question is asking, just move on and come back to it later (by using the helpful “mark this question” feature in the exam tool). You could easily spend far more time than you should on a single confusing question if you don’t triage and move on. #AWS Exam: Skip questions that are vague and come back to them later
Some exam questions required you to understand the features and use cases of: VPC peering, cross-account access, Direct Connect, snapshotting EBS RAID arrays, DynamoDB, spot instances, Glacier, AWS/user security responsibilities, etc. #AWS
Know the 30-day minimum storage constraint in an S3 lifecycle policy before transitioning objects to the S3-IA and S3 One Zone-IA storage classes. #AWS S3 lifecycle policy
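That constraint can be expressed as a tiny validator; the storage-class names match the values S3’s API uses, and 30 days is the documented minimum for both classes:

```python
# Minimum days an object must sit in S3 Standard before a lifecycle
# rule may transition it to each storage class (30 for both).
MIN_TRANSITION_DAYS = {"STANDARD_IA": 30, "ONEZONE_IA": 30}

def valid_transition(storage_class, days):
    """Check a lifecycle transition against the minimum-age constraint."""
    return days >= MIN_TRANSITION_DAYS.get(storage_class, 0)

print(valid_transition("STANDARD_IA", 30))  # True
print(valid_transition("ONEZONE_IA", 7))    # False: violates the 30-day minimum
```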
Watch A Cloud Guru video lectures while commuting or on your lunch break. Reschedule the exam if you are not yet ready. #AWS ACloud Guru
Watch Linux Academy video lectures while commuting or on your lunch break. Reschedule the exam if you are not yet ready. #AWS Linux Academy
Watch Udemy video lectures while commuting or on your lunch break. Reschedule the exam if you are not yet ready. #AWS Udemy
The Udemy practice test interface is good in that it pinpoints your weak areas, so what I did was re-watch all the videos for the questions I answered incorrectly. Since I was able to gauge my exam readiness, I decided to push my exam back two more weeks to focus on completing the practice tests. #AWS Udemy
Use AWS cheat sheets. I also found the cheat sheets provided by Tutorials Dojo very helpful. In my opinion, they are better than Jayendrapatil Patil’s blog since they contain more updated information that complements your review notes. #AWS Cheat Sheet
Watch this 3-hour exam readiness video; it is a very recent webinar that covers what is expected in the exam. #AWS Exam Prep Video
Start off watching Ryan’s videos, and try to completely focus on the hands-on work. Take your time to understand what you are trying to learn and achieve in those lab sessions. #AWS Exam Prep Video
Do not rush into completing the videos. Take your time and hone the basics. Focus and spend a lot of time on the backbone of the AWS infrastructure: the Compute/EC2 section, Storage (S3/EBS/EFS), Networking (Route 53/Load Balancers), RDS, and VPC. These sections are vast, with lots of concepts to go over and loads to learn. Trust me, you will need to thoroughly understand each one of them to pass the certification comfortably. #AWS Exam Prep Video
Make sure you go through the resources section and the AWS documentation for each component. Go over the FAQs. If you have a question, post it in the community; trust me, each answer there helps you understand more about AWS. #AWS FAQs
Like any other product or service, each AWS offering comes in different flavors. Take EC2 as an example (Spot/Reserved/Dedicated/On-Demand, etc.): make sure you understand what these flavors are and the pros and cons of each. The same applies to all other offerings. #AWS Services
Make sure to attempt all the quizzes after each section, but please do not treat these quizzes as your practice exams. The quizzes are designed mostly to test your knowledge of the section you just finished. The exam itself tests you with scenarios and questions wherein you will need to recall and apply your knowledge of the different AWS technologies and services you learned over multiple lectures. #AWS Services
Personally, I do not recommend attempting a practice exam or exam simulator until you have done all of the above; it was a little overwhelming for me. I had gone over the videos thoroughly and understood the concepts pretty well, but once I opened the exam simulator I felt the questions were pretty difficult, and I had a feeling the videos did not cover a lot of topics. Later I realized that, given the vastness of AWS services and offerings, it is really difficult to encompass all of them and their details in the course content. The fact that these services change so often does not help. #AWS Services
Go back and make a note of all the topics that felt unfamiliar to you. Go through the resources section and find the links to the AWS documentation. After going over them, you should gain at least 5-10% more AWS knowledge. Treat the online courses as a way to get a thorough understanding of the basics and strong foundations for your AWS knowledge, but once you are done with the videos, make sure you spend a lot of time on the AWS documentation and FAQs. There are many topics and sub-topics that may not be covered in the course, and you will need to know at least their basic functionality to do well in the exam. #AWS Services
Once you start taking practice exams, they may seem really difficult at the beginning, so please do not panic if you find the questions complicated. In my opinion they are worded to sound complicated, but they are not. Stay calm and read each question very carefully; in my observation, many questions contain a lot of information that is not relevant to the solution you are expected to provide. Read the question slowly, and read it again until you understand what is expected of it. #AWS Services
With each practice exam you will come across topics that you need to brush up on or learn from scratch. #AWS Services
With each test and the subsequent revision, you will surely feel more confident. You have 130 minutes for the questions, which works out to 2 minutes per question: plenty of time. Take at least 8-10 practice tests; the ones on Udemy/Tutorials Dojo are really good, and if you are an A Cloud Guru member, their exam simulator is really good too. Manage your time well and keep your patience. Someone mentioned in one of the discussions not to underestimate the mental focus and strength needed to sit through 130 minutes of solving these questions, and it is really true: do not give away or waste any of those precious 130 minutes. While answering, flag/mark the questions you are not completely sure about. My advice: even if you finish early, spend the remaining time reviewing your answers. I was able to review 40 of my answers at the end of the test, and I rectified at least 3 of them (which is 4-5% of the total score, I think). So in short: put a lot of focus on making your foundations strong, go through the AWS documentation and FAQs, try to envision how all of the AWS components can fit together to provide an optimal solution, and keep calm. This video gives an outline of the exam; it is a must-watch before or after Ryan’s course. #AWS Services
Walking you through how to best prepare for the AWS Certified Solutions Architect Associate SAA-C02 exam in 5 steps: 1. Understand the exam blueprint 2. Learn about the new topics included in the SAA-C02 version of the exam 3. Use the many FREE resources available to gain and deepen your knowledge 4. Enroll in our hands-on video course to learn AWS in depth 5. Use practice tests to fully prepare yourself for the exam and assess your exam readiness AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02 : HOW TO BEST PREPARE IN 5 STEPS
Storage: 1. Know your different Amazon S3 storage tiers! You need to know the use cases, features and limitations, and relative costs; e.g. retrieval costs. 2. Amazon S3 lifecycle policies are also required knowledge; there are minimum storage times in certain tiers that you need to know. 3. For Glacier, you need to understand what it is, what it’s used for, and what the options are for retrieval times and fees. 4. For the Amazon Elastic File System (EFS), make sure you’re clear which operating systems you can use with it (just Linux). 5. For the Amazon Elastic Block Store (EBS), make sure you know when to use the different tiers, including instance stores; e.g. what would you use for a datastore that requires the highest IO where the data is distributed across multiple instances? (A good instance store use case.) 6. Learn about Amazon FSx. You’ll need to know about FSx for Windows and Lustre. 7. Know how to improve Amazon S3 performance, including using CloudFront and byte-range fetches; check out this whitepaper. 8. Make sure you understand the Amazon S3 object deletion protection options, including versioning and MFA delete. AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02 : HOW TO BEST PREPARE IN 5 STEPS
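Byte-range fetches (point 7) boil down to issuing parallel GETs, each with its own HTTP `Range` header. A minimal sketch of computing those headers; the object and chunk sizes are illustrative:

```python
def byte_ranges(object_size, chunk_size):
    """Split an S3 object into HTTP Range header values for parallel
    byte-range fetches (one GET per range improves aggregate throughput)."""
    ranges = []
    for start in range(0, object_size, chunk_size):
        end = min(start + chunk_size, object_size) - 1  # Range ends are inclusive
        ranges.append(f"bytes={start}-{end}")
    return ranges

# A 25 MB object fetched in 10 MB chunks needs three ranged GETs.
print(byte_ranges(25_000_000, 10_000_000))
```

Each returned string would be sent as a `Range` header on its own GET request, and the responses reassembled in order.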
Compute: 1. You need to have a good understanding of the options for how to scale an Auto Scaling Group using metrics such as SQS queue depth, or numbers of SNS messages. 2. Know your different Auto Scaling policies including Target Tracking Policies. 3. Read up on High Performance Computing (HPC) with AWS. You’ll need to know about Amazon FSx with HPC use cases. 4. Know your placement groups. Make sure you can differentiate between spread, cluster and partition; e.g. what would you use for lowest latency? What about if you need to support an app that’s tightly coupled? Within an AZ or cross AZ? 5. Make sure you know the difference between Elastic Network Adapters (ENAs), Elastic Network Interfaces (ENIs) and Elastic Fabric Adapters (EFAs). 6. For the Amazon Elastic Container Service (ECS), make sure you understand how to assign IAM policies to ECS for providing S3 access. How can you decouple an ECS data processing process — Kinesis Firehose or SQS? 7. Make sure you’re clear on the different EC2 pricing models including Reserved Instances (RI) and the different RI options such as scheduled RIs. 8. Make sure you know the maximum execution time for AWS Lambda (it’s currently 900 seconds or 15 minutes). AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02 : HOW TO BEST PREPARE IN 5 STEPS
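For point 1, scaling an Auto Scaling group on SQS queue depth usually means a backlog-per-instance calculation. A sketch of that arithmetic; the target of messages per instance is an assumption you would tune for your workload:

```python
import math

def desired_capacity(queue_depth, msgs_per_instance, min_size, max_size):
    """Size the Auto Scaling group so each instance handles roughly
    msgs_per_instance queued messages, clamped to the group's bounds."""
    wanted = math.ceil(queue_depth / msgs_per_instance) if queue_depth else min_size
    return max(min_size, min(max_size, wanted))

print(desired_capacity(950, 100, min_size=2, max_size=8))  # 8 (capped at max)
print(desired_capacity(250, 100, min_size=2, max_size=8))  # 3
```

In practice the queue depth would come from the CloudWatch `ApproximateNumberOfMessagesVisible` metric, and the result would feed a scaling policy rather than being set by hand.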
Network 1. Understand what AWS Global Accelerator is and its use cases. 2. Understand when to use CloudFront and when to use AWS Global Accelerator. 3. Make sure you understand the different types of VPC endpoint and which require an Elastic Network Interface (ENI) and which require a route table entry. 4. You need to know how to connect multiple accounts; e.g. should you use VPC peering or a VPC endpoint? 5. Know the difference between PrivateLink and ClassicLink. 6. Know the patterns for extending a secure on-premises environment into AWS. 7. Know how to encrypt AWS Direct Connect (you can use a Virtual Private Gateway / AWS VPN). 8. Understand when to use Direct Connect vs Snowball to migrate data — lead time can be an issue with Direct Connect if you’re in a hurry. 9. Know how to prevent circumvention of Amazon CloudFront; e.g. Origin Access Identity (OAI) or signed URLs / signed cookies. AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02 : HOW TO BEST PREPARE IN 5 STEPS
Databases 1. Make sure you understand Amazon Aurora and Amazon Aurora Serverless. 2. Know which RDS databases can have Read Replicas and whether you can read from a Multi-AZ standby. 3. Know the options for encrypting an existing RDS database; e.g. only at creation time otherwise you must encrypt a snapshot and create a new instance from the snapshot. 4. Know which databases are key-value stores; e.g. Amazon DynamoDB. AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02 : HOW TO BEST PREPARE IN 5 STEPS
Application Integration 1. Make sure you know the use cases for the Amazon Simple Queue Service (SQS), and Simple Notification Service (SNS). 2. Understand the differences between Amazon Kinesis Firehose and SQS and when you would use each service. 3. Know how to use Amazon S3 event notifications to publish events to SQS — here’s a good “How To” article. AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02 : HOW TO BEST PREPARE IN 5 STEPS
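For point 3, the notification configuration has a small, fixed shape (this is the structure boto3’s `put_bucket_notification_configuration` accepts); the queue ARN below is hypothetical, and note the SQS queue’s access policy must separately allow S3 to send messages:

```python
# Sketch of an S3 -> SQS event notification configuration.
# The queue ARN is a made-up example value.
notification_config = {
    "QueueConfigurations": [{
        "QueueArn": "arn:aws:sqs:us-east-1:111111111111:uploads-queue",
        "Events": ["s3:ObjectCreated:*"],  # fire on any object-created event
    }]
}

print(notification_config["QueueConfigurations"][0]["Events"])
```

With boto3 this dict would be passed as the `NotificationConfiguration` argument for the bucket in question.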
Management and Governance 1. You’ll need to know about AWS Organizations; e.g. how to migrate an account between organizations. 2. For AWS Organizations, you also need to know how to restrict actions using service control policies attached to OUs. 3. Understand what AWS Resource Access Manager is. AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02 : HOW TO BEST PREPARE IN 5 STEPS
The AWS Certified Solutions Architect Associate Examination Preparation and Readiness Quiz App (SAA-C01, SAA-C02, SAA) helps you prepare and train for the AWS Certified Solutions Architect Associate exam with various questions and answers dumps.
This app provides updated questions and answers and an intuitive, responsive interface allowing you to browse questions horizontally and browse tips and resources vertically after completing a quiz.
Features:
100+ Questions and Answers updated frequently to get you AWS certified.
Quiz with score tracker, countdown timer, and highest-score saving. View answers after completing the quiz for each category.
Ability to navigate through the questions in each category using the next and previous buttons.
Resource info page about the answer for each category and Top 60 Tips to succeed in the exam.
Latest tweets from prominent cloud evangelists and a technology news feed.
The app helps you study and practice from your mobile device with an intuitive interface.
SAA-C01 and SAA-C02 compatible
The questions and answers are divided into 4 categories:
Design High Performing Architectures,
Design Cost Optimized Architectures,
Design Secure Applications And Architectures,
Design Resilient Architectures.
The questions and answers cover the following topics: VPC, S3, DynamoDB, EC2, ECS, Lambda, API Gateway, CloudWatch, CloudTrail, CodePipeline, CodeDeploy, TCO Calculator, SES, Amazon Lex, EBS, ELB, Auto Scaling, RDS, Aurora, Route 53, Amazon CodeGuru, Amazon Braket, AWS Billing and Pricing, AWS Simple Monthly Calculator, AWS cost calculator, EC2 on-demand pricing, AWS Pay As You Go, AWS No Upfront Cost, Cost Explorer, AWS Organizations, consolidated billing, Instance Scheduler, on-demand instances, Reserved Instances, Spot Instances, CloudFront, web hosting on S3, S3 storage classes, S3 lifecycle policies, AWS Regions, AWS Availability Zones, Trusted Advisor, the AWS SDK, EBS volumes, read replicas, snapshots, auto-shutdown of EC2 instances, high availability, elasticity, caching, containers, KMS, security, bastion hosts, Kinesis sharding, load balancing, Multi-AZ RDS, EFS, NLB, ALB, DynamoDB (latency), Aurora (performance), Multi-AZ RDS (high availability), Throughput Optimized EBS (highly sequential workloads), Elastic Beanstalk, OpsWorks, RPO vs RTO, HA vs FT, undifferentiated heavy lifting, access management basics, the Shared Responsibility Model, cloud service models, AWS vs Azure vs Google Cloud, various architectural questions and answers about AWS, and more (SAA-C01, SAA-C02).
The resources sections cover the following areas: certification, AWS training, mock exam preparation tips, cloud architect training and knowledge, cloud technology, cloud certification, the AWS Certified Solutions Architect Associate exam, certification practice exams and questions, learning AWS for free, AWS certification dumps, A Cloud Guru links, Tutorials Dojo links, Linux Academy links, the latest AWS certification tweets and posts from Reddit, Quora, LinkedIn, and Medium, Google Cloud, Azure, learning Google Cloud and Azure, cloud comparisons, etc.
Abilities Validated by the Certification:
Effectively demonstrate knowledge of how to architect and deploy secure and robust applications on AWS technologies
Define a solution using architectural design principles based on customer requirements
Provide implementation guidance based on best practices to the organization throughout the life cycle of the project
Recommended Knowledge for the Certification:
One year of hands-on experience designing available, cost-effective, fault-tolerant, and scalable distributed systems on AWS.
Hands-on experience using compute, networking, storage, and database AWS services.
Hands-on experience with AWS deployment and management services.
Ability to identify and define technical requirements for an AWS-based application.
Ability to identify which AWS services meet a given technical requirement.
Knowledge of recommended best practices for building secure and reliable applications on the AWS platform.
An understanding of the basic architectural principles of building in the AWS Cloud.
An understanding of the AWS global infrastructure.
An understanding of network technologies as they relate to AWS.
An understanding of security features and tools that AWS provides and how they relate to traditional services.
Note and disclaimer: We are not affiliated with AWS, Amazon, Microsoft, or Google. The questions are put together based on the certification study guide and materials available online. We also receive questions and answers from anonymous users, and we vet them to make sure they are legitimate. The questions in this app should help you pass the exam, but passing is not guaranteed; we are not responsible for any exam you do not pass.
Important: To succeed with the real exam, do not memorize the answers in this app. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
What is the AWS Certified Solution Architect Associate Exam?
This exam validates an examinee’s ability to effectively demonstrate knowledge of how to architect and deploy secure and robust applications on AWS technologies. It validates an examinee’s ability to:
Define a solution using architectural design principles based on customer requirements.
Provide implementation guidance based on best practices to the organization throughout the lifecycle of the project.
There are two types of questions on the examination:
Multiple-choice: has one correct response and three incorrect responses (distractors).
Multiple-response: has two correct responses out of five options.
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that an examinee with incomplete knowledge or skill would likely choose. However, they are generally plausible responses that fit in the content area defined by the test objective. Unanswered questions are scored as incorrect; there is no penalty for guessing.
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
AWS certification exam quiz apps for all platforms
AWS (Amazon Web Services) is a popular cloud computing platform that offers a range of services including computing, storage, networking, and more. AWS offers a variety of certification exams to validate the skills and knowledge of professionals who work with its technologies. These certification exams are designed to test a wide range of knowledge and skills, including technical expertise, problem-solving abilities, and understanding of AWS services.
To prepare for an AWS certification exam, you may consider using a variety of resources including training courses, practice exams, and quiz apps. These resources can help you become familiar with the exam format, the types of questions that may be asked, and the knowledge and skills that will be tested.
Below is a listing of AWS certification exam quiz apps for all platforms:
What are the corresponding Azure and Google Cloud services for each of the AWS services?
What are the distinctions and similarities between AWS, Azure, and Google Cloud services? For each AWS service, what is the equivalent Azure and Google Cloud service? For each Azure service, what is the corresponding Google service? Below is a side-by-side comparison of AWS, Azure, and Google Cloud services.
Category: Marketplace Description: Easy-to-deploy and automatically configured third-party applications, including single virtual machine or multiple virtual machine solutions. References: [AWS]:AWS Marketplace [Azure]:Azure Marketplace [Google]:Google Cloud Marketplace Tags: #AWSMarketplace, #AzureMarketPlace, #GoogleMarketplace Differences: All three are digital catalogs with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on the respective cloud platform.
Tags: #AlexaSkillsKit, #MicrosoftBotFramework, #GoogleAssistant Differences: One major advantage Google has over Alexa is that Google Assistant is available on almost all Android devices.
Tags: #AmazonLex, #CognitiveServices, #AzureSpeech, #Api.ai, #DialogFlow, #Tensorflow Differences: api.ai provides a platform which is easy to learn and comprehensive enough to develop conversation actions. It is a good example of a simplistic approach to solving the complex human-to-machine communication problem using natural language processing combined with machine learning. api.ai now supports context-based conversations, which reduces the overhead of handling user context in session parameters; in Lex, on the other hand, this has to be handled in the session. Also, api.ai can be used for both voice- and text-based conversations (assistant actions can be easily created using api.ai).
Category: Big data and analytics: Data warehouse Description: Apache Spark-based analytics platform. Managed Hadoop service. Data orchestration, ETL, analytics, and visualization. References: [AWS]:EMR, Data Pipeline, Kinesis Stream, Kinesis Firehose, Glue, QuickSight, Athena, CloudSearch [Azure]:Azure Databricks, Data Catalog, Cortana Intelligence, HDInsight, Power BI, Azure Data Factory, Azure Search, Azure Data Lake Analytics, Stream Analytics, Azure Machine Learning [Google]:Cloud Dataproc, Machine Learning, Cloud Datalab Tags:#EMR, #DataPipeline, #Kinesis, #Cortana, #AzureDataFactory, #AzureDataLakeAnalytics, #CloudDataproc, #MachineLearning, #CloudDatalab Differences: All three providers offer similar building blocks: data processing, data orchestration, streaming analytics, machine learning, and visualisations. AWS certainly has all the bases covered with a solid set of products that will meet most needs. Azure offers a comprehensive and impressive suite of managed analytical products, supporting open-source big data solutions alongside new serverless analytical products such as Data Lake. Google provides its own twist on cloud analytics with its range of services; with Dataproc and Dataflow, Google has a strong core to its proposition. Tensorflow has been getting a lot of attention recently, and many will be keen to see Machine Learning come out of preview.
Category: Serverless Description: Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers. References: [AWS]:AWS Lambda [Azure]:Azure Functions [Google]:Google Cloud Functions Tags:#AWSLambda, #AzureFunctions, #GoogleCloudFunctions Differences: AWS Lambda, Azure Functions, and Google Cloud Functions all offer dynamic, configurable triggers that you can use to invoke your functions on their platforms, and all three support Node.js, Python, and C#. The beauty of serverless development is that, with minor changes, the code you write for one service should be portable to another with little effort: simply modify some interfaces and handle any input/output transforms, and an AWS Lambda Node.js function is indistinguishable from a Microsoft Azure Node.js function. AWS Lambda provides further support for Java, while Azure Functions provides support for F# and PHP. AWS Lambda runs on Amazon Linux, while Microsoft Azure Functions run in a Windows environment. AWS Lambda lets you spin up and tear down individual pieces of functionality in your application at will.
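The portability point above can be sketched by separating the core logic from thin provider-specific adapters. The AWS handler uses Lambda’s real (event, context) signature; the Google adapter below is a simplified stand-in, since a real Cloud Functions HTTP handler receives a request object rather than a dict:

```python
# Core logic kept provider-neutral: a plain function.
def core_logic(name):
    return f"Hello, {name}!"

def aws_lambda_handler(event, context):
    """AWS Lambda entry point (event/context signature)."""
    return {"statusCode": 200, "body": core_logic(event.get("name", "world"))}

def gcp_http_handler(request_args):
    """Simplified stand-in for a Google Cloud Functions HTTP handler."""
    return core_logic(request_args.get("name", "world"))

print(aws_lambda_handler({"name": "AWS"}, None)["body"])  # Hello, AWS!
```

Porting between providers then touches only the adapter, not the logic, which is the "minor changes" claim made above in concrete form.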
Category: Caching Description: An in-memory, distributed caching service that provides a high-performance store, typically used to offload non-transactional work from a database. References: [AWS]: AWS ElastiCache (works as an in-memory data store and cache to support the most demanding applications requiring sub-millisecond response times.) [Azure]: Azure Cache for Redis (based on the popular software Redis; typically used as a cache to improve the performance and scalability of systems that rely heavily on backend data stores.) [Google]: Memcache (in-memory key-value store, originally intended for caching) Tags: #Redis, #Memcached Differences: They all support horizontal scaling via sharding, and they all improve the performance of web applications by letting you retrieve information from fast, in-memory caches instead of relying on slower disk-based databases. ElastiCache supports both Memcached and Redis. Memcached Cloud provides various data persistence options as well as remote backups for disaster recovery purposes. Redis offers persistence to disk; Memcache does not. This can be very helpful if you cache lots of data, since you avoid the slowness of a fully cold cache. Redis also offers several extra data structures that Memcache doesn't (Lists, Sets, Sorted Sets, etc.); Memcache only has key/value pairs. Memcache is multi-threaded, while Redis is single-threaded and event-driven. Redis is very fast, but it will never be multi-threaded; at high scale, you can squeeze more connections and transactions out of Memcache. Memcache also tends to be more memory efficient, which can make a big difference at the magnitude of tens or hundreds of millions of keys.
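The cache-aside pattern these services support can be sketched in a few lines. In this illustrative sketch, a TTL-bearing in-process dict stands in for Redis or Memcached, and `slow_db_query` is a hypothetical disk-based database lookup:

```python
import time

class TTLCache:
    """Toy in-memory cache standing in for Redis/Memcached."""
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() > expires_at:  # expired: treat as a miss
            del self.store[key]
            return None
        return value

    def set(self, key, value):
        self.store[key] = (value, time.time() + self.ttl)

def slow_db_query(user_id):
    # Hypothetical disk-based database lookup (the slow path).
    return {"id": user_id, "name": f"user-{user_id}"}

cache = TTLCache(ttl_seconds=30)

def get_user(user_id):
    # Cache-aside: check the cache first, fall back to the DB, then populate.
    key = f"user:{user_id}"
    user = cache.get(key)
    if user is None:
        user = slow_db_query(user_id)
        cache.set(key, user)
    return user
```

The first call for a given user pays the database cost; subsequent calls within the TTL are served from memory, which is exactly the non-transactional offloading these managed caches provide at scale.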
Category: Enterprise application services Description: Fully integrated cloud service providing communications, email, and document management in the cloud, available on a wide variety of devices. References: [AWS]: Amazon WorkMail, Amazon WorkDocs, Amazon Kendra (Sync and Index) [Azure]: Office 365 [Google]: G Suite Tags: #AmazonWorkDocs, #Office365, #GoogleGSuite Differences: G Suite document processing applications like Google Docs are far behind Office 365's popular Word and Excel software, but the G Suite user interface is intuitive, simple, and easy to navigate, while Office 365 can feel clunky by comparison.
Category: Management Description: A unified management console that simplifies building, deploying, and operating your cloud resources. References: [AWS]: AWS Management Console, Trusted Advisor, AWS Usage and Billing Report, AWS Application Discovery Service, Amazon EC2 Systems Manager, AWS Personal Health Dashboard, AWS Compute Optimizer (identify optimal AWS compute resources) [Azure]: Azure portal, Azure Advisor, Azure Billing API, Azure Migrate, Azure Monitor, Azure Resource Health [Google]: Google Cloud Console, Cost Management, Security Command Center, Stackdriver Tags: #AWSConsole, #AzurePortal, #GoogleCloudConsole, #TrustedAdvisor, #AzureMonitor, #SecurityCommandCenter Differences: AWS Console categorizes its Infrastructure as a Service offerings into Compute, Storage and Content Delivery Network (CDN), Database, and Networking to help businesses and individuals grow. Azure excels in the hybrid cloud space, allowing companies to integrate onsite servers with cloud offerings. Google has a strong offering in containers, since Google developed the Kubernetes standard that AWS and Azure now offer. GCP specializes in high-compute offerings like big data, analytics, and machine learning. It also offers considerable scale and load balancing: Google knows data centers and fast response times.
Build and connect intelligent bots that interact with your users using text/SMS, Skype, Teams, Slack, Office 365 mail, Twitter, and other popular services.
Enables both speech-to-text and text-to-speech capabilities. The Speech Services are the unification of speech-to-text, text-to-speech, and speech translation into a single Azure subscription. It’s easy to speech-enable your applications, tools, and devices with the Speech SDK, Speech Devices SDK, or REST APIs. Amazon Polly is a Text-to-Speech (TTS) service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice. With dozens of lifelike voices across a variety of languages, you can select the ideal voice and build speech-enabled applications that work in many different countries. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.
Computer Vision: Extract information from images to categorize and process visual data. Amazon Rekognition is a simple and easy to use API that can quickly analyze any image or video file stored in Amazon S3. Amazon Rekognition is always learning from new data, and we are continually adding new labels and facial recognition features to the service.
Face: Detect, identify, and analyze faces in photos.
The Virtual Assistant Template brings together a number of best practices we’ve identified through the building of conversational experiences and automates integration of components that we’ve found to be highly beneficial to Bot Framework developers.
Processes and moves data between different compute and storage services, as well as on-premises data sources at specified intervals. Create, schedule, orchestrate, and manage data pipelines.
Virtual servers allow users to deploy, manage, and maintain OS and server software. Instance types provide combinations of CPU/RAM. Users pay for what they use with the flexibility to change sizes.
Allows you to automatically change the number of VM instances. You define metrics and thresholds that determine whether the platform adds or removes instances.
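The scaling decision itself can be sketched as a target-tracking calculation. The function below is illustrative only: the metric, target value, and bounds are assumptions for the sketch, not any provider's actual API.

```python
def desired_instance_count(current_count, avg_cpu, target_cpu=50.0,
                           min_instances=1, max_instances=10):
    """Toy target-tracking autoscaler: resize the fleet so that average
    CPU utilization approaches target_cpu, clamped to the configured
    min/max instance bounds."""
    if avg_cpu <= 0:
        return min_instances
    # If the fleet is at 75% CPU against a 50% target, we need 1.5x capacity.
    desired = round(current_count * (avg_cpu / target_cpu))
    return max(min_instances, min(max_instances, desired))
```

For example, a 4-instance fleet averaging 75% CPU against a 50% target scales out to 6 instances, while the same fleet averaging 25% scales in to 2, never dropping below the configured minimum.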
Redeploy and extend your VMware-based enterprise workloads to Azure with Azure VMware Solution by CloudSimple. Keep using the VMware tools you already know to manage workloads on Azure without disrupting network, security, or data protection policies.
Azure Container Instances is the fastest and simplest way to run a container in Azure, without having to provision any virtual machines or adopt a higher-level orchestration service.
Deploy orchestrated containerized applications with Kubernetes. Simplify monitoring and cluster management through auto upgrades and a built-in operations console.
Fully managed service that enables developers to deploy microservices applications without managing virtual machines, storage, or networking. AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. App Mesh standardizes how your services communicate, giving you end-to-end visibility and ensuring high-availability for your applications.
Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers. AWS Lambda is an event-driven, serverless computing platform provided by Amazon as a part of the Amazon Web Services. It is a computing service that runs code in response to events and automatically manages the computing resources required by that code
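A minimal Lambda function follows the Python runtime's handler convention: a function that takes an event and a context. Invoking it locally is a reasonable way to see the shape of the model; the event fields used here are made up for illustration.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler (Python runtime convention:
    a function taking an event dict and a context object)."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally we can simulate an invocation; in AWS, the Lambda service
# calls the handler in response to an event source (API Gateway, S3, etc.).
response = handler({"name": "cloud"}, None)
```

The provider takes care of everything outside the handler: provisioning, scaling the number of concurrent executions with load, and billing only for the time the handler actually runs.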
Managed relational database service where resiliency, scale, and maintenance are primarily handled by the platform. Amazon Relational Database Service is a distributed relational database service by Amazon Web Services. It is a web service running “in the cloud” designed to simplify the setup, operation, and scaling of a relational database for use in applications. Administration processes like patching the database software, backing up databases and enabling point-in-time recovery are managed automatically. Scaling storage and compute resources can be performed by a single API call as AWS does not offer an ssh connection to RDS instances.
An in-memory–based, distributed caching service that provides a high-performance store typically used to offload non transactional work from a database. Amazon ElastiCache is a fully managed in-memory data store and cache service by Amazon Web Services. The service improves the performance of web applications by retrieving information from managed in-memory caches, instead of relying entirely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines: Memcached and Redis.
Migration of database schema and data from one database format to a specific database technology in the cloud. AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.
Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. AWS X-Ray is an application performance management service that enables a developer to analyze and debug applications in the Amazon Web Services (AWS) public cloud. A developer can use AWS X-Ray to visualize how a distributed application is performing during development or production, and across multiple AWS regions and accounts.
A cloud service for collaborating on code development. AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. AWS CodeCommit is a source code storage and version-control service for Amazon Web Services’ public cloud customers. CodeCommit was designed to help IT teams collaborate on software development, including continuous integration and application delivery.
Collection of tools for building, debugging, deploying, diagnosing, and managing multiplatform scalable apps and services. The AWS Developer Tools are designed to help you build software like Amazon. They facilitate practices such as continuous delivery and infrastructure as code for serverless, containers, and Amazon EC2.
Built on top of the native REST API across all cloud services, various programming language-specific wrappers provide easier ways to create solutions. The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
Configures and operates applications of all shapes and sizes, and provides templates to create and manage a collection of resources. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers.
Provides a way for users to automate the manual, long-running, error-prone, and frequently repeated IT tasks. AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.
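As a sketch, a minimal CloudFormation template describing a single versioned S3 bucket might look like the following; the logical resource name is illustrative, and omitting `BucketName` lets AWS generate a unique one.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal template provisioning a single versioned S3 bucket.
Resources:
  AppDataBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
Outputs:
  BucketName:
    Value: !Ref AppDataBucket
```

Re-running the same template against another account or region reproduces the same infrastructure, which is the repeatability the service is built around.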
Provides an isolated, private environment in the cloud. Users have control over their virtual networking environment, including selection of their own IP address range, creation of subnets, and configuration of route tables and network gateways.
Connects Azure virtual networks to other Azure virtual networks, or customer on-premises networks (Site To Site). Allows end users to connect to Azure services through VPN tunneling (Point To Site).
A service that hosts domain names, plus routes users to Internet applications, connects user requests to datacenters, manages traffic to apps, and improves app availability with automatic failover.
Application Gateway is a layer 7 load balancer. It supports SSL termination, cookie-based session affinity, and round robin for load-balancing traffic.
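Round-robin balancing, the strategy mentioned above, simply cycles through the backend pool in order. A toy sketch of the dispatch logic (the backend names are made up):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy round-robin dispatcher illustrating the load-balancing
    strategy: each request goes to the next backend in the pool."""
    def __init__(self, backends):
        self._pool = cycle(backends)

    def next_backend(self):
        return next(self._pool)

lb = RoundRobinBalancer(["backend-a", "backend-b", "backend-c"])
```

A real layer 7 balancer layers cookie-based session affinity and health checks on top of this rotation, pinning a client to one backend or skipping unhealthy ones.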
Azure Digital Twins is an IoT service that helps you create comprehensive models of physical environments. Create spatial intelligence graphs to model the relationships and interactions between people, places, and devices. Query data from a physical space rather than disparate sensors.
Provides analysis of cloud resource configuration and security so subscribers can ensure they’re making use of best practices and optimum configurations.
Allows users to securely control access to services and resources while offering data security and protection. Create and manage users and groups, and use permissions to allow and deny access to resources.
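IAM expresses such allow/deny decisions as JSON policy documents. A minimal, hypothetical read-only policy for one S3 bucket might look like this (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnlyBucketAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Attaching this policy to a user or group grants listing and object reads on that bucket and nothing else, since IAM denies anything not explicitly allowed.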
Role-based access control (RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.
Provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication that are fully compatible with Windows Server Active Directory.
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements.
Azure management groups provide a level of scope above subscriptions. You organize subscriptions into containers called “management groups” and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Management groups give you enterprise-grade management at a large scale, no matter what type of subscriptions you have.
Helps you protect and safeguard your data and meet your organizational security and compliance commitments.
Key Management Service: AWS KMS, CloudHSM | Azure Key Vault
Provides security solution and works with other services by providing a way to manage, create, and control encryption keys stored in hardware security modules (HSM).
Provides inbound protection for non-HTTP/S protocols, outbound network-level protection for all ports and protocols, and application-level protection for outbound HTTP/S.
An automated security assessment service that improves the security and compliance of applications. Automatically assess applications for vulnerabilities or deviations from best practices.
Object storage service, for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.
Provides a simple interface to create and configure file systems quickly, and share common files. Can be used with traditional protocols that access files over a network.
Easily join your distributed microservice architectures into a single global application using HTTP load balancing and path-based routing rules. Automate turning up new regions and scale-out with API-driven global actions, and independent fault-tolerance to your back end microservices in Azure—or anywhere.
Cloud technology to build distributed applications using out-of-the-box connectors to reduce integration challenges. Connect apps, data and devices on-premises or in the cloud.
Serverless technology for connecting apps, data and devices anywhere, whether on-premises or in the cloud for large ecosystems of SaaS and cloud-based connectors.
Azure Stack is a hybrid cloud platform that enables you to run Azure services in your company’s or service provider’s datacenter. As a developer, you can build apps on Azure Stack. You can then deploy them to either Azure Stack or Azure, or you can build truly hybrid apps that take advantage of connectivity between an Azure Stack cloud and Azure.
Basically, it all comes down to what your organizational needs are and whether there’s a particular area that’s especially important to your business (e.g., serverless, or integration with Microsoft applications).
Some of the main things it comes down to are compute options, pricing, and purchasing options.
Here’s a brief comparison of the compute option features across cloud providers:
Here’s an example of a few instances’ costs (all are Linux OS):
Each provider offers a variety of options to lower costs from the listed On-Demand prices. These can fall under reservations, spot and preemptible instances and contracts.
Both AWS and Azure offer a way for customers to purchase compute capacity in advance in exchange for a discount: AWS Reserved Instances and Azure Reserved Virtual Machine Instances. There are a few interesting variations between the instances across the cloud providers which could affect which is more appealing to a business.
Another discounting mechanism is the idea of spot instances in AWS and low-priority VMs in Azure. These options allow users to purchase unused capacity for a steep discount.
With AWS and Azure, enterprise contracts are available. These are typically aimed at enterprise customers, and encourage large companies to commit to specific levels of usage and spend in exchange for an across-the-board discount – for example, AWS EDPs and Azure Enterprise Agreements.
You can read more about the differences between AWS and Azure, to help decide which your business should use, in this blog post.
Cloud computing is the new big thing in Information Technology. Every business will sooner or later adopt it because of hosting cost benefits, scalability, and more.
This blog outlines the Pros and Cons of Cloud Computing, Pros and Cons of Cloud Technology, Faqs, Facts, Questions and Answers Dump about cloud computing.
Cloud computing is an information technology paradigm that enables ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a public utility. Simply put, cloud computing is the delivery of computing services including servers, storage, databases, networking, software, analytics, and intelligence—over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale. You typically pay only for cloud services you use, helping you lower your operating costs, run your infrastructure more efficiently, and scale as your business needs change.
Stop spending money on running and maintaining data centers
Go global in minutes
Cost effective & Time saving: Cloud computing eliminates the capital expense of buying hardware and software and setting up and running on-site datacenters; the racks of servers, the round-the-clock electricity for power and cooling, and the IT experts for managing the infrastructure.
The ability to pay only for cloud services you use, helping you lower your operating costs.
Powerful server capabilities and Performance: The biggest cloud computing services run on a worldwide network of secure datacenters, which are regularly upgraded to the latest generation of fast and efficient computing hardware. This offers several benefits over a single corporate datacenter, including reduced network latency for applications and greater economies of scale.
Powerful and scalable server capabilities: the ability to scale elastically. That means delivering the right amount of IT resources (for example, more or less computing power, storage, or bandwidth) right when they’re needed, and from the right geographic location.
SaaS ( Software as a service). Software as a service is a method for delivering software applications over the Internet, on demand and typically on a subscription basis. With SaaS, cloud providers host and manage the software application and underlying infrastructure, and handle any maintenance, like software upgrades and security patching. Users connect to the application over the Internet, usually with a web browser on their phone, tablet, or PC.
PaaS ( Platform as a service). Platform as a service refers to cloud computing services that supply an on-demand environment for developing, testing, delivering, and managing software applications. PaaS is designed to make it easier for developers to quickly create web or mobile apps, without worrying about setting up or managing the underlying infrastructure of servers, storage, network, and databases needed for development.
IaaS ( Infrastructure as a service). The most basic category of cloud computing services. With IaaS, you rent IT infrastructure—servers and virtual machines (VMs), storage, networks, operating systems—from a cloud provider on a pay-as-you-go basis
Serverless: Running complex Applications without a single server. Overlapping with PaaS, serverless computing focuses on building app functionality without spending time continually managing the servers and infrastructure required to do so. The cloud provider handles the setup, capacity planning, and server management for you. Serverless architectures are highly scalable and event-driven, only using resources when a specific function or trigger occurs.
Infrastructure provisioning as code: helps you recreate the same infrastructure by re-running the same code in a few clicks.
Automatic and Reliable Data backup and storage of data: Cloud computing makes data backup, disaster recovery, and business continuity easier and less expensive because data can be mirrored at multiple redundant sites on the cloud provider’s network.
Increase Productivity: On-site datacenters typically require a lot of “racking and stacking”—hardware setup, software patching, and other time-consuming IT management chores. Cloud computing removes the need for many of these tasks, so IT teams can spend time on achieving more important business goals.
Security: Many cloud providers offer a broad set of policies, technologies, and controls that strengthen your security posture overall, helping protect your data, apps, and infrastructure from potential threats.
Speed: Most cloud computing services are provided self-service and on demand, so even vast amounts of computing resources can be provisioned in minutes, typically with just a few mouse clicks, giving businesses a lot of flexibility and taking the pressure off capacity planning. In a cloud computing environment, new IT resources are only a click away. This means the time until those resources are available to your developers is reduced from weeks to minutes. As a result, the organization experiences a dramatic increase in agility, because the cost and time it takes to experiment and develop is lower.
Go global in minutes Easily deploy your application in multiple regions around the world with just a few clicks. This means that you can provide a lower latency and better experience for your customers simply and at minimal cost.
Privacy: Cloud computing poses privacy concerns because the service provider can access the data that is in the cloud at any time. It could accidentally or deliberately alter or delete information. Many cloud providers can share information with third parties if necessary for purposes of law and order without a warrant. That is permitted in their privacy policies, which users must agree to before they start using cloud services.
Security: According to the Cloud Security Alliance, the top three threats in the cloud are Insecure Interfaces and APIs, Data Loss & Leakage, and Hardware Failure, which accounted for 29%, 25%, and 10% of all cloud security outages respectively. Together, these form shared technology vulnerabilities.
Ownership of Data: There is the problem of legal ownership of the data (If a user stores some data in the cloud, can the cloud provider profit from it?). Many Terms of Service agreements are silent on the question of ownership.
Limited Customization Options: Cloud computing is cheaper because of economies of scale, and, like any outsourced task, you tend to get what you get. A restaurant with a limited menu is cheaper than a personal chef who can cook anything you want.
Downtime: Technical outages are inevitable and occur sometimes when cloud service providers (CSPs) become overwhelmed in the process of serving their clients. This may result in temporary business suspension.
Security of stored data and data in transit may be a concern when storing sensitive data at a cloud storage provider[10]
Users with specific records-keeping requirements, such as public agencies that must retain electronic records according to statute, may encounter complications with using cloud computing and storage. For instance, the U.S. Department of Defense designated the Defense Information Systems Agency (DISA) to maintain a list of records management products that meet all of the records retention, personally identifiable information (PII), and security (Information Assurance; IA) requirements
Cloud storage is a rich resource for both hackers and national security agencies. Because the cloud holds data from many different users and organizations, hackers see it as a very valuable target.
Piracy and copyright infringement may be enabled by sites that permit filesharing. For example, the CodexCloud ebook storage site has faced litigation from the owners of the intellectual property uploaded and shared there, as have the GrooveShark and YouTube sites it has been compared to.
Public clouds: A cloud is called a “public cloud” when the services are rendered over a network that is open for public use. Public clouds are owned and operated by third-party cloud service providers, which deliver their computing resources, like servers and storage, over the Internet. Microsoft Azure is an example of a public cloud. With a public cloud, all hardware, software, and other supporting infrastructure is owned and managed by the cloud provider. You access these services and manage your account using a web browser. For infrastructure as a service (IaaS) and platform as a service (PaaS), Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) hold a commanding position among the many cloud companies.
Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third party, and hosted either internally or externally. A private cloud refers to cloud computing resources used exclusively by a single business or organization. A private cloud can be physically located on the company’s on-site datacenter. Some companies also pay third-party service providers to host their private cloud. A private cloud is one in which the services and infrastructure are maintained on a private network.
Hybrid cloud is a composition of a public cloud and a private environment, such as a private cloud or on-premise resources, that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect collocation, managed and/or dedicated services with cloud resources. Hybrid clouds combine public and private clouds, bound together by technology that allows data and applications to be shared between them. By allowing data and applications to move between private and public clouds, a hybrid cloud gives your business greater flexibility, more deployment options, and helps optimize your existing infrastructure, security, and compliance.
Community Cloud: A community cloud in computing is a collaborative effort in which infrastructure is shared between several organizations from a specific community with common concerns, whether managed internally or by a third-party and hosted internally or externally. This is controlled and used by a group of organizations that have shared interest. The costs are spread over fewer users than a public cloud, so only some of the cost savings potential of cloud computing are realized.
What do the top three public cloud providers (AWS, Azure, Google Cloud) do to insure against customer data loss?
As a cloud user, cloud customer, or company storing customer data in the cloud, you probably have a lot of personal or private data hosted on various cloud infrastructure. Losing that data, or having it accessed by hackers or an unauthorized third party, can be very harmful both financially and emotionally to you or your customers. Cloud user or customer insurance can protect you against lost or stolen data. Practically, cloud computing insurance is a cyber liability policy that covers web-based services. Before looking for customer insurance in the cloud, you need to clarify “What data should the insurance cover, and under which governing laws?” and “What data can be considered a loss?”. The good news is that as cloud adoption increases in the insurance industry, insurers have the opportunity to better understand their operations models and to implement tailored insurance solutions for the cloud.
Cloud Data loss can happen in the following forms:
First Party Losses: losses where the cloud provider incurs damages. Those types of losses include:
Electrical Malfunctions and Power Surges in data centers
Natural Disasters
Network Failures
Cyber Extortion
Each of the above exposures to loss would result in direct damages to the insured, or first-party loss.
Third-Party Losses: damages that occur to customers outside of the cloud provider. These types of losses include:
Breach of Privacy
Misuse of Private Personal Information
Defamation or Slander
Transmission of Malicious Content
The above exposures could result in a company being held liable for the damages caused to others (liability).
Cyber insurance is a form of insurance for businesses and individuals against internet-based risks. The most common risk insured against is data breaches. It also covers losses from network security breaches, theft of intellectual property, and loss of privacy.
Data compromise coverage insures a commercial entity when there is a data breach, theft, or unauthorized disclosure of personal information. Cyber liability thus covers both the expenses to notify affected individuals of a data breach and the expenses to make the insured whole for their own damages incurred.
However, a more effective risk-management solution might be loss control rather than loss financing. If you encrypt your data at rest, adopt a process of automatic regular backups, and geographically distribute those backups, then you have effectively minimized the potential cost of a loss.
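That loss-control approach can be sketched in a few lines: checksum the data, replicate copies to geographically separate stores, and verify each replica against its recorded digest. The region names and in-memory "stores" below are purely illustrative stand-ins for real backup targets:

```python
import hashlib

def checksum(data):
    """SHA-256 digest used to verify backup integrity after replication."""
    return hashlib.sha256(data).hexdigest()

def replicate(data, regions):
    """Copy the backup, along with its digest, into one store per region."""
    digest = checksum(data)
    return {r: {"data": data, "sha256": digest} for r in regions}

def verify(stores):
    """True only if every replica still matches its recorded digest."""
    return all(checksum(s["data"]) == s["sha256"] for s in stores.values())

# Three geographically separate copies (region names are illustrative).
backups = replicate(b"customer-records-2024-01",
                    ["us-east-1", "eu-west-1", "ap-southeast-2"])
```

If `verify` ever returns False, you restore the damaged replica from one of the intact copies instead of filing a claim.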
Cyber insurance is not yet as standardized as many other forms of commercial insurance. Therefore, breadth of coverage and pricing can vary widely.
Access: As a customer, you maintain full control of your content and responsibility for configuring access to AWS services and resources. We provide an advanced set of access, encryption, and logging features to help you do this effectively (e.g., AWS Identity and Access Management, AWS Organizations and AWS CloudTrail). We provide APIs for you to configure access control permissions for any of the services you develop or deploy in an AWS environment. We do not access or use your content for any purpose without your consent. We never use your content or derive information from it for marketing or advertising.
Storage: You choose the AWS Region(s) in which your content is stored and the type of storage. You can replicate and back up your content in more than one AWS Region. We will not move or replicate your content outside of your chosen AWS Region(s) without your consent, except as legally required and as necessary to maintain the AWS services.
Security: You choose how your content is secured. We offer you strong encryption for your content in transit and at rest, and we provide you with the option to manage your own encryption keys. These features include:
Data encryption capabilities available in AWS storage and database services, such as Amazon Elastic Block Store, Amazon Simple Storage Service, Amazon Relational Database Service, and Amazon Redshift.
Flexible key management options, including AWS Key Management Service (KMS), allow customers to choose whether to have AWS manage their encryption keys or to retain complete control over their keys.
AWS customers can employ Server-Side Encryption (SSE) with Amazon S3-Managed Keys (SSE-S3), SSE with AWS KMS-Managed Keys (SSE-KMS), or SSE with Customer-Provided Encryption Keys (SSE-C).
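The three SSE modes map to different request parameters. As an illustrative sketch (not an official helper), this function builds the extra parameter set a boto3 `put_object` call would take for each mode:

```python
def sse_params(mode, kms_key_id=None):
    """Extra PutObject parameters for each server-side encryption mode,
    in the shape boto3's s3_client.put_object(...) accepts."""
    if mode == "SSE-S3":
        return {"ServerSideEncryption": "AES256"}
    if mode == "SSE-KMS":
        params = {"ServerSideEncryption": "aws:kms"}
        if kms_key_id:  # omit the key ID to use the AWS-managed KMS key
            params["SSEKMSKeyId"] = kms_key_id
        return params
    if mode == "SSE-C":
        # With SSE-C the caller supplies (and must safeguard) the raw key,
        # passed separately as SSECustomerKey on every request.
        return {"SSECustomerAlgorithm": "AES256"}
    raise ValueError(f"unknown encryption mode: {mode}")
```

For example, `s3.put_object(Bucket="b", Key="k", Body=data, **sse_params("SSE-KMS", "alias/app-key"))` would encrypt the object under a customer-managed KMS key (the alias here is made up).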
Disclosure of customer content: We do not disclose customer information unless we’re required to do so to comply with a legally valid and binding order. Unless prohibited from doing so or there is clear indication of illegal conduct in connection with the use of Amazon products or services, Amazon notifies customers before disclosing content information.
Security Assurance: We have developed a security assurance program that uses best practices for global privacy and data protection to help you operate securely within AWS, and to make the best use of our security control environment. These security protections and control processes are independently validated by multiple third-party assessments.
Property and Casualty Insurance: Property insurance covers the physical location of the business and its contents from things like fire, theft, flood, and earthquakes—although read the terms carefully to make sure they include everything you need. Casualty insurance, on the other hand, covers the operation of the business, but the two are usually grouped together in policies.
Auto Insurance: Auto insurance protects you against financial loss if you have an accident. It is a contract between you and the insurance company.
Liability Insurance: Liability insurance provides protection against claims resulting from injuries and property damage.
Business Interruption Insurance: Business interruption insurance can make up for lost cash flow and profits incurred because of an event that has interrupted your normal business operations.
Health and Disability Insurance: Health insurance provides health coverage for you and your employees. This insurance covers your employees for the expenses and loss of income caused by non-work-related injuries, illnesses, and disabilities, as well as death from any cause.
Life Insurance: Life and disability insurance covers your business in the event of the death or disability of key owners.
Cyber Insurance: Cover Data loss, destruction of data, privacy breach, Denial of Service Attack (DOS), Network failure, Transmission of Malicious Content, Misuse of personal or private information, etc.
Crime & Employee Dishonesty Insurance: To cover your business for fraudulent acts committed by your employees, e.g. theft or embezzlement of money, securities, and other business-owned property and for burglary, theft, and robbery of cash and other representations of money, e.g. money orders, postage stamps, travelers checks, and readily convertible securities, e.g. bearer bonds;
Mandatory Workers Compensation Insurance: To cover your employees for injuries and illnesses sustained during the course of employment. This would include medical expenses and loss of income due to a work-related disability;
Transportation/Inland & Ocean Marine Insurance: To pay for loss of damage to property you own or are responsible for while it is being transported or shipped to or from customers, manufacturers, processors, assemblers, warehouses, etc. by air, ship, or land vehicles either domestically or internationally.
Umbrella Liability Insurance: To provide an additional layer of liability insurance over your primary automobile liability, general liability, employers liability, and, if applicable, watercraft or aircraft liability policies;
Directors & Officers Liability Insurance: To defend your business and its directors or officers against allegations that they mismanaged the business in some way which caused financial loss to your clients (and/or others) and pay money damages in a court trial or settlement;
Condo Unit Owners Personal Insurance & Landlord / Rental Property Insurance: Covers expenses that arise from a loss within your property. Whether you are living in your unit or not, it is your responsibility to ensure that your personal assets and liabilities are adequately protected by your own personal insurance policy. This coverage includes all the content items brought into a unit or stored in a storage locker or on the premises, such as furnishings, electronics, clothing, etc. Most policies will also cover personal property while it is temporarily off premises, for example while on vacation.
Landlord property coverage is to protect the property that you own within your rental unit, which includes but is not limited to, appliances, window coverings, or if you rent out your unit fully furnished, then all of that property that is yours.
Rental property insurance coverage allows you to protect your revenue source. Your property is your responsibility, and if your property gets damaged by an insured peril and your tenant can’t live there for a month or two (or more), you can purchase insurance to replace that rental income for the period of time your property is uninhabitable.
Do online businesses need insurance?
All businesses need insurance. Here are some suggestions:
Property Insurance: To cover your owned, non-owned, and leased business property (contents, buildings if applicable, computers, office supplies, and any other property that you need to operate your business) for such perils as fire, windstorm, smoke damage, water damage, and theft.
EDP Insurance: To cover your computer hardware and software for such perils as mechanical breakdown and electrical injury;
Cyber Property and Liability Insurance: To cover your business for its activities on the Internet. Cyber property coverages apply to losses sustained by your company directly; an example is damage to your company’s electronic data files caused by a hacker or security breach. Cyber liability coverages apply to claims against your company by people who have been injured as a result of your actions or failure to act. For instance, a client sues you for negligence after his personal data (e.g. credit card numbers or confidential information) is stolen from your computer system and released online.
Loss of Income (Business Interruption) Insurance: To cover your business for the loss of income you would sustain because it was damaged by a covered peril under your property insurance, e.g. fire, windstorm, smoke damage, and theft;
Thinking of purchasing cyber insurance? Make sure the policy you choose covers more than paying ransomware. Paying cyber criminals should be a last resort. Your policy should include cleaning & rebuilding current systems, hiring experts, & purchasing new protections.
The purpose of cyber security is to protect all forms of digital data: personal information (SSNs, credit card information, etc.), proprietary information (Facebook algorithms, Tesla vehicle designs, etc.), and other forms of digital data.
Cloud computing insurance is meant to protect a cloud provider. The implementation of a system and the preservation of important information comes with risks. If anything goes wrong, such as an outage at a critical time that results in business interruption, your client can hold you responsible and seek damages. Cloud insurance can not only provide compensation to your client as a result of a claim against you, but can also cover your legal defense and lost income.
What are the Top 100 AWS Solutions Architect Associate Certification Exam Questions and Answers Dump SAA-C03?
AWS Certified Solutions Architects are responsible for designing, deploying, and managing AWS cloud applications. The AWS Certified Solutions Architect – Associate exam validates an examinee’s ability to effectively demonstrate knowledge of how to design and deploy secure and robust applications on AWS technologies. AWS Solutions Architect Associate training provides an overview of key AWS services, security, architecture, pricing, and support.
The AWS Certified Solutions Architect – Associate (SAA-C03) examination is a natural stepping stone toward the AWS Certified Solutions Architect – Professional level, although AWS no longer requires an associate-level certification as a prerequisite. Successful completion of this examination can lead to a salary raise or promotion for those in cloud roles. Below is the top-100 AWS Solutions Architect Associate exam prep facts and summaries questions and answers dump.
With average increases in salary of over 25% for certified individuals, you’re going to be in a much better position to secure your dream job or promotion if you earn your AWS Certified Solutions Architect Associate certification. You’ll also develop strong hands-on skills by doing the guided hands-on lab exercises in our course which will set you up for successfully performing in a solutions architect role.
We recommend that you allocate at least 60 minutes of study time per day and you will then be able to complete the certification within 5 weeks (including taking the actual exam). Study times can vary based on your experience with AWS and how much time you have each day, with some students passing their exams much faster and others taking a little longer. Get our eBook here.
The AWS Solutions Architect Associate exam is an associate-level exam that requires a solid understanding of the AWS platform and a broad range of AWS services. The AWS Certified Solutions Architect Associate exam questions are scenario-based questions and can be challenging. Despite this, the AWS Solutions Architect Associate is often earned by beginners to cloud computing.
The AWS Certified Solutions Architect – Associate (SAA-C03) exam is intended for individuals who perform in a solutions architect role. The exam validates a candidate’s ability to use AWS technologies to design solutions based on the AWS Well-Architected Framework.
The SAA-C03 exam is a multiple-choice examination with 65 questions. You can take the exam at a testing center or via an online proctored exam from your home or office. You have 130 minutes to complete the exam, and the passing mark is 720 out of 1000 points (72%). If English is not your first language, you can request an accommodation when booking your exam that qualifies you for an additional 30-minute extension.
The exam also validates a candidate’s ability to complete the following tasks:
• Design solutions that incorporate AWS services to meet current business requirements and future projected needs
• Design architectures that are secure, resilient, high-performing, and cost-optimized
• Review existing solutions and determine improvements
Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate them for future use as scored questions. Unscored questions are not identified on the exam.
Target candidate description
The target candidate should have at least 1 year of hands-on experience designing cloud solutions that use AWS services.
All AWS certification exam results are reported as a score from 100 to 1000. Your score shows how you performed on the examination as a whole and whether or not you passed. The passing score for the AWS Certified Solutions Architect Associate is 720 (72%).
Yes, you can now take all AWS Certification exams with online proctoring using Pearson Vue or PSI. Here’s a detailed guide on how to book your AWS exam.
There are no prerequisites for taking AWS exams. You do not need any programming knowledge or experience working with AWS. Everything you need to know is included in our courses. We do recommend that you have a basic understanding of fundamental computing concepts such as compute, storage, networking, and databases.
AWS Certified Solutions Architects are IT professionals who design cloud solutions with AWS services to meet given technical requirements. An AWS Solutions Architect Associate is expected to design and implement distributed systems on AWS that are high-performing, scalable, secure and cost optimized.
Domain 1: Design Secure Architectures
This exam domain is focused on securing your architectures on AWS and comprises 30% of the exam. Task statements include:
Task Statement 1: Design secure access to AWS resources.
Knowledge of:
• Access controls and management across multiple accounts
• AWS federated access and identity services (for example, AWS Identity and Access Management [IAM], AWS Single Sign-On [AWS SSO])
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• AWS security best practices (for example, the principle of least privilege)
• The AWS shared responsibility model
Skills in:
• Applying AWS security best practices to IAM users and root users (for example, multi-factor authentication [MFA])
• Designing a flexible authorization model that includes IAM users, groups, roles, and policies
• Designing a role-based access control strategy (for example, AWS Security Token Service [AWS STS], role switching, cross-account access)
• Designing a security strategy for multiple AWS accounts (for example, AWS Control Tower, service control policies [SCPs])
• Determining the appropriate use of resource policies for AWS services
• Determining when to federate a directory service with IAM roles
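Cross-account access (role switching) rests on two policy documents: a trust policy on the role and a permissions policy on the user who assumes it. A minimal sketch of both documents follows; the account ID and role ARN in the usage example are made up:

```python
def trust_policy(trusted_account_id):
    """Trust policy on the role in the *trusting* account: names which
    account's principals are allowed to assume the role."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{trusted_account_id}:root"},
            "Action": "sts:AssumeRole",
        }],
    }

def assume_role_policy(role_arn):
    """Permissions policy attached to the user in the *trusted* account,
    allowing that user to switch to (assume) the role."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": role_arn,
        }],
    }
```

Together, the two halves complete the handshake: the role trusts account 111122223333, and a user in that account is explicitly allowed to call `sts:AssumeRole` on the role.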
Task Statement 2: Design secure workloads and applications.
Knowledge of:
• Application configuration and credentials security
• AWS service endpoints
• Control ports, protocols, and network traffic on AWS
• Secure application access
• Security services with appropriate use cases (for example, Amazon Cognito, Amazon GuardDuty, Amazon Macie)
• Threat vectors external to AWS (for example, DDoS, SQL injection)
Skills in:
• Designing VPC architectures with security components (for example, security groups, route tables, network ACLs, NAT gateways)
• Determining network segmentation strategies (for example, using public subnets and private subnets)
• Integrating AWS services to secure applications (for example, AWS Shield, AWS WAF, AWS SSO, AWS Secrets Manager)
• Securing external network connections to and from the AWS Cloud (for example, VPN, AWS Direct Connect)
Task Statement 3: Determine appropriate data security controls.
Knowledge of:
• Data access and governance
• Data recovery
• Data retention and classification
• Encryption and appropriate key management
Skills in:
• Aligning AWS technologies to meet compliance requirements
• Encrypting data at rest (for example, AWS Key Management Service [AWS KMS])
• Encrypting data in transit (for example, AWS Certificate Manager [ACM] using TLS)
• Implementing access policies for encryption keys
• Implementing data backups and replications
• Implementing policies for data access, lifecycle, and protection
• Rotating encryption keys and renewing certificates
Domain 2: Design Resilient Architectures
This exam domain is focused on designing resilient architectures on AWS and comprises 26% of the exam. Task statements include:
Task Statement 1: Design scalable and loosely coupled architectures.
Knowledge of:
• API creation and management (for example, Amazon API Gateway, REST API)
• AWS managed services with appropriate use cases (for example, AWS Transfer Family, Amazon Simple Queue Service [Amazon SQS], Secrets Manager)
• Caching strategies
• Design principles for microservices (for example, stateless workloads compared with stateful workloads)
• Event-driven architectures
• Horizontal scaling and vertical scaling
• How to appropriately use edge accelerators (for example, content delivery network [CDN])
• How to migrate applications into containers
• Load balancing concepts (for example, Application Load Balancer)
• Multi-tier architectures
• Queuing and messaging concepts (for example, publish/subscribe)
• Serverless technologies and patterns (for example, AWS Fargate, AWS Lambda)
• Storage types with associated characteristics (for example, object, file, block)
• The orchestration of containers (for example, Amazon Elastic Container Service [Amazon ECS], Amazon Elastic Kubernetes Service [Amazon EKS])
• When to use read replicas
• Workflow orchestration (for example, AWS Step Functions)
Skills in:
• Designing event-driven, microservice, and/or multi-tier architectures based on requirements
• Determining scaling strategies for components used in an architecture design
• Determining the AWS services required to achieve loose coupling based on requirements
• Determining when to use containers
• Determining when to use serverless technologies and patterns
• Recommending appropriate compute, storage, networking, and database technologies based on requirements
• Using purpose-built AWS services for workloads
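The queuing and publish/subscribe concepts above can be illustrated with a minimal in-memory broker, a stand-in for SNS fanning out into SQS queues (topic and queue names are invented for the sketch):

```python
from collections import defaultdict, deque

class Broker:
    """Minimal in-memory publish/subscribe broker: producers publish to a
    topic, and each subscribed queue receives its own copy of the message,
    so consumers scale and fail independently of producers."""
    def __init__(self):
        self.queues = defaultdict(deque)
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, queue_name):
        self.subscribers[topic].append(queue_name)

    def publish(self, topic, message):
        # Fan out: every subscribed queue gets the message.
        for q in self.subscribers[topic]:
            self.queues[q].append(message)

    def poll(self, queue_name):
        """Pop the next message, or None if the queue is empty."""
        return self.queues[queue_name].popleft() if self.queues[queue_name] else None

broker = Broker()
broker.subscribe("orders", "billing")
broker.subscribe("orders", "shipping")
broker.publish("orders", {"order_id": 42})
```

The producer never knows (or cares) that two consumers exist; that indirection is exactly the loose coupling the task statement asks you to design for.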
Task Statement 2: Design highly available and/or fault-tolerant architectures.
Knowledge of:
• AWS global infrastructure (for example, Availability Zones, AWS Regions, Amazon Route 53)
• AWS managed services with appropriate use cases (for example, Amazon Comprehend, Amazon Polly)
• Basic networking concepts (for example, route tables)
• Disaster recovery (DR) strategies (for example, backup and restore, pilot light, warm standby, active-active failover, recovery point objective [RPO], recovery time objective [RTO])
• Distributed design patterns
• Failover strategies
• Immutable infrastructure
• Load balancing concepts (for example, Application Load Balancer)
• Proxy concepts (for example, Amazon RDS Proxy)
• Service quotas and throttling (for example, how to configure the service quotas for a workload in a standby environment)
• Storage options and characteristics (for example, durability, replication)
• Workload visibility (for example, AWS X-Ray)
Skills in:
• Determining automation strategies to ensure infrastructure integrity
• Determining the AWS services required to provide a highly available and/or fault-tolerant architecture across AWS Regions or Availability Zones
• Identifying metrics based on business requirements to deliver a highly available solution
• Implementing designs to mitigate single points of failure
• Implementing strategies to ensure the durability and availability of data (for example, backups)
• Selecting an appropriate DR strategy to meet business requirements
• Using AWS services that improve the reliability of legacy applications and applications not built for the cloud (for example, when application changes are not possible)
• Using purpose-built AWS services for workloads
Domain 3: Design High-Performing Architectures
This exam domain is focused on designing high-performing architectures on AWS and comprises 24% of the exam. Task statements include:
Task Statement 1: Determine high-performing and/or scalable storage solutions.
Knowledge of:
• Hybrid storage solutions to meet business requirements
• Storage services with appropriate use cases (for example, Amazon S3, Amazon Elastic File System [Amazon EFS], Amazon Elastic Block Store [Amazon EBS])
• Storage types with associated characteristics (for example, object, file, block)
Skills in:
• Determining storage services and configurations that meet performance demands
• Determining storage services that can scale to accommodate future needs
Task Statement 2: Design high-performing and elastic compute solutions.
Knowledge of:
• AWS compute services with appropriate use cases (for example, AWS Batch, Amazon EMR, Fargate)
• Distributed computing concepts supported by AWS global infrastructure and edge services
• Queuing and messaging concepts (for example, publish/subscribe)
• Scalability capabilities with appropriate use cases (for example, Amazon EC2 Auto Scaling, AWS Auto Scaling)
• Serverless technologies and patterns (for example, Lambda, Fargate)
• The orchestration of containers (for example, Amazon ECS, Amazon EKS)
Skills in:
• Decoupling workloads so that components can scale independently
• Identifying metrics and conditions to perform scaling actions
• Selecting the appropriate compute options and features (for example, EC2 instance types) to meet business requirements
• Selecting the appropriate resource type and size (for example, the amount of Lambda memory) to meet business requirements
Task Statement 3: Determine high-performing database solutions.
Knowledge of:
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• Caching strategies and services (for example, Amazon ElastiCache)
• Data access patterns (for example, read-intensive compared with write-intensive)
• Database capacity planning (for example, capacity units, instance types, Provisioned IOPS)
• Database connections and proxies
• Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations)
• Database replication (for example, read replicas)
• Database types and services (for example, serverless, relational compared with non-relational, in-memory)
Skills in:
• Configuring read replicas to meet business requirements
• Designing database architectures
• Determining an appropriate database engine (for example, MySQL compared with PostgreSQL)
• Determining an appropriate database type (for example, Amazon Aurora, Amazon DynamoDB)
• Integrating caching to meet business requirements
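Cache-aside (lazy loading) is the caching strategy most often paired with a service like ElastiCache: check the cache first, fall back to the database on a miss, then populate the cache with a TTL. A minimal pure-Python sketch, where a plain dict stands in for the real database:

```python
import time

class CacheAside:
    """Cache-aside (lazy loading): serve from the cache when fresh,
    otherwise load from the backing store and cache the result."""
    def __init__(self, db, ttl_seconds=300):
        self.db = db              # stand-in for the primary database
        self.ttl = ttl_seconds
        self.cache = {}           # key -> (value, expires_at)
        self.misses = 0

    def get(self, key):
        entry = self.cache.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]                      # cache hit
        self.misses += 1
        value = self.db[key]                     # cache miss: hit the DB
        self.cache[key] = (value, time.monotonic() + self.ttl)
        return value

store = CacheAside(db={"user:1": "Alice"})
store.get("user:1")   # first call misses and loads from the DB
store.get("user:1")   # second call is served from the cache
```

The trade-off the exam probes: cache-aside only caches what is actually read (saving memory) but pays a DB round trip on every miss and can serve stale data until the TTL expires.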
Task Statement 4: Determine high-performing and/or scalable network architectures.
Knowledge of:
• Edge networking services with appropriate use cases (for example, Amazon CloudFront, AWS Global Accelerator)
• How to design network architecture (for example, subnet tiers, routing, IP addressing)
• Load balancing concepts (for example, Application Load Balancer)
• Network connection options (for example, AWS VPN, Direct Connect, AWS PrivateLink)
Skills in:
• Creating a network topology for various architectures (for example, global, hybrid, multi-tier)
• Determining network configurations that can scale to accommodate future needs
• Determining the appropriate placement of resources to meet business requirements
• Selecting the appropriate load balancing strategy
Task Statement 5: Determine high-performing data ingestion and transformation solutions.
Knowledge of:
• Data analytics and visualization services with appropriate use cases (for example, Amazon Athena, AWS Lake Formation, Amazon QuickSight)
• Data ingestion patterns (for example, frequency)
• Data transfer services with appropriate use cases (for example, AWS DataSync, AWS Storage Gateway)
• Data transformation services with appropriate use cases (for example, AWS Glue)
• Secure access to ingestion access points
• Sizes and speeds needed to meet business requirements
• Streaming data services with appropriate use cases (for example, Amazon Kinesis)
Skills in:
• Building and securing data lakes
• Designing data streaming architectures
• Designing data transfer solutions
• Implementing visualization strategies
• Selecting appropriate compute options for data processing (for example, Amazon EMR)
• Selecting appropriate configurations for ingestion
• Transforming data between formats (for example, .csv to .parquet)
Domain 4: Design Cost-Optimized Architectures
This exam domain is focused on optimizing solutions for cost-effectiveness on AWS and comprises 20% of the exam. Task statements include:
Task Statement 1: Design cost-optimized storage solutions.
Knowledge of:
• Access options (for example, an S3 bucket with Requester Pays object storage)
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, AWS Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• AWS storage services with appropriate use cases (for example, Amazon FSx, Amazon EFS, Amazon S3, Amazon EBS)
• Backup strategies
• Block storage options (for example, hard disk drive [HDD] volume types, solid state drive [SSD] volume types)
• Data lifecycles
• Hybrid storage options (for example, DataSync, Transfer Family, Storage Gateway)
• Storage access patterns
• Storage tiering (for example, cold tiering for object storage)
• Storage types with associated characteristics (for example, object, file, block)
Skills in:
• Designing appropriate storage strategies (for example, batch uploads to Amazon S3 compared with individual uploads)
• Determining the correct storage size for a workload
• Determining the lowest cost method of transferring data for a workload to AWS storage
• Determining when storage auto scaling is required
• Managing S3 object lifecycles
• Selecting the appropriate backup and/or archival solution
• Selecting the appropriate service for data migration to storage services
• Selecting the appropriate storage tier
• Selecting the correct data lifecycle for storage
• Selecting the most cost-effective storage service for a workload
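Managing S3 object lifecycles comes down to a rules document that transitions objects to colder tiers and eventually expires them. A hedged sketch of building one rule in the shape boto3's `put_bucket_lifecycle_configuration` expects (the prefix and day counts are illustrative):

```python
def lifecycle_rule(prefix, ia_after_days, glacier_after_days, expire_after_days):
    """One S3 lifecycle rule: transition matching objects to Standard-IA,
    then to Glacier, then expire them after the given number of days."""
    return {
        "ID": f"tier-{prefix or 'all'}",
        "Status": "Enabled",
        "Filter": {"Prefix": prefix},
        "Transitions": [
            {"Days": ia_after_days, "StorageClass": "STANDARD_IA"},
            {"Days": glacier_after_days, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": expire_after_days},
    }

# Example: cool log objects after 30 days, archive after 90, delete after a year.
config = {"Rules": [lifecycle_rule("logs/", 30, 90, 365)]}
```

In a real deployment this dict would be passed as `LifecycleConfiguration=config` to the S3 client; the point of the sketch is the shape of the policy, not the call itself.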
Task Statement 2: Design cost-optimized compute solutions.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• AWS purchasing options (for example, Spot Instances, Reserved Instances, Savings Plans)
• Distributed compute strategies (for example, edge processing)
• Hybrid compute options (for example, AWS Outposts, AWS Snowball Edge)
• Instance types, families, and sizes (for example, memory optimized, compute optimized, virtualization)
• Optimization of compute utilization (for example, containers, serverless computing, microservices)
• Scaling strategies (for example, auto scaling, hibernation)
Skills in:
• Determining an appropriate load balancing strategy (for example, Application Load Balancer [Layer 7] compared with Network Load Balancer [Layer 4] compared with Gateway Load Balancer)
• Determining appropriate scaling methods and strategies for elastic workloads (for example, horizontal compared with vertical, EC2 hibernation)
• Determining cost-effective AWS compute services with appropriate use cases (for example, Lambda, Amazon EC2, Fargate)
• Determining the required availability for different classes of workloads (for example, production workloads, non-production workloads)
• Selecting the appropriate instance family for a workload
• Selecting the appropriate instance size for a workload
Task Statement 3: Design cost-optimized database solutions.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• Caching strategies
• Data retention policies
• Database capacity planning (for example, capacity units)
• Database connections and proxies
• Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations)
• Database replication (for example, read replicas)
• Database types and services (for example, relational compared with non-relational, Aurora, DynamoDB)
Skills in:
• Designing appropriate backup and retention policies (for example, snapshot frequency)
• Determining an appropriate database engine (for example, MySQL compared with PostgreSQL)
• Determining cost-effective AWS database services with appropriate use cases (for example, DynamoDB compared with Amazon RDS, serverless)
• Determining cost-effective AWS database types (for example, time series format, columnar format)
• Migrating database schemas and data to different locations and/or different database engines
Task Statement 4: Design cost-optimized network architectures.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• Load balancing concepts (for example, Application Load Balancer)
• NAT gateways (for example, NAT instance costs compared with NAT gateway costs)
• Network connectivity (for example, private lines, dedicated lines, VPNs)
• Network routing, topology, and peering (for example, AWS Transit Gateway, VPC peering)
• Network services with appropriate use cases (for example, DNS)
Skills in:
• Configuring appropriate NAT gateway types for a network (for example, a single shared NAT gateway compared with NAT gateways for each Availability Zone)
• Configuring appropriate network connections (for example, Direct Connect compared with VPN compared with internet)
• Configuring appropriate network routes to minimize network transfer costs (for example, Region to Region, Availability Zone to Availability Zone, private to public, Global Accelerator, VPC endpoints)
• Determining strategic needs for content delivery networks (CDNs) and edge caching
• Reviewing existing workloads for network optimizations
• Selecting an appropriate throttling strategy
• Selecting the appropriate bandwidth allocation for a network device (for example, a single VPN compared with multiple VPNs, Direct Connect speed)
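The shared-versus-per-AZ NAT gateway trade-off is easy to put numbers on. A rough sketch follows; the rates are illustrative placeholders, not current pricing, and per-AZ gateways also avoid cross-AZ data-transfer charges that this simple model does not include:

```python
def nat_monthly_cost(num_gateways, gb_processed,
                     hourly_rate=0.045, per_gb_rate=0.045, hours=730):
    """Rough monthly NAT gateway cost: an hourly charge per gateway plus
    a per-GB data-processing charge (rates are illustrative only)."""
    return num_gateways * hourly_rate * hours + gb_processed * per_gb_rate

shared = nat_monthly_cost(1, 1000)   # one shared gateway for the whole VPC
per_az = nat_monthly_cost(3, 1000)   # one gateway in each of three AZs
```

Under these placeholder rates the per-AZ design costs roughly twice the hourly charge more per month; whether that premium is worth it depends on your availability requirements and the cross-AZ traffic the shared design would incur.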
Which key tools, technologies, and concepts might be covered on the exam?
The following is a non-exhaustive list of the tools and technologies that could appear on the exam. This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam. The general tools and technologies in this list appear in no particular order. AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
• Compute
• Cost management
• Database
• Disaster recovery
• High performance
• Management and governance
• Microservices and component decoupling
• Migration and data transfer
• Networking, connectivity, and content delivery
• Resiliency
• Security
• Serverless and event-driven design principles
• Storage
AWS Services and Features There are lots of new services and feature updates in scope for the new AWS Certified Solutions Architect Associate certification! Here’s a list of some of the new services that will be in scope for the new version of the exam:
Analytics: • Amazon Athena • AWS Data Exchange • AWS Data Pipeline • Amazon EMR • AWS Glue • Amazon Kinesis • AWS Lake Formation • Amazon Managed Streaming for Apache Kafka (Amazon MSK) • Amazon OpenSearch Service (Amazon Elasticsearch Service) • Amazon QuickSight • Amazon Redshift
Management and Governance: • AWS Auto Scaling • AWS CloudFormation • AWS CloudTrail • Amazon CloudWatch • AWS Command Line Interface (AWS CLI) • AWS Compute Optimizer • AWS Config • AWS Control Tower • AWS License Manager • Amazon Managed Grafana • Amazon Managed Service for Prometheus • AWS Management Console • AWS Organizations • AWS Personal Health Dashboard • AWS Proton • AWS Service Catalog • AWS Systems Manager • AWS Trusted Advisor • AWS Well-Architected Tool
Media Services: • Amazon Elastic Transcoder • Amazon Kinesis Video Streams
Migration and Transfer: • AWS Application Discovery Service • AWS Application Migration Service (CloudEndure Migration) • AWS Database Migration Service (AWS DMS) • AWS DataSync • AWS Migration Hub • AWS Server Migration Service (AWS SMS) • AWS Snow Family • AWS Transfer Family
Out-of-scope AWS services and features The following is a non-exhaustive list of AWS services and features that are not covered on the exam. These services and features do not represent every AWS offering that is excluded from the exam content.
AWS solutions architect associate exam prep facts and summaries questions and answers dump – Solution Architecture Definition 1:
Solution architecture is the practice of defining and describing the architecture of a system delivered in the context of a specific solution; as such, it may encompass a description of an entire system or only specific parts of it. The definition of a solution architecture is typically led by a solution architect.
Solution Architecture Definition 2:
The AWS Certified Solutions Architect – Associate examination is intended for individuals who perform a solutions architect role and have one or more years of hands-on experience designing available, cost-efficient, fault-tolerant, and scalable distributed systems on AWS.
If you are running an application in a production environment and must add a new EBS volume with data from a snapshot, what could you do to avoid degraded performance during the volume’s first use? Initialize the data by reading each storage block on the volume. Volumes created from an EBS snapshot must be initialized. Initialization occurs the first time a storage block on the volume is read, and performance can be degraded by up to 50% until it completes. You can avoid this impact in production environments by pre-warming the volume, reading all of its blocks before putting it into service.
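The pre-warming step can be scripted. Below is a minimal sketch of sequentially reading every block of a device; the device path is hypothetical, and in practice AWS documents doing this with `dd` or `fio` on the instance:

```python
def initialize_volume(device_path, block_size=1024 * 1024):
    """Read every block of a device sequentially so that each block
    is hydrated from the snapshot before first production use."""
    blocks_read = 0
    with open(device_path, "rb") as dev:
        # Each read forces the corresponding blocks to be pulled down.
        while dev.read(block_size):
            blocks_read += 1
    return blocks_read
```

On an instance you might call `initialize_volume("/dev/xvdf")` once after attaching the restored volume and before directing production traffic at it.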
If you are running a legacy application that has hard-coded static IP addresses and it is running on an EC2 instance, what is the best failover solution that allows you to keep the same IP address on a new instance? Elastic IP addresses (EIPs) are designed to be attached/detached and moved from one EC2 instance to another. They are a great solution for keeping a static IP address and moving it to a new instance if the current instance fails. This will reduce or eliminate any downtime users may experience.
Which feature of Intel processors helps to encrypt data without significant impact on performance? AES-NI (Advanced Encryption Standard New Instructions).
You can mount to EFS from which two of the following?
On-prem servers running Linux
EC2 instances running Linux
EFS is not compatible with Windows operating systems.
When a file is encrypted and the data is stored rather than in transit, it is known as encryption at rest. What is an example of encryption at rest? Server-side encryption of objects in Amazon S3, or an encrypted EBS volume.
When would vertical scaling be necessary? When an application is built entirely into one source code, otherwise known as a monolithic application.
Fault-Tolerance allows for continuous operation throughout a failure, which can lead to a low Recovery Time Objective. RPO vs RTO
High-Availability means automating tasks so that an instance will quickly recover, which can lead to a low Recovery Time Objective. RPO vs. RTO
Frequent backups reduce the time between the last backup and recovery point, otherwise known as the Recovery Point Objective. RPO vs. RTO
Which represents the difference between Fault-Tolerance and High-Availability? High-Availability means the system will quickly recover from a failure event, and Fault-Tolerance means the system will maintain operations during a failure.
From a security perspective, what is a principal? An anonymous user falls under the definition of a principal. A principal can be an anonymous user acting on a system.
An authenticated user falls under the definition of a principal. A principal can be an authenticated user acting on a system.
What are the two ways of handling session data for an application session state? Stateless and stateful.
It is the customer’s responsibility to patch the operating system on an EC2 instance.
In designing an environment, what four main points should a Solutions Architect keep in mind? Cost-efficient, secure, application session state, undifferentiated heavy lifting: these four main points should be the framework when designing an environment.
In the context of disaster recovery, what does RPO stand for? RPO is the abbreviation for Recovery Point Objective.
What are the benefits of horizontal scaling?
Vertical scaling can be costly while horizontal scaling is cheaper.
Horizontal scaling suffers from none of the size limitations of vertical scaling.
Having horizontal scaling means you can easily route traffic to another instance of a server.
Top AWS solutions architect associate exam prep facts and summaries questions and answers dump – Quizzes
Q1: A Solutions Architect is designing a critical business application with a relational database that runs on an EC2 instance. It requires a single EBS volume that can support up to 16,000 IOPS. Which Amazon EBS volume type can meet the performance requirements of this application?
A. EBS Provisioned IOPS SSD
B. EBS Throughput Optimized HDD
C. EBS General Purpose SSD
D. EBS Cold HDD
Answer: A. EBS Provisioned IOPS SSD provides sustained, consistent performance for mission-critical, low-latency workloads and can be provisioned with the required 16,000 IOPS. EBS General Purpose SSD (gp2) can burst up to 3,000 IOPS and has a baseline of 3 IOPS per GiB, reaching its 16,000 IOPS maximum only at 5,334 GiB. The two HDD options are lower-cost, high-throughput volumes.
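The gp2 baseline rule is simple enough to express as a formula: 3 IOPS per GiB, floored at 100 and capped at 16,000. A quick sketch (the function name is illustrative):

```python
def gp2_baseline_iops(size_gib):
    """Baseline IOPS for a gp2 volume: 3 IOPS per GiB,
    with a floor of 100 IOPS and a cap of 16,000 IOPS."""
    return min(max(100, 3 * size_gib), 16000)

# A 100 GiB gp2 volume has a 300 IOPS baseline; the baseline
# only reaches the 16,000 IOPS cap at 5,334 GiB.
```

This makes it clear why sustained 16,000 IOPS on a modest-sized volume calls for Provisioned IOPS rather than gp2.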
Q2: An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk. Which solution will resolve the security concern?
A. Access the data through an Internet Gateway.
B. Access the data through a VPN connection.
C. Access the data through a NAT Gateway.
D. Access the data through a VPC endpoint for Amazon S3
Answer: D. VPC endpoints for Amazon S3 provide secure connections to S3 buckets that do not require a gateway or NAT instances. NAT Gateways and Internet Gateways still route traffic over the Internet to the public endpoint for Amazon S3. There is no way to connect to Amazon S3 via VPN.
Q3: An organization is building an Amazon Redshift cluster in their shared services VPC. The cluster will host sensitive data. How can the organization control which networks can access the cluster?
A. Run the cluster in a different VPC and connect through VPC peering.
B. Create a database user inside the Amazon Redshift cluster only for users on the network.
C. Define a cluster security group for the cluster that allows access from the allowed networks.
D. Only allow access to networks that connect with the shared services network via VPN.
Answer: C. A security group can grant access to traffic from the allowed networks via the CIDR range for each network. VPC peering and VPN are connectivity services and cannot control traffic for security. Amazon Redshift user accounts address authentication and authorization at the user level and have no control over network traffic.
Q4: A web application allows customers to upload orders to an S3 bucket. The resulting Amazon S3 events trigger a Lambda function that inserts a message to an SQS queue. A single EC2 instance reads messages from the queue, processes them, and stores them in a DynamoDB table partitioned by unique order ID. Next month, traffic is expected to increase by a factor of 10, and a Solutions Architect is reviewing the architecture for possible scaling problems. Which component is MOST likely to need re-architecting to be able to scale to accommodate the new traffic?
A. Lambda function
B. SQS queue
C. EC2 instance
D. DynamoDB table
Answer: C. A single EC2 instance will not scale and is a single point of failure in the architecture. A much better solution would be to have EC2 instances in an Auto Scaling group across two Availability Zones read messages from the queue. The other responses are all managed services that can be configured to scale or will scale automatically.
Q5: An application requires a highly available relational database with an initial storage capacity of 8 TB. The database will grow by 8 GB every day. To support expected traffic, at least eight read replicas will be required to handle database reads. Which option will meet these requirements?
A. DynamoDB
B. Amazon S3
C. Amazon Aurora
D. Amazon Redshift
Answer: C. Amazon Aurora is a relational database that will automatically scale to accommodate data growth. Amazon Redshift does not support read replicas and will not automatically scale. DynamoDB is a NoSQL service, not a relational database. Amazon S3 is object storage, not a relational database.
C. Divide your file system into multiple smaller file systems.
D. Provision higher IOPS for your EFS.
Answer: B. Amazon EFS now allows you to instantly provision the throughput required for your applications independent of the amount of data stored in your file system. This allows you to optimize throughput for your application’s performance needs.
Q7: If you are designing an application that requires fast (10–25 Gbps), low-latency connections between EC2 instances, what EC2 feature should you use?
A. Snapshots
B. Instance store volumes
C. Placement groups
D. IOPS provisioned instances.
Answer: C. A cluster placement group is a logical grouping of EC2 instances within one Availability Zone with fast (up to 25 Gbps) connections between them. This feature is used for applications that need extremely low-latency connections between instances.
Q8: A Solution Architect is designing an online shopping application running in a VPC on EC2 instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application tier must read and write data to a customer managed database cluster. There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet.
Which VPC design meets these requirements?
A. Public subnets for both the application tier and the database cluster
B. Public subnets for the application tier, and private subnets for the database cluster
C. Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster
D. Public subnets for the application tier, and private subnets for the database cluster and NAT Gateway
Answer: C. The online application must be in public subnets to allow access from clients’ browsers. The database cluster must be in private subnets to meet the requirement that there be no access from the Internet. A NAT Gateway is required to give the database cluster the ability to download patches from the Internet. NAT Gateways must be deployed in public subnets.
Q9: What command should you run on a running instance if you want to view its user data (that is used at launch)?
A. curl http://254.169.254.169/latest/user-data
B. curl http://localhost/latest/meta-data/bootstrap
C. curl http://localhost/latest/user-data
D. curl http://169.254.169.254/latest/user-data
Answer: D. To retrieve user data from within a running instance, use the following URI: http://169.254.169.254/latest/user-data
Q10: A company is developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? (Select TWO.)
A. CloudWatch
B. DynamoDB
C. Elastic Load Balancing
D. ElastiCache
E. Storage Gateway
Answer: B and D. Both DynamoDB and ElastiCache provide high-performance storage of key-value pairs. CloudWatch and ELB are not storage services. Storage Gateway is a storage service, but it is a hybrid storage service that enables on-premises applications to use cloud storage.
A stateful web service will keep track of the “state” of a client’s connection and data over several requests. So, for example, the client might log in, select a user’s account data, update their address, attach a photo, and change the status flag, then disconnect.
In a stateless web service, the server doesn’t keep any information from one request to the next. The client needs to do its work in a series of simple transactions, and the client has to keep track of what happens between requests. So in the above example, the client needs to do each operation separately: connect and update the address, disconnect. Connect and attach the photo, disconnect. Connect and change the status flag, disconnect.
A stateless web service is much simpler to implement, and can handle a greater volume of clients.
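The difference can be sketched in a few lines. In the stateful version the server remembers the session between calls; in the stateless version every request carries (or references in a shared store) everything the server needs. The names below are illustrative, not a real framework:

```python
# Stateful: the server keeps per-client session data between requests.
# If this server dies or traffic shifts to another server, the state is lost.
class StatefulServer:
    def __init__(self):
        self.sessions = {}

    def update_address(self, client_id, address):
        self.sessions.setdefault(client_id, {})["address"] = address
        return self.sessions[client_id]


# Stateless: each request carries the full state; any server can handle it,
# and the state itself lives with the caller or in a shared store
# (e.g. DynamoDB or ElastiCache, as in the answer above).
def update_address(state, address):
    new_state = dict(state)
    new_state["address"] = address
    return new_state  # the caller/store keeps this, not the server
```

Because no server holds the session, the stateless version scales horizontally behind a load balancer without sticky sessions.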
Q11: From a security perspective, what is a principal?
A. An identity
B. An anonymous user
C. An authenticated user
D. A resource
Answer: B and C.
An anonymous user falls under the definition of a principal: a principal can be an anonymous user acting on a system. An authenticated user also falls under the definition: a principal can be an authenticated user acting on a system.
Q12: What are the characteristics of a tiered application?
A. All three application layers are on the same instance
B. The presentation tier is on a separate instance from the logic layer
C. None of the tiers can be cloned
D. The logic layer is on a separate instance from the data layer
E. Additional machines can be added to help the application by implementing horizontal scaling
F. Incapable of horizontal scaling
Answer: B, D, and E.
In a tiered application, the presentation layer is separate from the logic layer; the logic layer is separate from the data layer. Since parts of the application are isolated, they can scale horizontally.
Q17: You lead a team to develop a new online game application in AWS EC2. The application will have a large number of users globally. For a great user experience, this application requires very low network latency and jitter. If the network speed is not fast enough, you will lose customers. Which tool would you choose to improve the application performance? (Select TWO.)
A. AWS VPN
B. AWS Global Accelerator
C. Direct Connect
D. API Gateway
E. CloudFront
Answer: B and E.
Notes: This online game application has global users and needs low latency. Both CloudFront and AWS Global Accelerator can speed up the delivery of content over the AWS global network. AWS Global Accelerator works at the network layer and directs traffic to optimal endpoints; CloudFront delivers content through edge locations, routing each user to the edge location with the lowest latency.
Q18: A company has a media processing application deployed in a local data center. Its file storage is built on a Microsoft Windows file server. The application and file server need to be migrated to AWS. You want to quickly set up the file server in AWS and the application code should continue working to access the file systems. Which method should you choose to create the file server?
A. Create a Windows File Server from Amazon WorkSpaces.
B. Configure a high performance Windows File System in Amazon EFS.
C. Create a Windows File Server in Amazon FSx.
D. Configure a secure enterprise storage through Amazon WorkDocs.
Answer: C.
Notes: In this question, a Windows file server is required in AWS and the application should continue to work unchanged. Amazon FSx for Windows File Server is the correct answer as it is backed by a fully native Windows file system.
Q19: You are developing an application using AWS SDK to get objects from AWS S3. The objects have big sizes and sometimes there are failures when getting objects especially when the network connectivity is poor. You want to get a specific range of bytes in a single GET request and retrieve the whole object in parts. Which method can achieve this?
A. Enable multipart upload in the AWS SDK.
B. Use the “Range” HTTP header in a GET request to download the specified range bytes of an object.
C. Reduce the retry requests and enlarge the retry timeouts through AWS SDK when fetching S3 objects.
D. Retrieve the whole S3 object through a single GET operation.
Answer: B.
Notes: With byte-range fetches, a client can establish concurrent connections to Amazon S3 to fetch different parts of the same object. Through the “Range” header in the HTTP GET request, a specified portion of the object can be downloaded instead of the whole object, and failed parts can be retried individually.
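The byte ranges themselves are simple arithmetic. Here is a sketch of planning the Range header values for fetching an object in fixed-size parts (the function name is illustrative; the header format is the standard HTTP one that S3 accepts):

```python
def plan_byte_ranges(object_size, part_size):
    """Return the HTTP Range header values needed to fetch an object
    of object_size bytes in chunks of at most part_size bytes."""
    ranges = []
    start = 0
    while start < object_size:
        # Range offsets are inclusive on both ends.
        end = min(start + part_size, object_size) - 1
        ranges.append(f"bytes={start}-{end}")
        start = end + 1
    return ranges
```

With the AWS SDK, each value would be passed as the Range parameter of a separate GET request, and the parts reassembled in order on the client.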
Q20: You have an application hosted in an Auto Scaling group and an application load balancer distributes traffic to the ASG. You want to add a scaling policy that keeps the average aggregate CPU utilization of the Auto Scaling group to be 60 percent. The capacity of the Auto Scaling group should increase or decrease based on this target value. Which scaling policy does it belong to?
A. Target tracking scaling policy.
B. Step scaling policy.
C. Simple scaling policy.
D. Scheduled scaling policy.
Answer: A.
Notes: A target tracking scaling policy can be applied to track the ASGAverageCPUUtilization metric. In an Auto Scaling group, you add a target tracking scaling policy with a target value, and the group scales out or in automatically to keep the metric at that target.
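Conceptually, target tracking adjusts capacity in proportion to how far the metric is from the target; the calculation below is a simplification of what the service does internally (the real implementation also uses CloudWatch alarms, cooldowns, and min/max group bounds):

```python
import math

def desired_capacity(current_capacity, metric_value, target_value):
    """Approximate target-tracking scaling: scale the current capacity
    by the ratio of the observed metric (e.g. average CPU %) to the
    target (e.g. 60%), never dropping below one instance."""
    return max(1, math.ceil(current_capacity * metric_value / target_value))
```

For example, a group of 4 instances averaging 90% CPU against a 60% target would be scaled out toward 6 instances; at 30% CPU it would be scaled in toward 2.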
Q21: You need to launch a number of EC2 instances to run Cassandra. There are large distributed and replicated workloads in Cassandra and you plan to launch instances using EC2 placement groups. The traffic should be distributed evenly across several partitions and each partition should contain multiple instances. Which strategy would you use when launching the placement groups?
A. Cluster placement strategy
B. Spread placement strategy.
C. Partition placement strategy.
D. Network placement strategy.
Answer: C.
Notes: Placement groups offer the placement strategies Cluster, Partition, and Spread. With the Partition placement strategy, instances in one partition do not share underlying hardware with other partitions. This strategy is suitable for large distributed and replicated workloads such as Cassandra. For details, refer to the placement groups documentation on partition limitations.
Q22: To improve the network performance, you launch a C5 EC2 Amazon Linux instance and enable enhanced networking by modifying the instance attribute with "aws ec2 modify-instance-attribute --instance-id instance_id --ena-support". Which mechanism does the EC2 instance use to enhance the networking capabilities?
A. Intel 82599 Virtual Function (VF) interface.
B. Elastic Fabric Adapter (EFA).
C. Elastic Network Adapter (ENA).
D. Elastic Network Interface (ENI).
Answer: C
Notes: Enhanced networking has two mechanisms: the Elastic Network Adapter (ENA) and the Intel 82599 Virtual Function (VF) interface. ENA is the one enabled with --ena-support, and it is what current-generation instance types such as C5 use.
Q23: You work for an online retailer where any downtime at all can cause a significant loss of revenue. You have architected your application to be deployed on an Auto Scaling Group of EC2 instances behind a load balancer. You have configured and deployed these resources using a CloudFormation template. The Auto Scaling Group is configured with default settings and a simple CPU utilization scaling policy. You have also set up multiple Availability Zones for high availability. The load balancer does health checks against an HTML file generated by a script. When you begin performing load testing on your application, you notice in CloudWatch that the load balancer is not sending traffic to one of your EC2 instances. What could be the problem?
A. The EC2 instance has failed the load balancer health check.
B. The instance has not been registered with CloudWatch.
C. The EC2 instance has failed EC2 status checks.
D. You are load testing at a moderate traffic level and not all instances are needed.
Answer: A.
Notes: The load balancer routes incoming requests only to healthy instances. The EC2 instance may have passed its status checks and be considered healthy by the Auto Scaling Group, but the ELB will not use it if the ELB health check has not been met. The ELB health check has a default of 30 seconds between checks and a default of 3 checks before making a decision, so the instance could be up yet unused for at least 90 seconds before being marked as failed. In CloudWatch, where the issue was noticed, it would therefore appear to be a healthy EC2 instance with no traffic, which is exactly what was observed.
References: ELB HealthCheck
Q24: Your company is using a hybrid configuration because there are some legacy applications which are not easily converted and migrated to AWS. And with this configuration comes a typical scenario where the legacy apps must maintain the same private IP address and MAC address. You are attempting to convert the application to the cloud and have configured an EC2 instance to house the application. What you are currently testing is removing the ENI from the legacy instance and attaching it to the EC2 instance. You want to attempt a cold attach. What does this mean?
A. Attach ENI when it’s stopped.
B. Attach ENI before the public IP address is assigned.
C. Attach ENI to an instance when it’s running.
D. Attach ENI when the instance is being launched.
Answer: D. A cold attach means attaching the ENI while the instance is being launched.
Notes: Best practices for configuring network interfaces: You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach). You can detach secondary network interfaces when the instance is running or stopped; however, you can’t detach the primary network interface. You can move a network interface from one instance to another if the instances are in the same Availability Zone and VPC but in different subnets. When launching an instance using the CLI, API, or an SDK, you can specify the primary network interface and additional network interfaces. Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance; a warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly. Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance, and if you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing. If possible, use a secondary private IPv4 address on the primary network interface instead.
Q25: Your company has recently converted to a hybrid cloud environment and will slowly be migrating to a fully AWS cloud environment. The AWS side is in need of some steps to prepare for disaster recovery. A disaster recovery plan needs drawn up and disaster recovery drills need to be performed for compliance reasons. The company wants to establish Recovery Time and Recovery Point Objectives. The RTO and RPO can be pretty relaxed. The main point is to have a plan in place, with as much cost savings as possible. Which AWS disaster recovery pattern will best meet these requirements?
A. Warm Standby
B. Backup and restore
C. Multi Site
D. Pilot Light
Answer: B
Notes: Backup and Restore: This is the least expensive option and cost is the overriding factor.
Q26: An international travel company has an application which provides travel information and alerts to users all over the world. The application is hosted on groups of EC2 instances in Auto Scaling Groups in multiple AWS Regions. There are also load balancers routing traffic to these instances. In two countries, Ireland and Australia, there are compliance rules in place that dictate users connect to the application in eu-west-1 and ap-southeast-1. Which service can you use to meet this requirement?
A. Use Route 53 weighted routing.
B. Use Route 53 geolocation routing.
C. Configure CloudFront and the users will be routed to the nearest edge location.
D. Configure the load balancers to route users to the proper region.
Answer: B.
Notes: Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from. For example, you might want all queries from Europe to be routed to an ELB in the Frankfurt Region. When you use geolocation routing, you can localize your content and present some or all of your website in the language of your users. You can also use geolocation routing to restrict distribution of content to only the locations in which you have distribution rights. Another possible use is for balancing load across endpoints in a predictable, easy-to-manage way, so that each user location is consistently routed to the same endpoint.
Q26: You have taken over management of several instances in the company AWS environment. You want to quickly review scripts used to bootstrap the instances at runtime. A URL command can be used to do this. What can you append to the URL http://169.254.169.254/latest/ to retrieve this data?
A. user-data/
B. instance-demographic-data/
C. meta-data/
D. instance-data/
Answer: A
Notes: When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives.
Q27: A software company has created an application to capture service requests from users and also enhancement requests. The application is deployed on an Auto Scaling group of EC2 instances fronted by an Application Load Balancer. The Auto Scaling group has scaled to maximum capacity, but there are still requests being lost. The cost of these instances is becoming an issue. What step can the company take to ensure requests aren’t lost?
A. Use larger instances in the Auto Scaling group.
B. Use spot instances to save money.
C. Use an SQS queue with the Auto Scaling group to capture all requests.
D. Use a Network Load Balancer instead for faster throughput.
Answer: C.
Notes: There are some scenarios where you might think about scaling in response to activity in an Amazon SQS queue. For example, suppose that you have a web app that lets users upload images and use them online. In this scenario, each image requires resizing and encoding before it can be published. The app runs on EC2 instances in an Auto Scaling group, and it’s configured to handle your typical upload rates. Unhealthy instances are terminated and replaced to maintain current instance levels at all times. The app places the raw bitmap data of the images in an SQS queue for processing. It processes the images and then publishes the processed images where they can be viewed by users. This architecture works well if the number of image uploads doesn’t vary over time; but if the upload rate changes, you should consider using dynamic scaling to scale the capacity of your Auto Scaling group based on the queue backlog, so that requests wait in the queue instead of being lost.
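The pattern AWS recommends for this is scaling on backlog per instance: divide the queue’s ApproximateNumberOfMessagesVisible by the number of running instances and compare it with how many messages one instance can work through in the target time. A sketch (function name illustrative):

```python
import math

def instances_needed(queue_depth, msgs_per_instance):
    """How many instances are needed to drain the current SQS backlog,
    given that each instance can process msgs_per_instance messages
    within the acceptable latency window."""
    return max(1, math.ceil(queue_depth / msgs_per_instance))
```

For example, a backlog of 1,000 messages with instances that each handle 100 in the window implies scaling toward 10 instances; the Auto Scaling group then shrinks again as the backlog drains.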
Q28: A company has an auto scaling group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. The company has a very aggressive Recovery Time Objective (RTO) in case of disaster. How long will a failover typically complete?
A. Under 10 minutes
B. Within an hour
C. Almost instantly
D. One to two minutes
Answer: D
Notes: What happens during Multi-AZ failover and how long does it take? Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention. When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary. We encourage you to follow best practices and implement database connection retry at the application layer. Failovers, as defined by the interval between the detection of the failure on the primary and the resumption of transactions on the standby, typically complete within one to two minutes. Failover time can also be affected by whether large uncommitted transactions must be recovered; the use of adequately large instance types is recommended with Multi-AZ for best results. AWS also recommends the use of Provisioned IOPS with Multi-AZ instances for fast, predictable, and consistent throughput performance.
Q29: You have two EC2 instances running in the same VPC, but in different subnets. You are removing the secondary ENI from an EC2 instance and attaching it to another EC2 instance. You want this to be fast and with limited disruption. So you want to attach the ENI to the EC2 instance when it’s running. What is this called?
Answer: This is called a hot attach: attaching a network interface to an instance while it is running. (Attaching when the instance is stopped is a warm attach, and attaching while the instance is being launched is a cold attach; the same network-interface best practices described under Q24 apply here.)
Q30: You suspect that one of the AWS services your company is using has gone down. How can you check on the status of this service?
A. AWS Trusted Advisor
B. Amazon Inspector
C. AWS Personal Health Dashboard
D. AWS Organizations
Answer: C
Notes: AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view of the performance and availability of the AWS services underlying your AWS resources. The dashboard displays relevant and timely information to help you manage events in progress, and provides proactive notification to help you plan for scheduled activities. With Personal Health Dashboard, alerts are triggered by changes in the health of AWS resources, giving you event visibility and guidance to help quickly diagnose and resolve issues.
Q31: You have configured an Auto Scaling Group of EC2 instances fronted by an Application Load Balancer and backed by an RDS database. You want to begin monitoring the EC2 instances using CloudWatch metrics. Which metric is not readily available out of the box?
Answer: Memory utilization.
Notes: Memory utilization is not available as an out-of-the-box metric in CloudWatch. You can, however, collect memory metrics when you configure a custom metric for CloudWatch.
Types of custom metrics that you can set up include memory utilization, disk swap utilization, disk space utilization, page file utilization, and log collection.
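As a hedged sketch of how a memory metric could be published as a custom metric: the dict below mirrors the parameter shape of boto3's CloudWatch `put_metric_data` call, but the `Custom/EC2` namespace and the dimension values are illustrative assumptions, and no network call is made here:

```python
# Sketch: build the payload for publishing memory utilization as a custom
# CloudWatch metric. The namespace and dimension values are hypothetical.
def build_memory_metric(instance_id: str, mem_used_percent: float) -> dict:
    return {
        "Namespace": "Custom/EC2",  # custom namespace (assumption)
        "MetricData": [
            {
                "MetricName": "MemoryUtilization",
                "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
                "Unit": "Percent",
                "Value": mem_used_percent,
            }
        ],
    }

payload = build_memory_metric("i-0123456789abcdef0", 72.5)
# With AWS credentials configured, you would then publish it with:
#   boto3.client("cloudwatch").put_metric_data(**payload)
```

In practice the CloudWatch agent can collect and publish these metrics for you; the sketch just shows what a hand-rolled custom metric looks like.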
Q32: Several instances you are creating have a specific data requirement. The requirement states that the data on the root device needs to persist independently from the lifetime of the instance. After considering AWS storage options, which is the simplest way to meet these requirements?
A. Store your root device data on Amazon EBS.
B. Store the data on the local instance store.
C. Create a cron job to migrate the data to S3.
D. Send the data to S3 using S3 lifecycle rules.
Answer: A
Notes: By using Amazon EBS, data on the root device will persist independently from the lifetime of the instance. This enables you to stop and restart the instance at a subsequent time, which is similar to shutting down your laptop and restarting it when you need it again.
Q33: A company has an Auto Scaling Group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. What will happen to preserve high availability if the primary database fails?
A. A Lambda function kicks off a CloudFormation template to deploy a backup database.
B. The CNAME is switched from the primary db instance to the secondary.
C. Route 53 points the CNAME to the secondary database instance.
D. The Elastic IP address for the primary database is moved to the secondary database.
Answer: B
Notes: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention. When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary.
Q34: After several issues with your application and unplanned downtime, your recommendation to migrate your application to AWS is approved. You have set up high availability on the front end with a load balancer and an Auto Scaling Group. What step can you take with your database to configure high-availability and ensure minimal downtime (under five minutes)?
A. Create a read replica.
B. Enable Multi-AZ failover on the database.
C. Take frequent snapshots of your database.
D. Create your database using CloudFormation and save the template for reuse.
Answer: B
Notes: In the event of a planned or unplanned outage of your DB instance, Amazon RDS automatically switches to a standby replica in another Availability Zone if you have enabled Multi-AZ. The time it takes for the failover to complete depends on the database activity and other conditions at the time the primary DB instance became unavailable. Failover times are typically 60–120 seconds; however, large transactions or a lengthy recovery process can increase failover time. When the failover is complete, it can take additional time for the RDS console to reflect the new Availability Zone. Note that large transactions could make it hard to get back up within five minutes, but this is clearly the best of the available choices for meeting the requirement. We must move through our questions on the exam quickly, but always evaluate all the answers for the best possible solution.
Q35: A new startup is considering the advantages of using DynamoDB versus a traditional relational database in AWS RDS. The NoSQL nature of DynamoDB presents a small learning curve to the team members who all have experience with traditional databases. The company will have multiple databases, and the decision will be made on a case-by-case basis. Which of the following use cases would favour DynamoDB? Select two.
Answers: Storing user session data and storing metadata for Amazon S3 objects.
Notes: DynamoDB is a NoSQL database that supports key-value and document data structures. A key-value store is a database service that provides support for storing, querying, and updating collections of objects that are identified using a key and values that contain the actual content being stored. Meanwhile, a document data store provides support for storing, querying, and updating items in a document format such as JSON, XML, and HTML. DynamoDB's fast and predictable performance characteristics make it a great match for handling session data. Plus, since it's a fully-managed NoSQL database service, you avoid all the work of maintaining and operating a separate session store.
Storing metadata for Amazon S3 objects is correct because Amazon DynamoDB stores structured data indexed by primary key and allows low-latency read and write access to items ranging from 1 byte up to 400 KB. Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB. In order to optimize your costs across AWS services, large objects or infrequently accessed data sets should be stored in Amazon S3, while smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in Amazon DynamoDB.
Q36: You have been tasked with designing a strategy for backing up EBS volumes attached to an instance-store-backed EC2 instance. You have been asked for an executive summary on your design, and the executive summary should include an answer to the question, “What can an EBS volume do when snapshotting the volume is in progress”?
A. The volume can be used normally while the snapshot is in progress.
B. The volume can only accommodate writes while a snapshot is in progress.
C. The volume can not be used while a snapshot is in progress.
D. The volume can only accommodate reads while a snapshot is in progress.
Answer: A
Notes: You can create a point-in-time snapshot of an EBS volume and use it as a baseline for new volumes or for data backup. If you make periodic snapshots of a volume, the snapshots are incremental; the new snapshot saves only the blocks that have changed since your last snapshot. Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume.
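The incremental behaviour described above can be sketched in a few lines of Python; block numbers and contents here are purely hypothetical:

```python
# Illustrates why EBS snapshots are incremental: a new snapshot only needs
# to store blocks whose content changed since the previous snapshot.
def incremental_snapshot(volume_blocks: dict, prior_snapshot: dict) -> dict:
    """Return only the blocks that differ from the prior snapshot."""
    return {
        block: data
        for block, data in volume_blocks.items()
        if prior_snapshot.get(block) != data
    }

volume = {0: "aaa", 1: "bbb", 2: "ccc"}
snap1 = incremental_snapshot(volume, {})   # first snapshot: all 3 blocks
volume[1] = "BBB"                          # one block changes afterwards
snap2 = incremental_snapshot(volume, snap1)  # second snapshot: 1 block
print(len(snap1), len(snap2))  # 3 1
```

This is also why periodic snapshots stay cheap: each one uploads only the delta, while the full volume can still be reconstructed from the snapshot chain.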
Q37: You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling Groups that you need to create. One requirement is that you need to reuse some software licenses and therefore need to use dedicated hosts on EC2 instances in your Auto Scaling Groups. What step must you take to meet this requirement?
A. Create your launch configuration, but manually change the instances to Dedicated Hosts in the EC2 console.
B. Use a launch template with your Auto Scaling Group.
C. Create the Dedicated Host EC2 instances, then add them to an existing Auto Scaling Group.
D. Make sure your launch configurations are using Dedicated Hosts.
Answer: B
Notes: In addition to the features of Amazon EC2 Auto Scaling that you can configure by using launch templates, launch templates provide more advanced Amazon EC2 configuration options. For example, you must use launch templates to use Amazon EC2 Dedicated Hosts. Dedicated Hosts are physical servers with EC2 instance capacity that are dedicated to your use. While Amazon EC2 Dedicated Instances also run on dedicated hardware, the advantage of using Dedicated Hosts over Dedicated Instances is that you can bring eligible software licenses from external vendors and use them on EC2 instances. If you currently use launch configurations, you can specify a launch template when you update an Auto Scaling group that was created using a launch configuration. To create a launch template to use with an Auto Scaling Group, create the template from scratch, create a new version of an existing template, or copy the parameters from a launch configuration, running instance, or other template.
Q38: Your organization uses AWS CodeDeploy for deployments. Now you are starting a project on the AWS Lambda platform. For your deployments, you’ve been given a requirement of performing blue-green deployments. When you perform deployments, you want to split traffic, sending a small percentage of the traffic to the new version of your application. Which deployment configuration will allow this splitting of traffic?
Notes: The answer is a canary deployment configuration. With canary, traffic is shifted in two increments. You can choose from predefined canary options that specify the percentage of traffic shifted to your updated Lambda function version in the first increment and the interval, in minutes, before the remaining traffic is shifted in the second increment.
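A small sketch of the two-increment shifting behind the predefined canary options; `Canary10Percent5Minutes` is one of CodeDeploy's real predefined configuration names, while the parsing function itself is just an illustration:

```python
import re

# Sketch of how a predefined Lambda canary configuration such as
# "Canary10Percent5Minutes" shifts traffic: 10% to the new version first,
# then, after 5 minutes, the remaining 90%.
def canary_increments(config_name: str) -> list:
    m = re.fullmatch(r"Canary(\d+)Percent(\d+)Minutes", config_name)
    if not m:
        raise ValueError(f"not a canary configuration: {config_name}")
    first_pct, wait_minutes = int(m.group(1)), int(m.group(2))
    return [
        {"traffic_percent": first_pct, "wait_minutes": wait_minutes},
        {"traffic_percent": 100 - first_pct, "wait_minutes": 0},
    ]

steps = canary_increments("Canary10Percent5Minutes")
print(steps)
```

If the small first slice of traffic surfaces errors, the deployment can be stopped and rolled back before most users ever see the new version.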
Q39: A financial institution has an application that produces huge amounts of actuary data, which is ultimately expected to be in the terabyte range. There is a need to run complex analytic queries against terabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Which storage service will best meet this requirement?
A. RDS
B. DynamoDB
C. Redshift
D. ElastiCache
Answer: C
Notes: Amazon Redshift is a fast, fully-managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It enables you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds. With Redshift, you can start small for just $0.25 per hour with no commitments and scale-out to petabytes of data for $1,000 per terabyte per year, less than a tenth of the cost of traditional on-premises solutions. Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to run SQL queries directly against exabytes of unstructured data in Amazon S3 data lakes. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Amazon Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, Sequence, Text, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data retrieved, so queries against Amazon S3 run fast, regardless of data set size.
Q40: A company has an application for sharing static content, such as photos. The popularity of the application has grown, and the company is now sharing content worldwide. This worldwide service has caused some issues with latency. What AWS services can be used to host a static website, serve content to globally dispersed users, and address latency issues, while keeping cost under control? Choose two.
Answers: Amazon S3 and Amazon CloudFront
Notes: Amazon S3 is an object storage built to store and retrieve any amount of data from anywhere on the Internet. It's a simple storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs. AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2, or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.
Q41: You have just been hired by a large organization which uses many different AWS services in their environment. Some of the services which handle data include: RDS, Redshift, ElastiCache, DynamoDB, S3, and Glacier. You have been instructed to configure a web application using stateless web servers. Which services can you use to handle session state data? Choose two.
Answers: ElastiCache and DynamoDB
Notes: With stateless web servers, session state must live outside the instances. ElastiCache provides low-latency in-memory access, and DynamoDB's fast, predictable performance makes it a great match for session data (as noted in Q35 above); both are commonly used as session stores for stateless fleets.
Q42: After an IT Steering Committee meeting you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies based on the requirements you are given. Your primary requirement is the necessity for a private, dedicated connection, which bypasses the Internet and can provide throughput of 10 Gbps. Which option will you select?
A. AWS Direct Connect
B. VPC Peering
C. AWS VPN
D. AWS Direct Gateway
Answer: A
Notes: AWS Direct Connect can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections. It uses industry-standard 802.1Q VLANs to connect to Amazon VPC using private IP addresses. You can choose from an ecosystem of WAN service providers for integrating your AWS Direct Connect endpoint in an AWS Direct Connect location with your remote networks. AWS Direct Connect lets you establish 1 Gbps or 10 Gbps dedicated network connections (or multiple connections) between AWS networks and one of the AWS Direct Connect locations. You can also work with your provider to create sub-1 Gbps connections, or use a link aggregation group (LAG) to aggregate multiple 1 Gbps or 10 Gbps connections at a single AWS Direct Connect endpoint, allowing you to treat them as a single, managed connection. A Direct Connect gateway is a globally available resource that enables connections to multiple Amazon VPCs across different Regions or AWS accounts.
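A back-of-the-envelope sketch of LAG sizing, reflecting the rule that all member connections in a LAG must use the same port speed (the function is illustrative, not an AWS API):

```python
# Sketch of link aggregation group (LAG) sizing for Direct Connect:
# member connections must all be the same speed, and the LAG is treated
# as a single managed connection with the combined bandwidth.
def lag_bandwidth_gbps(connection_speeds_gbps: list) -> float:
    if len(set(connection_speeds_gbps)) != 1:
        raise ValueError("all connections in a LAG must be the same speed")
    return sum(connection_speeds_gbps)

print(lag_bandwidth_gbps([10, 10]))  # 20 Gbps over one managed connection
```

For the 10 Gbps requirement in Q42, a single dedicated 10 Gbps connection suffices; a LAG becomes useful when you need more aggregate bandwidth or redundancy.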
Q43: An application is hosted on an EC2 instance in a VPC. The instance is in a subnet in the VPC, and the instance has a public IP address. There is also an internet gateway and a security group with the proper ingress configured. But your testers are unable to access the instance from the Internet. What could be the problem?
A. Make sure the instance has a private IP address.
B. Add a route to the route table, from the subnet containing the instance, to the Internet Gateway.
C. A NAT gateway needs to be configured.
D. A Virtual private gateway needs to be configured.
Answer: B
Notes: The question doesn’t state whether the subnet containing the instance is public or private. An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet. To enable access to or from the internet for instances in a subnet in a VPC, you must do the following:
Attach an internet gateway to your VPC.
Add a route to your subnet’s route table that directs internet-bound traffic to the internet gateway. If a subnet is associated with a route table that has a route to an internet gateway, it’s known as a public subnet. If a subnet is associated with a route table that does not have a route to an internet gateway, it’s known as a private subnet.
Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance.
In your subnet route table, you can specify a route for the internet gateway to all destinations not explicitly known to the route table (0.0.0.0/0 for IPv4 or ::/0 for IPv6). Alternatively, you can scope the route to a narrower range of IP addresses. For example, the public IPv4 addresses of your company’s public endpoints outside of AWS, or the elastic IP addresses of other Amazon EC2 instances outside your VPC. To enable communication over the Internet for IPv4, your instance must have a public IPv4 address or an Elastic IP address that’s associated with a private IPv4 address on your instance. Your instance is only aware of the private (internal) IP address space defined within the VPC and subnet. The internet gateway logically provides the one-to-one NAT on behalf of your instance so that when traffic leaves your VPC subnet and goes to the Internet, the reply address field is set to the public IPv4 address or elastic IP address of your instance and not its private IP address. Conversely, traffic that’s destined for the public IPv4 address or elastic IP address of your instance has its destination address translated into the instance’s private IPv4 address before the traffic is delivered to the VPC. To enable communication over the Internet for IPv6, your VPC and subnet must have an associated IPv6 CIDR block, and your instance must be assigned an IPv6 address from the range of the subnet. IPv6 addresses are globally unique, and therefore public by default.
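The public-versus-private distinction above boils down to one check on the route table; the route-table data in this sketch is hypothetical:

```python
# Sketch of the checklist above: a subnet is "public" when its route table
# has a route for internet-bound traffic (0.0.0.0/0 for IPv4 or ::/0 for
# IPv6) whose target is an internet gateway ("igw-...").
def is_public_subnet(route_table: list) -> bool:
    return any(
        route["destination"] in ("0.0.0.0/0", "::/0")
        and route["target"].startswith("igw-")
        for route in route_table
    )

private_rt = [{"destination": "10.0.0.0/16", "target": "local"}]
public_rt = private_rt + [{"destination": "0.0.0.0/0", "target": "igw-0abc123"}]
print(is_public_subnet(private_rt), is_public_subnet(public_rt))  # False True
```

In the Q43 scenario, the instance has a public IP and the VPC has an internet gateway, so the missing piece is exactly this route entry.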
Q44: A data company has implemented a subscription service for storing video files. There are two levels of subscription: personal and professional use. The personal users can upload a total of 5 GB of data, and professional users can upload as much as 5 TB of data. The application can upload files of size up to 1 TB to an S3 Bucket. What is the best way to upload files of this size?
A. Multipart upload
B. Single-part Upload
C. AWS Snowball
D. AWS SnowMobile
Answer: A
Notes: The multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: you initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket. You can list all of your in-progress multipart uploads or get a list of the parts that you have uploaded for a specific multipart upload.
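Rough part math for a 1 TiB multipart upload; the limits used below (5 MiB minimum part size, 5 GiB maximum, 10,000 parts per upload) are S3's documented multipart limits:

```python
import math

MIB = 1024 ** 2
GIB = 1024 ** 3
TIB = 1024 ** 4

def part_count(object_size: int, part_size: int) -> int:
    # S3 multipart limits: each part 5 MiB-5 GiB (the last part may be
    # smaller), and at most 10,000 parts per upload.
    if not (5 * MIB <= part_size <= 5 * GIB):
        raise ValueError("part size must be between 5 MiB and 5 GiB")
    parts = math.ceil(object_size / part_size)
    if parts > 10_000:
        raise ValueError("exceeds the 10,000-part limit; use larger parts")
    return parts

print(part_count(1 * TIB, 128 * MIB))  # 8192 parts for a 1 TiB object
```

Parts can be uploaded in parallel and retried individually, which is why multipart upload is the right tool for objects this large.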
Q45: You have multiple EC2 instances housing applications in a VPC in a single Availability Zone. The applications need to communicate at extremely high throughputs to avoid latency for end users. The average throughput needs to be 6 Gbps. What’s the best measure you can do to ensure this throughput?
Notes: The answer is to use a cluster placement group. Amazon Web Services’ (AWS) solution to reducing latency between instances involves the use of placement groups. As the name implies, a placement group is just that — a group. AWS instances that exist within a common Availability Zone can be grouped into a placement group. Group members are able to communicate with one another in a way that provides low latency and high throughput. A cluster placement group is a logical grouping of instances within a single Availability Zone, and it can span peered VPCs in the same Region. Instances in the same cluster placement group enjoy a higher per-flow throughput limit of up to 10 Gbps for TCP/IP traffic and are placed in the same high-bisection bandwidth segment of the network.
Q46: A team member has been tasked to configure four EC2 instances for four separate applications. These are not high-traffic apps, so there is no need for an Auto Scaling Group. The instances are all in the same public subnet and each instance has an EIP address, and all of the instances have the same Security Group. But none of the instances can send or receive internet traffic. You verify that all the instances have a public IP address. You also verify that an internet gateway has been configured. What is the most likely issue?
A. There is no route in the route table to the internet gateway (or it has been deleted).
B. Each instance needs its own security group.
C. The route table is corrupt.
D. You are using the default nacl.
Answer: A
Notes: The question details all of the configuration needed for internet access, except for a route to the IGW in the route table. This is definitely a key step in any checklist for internet connectivity. It is quite possible to have a subnet with the ‘Public’ attribute set but no route to the Internet in the assigned route table (you can test this yourself). This may have been a setup error, or someone may have thoughtlessly altered the shared route table for a special case instead of creating a new route table for that case.
Q47: You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. The application to be deployed on these instances is a life insurance application which requires path-based and host-based routing. Which type of load balancer will you need to use?
A. Any type of load balancer will meet these requirements.
B. Classic Load Balancer
C. Network Load Balancer
D. Application Load Balancer
Answer: D
Notes: Only the Application Load Balancer can support path-based and host-based routing. Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:
Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
Support for host-based routing. You can configure rules for your listener that forward requests based on the host field in the HTTP header. This enables you to route requests to multiple domains using a single load balancer.
Support for routing based on fields in the request, such as standard and custom HTTP headers and methods, query parameters, and source IP addresses.
Support for routing requests to multiple applications on a single EC2 instance. You can register each instance or IP address with the same target group using multiple ports.
Support for redirecting requests from one URL to another.
Support for returning a custom HTTP response.
Support for registering targets by IP address, including targets outside the VPC for the load balancer.
Support for registering Lambda functions as targets.
Support for the load balancer to authenticate users of your applications through their corporate or social identities before routing requests.
Support for containerized applications. Amazon Elastic Container Service (Amazon ECS) can select an unused port when scheduling a task and register the task with a target group using this port. This enables you to make efficient use of your clusters.
Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level. Attaching a target group to an Auto Scaling group enables you to scale each service dynamically based on demand.
Access logs contain additional information and are stored in compressed format.
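Path-based and host-based routing can be pictured as a small rule matcher; the rules, hostnames, and target-group names below are hypothetical, and real ALB listener rules are evaluated by priority with richer condition types:

```python
from fnmatch import fnmatch

# Toy illustration of ALB listener-rule evaluation: rules are checked in
# order against the Host header and URL path; the first match wins, and
# unmatched requests fall through to the default action.
RULES = [
    {"host": "api.example.com", "path": "/v1/*", "target_group": "api-v1"},
    {"host": "*.example.com", "path": "/images/*", "target_group": "static"},
]
DEFAULT_TARGET = "web-default"

def route(host: str, path: str) -> str:
    for rule in RULES:
        if fnmatch(host, rule["host"]) and fnmatch(path, rule["path"]):
            return rule["target_group"]
    return DEFAULT_TARGET

print(route("api.example.com", "/v1/quotes"))     # api-v1
print(route("cdn.example.com", "/images/a.png"))  # static
print(route("example.com", "/"))                  # web-default
```

This is the capability the life insurance application in Q47 needs, and it is exactly what Classic and Network Load Balancers cannot do.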
Q48: You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. You were considering using an Application Load Balancer, but some of the requirements you have been given seem to point to a Classic Load Balancer. Which requirement would be better served by an Application Load Balancer?
A. Support for EC2-Classic
B. Path-based routing
C. Support for sticky sessions using application-generated cookies
D. Support for TCP and SSL listeners
Answer: B
Notes:
Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:
Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
Q49: You have been tasked to review your company disaster recovery plan due to some new requirements. The driving factor is that the Recovery Time Objective has become very aggressive. Because of this, it has been decided to configure Multi-AZ deployments for the RDS MySQL databases. Unrelated to DR, it has been determined that some read traffic needs to be offloaded from the master database. What step can be taken to meet this requirement?
A. Convert to Aurora to allow the standby to serve read traffic.
B. Redirect some of the read traffic to the standby database.
C. Add DAX to the solution to alleviate excess read traffic.
D. Add read replicas to offload some read traffic.
Answer: D
Notes: Amazon RDS Read Replicas for MySQL and MariaDB now support Multi-AZ deployments. Combining Read Replicas with Multi-AZ enables you to build a resilient disaster recovery strategy and simplify your database engine upgrade process. Amazon RDS Read Replicas enable you to create one or more read-only copies of your database instance within the same AWS Region or in a different AWS Region. Updates made to the source database are then asynchronously copied to your Read Replicas. In addition to providing scalability for read-heavy workloads, Read Replicas can be promoted to become a standalone database instance when needed.
Q50: A gaming company is designing several new games which focus heavily on player-game interaction. The player makes a certain move and the game has to react very quickly to change the environment based on that move and to present the next decision for the player in real-time. A tool is needed to continuously collect data about player-game interactions and feed the data into the gaming platform in real-time. Which AWS service can best meet this need?
A. AWS Lambda
B. Kinesis Data Streams
C. Kinesis Data Analytics
D. AWS IoT
Answer: B
Notes: Kinesis Data Streams can be used to continuously collect data about player-game interactions and feed the data into your gaming platform. With Kinesis Data Streams, you can design a game that provides engaging and dynamic experiences based on players’ actions and behaviors.
Q51: You are designing an architecture for a financial company which provides a day trading application to customers. After viewing the traffic patterns for the existing application you notice that traffic is fairly steady throughout the day, with the exception of large spikes at the opening of the market in the morning and at closing around 3 pm. Your architecture will include an Auto Scaling Group of EC2 instances. How can you configure the Auto Scaling Group to ensure that system performance meets the increased demands at opening and closing of the market?
A. Configure a Dynamic Scaling Policy to scale based on CPU Utilization.
B. Use a load balancer to ensure that the load is distributed evenly during high-traffic periods.
C. Configure your Auto Scaling Group to have a desired size which will be able to meet the demands of the high-traffic periods.
D. Use a predictive scaling policy on the Auto Scaling Group to meet opening and closing spikes.
Answer: D
Notes: Use a predictive scaling policy on the Auto Scaling Group to meet opening and closing spikes. Using data collected from your actual EC2 usage, and further informed by billions of data points drawn from AWS's own observations, well-trained machine learning models predict your expected traffic (and EC2 usage), including daily and weekly patterns. The model needs at least one day of historical data to start making predictions; it is re-evaluated every 24 hours to create a forecast for the next 48 hours. What we can gather from the question is that the spikes at the beginning and end of the day can potentially affect performance. Sure, we can use dynamic scaling, but remember, scaling up takes a little bit of time. We have the information to be proactive, so use predictive scaling and be ready for these spikes at opening and closing.
Q52: A software gaming company has produced an online racing game which uses CloudFront for fast delivery to worldwide users. The game also uses DynamoDB for storing in-game and historical user data. The DynamoDB table has a preconfigured read and write capacity. Users have been reporting slow down issues, and an analysis has revealed that the DynamoDB table has begun throttling during peak traffic times. Which step can you take to improve game performance?
A. Add a load balancer in front of the web servers.
B. Add ElastiCache to cache frequently accessed data in memory.
C. Add an SQS Queue to queue requests which could be lost.
D. Make sure DynamoDB Auto Scaling is turned on.
Answer: D
Notes: Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic, without throttling. When the workload decreases, Application Auto Scaling decreases the throughput so that you don’t pay for unused provisioned capacity. Note that if you use the AWS Management Console to create a table or a global secondary index, DynamoDB auto scaling is enabled by default. You can modify your auto scaling settings at any time.
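Target tracking, which DynamoDB auto scaling uses under the hood via Application Auto Scaling, can be approximated with simple arithmetic; the numbers and the 70% target below are illustrative:

```python
import math

# Simplified target-tracking math: choose a provisioned capacity that keeps
# utilization near the target (e.g. 70%), clamped between the configured
# minimum and maximum capacities.
def desired_capacity(consumed: float, target_utilization: float,
                     min_cap: int, max_cap: int) -> int:
    needed = math.ceil(consumed / target_utilization)
    return max(min_cap, min(max_cap, needed))

# Peak traffic consuming 800 read capacity units at a 70% target:
print(desired_capacity(800, 0.70, 100, 3000))  # 1143
# Quiet period: capacity scales back down toward the minimum:
print(desired_capacity(40, 0.70, 100, 3000))   # 100
```

With auto scaling enabled, the table's provisioned capacity rises ahead of throttling during peak traffic and falls again afterwards, which is exactly the fix the game in Q52 needs.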
Q53: You have configured an Auto Scaling Group of EC2 instances. You have begun testing the scaling of the Auto Scaling Group, using a stress tool to drive up the CPU utilization metric and force scale-out actions, then removing the stress to force a scale-in. But you notice that these actions only take place at five-minute intervals. What is happening?
A. Auto Scaling Groups can only scale in intervals of five minutes or greater.
B. The Auto Scaling Group is following the default cooldown procedure.
C. A load balancer is managing the load and limiting the effectiveness of stressing the servers.
D. The stress tool is configured to run for five minutes.
Answers: B
Notes: The cooldown period helps you prevent your Auto Scaling group from launching or terminating additional instances before the effects of previous activities are visible. You can configure the length of time based on your instance startup time or other application needs. When you use simple scaling, after the Auto Scaling group scales using a simple scaling policy, it waits for a cooldown period to complete before any further scaling activities due to simple scaling policies can start. An adequate cooldown period helps to prevent the initiation of an additional scaling activity based on stale metrics. By default, all simple scaling policies use the default cooldown period associated with your Auto Scaling Group, but you can configure a different cooldown period for certain policies, as described in the following sections. Note that Amazon EC2 Auto Scaling honors cooldown periods when using simple scaling policies, but not when using other scaling policies or scheduled scaling. A default cooldown period automatically applies to any scaling activities for simple scaling policies, and you can optionally request to have it apply to your manual scaling activities. When you use the AWS Management Console to update an Auto Scaling Group, or when you use the AWS CLI or an AWS SDK to create or update an Auto Scaling Group, you can set the optional default cooldown parameter. If a value for the default cooldown period is not provided, its default value is 300 seconds.
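The cooldown behavior described above can be sketched as a small state machine. This is a toy model, not Amazon EC2 Auto Scaling itself; the class and method names are illustrative, while the 300-second default mirrors the default cooldown period.

```python
class SimpleScalingPolicy:
    """Toy model of a simple scaling policy honoring its cooldown period."""

    def __init__(self, cooldown_seconds=300):
        self.cooldown_seconds = cooldown_seconds
        self.last_activity_at = None  # time of the last scaling activity

    def try_scale(self, now_seconds):
        """Start a scaling activity at `now_seconds` unless still cooling down."""
        if (self.last_activity_at is not None
                and now_seconds - self.last_activity_at < self.cooldown_seconds):
            return False  # metrics may still be stale; suppress the activity
        self.last_activity_at = now_seconds
        return True
```

This is exactly the five-minute rhythm the question observes: a scale-out at t=0 suppresses every breach of the alarm threshold until t=300, no matter how hard the stress tool pushes the metric in between.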
Q54: A team of architects is designing a new AWS environment for a company which wants to migrate to the Cloud. The architects are considering the use of EC2 instances with instance store volumes. The architects realize that the data on the instance store volumes is ephemeral. Which action will not cause the data on an instance store volume to be deleted?
A. Reboot
B. The underlying disk drive fails.
C. Hardware disk failure.
D. Instance is stopped
Answers: A
Notes: Some Amazon Elastic Compute Cloud (Amazon EC2) instance types come with a form of directly attached, block-device storage known as the instance store. The instance store is ideal for temporary storage, because the data stored in instance store volumes is not persistent through instance stops, terminations, or hardware failures.
Q55: You work for an advertising company that has a real-time bidding application. You are also using CloudFront on the front end to accommodate a worldwide user base. Your users begin complaining about response times and pauses in real-time bidding. Which service can be used to reduce DynamoDB response times by an order of magnitude (milliseconds to microseconds)?
A. DAX
B. DynamoDB Auto Scaling
C. ElastiCache
D. CloudFront Edge Caches
Answers: A
Notes: Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second. While DynamoDB offers consistent single-digit millisecond latency, DynamoDB with DAX takes performance to the next level with response times in microseconds for millions of requests per second for read-heavy workloads. With DAX, your applications remain fast and responsive, even when a popular event or news story drives unprecedented request volumes your way. No tuning required.
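The caching pattern DAX applies in front of DynamoDB is a read-through cache: hits are answered from memory, misses fall through to the table and populate the cache. The sketch below is illustrative only; a real cache must also handle TTLs, eviction, and write-through, which DAX manages for you.

```python
class ReadThroughCache:
    """Toy read-through cache, the pattern DAX applies in front of DynamoDB."""

    def __init__(self, backing_store):
        self.backing_store = backing_store  # stands in for the DynamoDB table
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1                  # served from memory (microseconds)
            return self.cache[key]
        self.misses += 1                    # fall through to the table (milliseconds)
        value = self.backing_store[key]
        self.cache[key] = value             # populate for subsequent reads
        return value
```

In a read-heavy workload like the bidding application, the hit count quickly dwarfs the miss count, which is where the milliseconds-to-microseconds improvement comes from.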
Q56: A travel company has deployed a website which serves travel updates to users all over the world. The traffic this database serves is very read heavy and can have some latency issues at certain times of the year. What can you do to alleviate these latency issues?
Notes: Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server as well as Amazon Aurora.
Q57: A large financial institution is gradually moving their infrastructure and applications to AWS. The company has data needs that will utilize all of RDS, DynamoDB, Redshift, and ElastiCache. Which description best describes Amazon Redshift?
A. Key-value and document database that delivers single-digit millisecond performance at any scale.
B. Cloud-based relational database.
C. Can be used to significantly improve latency and throughput for many read-heavy application workloads.
D. Near real-time complex querying on massive data sets.
Answers: D
Notes: Amazon Redshift is a fast, fully-managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds. With Redshift, you can start small for just $0.25 per hour with no commitments and scale out to petabytes of data for $1,000 per terabyte per year, less than a tenth the cost of traditional on-premises solutions. Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to run SQL queries directly against exabytes of unstructured data in Amazon S3 data lakes. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Amazon Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, Sequence, Text, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data retrieved, so queries against Amazon S3 run fast, regardless of data set size.
Q58: You are designing an architecture which will house an Auto Scaling Group of EC2 instances. The application hosted on the instances is expected to be an extremely popular social networking site. Forecasts predict very high traffic, and you will need a load balancer that can handle tens of millions of requests per second while maintaining high throughput at ultra-low latency. You need to select the type of load balancer to front your Auto Scaling Group to meet this high traffic requirement. Which load balancer will you select?
A. You will need an Application Load Balancer to meet this requirement.
B. All the AWS load balancers meet the requirement and perform the same.
C. You will select a Network Load Balancer to meet this requirement.
D. You will need a Classic Load Balancer to meet this requirement.
Answers: C
Notes: Network Load Balancer Overview: A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration. When you enable an Availability Zone for the load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. By default, each load balancer node distributes traffic across the registered targets in its Availability Zone only. If you enable cross-zone load balancing, each load balancer node distributes traffic across the registered targets in all enabled Availability Zones. It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part. The Network Load Balancer is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:
Static IP Addresses – Each Network Load Balancer provides a single IP address for each Availability Zone in its purview. If you have targets in us-west-2a and other targets in us-west-2c, NLB will create and manage two IP addresses (one per AZ); connections to that IP address will spread traffic across the instances in all the VPC subnets in the AZ. You can also specify an existing Elastic IP for each AZ for even greater control. With full control over your IP addresses, a Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.
Zonality – The IP-per-AZ feature reduces latency with improved performance, improves availability through isolation and fault tolerance, and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single AZ while still providing automatic failover should those targets become unavailable.
Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.
Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.
Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.
Q59: An organization of about 100 employees has performed the initial setup of users in IAM. All users except administrators have the same basic privileges. But now it has been determined that 50 employees will have extra restrictions on EC2. They will be unable to launch new instances or alter the state of existing instances. What will be the quickest way to implement these restrictions?
A. Create an IAM Role for the restrictions. Attach it to the EC2 instances.
B. Create the appropriate policy. Place the restricted users in the new policy.
C. Create the appropriate policy. With only 50 users, attach the policy to each user.
D. Create the appropriate policy. Create a new group for the restricted users. Place the restricted users in the new group and attach the policy to the group.
Answers: D
Notes: You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, ACLs, and session policies. IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, if a policy allows the GetUser action, then a user with that policy can get user information from the AWS Management Console, the AWS CLI, or the AWS API. When you create an IAM user, you can choose to allow console or programmatic access. If console access is allowed, the IAM user can sign in to the console using a user name and password. Or if programmatic access is allowed, the user can use access keys to work with the CLI or API.
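An identity-based policy for this scenario might look like the sketch below. The Version/Statement/Effect/Action/Resource structure is standard IAM JSON and the EC2 action names are real, but the Sid and the exact action list are assumptions for illustration.

```python
import json

# Sketch of the policy to attach to the restricted group: an explicit Deny
# on launching new instances and altering the state of existing ones.
restricted_ec2_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyLaunchAndStateChanges",   # illustrative Sid
            "Effect": "Deny",
            "Action": [
                "ec2:RunInstances",       # can't launch new instances
                "ec2:StartInstances",     # can't alter the state of
                "ec2:StopInstances",      # existing instances
                "ec2:RebootInstances",
                "ec2:TerminateInstances"
            ],
            "Resource": "*"
        }
    ]
}

policy_document = json.dumps(restricted_ec2_policy, indent=2)
```

Attached once to a group holding the 50 restricted users, this single document does the whole job: an explicit Deny always overrides any Allow granted by their basic privileges.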
Q60: You are managing S3 buckets in your organization. This management of S3 extends to Amazon Glacier. For auditing purposes you would like to be informed if an object is restored to S3 from Glacier. What is the most efficient way you can do this?
A. Create a CloudWatch event for uploads to S3
B. Create an SNS notification for any upload to S3.
C. Configure S3 notifications for restore operations from Glacier.
D. Create a Lambda function which is triggered by the restoration of an object from Glacier to S3.
Answers: C
Notes: The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. An S3 notification can be set up to notify you when objects are restored from Glacier to S3.
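Such a notification configuration might look like the sketch below. The event names `s3:ObjectRestore:Post` (restore initiated) and `s3:ObjectRestore:Completed` (restore finished) are real S3 event types; the SNS topic ARN is a placeholder.

```python
# Sketch of an S3 bucket notification configuration for restore events,
# publishing to an (assumed) SNS topic for the auditors.
notification_configuration = {
    "TopicConfigurations": [
        {
            "TopicArn": "arn:aws:sns:us-east-1:111122223333:restore-audit",
            "Events": [
                "s3:ObjectRestore:Post",       # restore was initiated
                "s3:ObjectRestore:Completed"   # restore finished
            ]
        }
    ]
}
```

The same configuration shape also accepts queue (SQS) and Lambda destinations, but for a simple audit trail a topic subscription is the most direct fit.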
Q61: Your company has gotten back results from an audit. One of the mandates from the audit is that your application, which is hosted on EC2, must encrypt the data before writing this data to storage. Which service could you use to meet this requirement?
A. AWS Cloud HSM
B. Security Token Service
C. EBS encryption
D. AWS KMS
Answers: D
Notes: You can configure your application to use the KMS API to encrypt all data before saving it to disk.
Q62: Recent worldwide events have dictated that you perform your duties as a Solutions Architect from home. You need to be able to manage several EC2 instances while working from home and have been testing the ability to ssh into these instances. One instance in particular has been a problem and you cannot ssh into this instance. What should you check first to troubleshoot this issue?
A. Make sure that the security group for the instance has ingress on port 80 from your home IP address.
B. Make sure that your VPC has a connected Virtual Private Gateway.
C. Make sure that the security group for the instance has ingress on port 22 from your home IP address.
D. Make sure that the security group for the instance has ingress on port 443 from your home IP address.
Answers: C
Notes: The rules of a security group control the inbound traffic that’s allowed to reach the instances that are associated with the security group. The rules also control the outbound traffic that’s allowed to leave them. The following are the characteristics of security group rules:
By default, security groups allow all outbound traffic.
Security group rules are always permissive; you can’t create rules that deny access.
Security groups are stateful. If you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. For VPC security groups, this also means that responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules. For more information, see Connection tracking.
You can add and remove rules at any time. Your changes are automatically applied to the instances that are associated with the security group. The effect of some rule changes can depend on how the traffic is tracked. For more information, see Connection tracking. When you associate multiple security groups with an instance, the rules from each security group are effectively aggregated to create one set of rules. Amazon EC2 uses this set of rules to determine whether to allow access. You can assign multiple security groups to an instance. Therefore, an instance can have hundreds of rules that apply. This might cause problems when you access the instance. We recommend that you condense your rules as much as possible.
Q62: A consultant is hired by a small company to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within the VPC. The initial instances will be placed in a public subnet. The consultant begins to create security groups. What is true of the default security group?
A. You can delete this group, however, you can’t change the group’s rules.
B. You can delete this group or you can change the group’s rules.
C. You can’t delete this group, nor can you change the group’s rules.
D. You can’t delete this group, however, you can change the group’s rules.
Answers: D
Notes: Your VPC includes a default security group. You can’t delete this group, however, you can change the group’s rules. The procedure is the same as modifying any other security group. For more information, see Adding, removing, and updating rules.
Q63: You are evaluating the security setting within the main company VPC. There are several NACLs and security groups to evaluate and possibly edit. What is true regarding NACLs and security groups?
A. Network ACLs and security groups are both stateful.
B. Network ACLs and security groups are both stateless.
C. Network ACLs are stateless, and security groups are stateful.
D. Network ACLs are stateful, and security groups are stateless.
Answers: C
Notes: Network ACLs are stateless, which means that responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
The following are the basic characteristics of security groups for your VPC:
There are quotas on the number of security groups that you can create per VPC, the number of rules that you can add to each security group, and the number of security groups that you can associate with a network interface. For more information, see Amazon VPC quotas.
You can specify allow rules, but not deny rules.
You can specify separate rules for inbound and outbound traffic.
When you create a security group, it has no inbound rules. Therefore, no inbound traffic originating from another host to your instance is allowed until you add inbound rules to the security group.
By default, a security group includes an outbound rule that allows all outbound traffic. You can remove the rule and add outbound rules that allow specific outbound traffic only. If your security group has no outbound rules, no outbound traffic originating from your instance is allowed.
Security groups are stateful. If you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. Responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.
Q64: Your company needs to deploy an application in the company AWS account. The application will reside on EC2 instances in an Auto Scaling Group fronted by an Application Load Balancer. The company has been using Elastic Beanstalk to deploy the application due to limited AWS experience within the organization. The application now needs upgrades and a small team of subcontractors have been hired to perform these upgrades. What can be used to provide the subcontractors with short-lived access tokens that act as temporary security credentials to the company AWS account?
A. IAM Roles
B. AWS STS
C. IAM user accounts
D. AWS SSO
Answers: B
Notes: You can use AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use, with the following differences: Temporary security credentials are short-term, as the name implies. They can be configured to last for anywhere from a few minutes to several hours. After the credentials expire, AWS no longer recognizes them or allows any kind of access from API requests made with them. Temporary security credentials are not stored with the user but are generated dynamically and provided to the user when requested. When (or even before) the temporary security credentials expire, the user can request new credentials, as long as the user requesting them still has permissions to do so.
Q65: The company you work for has reshuffled teams a bit and you’ve been moved from the AWS IAM team to the AWS Network team. One of your first assignments is to review the subnets in the main VPCs. What are two key concepts regarding subnets?
A. A subnet spans all the Availability Zones in a Region.
B. Private subnets can only hold database.
C. Each subnet maps to a single Availability Zone.
D. Every subnet you create is associated with the main route table for the VPC.
E. Each subnet is associated with one security group.
Answers: C and D
Notes: A VPC spans all the Availability Zones in the Region. After creating a VPC, you can add one or more subnets in each Availability Zone. When you create a subnet, you specify the CIDR block for the subnet, which is a subset of the VPC CIDR block; we assign a unique ID to each subnet. Each subnet must reside entirely within one Availability Zone and cannot span zones. Availability Zones are distinct locations that are engineered to be isolated from failures in other Availability Zones. By launching instances in separate Availability Zones, you can protect your applications from the failure of a single location. You can optionally add subnets in a Local Zone, which is an AWS infrastructure deployment that places compute, storage, database, and other select services closer to your end users. A Local Zone enables your end users to run applications that require single-digit millisecond latencies. For information about the Regions that support Local Zones, see Available Regions in the Amazon EC2 User Guide for Linux Instances.
Q66: Amazon Web Services offers 4 different levels of support. Which of the following are valid support levels? Choose 3
A. Enterprise
B. Developer
C. Corporate
D. Business
E. Free Tier
Answer: A B D Notes: The correct answers are Enterprise, Business, Developer. References: https://docs.aws.amazon.com/
Q67: You are reviewing Change Control requests, and you note that there is a change designed to reduce wasted CPU cycles by increasing the value of your Amazon SQS “VisibilityTimeout” attribute. What does this mean?
A. While processing a message, a consumer instance can amend the message visibility counter by a fixed amount.
B. When a consumer instance retrieves a message, that message will be hidden from other consumer instances for a fixed period.
C. When the consumer instance polls for new work the SQS service will allow it to wait a certain time for a message to be available before closing the connection.
D. While processing a message, a consumer instance can reset the message visibility by restarting the preset timeout counter.
E. When the consumer instance polls for new work, the consumer instance will wait a certain time until it has a full workload before closing the connection.
F. When a new message is added to the SQS queue, it will be hidden from consumer instances for a fixed period.
Answer: B Notes: Poor timing of SQS processes can significantly impact the cost effectiveness of the solution. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. References: https://docs.aws.amazon.com/sqs
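The visibility timeout behavior can be sketched with a toy queue. This is illustrative only, not the SQS API: a received message stays in the queue but is hidden from other consumers until the timeout elapses (or the consumer deletes it).

```python
class ToyQueue:
    """Toy model of an SQS queue's visibility timeout (illustrative only)."""

    def __init__(self, visibility_timeout=30):  # 30s mirrors the SQS default
        self.visibility_timeout = visibility_timeout
        self.messages = []  # each entry: [body, invisible_until]

    def send(self, body):
        self.messages.append([body, 0])  # visible immediately

    def receive(self, now):
        """Return the first visible message, hiding it from other consumers."""
        for message in self.messages:
            body, invisible_until = message
            if now >= invisible_until:
                message[1] = now + self.visibility_timeout
                return body
        return None  # nothing visible right now
```

Increasing the timeout (the change in the question) gives a consumer more time to finish work before the message reappears, so other consumers stop wasting CPU cycles re-processing messages that are already being handled; the consumer still must delete the message once processing succeeds.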
Q68: You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP. However, when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect?
A. After a few minutes.
B. Immediately.
C. Straight away, but to the new instances only.
D. Straight away to the new instances, but old instances must be stopped and restarted before the new rules apply.
Answer: B Notes: Changes to security group rules take effect immediately and apply to all instances associated with the security group, both new and existing.
Q69: Amazon SQS keeps track of all tasks and events in an application.
A. True
B. False
Answer: B Notes: Amazon SWF (not Amazon SQS) keeps track of all tasks and events in an application. Amazon SQS requires you to implement your own application-level tracking, especially if your application uses multiple queues. See the Amazon SWF FAQs. References: https://docs.aws.amazon.com/sqs
Q70: Your Security Manager has hired a security contractor to audit your network and firewall configurations. The consultant doesn’t have access to an AWS account. You need to provide the required access for the auditing tasks, and answer a question about login details for the official AWS firewall appliance. Which of the following might you do? Choose 2
A. Create an IAM User with a policy that can Read Security Group and NACL settings.
B. Explain that AWS implements network security differently and that there is no such thing as an official AWS firewall appliance. Security Groups and NACLs are used instead.
C. Create an IAM Role with a policy that can Read Security Group and NACL settings.
D. Explain that AWS is a cloud service and that AWS manages the Network appliances.
E. Create an IAM Role with a policy that can Read Security Group and Route settings.
Answer: A and B Notes: Create an IAM user for the auditor and explain that the firewall functionality is implemented as stateful Security Groups, and stateless subnet NACLs. AWS has removed the Firewall appliance from the hub of the network and implemented the firewall functionality as stateful Security Groups, and stateless subnet NACLs. This is not a new concept in networking, but rarely implemented at this scale. References: https://docs.aws.amazon.com/iam
Q71: How many internet gateways can I attach to my custom VPC?
A. 5
B. 3
C. 2
D. 1
Answer: D Notes: 1 References: https://docs.aws.amazon.com/vpc
Q72: How long can a message be retained in an SQS Queue?
Notes: The message retention period is configurable from 60 seconds (1 minute) up to 1,209,600 seconds (14 days); the default retention period is 4 days. References: https://docs.aws.amazon.com/sqs
Q73: Although your application customarily runs at 30% usage, you have identified a recurring usage spike (>90%) between 8pm and midnight daily. What is the most cost-effective way to scale your application to meet this increased need?
A. Manually deploy Reactive Event-based Scaling each night at 7:45.
B. Deploy additional EC2 instances to meet the demand.
C. Use scheduled scaling to boost your capacity at a fixed interval.
D. Increase the size of the Resource Group to meet demand.
Answer: C Notes: Scheduled scaling allows you to set your own scaling schedule. For example, let’s say that every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling actions based on the predictable traffic patterns of your web application. Scaling actions are performed automatically as a function of time and date. Reference: Scheduled scaling for Amazon EC2 Auto Scaling.
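A scheduled scaling action for the 8pm-to-midnight spike can be sketched as a simple time-window rule. The window comes from the question; the function name and instance counts are illustrative assumptions, not an AWS API.

```python
def desired_instances(hour, baseline=2, boosted=6,
                      window_start=20, window_end=24):
    """Capacity a scheduled scaling action would set for the nightly spike.

    `hour` is the hour of day (0-23); between window_start (8pm) and
    window_end (midnight) the group runs at the boosted capacity.
    """
    if window_start <= hour < window_end:
        return boosted
    return baseline
```

Because the spike recurs at a fixed time every day, capacity is raised just before the spike and dropped right after it, which is cheaper than running boosted capacity around the clock and more reliable than reacting to the spike after it has begun.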
Q74: To save money, you quickly stored some data in one of the attached volumes of an EC2 instance and stopped it for the weekend. When you returned on Monday and restarted your instance, you discovered that your data was gone. Why might that be?
A. The EBS volume was not large enough to store your data.
B. The instance failed to connect to the root volume on Monday.
C. The elastic block-level storage service failed over the weekend.
D. The volume was ephemeral, block-level storage. Data on an instance store volume is lost if an instance is stopped.
Answer: D Notes: The EC2 instance had an instance store volume attached to it. Instance store volumes are ephemeral, meaning that data in attached instance store volumes is lost if the instance stops. Reference: Instance store lifetime
Q75: Select all the true statements on S3 URL styles: Choose 2
A. Virtual hosted-style URLs will eventually be deprecated in favor of Path-Style URLs for S3 bucket access.
B. Virtual-host-style URLs (such as: https://bucket-name.s3.Region.amazonaws.com/key name) are supported by AWS.
C. Path-Style URLs (such as https://s3.Region.amazonaws.com/bucket-name/key name) are supported by AWS.
D. DNS compliant names are NOT recommended for the URLs to access S3.
Answer: B and C Notes: Virtual-host-style URLs and Path-Style URLs (soon to be retired) are supported by AWS. DNS compliant names are recommended for the URLs to access S3. References: https://docs.aws.amazon.com/s3
Q76: With EBS, I can ____. Choose 2
A. Create an encrypted snapshot from an unencrypted snapshot by creating an encrypted copy of the unencrypted snapshot.
B. Create an unencrypted volume from an encrypted snapshot.
C. Create an encrypted volume from a snapshot of another encrypted volume.
D. Encrypt an existing volume.
Answer: A and C Notes: Although there is no direct way to encrypt an existing unencrypted volume or snapshot, you can encrypt them by creating either a volume or a snapshot. Reference: Encrypting unencrypted resources. You can create an encrypted volume from a snapshot of another encrypted volume. References: https://docs.aws.amazon.com/ebs
Q77: You have been engaged by a company to design and lead a migration to an AWS environment. The team is concerned about the capabilities of the new environment, especially when it comes to high availability and cost-effectiveness. The design calls for about 20 instances (c3.2xlarge) pulling jobs/messages from SQS. Network traffic per instance is estimated to be around 500 Mbps at the beginning and end of each job. Which configuration should you plan on deploying?
A. Use a 2nd Network Interface to separate the SQS traffic from the storage traffic.
B. Choose a different instance type that better matches the traffic demand.
C. Spread the Instances over multiple AZs to minimize the traffic concentration and maximize fault tolerance.
D. Deploy as a Cluster Placement Group as the aggregated burst traffic could be around 10 Gbps.
Answer: C Notes: With a multi-AZ configuration, an additional reliability point is scored as the entire Availability Zone itself is ruled out as a single point of failure. This ensures high availability. Wherever possible, use simple solutions such as spreading the load out rather than expensive high-tech solutions.
Q78: You are a solutions architect working for a cosmetics company. Your company has a busy Magento online store that consists of a two-tier architecture. The web servers are on EC2 instances deployed across multiple AZs, and the database is on a Multi-AZ RDS MySQL database instance. Your store is having a Black Friday sale in five days, and having reviewed the performance for the last sale you expect the site to start running very slowly during the peak load. You investigate and you determine that the database was struggling to keep up with the number of reads that the store was generating. Which solution would you implement to improve the application read performance the most?
A. Deploy an Amazon ElastiCache cluster with nodes running in each AZ.
B. Upgrade your RDS MySQL instance to use provisioned IOPS.
C. Add an RDS Read Replica in each AZ.
D. Upgrade the RDS MySQL instance to a larger type.
Answer: C Notes: RDS Read Replicas can substantially increase the read performance of your database, and multiple read replicas can be added to increase performance further. This option also requires the fewest code changes and can generally be implemented within the specified timeframe. Reference: RDS Read Replicas
Q79: Which native AWS service will act as a file system mounted on an S3 bucket?
A. Amazon Elastic Block Store
B. File Gateway
C. Amazon S3
D. Amazon Elastic File System
Answer: B Notes: A file gateway supports a file interface into Amazon Simple Storage Service (Amazon S3) and combines a service and a virtual software appliance. By using this combination, you can store and retrieve objects in Amazon S3 using industry-standard file protocols such as Network File System (NFS) and Server Message Block (SMB). The software appliance, or gateway, is deployed into your on-premises environment as a virtual machine (VM) running on VMware ESXi, Microsoft Hyper-V, or Linux Kernel-based Virtual Machine (KVM) hypervisors. The gateway provides access to objects in S3 as files or file share mount points. You can manage your S3 data using lifecycle policies, cross-region replication, and versioning. You can think of a file gateway as a file system mounted on S3. Reference: What is AWS Storage Gateway?
Q80: You have been evaluating the NACLs in your company. Most of the NACLs are configured the same:
100 All Traffic Allow
200 All Traffic Deny
* All Traffic Deny
If a request comes in, how will it be evaluated?
A. The default will deny traffic.
B. The request will be allowed.
C. The highest numbered rule will be used, a deny.
D. All rules will be evaluated and the end result will be Deny.
Answer: B
Notes: Rules are evaluated starting with the lowest-numbered rule. As soon as a rule matches traffic, it’s applied immediately regardless of any higher-numbered rule that may contradict it. The following are the basic things that you need to know about network ACLs:
Your VPC automatically comes with a modifiable default network ACL. By default, it allows all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic.
You can create a custom network ACL and associate it with a subnet. By default, each custom network ACL denies all inbound and outbound traffic until you add rules.
Each subnet in your VPC must be associated with a network ACL. If you don’t explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL.
You can associate a network ACL with multiple subnets. However, a subnet can be associated with only one network ACL at a time. When you associate a network ACL with a subnet, the previous association is removed.
A network ACL contains a numbered list of rules. The rules are evaluated in order, starting with the lowest-numbered rule, to determine whether traffic is allowed in or out of any subnet associated with the network ACL. The highest number that you can use for a rule is 32766. It is recommended that you create rules in increments (for example, increments of 10 or 100) so that you can insert new rules where you need to later on.
A network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic.
Network ACLs are stateless, which means that responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
Q81: You have been given an assignment to configure Network ACLs in your VPC. Before configuring the NACLs, you need to understand how the NACLs are evaluated. How are NACL rules evaluated?
A. NACL rules are evaluated by rule number from lowest to highest and executed immediately when a matching rule is found.
B. NACL rules are evaluated by rule number from highest to lowest, and executed immediately when a matching rule is found.
C. All NACL rules that you configure are evaluated before traffic is passed through.
D. NACL rules are evaluated by rule number from highest to lowest, and all are evaluated before traffic is passed through.
Answer: A
Notes: NACL rules are evaluated by rule number from lowest to highest and executed immediately when a matching rule is found.
You can add or remove rules from the default network ACL, or create additional network ACLs for your VPC. When you add or remove rules from a network ACL, the changes are automatically applied to the subnets that it’s associated with. A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC. The following are the parts of a network ACL rule:
Rule number. Rules are evaluated starting with the lowest-numbered rule. As soon as a rule matches traffic, it’s applied regardless of any higher-numbered rule that might contradict it.
Type. The type of traffic, for example, SSH. You can also specify all traffic or a custom range.
Protocol. You can specify any protocol that has a standard protocol number. For more information, see Protocol Numbers. If you specify ICMP as the protocol, you can specify any or all of the ICMP types and codes.
Port range. The listening port or port range for the traffic. For example, 80 for HTTP traffic.
Source. [Inbound rules only] The source of the traffic (CIDR range).
Destination. [Outbound rules only] The destination for the traffic (CIDR range).
Allow/Deny. Whether to allow or deny the specified traffic.
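Putting the parts above together, a single inbound rule could be assembled as boto3 `create_network_acl_entry` parameters. This is a minimal sketch; the ACL ID and CIDR below are hypothetical placeholders.

```python
# Sketch: parameters for one inbound NACL rule allowing HTTP from anywhere.
# The ACL ID and CIDR are hypothetical placeholders.
def nacl_rule_params(acl_id, rule_number, cidr, port, allow=True, egress=False):
    """Assemble keyword arguments for ec2_client.create_network_acl_entry()."""
    return {
        "NetworkAclId": acl_id,
        "RuleNumber": rule_number,   # rules are evaluated lowest-first
        "Protocol": "6",             # 6 = TCP (standard protocol number)
        "RuleAction": "allow" if allow else "deny",
        "Egress": egress,            # False = inbound rule
        "CidrBlock": cidr,           # source (inbound) or destination (outbound)
        "PortRange": {"From": port, "To": port},
    }

params = nacl_rule_params("acl-0123456789abcdef0", 100, "0.0.0.0/0", 80)
# import boto3
# boto3.client("ec2").create_network_acl_entry(**params)
```

Note the rule number of 100 leaves room below it for any deny rules that must win the lowest-number-first evaluation.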
Q82: Your company has gone through an audit with a focus on data storage. You are currently storing historical data in Amazon Glacier. One of the results of the audit is that a portion of the infrequently-accessed historical data must be able to be accessed immediately upon request. Where can you store this data to meet this requirement?
A. S3 Standard
B. Leave infrequently-accessed data in Glacier.
C. S3 Standard-IA
D. Store the data in EBS
Answer: C
Notes: S3 Standard-IA is for data that is accessed less frequently but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per-GB storage price and a per-GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files. S3 Storage Classes can be configured at the object level, and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.
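The Lifecycle transition mentioned above can be expressed as a configuration document. This is a minimal sketch; the rule ID, prefix, bucket name, and 30-day threshold are hypothetical choices (objects must be at least 30 days old before transitioning to Standard-IA).

```python
# Sketch: an S3 Lifecycle rule that transitions objects under a prefix
# to Standard-IA after 30 days. Names and prefix are hypothetical.
lifecycle_config = {
    "Rules": [
        {
            "ID": "move-historical-to-ia",
            "Filter": {"Prefix": "historical/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"}
            ],
        }
    ]
}
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-audit-bucket", LifecycleConfiguration=lifecycle_config)
```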
Q84: After an IT Steering Committee meeting, you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies, such as VPN and Direct Connect, and based on the requirements you have decided to configure a VPN connection. What features and advantages can a VPN connection provide?
Answer: A VPN provides a connection between an on-premises network and a VPC, using a secure and private connection with IPsec and TLS.
Notes: A VPC/VPN Connection utilizes IPsec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN Connections can be configured in minutes and are a good solution if you have an immediate need, have low-to-modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity.
AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources or your on-premises network. With AWS Client VPN, you configure an endpoint to which your users can connect to establish a secure TLS VPN session. This enables clients to access resources in AWS or on-premises from any location using an OpenVPN-based VPN client.
Q86: Your company has decided to go to a hybrid cloud environment. Part of this effort will be to move a large data warehouse to the cloud. The warehouse is 50TB, and will take over a month to migrate given the current bandwidth available. What is the best option available to perform this migration considering both cost and performance aspects?
Answer: Use AWS Snowball Edge. The AWS Snowball Edge is a type of Snowball device with on-board storage and compute power for select AWS capabilities. Snowball Edge can undertake local processing and edge-computing workloads in addition to transferring data between your local environment and the AWS Cloud.
Each Snowball Edge device can transport data at speeds faster than the internet. This transport is done by shipping the data in the appliances through a regional carrier. The appliances are rugged shipping containers, complete with E Ink shipping labels. The AWS Snowball Edge device differs from the standard Snowball because it can bring the power of the AWS Cloud to your on-premises location, with local storage and compute functionality.
Snowball Edge devices have three options for device configurations: storage optimized, compute optimized, and with GPU. When this guide refers to Snowball Edge devices, it’s referring to all options of the device. Whenever specific information applies to only one or more optional configurations of devices, like how the Snowball Edge with GPU has an on-board GPU, it will be called out. For more information, see Snowball Edge Device Options.
Q87: You have been assigned the review of the security in your company AWS cloud environment. Your final deliverable will be a report detailing potential security issues. One of the first things that you need to describe is the responsibilities of the company under the shared responsibility model. Which measure is the customer’s responsibility?
Answer: EC2 instance OS patching.
Notes: Security and compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The customer assumes responsibility for, and management of, the guest operating system (including updates and security patches), other associated application software, and the configuration of the AWS-provided security group firewall. Customers should carefully consider the services they choose, as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations. The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment. This differentiation of responsibility is commonly referred to as Security “of” the Cloud versus Security “in” the Cloud.
Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.
Q88: You work for a busy real estate company, and you need to protect your data stored on S3 from accidental deletion. Which of the following actions might you take to achieve this? Choose 2
A. Create a bucket policy that prohibits anyone from deleting things from the bucket. B. Enable S3 – Infrequent Access Storage (S3 – IA). C. Enable versioning on the bucket. If a file is accidentally deleted, delete the delete marker. D. Configure MFA-protected API access. E. Use pre-signed URLs so that users will not be able to accidentally delete data.
Answer: C and D Notes: The best answers are to enable versioning on the bucket and to protect the objects by configuring MFA-protected API access. Reference: https://docs.aws.amazon.com/s3
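The two protections above can be combined into a single request. This is a minimal sketch; the bucket name and MFA device serial/token are hypothetical placeholders, and MFA Delete can only be enabled by the root account using the CLI or API.

```python
# Sketch: request payload enabling versioning plus MFA Delete on a bucket.
# Bucket name and MFA serial/token are hypothetical placeholders.
versioning_request = {
    "Bucket": "example-realestate-bucket",
    "VersioningConfiguration": {
        "Status": "Enabled",
        "MFADelete": "Enabled",  # permanent deletes then require an MFA token
    },
    # Format is "<mfa-device-arn> <current-token>":
    "MFA": "arn:aws:iam::123456789012:mfa/root-device 123456",
}
# import boto3
# boto3.client("s3").put_bucket_versioning(**versioning_request)
```

With versioning enabled, an accidental delete only adds a delete marker, which can itself be deleted to restore the object.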
Q89: AWS intends to shut down your spot instance; which of these scenarios is possible? Choose 3
A. AWS sends a notification of termination and you receive it 120 seconds before the intended forced shutdown.
B. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, and you delay it by sending a ‘Delay300’ instruction before the forced shutdown takes effect.
C. AWS sends a notification of termination and you receive it 120 seconds before the intended forced shutdown, but AWS does not action the shutdown.
D. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, but you block the shutdown because you used ‘Termination Protection’ when you initialized the instance.
E. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, but the defined duration period (also known as Spot blocks) hasn’t ended yet.
F. AWS sends a notification of termination, but you do not receive it within the 120 seconds and the instance is shutdown.
Answer: A, E, and F Notes: When Amazon EC2 is going to interrupt your Spot Instance, it emits an event two minutes prior to the actual interruption (except for hibernation, which gets the interruption notice, but not two minutes in advance because hibernation begins immediately).
In rare situations, Spot blocks may be interrupted due to Amazon EC2 capacity needs. In these cases, AWS provides a two-minute warning before the instance is terminated, and customers are not charged for the terminated instances even if they have used them.
It is possible that your Spot Instance is terminated before the warning can be made available. Reference: https://docs.aws.amazon.com/ec2
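A workload can watch for the two-minute warning by polling the instance metadata service, which returns an instance-action JSON document once an interruption is scheduled. The sketch below only parses such a document; the sample payload is illustrative, and a real poller would fetch the metadata URL shown in the comment.

```python
import json
from datetime import datetime, timezone

# Sketch: parse the Spot instance-action document served by the instance
# metadata service roughly two minutes before interruption.
def parse_instance_action(doc: str):
    """Return (action, scheduled_time) from an instance-action JSON document."""
    data = json.loads(doc)
    when = datetime.strptime(
        data["time"], "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    return data["action"], when

# Illustrative sample payload:
sample = '{"action": "terminate", "time": "2024-05-01T12:00:00Z"}'
action, when = parse_instance_action(sample)
# A real poller would fetch (a 404 means no interruption is scheduled):
# urllib.request.urlopen(
#     "http://169.254.169.254/latest/meta-data/spot/instance-action")
```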
Q90: What does the “EAR” in a policy document stand for?
A. Effects, APIs, Roles B. Effect, Action, Resource C. Ewoks, Always, Romanticize D. Every, Action, Reasonable
Answer: B. Notes: The elements included in a policy document that make up the “EAR” are effect, action, and resource. Reference: Policies and Permissions in IAM
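A minimal policy document showing the three "EAR" elements might look like the following sketch; the bucket ARN is a hypothetical placeholder.

```python
import json

# Sketch: an IAM policy document with the Effect, Action, and Resource
# ("EAR") elements. The bucket ARN is a hypothetical placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",                            # E: Allow or Deny
            "Action": ["s3:GetObject"],                   # A: API actions covered
            "Resource": "arn:aws:s3:::example-bucket/*",  # R: what it applies to
        }
    ],
}
policy_json = json.dumps(policy, indent=2)  # the form IAM actually stores
```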
Q91: _____ provides real-time streaming of data.
A. Kinesis Data Analytics B. Kinesis Data Firehose C. Kinesis Data Streams D. SQS
Answer: C Notes: Kinesis Data Streams offers real-time data streaming. Reference: Amazon Kinesis Data Streams
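Producers write to a stream record by record; a partition key determines which shard receives each record. This is a minimal sketch of the `put_record` call shape; the stream name, payload, and partition key are hypothetical.

```python
import json

# Sketch: parameters for writing one record to a Kinesis data stream.
# Stream name, payload, and partition key are hypothetical.
def put_record_params(stream: str, data: dict, partition_key: str):
    return {
        "StreamName": stream,
        "Data": json.dumps(data).encode("utf-8"),  # payload must be bytes
        "PartitionKey": partition_key,             # determines the shard
    }

params = put_record_params("clickstream", {"page": "/home"}, "user-42")
# import boto3
# boto3.client("kinesis").put_record(**params)
```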
Q92: You can use _ to build a schema for your data, and _ to query the data that’s stored in S3.
A. Glue, Athena B. EC2, SQS C. EC2, Glue D. Athena, Lambda
Answer: A Notes: You use AWS Glue to build a schema for your data (in the Glue Data Catalog), and Amazon Athena to query the data that’s stored in S3 using standard SQL. Reference: AWS Glue and Amazon Athena
Q93: What type of work does EMR perform?
A. Data processing information (DPI) jobs. B. Big data (BD) jobs. C. Extract, transform, and load (ETL) jobs. D. Huge amounts of data (HAD) jobs
Answer: C Notes: EMR excels at extract, transform, and load (ETL) jobs. Reference: Amazon EMR – https://aws.amazon.com/emr/
Q94: _____ allows you to transform data using SQL as it’s being passed through Kinesis.
A. RDS B. Kinesis Data Analytics C. Redshift D. DynamoDB
Answer: B Notes: Kinesis Data Analytics allows you to transform data using SQL. Reference: Amazon Kinesis Data Analytics
Q95 [SAA-C03]: A company runs a public-facing three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances for the application tier running in private subnets need to download software patches from the internet. However, the EC2 instances cannot be directly accessible from the internet. Which actions should be taken to allow the EC2 instances to download the needed patches? (Select TWO.)
A. Configure a NAT gateway in a public subnet. B. Define a custom route table with a route to the NAT gateway for internet traffic and associate it with the private subnets for the application tier. C. Assign Elastic IP addresses to the EC2 instances. D. Define a custom route table with a route to the internet gateway for internet traffic and associate it with the private subnets for the application tier. E. Configure a NAT instance in a private subnet.
Answer: A. B. Notes: A NAT gateway forwards traffic from the EC2 instances in the private subnet to the internet or other AWS services, and then sends the response back to the instances. After a NAT gateway is created, the route tables for private subnets must be updated to point internet traffic to the NAT gateway.
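The route-table half of this answer boils down to a single default route pointing at the NAT gateway. This is a minimal sketch; the route table, NAT gateway, and subnet IDs are hypothetical placeholders.

```python
# Sketch: the route that sends internet-bound traffic from private subnets
# through a NAT gateway. All resource IDs are hypothetical placeholders.
route_params = {
    "RouteTableId": "rtb-0123456789abcdef0",  # custom table for private subnets
    "DestinationCidrBlock": "0.0.0.0/0",      # all internet-bound traffic
    "NatGatewayId": "nat-0123456789abcdef0",  # NAT gateway in a public subnet
}
# import boto3
# ec2 = boto3.client("ec2")
# ec2.create_route(**route_params)
# ec2.associate_route_table(RouteTableId=route_params["RouteTableId"],
#                           SubnetId="subnet-0123456789abcdef0")  # private subnet
```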
Q96 [SAA-C03]: A solutions architect wants to design a solution to save costs for Amazon EC2 instances that do not need to run during a 2-week company shutdown. The applications running on the EC2 instances store data in instance memory that must be present when the instances resume operation. Which approach should the solutions architect recommend to shut down and resume the EC2 instances?
A. Modify the application to store the data on instance store volumes. Reattach the volumes while restarting them. B. Snapshot the EC2 instances before stopping them. Restore the snapshot after restarting the instances. C. Run the applications on EC2 instances enabled for hibernation. Hibernate the instances before the 2- week company shutdown. D. Note the Availability Zone for each EC2 instance before stopping it. Restart the instances in the same Availability Zones after the 2-week company shutdown.
Answer: C. Notes: Hibernating EC2 instances save the contents of instance memory to an Amazon Elastic Block Store (Amazon EBS) root volume. When the instances restart, the instance memory contents are reloaded.
Q97 [SAA-C03]: A company plans to run a monitoring application on an Amazon EC2 instance in a VPC. Connections are made to the EC2 instance using the instance’s private IPv4 address. A solutions architect needs to design a solution that will allow traffic to be quickly directed to a standby EC2 instance if the application fails and becomes unreachable. Which approach will meet these requirements?
A) Deploy an Application Load Balancer configured with a listener for the private IP address and register the primary EC2 instance with the load balancer. Upon failure, de-register the instance and register the standby EC2 instance. B) Configure a custom DHCP option set. Configure DHCP to assign the same private IP address to the standby EC2 instance when the primary EC2 instance fails. C) Attach a secondary elastic network interface to the EC2 instance configured with the private IP address. Move the network interface to the standby EC2 instance if the primary EC2 instance becomes unreachable. D) Associate an Elastic IP address with the network interface of the primary EC2 instance. Disassociate the Elastic IP from the primary instance upon failure and associate it with a standby EC2 instance.
Answer: C. Notes: A secondary elastic network interface can be added to an EC2 instance. While primary network interfaces cannot be detached from an instance, secondary network interfaces can be detached and attached to a different EC2 instance.
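The failover itself is a detach/attach pair against the secondary ENI that carries the well-known private IP. The sketch below only assembles the two API calls a failover script would make; all resource IDs are hypothetical placeholders.

```python
# Sketch: the call sequence for moving a secondary ENI from a failed
# primary instance to a standby. All IDs are hypothetical placeholders.
def eni_failover_calls(eni_id, attachment_id, standby_instance_id):
    """Return (api_name, params) pairs a failover script would issue in order."""
    return [
        ("detach_network_interface",
         {"AttachmentId": attachment_id, "Force": True}),
        ("attach_network_interface",
         {"NetworkInterfaceId": eni_id,
          "InstanceId": standby_instance_id,
          "DeviceIndex": 1}),  # attach as the secondary interface
    ]

calls = eni_failover_calls("eni-0123456789abcdef0",
                           "eni-attach-0123456789abcdef0",
                           "i-0standby123456789")
# import boto3
# ec2 = boto3.client("ec2")
# for api, params in calls:
#     getattr(ec2, api)(**params)
```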
Q98 [SAA-C03]: An analytics company is planning to offer a web analytics service to its users. The service will require that the users’ webpages include a JavaScript script that makes authenticated GET requests to the company’s Amazon S3 bucket. What must a solutions architect do to ensure that the script will successfully execute?
A. Enable cross-origin resource sharing (CORS) on the S3 bucket. B. Enable S3 Versioning on the S3 bucket. C. Provide the users with a signed URL for the script. D. Configure an S3 bucket policy to allow public execute privileges.
Answer: A. Notes: Web browsers will block running a script that originates from a server with a domain name that is different from the webpage’s. Amazon S3 can be configured with CORS to send HTTP headers that allow the script to run.
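A CORS configuration for this scenario might look like the sketch below; the allowed origin, header list, and bucket name are hypothetical choices (in practice the origins would be the users' domains, or "*").

```python
# Sketch: a CORS configuration allowing the analytics script's GET requests.
# The origin and bucket name are hypothetical placeholders.
cors_config = {
    "CORSRules": [
        {
            "AllowedOrigins": ["https://customer-site.example.com"],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["Authorization"],  # for the authenticated requests
            "MaxAgeSeconds": 3000,  # how long browsers may cache the preflight
        }
    ]
}
# import boto3
# boto3.client("s3").put_bucket_cors(
#     Bucket="analytics-scripts-bucket", CORSConfiguration=cors_config)
```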
Q99 [SAA-C03]: A company’s security team requires that all data stored in the cloud be encrypted at rest at all times using encryption keys stored on premises. Which encryption options meet these requirements? (Select TWO.)
A. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). B. Use server-side encryption with AWS KMS managed encryption keys (SSE-KMS). C. Use server-side encryption with customer-provided encryption keys (SSE-C). D. Use client-side encryption to provide at-rest encryption. E. Use an AWS Lambda function invoked by Amazon S3 events to encrypt the data using the customer’s keys.
Answer: C. D. Notes: Server-side encryption with customer-provided keys (SSE-C) enables Amazon S3 to encrypt objects on the server side using an encryption key provided in the PUT request. The same key must be provided in the GET requests for Amazon S3 to decrypt the object. Customers also have the option to encrypt data on the client side before uploading it to Amazon S3, and then they can decrypt the data after downloading it. AWS software development kits (SDKs) provide an S3 encryption client that streamlines the process.
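For SSE-C, the client sends the key with every request; S3 uses it to encrypt the object but never stores it. The sketch below builds the raw REST-style headers (boto3 derives these itself if you pass a raw `SSECustomerKey`); the key here is randomly generated for illustration only, whereas a real deployment would use the on-premises key.

```python
import base64
import hashlib
import os

# Sketch: the headers the S3 REST API expects for SSE-C. boto3 computes
# these automatically from a raw SSECustomerKey parameter.
def sse_c_headers(customer_key: bytes):
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key":
            base64.b64encode(customer_key).decode(),
        "x-amz-server-side-encryption-customer-key-MD5":
            base64.b64encode(hashlib.md5(customer_key).digest()).decode(),
    }

key = os.urandom(32)  # illustrative only; a real 256-bit key comes from on premises
headers = sse_c_headers(key)
# With boto3 the equivalent request (same key needed again on get_object):
# boto3.client("s3").put_object(Bucket="example-bucket", Key="report.csv",
#                               Body=b"data", SSECustomerAlgorithm="AES256",
#                               SSECustomerKey=key)
```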
Q100 [SAA-C03]: A company uses Amazon EC2 Reserved Instances to run its data processing workload. The nightly job typically takes 7 hours to run and must finish within a 10-hour time window. The company anticipates temporary increases in demand at the end of each month that will cause the job to run over the time limit with the capacity of the current resources. Once started, the processing job cannot be interrupted before completion. The company wants to implement a solution that would provide increased resource capacity as cost-effectively as possible. What should a solutions architect do to accomplish this?
A) Deploy On-Demand Instances during periods of high demand. B) Create a second EC2 reservation for additional instances. C) Deploy Spot Instances during periods of high demand. D) Increase the EC2 instance size in the EC2 reservation to support the increased workload.
Answer: A. Notes: While Spot Instances would be the least costly option, they are not suitable for jobs that cannot be interrupted or must complete within a certain time period. On-Demand Instances would be billed for the number of seconds they are running.
Q101 [SAA-C03]: A company runs an online voting system for a weekly live television program. During broadcasts, users submit hundreds of thousands of votes within minutes to a front-end fleet of Amazon EC2 instances that run in an Auto Scaling group. The EC2 instances write the votes to an Amazon RDS database. However, the database is unable to keep up with the requests that come from the EC2 instances. A solutions architect must design a solution that processes the votes in the most efficient manner and without downtime. Which solution meets these requirements?
A. Migrate the front-end application to AWS Lambda. Use Amazon API Gateway to route user requests to the Lambda functions. B. Scale the database horizontally by converting it to a Multi-AZ deployment. Configure the front-end application to write to both the primary and secondary DB instances. C. Configure the front-end application to send votes to an Amazon Simple Queue Service (Amazon SQS) queue. Provision worker instances to read the SQS queue and write the vote information to the database. D. Use Amazon EventBridge (Amazon CloudWatch Events) to create a scheduled event to re-provision the database with larger, memory optimized instances during voting periods. When voting ends, re-provision the database to use smaller instances.
Answer: C. Notes: Decouple the ingestion of votes from the database to allow the voting system to continue processing votes without waiting for the database writes. Add dedicated workers to read from the SQS queue to allow votes to be entered into the database at a controllable rate. The votes will be added to the database as fast as the database can process them, but no votes will be lost.
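The producer side of that decoupling is just an enqueue per vote. This is a minimal sketch; the queue URL, vote fields, and the `write_vote_to_db` helper in the worker comment are hypothetical.

```python
import json

# Sketch: the web tier enqueues each vote instead of writing to the
# database directly. Queue URL and vote fields are hypothetical.
def vote_message(queue_url: str, user_id: str, choice: str):
    return {
        "QueueUrl": queue_url,
        "MessageBody": json.dumps({"user_id": user_id, "choice": choice}),
    }

msg = vote_message(
    "https://sqs.us-east-1.amazonaws.com/123456789012/votes",
    "user-42", "contestant-7")
# import boto3
# sqs = boto3.client("sqs")
# sqs.send_message(**msg)
# Worker instances then drain the queue at a rate the database can absorb:
# resp = sqs.receive_message(QueueUrl=msg["QueueUrl"], WaitTimeSeconds=20)
# for m in resp.get("Messages", []):
#     write_vote_to_db(json.loads(m["Body"]))  # hypothetical helper
#     sqs.delete_message(QueueUrl=msg["QueueUrl"],
#                        ReceiptHandle=m["ReceiptHandle"])
```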
Q102 [SAA-C03]: A company has a two-tier application architecture that runs in public and private subnets. Amazon EC2 instances running the web application are in the public subnet and an EC2 instance for the database runs on the private subnet. The web application instances and the database are running in a single Availability Zone (AZ). Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO.)
A. Create new public and private subnets in the same AZ. B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs for the web application instances. C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer. D. Create new public and private subnets in a new AZ. Create a database using an EC2 instance in the public subnet in the new AZ. Migrate the old database contents to the new database. E. Create new public and private subnets in the same VPC, each in a new AZ. Create an Amazon RDS Multi-AZ DB instance in the private subnets. Migrate the old database contents to the new DB instance.
Answer: B. E. Notes: Create new subnets in a new Availability Zone (AZ) to provide a redundant network. Create an Auto Scaling group with instances in two AZs behind the load balancer to ensure high availability of the web application and redistribution of web traffic between the two public AZs. Create an RDS DB instance in the two private subnets to make the database tier highly available too.
Q103 [SAA-C03]: A website runs a custom web application that receives a burst of traffic each day at noon. The users upload new pictures and content daily, but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the application consistently takes 1 minute to initiate upon boot up before responding to user requests. How should a solutions architect redesign the architecture to better respond to changing traffic?
A. Configure a Network Load Balancer with a slow start configuration. B. Configure Amazon ElastiCache for Redis to offload direct requests from the EC2 instances. C. Configure an Auto Scaling step scaling policy with an EC2 instance warmup condition. D. Configure Amazon CloudFront to use an Application Load Balancer as the origin.
Answer: C. Notes: The current configuration puts new EC2 instances into service before they are able to respond to transactions. This could also cause the instances to overscale. With a step scaling policy, you can specify the number of seconds that it takes for a newly launched instance to warm up. Until its specified warm-up time has expired, an EC2 instance is not counted toward the aggregated metrics of the Auto Scaling group. While scaling out, the Auto Scaling logic does not consider EC2 instances that are warming up as part of the current capacity of the Auto Scaling group. Therefore, multiple alarm breaches that fall in the range of the same step adjustment result in a single scaling activity. This ensures that you do not add more instances than you need.
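A step scaling policy with a warmup matching the 1-minute boot time could be shaped as below. This is a minimal sketch; the group name, policy name, and step thresholds are hypothetical choices.

```python
# Sketch: a step scaling policy whose instance warmup matches the
# application's 1-minute startup. Names and thresholds are hypothetical.
step_policy = {
    "AutoScalingGroupName": "web-asg",
    "PolicyName": "scale-on-cpu-steps",
    "PolicyType": "StepScaling",
    "AdjustmentType": "ChangeInCapacity",
    "EstimatedInstanceWarmup": 60,  # seconds before a new instance counts
    "StepAdjustments": [
        # breach of alarm threshold by 0-20: add 1 instance
        {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 20,
         "ScalingAdjustment": 1},
        # breach by more than 20: add 3 instances
        {"MetricIntervalLowerBound": 20, "ScalingAdjustment": 3},
    ],
}
# import boto3
# boto3.client("autoscaling").put_scaling_policy(**step_policy)
```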
Q104 [SAA-C03]: An application running on AWS uses an Amazon Aurora Multi-AZ DB cluster deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database. What should the solutions architect do to separate the read requests from the write requests?
A. Enable read-through caching on the Aurora database. B. Update the application to read from the Multi-AZ standby instance. C. Create an Aurora replica and modify the application to use the appropriate endpoints. D. Create a second Aurora database and link it to the primary database as a read replica.
Answer: C. Notes: Aurora Replicas provide a way to offload read traffic. Aurora Replicas share the same underlying storage as the main database, so lag time is generally very low. Aurora Replicas have their own endpoints, so the application will need to be configured to direct read traffic to the new endpoints. Reference: Aurora Replicas
Question 106: A company plans to migrate its on-premises workload to AWS. The current architecture is composed of a Microsoft SharePoint server that uses a Windows shared file storage. The Solutions Architect needs to use a cloud storage solution that is highly available and can be integrated with Active Directory for access control and authentication. Which of the following options can satisfy the given requirement?
A. Create a file system using Amazon EFS and join it to an Active Directory domain. B. Create a file system using Amazon FSx for Windows File Server and join it to an Active Directory domain in AWS. C. Create a Network File System (NFS) file share using AWS Storage Gateway. D. Launch an Amazon EC2 Windows Server to mount a new S3 bucket as a file volume.
Answer: B. Notes: Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. Amazon FSx is accessible from Windows, Linux, and macOS compute instances and devices. Thousands of compute instances and devices can access a file system concurrently. Reference: Amazon FSx for Windows File Server
Question 108: A Forex trading platform, which frequently processes and stores global financial data every minute, is hosted in your on-premises data center and uses an Oracle database. Due to a recent cooling problem in their data center, the company urgently needs to migrate their infrastructure to AWS to improve the performance of their applications. As the Solutions Architect, you are responsible for ensuring that the database is properly migrated and remains available in case of database server failure in the future. Which of the following is the most suitable solution to meet the requirement?
A. Create an Oracle database in RDS with Multi-AZ deployments. B. Launch an Oracle database instance in RDS with Recovery Manager (RMAN) enabled. C. Launch an Oracle Real Application Clusters (RAC) in RDS. D. Convert the database schema using the AWS Schema Conversion Tool and AWS Database Migration Service. Migrate the Oracle database to a non-cluster Amazon Aurora with a single instance.
Answer: A. Notes: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Reference: RDS Multi AZ Category: Design Resilient Architectures
Question 109: A data analytics company, which uses machine learning to collect and analyze consumer data, is using an Amazon Redshift cluster as its data warehouse. You are instructed to implement a disaster recovery plan for their systems to ensure business continuity even in the event of an AWS region outage. Which of the following is the best approach to meet this requirement?
A. Do nothing because Amazon Redshift is a highly available, fully-managed data warehouse which can withstand an outage of an entire AWS region. B. Enable Cross-Region Snapshots Copy in your Amazon Redshift Cluster. C. Create a scheduled job that will automatically take the snapshot of your Redshift Cluster and store it to an S3 bucket. Restore the snapshot in case of an AWS region outage. D. Use Automated snapshots of your Redshift Cluster.
Answer: B. Notes: You can configure Amazon Redshift to copy snapshots for a cluster to another region. To configure cross-region snapshot copy, you need to enable this copy feature for each cluster and configure where to copy snapshots and how long to keep copied automated snapshots in the destination region. When cross-region copy is enabled for a cluster, all new manual and automatic snapshots are copied to the specified region. Reference: Redshift Snapshots
Category: Design Resilient Architectures
Question 109: A start-up company has an EC2 instance that is hosting a web application. The volume of users is expected to grow in the coming months and hence, you need to add more elasticity and scalability in your AWS architecture to cope with the demand. Which of the following options can satisfy the above requirement for the given scenario? (Select TWO.)
A. Set up two EC2 instances and then put them behind an Elastic Load balancer (ELB). B. Set up two EC2 instances deployed using Launch Templates and integrated with AWS Glue. C. Set up an S3 Cache in front of the EC2 instance. D. Set up two EC2 instances and use Route 53 to route traffic based on a Weighted Routing Policy. E. Set up an AWS WAF behind your EC2 Instance.
Answer: A. D. Notes: Using an Elastic Load Balancer is an ideal solution for adding elasticity to your application. Alternatively, you can also create a policy in Route 53, such as a Weighted routing policy, to evenly distribute traffic to two or more EC2 instances. Hence, setting up two EC2 instances behind an Elastic Load Balancer (ELB) and setting up two EC2 instances with Route 53 routing traffic based on a Weighted Routing Policy are the correct answers. Reference: Elastic Load Balancing Category: Design Resilient Architectures
Question 110: A company plans to deploy a Docker-based batch application in AWS. The application will be used to process both mission-critical data as well as non-essential batch jobs. Which of the following is the most cost-effective option to use in implementing this architecture?
A. Use ECS as the container management service then set up Reserved EC2 Instances for processing both mission-critical and non-essential batch jobs. B. Use ECS as the container management service then set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively. C. Use ECS as the container management service then set up On-Demand EC2 Instances for processing both mission-critical and non-essential batch jobs. D. Use ECS as the container management service then set up Spot EC2 Instances for processing both mission-critical and non-essential batch jobs.
Answer: B. Notes: Amazon ECS lets you run batch workloads with managed or custom schedulers on Amazon EC2 On-Demand Instances, Reserved Instances, or Spot Instances. You can launch a combination of EC2 instances to set up a cost-effective architecture depending on your workload. You can launch Reserved EC2 instances to process the mission-critical data and Spot EC2 instances for processing non-essential batch jobs. There are two different charge models for Amazon Elastic Container Service (ECS): the Fargate Launch Type Model and the EC2 Launch Type Model. With Fargate, you pay for the amount of vCPU and memory resources that your containerized application requests, while for the EC2 launch type model there is no additional charge: you pay for the AWS resources (e.g. EC2 instances or EBS volumes) you create to store and run your application. You only pay for what you use, as you use it; there are no minimum fees and no upfront commitments. In this scenario, the most cost-effective solution is to use ECS as the container management service and set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively. You can use Scheduled Reserved Instances (Scheduled Instances), which enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. This ensures that you have uninterrupted compute capacity to process your mission-critical batch jobs. Reference: Amazon ECS
Category: Design Resilient Architectures
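The cost reasoning behind answer B can be sketched with simple arithmetic. The per-hour prices below are hypothetical placeholders, not actual AWS rates; the point is only that splitting the fleet into a Reserved pool (mission-critical) and a Spot pool (interruptible batch) undercuts an all-On-Demand fleet.

```python
# Illustrative cost comparison for an ECS container fleet. All prices are
# hypothetical, not real AWS rates.

HOURS_PER_MONTH = 730

# Hypothetical per-hour prices for the same instance type.
ON_DEMAND = 0.10
RESERVED = 0.06   # effective hourly rate with a 1-year commitment
SPOT = 0.03       # illustrative discount; real Spot prices fluctuate

def monthly_cost(critical_instances, batch_instances, critical_rate, batch_rate):
    """Monthly compute cost for a fleet split into two pools."""
    return HOURS_PER_MONTH * (critical_instances * critical_rate
                              + batch_instances * batch_rate)

all_on_demand = monthly_cost(4, 6, ON_DEMAND, ON_DEMAND)   # option C's approach
mixed_fleet = monthly_cost(4, 6, RESERVED, SPOT)           # option B's approach

print(f"All On-Demand:   ${all_on_demand:.2f}/month")
print(f"Reserved + Spot: ${mixed_fleet:.2f}/month")
```

With these placeholder rates the mixed fleet costs less than half the all-On-Demand fleet, while still guaranteeing reserved capacity for the mission-critical pool.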
Question 111: A company has recently adopted a hybrid cloud architecture and is planning to migrate a database hosted on-premises to AWS. The database currently has over 50 TB of consumer data, handles highly transactional (OLTP) workloads, and is expected to grow. The Solutions Architect should ensure that the database is ACID-compliant and can handle complex queries of the application. Which type of database service should the Architect use?
A. Amazon DynamoDB B. Amazon RDS C. Amazon Redshift D. Amazon Aurora
Answer: D. Notes: Amazon Aurora (Aurora) is a fully managed relational database engine that’s compatible with MySQL and PostgreSQL. You already know how MySQL and PostgreSQL combine the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. The code, tools, and applications you use today with your existing MySQL and PostgreSQL databases can be used with Aurora. With some workloads, Aurora can deliver up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. Aurora includes a high-performance storage subsystem. Its MySQL- and PostgreSQL-compatible database engines are customized to take advantage of that fast distributed storage. The underlying storage grows automatically as needed, up to 64 tebibytes (TiB). Aurora also automates and standardizes database clustering and replication, which are typically among the most challenging aspects of database configuration and administration. Reference: Aurora Category: Design Resilient Architectures
Question 112: An online stocks trading application that stores financial data in an S3 bucket has a lifecycle policy that moves older data to Glacier every month. There is a strict compliance requirement where a surprise audit can happen at any time, and you should be able to retrieve the required data in under 15 minutes under all circumstances. Your manager instructed you to ensure that retrieval capacity is available when you need it and should handle up to 150 MB/s of retrieval throughput. Which of the following should you do to meet the above requirement? (Select TWO.)
A. Retrieve the data using Amazon Glacier Select. B. Use Bulk Retrieval to access the financial data. C. Purchase provisioned retrieval capacity. D. Use Expedited Retrieval to access the financial data. E. Specify a range, or portion, of the financial data archive to retrieve.
Answer: C. D. Notes: Expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required. For all but the largest archives (250 MB+), data accessed using Expedited retrievals is typically made available within 1–5 minutes. To make an Expedited, Standard, or Bulk retrieval, set the Tier parameter in the Initiate Job (POST jobs) REST API request to the option you want, or the equivalent in the AWS CLI or AWS SDKs. If you have purchased provisioned capacity, all Expedited retrievals are automatically served through it. Provisioned capacity ensures that retrieval capacity for Expedited retrievals is available when you need it: each unit of capacity allows at least three Expedited retrievals every five minutes and provides up to 150 MB/s of retrieval throughput. You should purchase provisioned retrieval capacity if your workload requires highly reliable and predictable access to a subset of your data in minutes. Without provisioned capacity, Expedited retrievals are accepted except in rare situations of unusually high demand; if you require access to Expedited retrievals under all circumstances, you must purchase provisioned retrieval capacity. Reference: Amazon Glacier Category: Design Resilient Architectures
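The sizing from the note above reduces to a one-line calculation, using the documented figure of up to 150 MB/s of expedited retrieval throughput per provisioned capacity unit:

```python
import math

# Each provisioned capacity unit provides up to 150 MB/s of expedited
# retrieval throughput (per the note above).
UNIT_THROUGHPUT_MBPS = 150

def provisioned_units(required_mbps):
    """Provisioned capacity units needed for a target retrieval throughput."""
    return math.ceil(required_mbps / UNIT_THROUGHPUT_MBPS)

print(provisioned_units(150))  # the scenario's 150 MB/s requirement: 1 unit
print(provisioned_units(400))  # a larger hypothetical requirement: 3 units
```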
Question 113: An organization stores and manages financial records of various companies in its on-premises data center, which is almost out of space. The management decided to move all of their existing records to a cloud storage service. All future financial records will also be stored in the cloud. For additional security, all records must be prevented from being deleted or overwritten. Which of the following should you do to meet the above requirement? A. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon S3 and enable object lock. B. Use AWS DataSync to move the data. Store all of your data in Amazon EFS and enable object lock. C. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon EBS and enable object lock. D. Use AWS DataSync to move the data. Store all of your data in Amazon S3 and enable object lock.
Answer: D. Notes: AWS DataSync allows you to copy large datasets with millions of files, without having to build custom solutions with open source tools, or license and manage expensive commercial network acceleration software. You can use DataSync to migrate active data to AWS, transfer data to the cloud for analysis and processing, archive data to free up on-premises storage capacity, or replicate data to AWS for business continuity. AWS DataSync enables you to migrate your on-premises data to Amazon S3, Amazon EFS, and Amazon FSx for Windows File Server. You can configure DataSync to make an initial copy of your entire dataset, and schedule subsequent incremental transfers of changing data towards Amazon S3. Enabling S3 Object Lock prevents your existing and future records from being deleted or overwritten. AWS DataSync is primarily used to migrate existing data to Amazon S3. On the other hand, AWS Storage Gateway is more suitable if you still want to retain access to the migrated data and for ongoing updates from your on-premises file-based applications. Reference: AWS DataSync (https://aws.amazon.com/datasync/faqs/) Category: Design Secure Applications and Architectures
Question 114: A solutions architect is designing a solution to run a containerized web application by using Amazon Elastic Container Service (Amazon ECS). The solutions architect wants to minimize cost by running multiple copies of a task on each container instance. The number of task copies must scale as the load increases and decreases. Which routing solution distributes the load to the multiple tasks?
A. Configure an Application Load Balancer to distribute the requests by using path-based routing. B. Configure an Application Load Balancer to distribute the requests by using dynamic host port mapping. C. Configure an Amazon Route 53 alias record set to distribute the requests with a failover routing policy. D. Configure an Amazon Route 53 alias record set to distribute the requests with a weighted routing policy.
Answer: B. Notes: With dynamic host port mapping, multiple tasks from the same service are allowed on each container instance. You can use weighted routing policies to route traffic to instances at proportions that you specify, but you cannot use weighted routing policies to manage multiple tasks on a single container instance. Reference: Choosing a routing policy Category: Design Cost-Optimized Architectures
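For the EC2 launch type, dynamic host port mapping is enabled in the task definition by setting the host port to 0, which tells ECS to pick an ephemeral port per task. A minimal sketch of that fragment (image name and container port are illustrative placeholders):

```python
# Sketch of the ECS container definition fragment that enables dynamic host
# port mapping on the EC2 launch type. Image and ports are placeholders.
container_definition = {
    "name": "web-app",
    "image": "example/web-app:latest",   # hypothetical image
    "portMappings": [
        {
            "containerPort": 8080,  # port the app listens on in the container
            "hostPort": 0,          # 0 = ephemeral host port assigned by ECS
            "protocol": "tcp",
        }
    ],
}

# The Application Load Balancer's target group then registers each task's
# ephemeral port automatically, distributing load across all task copies.
assert container_definition["portMappings"][0]["hostPort"] == 0
```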
Question 115: A Solutions Architect needs to deploy a mobile application that can collect votes for a popular singing competition. Millions of users from around the world will submit votes using their mobile phones. These votes must be collected and stored in a highly scalable and highly available data store which will be queried for real-time ranking. Which of the following combination of services should the architect use to meet this requirement? A. Amazon Redshift and AWS Mobile Hub B. Amazon DynamoDB and AWS AppSync C. Amazon Relational Database Service (RDS) and Amazon MQ D. Amazon Aurora and Amazon Cognito
Answer: B. Notes: The scenario calls for a highly scalable and highly available data store, which points to Amazon DynamoDB. DynamoDB is a durable, scalable, and highly available data store that can be used for real-time tabulation. You can also use AWS AppSync with DynamoDB to make it easy to build collaborative apps that keep shared data updated in real time. You just specify the data for your app with simple code statements, and AWS AppSync manages everything needed to keep the app data updated in real time. This allows your app to access data in Amazon DynamoDB, trigger AWS Lambda functions, or run Amazon Elasticsearch queries, and combine data from these services to provide the exact data you need for your app.
Question 116: The usage of a company’s image-processing application is increasing suddenly with no set pattern. The application’s processing time grows linearly with the size of the image. The processing can take up to 20 minutes for large image files. The architecture consists of a web tier, an Amazon Simple Queue Service (Amazon SQS) standard queue, and message consumers that process the images on Amazon EC2 instances. When a high volume of requests occurs, the message backlog in Amazon SQS increases. Users are reporting the delays in processing. A solutions architect must improve the performance of the application in compliance with cloud best practices. Which solution will meet these requirements?
A. Purchase enough Dedicated Instances to meet the peak demand. Deploy the instances for the consumers. B. Convert the existing SQS standard queue to an SQS FIFO queue. Increase the visibility timeout. C. Configure a scalable AWS Lambda function as the consumer of the SQS messages. D. Create a message consumer that is an Auto Scaling group of instances. Configure the Auto Scaling group to scale based upon the ApproximateNumberOfMessages Amazon CloudWatch metric.
Answer: D. Notes: Scaling the consumer fleet on the ApproximateNumberOfMessages CloudWatch metric adds processing capacity as the queue backlog grows and removes it as the backlog shrinks, which follows cloud best practices. FIFO queues solve problems that occur when messages are processed out of order; they do not improve performance during sudden volume increases. Additionally, you cannot convert an SQS queue from standard to FIFO after you create it. Reference: FIFO Queues
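The scaling target behind answer D is commonly expressed as backlog per instance: size the fleet so every queued message can be processed within an acceptable latency. A small sketch of that arithmetic, with illustrative numbers taken from the scenario (20-minute processing time; the one-hour latency target is an assumption):

```python
import math

# Backlog-per-instance sizing for scaling on ApproximateNumberOfMessages.
# The latency target is an illustrative assumption, not from the question.

def desired_instances(queue_depth, seconds_per_message, acceptable_latency_s):
    """Instances needed so every queued message finishes within the
    acceptable latency, given the average per-message processing time."""
    messages_per_instance = acceptable_latency_s / seconds_per_message
    return max(1, math.ceil(queue_depth / messages_per_instance))

# 1,200 queued images, 20 minutes (1,200 s) each, 1-hour target latency:
# each instance can clear 3 messages in the window, so 400 instances.
print(desired_instances(1200, 1200, 3600))
```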
Question 117: An application is hosted on an EC2 instance with multiple EBS Volumes attached and uses Amazon Neptune as its database. To improve data security, you encrypted all of the EBS volumes attached to the instance to protect the confidential data stored in the volumes. Which of the following statements are true about encrypted Amazon Elastic Block Store volumes? (Select TWO.)
A. All data moving between the volume and the instance are encrypted. B. Snapshots are automatically encrypted. C. The volumes created from the encrypted snapshot are not encrypted. D. Snapshots are not automatically encrypted. E. Only the data in the volume is encrypted and not all the data moving between the volume and the instance. Answer: A. B. Notes: Amazon Elastic Block Store (Amazon EBS) provides block level storage volumes for use with EC2 instances. EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone. EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance. Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. You can encrypt both the boot and data volumes of an EC2 instance. Reference: EBS
Question 118: A reporting application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. For complex reports, the application can take up to 15 minutes to respond to a request. A solutions architect is concerned that users will receive HTTP 5xx errors if a report request is in process during a scale-in event. What should the solutions architect do to ensure that user requests will be completed before instances are terminated?
A. Enable sticky sessions (session affinity) for the target group of the instances. B. Increase the instance size in the Application Load Balancer target group. C. Increase the cooldown period for the Auto Scaling group to a greater amount of time than the time required for the longest running responses. D. Increase the deregistration delay timeout for the target group of the instances to greater than 900 seconds.
Answer: D. Notes: By default, Elastic Load Balancing waits 300 seconds before completing the deregistration process, which gives in-flight requests to the target time to complete. To change the amount of time that Elastic Load Balancing waits, update the deregistration delay value. Reference: Deregistration Delay.
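The deregistration delay is a target group attribute in the ELBv2 API. A minimal sketch of the parameters for raising it above the 15-minute (900-second) report time; the target group ARN is a placeholder:

```python
# Parameters for the ELBv2 ModifyTargetGroupAttributes call that raises the
# deregistration delay above 900 seconds. The ARN is a placeholder.
params = {
    "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                      "targetgroup/example/0123456789abcdef",
    "Attributes": [
        {"Key": "deregistration_delay.timeout_seconds", "Value": "960"},
    ],
}

# With boto3 these parameters would be passed as:
#   boto3.client("elbv2").modify_target_group_attributes(**params)
assert int(params["Attributes"][0]["Value"]) > 900
```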
Question 119: A company used Amazon EC2 Spot Instances for a demonstration that is now complete. A solutions architect must remove the Spot Instances to stop them from incurring cost. What should the solutions architect do to meet this requirement?
A. Cancel the Spot request only. B. Terminate the Spot Instances only. C. Cancel the Spot request. Terminate the Spot Instances. D. Terminate the Spot Instances. Cancel the Spot request.
Answer: C. Notes: To remove the Spot Instances, the appropriate steps are to cancel the Spot request and then to terminate the Spot Instances. Reference:Spot Instances
Question 120: Which components are required to build a site-to-site VPN connection on AWS? (Select TWO.) A. An Internet Gateway B. A NAT Gateway C. A Customer Gateway D. A Virtual Private Gateway E. Amazon API Gateway
Answer: C. D. Notes: A virtual private gateway is attached to a VPC to create a site-to-site VPN connection on AWS. You can accept private encrypted network traffic from an on-premises data center into your VPC without the need to traverse the open public internet. A customer gateway is required for the VPN connection to be established. A customer gateway device is set up and configured in the customer’s data center. Reference: What is AWS Site-to-Site VPN?
Question 121: A company runs its website on Amazon EC2 instances behind an Application Load Balancer that is configured as the origin for an Amazon CloudFront distribution. The company wants to protect against cross-site scripting and SQL injection attacks. Which approach should a solutions architect recommend to meet these requirements?
A. Enable AWS Shield Advanced. List the CloudFront distribution as a protected resource. B. Define an AWS Shield Advanced policy in AWS Firewall Manager to block cross-site scripting and SQL injection attacks. C. Set up AWS WAF on the CloudFront distribution. Use conditions and rules that block cross-site scripting and SQL injection attacks. D. Deploy AWS Firewall Manager on the EC2 instances. Create conditions and rules that block cross-site scripting and SQL injection attacks.
Answer: C. Notes: AWS WAF can detect the presence of SQL code that is likely to be malicious (known as SQL injection). AWS WAF also can detect the presence of a script that is likely to be malicious (known as cross-site scripting). Reference: AWS WAF.
Question 122: A media company is designing a new solution for graphic rendering. The application requires up to 400 GB of storage for temporary data that is discarded after the frames are rendered. The application requires approximately 40,000 random IOPS to perform the rendering. What is the MOST cost-effective storage option for this rendering application? A. A storage optimized Amazon EC2 instance with instance store storage B. A storage optimized Amazon EC2 instance with a Provisioned IOPS SSD (io1 or io2) Amazon Elastic Block Store (Amazon EBS) volume C. A burstable Amazon EC2 instance with a Throughput Optimized HDD (st1) Amazon Elastic Block Store (Amazon EBS) volume D. A burstable Amazon EC2 instance with Amazon S3 storage over a VPC endpoint
Answer: A. Notes: SSD-Backed Storage Optimized (i2) instances provide more than 365,000 random IOPS. The instance store has no additional cost, compared with the regular hourly cost of the instance. Reference: Amazon EC2 pricing.
Question 123: A company is deploying a new application that will consist of an application layer and an online transaction processing (OLTP) relational database. The application must be available at all times. However, the application will have periods of inactivity. The company wants to pay the minimum for compute costs during these idle periods. Which solution meets these requirements MOST cost-effectively? A. Run the application in containers with Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Use Amazon Aurora Serverless for the database. B. Run the application on Amazon EC2 instances by using a burstable instance type. Use Amazon Redshift for the database. C. Deploy the application and a MySQL database to Amazon EC2 instances by using AWS CloudFormation. Delete the stack at the beginning of the idle periods. D. Deploy the application on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. Use Amazon RDS for MySQL for the database.
Answer: A. Notes: When Amazon ECS uses Fargate for compute, it incurs no costs when the application is idle. Aurora Serverless also incurs no compute costs when it is idle. Reference: AWS Fargate Pricing.
Question 124: Which options best describe characteristics of events in event-driven design? (Select THREE.)
A. Events are usually processed asynchronously
B. Events usually expect an immediate reply
C. Events are used to share information about a change in state
D. Events are observable
E. Events direct the actions of targets
Answer: A. C. D. Notes: Events are used to share information about a change in state. Events are observable and usually processed asynchronously. Events do not direct the actions of targets, and events do not expect a reply. Events can be used to trigger synchronous communications, and in this case, an event source like API Gateway might wait for a response. Reference:Event Driven Design on AWS
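The three correct properties can be seen in a tiny sketch: the producer publishes a fact about a state change onto a queue and moves on without expecting a reply, while a consumer observes and processes it asynchronously. The event shape here is purely illustrative.

```python
import queue
import threading

# Minimal event-driven sketch: events record state changes, are observable,
# and are processed asynchronously. The producer never waits for a reply.
events = queue.Queue()
processed = []

def consumer():
    while True:
        event = events.get()
        if event is None:   # shutdown signal for this demo
            break
        processed.append(f"order {event['order_id']} is now {event['state']}")
        events.task_done()

worker = threading.Thread(target=consumer)
worker.start()

# Producer: publish a fact about a state change, then continue immediately.
events.put({"order_id": 42, "state": "shipped"})
events.put(None)
worker.join()
print(processed)
```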
Question 125: Which of these scenarios would lead you to choose AWS AppSync and GraphQL APIs over API Gateway and REST APIs? (Select THREE.)
A. You need a strongly typed schema for developers.
B. You need a server-controlled response.
C. You need multiple authentication options to the same API.
D. You need to integrate with existing clients.
E. You need client-specific responses that require data from many backend resources.
Answer: A. C. E. Notes: With GraphQL, you define the schema and data types in advance. If it’s not in the schema, you can’t query for it. Developers can download the schema and generate source code from it to work with the API. Consider GraphQL for applications where you need a client-specific response built from data in many backend sources. When you need a server-controlled response, choose REST. AWS AppSync allows you to use multiple authentication options on the same API, whereas API Gateway allows you to associate only one authentication option per resource. When you need to integrate with existing clients, REST is much more mature and has more tooling; most clients are written for REST. Reference: GraphQL vs. REST
Question 126: Which options are TRUE statements about serverless security? (Select THREE.)
A. Logging and metrics are especially critical because you can’t go back to the server to see what happened when something fails.
B. Because you aren’t responsible for the operating system and the network itself, you don’t need to worry about mitigating external attacks.
C. The distributed perimeter means your code needs to defend each of the potential paths that might be used to reach your functions.
D. You can use Lambda’s fine-grained controls to scope its reach with a much smaller set of permissions as opposed to traditional approaches.
E. You may use the same tooling as with your server-based applications, but the best practices you follow will be different.
Answer: A. C. and D.
Notes: In Lambda’s ephemeral environment, logging and metrics are more critical because once the code runs, you can no longer go back to the server to find out what has happened. The security perimeter you are defending has to consider the different services that might trigger a function, and your code needs to defend each of those potential paths. You can use Lambda’s fine-grained controls to scope its reach with a much smaller set of permissions as opposed to traditional approaches where you may give broad permissions for your application on its servers. Scope your functions to limit permission sharing between any unrelated components. Security best practices don’t change with serverless, but the tooling you’ll use will change. For example, techniques such as installing agents on your host may not be relevant any more. While you aren’t responsible for the operating system or the network itself, you do need to protect your network boundaries and mitigate external attacks.
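The "smaller set of permissions" idea from option D amounts to scoping each function's IAM policy to exactly the actions and resources it touches. A minimal sketch of such a policy document; the table name, region, and account ID are placeholders:

```python
import json

# Least-privilege sketch for a single Lambda function: one action on one
# DynamoDB table, rather than broad account-wide permissions. The ARN's
# region, account ID, and table name are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Orders",
        }
    ],
}
print(json.dumps(policy, indent=2))
```

An unrelated function in the same application would get its own, separately scoped policy, so no permissions are shared between components that don't need them.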
Question 127: Which options are examples of steps you take to protect your serverless application from attacks? (Select FOUR.)
A. Update your operating system with the latest patches.
B. Configure geoblocking on Amazon CloudFront in front of regional API endpoints.
C. Disable origin access identity on Amazon S3.
D. Disable CORS on your APIs.
E. Use resource policies to limit access to your APIs to users from a specified account.
F. Filter out specific traffic patterns with AWS WAF.
G. Parameterize queries so that your Lambda function expects a single input.
Answer: B. E. F. G.
Notes: You aren’t responsible for the operating system or network configuration where your functions run, and AWS is ensuring the security of the data within those managed services. You are responsible for protecting data entering your application and limiting access to your AWS resources. You still need to protect data that originates client-side or that travels to or from endpoints outside AWS.
When integrating CloudFront with regional API endpoints, CloudFront also supports geoblocking, which you can use to prevent requests from being served from particular geographic locations.
Use origin access identity with Amazon S3 to allow bucket access only through CloudFront.
CORS is a browser security feature that restricts cross-origin HTTP requests that are initiated from scripts running in the browser. It is enforced by the browser. If your APIs will receive cross-origin requests, you should enable CORS support in API Gateway.
IAM resource policies can be used to limit access to your APIs. For example, you can restrict access to users from a specified AWS account or deny traffic from a specified source IP address or CIDR block.
AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits. AWS WAF lets you create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define.
Lambda functions are triggered by events. These events submit an event parameter to the Lambda function and could be exploited for SQL injection. You can prevent this type of attack by parameterizing queries so that your Lambda function expects a single input.
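Parameterizing a query means binding the user-supplied value as a single parameter instead of concatenating it into the SQL string. A sketch using SQLite as a stand-in backend (the table and inputs are illustrative):

```python
import sqlite3

# Parameterized-query sketch: the user input is bound via a placeholder, so
# an injection payload is treated as one inert value, not as SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def get_role(user_input):
    # The ? placeholder ensures the input is a single bound value, so a
    # payload like "alice' OR '1'='1" simply matches no row.
    row = conn.execute(
        "SELECT role FROM users WHERE name = ?", (user_input,)
    ).fetchone()
    return row[0] if row else None

print(get_role("alice"))             # 'admin'
print(get_role("alice' OR '1'='1"))  # None: the injection payload is inert
```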
Question 128: Which options reflect best practices for automating your deployment pipeline with serverless applications? (Select TWO.)
A. Select one deployment framework and use it for all of your deployments for consistency.
B. Use different AWS accounts for each environment in your deployment pipeline.
C. Use AWS SAM to configure safe deployments and include pre- and post-traffic tests.
D. Create a specific AWS SAM template to match each environment to keep them distinct.
Answer: B. and C.
Notes: You may use multiple deployment frameworks for an application so that you can use the framework that best suits the type of deployment. For example, you might use the AWS SAM framework to define your application stack and deployment preferences and then use AWS CDK to provision any infrastructure-related resources, such as the CI/CD pipeline.
It is a best practice to use different AWS accounts for each environment. This approach limits the blast radius of issues that occur and makes it less complex to differentiate which resources are associated with each environment. Because of the way costs are calculated with serverless, spinning up additional environments doesn’t add much to your cost.
AWS SAM lets you configure safe deployment preferences so that you can run code before the deployment, and after the deployment and rollback if there is a problem. You can also specify a method for shifting traffic to the new version a little bit at a time.
It is a best practice to use one AWS SAM template across environments and use options to parameterize values that are different per environment. This helps ensure that the environment is built with exactly the same stack.
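In an AWS SAM template, the safe-deployment preferences described above look roughly like the fragment below. The function names and hook logical IDs are placeholders; the `DeploymentPreference` block is what configures gradual traffic shifting with pre- and post-traffic tests and automatic rollback.

```yaml
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      AutoPublishAlias: live              # required for traffic shifting
      DeploymentPreference:
        Type: Canary10Percent5Minutes     # shift 10% of traffic, wait 5 minutes
        Hooks:
          PreTraffic: !Ref PreTrafficCheckFunction    # runs before traffic shifts
          PostTraffic: !Ref PostTrafficCheckFunction  # runs after; failure rolls back
```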
Question 129: Your application needs to connect to an Amazon RDS instance on the backend. What is the best recommendation to the developer whose function must read from and write to the Amazon RDS instance?
A. Use reserved concurrency to limit the number of concurrent functions that would try to write to the database
B. Use the database proxy feature to provide connection pooling for the functions
C. Initialize the number of connections you want outside of the handler
D. Use the database TTL setting to clean up connections
Answer: B. Notes: Use the database proxy feature (Amazon RDS Proxy) to provide connection pooling for the functions, so many concurrent Lambda invocations reuse a small pool of connections instead of overwhelming the database.
Question 130: A company runs a cron job on an Amazon EC2 instance on a predefined schedule. The cron job calls a bash script that encrypts a 2 KB file. A security engineer creates an AWS Key Management Service (AWS KMS) CMK with a key policy.
The key policy and the EC2 instance role have the necessary configuration for this job.
Which process should the bash script use to encrypt the file?
A) Use the aws kms encrypt command to encrypt the file by using the existing CMK.
B) Use the aws kms create-grant command to generate a grant for the existing CMK.
C) Use the aws kms encrypt command to generate a data key. Use the plaintext data key to encrypt the file.
D) Use the aws kms generate-data-key command to generate a data key. Use the encrypted data key to encrypt the file.
Answer: D
Notes: AWS KMS can directly encrypt raw data up to 4 KB, so option A would technically work for a 2 KB file, but it is not the recommended practice. Creating a grant is a permissions operation, not an encryption operation. The aws kms encrypt command does not generate a data key. Only option D generates a data key, which is returned in both plaintext and encrypted form: you encrypt the file with the plaintext data key, then store the encrypted data key alongside the ciphertext (for example, in the encrypted file’s metadata) for later decryption.
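The envelope-encryption flow from option D can be sketched structurally. In the real bash script, `generate_data_key` would be the `aws kms generate-data-key` call, which returns the data key in both plaintext and KMS-encrypted form; here a local stand-in is used so the whole data flow is visible, and the XOR keystream below is a toy cipher standing in for AES, not anything to use in production.

```python
import os
import hashlib

# Envelope-encryption sketch. NOT real KMS and NOT a production cipher:
# xor_keystream is a toy stand-in for AES so the pattern is runnable locally.

def xor_keystream(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256 counter-mode keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def generate_data_key():
    """Stand-in for `aws kms generate-data-key`: returns the data key in
    plaintext and in 'encrypted' form (KMS does this server-side with the CMK)."""
    plaintext_key = os.urandom(32)
    master_key = b"stand-in-for-the-KMS-CMK........"
    encrypted_key = xor_keystream(plaintext_key, master_key)
    return plaintext_key, encrypted_key

plaintext_key, encrypted_key = generate_data_key()
ciphertext = xor_keystream(b"2 KB of report data...", plaintext_key)
# Store `ciphertext` plus `encrypted_key`; discard `plaintext_key` immediately.
# To decrypt later: have KMS decrypt `encrypted_key`, then reverse the cipher.
restored = xor_keystream(ciphertext, plaintext_key)
print(restored)
```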
Question 131: A Security engineer must develop an AWS Identity and Access Management (IAM) strategy for a company’s organization in AWS Organizations. The company needs to give developers autonomy to develop and test their applications on AWS, but the company also needs to implement security guardrails to help protect itself. The company creates and distributes applications with different levels of data classification and types. The solution must maximize scalability.
Which combination of steps should the security engineer take to meet these requirements? (Choose three.)
A) Create an SCP to restrict access to highly privileged or unauthorized actions to specific IAM principals. Assign the SCP to the appropriate AWS accounts.
B) Create an IAM permissions boundary to allow access to specific actions and IAM principals. Assign the IAM permissions boundary to all IAM principals within the organization.
C) Create a delegated IAM role that has capabilities to create other IAM roles. Use the delegated IAM role to provision IAM principals by following the principle of least privilege.
D) Create OUs based on data classification and type. Add the AWS accounts to the appropriate OU. Provide developers access to the AWS accounts based on business need.
E) Create IAM groups based on data classification and type. Add only the required developers’ IAM role to the IAM groups within each AWS account.
F) Create IAM policies based on data classification and type. Add the minimum required IAM policies to the developers’ IAM role within each AWS account.
Answer: A B and C
Notes:
If you look at the choices, there are three related to SCPs, which control services, and three related to IAM and permissions boundaries.
Limiting services doesn't help with data classification; using boundaries, policies, and roles gives you the scalability needed to solve the problem.
Question 132: A company is ready to deploy a public web application. The company will use AWS and will host the application on an Amazon EC2 instance. The company must use SSL/TLS encryption. The company is already using AWS Certificate Manager (ACM) and will export a certificate for use with the deployment.
How can a security engineer deploy the application to meet these requirements?
A) Put the EC2 instance behind an Application Load Balancer (ALB). In the EC2 console, associate the certificate with the ALB by choosing HTTPS and 443.
B) Put the EC2 instance behind a Network Load Balancer. Associate the certificate with the EC2 instance.
C) Put the EC2 instance behind a Network Load Balancer (NLB). In the EC2 console, associate the certificate with the NLB by choosing HTTPS and 443.
D) Put the EC2 instance behind an Application Load Balancer. Associate the certificate with the EC2 instance.
Answer: A
Notes: You can’t directly install Amazon-issued certificates on Amazon Elastic Compute Cloud (EC2) instances. Instead, use the certificate with a load balancer, and then register the EC2 instance behind the load balancer.
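A sketch of attaching the certificate at the load balancer (every ARN below is a placeholder):

```bash
# Create an HTTPS:443 listener on the ALB that serves the ACM certificate
# and forwards to the target group containing the EC2 instance.
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/1234567890abcdef \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=arn:aws:acm:us-east-1:123456789012:certificate/abcd-1234 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/abcdef1234567890
```

TLS then terminates at the ALB, so the certificate never needs to be installed on the instance itself.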
What are the 6 pillars of the AWS Well-Architected Framework?
AWS Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. Based on six pillars — operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability — AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures, and implement designs that can scale over time.
1. Operational Excellence
The operational excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. You can find prescriptive guidance on implementation in the Operational Excellence Pillar whitepaper.
2. Security The security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. You can find prescriptive guidance on implementation in the Security Pillar whitepaper.
3. Reliability The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. You can find prescriptive guidance on implementation in the Reliability Pillar whitepaper.
4. Performance Efficiency The performance efficiency pillar includes the ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve. You can find prescriptive guidance on implementation in the Performance Efficiency Pillar whitepaper.
5. Cost Optimization The cost optimization pillar includes the ability to avoid or eliminate unneeded cost or suboptimal resources. You can find prescriptive guidance on implementation in the Cost Optimization Pillar whitepaper.
6. Sustainability
The ability to increase efficiency across all components of a workload by maximizing the benefits from the provisioned resources.
There are six best practice areas for sustainability in the cloud:
Region Selection – AWS Global Infrastructure
User Behavior Patterns – Auto Scaling, Elastic Load Balancing
Software and Architecture Patterns – AWS Design Principles
Data Patterns – storage lifecycle policies and data management
Hardware Patterns – right-sizing and energy-efficient instance types
Development and Deployment Process – managed services and automation
The AWS Well-Architected Framework provides architectural best practices across the six pillars for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. The framework provides a set of questions that allows you to review an existing or proposed architecture. It also provides a set of AWS best practices for each pillar. Using the Framework in your architecture helps you produce stable and efficient systems, which allows you to focus on functional requirements.
Other AWS Facts and Summaries and Questions/Answers Dump
The reality, of course, today is that if you come up with a great idea you don’t get to go quickly to a successful product. There’s a lot of undifferentiated heavy lifting that stands between your idea and that success. The kinds of things that I’m talking about when I say undifferentiated heavy lifting are things like these: figuring out which servers to buy, how many of them to buy, what time line to buy them.
Eventually you end up with heterogeneous hardware and you have to match that. You have to think about backup scenarios if you lose your data center or lose connectivity to a data center. Eventually you have to move facilities. There’s negotiations to be done. It’s a very complex set of activities that really is a big driver of ultimate success.
But they are undifferentiated from, it’s not the heart of, your idea. We call this muck. And it gets worse because what really happens is you don’t have to do this one time. You have to drive this loop. After you get your first version of your idea out into the marketplace, you’ve done all that undifferentiated heavy lifting, you find out that you have to cycle back. Change your idea. The winners are the ones that can cycle this loop the fastest.
On every cycle of this loop you have this undifferentiated heavy lifting, or muck, that you have to contend with. I believe that for most companies, and it’s certainly true at Amazon, that 70% of your time, energy, and dollars go into the undifferentiated heavy lifting and only 30% of your energy, time, and dollars gets to go into the core kernel of your idea.
I think what people are excited about is that they’re going to get a chance they see a future where they may be able to invert those two. Where they may be able to spend 70% of their time, energy and dollars on the differentiated part of what they’re doing.
So my exam was yesterday and I got the results in 24 hours. I think that's how they review all SAA exams now, not showing the results right away anymore.
I scored 858. I was practicing with Stephane's Udemy lectures and Bonso's exam tests. My test results were as follows: Test 1: 63%, 93%; Test 2: 67%, 87%; Test 3: 81%; Test 4: 72%; Test 5: 75%; Test 6: 81%; Stephane's test: 80%.
I was reading all question explanations (even the ones I got correct)
The actual exam was pretty much similar to these. The topics I got were:
A lot of S3 (make sure you know all of it from head to toes)
VPC peering
DataSync and Database Migration Service in same questions. Make sure you know the difference
One EKS question
2-3 KMS questions
Security group question
A lot of RDS Multi-AZ
SQS + SNS fan out pattern
ECS microservice architecture question
Route 53
NAT gateway
And that’s all I can remember)
I took extra 30 minutes, because English is not my native language and I had plenty of time to think and then review flagged questions.
Hey guys, just giving my update so all of you guys working towards your certs can stay motivated as these success stories drove me to reach this goal.
Background: 12 years of military IT experience, never worked with the cloud. I’ve done 7 deployments (that is a lot in 12 years), at which point I came home from the last one burnt out with a family that barely knew me. I knew I needed a change, but had no clue where to start or what I wanted to do. I wasn’t really interested in IT but I knew it’d pay the bills. After seeing videos about people in IT working from home(which after 8+ years of being gone from home really appealed to me), I stumbled across a video about a Solutions Architect’s daily routine working from home and got me interested in AWS.
AWS Solutions Architect SAA Certification Preparation time: It took me 68 days straight of hard work to pass this exam with confidence. No rest days, more than 120 pages of hand-written notes and hundreds and hundreds of flash cards.
In the beginning, I hopped on Stephane Maarek’s course for the CCP exam just to see if it was for me. I did the course in about a week and then after doing some research on here, got the CCP Practice exams from tutorialsdojo.com Two weeks after starting the Udemy course, I passed the exam. By that point, I’d already done lots of research on the different career paths and the best way to study, etc.
Cantrill(10/10) – That same day, I hopped onto Cantrill’s course for the SAA and got to work. Somebody had mentioned that by doing his courses you’d be over-prepared for the exam. While I think a combination of material is really important for passing the certification with confidence, I can say without a doubt Cantrill’s courses got me 85-90% of the way there. His forum is also amazing, and has directly contributed to me talking with somebody who works at AWS to land me a job, which makes the money I spent on all of his courses A STEAL. As I continue my journey (up next is SA Pro), I will be using all of his courses.
Neal Davis(8/10) – After completing Cantrill’s course, I found myself needing a resource to reinforce all the material I’d just learned. AWS is an expansive platform and the many intricacies of the different services can be tricky. For this portion, I relied on Neal Davis’s Training Notes series. These training notes are a very condensed version of the information you’ll need to pass the exam, and with the proper context are very useful to find the things you may have missed in your initial learnings. I will be using his other Training Notes for my other exams as well.
TutorialsDojo(10/10) – These tests filled in the gaps and allowed me to spot my weaknesses and shore them up. I actually think my real exam was harder than these, but because I’d spent so much time on the material I got wrong, I was able to pass the exam with a safe score.
As I said, I was surprised at how difficult the exam was. A lot of my questions were related to DBs, and a lot of them gave no context as to whether the data being loaded into them was SQL or NoSQL, which made the choice selection a little frustrating. A lot of the questions have 2 VERY SIMILAR answers, and often the wording of the answers could be easy to misinterpret (such as when you are creating a Read Replica, do you attach it to the primary application DB that is slowing down because of read issues, or attach it to the service that is causing the primary DB to slow down). For context, I was scoring 95-100% on the TD exams prior to taking the test and managed an 823 on the exam, so I don't know if I got unlucky with a hard test or if I'm not as prepared as I thought I was (i.e. over-thinking questions).
Anyways, up next is going back over the practical parts of the course as I gear up for the SA Pro exam. I will be taking my time with this one, and re-learning the Linux CLI in preparation for finding a new job.
PS if anybody on here is hiring, I’m looking! I’m the hardest worker I know and my goal is to make your company as streamlined and profitable as possible. 🙂
Testimonial: How did you prepare for AWS Certified Solutions Architect – Associate Level certification?
Best way to prepare for aws solution architect associate certification
Practical knowledge counts for about 30%; the rest is Jayendra's blog and practice dumps.
Buying Udemy courses alone doesn't make you pass. I can say with confidence that without the dumps and Jayendra's blog it is not easy to clear the certification.
Read FAQs of S3, IAM, EC2, VPC, SQS, Autoscaling, Elastic Load Balancer, EBS, RDS, Lambda, API Gateway, ECS.
Read the Security Whitepaper and Shared Responsibility model.
Basic questions on the topics most recently introduced to the exam, such as Amazon Kinesis, are very important.
Whitepapers contain important information about each service and are published by Amazon on their website. If you are preparing for the AWS certifications, it is very important to read some of the most recommended whitepapers before writing the exam.
Data security questions can be among the more challenging, and it's worth noting that you need a good understanding of the security processes described in the whitepaper titled "Overview of Security Processes".
In the above list, most important whitepapers are Overview of Security Processes and Storage Options in the Cloud. Read more here…
Stephane Maarek's Udemy course, and his 6 practice exams
Adrian Cantrill's online course (about 60% done)
TutorialDojo’s exams
(My company has udemy business account so I was able to use Stephen’s course/exam)
I scheduled my exam at the end of March, and started with Adrian's. But I was dumb thinking that I could go through his course within 3 weeks… I stopped around 12% of the way through his course, went to the textbook, and finished reading the all-in-one exam guide within a weekend. Then I started going through Stephane's course. While working through it, I pushed the exam back to the end of April, because I knew I wouldn't be ready by the time the exam came along.
Five days before the exam, I finished Stephane's course and then did the final exam on the course. I failed miserably (around 50%). So I did one of Stephane's practice exams and did worse (42%). I thought maybe his exams were just slightly difficult, so I went and bought Jon Bonso's exams and got 60% on the first one. And then I realized, based on all the questions on the exams, that I was definitely lacking some fundamentals. I went back to Adrian's course and things were definitely sticking more – I think it has to do with his explanations + more practical stuff. Unfortunately, I could not finish his course before the exam (because I was cramming), and by the day of the exam, I had only done four of Bonso's six exams, barely passing one of them.
Please, don't do what I did. I was desperate to get this thing over with. I wanted to move on and work on other things for my job search, but if you're not in this situation, please don't do this. I can't for the love of god tell you about OAI and CloudFront and why that's different from an S3 URL. The only thing that I can remember is all the practical stuff that I did with Adrian's course. I'll never forget how to create a VPC, because he makes you manually go through it. I'm not against Stephane's course – each is different in its own way (see the tips below).
So here’s what I recommend doing before writing for aws exam:
Don't schedule your exam beforehand. Go through the materials that you are using, and make sure you get at least 80% on all of Jon Bonso's exams (I'd recommend 90% or higher).
If you like to learn things practically, I recommend Adrian's course. If you like to learn things conceptually, go with Stephane Maarek's course. I found Stephane's course more detailed when going through different architectures, but I can't say that for sure because I didn't finish Adrian's course.
Jon Bonso's exams were about the same difficulty as the actual exam, but slightly more tricky. For example, many of the questions will give you two different situations, and you really have to figure out what they are asking for, because the situations might contradict each other while the actual question asks one specific thing. However, there were a few questions that were definitely obvious if you knew the service.
I'm upset that even though I passed the exam, I'm still lacking some practical skills, so I'm just going to go through Adrian's Developer course, but without cramming this time. If you actually learn the materials and practice them, they are definitely useful in the real world. I hope this helps you pass and actually learn the stuff.
P.S I vehemently disagree with Adrian in one thing in his course. doggogram.io is definitely better than catagram.io, although his cats are pretty cool
I sat the exam at a PearsonVUE test centre and scored 816.
The exam had lots of questions around S3, RDS and storage. To be honest it was a bit of a blur but they are the ones I remember.
I was a bit worried before sitting the exam as I had only hit 76% in the official AWS practice exam the night before, but it turned out alright in the end!
I have around 8 years of experience in IT but AWS was relatively new to me around 5 weeks ago.
Training Material Used
Firstly I ran through the u/stephanemaarek course which I found to pretty much cover all that was required!
I then used the u/Tutorials_Dojo practice exams. I took one before starting Stephane’s course to see where I was at with no training. I got 46% but I suppose a few of them were lucky guesses!
I then finished the course, took another test, and hit around 65%. TD was great as they gave explanations for the answers. I used these to go back to the course and go over my weak areas again.
I then couldn't seem to get higher than the low 70s on the exams, so I went through u/neal-davis's course, which was also great as it had an "Exam Cram" video at the end of each topic.
I also set up flashcards on BrainScape which helped me remember AWS services and what their function is.
All in all it was a great learning experience and I look forward to putting my skills into action!
S3 use cases, storage tiers, and CloudFront were pretty prominent too
Only got one “figure out what’s wrong with this IAM policy” question
A handful of dynamodb questions and a handful for picking use cases between different database types or caching layers.
Other typical tips: When you're unclear on which answer to pick, or if they seem very similar, work on eliminating answers first. "It can't be X because of Y" can help a lot.
Testimonial: Passed the AWS Solutions Architect Associate exam! I prepared mostly from freely available resources as my basics were strong. Bought Jon Bonso’s tests on Udemy and they turned out to be super important while preparing for those particular type of questions (i.e. the questions which feel subjective, but they aren’t), understanding line of questioning and most suitable answers for some common scenarios.
Created a Notion notebook to note down those common scenarios, exceptions, what supports what, integrations etc. Used that notebook and cheat sheets on Tutorials Dojo website for revision on final day.
Found the exam a little tougher than Jon Bonso's, but his practice tests on Udemy were crucial. Wouldn't have passed without them.
Piece of advice for upcoming test aspirants: Get your basics right, especially networking. Understand properly how different services interact in a VPC. Focus on the last line of the question; it usually gives you a hint about what exactly is needed — whether you need cost optimization, performance efficiency, or high availability. Little to no operational effort means serverless. Understand all serverless services thoroughly.
I have almost no experience with AWS, except for completing the Certified Cloud Practitioner earlier this year. My work is pushing all IT employees to complete some cloud training and certifications, which is why I chose to do this.
How I Studied: My company pays for acloudguru subscriptions for its employees, so I used that for the bulk of my learning. I took notes on 3×5 notecards on the key terms and concepts for review.
Once I scored passing grades on the ACG practice tests, I took the Jon Bonso tests on Udemy, which are much more difficult and fairly close to the difficulty of the actual exam. I scored 45%-74% on every Bonso practice test, and spent 1-2 hours after each test reviewing what I missed, supplementing my note cards, and taking time to understand my weak spots. I only took these tests once each, but in between each practice test, I would review all my note cards until I had the content largely memorized.
The Test: This was one of the most difficult certification tests I've ever done. The exam was remote-proctored with PearsonVUE (I used PSI for the CCP and didn't like it as much). I felt like I was failing half the time. I marked about 25% of the questions for review, and I used up the entire allotted time. The questions are mostly about understanding which services interact with which other services, or which services are incompatible with the scenario. It was important for me to read through each response and eliminate the ones that don't make sense. A lot of the responses mentioned AWS services that sound good but don't actually work together (e.g. if it doesn't make sense to have service X querying database Y, that probably isn't the right answer). I can't point to one domain that really needs to be studied more than any other. You need to know all of the content for the exam.
Final Thoughts: The ACG practice tests are not a good metric for success for the actual SAA exam, and I would not have passed without Bonso’s tests showing me my weak spots. PearsonVUE is better than PSI. Make sure to study everything thoroughly and review excessively. You don’t necessarily need 5 different study sources and years of experience to be able to pass (although both of those definitely help) and good luck to anyone that took the time to read!
AWS Certified Solutions Architect Associate So glad to pass my first AWS certification after 6 weeks of preparation.
My Preparation:
After some trial and error in picking the appropriate learning content, I eventually went with the community's advice and took the course presented by the amazing u/stephanemaarek, in addition to the practice exams by Jon Bonso. At this point, I can't say anything that hasn't been said already about how helpful they are. It's a great combination of learning material, and I appreciate the instructor's work and the community's help in this sub.
Review:
Throughout the course I noted down the important points and used the course slides as a reference in the first review iteration. Before resorting to Udemy's practice exams, I purchased a practice exam from another website, which I regret (not to defame the other vendor; I would simply recommend Udemy). Udemy's practice exams were incredible, in that they made me aware of the points I hadn't understood clearly. After each exam, I would go through both the incorrect answers and the questions I had marked for review, write down the topics to review, and read the explanations thoroughly. The explanations point to the respective AWS documentation, which is a recommended read, especially if you don't feel confident with the service. What I want to note is that I didn't get satisfying marks on my first go at the practice exams (I averaged ~70%). Throughout the 6 practice exams, I aggregated a long list of topics to review and went back to the course slides and practice-exam explanations, in addition to the AWS documentation for the respective services. On the second go I averaged 85%. The second attempt at the exams was important as a confidence boost, as I made sure I understood the services more clearly.
The take away:
Don’t feel disappointed if you get bad results at your practice-exams. Make sure to review the topics and give it another shot.
The AWS documentation is your friend! It is very clear and concise. My only regret is not having referenced the documentation enough after learning new services.
The exam:
I scheduled the exam using PSI. I was very confident going into the exam, but going through such an exam environment for the first time made me feel under pressure. Partly because I didn't feel comfortable being monitored (I was afraid of being disqualified if I moved or covered my mouth), but mostly because there was a lot at stake on my side, and I had to pass on the first go. The questions were harder than expected, but I tried to analyze the questions more and eliminate the invalid answers. I was very nervous and kept reviewing flagged questions up to the last minute. Luckily, I pulled through.
The take away:
The proctors are friendly; just make sure you feel comfortable in the exam place, and use the practice exams to prepare for the actual exam's environment. That includes sitting in a straight posture and not talking, whispering, or looking away.
Make sure to organize the time dedicated to each question well, and don't let yourself get distracted by being monitored like I did.
Don't skip a question that you are not sure of. Select the most probable answer, then flag the question. This will make the very stressful last-minute review easier.
You have been engaged by a company to design and lead a migration to an AWS environment. The team is concerned about the capabilities of the new environment, especially when it comes to high availability and cost-effectiveness. The design calls for about 20 instances (c3.2xlarge) pulling jobs/messages from SQS. Network traffic per instance is estimated to be around 500 Mbps at the beginning and end of each job. Which configuration should you plan on deploying?
Spread the Instances over multiple AZs to minimize the traffic concentration and maximize fault-tolerance. With a multi-AZ configuration, an additional reliability point is scored as the entire Availability Zone itself is ruled out as a single point of failure. This ensures high availability. Wherever possible, use simple solutions such as spreading the load out rather than expensive high tech solutions
To save money, you quickly stored some data in one of the attached volumes of an EC2 instance and stopped it for the weekend. When you returned on Monday and restarted your instance, you discovered that your data was gone. Why might that be?
The volume was ephemeral, block-level storage. Data on an instance store volume is lost if an instance is stopped.
The most likely answer is that the EC2 instance had an instance store volume attached to it. Instance store volumes are ephemeral, meaning that data in attached instance store volumes is lost if the instance stops.
Your company likes the idea of storing files on AWS. However, low-latency service of the last few days of files is important to customer service. Which Storage Gateway configuration would you use to achieve both of these ends?
A file gateway simplifies file storage in Amazon S3, integrates to existing applications through industry-standard file system protocols, and provides a cost-effective alternative to on-premises storage. It also provides low-latency access to data through transparent local caching.
Cached volumes allow you to store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data.
You’ve been commissioned to develop a high-availability application with a stateless web tier. Identify the most cost-effective means of reaching this end.
Use an Elastic Load Balancer, a multi-AZ deployment of an Auto-Scaling group of EC2 Spot instances (primary) running in tandem with an Auto-Scaling group of EC2 On-Demand instances (secondary), and DynamoDB.
With proper scripting and scaling policies, running EC2 On-Demand instances behind the Spot instances will deliver the most cost-effective solution because On-Demand instances will only spin up if the Spot instances are not available. DynamoDB lends itself to supporting stateless web/app installations better than RDS.
You are building a NAT Instance in an m3.medium using the AWS Linux2 distro with amazon-linux-extras installed. Which of the following do you need to set?
Ensure that "Source/Destination Checks" is disabled on the NAT instance. With a NAT instance, the most common oversight is forgetting to disable Source/Destination Checks. Note: This is a legacy topic; while it may appear on the AWS exam, it will only do so infrequently.
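The fix itself is a one-liner (the instance ID is a placeholder):

```bash
# NAT instances forward traffic they did not originate, so the EC2
# source/destination check must be turned off.
aws ec2 modify-instance-attribute --instance-id i-0abc123def456 --no-source-dest-check
```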
You are reviewing Change Control requests and you note that there is a proposed change designed to reduce errors due to SQS Eventual Consistency by updating the “DelaySeconds” attribute. What does this mean?
When a new message is added to the SQS queue, it will be hidden from consumer instances for a fixed period.
Delay queues let you postpone the delivery of new messages to a queue for a number of seconds, for example, when your consumer application needs additional time to process messages. If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes. To set delay seconds on individual messages, rather than on an entire queue, use message timers to allow Amazon SQS to use the message timer’s DelaySeconds value instead of the delay queue’s DelaySeconds value. Reference: Amazon SQS delay queues.
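A short sketch of both mechanisms (the queue name and URL are made up):

```bash
# Queue-level delay: every new message is hidden for 45 seconds.
aws sqs create-queue --queue-name jobs --attributes DelaySeconds=45

# Per-message timer: overrides the queue's DelaySeconds for this message only.
aws sqs send-message \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/jobs \
  --message-body '{"job":"encode"}' \
  --delay-seconds 120
```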
Amazon SQS keeps track of all tasks and events in an application: True or False?
False. Amazon SWF (not Amazon SQS) keeps track of all tasks and events in an application. Amazon SQS requires you to implement your own application-level tracking, especially if your application uses multiple queues. Amazon SWF FAQs.
You work for a company, and you need to protect your data stored on S3 from accidental deletion. Which actions might you take to achieve this?
Allow versioning on the bucket and to protect the objects by configuring MFA-protected API access.
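A sketch of enabling both protections (the bucket name, account ID, and MFA token are placeholders; note that MFA Delete can only be enabled by the root user through the CLI/API):

```bash
# Turn on versioning and MFA Delete in one call.
aws s3api put-bucket-versioning --bucket my-bucket \
  --versioning-configuration Status=Enabled,MFADelete=Enabled \
  --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"
```

With versioning on, a delete merely adds a delete marker, and permanently removing a version additionally requires the MFA code.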
Your Security Manager has hired a security contractor to audit your network and firewall configurations. The consultant doesn’t have access to an AWS account. You need to provide the required access for the auditing tasks, and answer a question about login details for the official AWS firewall appliance. Which actions might you do?
AWS has removed the Firewall appliance from the hub of the network and implemented the firewall functionality as stateful Security Groups, and stateless subnet NACLs. This is not a new concept in networking, but rarely implemented at this scale.
Create an IAM user for the auditor and explain that the firewall functionality is implemented as stateful Security Groups, and stateless subnet NACLs
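As a sketch (the user name is arbitrary), the auditor can be given a dedicated IAM user with the AWS-managed SecurityAudit policy, which grants read-only access to security configuration:

```bash
aws iam create-user --user-name network-auditor
aws iam attach-user-policy --user-name network-auditor \
  --policy-arn arn:aws:iam::aws:policy/SecurityAudit
```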
Amazon ElastiCache can fulfill a number of roles. Which operations can be implemented using ElastiCache for Redis?
Amazon ElastiCache offers a fully managed Memcached and Redis service. Although the name only suggests caching functionality, the Redis service in particular can offer a number of operations such as Pub/Sub, Sorted Sets and an In-Memory Data Store. However, Amazon ElastiCache for Redis doesn’t support multithreaded architectures.
You have been asked to deploy an application on a small number of EC2 instances. The application must be placed across multiple Availability Zones and should also minimize the chance of underlying hardware failure. Which actions would provide this solution?
Deploy the EC2 servers in a Spread Placement Group.
Spread Placement Groups are recommended for applications that have a small number of critical instances which need to be kept separate from each other. Launching instances in a Spread Placement Group reduces the risk of simultaneous failures that might occur when instances share the same underlying hardware. Spread Placement Groups provide access to distinct hardware, and are therefore suitable for mixing instance types or launching instances over time. In this case, deploying the EC2 instances in a Spread Placement Group is the only correct option.
You manage a NodeJS messaging application that lives on a cluster of EC2 instances. Your website occasionally experiences brief, strong, and entirely unpredictable spikes in traffic that overwhelm your EC2 instances’ resources and freeze the application. As a result, you’re losing recently submitted messages from end-users. You use Auto Scaling to deploy additional resources to handle the load during spikes, but the new instances don’t spin-up fast enough to prevent the existing application servers from freezing. Can you provide the most cost-effective solution in preventing the loss of recently submitted messages?
Use Amazon SQS to decouple the application components and keep the messages in queue until the extra Auto-Scaling instances are available.
Neither increasing the size of your EC2 instances nor maintaining additional EC2 instances is cost-effective, and pre-warming an ELB signifies that these spikes in traffic are predictable. The cost-effective solution to the unpredictable spike in traffic is to use SQS to decouple the application components.
True statements on S3 URL styles
Virtual-host-style URLs (such as: https://bucket-name.s3.Region.amazonaws.com/key name) are supported by AWS.
Path-Style URLs (such as https://s3.Region.amazonaws.com/bucket-name/key name) are supported by AWS.
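The two URL styles above differ only in where the bucket name sits. A small sketch of building each form; the bucket, region, and key here are made-up examples:

```python
def virtual_hosted_url(bucket: str, region: str, key: str) -> str:
    # Bucket name is part of the hostname.
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

def path_style_url(bucket: str, region: str, key: str) -> str:
    # Bucket name is the first path segment.
    return f"https://s3.{region}.amazonaws.com/{bucket}/{key}"
```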
You run an automobile reselling company that has a popular online store on AWS. The application sits behind an Auto Scaling group and requires new instances of the Auto Scaling group to identify their public and private IP addresses. How can you achieve this?
Use a curl or GET request to retrieve the latest metadata from http://169.254.169.254/latest/meta-data/
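The same metadata lookup can be done from Python rather than curl. A sketch using IMDSv2 (session-token) requests; `get_metadata` only works when run on an actual EC2 instance:

```python
import urllib.request

METADATA_BASE = "http://169.254.169.254/latest"

def metadata_url(path: str) -> str:
    return f"{METADATA_BASE}/meta-data/{path.lstrip('/')}"

def get_metadata(path: str) -> str:
    # IMDSv2: obtain a short-lived session token first, then present it
    # on the metadata request.
    token_req = urllib.request.Request(
        f"{METADATA_BASE}/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()
    req = urllib.request.Request(
        metadata_url(path), headers={"X-aws-ec2-metadata-token": token})
    return urllib.request.urlopen(req, timeout=2).read().decode()

# On an instance: get_metadata("public-ipv4") and get_metadata("local-ipv4")
```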
What data formats are used to create CloudFormation templates?
JSON and YAML
You have launched a NAT instance into a public subnet, and you have configured all relevant security groups, network ACLs, and routing policies to allow this NAT to function. However, EC2 instances in the private subnet still cannot communicate out to the internet. What troubleshooting steps should you take to resolve this issue?
Disable the Source/Destination Check on your NAT instance.
A NAT instance sends and retrieves traffic on behalf of instances in a private subnet. As a result, source/destination checks on the NAT instance must be disabled to allow the sending and receiving traffic for the private instances. Route 53 resolves DNS names, so it would not help here. Traffic that is originating from your NAT instance will not pass through an ELB. Instead, it is sent directly from the public IP address of the NAT Instance out to the Internet.
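The fix above boils down to a single EC2 API call. A minimal sketch of the parameters, assuming boto3's `modify_instance_attribute`; the instance ID is a placeholder:

```python
def disable_src_dst_check_params(instance_id: str) -> dict:
    """Parameters to turn off the source/destination check on a NAT instance,
    so it may forward traffic not addressed to itself."""
    return {"InstanceId": instance_id, "SourceDestCheck": {"Value": False}}

# Usage (not executed here):
#   boto3.client("ec2").modify_instance_attribute(
#       **disable_src_dst_check_params("i-0123456789abcdef0"))
```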
You need a storage service that delivers the lowest-latency access to data for a database running on a single EC2 instance. Which of the following AWS storage services is suitable for this use case?
Amazon EBS is a block level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.
What are DynamoDB use cases?
Use cases include storing JSON data, BLOB data and storing web session data.
You are reviewing Change Control requests, and you note that there is a change designed to reduce costs by updating the Amazon SQS “WaitTimeSeconds” attribute. What does this mean?
When the consumer instance polls for new work, the SQS service will allow it to wait a certain time for one or more messages to be available before closing the connection.
Poor timing of SQS processes can significantly impact the cost effectiveness of the solution.
Long polling helps reduce the cost of using Amazon SQS by reducing the number of empty responses (when there are no messages available for a ReceiveMessage request) and eliminating false empty responses (when messages are available but aren’t included in a response).
You have been asked to decouple an application by utilizing SQS. The application dictates that messages on the queue CAN be delivered more than once, but must be delivered in the order they have arrived while reducing the number of empty responses. Which option is most suitable?
Configure a FIFO SQS queue and enable long polling.
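The answer above maps onto two queue attributes: FIFO ordering and a long-polling wait time. A sketch, assuming boto3's `create_queue` (FIFO queue names must end in `.fifo`; the name is hypothetical):

```python
def fifo_long_poll_attributes(wait_seconds: int = 20) -> dict:
    """Queue attributes for ordered, at-least-once delivery with long polling."""
    if not 0 <= wait_seconds <= 20:
        raise ValueError("ReceiveMessageWaitTimeSeconds must be 0-20")
    return {
        "FifoQueue": "true",                                  # ordered delivery
        "ReceiveMessageWaitTimeSeconds": str(wait_seconds),   # long polling
    }

# Usage (not executed here):
#   boto3.client("sqs").create_queue(QueueName="orders.fifo",
#                                    Attributes=fifo_long_poll_attributes())
```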
You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP. However, when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect?
Immediately.
You need to restrict access to an S3 bucket. Which methods can you use to do so?
There are two ways of securing S3: using either Access Control Lists (Permissions) or bucket policies.
You are reviewing Change Control requests, and you note that there is a change designed to reduce wasted CPU cycles by increasing the value of your Amazon SQS “VisibilityTimeout” attribute. What does this mean?
When a consumer instance retrieves a message, that message will be hidden from other consumer instances for a fixed period.
Poor timing of SQS processes can significantly impact the cost effectiveness of the solution. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.
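The visibility timeout behavior described above can be modeled with a small toy queue: once a consumer receives a message, it stays hidden from other consumers until the timeout elapses or the message is deleted. This is an illustrative stand-in for the real SQS semantics, not the service itself:

```python
import time

class ToyQueue:
    def __init__(self, visibility_timeout: float = 30.0):  # SQS default is 30 s
        self.visibility_timeout = visibility_timeout
        self._messages = {}   # msg_id -> (body, invisible_until)

    def send(self, msg_id, body):
        self._messages[msg_id] = (body, 0.0)

    def receive(self, now=None):
        now = time.monotonic() if now is None else now
        for msg_id, (body, invisible_until) in self._messages.items():
            if now >= invisible_until:
                # Hide the message from other consumers for the timeout window.
                self._messages[msg_id] = (body, now + self.visibility_timeout)
                return msg_id, body
        return None   # empty response

    def delete(self, msg_id):
        # A consumer deletes the message after processing it successfully.
        self._messages.pop(msg_id, None)
```

Received but undeleted messages reappear after the timeout, which is why a timeout shorter than your processing time causes duplicate work (wasted CPU cycles), and why the change request raises it.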
With EBS, I can ____.
Create an encrypted volume from a snapshot of another encrypted volume.
Create an encrypted snapshot from an unencrypted snapshot by creating an encrypted copy of the unencrypted snapshot.
You can create an encrypted volume from a snapshot of another encrypted volume.
Although there is no direct way to encrypt an existing unencrypted volume or snapshot, you can encrypt the data by creating a new, encrypted volume or snapshot from it. Reference: Encrypting unencrypted resources.
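The indirect encryption path above can be sketched as boto3 call parameters: copy the unencrypted snapshot with `Encrypted=True`, then create a volume from the encrypted copy. The snapshot ID, region, and KMS key here are placeholders:

```python
def encrypted_copy_params(snapshot_id: str, region: str, kms_key_id=None) -> dict:
    """Parameters for ec2.copy_snapshot that produce an encrypted copy."""
    params = {"SourceSnapshotId": snapshot_id,
              "SourceRegion": region,
              "Encrypted": True}
    if kms_key_id:   # omit to use the account's default EBS encryption key
        params["KmsKeyId"] = kms_key_id
    return params

# Usage (not executed here):
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   copy = ec2.copy_snapshot(**encrypted_copy_params("snap-0123", "us-east-1"))
#   ec2.create_volume(SnapshotId=copy["SnapshotId"],
#                     AvailabilityZone="us-east-1a")
```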
Following advice from your consultant, you have configured your VPC to use dedicated hosting tenancy. Your VPC has an Amazon EC2 Auto Scaling designed to launch or terminate Amazon EC2 instances on a regular basis, in order to meet workload demands. A subsequent change to your application has rendered the performance gains from dedicated tenancy superfluous, and you would now like to recoup some of these greater costs. How do you revert your instance tenancy attribute of a VPC to default for new launched EC2 instances?
Modify the instance tenancy attribute of your VPC from dedicated to default using the AWS CLI, an AWS SDK, or the Amazon EC2 API.
You can change the instance tenancy attribute of a VPC from dedicated to default. Modifying the instance tenancy of the VPC does not affect the tenancy of any existing instances in the VPC. The next time you launch an instance in the VPC, it has a tenancy of default, unless you specify otherwise during launch. You can modify the instance tenancy attribute of a VPC using the AWS CLI, an AWS SDK, or the Amazon EC2 API only. Reference: Change the tenancy of a VPC.
Amazon DynamoDB is a fast, fully managed NoSQL database service. DynamoDB makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic.
DynamoDB lets you create tables that can store and retrieve any amount of data.
DynamoDB uses SSDs to store data.
Provides automatic and synchronous data replication across multiple Availability Zones.
Maximum item size is 400 KB.
Supports cross-region replication.
DynamoDB Core Concepts:
The fundamental concepts around DynamoDB are:
Tables- Collections of data.
Items- The individual entries in a table.
Attributes- The properties associated with each entry.
Primary Keys.
Secondary Indexes.
DynamoDB streams.
Secondary Indexes:
The Secondary index is a data structure that contains a subset of attributes from the table, along with an alternate key that supports Query operations.
Every secondary index is associated with exactly one table, from which it obtains its data. This is called the base table of the index.
When you create an index, you define an alternate key for it (a partition key and, optionally, a sort key). DynamoDB copies these attributes into the index, along with the primary key attributes of the base table.
After this is done, you can Query or Scan the index in the same way you would query a table.
Every secondary index is automatically maintained by DynamoDB.
DynamoDB Indexes: DynamoDB supports two indexes:
Local Secondary Index (LSI)- The index has the same partition key as the base table but a different sort key.
Global Secondary Index (GSI)- The index has a partition key and sort key that can be different from those of the base table.
When creating more than one table with secondary indexes, you must do so sequentially: create one table and wait for it to become active before creating the next, and so on.
If you try to create multiple tables concurrently, DynamoDB returns a LimitExceededException.
You must specify the following, for every secondary index:
Type- You must specify whether you are creating a Global Secondary Index or a Local Secondary Index.
Name- You must specify a name for the index. The naming rules are the same as those for the table it is associated with, and you can use the same index name with different base tables.
Key- The key schema for the index requires every key attribute in the index to be a top-level attribute of type string, number, or binary. Other data types, including documents and sets, are not allowed. Other requirements depend on the type of index you choose:
For a GSI- The partition key can be any scalar attribute of the base table.
The sort key is optional, and it too can be any scalar attribute of the base table.
For an LSI- The partition key must be the same as the base table’s partition key.
The sort key must be a non-key table attribute.
Additional Attributes: These are attributes beyond the table’s key attributes, which are automatically projected into every index. You can project attributes of any data type, including scalars, documents, and sets.
Throughput: The throughput settings for the index if necessary are:
GSI- Specify read and write capacity unit settings. These provisioned throughput settings are independent of the base table’s settings.
LSI- You do not specify read and write capacity unit settings; read and write operations on a local secondary index are drawn from the provisioned throughput of the base table.
You can create up to 5 local secondary indexes per table and, by default, up to 20 global secondary indexes. When you delete a table, all indexes associated with it are also deleted.
You can use the Scan or Query operation to fetch data from a table, and DynamoDB returns the results in ascending or descending order.
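The concepts above can be tied together in a single `create_table` parameter dict: a base table with a partition and sort key, one LSI (same partition key, different sort key) and one GSI (its own partition key and its own throughput). The table and attribute names are hypothetical:

```python
ORDERS_TABLE = {
    "TableName": "Orders",
    "AttributeDefinitions": [
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "OrderId", "AttributeType": "S"},
        {"AttributeName": "OrderDate", "AttributeType": "S"},
        {"AttributeName": "ProductId", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "CustomerId", "KeyType": "HASH"},   # partition key
        {"AttributeName": "OrderId", "KeyType": "RANGE"},     # sort key
    ],
    "LocalSecondaryIndexes": [{
        "IndexName": "ByDate",   # same partition key, different sort key
        "KeySchema": [
            {"AttributeName": "CustomerId", "KeyType": "HASH"},
            {"AttributeName": "OrderDate", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
    "GlobalSecondaryIndexes": [{
        "IndexName": "ByProduct",   # different partition key from base table
        "KeySchema": [{"AttributeName": "ProductId", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "ALL"},
        # GSIs carry their own throughput, independent of the base table.
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
}
# Usage (not executed here): boto3.client("dynamodb").create_table(**ORDERS_TABLE)
```

Note the LSI entry has no `ProvisionedThroughput` of its own, while the GSI does, matching the throughput rules above.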
Network Load Balancer Overview: A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration. When you enable an Availability Zone for the load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. By default, each load balancer node distributes traffic across the registered targets in its Availability Zone only. If you enable cross-zone load balancing, each load balancer node distributes traffic across the registered targets in all enabled Availability Zones. It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part. The Network Load Balancer is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:
Static IP Addresses – Each Network Load Balancer provides a single IP address for each Availability Zone in its purview. If you have targets in us-west-2a and other targets in us-west-2c, NLB will create and manage two IP addresses (one per AZ); connections to that IP address will spread traffic across the instances in all the VPC subnets in the AZ. You can also specify an existing Elastic IP for each AZ for even greater control. With full control over your IP addresses, a Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.
Zonality – The IP-per-AZ feature reduces latency with improved performance, improves availability through isolation and fault tolerance, and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single AZ while still providing automatic failover should those targets become unavailable.
Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.
Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.
Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.
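The overview above can be sketched as the parameters you would hand to the boto3 `elbv2` client: a load balancer of `Type="network"` plus a TCP target group. The names and subnet/VPC IDs are placeholders:

```python
def nlb_params(name: str, subnets: list) -> dict:
    """Parameters for elbv2.create_load_balancer: a layer-4 (TCP) balancer.
    One static IP is provisioned per subnet's Availability Zone."""
    return {"Name": name, "Type": "network",
            "Scheme": "internet-facing", "Subnets": subnets}

def tcp_target_group_params(name: str, vpc_id: str, port: int = 443) -> dict:
    """Parameters for elbv2.create_target_group: NLBs listen on TCP, so the
    original source IP reaches the targets unmodified."""
    return {"Name": name, "Protocol": "TCP", "Port": port, "VpcId": vpc_id}

# Usage (not executed here):
#   elbv2 = boto3.client("elbv2")
#   elbv2.create_load_balancer(**nlb_params("my-nlb", ["subnet-a", "subnet-c"]))
```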
There are two types of VPC endpoints: (1) interface endpoints and (2) gateway endpoints. Interface endpoints enable connectivity to services over AWS PrivateLink.
AWS uses key pairs to encrypt and decrypt login information.
A sender uses a public key to encrypt data, which the receiver then decrypts using the corresponding private key. These two keys, public and private, are known as a key pair.
You need a key pair to be able to connect to your instances. The way this works on Linux and Windows instances is different.
First, when you launch a new instance, you assign a key pair to it. Then, when you log in to it, you use the private key.
The difference between Linux and Windows instances is that Linux instances do not have a password already set and you must use the key pair to log in to Linux instances. On the other hand, on Windows instances, you need the key pair to decrypt the administrator password. Using the decrypted password, you can use RDP and then connect to your Windows instance.
Amazon EC2 stores only the public key, and you can either generate it inside Amazon EC2 or you can import it. Since the private key is not stored by Amazon, it’s advisable to store it in a secure place as anyone who has this private key can log in on your behalf.
AWS PrivateLink provides private connectivity between VPCs and services hosted on AWS or on-premises, securely on the Amazon network. By providing a private endpoint to access your services, AWS PrivateLink ensures your traffic is not exposed to the public internet.
There are two types of Security Groups based on where you launch your instance. When you launch your instance on EC2-Classic, you have to specify an EC2-Classic Security Group. On the other hand, when you launch an instance in a VPC, you have to specify an EC2-VPC Security Group. Now that we have a clear understanding of what we are comparing, let’s look at their main differences:
I think this is historical in nature. S3 and DynamoDB were the first services to support VPC endpoints. The release of those VPC endpoint features pre-dates two important services that subsequently enabled interface endpoints: Network Load Balancer and AWS PrivateLink.
Take advantage of execution context reuse to improve the performance of your function. Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations processed by the same instance of your function can reuse these resources, which saves execution time. To avoid potential data leaks across invocations, don’t use the execution context to store user data, events, or other information with security implications. If your function relies on a mutable state that can’t be stored in memory within the handler, consider creating a separate function or separate versions of a function for each user.
Use AWS Lambda Environment Variables to pass operational parameters to your function. For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable.
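A minimal handler sketch combining the two practices above: the client is initialized outside the handler (execution-context reuse) and the bucket name comes from an environment variable rather than being hard-coded. `BUCKET_NAME` and the stand-in client are assumptions for illustration:

```python
import os

# Initialized once per execution environment, reused across invocations.
# In a real function this would be e.g. boto3.client("s3").
_s3_client = None

def _get_client():
    global _s3_client
    if _s3_client is None:
        _s3_client = object()   # stand-in for boto3.client("s3")
    return _s3_client

def handler(event, context=None):
    bucket = os.environ["BUCKET_NAME"]   # configured, not hard-coded
    client = _get_client()
    # In a real function:
    # client.put_object(Bucket=bucket, Key=event["key"], Body=event["body"])
    return {"bucket": bucket, "reused_client": client is _get_client()}
```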
You can use VPC Flow Logs. The steps would be the following:
Enable VPC Flow Logs for the VPC your EC2 instance lives in. You can do this from the VPC console
Having VPC Flow Logs enabled will create a CloudWatch Logs log group
Find the Elastic Network Interface assigned to your EC2 instance. Also, get the private IP of your EC2 instance. You can do this from the EC2 console.
Find the CloudWatch Logs log stream for that ENI.
Search the log stream for records where your Windows instance’s IP is the destination IP, and make sure the port is the one you’re looking for. You’ll see records that tell you if someone has been connecting to your EC2 instance: for example, bytes transferred, action=ACCEPT, log-status=OK. You will also see the source IP that connected to your instance.
I recommend using CloudWatch Logs Metric Filters, so you don’t have to do all this manually. Metric Filters will find the patterns I described in your CloudWatch Logs entries and will publish a CloudWatch metric. Then you can trigger an alarm that notifies you when someone logs in to your instance.
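The record search in the steps above can be sketched as a small parser for the default VPC Flow Logs record format. The sample record below is made up; in practice these lines come from the CloudWatch Logs stream for the ENI:

```python
# Default flow log record format: 14 space-separated fields.
FIELDS = ("version account_id interface_id srcaddr dstaddr srcport dstport "
          "protocol packets bytes start end action log_status").split()

def parse_record(line: str) -> dict:
    return dict(zip(FIELDS, line.split()))

def is_accepted_login(record: dict, instance_ip: str, port: int) -> bool:
    """True when the record shows accepted traffic to the given IP and port."""
    return (record["dstaddr"] == instance_ip
            and record["dstport"] == str(port)
            and record["action"] == "ACCEPT"
            and record["log_status"] == "OK")

# Hypothetical record: an RDP (3389) connection accepted into 10.0.1.25.
sample = ("2 123456789012 eni-0a1b2c3d 203.0.113.12 10.0.1.25 "
          "49152 3389 6 20 4249 1418530010 1418530070 ACCEPT OK")
```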
Here are more details from the AWS Official Blog and the AWS documentation for VPC Flow Logs records:
Also, there are 3rd-party tools that simplify all these steps for you and give you very nice visibility and alerts into what’s happening in your AWS network resources. I’ve tried Observable Networks and it’s great: Observable Networks
Typically outbound traffic is not blocked by NAT on any port, so you would not need to explicitly allow those, since they should already be allowed. Your firewall generally would have a rule to allow return traffic that was initiated outbound from inside your office.
Packet sniffing by other tenants. It is not possible for a virtual instance running in promiscuous mode to receive or “sniff” traffic that is intended for a different virtual instance. While you can place your interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer located on the same physical host cannot listen to each other’s traffic. Attacks such as ARP cache poisoning do not work within Amazon EC2 and Amazon VPC. While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another’s data, as a standard practice you should encrypt sensitive traffic.
But as you can see, they still recommend that you should maintain encryption inside your network. We have taken the approach of terminating SSL at the external interface of the ELB, but then initiating SSL from the ELB to our back-end servers, and even further, to our (RDS) databases. It’s probably belt-and-suspenders, but in my industry it’s needed. Heck, we have some interfaces that require HTTPS and a VPN.
What’s the use case for S3 Pre-signed URL for uploading objects?
I get the use case of allowing access to private/premium content in S3 via a pre-signed URL that can be used to view or download a file until its expiration time. But what’s a real-life scenario in which a web app would need to generate a URL granting users temporary credentials to upload an object? Couldn’t the same be done by using the SDK and exposing a REST API at the backend?
Asking this since I want to build a POC for this functionality in Java, but struggling to find a real-world use-case for the same
Pre-signed URLs are used to provide short-term access to a private object in your S3 bucket. They work by appending an AWS Access Key, an expiration time, and a SigV4 signature as query parameters to the S3 object URL. There are two common use cases when you may want to use them:
Simple, occasional sharing of private files.
Frequent, programmatic access to view or upload a file in an application.
Imagine you may want to share a confidential presentation with a business partner, or you want to allow a friend to download a video file you’re storing in your S3 bucket. In both situations, you could generate a URL, and share it to allow the recipient short-term access.
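The general shape of a pre-signed URL (object URL plus credential, expiry, and signature query parameters) can be illustrated with a toy signer. This is deliberately not the real SigV4 algorithm; in practice you would call boto3's `generate_presigned_url` or `generate_presigned_post`:

```python
import hashlib
import hmac
from urllib.parse import urlencode

def toy_presigned_url(object_url: str, access_key: str,
                      secret_key: str, expires_in: int) -> str:
    """Illustrative only: shows the URL shape, not real AWS SigV4 signing."""
    string_to_sign = f"{object_url}\n{access_key}\n{expires_in}"
    signature = hmac.new(secret_key.encode(), string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    query = urlencode({"X-Amz-Credential": access_key,
                       "X-Amz-Expires": expires_in,
                       "X-Amz-Signature": signature})
    return f"{object_url}?{query}"
```

Anyone holding the resulting URL can perform the signed operation until expiry, which is exactly why it works for browser-direct uploads: the backend signs, the client uploads straight to S3, and no AWS credentials ever reach the client.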
There are a couple of different approaches for generating these URLs in an ad-hoc, one-off fashion.
First time going there; I’d like to know in advance the dos and don’ts from people with previous experiences.
Pre-plan as much as you can, but don’t sweat it in the moment if it doesn’t work out. The experience and networking are as valuable as, if not more valuable than, the sessions.
Deliberately know where your exits are. Most of Vegas is designed to keep you inside — when you’re burned out from the crowds and knowledge deluge is not the time to be trying to figure out how the hell you get out of wherever you are.
Study maps of how the properties interconnect before you go. You can get a lot of places without ever going outside. Be able to make a deliberate decision of what route to take. Same thing for the outdoor escalators and pedestrian bridges — they’re not necessarily intuitive, but if you know where they go, they’re a life saver running between events.
Drink more water and eat less food than you think you need to. Your mind and body will thank you.
Be prepared for all of the other Vegasisms if you ever plan on leaving the con boundaries (like to walk down the street to another venue) — you will likely be propositioned by mostly naked showgirls, see overt advertisement for or even be directly propositioned by prostitutes and their business associates, witness some pretty awful homelessness, and be “accidentally bumped into” pretty regularly by amateur pickpockets.
Switching gears between “work/AWS” and “surviving Vegas” multiple times a day can be seriously mentally taxing. I haven’t found any way to prevent that, just know it’s going to happen.
Take a burner laptop and not your production access work machine. You don’t want to accidentally crater your production environment because you gave the wrong cred as part of a lab.
There are helpful staffers everywhere around the con — don’t be afraid to leverage them — they tend to be much better informed than the ushers/directors/crowd wranglers at other cons.
Plan on getting Covid or at very least Con Crud. If you’re not used to being around a million sick people in the desert, it’s going to take its toll on your body one way or another.
Don’t set morning alarms. If your body needs to sleep in, that was more important than whatever morning session you wanted to catch. Watch the recording later on your own time and enjoy your mental clarity for the rest of the day.
Wander the expo floor when you’re bored to get a big picture of the ecosystem, but don’t expect anything too deep. The partner booths are all fun and games and don’t necessarily align with reality. Hang out at the “Ask AWS” booths — people ask some fun interesting questions and AWS TAMs/SAs and the other folks staffing the booth tend not to suck.
Listen to The Killers / Brandon Flowers when walking around outside — he grew up in Las Vegas and a lot of his music has subtle (and not so subtle) hints on how to survive and thrive there.
I’m sure there’s more, but that’s what I can think of off the top of my head.
This is more Vegas-advice than pure Re:Invent advice, but if you’re going to be in the city for more than 3 days try to either:
Find a way off/out of the strip for an afternoon. A hike out at Red Rocks is a great option.
Get a pass to the spa at your hotel so that you can escape the casino/event/hotel room trap. It’s amazing how shitty you feel without realizing it until you do a quick workout and steam/sauna/ice bath routine.
I’ve also seen a whole variety of issues that people run into during hands-on workshops where for one reason or another their corporate laptop/email/security won’t let them sign up and log into a new AWS account. Make sure you don’t have any restrictions there, as that’ll be a big hassle. The workshops have been some of the best and most memorable sessions for me.
More tips:
Sign up for all the parties! Try to get your sessions booked too, it’s a pain to be on waitlists. Don’t do one session at Venetian followed by a session at MGM. You’ll never make it in time. Try to group your sessions by location/day.
We catalog all the parties, keep a list of the latest (and older) guides, the Expo floor plan, drawings, etc. On Twitter as well @reInventParties
Hidden gem if you’re into that sort of thing, the Pinball Museum is a great place to hang for a bit with some friends.
Bring sunscreen, a water bottle you like, really comfortable shoes, and lip balm.
Get at least one cert if you don’t already have one. The Cert lounge is a wonderful place to chill and the swag there is top tier.
Check the partner parties, they have good food and good swag.
Register with an alt email address (something like yourname+reinvent@domain.com) so you can set an email rule for all the spam.
If your workplace has an SA, coordinate with them for schedules and info. They will also curate calendars for you and get you insider info if you want them to.
Prioritize workshops and chalk talks. Partner talks are long advertisements, take them with a grain of salt.
Even if you are an introvert, network. There are folks there with valuable insights and skills. You are one of those.
Don’t underestimate the distance between venues. Getting from MGM to Venetian can take forever.
Bring very comfortable walking shoes and be prepared to spend a LOT of time on your feet and walking 25-30,000 steps a day. All of the other comments and ideas are awesome. The most important thing to remember, especially for your very first year, is to have fun. Don’t just sit in breakouts all day and then go back to your hotel. Go to the after dark events. Don’t get too hung up on if you don’t make it to all the breakout sessions you want to go to. Let your first year be a learning curve on how to experience and enjoy re:Invent. It is the most epic week in Vegas you will ever experience. Maybe we will bump into each other. Love meeting new people.
Join Peter DeSantis, Senior Vice President, Utility Computing and Apps, to learn how AWS has optimized its cloud infrastructure to run some of the world’s most demanding workloads and give your business a competitive edge.
Join Dr. Werner Vogels, CTO, Amazon.com, as he goes behind the scenes to show how Amazon is solving today’s hardest technology problems. Based on his experience working with some of the largest and most successful applications in the world, Dr. Vogels shares his insights on building truly resilient architectures and what that means for the future of software development.
Applied artificial intelligence (AI) solutions, such as contact center intelligence (CCI), intelligent document processing (IDP), and media intelligence (MI), have had a significant market and business impact for customers, partners, and AWS. This session details how partners can collaborate with AWS to differentiate their products and solutions with AI and machine learning (ML). It also shares partner and customer success stories and discusses opportunities to help customers who are looking for turnkey solutions.
An implication of applying the microservices architectural style is that a lot of communication between components is done over the network. In order to achieve the full capabilities of microservices, this communication needs to happen in a loosely coupled manner. In this session, explore some fundamental application integration patterns based on messaging and connect them to real-world use cases in a microservices scenario. Also, learn some of the benefits that asynchronous messaging can have over REST APIs for communication between microservices.
Avoiding unexpected user behavior and maintaining reliable performance is crucial. This session is for application developers who want to learn how to maintain application availability and performance to improve the end user experience. Also, discover the latest on Amazon CloudWatch.
Amazon is transforming customer experiences through the practical application of AI and machine learning (ML) at scale. This session is for senior business and technology decision-makers who want to understand Amazon.com’s approach to launching and scaling ML-enabled innovations in its core business operations and toward new customer opportunities. See specific examples from various Amazon businesses to learn how Amazon applies AI/ML to shape its customer experience while improving efficiency, increasing speed, and lowering cost. Also hear the lessons the Amazon teams have learned from the cultural, process, and technical aspects of building and scaling ML capabilities across the organization.
Data has become a strategic asset. Customers of all sizes are moving data to the cloud to gain operational efficiencies and fuel innovation. This session details how partners can create repeatable and scalable solutions to help their customers derive value from their data, win new customers, and grow their business. It also discusses how to drive partner-led data migrations using AWS services, tools, resources, and programs, such as the AWS Migration Acceleration Program (MAP). Also, this session shares customer success stories from partners who have used MAP and other resources to help customers migrate to AWS and improve business outcomes.
User-facing web and mobile applications are the primary touchpoint between organizations and their customers. To meet the ever-rising bar for customer experience, developers must deliver high-quality apps with both foundational and differentiating features. AWS Amplify helps front-end web and mobile developers build faster front to back. In this session, review Amplify’s core capabilities like authentication, data, and file storage and explore new capabilities, such as Amplify Geo and extensibility features for easier app customization with AWS services and better integration with existing deployment pipelines. Also learn how customers have been successful using Amplify to innovate in their businesses.
AWS Amplify is a set of tools and services that makes it quick and easy for front-end web and mobile developers to build full-stack applications on AWS.
Amplify DataStore provides a programming model for leveraging shared and distributed data without writing additional code for offline and online scenarios, which makes working with distributed, cross-user data just as simple as working with local-only data
AWS AppSync is a managed GraphQL API service
Amazon DynamoDB is a serverless key-value and document database that’s highly scalable
While DevOps has not changed much, the industry has fundamentally transformed over the last decade. Monolithic architectures have evolved into microservices. Containers and serverless have become the default. Applications are distributed on cloud infrastructure across the globe. The technical environment and tooling ecosystem has changed radically from the original conditions in which DevOps was created. So, what’s next? In this session, learn about the next phase of DevOps: a distributed model that emphasizes swift development, observable systems, accountable engineers, and resilient applications.
Innovation Day
Innovation Day is a virtual event that brings together organizations and thought leaders from around the world to share how cloud technology has helped them capture new business opportunities, grow revenue, and solve the big problems facing us today, and in the future. Featured topics include building the first human basecamp on the moon, the next generation F1 car, manufacturing in space, the Climate Pledge from Amazon, and building the city of the future at the foot of Mount Fuji.
Latest AWS Products and Services announced at re:Invent 2021
Graviton3: AWS today announced the newest generation of its Arm-based Graviton processors: the Graviton3. The company promises that the new chip will be 25 percent faster than the last-generation chips, with 2x faster floating-point performance and a 3x speedup for machine learning workloads. AWS also promises that the new chips will use 60 percent less power.
Trn1: EC2 instances powered by AWS Trainium chips for training machine learning models for various applications
AWS Mainframe Modernization: Cut mainframe migration time by 2/3
AWS Private 5G: Deploy and manage your own private 5G network (Set up and scale a private mobile network in days)
Transactions for Governed Tables in Lake Formation: Automatically manages conflicts and errors
Serverless and On-Demand Analytics for Redshift, EMR, MSK, and Kinesis
Amazon SageMaker Canvas: Create ML predictions without any ML experience or writing any code
AWS IoT TwinMaker: A real-time service that makes it easy to create and use digital twins of real-world systems.
Amazon DevOps Guru for RDS: Automatically detect, diagnose, and resolve hard-to-find database issues.
Amazon DynamoDB Standard-Infrequent Access table class: Reduce costs by up to 60%. Maintain the same performance, durability, scaling, and availability as Standard
AWS Database Migration Service Fleet Advisor: Accelerate database migration with automated inventory and migration. This service makes it easier and faster to get your data to the cloud and match it with the correct database service. “DMS Fleet Advisor automatically builds an inventory of your on-premises database and analytics servers by streaming data from on premises to Amazon S3. From there, we take it over. We analyze [the data] to match it with the appropriate AWS data store and then provide customized migration plans.”
Amazon SageMaker Ground Truth Plus: Deliver high-quality training datasets fast, and reduce data labeling costs.
Amazon SageMaker Training Compiler: Accelerate model training by 50%
Amazon SageMaker Inference Recommender: Reduce time to deploy from weeks to hours
Amazon SageMaker Serverless Inference: Lower cost of ownership with pay-per-use pricing
Amazon Kendra Experience Builder: Deploy intelligent search applications powered by Amazon Kendra with a few clicks.
Amazon Lex Automated Chatbot Designer: Drastically simplifies bot design with advanced natural language understanding
Amazon SageMaker Studio Lab: No-cost, no-setup access to powerful machine learning technology
AWS Cloud WAN: Build, manage and monitor global wide area networks
AWS Amplify Studio: Visually build complete, feature-rich apps in hours instead of weeks, with full control over the application code.
AWS Carbon Footprint Tool: Track, measure, and forecast the carbon emissions generated by your AWS usage. (Don’t forget to turn off the lights.)
AWS Well-Architected Sustainability Pillar: Learn, measure, and improve your workloads using environmental best practices in cloud computing
AWS re:Post: Get Answers from AWS experts. A Reimagined Q&A Experience for the AWS Community
You can automate any task that involves interaction with AWS and on-premises resources, including in multi-account and multi-Region environments, with AWS Systems Manager. In this session, learn more about three new Systems Manager launches at re:Invent—Change Manager, Fleet Manager, and Application Manager. In addition, learn how Systems Manager Automation can be used across multiple Regions and accounts, integrate with other AWS services, and extend to on-premises. This session takes a deep dive into how to author a custom runbook using an automation document, and how to execute automation anywhere.
Learn about the performance improvements made in Amazon EMR for Apache Spark and Presto, giving Amazon EMR one of the fastest runtimes for analytics workloads in the cloud. This session dives deep into how AWS generates smart query plans in the absence of accurate table statistics. It also covers adaptive query execution—a technique to dynamically collect statistics during query execution—and how AWS uses dynamic partition pruning to generate query predicates for speeding up table joins. You also learn about execution improvements such as data prefetching and pruning of nested data types.
Explore how state-of-the-art algorithms built into Amazon SageMaker are used to detect declines in machine learning (ML) model quality. One of the big factors that can affect the accuracy of models is the difference in the data used to generate predictions and what was used for training. For example, changing economic conditions could drive new interest rates affecting home purchasing predictions. Amazon SageMaker Model Monitor automatically detects drift in deployed models and provides detailed alerts that help you identify the source of the problem so you can be more confident in your ML applications.
Amazon Lightsail is AWS’s simple, virtual private server. In this session, learn more about Lightsail and its newest launches. Lightsail is designed for simple web apps, websites, and dev environments. This session reviews core product features, such as preconfigured blueprints, managed databases, load balancers, networking, and snapshots, and includes a demo of the most recent launches. Attend this session to learn more about how you can get up and running on AWS in the easiest way possible.
This session dives into the security model behind AWS Lambda functions, looking at how you can isolate workloads, build multiple layers of protection, and leverage fine-grained authorization. You learn about the implementation, the open-source Firecracker technology that provides one of the most important layers, and what this means for how you build on Lambda. You also see how AWS Lambda securely runs your functions packaged and deployed as container images. Finally, you learn about SaaS, customization, and safe patterns for running your own customers’ code in your Lambda functions.
Unauthorized users and financially motivated third parties also have access to advanced cloud capabilities. This causes concerns and creates challenges for customers responsible for the security of their cloud assets. Join us as Roy Feintuch, chief technologist of cloud products, and Maya Horowitz, director of threat intelligence and research, face off in an epic battle of defense against unauthorized cloud-native attacks. In this session, Roy uses security analytics, threat hunting, and cloud intelligence solutions to dissect and analyze some sneaky cloud breaches so you can strengthen your cloud defense. This presentation is brought to you by Check Point Software, an AWS Partner.
AWS provides services and features that your organization can leverage to improve the security of a serverless application. However, as organizations grow and developers deploy more serverless applications, how do you know if all of the applications are in compliance with your organization’s security policies? This session walks you through serverless security, and you learn about protections and guardrails that you can build to avoid misconfigurations and catch potential security risks.
The Amazon Cash application service matches incoming customer payments with accounts and open invoices, while an email ingestion service (EIS) processes more than 1 million semi-structured and unstructured remittance emails monthly. In this session, learn how this EIS classifies the emails, extracts invoice data from the emails, and then identifies the right invoices to close on Amazon financial platforms. Dive deep on how these services automated 89.5% of cash applications using AWS AI & ML services. Hear about how these services will eliminate the manual effort of 1000 cash application analysts in the next 10 years.
Dive into the details of using Amazon Kinesis Data Streams and Amazon DynamoDB Streams as event sources for AWS Lambda. This session walks you through how AWS Lambda scales along with these two event sources. It also covers best practices and challenges, including how to tune streaming sources for optimum performance and how to effectively monitor them.
Build real-time applications using Apache Flink with Apache Kafka and Amazon Kinesis Data Streams. Apache Flink is a framework and engine for building streaming applications for use cases such as real-time analytics and complex event processing. This session covers best practices for building low-latency applications with Apache Flink when reading data from either Amazon MSK or Amazon Kinesis Data Streams. It also covers best practices for running low-latency Apache Flink applications using Amazon Kinesis Data Analytics and discusses AWS’s open-source contributions to this use case.
Learn how you can accelerate application modernization and benefit from the open-source Apache Kafka ecosystem by connecting your legacy, on-premises systems to the cloud. In this session, hear real customer stories about timely insights gained from event-driven applications built on an event streaming platform from Confluent Cloud running on AWS, which stores and processes historical data and real-time data streams. Confluent makes Apache Kafka enterprise-ready using infinite Kafka storage with Amazon S3 and multiple private networking options including AWS PrivateLink, along with self-managed encryption keys for storage volume encryption with AWS Key Management Service (AWS KMS).
Data-driven business intelligence (BI) decision making is more important than ever in this age of remote work. An increasing number of organizations are investing in data transformation initiatives, including migrating data to the cloud, modernizing data warehouses, and building data lakes. But what about the last mile—connecting the dots for end users with dashboards and visualizations? Come to this session to learn how Amazon QuickSight allows you to connect to your AWS data and quickly build rich and interactive dashboards with self-serve and advanced analytics capabilities that can scale from tens to hundreds of thousands of users, without managing any infrastructure and only paying for what you use.
Is there an Updated SAA-C03 Practice Exam?
Yes, as of August 2022. This SAA-C03 sample exam PDF can give you a hint of what the real SAA-C03 exam will look like in your upcoming test. In addition, the sample questions also contain the necessary explanations and reference links that you can study.
In this AWS tutorial, we are going to discuss how we can make the best use of AWS services to build a highly scalable and fault-tolerant configuration of EC2 instances. The use of load balancers and Auto Scaling groups falls under a number of best practices in AWS, including performance efficiency, reliability, and high availability.
Before we dive into this hands-on tutorial on how exactly we can build this solution, let’s have a brief recap on what an Auto Scaling group is, and what a Load balancer is.
Auto Scaling group (ASG)
An Auto Scaling group (ASG) is a logical grouping of instances that can scale out and in depending on pre-configured settings. By setting the scaling policies of your ASG, you can choose how many EC2 instances are launched and terminated based on your application’s load. You can do this with manual, dynamic, scheduled, or predictive scaling.
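As an illustration of a dynamic policy, a target tracking policy can be attached with the AWS CLI. This is only a sketch that requires a live AWS account; the group name and target value are assumptions:

```shell
# Hypothetical target-tracking policy: add or remove instances to keep
# the group's average CPU utilization at roughly 50%
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name ExampleASG \
  --policy-name keep-cpu-at-50 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": { "PredefinedMetricType": "ASGAverageCPUUtilization" },
    "TargetValue": 50.0
  }'
```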
Elastic Load Balancer (ELB)
An Elastic Load Balancer (ELB) is a name describing a number of services within AWS designed to distribute traffic across multiple EC2 instances in order to provide enhanced scalability, availability, security and more. The particular type of Load Balancer we will be using today is an Application Load Balancer (ALB). The ALB is a Layer 7 Load Balancer designed to distribute HTTP/HTTPS traffic across multiple nodes – with added features such as TLS termination, Sticky Sessions and Complex routing configurations.
Getting Started
First of all, we open our AWS management console and head to the EC2 management console.
We scroll down on the left-hand side and select ‘Launch Templates’. A Launch Template is a configuration template which defines the settings for EC2 instances launched by the ASG.
Under Launch Templates, we will select “Create launch template”.
We specify the name ‘MyTestTemplate’ and use the same text in the description.
Under the ‘Auto Scaling guidance’ box, tick the box which says ‘Provide guidance to help me set up a template that I can use with EC2 Auto Scaling’ and scroll down to launch template contents.
When it comes to choosing our AMI (Amazon Machine Image) we can choose the Amazon Linux 2 under ‘Quick Start’.
The Amazon Linux 2 AMI is free tier eligible, and easy to use for our demonstration purposes.
Next, we select the ‘t2.micro’ under instance types, as this is also free tier eligible.
Under Network Settings, we create a new Security Group called ExampleSG in our default VPC, allowing HTTP access to everyone. It should look like this.
We can then add our IAM Role we created earlier. Under Advanced Details, select your IAM instance profile.
Then we need to include some user data which will load a simple web server and web page onto our Launch Template when the EC2 instance launches.
Under ‘advanced details’, and in ‘User data’ paste the following code in the box.
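The exact script is not reproduced here, but a representative user-data sketch (an assumption, not the article's verbatim script) installs Apache on Amazon Linux 2 and publishes a page identifying the instance that served it:

```shell
#!/bin/bash
# Hypothetical user data for Amazon Linux 2: install and start Apache,
# then write a page that reports this instance's hostname.
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "<h1>Hello World from $(hostname -f)</h1>" > /var/www/html/index.html
```

This runs as root at first boot, which is why no `sudo` is needed.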
Then simply click ‘Create Launch Template’ and we are done!
We are now able to build an Auto Scaling Group from our launch template.
On the same console page, select ‘Auto Scaling Groups’, and Create Auto Scaling Group.
We will call our Auto Scaling Group ‘ExampleASG’, and select the Launch Template we just created, then select next.
On the next page, keep the default VPC and select any default AZ and Subnet from the list and click next.
Under ‘Configure Advanced Options’ select ‘Attach to a new load balancer’.
You will notice the settings below will change and we will now build our load balancer directly on the same page.
Select the Application Load Balancer, and leave the default Load Balancer name.
Choose an ‘Internet Facing’ Load balancer, select another AZ and leave all of the other defaults the same. It should look something like the following.
Under ‘Listeners and routing’, select ‘Create a target group’ and select the target group which was just created. It will be called something like ‘ExampleASG-1’. Click next.
Now we get to Group Size. This is where we specify the desired, minimum and maximum capacity of our Auto Scaling Group.
Set the capacities as follows:
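For comparison, the same group can be expressed as an AWS CLI sketch. The desired capacity of 2 matches this walkthrough, while the minimum, maximum, and subnet IDs are assumptions (and the command needs a live AWS account):

```shell
# Hypothetical CLI equivalent of the console steps (subnet IDs are placeholders)
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name ExampleASG \
  --launch-template LaunchTemplateName=MyTestTemplate,Version='$Latest' \
  --min-size 1 \
  --desired-capacity 2 \
  --max-size 4 \
  --vpc-zone-identifier "subnet-0abc1234,subnet-0def5678"
```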
Click ‘skip to review’, and click ‘Create Auto Scaling Group’.
You will now see the Auto Scaling Group building, and the capacity is updating.
After a short while, navigate to the EC2 Dashboard, and you will see that two EC2 instances have been launched!
To make sure our Auto Scaling group is working as it should – select any instance, and terminate the instance. After one instance has been terminated you should see another instance pending and go into a running state – bringing capacity back to 2 instances (as per our desired capacity).
If we also head over to the Load Balancer console, you will find our Application Load Balancer has been created.
If you select the load balancer and scroll down, you will find the DNS name of your ALB – it will look something like ‘ExampleASG-1-1435567571.us-east-1.elb.amazonaws.com’.
If you enter the DNS name into your browser, you should see the following page show up:
The message will display a ‘Hello World’ message including the IP address of the EC2 instance which is serving up the webpage behind the load balancer.
If you refresh the page a few times, you should see that the IP address listed will change. This is because the load balancer is routing you to the other EC2 instance, validating that our simple webpage is being served from behind our ALB.
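The same check can be run from a terminal; the DNS name below is the placeholder from this walkthrough, so substitute your own ALB's name:

```shell
# Request the ALB several times; the instance IP reported in the
# response should alternate between the two backends
ALB_DNS="ExampleASG-1-1435567571.us-east-1.elb.amazonaws.com"
for i in 1 2 3 4; do
  curl -s "http://${ALB_DNS}/"
done
```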
The final step is to make sure you delete all of the resources you configured! Start by deleting the Auto Scaling Group – and ensure you delete your load balancer also – this will ensure you don’t incur any charges.
Architectural Diagram
Below, you’ll find the architectural diagram of what we have built.
This post originally appeared on: https://digitalcloud.training/load-balancing-ec2-instances-in-an-autoscaling-group/
There are significant protections provided to you natively when you are building your networking stack on AWS. This wide range of services and features can become difficult to manage, and becoming knowledgeable about what tools to use in which area can be challenging.
The two main security components which can be confused within VPC networking are the Security Group and the Network Access Control List (NACL). When you compare a Security Group vs NACL, you will find that although they are fairly similar in general, there is a distinct difference in the use cases for each of these security features.
In this blog post, we are going to explain the main differences between Security Group vs NACL and talk about the use cases and some best practices.
First of all, what do they have in common?
The main thing a Security Group and a NACL have in common is that they are both firewalls. So, what is a firewall?
Firewalls in computing monitor and control incoming and outgoing network traffic based on predetermined security rules. Firewalls provide a barrier between trusted and untrusted networks. The network layer which we are talking about in this instance is an Amazon Virtual Private Cloud – aka a VPC.
In the AWS cloud, VPCs are on-demand pools of shared resources, designed to provide a certain degree of isolation between different organizations and different teams within an account.
First, let’s talk about the particulars of a Security Group.
Security Group Key Features
Where do they live?
Security groups are tied to an instance. This can be an EC2 instance, an ECS cluster, or an RDS database instance – the security group filters traffic, acting as a firewall for the resources it is associated with. You have to purposely assign a security group to your instances if you don’t want them to use the default security group.
The default security group allows all outbound traffic, and inbound traffic only from other resources associated with the same security group.
Any instance the security group is associated with gets its rules applied.
Stateful or Stateless
Security groups are stateful in nature. As a result, return traffic for an allowed connection is automatically permitted, without a matching rule in the opposite direction. For example, allowing incoming traffic on port 80 automatically allows the responses to flow back out – without you having to explicitly create an outgoing rule.
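Opening HTTP inbound therefore needs only a single rule. A hedged CLI sketch (the security group ID is a placeholder, and the command requires a live AWS account):

```shell
# Allow inbound HTTP from anywhere; because security groups are stateful,
# no outbound rule is needed for the response traffic.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0
```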
Allow or Deny Rules
The only rule type that can be used in security groups is the allow rule. Thus, you cannot blacklist a certain IP address from establishing a connection with any instances within your security group. This would have to be achieved using a different technology, such as a network ACL.
Limits
An instance can have multiple security groups. By default, AWS will let you apply up to five security groups to a virtual network interface, but it is possible to use up to 16 if you submit a limit increase request.
Additionally, you can have 60 inbound and 60 outbound rules per security group (for a total of 120 rules). IPv4 rules are enforced separately from IPv6 rules; a security group, for example, may have 60 IPv4 rules and 60 IPv6 rules.
Network Access Control Lists (NACLS)
Now let’s compare the Security Group vs NACLs using the same criteria.
Where do they live?
Network ACLs operate at the subnet level, so any instance in a subnet with an associated NACL will automatically follow the rules of that NACL.
Stateful or Stateless
Network ACLs are stateless. Consequently, any changes made to an incoming rule will not be reflected in an outgoing rule. For example, if you allow incoming traffic on port 80, you would also need an outgoing rule to allow the return traffic (typically on the ephemeral port range).
Allow or Deny Rules
Unlike a Security Group, NACLs support both allow and deny rules. With deny rules, you can explicitly prevent a certain IP address from establishing a connection; e.g. to block a specific known malicious IP address from reaching an EC2 instance.
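As a sketch (the NACL ID and IP address are placeholders, and this requires a live AWS account), a deny entry can be added with a low rule number so it is evaluated before the allow rules:

```shell
# Deny all TCP traffic from a known-bad IP. NACL rules are evaluated in
# ascending rule-number order, so rule 50 runs before a typical allow
# rule at 100. Protocol 6 is TCP.
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress \
  --rule-number 50 \
  --protocol 6 \
  --port-range From=0,To=65535 \
  --cidr-block 203.0.113.12/32 \
  --rule-action deny
```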
Limits
A subnet can have only one NACL. However, you can associate one network ACL with one or more subnets within a VPC. By default, you can have up to 200 unique NACLs within a VPC; however, this is a soft limit that is adjustable.
Additionally, you can have 20 inbound and 20 outbound rules per NACL (for a total of 40 rules). IPv4 rules are enforced separately from IPv6 rules; a NACL, for example, may have 20 IPv4 rules and 20 IPv6 rules.
We hope that you now more keenly understand the difference between NACLs and security groups.
A multi-account strategy in AWS can provide you with a secure and isolated platform from which to launch your resources. Whilst smaller organizations may only require a few AWS accounts, large corporations with many business units often require many accounts. These accounts may be organized hierarchically.
Building this account topology manually on the cloud requires a high degree of knowledge, and is rather error prone. If you want to set up a multi-account environment in AWS within a few clicks, you can use a service called AWS Control Tower.
AWS Control Tower allows your team to quickly provision, set up, and govern a secure, multi-account AWS environment, known as a landing zone. Built on top of AWS Organizations, it automatically creates accounts under the appropriate organizational units, with hardened service control policies attached. Provisioning new accounts happens with the click of a button, automating security configuration and ensuring you extend governance into new accounts without any manual intervention.
There are a number of key features which constitute AWS Control Tower, and in this article, we will explore each section and break down how it makes governing multiple accounts a lot easier.
The Landing Zone
A Landing Zone refers to the multi-account structure itself, which is configured to provide you with a compliant and secure set of accounts upon which to start building. A Landing Zone can include extended features like federated account access via SSO and centralized logging via AWS CloudTrail and AWS Config.
The Landing Zone’s accounts follow guardrails set by you to ensure you are compliant with your own security requirements. Guardrails are rules expressed in plain English, implemented behind the scenes with service control policies (preventive guardrails) and AWS Config rules (detective guardrails) to establish a hardened account baseline.
Guardrails can fit into one of a number of categories:
Guardrails provide immediate protection from any number of scenarios, without the need to be able to read or write complex security policies – a big upside compared to manual provisioning of permissions.
Account Factory
Account Factory is a component of Control Tower which allows you to automate the secure provisioning of new accounts, which exist according to defined security principles. Several pre-approved configurations are included as part of the launch of your new accounts including Networking information, and Region Selection. You also get seamless integration with AWS Service Catalog to allow your internal customers to configure and build new accounts. Third party Infrastructure as Code tooling like Terraform (Account Factory for Terraform) can be used also to provide your cloud teams the ability to benefit from a multiple account setup whilst using tools they are familiar with.
Architecture of Control Tower
Let’s now dive into how Control Tower looks, with an architectural overview.
As you can see, there are a number of OUs (Organizational Units) in which accounts are placed. These are provisioned for you using AWS Organizations.
Security OU – The Security OU contains two accounts, the Log Archive Account and the Audit Account. The Log Archive Account serves as a central store for all CloudTrail and AWS Config logs across the Landing Zone, securely stored within an S3 Bucket.
Sandbox OU – The Sandbox OU is set up to host testing accounts (Sandbox Accounts) which are safely isolated from any production workloads.
Production OU – This OU is for hosting all of your production accounts, containing production workloads.
Non-Production OU – This OU can serve as a pre-production environment, in which further testing and development can take place.
Suspended OU – This is a secure OU where you can move any deleted, reused, or breached accounts. Permissions in this OU are extremely locked down, ensuring it is a safe location.
Shared Services OU – The Shared Services OU contains accounts in which services shared across multiple other accounts are hosted. This consists of three accounts:
The Shared Services account (where the resources are directly shared)
The Security Services Account (hosting services like Amazon Inspector, Amazon Macie, AWS Secrets Manager as well as any firewall solutions.)
The Networking Account – This contains VPC Endpoints and components and things like DNS Endpoints.
Any organization can benefit from using AWS Control Tower. Whether you’re a multinational corporation with years of AWS experience or a burgeoning start-up with little experience in the cloud, a Landing Zone can provide you with confidence that your architecture is provisioned efficiently and securely.
I've done 3 practice exams from Tutorials Dojo and for each one, I'm getting high 50%'s. Afterward, I studied the questions where I failed, but I should be getting better, not staying the same. I've heard the TD practice exams are harder...but how much harder?? I've taken Adrian's SAA course, so now I'm just trying to get some exam practice. I have flash cards that I use to help me remember difficult subjects. Any advice would be great. Thanks. submitted by /u/w_savage
I've a non-IT background and am trying to make a career shift. I did check out cybersecurity to see if I can fit there, but some people referred me here if I want a stable thing for beginners. Any advice on where to start, also with minimal costs? (I'm not from the US and am currently working as a freelancer due to layoffs, but it's not paying much.) submitted by /u/ultravioletheart08
Some people suggest I prepare with study guides from sources like itexamshub or preparexams etc., some say to watch videos from different channels, and some say to go with Tutorials Dojo. Suggest me the best option plz guyzzzz.... : ) submitted by /u/Commercial-Suit-7693
I'm taking Stephane's course and then TD practice exams for the Associate Data Engineering cert (same strategy I used to pass the SA and Developer exams). But since this is a new exam, I realised the Stephane/TD material may not 100% accurately reflect the real exam content yet. So I was wondering if people who have recently sat the exam could say what the main topics were? submitted by /u/WhiskeeFrank
Hey guys, I've been running Puppeteer on AWS fine for over a year now, but in the last two days a major issue popped up. Code with no new Lambda update works 100% fine, but immediately after updating the function for a new IP, I now get "Error: Navigation failed because browser has disconnected! at new LifecycleWatcher". Reached out to AWS but no help so far with them. Has anyone else had this issue? 3 of my functions are now down due to this issue. It doesn't matter which website; it seems like Chromium/AWS isn't connecting at all and crashes in the first 1100ms of running. Which, again, ran fine today and yesterday for the first call, but after updating for a new IP (with the same code that worked previously) it now immediately crashes. SOLVED: The new Lambda update isn't compatible with some older versions of Chromium and/or Puppeteer. Updating to Puppeteer 22.6.4 and sparticuz/chromium ^123.0.1 solves the issue. Previous versions were 20.1.0 and 113.0.1 respectively. submitted by /u/DetailedLife
There are many trends within the current cloud computing industry that have a sway on the conversations which take place throughout the market. One of these key areas of discussion is ‘Serverless’.
Serverless application deployment is a way of provisioning infrastructure in a managed way, without having to worry about building and maintaining servers – you launch the service and it works. Scaling, high availability, and automated processes are looked after by the managed AWS serverless services. AWS Step Functions provides a useful way to coordinate the components of distributed applications and microservices using visual workflows.
What is AWS Step Functions?
AWS Step Functions lets developers build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services.
Using Step Functions workflows, developers can focus on higher-value business logic instead of worrying about failures, retries, parallelization, and service integrations. In other words, AWS Step Functions is a serverless workflow orchestration service that can make developers’ lives much easier.
Components and Integrations
AWS Step Functions consists of a few components, the first being a State Machine.
What is a state machine?
The State Machine model uses given states and transitions to complete the tasks at hand. It is an abstract machine (system) that can be in exactly one state at a time, and can switch between states. As a result, it doesn’t allow infinite loops, entirely removing one source of errors that is often costly.
With AWS Step Functions, you can define workflows as state machines, which simplify complex code into easy-to-understand statements and diagrams. The process of building applications and confirming they work as expected is actually much faster and easier.
State
In a state machine, a state is referred to by its name, which can be any string, but must be unique within the state machine. State instances exist until their execution is complete.
An individual component of your state machine can be in any of the following 8 types of states:
Task state – Performs some work in your state machine; from a Task state, AWS Step Functions can call Lambda functions directly
Choice state – Makes a choice between different branches of execution
Fail state – Stops execution and marks it as a failure
Succeed state – Stops execution and marks it as a success
Pass state – Simply passes its input to its output, or injects some fixed data
Wait state – Provides a delay for a certain amount of time, or until a specified time/date
Parallel state – Begins parallel branches of execution
Map state – Adds a for-each loop condition
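Workflows are described in Amazon States Language, a JSON format. As an illustration only (the state names and Lambda ARN below are placeholders, not values from this article), a tiny state machine combining Task, Choice, Succeed and Fail states could be built in Python and dumped to the JSON that `create_state_machine` expects:

```python
import json

# Illustrative only: the state names and Lambda ARN are placeholders.
# The dict mirrors the Amazon States Language JSON that
# stepfunctions.create_state_machine() expects as its definition.
definition = {
    "Comment": "Process an order, then branch on the result",
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",  # Task state: do some work (here, call a Lambda)
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessOrder",
            "Next": "CheckResult",
        },
        "CheckResult": {
            "Type": "Choice",  # Choice state: branch on the task's output
            "Choices": [
                {"Variable": "$.status", "StringEquals": "OK", "Next": "Done"}
            ],
            "Default": "Failed",
        },
        "Done": {"Type": "Succeed"},  # Succeed state: stop and mark success
        "Failed": {"Type": "Fail", "Error": "OrderError"},  # Fail state
    },
}

print(json.dumps(definition, indent=2))
```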
Limits
There are some service limits to be aware of when using AWS Step Functions. For example, a Standard workflow execution can run for up to one year (Express workflows are capped at five minutes), each execution’s history is limited to 25,000 events, and the input or output payload of a state cannot exceed 256 KB.
Use Cases and Examples
If you need to build workflows across multiple AWS services, AWS Step Functions is a great tool for you: you can orchestrate serverless microservices, build data pipelines, and handle security incidents with it. Step Functions can be used both synchronously and asynchronously.
Instead of manually orchestrating long-running, multiple ETL jobs or maintaining a separate application, Step Functions can ensure that these jobs are executed in order and complete successfully.
Step Functions is also a great way to automate recurring tasks, such as applying patches, selecting infrastructure, and synchronizing data; it will scale automatically, respond to timeouts, and retry tasks when they fail.
With Step Functions, you can create responsive serverless applications and microservices with multiple AWS Lambda functions without writing code for workflow logic, parallel processes, error handling, or timeouts.
Additionally, services and data can be orchestrated that run on Amazon EC2 instances, containers, or on-premises servers.
Pricing
Step Functions counts a state transition each time a step of your workflow is executed. You are charged for the total number of state transitions across all your state machines, including retries.
AWS Step Functions has a Free Tier of 4,000 state transitions per month; beyond that, state transitions are charged at a flat rate of $0.000025 per transition.
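Based on the rates above, a rough cost estimate for a Standard workflow can be sketched in a few lines of Python (the workload numbers are made up for illustration):

```python
# Rough Standard-workflow cost estimate from the rates above:
# first 4,000 state transitions per month are free, then $0.000025 each.
FREE_TIER_TRANSITIONS = 4_000
PRICE_PER_TRANSITION = 0.000025  # USD

def monthly_cost(transitions: int) -> float:
    billable = max(0, transitions - FREE_TIER_TRANSITIONS)
    return billable * PRICE_PER_TRANSITION

# A hypothetical 10-step workflow executed 100,000 times a month:
print(round(monthly_cost(1_000_000), 2))  # → 24.9
```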
Summary
In summary, Step Functions is a powerful tool which you can use to improve application development and the productivity of your developers. By migrating your workflow logic into the cloud you will benefit from lower costs and rapid deployment. As this is a serverless service, you will be able to remove undifferentiated heavy lifting from the application development process.
Interview Questions
Q: How does AWS Step Function create a State Machine?
A: A state machine is a collection of states which allows you to perform tasks – in the form of Lambda functions or other service integrations – in sequence, passing the output of one task to the next. You can add branching logic based on the output of a task to determine the next state.
Q: How can we share data in AWS Step Functions without passing it between the steps?
A: You can make use of InputPath and ResultPath. In a ValidationWaiting step, for example, you can set these properties in the state machine definition.
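As a sketch of what that might look like (the state name, Lambda ARN, and JSON paths here are hypothetical, chosen only to illustrate the two properties):

```python
# Hypothetical "ValidationWaiting" task state illustrating the two properties.
# InputPath selects which part of the state's input is passed to the task;
# ResultPath chooses where the task's result is merged back into the input,
# so the rest of the original input survives to later states.
validation_waiting = {
    "Type": "Task",
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:Validate",  # placeholder
    "InputPath": "$.order",        # send only the 'order' field to the task
    "ResultPath": "$.validation",  # merge the task's output under 'validation'
    "Next": "NextState",           # placeholder next state
}
```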
This way you send the external service only the data it actually needs, and you won’t lose access to any data that was previously in the input.
Q: How can I diagnose an error or a failure within AWS Step Functions?
A: The following are some possible failure events that may occur
State Machine Definition Issues.
Task Failures due to exceptions thrown in a Lambda Function.
Transient or Networking Issues.
A task has surpassed its timeout threshold.
Privileges are not set appropriately for a task to execute.
If you want to be an AWS cloud professional, you need to understand the differences between the myriad services AWS offers. You also need an in-depth understanding of how to use the security services to ensure that your account infrastructure is highly secure and safe to use. This is job zero at AWS – nothing is taken more seriously than security. AWS makes it easy to implement security best practices and provides you with many tools to do so.
AWS Secrets Manager and SSM Parameter Store sound like very similar services on the surface. However, when you dig deeper – comparing AWS Secrets Manager vs SSM Parameter Store – you will find some significant differences which help you understand exactly when to use each tool.
AWS Secrets Manager
AWS Secrets Manager is designed to provide encryption for confidential information (like database credentials and API keys) that needs to be guarded safely in a secure way. Encryption is automatically enabled when creating a secret entry and there are a number of additional features we are going to explore in this article.
Through AWS Secrets Manager, you can manage a wide range of secrets: database credentials, API keys, and other self-defined secrets are all eligible for this service.
If you are responsible for storing and managing secrets within your team, as well as ensuring that your company meets regulatory requirements, AWS Secrets Manager can help by securely storing all secrets in one place. Secrets Manager also offers a large degree of added functionality.
SSM Parameter store
SSM Parameter store is slightly different. The key differences become evident when you compare how AWS Secrets Manager vs SSM Parameter Store are used.
SSM Parameter Store addresses a slightly wider set of requirements. Depending on your compliance requirements, it can store your secrets encrypted or unencrypted, alongside plain configuration values.
By storing environment configuration data and other parameters, it simplifies and streamlines the application deployment process. AWS Secrets Manager, by contrast, adds secret rotation, cross-account access, and tighter integration with other AWS services.
Based on this explanation you may think that they both sound similar. Let’s break down the similarities and differences between these two services.
Similarities
Managed Key/Value Store Services
Both services allow you to store values using a name and key. This is an extremely useful aspect of both of the services as the deployment of the application can reference different parameters or different secrets based on the deployment environment, allowing customizable and highly integratable deployments of your applications.
Both Referenceable in CloudFormation
You can use the powerful Infrastructure as Code (IaC) tool AWS CloudFormation to build your applications programmatically. The effortless deployment of either product using CloudFormation allows a seamless developer experience, without using painful manual processes.
While SSM Parameter Store only allows one version of a parameter to be active at any given time, Secrets Manager allows multiple versions to exist at the same time when you are rotating a secret using staging labels.
Similar Encryption Options
They are both inherently very secure services – and you do not have to choose one over another based on the encryption offered by either service.
Both services encrypt values with another AWS security service, KMS (the Key Management Service). IAM policies can then control exactly which IAM users and roles have permission to decrypt a value. This restricts access to anyone who doesn’t need it, following the principle of least privilege and helping you meet compliance standards.
Versioning
Versioning is the ability to save multiple, iteratively developed versions of something, making it quicker to restore a lost version and easier to keep a history of changes.
Both services support versioning of values within the service. This allows you to view multiple previous versions of your parameters, and you can optionally promote a former version to be the current version, which can be useful as your application changes.
Given that there are lots of similarities between the two services, it is now time to view and compare the differences, along with some use cases of either service.
Differences
Cost
The costs differ across the services: SSM Parameter Store tends to cost less than Secrets Manager. Standard parameters are free – you won’t be charged for the first 10,000 parameters you store – while Advanced Parameters incur a charge. AWS Secrets Manager bills a fixed fee per secret per month and per 10,000 API calls.
This may factor into how you use each service and how you define your cloud spending strategy, so this is valuable information.
Password generation
A useful feature within AWS Secrets Manager allows us to generate random data during the creation phase to allow for the secure and auditable creation of strong and unique passwords and subsequently reference it in the same CloudFormation stack. This allows our applications to be fully built using IaC, and gives us all the benefits which that entails.
AWS Systems Manager Parameter Store, on the other hand, doesn’t work this way and can’t generate random data – you need to do it yourself using the console or the AWS CLI, and this can’t happen during the creation phase of a CloudFormation stack.
Rotation of Secrets
A powerful feature of AWS Secrets Manager is the ability to automatically rotate credentials on a pre-defined schedule that you set. Secrets Manager integrates this feature natively with many AWS services; automated rotation is simply not possible using AWS Systems Manager Parameter Store. You would have to refresh and update the data yourself, which involves a lot more manual setup, to achieve the functionality that Secrets Manager supports natively.
Cross-Account Access
Firstly, there is currently no way to attach a resource-based IAM policy to AWS Systems Manager Parameter Store (Standard type). This means cross-account access is not possible for Parameter Store; if you need this functionality you will have to configure an extensive workaround, or use AWS Secrets Manager.
Size of Secrets
Each option has a maximum size for each stored secret or parameter.
Secrets Manager can store secrets of up to 10kb in size.
Standard Parameters can use up to 4096 characters (4KB size) for each entry, and Advanced Parameters can store up to 8KB entries.
Multi-Region Deployment
As with many other features of AWS Secrets Manager, AWS SSM Parameter Store does not offer the same functionality here. You can’t easily replicate your secrets across multiple Regions, and you will need to implement an extensive workaround to achieve this.
In terms of use cases, you may want to use AWS Secrets Manager to store your encrypted secrets with easy rotation. If you require a feature rich solution for managing your secrets to stay compliant with your regulatory and compliance requirements, consider choosing AWS Secrets Manager.
On the other hand, you may want to choose SSM Parameter Store as a cheaper option to store your encrypted or unencrypted secrets. Parameter Store will provide some limited functionality to enable your application deployments by storing your parameters in a safe, cheap and secure way.
When you are building applications in the AWS cloud, you have to go to painstaking lengths to make your applications durable, resilient and highly available.
Whilst AWS can help you with this for the most part, it is nearly impossible to see a situation in which you will not need some kind of Disaster Recovery plan.
An organization’s Business Continuity and Disaster Recovery (BCDR) program is a set of approaches and processes that can be used to recover from a disaster and resume regular business operations after the disaster has ended. Examples of disasters include a natural calamity, a power outage, an employee mistake, a hardware failure, or a cyberattack.
With the implementation of a BCDR plan, businesses can operate as close to normal as possible after an unexpected interruption, and with the least possible loss of data.
In this blog post, we will explore three notable disaster recovery strategies, each with different merits and drawbacks. Before we can appreciate these different methods, however, we need to break down some key Disaster Recovery terminology. We will examine all of these strategies through the lens of AWS infrastructure.
What is Disaster Recovery?
Disaster recovery is an extremely broad term; this definition provides an excellent summary:
“Disaster recovery involves a set of policies, tools, and procedures to enable the recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster.”
This definition emphasizes the necessity of recovering systems, tools, etc. after a disaster. Disaster Recovery depends on many factors, including:
• Financial plan
• Competence in technology
• Use of tools
• The Cloud Provider used
It is essential to understand some key terminology, including RPO and RTO, in order to evaluate disaster recovery efficacy:
How do RPOs and RTOs differ?
RPO (Recovery Point Objective)
The Recovery Point Objective (RPO) is the maximum acceptable amount of data loss after an unplanned data-loss incident, expressed as an amount of time. It is a maximum: to achieve a low RPO, you need to back up or replicate your data frequently.
RTO (Recovery Time Objective)
The Recovery Time Objective (RTO) is the maximum tolerable length of time that a computer, system, network or application can be down after a failure or disaster occurs. It is typically measured in minutes or hours, and achieving as low an RTO as possible depends on how quickly you can get your application back online.
Disaster Recovery Methods
Now that we understand these key concepts, we can break down three popular disaster recovery methods, namely Backup and Restore, Pilot Light, and Multi-Site Active/Active.
Backup and Restore
Data loss or corruption can be mitigated by using backup and restore, and replicating data to other data centers can also mitigate the effects of a disaster. In addition to restoring the data, you must redeploy the infrastructure, configuration, and application code in the recovery data center.
The recovery time objective (RTO) and recovery point objective (RPO) of backup and restoration are higher. The result is longer downtimes and greater data loss between the time of the disaster event and the time of recovery. Even so, backup and restore may still be the most cost-effective and easiest strategy for your workload. RTO and RPO in minutes or less are not required for all workloads.
RPO is dependent on how frequently you take snapshots, and RTO is dependent on how long it takes to restore snapshots.
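To make that relationship concrete: the data written after the last successful snapshot is what a failure loses, so the snapshot interval bounds the worst-case loss. A small Python sketch (the times are made up for illustration):

```python
from datetime import datetime, timedelta

def data_loss(last_snapshot: datetime, failure: datetime) -> timedelta:
    """Data written after the last snapshot is lost; the snapshot
    interval is therefore an upper bound on this window (the RPO)."""
    return failure - last_snapshot

# Snapshots every 4 hours, last snapshot at 08:00, failure at 11:30:
loss = data_loss(datetime(2024, 1, 1, 8, 0), datetime(2024, 1, 1, 11, 30))
print(loss)  # 3:30:00 of data lost – within a 4-hour RPO, but not a 1-hour one
```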
Pilot Light
As far as affordability and reliability are concerned, Pilot Light strikes a balance between the two. The key difference from Backup and Restore is that with Pilot Light, the core of your workload is always running somewhere – in another Region, or in another account and Region.
With Backup and Restore, for example, your data might simply be synced into an S3 bucket so that you can retrieve it in case of a disaster. With Pilot Light, by contrast, the data is synchronized to an always-on, always-available database replica.
Also, other core services, such as an EC2 instance with all of the necessary software already installed on it, will be available and ready to use at the touch of a button. There would be an Auto-Scaling Policy in place for each of these EC2 instances to ensure the instances would scale out in a timely manner in order to meet your production needs as soon as possible. This strategy focuses on a lower chance of overall downtime and is contingent on smaller aspects of your architecture running all of the time.
Multi-Site Active/Active
Having an exactly mirrored application across multiple AWS regions or data centers is the most resilient cloud disaster recovery strategy.
In the multi-site active/active strategy, you will be able to achieve the lowest RTO (recovery time objective) and RPO (recovery point objective). However, it is important to take into account the potential cost and complexity of operating active stacks in multiple locations.
A multi-AZ workload stack runs in every Region to ensure high availability, and data is replicated live between the data stores in each Region, in addition to being backed up. Data backups remain crucial to protect against disasters that lead to the loss or corruption of data.
Given its cost and complexity, only the most demanding applications should use this DR method – but it delivers the lowest RTOs and RPOs of any technique.
Conclusion
It is impossible to build a Disaster Recovery plan that fits all circumstances, and no “one size fits all” approach exists. Budget ahead of time – and ensure that you don’t spend more than you can afford. It may seem like a lot of money is being spent on ‘what ifs?’ – but if your applications CANNOT go down, you have the capability to ensure they stay up.
AWS offers many services, so many that it can often get pretty confusing for beginners and experts alike. This is especially true when it comes to the many storage options AWS provides its users. Knowing the benefits and use cases of AWS storage services will help you design the best solution. In this article, we’ll be looking at S3 vs EBS vs EFS.
So, what are these services and what do they do? Let’s start with S3.
Amazon S3 Benefits
The Amazon Simple Storage Service (Amazon S3) is AWS’s object storage solution. If you’ve ever used a service like Google Drive or Dropbox, you’ll know generally what S3 can do. At first glance, S3 is simply a place to store files, photos, videos, and other documents. However, after digging deeper, you’ll uncover the many functionalities of S3, making it much more than the average object storage service.
Some of these functionalities include scalable solutions, which essentially means that if your project gets bigger or smaller than originally expected, S3 can grow or shrink to easily meet your needs in a cost-effective manner. S3 also helps you to easily manage data, giving you the ability to control who accesses your content. With S3 you have data protection against all kinds of threats. It also replicates your data for increased durability and lets you choose between different storage classes to save you money.
S3 is incredibly powerful, so powerful, in fact, that even tech-giant Netflix uses S3 for its services. If you like Netflix, you have AWS S3 to thank for its convenience and efficiency! In fact, many of the websites you access on a daily basis either run off of S3 or use content stored in S3. Let’s look at a couple of use cases to get a better idea of how S3 is used in the real world.
Amazon S3 Use Cases
Have you ever accidentally deleted something important? S3 has backup and restore capabilities to make sure a user doesn’t lose data through versioning and deletion protection. Versioning means that AWS will save a new version of a file every time it’s updated and deletion protection makes sure a user has the right permissions before deleting a file.
What would a company do during an unexpected power outage or if their on-premises data center suddenly crashed? S3 data is protected in an Amazon managed data center, the same data centers Amazon uses to host their world-famous shopping website. By using S3, users get a second storage option without having to directly pay the rent and utilities of a physical site.
Some businesses need to store financial, medical, or other data mandated by industry standards. AWS allows users to archive this type of data with S3 Glacier, one of the many S3 storage classes to choose from. S3 Glacier is a cost-effective solution for archiving and one of the best in the market today.
Amazon EBS Benefits
Amazon Elastic Block Store (Amazon EBS) is an umbrella term for AWS’s block storage services. EBS is different from S3 in that it provides a storage volume directly attached to an EC2 (Elastic Compute Cloud) instance. EBS allows you to store files directly on an EC2 instance, letting the instance access those files quickly and cheaply. So when you hear or read about EBS, think “EC2 storage.”
You can customize your EBS volumes with the configuration best suited for the workload. For example, if you have a workload that requires greater throughput, then you could choose a Throughput Optimized HDD EBS volume. If you don’t have any specific needs for your workload then you could choose an EBS General Purpose SSD. If you need a high-performance volume then an EBS Provisioned IOPS SSD volume would do the trick. If you don’t understand yet, that’s okay! There’s a lot to learn about these volume types and we’ll cover that all in our video courses.
Just remember that EBS works with EC2 in a similar way to how your hard drive works with your computer. An EBS lets you save files locally to an EC2 instance. This storage capacity allows your EC2 to do some pretty powerful stuff that would otherwise be impossible. Let’s look at a couple of examples.
Amazon EBS Use Cases
Many companies look for cheaper ways to run their databases. Amazon EBS provides both Relational and NoSQL Databases with scalable solutions that have low-latency performance. Slack, the messaging app, uses EBS to increase database performance to better serve customers around the world.
Another use case of EBS involves backing up your instances. Because EBS is an AWS native solution, the backups you create in EBS can easily be uploaded to S3 for convenient and cost-effective storage. This way you’ll always be able to recover to a certain point-in-time if needed.
Amazon EFS Benefits
Elastic File System (EFS) is Amazon’s way of allowing businesses to share file data across multiple EC2 or on-premises instances simultaneously. EFS is an elastic, serverless service: it automatically grows and shrinks with your file storage needs, without you having to provision or manage it.
Some advantages include being able to divide up your content between frequently accessed or infrequently accessed storage classes, helping you save some serious cash. EFS is an AWS native solution, so it also works with containers and functions like Amazon Elastic Container Service (ECS) and AWS Lambda.
Imagine an international company has a hundred EC2 instances with each hosting a web application (a website like this one). Hundreds of thousands of people are accessing these servers on a regular basis — therefore producing HUGE amounts of data. EFS is the AWS tool that would allow you to connect the data gathered from hundreds, even thousands of instances so you can perform data analytics and gather key business insights.
Amazon EFS Use Cases
Amazon Elastic File System (EFS) provides an easy-to-use, high-performing, and consistent file system needed for machine learning and big data workloads. Tons of data scientists use EFS to create the perfect environment for their heavy workloads.
EFS provides an effective means of managing content and web applications. EFS mimics many of the file structures web developers often use, making it easy to learn and implement in web applications like websites or other online content.
When companies like Discover and Ancestry switched from legacy storage systems to Amazon EFS they saved huge amounts of money due to decreased costs in management and time.
S3 vs EBS vs EFS Comparison Table
AWS Storage Summed Up
S3 is for object storage. Think photos, videos, files, and simple web pages.
EBS is for EC2 block storage. Think of a computer’s hard drive.
EFS is a file system for many EC2 instances. Think multiple EC2 instances and lots of data.
I hope that clears up AWS storage options. Of course, we can only cover so much in an article but check out our AWS courses for video lectures and hands-on labs to really learn how these services work.
Serverless computing has been on the rise the last few years, and whilst there is still a large number of customers who are not cloud-ready, there is a larger contingent of users who want to realize the benefits of serverless computing to maximize productivity and to enable newer and more powerful ways of building applications.
Serverless in cloud computing
Serverless is a cloud computing execution model in which the cloud provider allocates machine resources on demand and manages the servers on behalf of their customers. Cloud service providers still use servers to execute code for developers, which makes the term “serverless” a misnomer. There is always a server running in the background somewhere, and the cloud provider (AWS in this case) will run the infrastructure for you and leave you with the room to build your applications.
AWS Lambda
Within the AWS world, the principal Serverless service is AWS Lambda. Using AWS Lambda, you can run code for virtually any type of application or backend service without provisioning or managing servers. AWS Lambda functions can be triggered from many services, and you only pay for what you use.
So how does Lambda work? With Lambda, your code runs on highly available compute infrastructure, and Lambda performs all the administration of the compute resources. This includes server and operating system maintenance, capacity provisioning and automatic scaling, code and security patch deployment, and code monitoring and logging. All you need to do is supply the code.
AWS Lambda is a powerful service that has in recent years elevated AWS to be the leader in not only serverless architecture development, but within the cloud industry in general.
For those of you who don’t know – Lambda is a serverless, event-driven compute service that lets you run code without provisioning or managing servers which can be used for virtually any type of application or backend service. Its serverless nature and the fact that it has wide appeal across different use cases has made AWS Lambda a useful tool when running your short running compute operations in the cloud.
What makes Lambda better than other options?
Lambda handles all the operational and administrative tasks on your behalf, such as provisioning capacity, monitoring fleet health, deploying and running your code, and providing monitoring and logging. Lambda’s key features and selling points are as follows:
Highly scalable
Completely event-driven
Supports multiple languages and frameworks
Pay-as-you-go pricing
The use cases for AWS Lambda are varied and cannot be sufficiently explored in one blog post. However, we have put together the top ten use cases in which Lambda shines the best.
1: Processing uploaded S3 objects
Once your files land in S3 buckets, you can immediately start processing them by Lambda using S3 object event notifications. Using AWS Lambda for thumbnail generation is a great example for this use case, as the solution is cost-effective and you won’t have to worry about scaling up since Lambda can handle any load you place on it.
The alternative to a serverless function handling this request is an EC2 instance spinning up every time a photo needs converting to a thumbnail, or an EC2 instance left running 24/7 for the occasions when a thumbnail needs generating. This use case calls for a low-latency, highly responsive, event-driven architecture that allows your application to perform effectively at scale.
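A minimal handler sketch for this pattern (the thumbnail step itself is stubbed out; the event shape below is the standard S3 notification format):

```python
from urllib.parse import unquote_plus

def parse_s3_records(event: dict) -> list:
    """Extract (bucket, key) pairs from an S3 event notification.
    Keys arrive URL-encoded, so they are decoded here."""
    return [
        (r["s3"]["bucket"]["name"], unquote_plus(r["s3"]["object"]["key"]))
        for r in event.get("Records", [])
    ]

def handler(event, context):
    # Lambda entry point: one invocation may carry several records.
    for bucket, key in parse_s3_records(event):
        print(f"would generate a thumbnail for s3://{bucket}/{key}")
        # ... download the object, resize it, upload the thumbnail ...
```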
2: Document editing and conversion in a hurry
When objects are uploaded to Amazon S3 you can leverage AWS Lambda to perform changes to the material to help with any business goal you may have. This can also include editing document types and adding watermarks to important corporate documents.
For example, you could leverage a RESTful API, using Amazon S3 Object Lambda to convert documents to PDF and apply a watermark based on the requesting user. You could also convert a file from doc to PDF automatically upon being uploaded to a particular S3 Bucket. The use cases within this field are also unlimited.
3: Cleaning up the backend
Any consumer-oriented website needs to have a fast response time as one of its top priorities. Slow response times or even a visible delay can cause traffic to be lost.
It is likely that your consumers will simply switch to another site if your site is too busy dealing with background tasks to display the next page or search results in a timely manner. While some sources of delay are beyond your control, such as slow ISPs, there are things you can do to improve your response time.
How does AWS Lambda come into play when it comes to cloud computing?
Background tasks should not be allowed to delay frontend requests. If you need to parse user input before storing it in a database, or perform other input-processing tasks that are not necessary for rendering the next page, you can hand the data off to an AWS Lambda function, which can clean it up and send it on to your database or application.
4: Creating and operating serverless websites
It is outdated to maintain a dedicated server, even a virtual server. Furthermore, provisioning the instances, updating the OS, etc. takes a lot of time and distracts you from focusing on the core functions.
You don’t need to manage a single server or operating system when you use AWS Lambda and other AWS services to build a powerful website. For a basic version of this architecture you could use AWS API Gateway, DynamoDB, Amazon S3 and Amazon Cognito User Pools to achieve a simple, low effort and highly scalable website to solve any of your business use cases.
5: Real-time processing of bulk data
It is not unusual for an application, or even a website, to handle a certain amount of real-time data at any given time. Depending on how the data is inputted, it can come from communication devices, peripherals interacting with the physical world, or user input devices. Generally, this data arrives in short bursts – or even a few bytes at a time – in formats that are easy to parse.
Nevertheless, there are times when your application might need to handle large amounts of streaming input data, so moving it to temporary storage for later processing may not be the best option.
It is usually necessary to be able to identify specific values from a stream of data collected from a remote device, such as a telemetry device. It is possible to handle the necessary real-time tasks without hindering the operation of your main application by sending the stream of data to a Lambda application on AWS that can pull and process the required information quickly.
6: Rendering pages in real-time
The Lambda service can play a significant role if you are using predictive page rendering in order to prepare webpages for display on your website.
As an example, if you want to retrieve documents and multimedia files for use in the next requested page, you can use a Lambda-based application to retrieve them and perform the initial stages of rendering them for display, so that they are ready when the next page is requested.
7: Automated backups
When you are operating an enterprise application in the cloud, certain manual tasks like backing up your database or other storage mediums can fall to the side. By taking the undifferentiated heavy lifting out of your operations you can focus on what delivers value.
Using Lambda scheduled events is a great way of performing housekeeping within your account. By using the boto3 Python libraries and AWS Lambda, you can create backups, check for idle resources, generate reports, and perform other common tasks quickly.
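For example, a scheduled Lambda function could use the boto3 EC2 client to snapshot a list of EBS volumes. The helper below is a minimal sketch: the client is passed in so the logic is easy to test, and the volume IDs are placeholders you would replace with your own (or discover via `describe_volumes`):

```python
import datetime

def snapshot_description(volume_id, now=None):
    """Build a dated description so snapshots are easy to identify."""
    now = now or datetime.datetime.utcnow()
    return f"auto-backup-{volume_id}-{now:%Y-%m-%d}"

def backup_volumes(ec2, volume_ids):
    """Snapshot each EBS volume.

    `ec2` is a boto3 EC2 client (boto3.client('ec2')), injected so the
    function can be exercised without touching AWS.
    """
    for vid in volume_ids:
        ec2.create_snapshot(VolumeId=vid,
                            Description=snapshot_description(vid))
```

Wiring this to an EventBridge (CloudWatch Events) schedule rule gives you a hands-off nightly backup.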
8: Email Campaigns using AWS Lambda & SES
You can build out simple email campaigns to send mass emails to potential customers to improve your business outcomes.
Mass mailing is part of the marketing services of any organization that engages in marketing. Traditional solutions often require hardware expenditures, license costs, and technical expertise.
You can build an in-house serverless email platform using AWS Lambda and Simple Email Service (SES) quite easily, and it can scale in line with your application.
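A minimal sketch of such a platform builds the SES `send_email` arguments for each recipient. The addresses below are placeholders; in production you would pass a real `boto3.client('ses')` and handle bounces and sending limits:

```python
def build_campaign_email(sender, recipient, subject, body_text):
    """Assemble the keyword arguments for SES send_email.

    Pass the result to boto3.client('ses').send_email(**kwargs).
    """
    return {
        "Source": sender,
        "Destination": {"ToAddresses": [recipient]},
        "Message": {
            "Subject": {"Data": subject},
            "Body": {"Text": {"Data": body_text}},
        },
    }

def send_campaign(ses, sender, recipients, subject, body_text):
    """Send one email per recipient; `ses` is a boto3 SES client."""
    for recipient in recipients:
        ses.send_email(**build_campaign_email(sender, recipient,
                                              subject, body_text))
```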
9: Real-time log analysis
You could easily build out a Lambda function to check log files from Cloudtrail or Cloudwatch.
Amazon CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, and optimize resource utilization. AWS CloudTrail can be used to track all API calls made within your account.
It is possible to search the logs for specific events or log entries as they occur in the logs and be notified of them via SNS when they occur. You can also very easily implement custom notification hooks to Slack, Zendesk, or other systems by calling their API endpoint within Lambda.
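As a sketch, a Lambda function subscribed to a CloudWatch Logs log group receives batches that are gzip-compressed and base64-encoded. The function below unpacks them, scans for a keyword, and optionally publishes matches to an SNS topic via an injected client; the `ERROR` keyword is an assumption for illustration:

```python
import base64
import gzip
import json

def decode_log_events(event):
    """CloudWatch Logs subscription data arrives gzip-compressed
    and base64-encoded under event['awslogs']['data']."""
    raw = base64.b64decode(event["awslogs"]["data"])
    doc = json.loads(gzip.decompress(raw))
    return doc.get("logEvents", [])

def find_matches(event, keyword="ERROR"):
    return [e["message"] for e in decode_log_events(event)
            if keyword in e["message"]]

def lambda_handler(event, context, sns=None, topic_arn=None):
    matches = find_matches(event)
    if matches and sns is not None:
        # sns is a boto3 SNS client; send one alert per batch
        sns.publish(TopicArn=topic_arn,
                    Message="\n".join(matches),
                    Subject="Log alert")
    return {"matched": len(matches)}
```

The same handler could call a Slack or Zendesk webhook instead of, or in addition to, SNS.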
10: AWS Lambda Use Case for Building Serverless Chatbot
Building and running chatbots is not only time consuming but also expensive. Developers must provision, run, and scale the infrastructure resources that run the chatbot code. With AWS Lambda, however, you can run a scalable chatbot architecture quite easily, without having to provision the hardware you would otherwise need outside the cloud.
Basics of Amazon Detective (Included in AWS SAA-C03 Exam)
Detective is integrated with Amazon GuardDuty, AWS Security Hub, and partner security products, so you can easily navigate to Detective from those services. You don’t have to organize any data or develop, configure, or tune queries and algorithms. There are no upfront costs: customers pay only for the events analyzed, with no additional software to deploy or other feeds to subscribe to.
Testimonial: Passed SAA-C03!
Hi, just got the word, I passed the cert!
I mainly used Maarek’s videos for the initial learning, did Tutorials Dojo for the practice tests, and used Cantrill’s to touch up on places where I lacked knowledge.
My next cert is prob gonna be SysOps. This time I plan to just use Cantrill’s videos, because I feel they helped me the most.
Today I got the notification that I am officially an AWS Certified Solutions Architect and I’m so happy!
I was nervous because I had been studying for the C02 version, but at the last minute I registered for the C03, thinking it was somehow “better” because it was more up to date (?). I didn’t know how different it would be, and the fact that Stephane had yet to release an updated version for this exam made me even more anxious. But it turned out well!
I used Stephane’s Udemy course and the practice exams from Tutorials Dojo to help me study. I think the practice exams were the most useful as they helped me understand better how the questions would be presented.
Looking back now, I don’t think there was a major difference between C02 and C03, so if you are thinking that you haven’t studied specifically for C03, I wouldn’t worry too much.
My experience with Practice exam –
I found Stephane’s practice exams to be more challenging, and they really helped me fill the gaps. The options were very similar to each other, so guessing was not an option in Stephane’s exams.
With TD, the questions were worded well but the options were poor. Even if you don’t know the answer, you can often guess it. Some options were like (Which one of these is a planet? – # Sun – # Earth – # Cow – # AWS), that easy to guess. That’s why I scored 85% on the second test yet still had to review every question, because I was scoring well without actually knowing the answers.
Things of note:
Use the keyboard shortcuts (eg alt-n for next question). Over 65 questions, this will save at least 1-2 minutes.
Attempt every question on first read; even if you flag it to come back to later, make a go of it there and then. That way, if you time out, you’ve put in your first, gut-feel answer. More often than not, during review you won’t change it anyway.
Don’t get disheartened. There are 15 unscored questions, so conceivably you could get those 15 plus 12-14 more wrong and still hit 720+ and pass!
Look for the keywords and obvious wrong answers. Most of the time it will come down to a choice of two answers, with maybe a keyword to nail home the right one. I found a lot of keywords/points that made me think ‘yep – has to be that’.
Read the entire question and all of the answers, even if sure on the right answer, just in case…
Discover what works best for you in terms of learning. Some people are more suited to books, some are hands on/projects, some are audio/video etc. Finding your way helps make learning something new a lot easier.
If at home, test your machine the week before and then again the day before, and don’t reboot afterwards. Remove as much stress from the event as possible.
I’m preparing for SAA-C03, and when I have questions about choosing the correct routing policy I always struggle with Latency, Geolocation, and Geoproximity.
Especially with these kinds of scenarios:
Latency
I have users in the US and in Europe; those in Europe have performance issues, so you set up your application in Europe as well, and you pick which routing policy?
Obviously ;-P I selected Geolocation, because they are in Europe and I want them to use the EU instances!!! I figured it would help with latency as well 🙁, or at least that seems logical to me, since with a Latency-based policy I cannot be sure they will use my servers in Europe.
2. Geolocation and Geoproximity
I don’t have a specific case to show, but my understanding is that when I need to change the bias, I pick proximity-based routing. The problem for me is understanding when a simple geolocation policy is not enough (any tips?). Is it that Geolocation is mainly used to restrict content and for internationalization? For country/compliance-based restrictions, I understand it is better to use CloudFront, so routing is not even an option in such cases…
Comments:
#1: Geolocation isn’t about performance; that’s a secondary effect, not the primary function.
Latency based routing is there for a reason, to ensure the lowest latency .. and latency (generally) is a good indicator of performance.. especially for any applications which are latency sensitive.
Geo-location is more about delivering content from a localized server .. it might be about data location, language, local laws.
These are taken from my lessons on it, geolocation doesn’t return the ‘closest’ record… if you have a record tagged UK and one tagged France and you are in Germany .. it won’t return either of those… it would do Germany, Europe, default etc.
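That fallback behavior (the most specific match wins, with no notion of “closest”) can be modeled in a few lines; the record tags below are illustrative:

```python
def geolocation_lookup(records, country, continent):
    """Model of Route 53 geolocation matching: the most specific
    record wins (country, then continent, then the default record).
    There is no 'closest' logic, so a UK record is never returned
    for a user in Germany.

    `records` maps location tags to record values.
    """
    for key in (country, continent, "default"):
        if key in records:
            return records[key]
    return None
```

With records tagged UK, FR, and default, a German user falls through to the default; add an EU (continent) record and the German user gets that instead.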
The different routing types are pretty easy to understand once you think about them in the right way.
AWS WAF and AWS Shield help protect your AWS resources from web exploits and DDoS attacks.
AWS WAF is a web application firewall service that helps protect your web apps from common exploits that could affect app availability, compromise security, or consume excessive resources.
AWS Shield provides expanded DDoS attack protection for your AWS resources. Get 24/7 support from our DDoS response team and detailed visibility into DDoS events.
We’ll now go into more detail on each service.
AWS Web Application Firewall (WAF)
AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.
AWS WAF helps protect web applications from attacks by allowing you to configure rules that allow, block, or monitor (count) web requests based on conditions that you define.
These conditions include IP addresses, HTTP headers, HTTP body, URI strings, SQL injection and cross-site scripting.
AWS WAF can allow or block web requests based on strings that appear in the requests, using string match conditions.
For example, AWS WAF can match values in the following request parts:
Header – A specified request header, for example, the User-Agent or Referer header.
HTTP method – The HTTP method, which indicates the type of operation that the request is asking the origin to perform. CloudFront supports the following methods: DELETE, GET, HEAD, OPTIONS, PATCH, POST, and PUT.
Query string – The part of a URL that appears after a ? character, if any.
URI – The URI path of the request, which identifies the resource, for example, /images/daily-ad.jpg.
Body – The part of a request that contains any additional data that you want to send to your web server as the HTTP request body, such as data from a form.
Single query parameter (value only) – Any parameter that you have defined as part of the query string.
All query parameters (values only) – As above, but inspects all parameters within the query string.
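To make the rule shape concrete, here is a hypothetical WAFv2 rule, expressed as a Python dict in the form boto3’s `create_web_acl`/`update_web_acl` expect, that blocks requests whose URI path starts with `/admin`. The rule name, metric name, and path are assumptions for illustration:

```python
# Hypothetical WAFv2 rule: block requests to /admin paths.
blocked_uri_rule = {
    "Name": "block-admin-path",      # illustrative rule name
    "Priority": 0,
    "Statement": {
        "ByteMatchStatement": {
            # boto3 expects a bytes-like SearchString
            "SearchString": b"/admin",
            "FieldToMatch": {"UriPath": {}},
            "TextTransformations": [{"Priority": 0, "Type": "LOWERCASE"}],
            "PositionalConstraint": "STARTS_WITH",
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "blockAdminPath",
    },
}
```

The same structure, with a different `Statement` type, covers the SQL injection and cross-site scripting conditions mentioned above.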
New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns.
When AWS services receive requests for web sites, the requests are forwarded to AWS WAF for inspection against defined rules.
Once a request meets a condition defined in the rules, AWS WAF instructs the underlying service to either block or allow the request based on the action you define.
With AWS WAF you pay only for what you use.
AWS WAF pricing is based on how many rules you deploy and how many web requests your web application receives.
There are no upfront commitments.
AWS WAF is tightly integrated with the Amazon CloudFront and Application Load Balancer (ALB) services.
When you use AWS WAF on Amazon CloudFront, rules run in all AWS Edge Locations, located around the world close to end users.
This means security doesn’t come at the expense of performance.
Blocked requests are stopped before they reach your web servers.
When you use AWS WAF on an Application Load Balancer, your rules run in region and can be used to protect internet-facing as well as internal load balancers.
Web Traffic Filtering
AWS WAF lets you create rules to filter web traffic based on conditions that include IP addresses, HTTP headers and body, or custom URIs.
This gives you an additional layer of protection from web attacks that attempt to exploit vulnerabilities in custom or third-party web applications.
In addition, AWS WAF makes it easy to create rules that block common web exploits like SQL injection and cross site scripting.
AWS WAF allows you to create a centralized set of rules that you can deploy across multiple websites.
This means that in an environment with many websites and web applications you can create a single set of rules that you can reuse across applications rather than recreating that rule on every application you want to protect.
Full feature API
AWS WAF can be completely administered via APIs.
This provides organizations with the ability to create and maintain rules automatically and incorporate them into the development and design process.
For example, a developer who has detailed knowledge of the web application could create a security rule as part of the deployment process.
This capability to incorporate security into your development process avoids the need for complex handoffs between application and security teams to make sure rules are kept up to date.
AWS WAF can also be deployed and provisioned automatically with AWS CloudFormation sample templates that allow you to describe all security rules you would like to deploy for your web applications delivered by Amazon CloudFront.
AWS WAF is integrated with Amazon CloudFront, which supports custom origins outside of AWS – this means you can protect web sites not hosted in AWS.
Support for IPv6 allows the AWS WAF to inspect HTTP/S requests coming from both IPv6 and IPv4 addresses.
Real-time visibility
AWS WAF provides real-time metrics and captures raw requests that include details about IP addresses, geo locations, URIs, User-Agent and Referers.
AWS WAF is fully integrated with Amazon CloudWatch, making it easy to setup custom alarms when thresholds are exceeded, or attacks occur.
This information provides valuable intelligence that can be used to create new rules to better protect applications.
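For example, you could alarm on the `BlockedRequests` metric that WAF publishes to CloudWatch. The helper below builds `put_metric_alarm` arguments as a sketch; the region dimension, `ALL` rule dimension, and threshold are assumptions to adapt to your own Web ACL:

```python
def blocked_requests_alarm(web_acl_name, region="us-east-1", threshold=1000):
    """Keyword arguments for cloudwatch.put_metric_alarm() that alert
    when WAF blocks an unusually high number of requests in 5 minutes.

    Pass the result to boto3.client('cloudwatch').put_metric_alarm(**kwargs).
    """
    return {
        "AlarmName": f"{web_acl_name}-blocked-requests",
        "Namespace": "AWS/WAFV2",
        "MetricName": "BlockedRequests",
        "Dimensions": [
            {"Name": "WebACL", "Value": web_acl_name},
            {"Name": "Region", "Value": region},   # assumed region
            {"Name": "Rule", "Value": "ALL"},      # aggregate over rules
        ],
        "Statistic": "Sum",
        "Period": 300,
        "EvaluationPeriods": 1,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }
```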
AWS Shield
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS.
AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection.
There are two tiers of AWS Shield – Standard and Advanced.
AWS Shield Standard
All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge.
AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target web sites or applications.
When using AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks.
AWS Shield Advanced
Provides higher levels of protection against attacks targeting applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator and Amazon Route 53 resources.
In addition to the network and transport layer protections that come with Standard, AWS Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF, a web application firewall.
AWS Shield Advanced also gives you 24×7 access to the AWS DDoS Response Team (DRT) and protection against DDoS related spikes in your Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator and Amazon Route 53 charges.
AWS Shield Advanced is available globally on all Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 edge locations.
Origin servers can be Amazon S3, Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), or a custom server outside of AWS.
AWS Shield Advanced includes DDoS cost protection, a safeguard from scaling charges because of a DDoS attack that causes usage spikes on protected Amazon EC2, Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, or Amazon Route 53.
If any of the AWS Shield Advanced protected resources scale up in response to a DDoS attack, you can request credits via the regular AWS Support channel.
AWS Simple Workflow vs AWS Step Function vs Apache Airflow
There are a number of different services and products on the market which support building logic and processes within your application flow. While these services have largely similar pricing, there are different use cases for each service.
AWS Simple Workflow Service (SWF), AWS Step Functions and Apache Airflow all seem very similar, and at times it may seem difficult to distinguish each service. This article highlights the similarities and differences, benefits, drawbacks, and use cases of these services that see a growing demand.
What is AWS Simple Workflow Service?
The AWS Simple Workflow Service (SWF) allows you to coordinate work between distributed applications.
A task is an invocation of a logical step in an Amazon SWF application. Amazon SWF interacts with workers which are programs that retrieve, process, and return tasks.
As part of the coordination of tasks, execution dependencies, scheduling, and concurrency are managed accordingly.
What are AWS Step Functions?
AWS Step Functions enables you to coordinate distributed applications and microservices through visual workflows.
Your workflow is visualized as a state machine describing the steps, their relationships, and their inputs and outputs. A state machine contains a number of states, each representing an individual step in the workflow diagram.
The states in your workflow can perform work, make choices, pass parameters, initiate parallel execution, manage timeouts, or terminate your workflow.
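As an illustration, a state machine definition (Amazon States Language, shown here as a Python dict you could serialize to JSON) for a hypothetical order workflow chains a task state into a choice state; the Lambda ARNs are placeholders:

```python
# Hypothetical order-processing state machine (Amazon States Language).
order_workflow = {
    "Comment": "Illustrative order-processing workflow",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            # placeholder ARN
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Next": "IsValid",
        },
        "IsValid": {
            "Type": "Choice",
            "Choices": [{"Variable": "$.valid",
                         "BooleanEquals": True,
                         "Next": "ChargeCustomer"}],
            "Default": "RejectOrder",
        },
        "ChargeCustomer": {
            "Type": "Task",
            # placeholder ARN
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge",
            "End": True,
        },
        "RejectOrder": {"Type": "Fail", "Cause": "Order failed validation"},
    },
}
```

You would pass the JSON form of this definition to `create_state_machine` in the Step Functions API.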
What is Apache Airflow?
Firstly, Apache Airflow is a third party tool – and is not an AWS Service. Apache Airflow is an open-source workflow management platform for data engineering pipelines.
This powerful and widely-used open-source workflow management system (WMS) allows programmatic creation, scheduling, orchestration, and monitoring of data pipelines and workflows.
Using Airflow, you can author workflows as Directed Acyclic Graphs (DAGs) of tasks, and Apache Airflow can integrate with many AWS and non-AWS services such as: Amazon Glacier, Amazon CloudWatch Logs and Google Cloud Secret Manager.
Benefits and Drawbacks
Each service brings its own benefits and drawbacks, so here’s an overview of some use cases of each service.
Choose AWS Simple Workflow Service if you are building:
Order management systems
Multi-stage message processing systems
Billing management systems
Video encoding systems
Image conversion systems
Choose AWS Step Functions if you want to include:
Microservice Orchestration
Security and IT Automation
Data Processing and ETL Orchestration
New instances of Media Processing
Choose Apache Airflow for:
ETL pipelines that extract data from multiple sources, and run Spark jobs or other data transformations
Machine learning model training
Automated generation of reports
Backups and other DevOps tasks
Conclusion
Each of the services discussed has unique use cases and deployment considerations. It is always necessary to fully determine your solution requirements before you make a decision as to which service best fits your needs.
There are many things that AWS actively tries to help you with, and cost optimization is one of them. Cost optimization, simply defined, comes down to reducing your cloud spend in specific areas without impacting the efficacy of your architecture and how it functions. Cost optimization is one of the pillars of the Well-Architected Framework, and we can use it to move toward a more streamlined, cost-efficient workload.
AWS Well-Architected Framework enables cloud architects to build fast, reliable, and secure infrastructures for a wide variety of workloads and applications. It is built around six pillars:
Operational excellence
Security
Reliability
Performance efficiency
Cost optimization
Sustainability
The Well-Architected Framework provides customers and partners with a consistent approach for evaluating architectures and implementing scalable designs on AWS. It is applicable for use whether you are a burgeoning start-up or an enterprise corporation using the AWS Cloud.
In this article however, we are going to focus on exactly what is cost optimization, explore some key principles of how it is defined and demonstrate some use cases as to how it could help you when architecting your own AWS Solutions.
What is Cost Optimization?
Besides being one of the pillars of the Well-Architected Framework, cost optimization is a broad yet simple term, defined by AWS as follows:
“The Cost Optimization pillar includes the ability to run systems to deliver business value at the lowest price point.”
It provides a comprehensive overview of the general design principles, best practices, and questions related to cost optimization. Once understood, it can have a massive impact on how you are launching your various applications on AWS.
As well as this definition of cost optimization, there are some key design principles which we’ll explore to make sure we are on the right track with enhancing our workloads:
Implement Cloud Financial Management
Cloud financial management is essential for achieving financial success and maximizing the value of your cloud investment. As your organization moves into this new era of technology and usage management, it is imperative to devote resources and time to developing capability in this area. As with security or operational excellence, becoming a cost-efficient organization requires building capability through knowledge building, programs, resources, and processes.
Adopt a consumption model
If you want to save money on computing resources, it is important to pay only for what you require, and to increase or decrease usage based on the needs of the business, without relying on elaborate forecasting.
Measure overall efficiency
It is important to measure the business output of a workload as well as the costs associated with delivering it. You can use this measure to understand the gains you make by increasing output and reducing costs. Efficiency doesn’t have to be only financially worthwhile: it can also keep any one server from becoming under- or over-utilized, which helps from a performance standpoint as well.
Stop spending money on unnecessary activities
When it comes to data center operations, AWS handles everything from racks and stacks to powering the servers. By utilizing managed services, you can also remove the operational burden of managing operating systems and applications. The advantage of this approach is that you are able to focus on your customers and your business projects rather than on your IT infrastructure.
Analyze and attribute expenditure
There is no doubt that the cloud allows for easy identification of the usage and cost of systems, which in turn allows for transparent attribution of IT costs to individual workload owners. Achieving this helps workload owners to measure the return on investment (ROI) of their investment as well as to reduce their costs and optimize their resources.
Now that we understand what we mean by ‘Cost Optimization on AWS’, here are some ways to use cost optimization principles to improve the overall financial performance of workloads on Amazon S3 and Amazon EC2:
Cost optimization on S3
Amazon S3 is an object-storage service which provides 11 Nines of Durability, and near infinite, low-cost object storage. There are a number of ways to even further optimize your costs, and ensure you are adhering to the Cost Optimization pillar of the Well Architected Framework.
S3 Intelligent Tiering
Amazon S3 Intelligent-Tiering is a storage class intended to optimize storage costs by automatically moving data to the most cost-effective access tier as usage patterns change over time. For a small monthly monitoring fee, Intelligent-Tiering watches access patterns and moves objects that have not been accessed to lower-cost access tiers, while keeping frequently accessed data in low-latency, high-throughput tiers. The storage class can also automatically archive data that can be accessed asynchronously.
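One way to adopt the class is via a lifecycle rule that transitions objects into INTELLIGENT_TIERING. The sketch below builds the configuration you would pass to `put_bucket_lifecycle_configuration`; the prefix is an assumption:

```python
def intelligent_tiering_lifecycle(prefix=""):
    """Lifecycle configuration that moves objects to the
    INTELLIGENT_TIERING storage class shortly after creation.

    Pass the result to s3.put_bucket_lifecycle_configuration(
        Bucket=..., LifecycleConfiguration=...) on a boto3 S3 client.
    """
    return {
        "Rules": [{
            "ID": "to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": prefix},  # "" applies to the whole bucket
            "Transitions": [{
                "Days": 0,
                "StorageClass": "INTELLIGENT_TIERING",
            }],
        }]
    }
```

Alternatively, you can write objects directly into the class by setting `StorageClass='INTELLIGENT_TIERING'` on upload.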
S3 Storage Class Analysis
Amazon S3 Storage Class Analysis analyses storage access patterns to help you decide when to transition the right data to the right storage class. This is a relatively new Amazon S3 analytics feature that monitors your data access patterns and tells you when data should be moved to a lower-cost storage class based on the frequency with which it is accessed.
Cost optimization on EC2
Amazon EC2 provides virtual machines in the cloud that can be scaled up or down dynamically as your application grows. There are a number of ways you can optimize your spend on EC2, depending on your use case, whilst still delivering excellent performance.
Savings Plans
In exchange for a commitment to a specific instance family within the AWS Region (for example, C7 in US-West-2), EC2 Instance Savings Plans offer savings of up to 72 percent off on-demand.
EC2 Instance Savings Plans allow you to switch between instance sizes within the family (for example, from c5.xlarge to c5.2xlarge) or operating systems (such as from Windows to Linux), or change from Dedicated to Default tenancy, while continuing to receive the discounted rate.
If you are using large amounts of particular EC2 instances, buying a Savings Plan allows you to flexibly save money on your compute spend.
Right-sizing EC2 Instances
Right-sizing is about matching instance types and sizes to your workload performance and capacity needs at the lowest possible cost. Furthermore, it involves analyzing deployed instances and identifying opportunities to eliminate or downsize them without compromising capacity or other requirements.
The Amazon EC2 service offers a variety of instance types tailored to fit the needs of different users. There are a number of instance types that offer different combinations of resources such as CPU, memory, storage, and networking, so that you can choose the right resource mix for your application.
You can use Trusted Advisor for recommendations on which EC2 instances are running at low utilization. This takes a lot of undifferentiated heavy lifting out of your hands, as AWS tells you the exact instances you should resize.
Using Spot Capacity where possible
Spot capacity is spare capacity that AWS has within its data centers, which it offers at a large discount (up to 90%). The downside is that when AWS needs the capacity back, for example because a customer is willing to pay the on-demand price for it, you are given a two-minute warning, after which your instances are terminated.
Applications requiring online availability are not well suited to spot instances. The use of Spot Instances is recommended for stateless, fault-tolerant, and flexible applications. A Spot Instance can be used for big data, containerized workloads, continuous integration and delivery (CI/CD), stateless web servers, high performance computing (HPC), and rendering workloads, as well as anything else which can be interrupted and requires low cost.
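As a sketch, Spot capacity can be requested through `run_instances` by adding `InstanceMarketOptions`. The AMI ID and instance type below are placeholders, and omitting `MaxPrice` means you pay up to the on-demand rate:

```python
def spot_launch_request(ami_id, instance_type="c5.large", max_price=None):
    """run_instances keyword arguments that request Spot capacity.

    Pass the result to boto3.client('ec2').run_instances(**kwargs).
    `ami_id` is a placeholder; omit `max_price` to bid up to the
    on-demand rate.
    """
    options = {"SpotInstanceType": "one-time"}
    if max_price is not None:
        options["MaxPrice"] = str(max_price)
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "InstanceMarketOptions": {
            "MarketType": "spot",
            "SpotOptions": options,
        },
    }
```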
There are many considerations when it comes to optimizing cost on AWS, and the Cost Optimization pillar provides us all of the resources we need to be fully enabled in our AWS journey.
Source: This article originally appeared on: https://digitalcloud.training/what-does-aws-mean-by-cost-optimization/
AWS Amplify is a set of tools and services that enables mobile and front-end web developers to build secure, scalable full stack applications powered by AWS. Amplify includes an open-source framework with use-case-centric libraries and a powerful toolchain to create and add cloud-based features to your application, and a web-hosting service to deploy static web applications.
AWS SAM
AWS SAM is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. With just a few lines per resource, you can define the application you want and model it using YAML. During deployment, AWS SAM transforms and expands the AWS SAM syntax into AWS CloudFormation syntax, enabling you to build serverless applications faster.
Amazon Cognito
Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0.
Vue Javascript Framework
Vue JavaScript framework is a progressive framework for building user interfaces. Unlike other monolithic frameworks, Vue is designed to be incrementally adoptable. The core library focuses on the view layer only and is easy to pick up and integrate with other libraries or existing projects. Vue is also perfectly capable of powering sophisticated single-page applications when used in combination with modern tooling and supporting libraries.
AWS Cloud9
AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal. AWS Cloud9 makes it easy to write, run, and debug serverless applications. It pre-configures the development environment with all the SDKs, libraries, and plugins needed for serverless development.
Swagger API
Swagger API is an open-source software framework backed by a large ecosystem of tools that help developers design, build, document, and consume RESTful web services. Swagger also lets you explore and test your backend API directly.
Amazon DynamoDB
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-Region, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second.
Amazon EventBridge
Amazon EventBridge makes it easy to build event-driven applications because it takes care of event ingestion, delivery, security, authorization, and error handling for you. To achieve the promises of serverless technologies with event-driven architecture, such as being able to individually scale, operate, and evolve each service, the communication between services must happen in a loosely coupled and reliable environment. Event-driven architecture is a fundamental approach for integrating independent systems or building up a set of loosely coupled systems that can operate, scale, and evolve independently and flexibly.
Amazon DynamoDB Streams
Amazon DynamoDB Streams is an ordered flow of information about changes to items in a DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.
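For instance, a Lambda function attached to a stream might simply tally the change types in each batch before acting on them; the event shape follows the DynamoDB Streams record format:

```python
def summarize_stream(event):
    """Count stream records by event type (INSERT, MODIFY, REMOVE).

    With the NEW_AND_OLD_IMAGES stream view, each record also carries
    the item as it looked before and after the change, which a real
    handler would inspect here.
    """
    counts = {}
    for record in event.get("Records", []):
        name = record["eventName"]
        counts[name] = counts.get(name, 0) + 1
    return counts
```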
AWS Step Functions
AWS Step Functions is a serverless function orchestrator that makes it easy to sequence Lambda functions and multiple AWS services into business-critical applications. Through its visual interface, you can create and run a series of checkpointed and event-driven workflows that maintain the application state. The output of one step acts as input to the next, and each step runs in the order defined by your business logic. Orchestrating a series of individual serverless applications, managing retries, and debugging failures can be challenging, and as your distributed applications become more complex, the complexity of managing them grows too. With its built-in operational controls, Step Functions manages sequencing, error handling, retry logic, and state, removing a significant operational burden from your team.
When your processing requires a series of steps, use Step Functions to build a state machine to orchestrate the workflow. This lets you keep your Lambda functions focused on business logic.
Returning to the baker in our analogy, when an order to make a pie comes in, the order is actually a series of related but distinct steps. Some steps have to be done first or in sequence, and some can be done in parallel. Some take longer than others. Someone with expertise in each step performs that step. To make things go smoothly and let the experts stick to their expertise, you need a way to manage the flow of steps and keep whoever needs to know informed of the status.
Amazon Simple Notification Service (Amazon SNS) is a fully managed messaging service for both system-to-system and app-to-person (A2P) communication. The service enables you to communicate between systems through publish/subscribe (pub/sub) patterns that enable messaging between decoupled microservice applications or to communicate directly to users via SMS, mobile push, and email. The system-to-system pub/sub functionality provides topics for high-throughput, push-based, many-to-many messaging. Using Amazon SNS topics, your publisher systems can fan out messages to a large number of subscriber systems or customer endpoints including Amazon Simple Queue Service (Amazon SQS) queues, Lambda functions, and HTTP/S, for parallel processing. The A2P messaging functionality enables you to send messages to users at scale using either a pub/sub pattern or direct-publish messages using a single API.
Observability extends traditional monitoring with approaches that address the kinds of questions you want to answer about your applications. Business metrics are sometimes an afterthought, only coming into play when someone in the business asks the question, and you have to figure out how to get the answers from the data you have. If you build in these needs when you’re building the application, you’ll have much more visibility into what’s happening within your application.
Logs, metrics, and distributed tracing are often known as the three pillars of observability. These are powerful tools that, if well understood, can unlock the ability to build better systems.
Logs provide valuable insight into your application's health. Event logs are especially helpful in uncovering the emergent and unpredictable behaviors that components of a distributed system exhibit. Logs come in three forms: plaintext, structured, and binary.
Metrics are a numeric representation of data measured over intervals of time about the performance of your systems. You can configure automatic alerts that fire when a metric crosses a threshold you define.
Tracing can provide visibility into both the path that a request traverses and the structure of a request. An event-driven or microservices architecture consists of many different distributed parts that must be monitored. Imagine a complex system consisting of multiple microservices, and an error occurs in one of the services in the call chain. Even if every microservice is logging properly and logs are consolidated in a central system, it can be difficult to find all relevant log messages.
AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application and shows a map of your application’s underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.
Amazon CloudWatch Logs Insights is a fully managed service that is designed to work at cloud scale with no setup or maintenance required. The service analyzes massive logs in seconds and gives you fast, interactive queries and visualizations. CloudWatch Logs Insights can handle any log format and autodiscovers fields from JSON logs.
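As an illustration, a Logs Insights query for the slowest invocations of a Lambda function might look like this. The log group name is a placeholder; `@duration` and `@requestId` are fields Logs Insights auto-discovers from Lambda's REPORT log lines.

```python
import time

# A Logs Insights query surfacing the ten slowest Lambda invocations
# over the last hour. The log group name is illustrative.
query = """
fields @timestamp, @requestId, @duration
| filter @type = "REPORT"
| sort @duration desc
| limit 10
"""

start_query_params = {
    "logGroupName": "/aws/lambda/my-function",
    "startTime": int(time.time()) - 3600,  # one hour ago
    "endTime": int(time.time()),
    "queryString": query,
}
# In a real script: boto3.client("logs").start_query(**start_query_params)
```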
Amazon CloudWatch ServiceLens is a feature that enables you to visualize and analyze the health, performance, and availability of your applications in a single place. CloudWatch ServiceLens ties together CloudWatch metrics and logs, as well as traces from X-Ray, to give you a complete view of your applications and their dependencies. This enables you to quickly pinpoint performance bottlenecks, isolate root causes of application issues, and determine impacted users.
Characteristics of modern applications that challenge traditional approaches
Short-lived resources
More devices, services, and data
Faster release velocity
Importance of user experience
AWS services that address the three pillars of observability
CloudWatch Logs and Logs Insights
X-Ray
CloudWatch metrics
ServiceLens
Ways you can use CloudWatch metrics
Use default operational metrics.
Alarm based on thresholds, and trigger automated actions.
Correlate logs and metrics.
Create custom metrics.
Use embedded metrics format to create metrics from log files.
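The embedded metric format is just structured JSON written to your logs; CloudWatch extracts the metric without a separate PutMetricData call. Here is a minimal sketch, with the namespace, dimension, and metric names as assumed examples.

```python
import json

# Build a CloudWatch embedded metric format (EMF) log line. When a Lambda
# function prints this to stdout, CloudWatch extracts "OrderValue" as a
# metric in the "ContestApp" namespace. All names here are illustrative.
def emf_record(order_value_usd, region):
    return json.dumps({
        "_aws": {
            "Timestamp": 1700000000000,  # epoch milliseconds
            "CloudWatchMetrics": [{
                "Namespace": "ContestApp",
                "Dimensions": [["Region"]],
                "Metrics": [{"Name": "OrderValue", "Unit": "None"}],
            }],
        },
        "Region": region,
        "OrderValue": order_value_usd,
    })

line = emf_record(42.5, "us-east-1")  # print(line) inside the handler
```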
User pools can act as an identity provider with profile management and issue a JWT that can be used for authorization. You can also configure sign in to a web or mobile application through Amazon Cognito with user pools.
Federated identities cannot be used as an identity provider (IdP), but you can use them to create unique identities for users and federate them with identity providers. With federated identities, you can issue AWS access key IDs and secret access keys that can be used with temporary IAM credentials to access AWS resources.
Using Amazon EventBridge and Amazon SNS to Decouple Components:
In an asynchronous design, the client sends a request and may get an acknowledgement that the event was received, but it doesn’t get a response that includes the results of the request.
Amazon EventBridge and Amazon SNS are both messaging services that trigger Lambda asynchronously and allow you to fan out for parallel processing.
The advantage of asynchronous processing is that you reduce the dependencies on downstream activities, and this reduction improves responsiveness back to the client. This means you don’t have to put logic into your code to deal with long wait times or to handle errors that might occur downstream. Once your client has successfully handed off the request, you can move on.
If you go into your design thinking in terms of asynchronous connections, you can create a much more flexible and resilient application.
In asynchronous communications, it’s important that the client has a way to know when the downstream task is done so that it can complete the appropriate next steps. There are three common patterns the client might use to get the status of the asynchronous transaction.
Three types of event sources for EventBridge
AWS services
Custom applications
Software as a service (SaaS) applications
Two ways Amazon SNS supports event-driven design
You can filter events with subscriber-specific filtering policies.
You can build wide fan-out patterns.
Three ways the schema registry helps developers
Stores event structures that can be searched
Generates code bindings
Automatically discovers and adds schemas to the registry with schema discovery
What are the four serverless resource types supported in AWS SAM templates?
Lambda functions
API Gateway APIs
DynamoDB tables
Step Functions state machines
Declarative Programming vs Imperative Programming
With declarative programming, you say what you want (abstract). Compared to imperative programming, developers need less language-specific knowledge because the higher-level tool chain does more of the work. But you give up flexibility in areas like full execution control, looping constructs, and advanced techniques (such as OO inheritance, threading, and automated testing). AWS CloudFormation templates use declarative programming.
With imperative programming, you say how to do it (procedural). Compared to declarative programming, developers need more knowledge to write code in a specific language syntax, using its API libraries, conventions, and exception-handling mechanisms. But this gives the developer greater flexibility within language-specific editors and tooling. The AWS CDK is an option for using imperative programming.
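A toy illustration of the contrast: the declarative form states the desired resources as data (as a CloudFormation template does), while the imperative form constructs them with explicit control flow (as CDK code does). Both arrive at the same result; the resource names are made up.

```python
# Declarative: the desired end state expressed as data, in one expression.
declarative = {
    "Resources": {f"Bucket{i}": {"Type": "AWS::S3::Bucket"} for i in range(3)}
}

# Imperative: the same end state built step by step with explicit control flow.
imperative = {"Resources": {}}
for i in range(3):
    name = f"Bucket{i}"
    imperative["Resources"][name] = {"Type": "AWS::S3::Bucket"}

assert declarative == imperative  # same outcome, different style
```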
Three things Lambda does for you when polling a queue
Polls the queue and invokes your function with an event that contains a batch of messages
Deletes a successfully processed batch of records off the queue
Scales pollers and Lambda concurrency up and down based on queue depth and error rates
Three things Lambda does for you when polling a stream
Polls the stream and invokes your function with an event that contains a batch of messages
Maintains a pointer of last record processed on the stream
Invokes one Lambda function instance per shard or the number you choose for concurrent batches per shard
Queues or stream?
Data is available to multiple consumers: Stream
Rate of messages is continuous and high volume: Stream
Consumer must maintain a pointer: Stream
Rates of messages vary: Queues
Value from acting on the stream of messages: Stream
Messages are deleted after they are successfully processed: Queue
Act on individual messages: Queues
There is only one consumer of the messages: Queues
Messages are deleted after they are successfully processed: With a queue, you delete the message from the queue after it has been processed so that it does not get picked up for processing again. When you use Lambda as an event source, Lambda deletes successfully processed batches from the queue.
Consumer must maintain a pointer: With streams, Lambda and other consumers must maintain a pointer to track which records need to be processed next.
Data is available to multiple consumers: Multiple consumers can poll a stream and process the messages. Messages remain on the stream until they expire.
There is only one consumer of the messages: This implies a queue. You set up a single target for the queue, for example a Lambda function. Messages are deleted from the queue as they are processed.
Rates of messages may vary: This is generally a characteristic that leads to selecting a queue. Records are processed as they arrive, but you do not anticipate a steady stream.
Rate of messages is continuous and high volume: This is the typical scenario for a stream. Messages come in at a steady pace, and there is a high volume of messages to process.
Act on individual messages: Use a queue when you need to act on individual messages, for example processing an order.
Value from acting on the stream of messages: This points to a stream. You don’t really get value from processing one message on a stream, but instead, find value in aggregating results.
What are the Best practices for writing Lambda functions?
Take advantage of environment reuse, and check that background processes have completed.
Manage database connection pooling with a database proxy.
Persist state data externally.
Minimize deployment package size and complexity of dependencies.
Mount Amazon Elastic File System (Amazon EFS) for large or shared assets.
Use provisioned concurrency to avoid cold starts.
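The first best practice, taking advantage of environment reuse, can be sketched like this: anything defined outside the handler is created once per execution environment and reused across warm invocations. The fake client below stands in for something expensive, such as a boto3 client or database connection.

```python
# Sketch of execution-environment reuse. Module-level code runs once at
# cold start; the handler runs on every invocation and reuses `client`.
INIT_COUNT = 0

def make_client():
    global INIT_COUNT
    INIT_COUNT += 1            # expensive setup happens here, once
    return {"connected": True}

client = make_client()          # runs at cold start only

def handler(event, context):
    # Warm invocations reuse the already-initialized client.
    return {"ok": client["connected"], "inits": INIT_COUNT}

first = handler({}, None)
second = handler({}, None)
# Both invocations see a single initialization.
```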
What are Three configurations that impact performance for Lambda functions?
Memory
Timeout
Concurrency (reserved and provisioned)
Lambda error handling
Two general types of errors from Lambda Functions:
Invocation errors
Function errors
Characteristics of error handling for synchronous event sources
The event source must handle any errors that occur.
There are no built-in retries.
Developers should use the backoff and retry functionality in the AWS CLI and AWS SDK to respond to errors.
Characteristics of error handling for asynchronous event sources
The client or invoking service is responsible for errors that prevent Lambda from invoking the function, such as permission issues or invalid JSON.
If Lambda successfully invokes the function but your function throws an exception or doesn’t complete, Lambda will retry running the function up to two more times. You can set this retry value from 0-2 in the function configuration.
You can send Lambda invocation failures that continue to fail to an OnFailure destination or a dead-letter queue.
Characteristics of error handling for streams as an event source
By default, Lambda will keep trying a failed batch until it succeeds or the record expires off of the stream, effectively blocking the shard.
Configure the event source to split a failed batch or use checkpointing and use maximum retries to isolate failing records and send them to an OnFailure destination.
To preserve order, update your function to return a failed sequence identifier and use checkpointing to retry only records that have not been processed.
When you save updates to your Lambda function, $LATEST is updated by default. You edit the $LATEST code and publish new versions as needed.
When you choose the option to “publish” a version of the function, Lambda creates a copy of the unpublished $LATEST version and gives it a sequential number. That version is immutable.
The qualified ARN for that version puts the version number as a qualifier.
You can create an alias and associate to any version of your function, including $LATEST. This allows you to reference your function using the alias as a qualifier in your ARN.
Lambda also lets you use alias routing to have an alias point to two versions, sending a percentage of traffic to each version.
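A canary shift via alias routing is one API call. Below are parameters you could pass to the boto3 Lambda client's `update_alias()`; the function name, alias, and version numbers are placeholders.

```python
# Parameters for boto3's lambda client update_alias(): keep version 1 as
# the primary and send 10 percent of "live" alias traffic to version 2.
# Function name, alias, and versions are illustrative.
canary_params = {
    "FunctionName": "order-processor",
    "Name": "live",
    "FunctionVersion": "1",                       # primary version
    "RoutingConfig": {
        "AdditionalVersionWeights": {"2": 0.10},  # 10% canary to version 2
    },
}
# In a real script: boto3.client("lambda").update_alias(**canary_params)
```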
With synchronous event sources, the client is responsible for all error handling. The AWS CLI and AWS SDK include backoff and retries by default, so take advantage of those to respond to errors.
With asynchronous event sources, Lambda will retry failed functions up to two times based on the selection you make on the Lambda function configuration for asynchronous events. With asynchronous event sources, you also have the option to set an on-failure destination and a dead-letter queue.
With streaming event sources, you can configure error handling to split batches on error. This allows the stream to isolate the failing record and send it to an on-failure destination. This is important for preventing bottlenecks because Lambda will not move the pointer on the stream until a batch is successful.
With Amazon SQS as an event source, Lambda will increase concurrency to keep up with the pace of requests, but it will also decrease concurrency if there are errors being returned. With Amazon SQS as an event source, you can configure a dead-letter queue on the queue itself.
In the world of Cloud Computing, Security is always job zero. This means that we design everything with Security in mind – at every single layer of our application! While you may have heard about AWS Security Groups – have you ever stopped to think about what a security group is, and what it actually does?
For example, if you are launching a web server to host a brand new website on AWS, you will have to allow certain protocols to initiate communication with the server so that users can interact with your website, while blocking others. On the other hand, if you give everyone access to your server using any protocol, you may leave sensitive information easily reachable by anyone on the internet, ruining your security posture.
Striking this balance is done using a specific technology in AWS, and today we are going to explore how Security Groups work and what problems they help you solve.
What is a Security Group?
Security groups control traffic reaching and leaving the resources they are associated with according to the security group rules set by each group. After you associate a security group with an EC2 instance, it controls the instance’s inbound and outbound traffic.
Although VPCs come with a default security group when you create them, additional security groups can be created for any VPC within your account.
Security groups can only be associated with resources in the VPC for which they were created, and do not apply to resources in different VPCs.
Each security group has rules for controlling traffic based on protocols and ports. There are separate rules for inbound and outbound traffic.
Let’s have a look at what a security group looks like.
As stated earlier, Security Groups control inbound and outbound traffic in relation to resources placed in these security groups. Below are some example rules that you would see routinely when interacting with security groups for a Web Server.
Inbound
Outbound
Security Groups can also be used for the Relational Database Service, and for Amazon Elasticache to control traffic in a similar way.
Security Group Quotas
There is a limit on the number of Security Groups you can have within a Region, and a limit on the number of inbound and outbound rules you can have per security group.
By default, you can have 2,500 Security Groups per Region. This quota applies to individual AWS account VPCs and shared VPCs, and is adjustable by opening a support ticket with AWS Support.
Regarding the number of inbound and outbound rules per Security Group, you can have 60 inbound and 60 outbound rules per security group (a total of 120 rules). IPv4 and IPv6 rules are counted separately: a security group can have 60 inbound IPv4 rules and another 60 inbound IPv6 rules.
Both the inbound and outbound rule quotas can be raised with a quota change. However, the rules-per-group quota multiplied by the security-groups-per-network-interface quota cannot exceed 1,000.
Best Practices with Security Groups
Since Security Groups are an inevitable part of our infrastructure, we can follow some best practices to ensure that we align with the highest security standards possible.
Ensure your Security Groups do not have a large range of ports open
When large port ranges are open, instances are vulnerable to unwanted attacks. Furthermore, open ranges make it very difficult to trace vulnerabilities. Web servers may only require ports 80 and 443 to be open, and no more.
Create new security groups and restrict traffic appropriately
If you use the default AWS security group for your active resources, you unnecessarily expose your instances and applications and weaken your security posture.
Where possible, restrict access to required IP address(es) and by port, even internally within your organization
If you allow all access (0.0.0.0/0 or ::/0) to your resources, you are asking for trouble. Where possible, restrict access to your resources to an individual IP address or range of addresses. This prevents bad actors from accessing your instances and strengthens your security posture.
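Restricting a rule to a known range instead of 0.0.0.0/0 looks like this. Below are parameters you could pass to boto3's `ec2.authorize_security_group_ingress()`; the group ID and CIDR are placeholders.

```python
# Parameters for boto3's ec2.authorize_security_group_ingress(): allow
# HTTPS only from one corporate CIDR rather than the whole internet.
# Group ID and CIDR are illustrative placeholders.
ingress_params = {
    "GroupId": "sg-0123456789abcdef0",
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{
            "CidrIp": "203.0.113.0/24",  # a specific range, not 0.0.0.0/0
            "Description": "HTTPS from corporate network only",
        }],
    }],
}
# In a real script: boto3.client("ec2").authorize_security_group_ingress(**ingress_params)
```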
Chain Security Groups together
When chaining Security Groups, the inbound and outbound rules are set up so that traffic can only flow from the top tier to the bottom tier and back up again. The security groups act as firewalls that prevent a security breach in one tier from automatically granting the compromised client subnet-wide access to all resources.
The AWS Front-End Web and Mobile services support development workflows for native iOS/Android, React Native, and JavaScript developers. You can develop apps and deliver, test, and monitor them using managed AWS services.
AWS AppSync Features
AWS AppSync is a fully managed service that makes it easy to develop GraphQL APIs.
Securely connects to data sources like Amazon DynamoDB, AWS Lambda, and more.
Add caches to improve performance, subscriptions to support real-time updates, and client-side data stores that keep offline clients in sync.
AWS AppSync automatically scales your GraphQL API execution engine up and down to meet API request volumes.
GraphQL
AWS AppSync uses GraphQL, a data language that enables client apps to fetch, change and subscribe to data from servers.
In a GraphQL query, the client specifies how the data is to be structured when it is returned by the server.
This makes it possible for the client to query only for the data it needs, in the format that it needs it in.
GraphQL also includes a feature called “introspection” which lets new developers on a project discover the data available without requiring knowledge of the backend.
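As an illustration of client-specified shaping, here is a query against a hypothetical schema: the client asks only for an order's id, status, and item names, and nothing else comes back. All field names are assumed examples, not part of any real AppSync API.

```graphql
# Illustrative query: the response contains exactly these fields and no more.
query GetOrder {
  order(id: "1001") {
    id
    status
    items {
      name
    }
  }
}
```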
Real-time data access and updates
AWS AppSync lets you specify which portions of your data should be available in a real-time manner using GraphQL Subscriptions.
GraphQL Subscriptions are simple statements in the application code that tell the service what data should be updated in real-time.
Offline data synchronization
The Amplify DataStore provides a queryable on-device DataStore for web, mobile and IoT developers.
When combined with AWS AppSync the DataStore can leverage advanced versioning, conflict detection and resolution in the cloud.
This allows automatic merging of data from different clients as well as providing data consistency and integrity.
Data querying, filtering, and search in apps
AWS AppSync gives client applications the ability to specify data requirements with GraphQL so that only the needed data is fetched, allowing for both server and client filtering.
AWS AppSync supports AWS Lambda, Amazon DynamoDB, and Amazon Elasticsearch.
GraphQL operations can be simple lookups, complex queries & mappings, full text searches, fuzzy/keyword searches, or geo lookups.
Server-Side Caching
AWS AppSync’s server-side data caching capabilities reduce the need to directly access data sources.
Data is delivered at low latency using high speed in-memory managed caches.
AppSync is fully managed and eliminates the operational overhead of managing cache clusters.
Provides the flexibility to selectively cache data fields and operations defined in the GraphQL schema with customizable expiration.
Security and Access Control
AWS AppSync allows several levels of data access and authorization depending on the needs of an application.
Simple access can be protected by a key.
AWS IAM roles can be used for more restrictive access control.
AWS AppSync also integrates with:
Amazon Cognito User Pools for email and password functionality
Social providers (Facebook, Google+, and Login with Amazon).
Enterprise federation with SAML.
Customers can use the Group functionality for logical organization of users and roles as well as OAuth features for application access.
Custom Domain Names
AWS AppSync enables customers to use custom domain names with their AWS AppSync API to access their GraphQL endpoint and real-time endpoint.
Used with AWS Certificate Manager (ACM) certificates.
A custom domain name can be associated with any available AppSync API in your account.
When AppSync receives a request on the custom domain endpoint, it routes it to the associated API for handling.
Cloud security best practices are serverless best practices. These include applying the principle of least privilege, securing data in transit and at rest, writing code that is security-aware, and monitoring and auditing actively.
Apply a defense in depth approach to your serverless application security.
Thinking serverless at scale means knowing the quotas of the services you are using and focusing on scaling trade-offs and optimizations among those services to find the balance that makes the most sense for your workload.
As your solutions evolve and your usage patterns become clearer, you should continue to find ways to optimize performance and costs and make the trade-offs that best support the workload you need rather than trying to scale infinitely on all components. Don’t expect to get it perfect on the first deployment. Build in the kind of monitoring and observability that will help you understand what’s happening, and be prepared to tweak things that make sense for the access patterns that happen in production.
Lambda Power Tuning helps you understand the optimal memory to allocate to functions.
You can specify whether you want to optimize on cost, performance, or a balance of the two.
Under the hood, a Step Functions state machine invokes the function you’ve specified at different memory settings from 128 MB to 3 GB and captures both duration and cost values.
Let’s take a look at Lambda Power Tuning in action with a function I’ve written.
The function I have determines the hash value of a lot of numbers. Computationally, it’s expensive. I’d like to know whether I should be allocating 1 GB, 1.5 GB, or 3 GB of RAM to it.
I can specify the memory values to test in the file deploy.sh. In my example, I’m only using 1 GB, 1.5 GB, and 3 GB. The state machine takes the following parameters (you define these in sample-execution-input.json):
Lambda ARN
Number of invocations for each memory configuration
Static payload to pass to the Lambda function for each invocation
Parallel invocation: Whether all invocations should be in parallel or not. Depending on the value, you may experience throttling.
Strategy: Can be cost, speed, or balanced. Default is cost.
If you specify cost, it will report the cheapest option regardless of performance. Speed will suggest the fastest regardless of cost. Balanced will choose a compromise according to balancedWeight, a number between 0 and 1: 0 is the speed strategy and 1 is the cost strategy.
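A sketch of what that execution input might look like, mirroring the parameters described above (the field names follow the Lambda Power Tuning project's sample-execution-input.json; the ARN is a placeholder, and memory values are in MB, so 1 GB, 1.5 GB, and 3 GB become 1024, 1536, and 3072):

```python
import json

# Illustrative Lambda Power Tuning execution input. The ARN is a
# placeholder; powerValues are memory settings in MB.
execution_input = {
    "lambdaARN": "arn:aws:lambda:us-east-1:123456789012:function:hash-numbers",
    "powerValues": [1024, 1536, 3072],  # 1 GB, 1.5 GB, 3 GB
    "num": 10,                          # invocations per memory configuration
    "payload": {},                      # static payload for each invocation
    "parallelInvocation": True,
    "strategy": "speed",                # or "cost" / "balanced"
}
input_json = json.dumps(execution_input)
# In a real run: stepfunctions.start_execution(stateMachineArn=..., input=input_json)
```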
Let’s take a look at the inputs I’ve specified and find out how much memory we should allocate.
In this configuration, I’m specifying that I want this function to execute as quickly as possible.
Results.power shows that 3 GB provides the best performance.
Let’s update my configuration to use the default strategy of cost and run again.
Results.power shows that 1 GB is the best option for price.
Use this tool to help you evaluate how to configure your Lambda functions.
How API Gateway responds to a burst of requests
API Gateway uses the token bucket algorithm to fulfill requests at a steady pace.
Requests spend tokens from the bucket, which refills at the steady-state rate. If requests arrive faster than the bucket refills and the burst capacity (the bucket size) is exhausted, a 429 Too Many Requests error is returned.
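A minimal token-bucket sketch of this behavior (the rate and burst numbers are made up; real API Gateway limits are configured per stage or usage plan):

```python
# Token bucket: holds up to `burst` tokens, refills at `rate` tokens/sec.
# Each request spends a token; with the bucket empty, the request is
# rejected, which is when API Gateway would return 429 Too Many Requests.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        # Refill based on elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request served
        return False      # throttled (429)

bucket = TokenBucket(rate=10, burst=5)
# Six simultaneous requests: the first five fit in the burst, the sixth fails.
results = [bucket.allow(now=0.0) for _ in range(6)]
```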
Lambda concurrency considerations for scaling
Burst quota: Regional limit that prevents concurrency from increasing too quickly (cannot be changed)
Regional account quota: Soft limit on total number of concurrent invocations within an account by Region
Reserved concurrency: Optional limit per function
Provisioned concurrency: Optional subset of reserved concurrency that is always warm
How Lambda reacts to a burst of requests
Lambda immediately increases the number of concurrent invocations by the Regional burst concurrency quota.
After this immediate increase, Lambda will add up to 500 concurrent invocations per minute until it has enough to run all the requests concurrently or until it reaches the function or account limit.
Synchronous and asynchronous event source scaling
Lambda will increase concurrency to keep up with demand up to an account quota or the reserved concurrency set on a function.
Concurrency = requests * duration.
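The formula above is request rate multiplied by average duration. A worked example under assumed numbers:

```python
# Concurrency = requests per second * average duration in seconds.
def concurrency(requests_per_second, avg_duration_seconds):
    return requests_per_second * avg_duration_seconds

needed = concurrency(100, 0.5)          # 100 req/s at 0.5 s -> 50 concurrent
after_tuning = concurrency(100, 0.25)   # halving duration halves concurrency
```

This is why shortening function duration is often the cheapest way to stay under a concurrency quota.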
Amazon SQS queue event source scaling
Lambda increases the number of pollers on the queue as queue depth increases but decreases it if error rates are increasing.
You can increase the batch size to process more messages at once, but you need to avoid making it so large that the batch can’t complete before the function times out.
Kinesis Data Streams event source scaling
Lambda invokes one concurrent invocation per shard by default.
You can increase the batch size and increase the concurrent batches per shard to process batches of messages faster.
Configure error handling with a maximum retry count and an onFailure destination to prevent blocked shards.
Configure enhanced fan-out to give higher throughput to many consumers.
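The error-handling and scaling settings above map to event source mapping parameters. Below is a sketch of what you could pass to the boto3 Lambda client's `update_event_source_mapping()` for a Kinesis source; the UUID and queue ARN are placeholders.

```python
# Parameters for boto3's lambda client update_event_source_mapping(),
# applying the Kinesis guidance above. UUID and DLQ ARN are illustrative.
stream_mapping_params = {
    "UUID": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    "BisectBatchOnFunctionError": True,  # split a failed batch to isolate bad records
    "MaximumRetryAttempts": 2,           # stop retrying before the shard blocks
    "ParallelizationFactor": 4,          # concurrent batches per shard
    "DestinationConfig": {
        "OnFailure": {"Destination": "arn:aws:sqs:us-east-1:123456789012:stream-dlq"},
    },
}
# In a real script: boto3.client("lambda").update_event_source_mapping(**stream_mapping_params)
```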
Automation is especially important with serverless applications. Lots of distributed services that can be independently deployed mean more, smaller deployment pipelines that each build and test a service or set of services. With an automated pipeline, you can incorporate better detection of anomalies and more testing, halt your pipeline at a certain step, and automatically roll back a change if a deployment were to fail or if an alarm threshold is triggered.
Your pipeline may be a mix and match of AWS or third-party components that suit your needs, but the concepts apply generally to whatever tools your organization uses for each of these steps in the deployment tool chain. This module will reference the AWS tools that you can use in each step in your CI/CD pipeline.
CI/CD best practices
Configure testing using safe deployments in AWS SAM:
Declare an AutoPublishAlias
Set safe deployment type
Set a list of up to 10 alarms that will trigger a rollback
Configure a Lambda function to run pre- and post-deployment tests
Use traffic shifting with pre- and post-deployment hooks
PreTraffic: When the application is deployed, the PreTraffic Lambda function runs to determine if things should continue. If that function completes successfully (i.e., returns a 200 status code), the deployment continues. If the function does not complete successfully, the deployment rolls back.
PostTraffic: If the traffic successfully completes the traffic shifting progression to 100 percent of traffic to the new alias, the PostTraffic Lambda function runs. If it returns a 200 status code, the deployment is complete. If the PostTraffic function is not successful, the deployment is rolled back.
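Wired together in an AWS SAM template, those pieces might look like the following sketch. The resource names, runtime, and alarm are illustrative; `AutoPublishAlias` creates the alias, `DeploymentPreference` sets the traffic-shifting type and rollback alarms, and `Hooks` points at the pre- and post-traffic test functions.

```yaml
# Illustrative SAM function resource configured for safe deployments.
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: python3.9
    AutoPublishAlias: live
    DeploymentPreference:
      Type: Canary10Percent5Minutes
      Alarms:
        - !Ref ErrorsAlarm            # rollback trigger (assumed alarm)
      Hooks:
        PreTraffic: !Ref PreTrafficCheckFunction
        PostTraffic: !Ref PostTrafficCheckFunction
```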
Use a separate account per environment
It’s a best practice with serverless to use separate accounts for each stage or environment in your deployment. Each developer has an account, and the staging and deployment environments are each in their own accounts.
This approach limits the blast radius of issues that occur (for example, unexpectedly high concurrency) and allows you to secure each account with IAM credentials more effectively with less complexity in your IAM policies within a given account. It also makes it less complex to differentiate which resources are associated with each environment.
Because of the way costs are calculated with serverless, spinning up additional environments doesn’t add much to your cost. Other than where you are provisioning concurrency or database capacity, the cost of running tests in three environments is not different than running them in one environment because it’s mostly about the total number of transactions that occur, not about having three sets of infrastructure.
Use one AWS SAM template with parameters across environments
As noted earlier, AWS SAM supports CloudFormation syntax so that your AWS SAM template can be the same for each deployment environment with dynamic data for the environment provided when the stack is created or updated. This helps you ensure that you have parity between all testing environments and aren’t surprised by configurations or resources that are different or missing from one environment to the next.
AWS SAM lets you build out multiple environments using the same template, even across accounts:
Use parameters and mappings when possible to build dynamic templates based on user inputs and pseudo parameters, such as AWS::Region
Use the Globals section to simplify templates
Use Export and Fn::ImportValue to share resource information across stacks
Manage secrets across environments with Parameter Store:
AWS Systems Manager Parameter Store supports encrypted values and is account specific, accessible through AWS SAM templates at deployment, and accessible from code at runtime.
Testing throughout the pipeline
Another best practice is to test throughout the pipeline. Assuming these steps in a pipeline – build, deploy to test environment, deploy to staging environment, and deploy to production – decide which types of tests belong at each pipeline step, and run them before allowing the next step in the deployment to continue.
You are reviewing the team’s plan for managing the application’s deployment. Which suggestions would you agree with? (Select TWO.)
A. Use IAM to control development and production access within one AWS account to separate development code from production code
B. Use AWS SAM CLI for local development testing
C. Use CloudFormation to write all of the infrastructure as code for deploying the application
D. Use Amplify to deploy the user interface and AWS SAM to deploy the serverless backend
Answer: B and D. Notes: Use the AWS SAM CLI for local development testing, and use Amplify to deploy the user interface and AWS SAM to deploy the serverless backend.
Scaling considerations for serverless applications
The following statements are true:
Using HTTP APIs and first-class service integrations can reduce end-to-end latency because it lets you connect the API call directly to a service API rather than requiring a Lambda function between API Gateway and the other AWS service.
Provisioned concurrency may be less expensive than on-demand in some cases. If your provisioned concurrency is used more than 60 percent during a given time period, then it will probably be less expensive to use provisioned concurrency or a combination of on-demand and provisioned concurrency.
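The 60 percent break-even can be sanity-checked with a toy cost model. The rates below are hypothetical placeholders, not real AWS prices (actual Lambda pricing varies by region and architecture); the sketch only shows that when the provisioned rate per GB-second is about 60 percent of the on-demand rate, utilization above 60 percent tips the balance toward provisioned concurrency:

```python
# Hypothetical per-GB-second rates -- NOT real AWS prices.
ON_DEMAND_RATE = 1.00    # cost units per GB-second actually consumed
PROVISIONED_RATE = 0.60  # cost units per GB-second provisioned, paid whether used or not

def on_demand_cost(utilization: float, gb_seconds_provisioned: float) -> float:
    """On-demand: pay only for the GB-seconds actually consumed."""
    return utilization * gb_seconds_provisioned * ON_DEMAND_RATE

def provisioned_cost(gb_seconds_provisioned: float) -> float:
    """Provisioned: pay for the full capacity regardless of utilization."""
    return gb_seconds_provisioned * PROVISIONED_RATE

for util in (0.5, 0.6, 0.7):
    print(f"utilization {util:.0%}: on-demand {on_demand_cost(util, 1000):.0f}, "
          f"provisioned {provisioned_cost(1000):.0f}")
```

Below 60 percent utilization the on-demand bill is lower; above it, provisioned wins, which matches the rule of thumb above.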
With Amazon SQS as an event source, Lambda will manage concurrency. Lambda will increase concurrency when the queue depth is increasing, and decrease concurrency when errors are being returned.
You can set a batch window to increase the time before Lambda polls a stream or queue. This lets you reduce costs by avoiding regularly invoking the function with a small number of records if you have a relatively low volume of incoming records.
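As a back-of-envelope illustration (the model below is a simplification I am assuming, not Lambda's exact polling behavior): each invocation carries up to the batch size, and Lambda waits at most the batch window to fill a batch, so at low volume a longer window means fewer, fuller invocations:

```python
def invocations_per_hour(records_per_sec: float, batch_size: int, window_sec: float) -> float:
    """Rough invocation-count estimate for a stream/queue event source."""
    # Records that accumulate during the window, capped at the batch size...
    records_per_batch = min(batch_size, records_per_sec * window_sec)
    # ...but an invocation always carries at least one record.
    records_per_batch = max(records_per_batch, 1)
    return records_per_sec * 3600 / records_per_batch

print(invocations_per_hour(2, 100, 0))   # no batch window: 7200 invocations/hour
print(invocations_per_hour(2, 100, 10))  # 10 s window: 360 invocations/hour
```

At 2 records per second, a 10-second batch window cuts invocations by a factor of 20, which is exactly the cost saving the passage describes for low-volume sources.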
The following statements are false; here is why:
Setting reserved concurrency on a version: You cannot set reserved concurrency per function version. You set reserved concurrency on the function and can set provisioned concurrency on an alias. It’s important to keep the total provisioned concurrency for active aliases to less than the reserved concurrency for the function.
Setting the number of shards on a DynamoDB table: You do not directly control the number of shards the table uses. You can directly add shards to a Kinesis Data Stream. With a DynamoDB table, the way you provision read/write capacity and your scaling decisions drive the number of shards. DynamoDB will automatically adjust the number of shards needed based on the way you’ve configured the table and the volume of data.
Concurrency in synchronous invocations: Lambda will use concurrency equal to the request rate multiplied by function duration. As one function invocation ends, Lambda can reuse its environment rather than spinning up a new one, so function duration plays an important factor in concurrency for synchronous and asynchronous invocations.
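This is Little's law applied to Lambda: steady-state concurrency equals arrival rate times average duration. A minimal sketch:

```python
def required_concurrency(requests_per_sec: float, avg_duration_sec: float) -> float:
    """Steady-state concurrent executions = request rate x average function duration."""
    return requests_per_sec * avg_duration_sec

# 100 requests/second at 500 ms each keeps ~50 environments busy;
# halving the duration halves the concurrency at the same request rate.
print(required_concurrency(100, 0.5))   # 50.0
print(required_concurrency(100, 0.25))  # 25.0
```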
The impact of higher function memory: A higher memory configuration does have a higher price per millisecond, but because duration is also a factor of cost, your function may finish faster at higher memory configurations and that might mean an overall lower cost.
A shorter duration may reduce the concurrency Lambda needs, but depending on the nature of the function, higher memory may not have a measurable impact on duration. You can use tools like Lambda Power Tuning (https://github.com/alexcasalboni/aws-lambda-power-tuning) to find the best balance for your functions.
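Since Lambda bills roughly in GB-seconds (memory times duration), a higher-memory configuration that finishes faster can cost less per invocation. The rate below is a placeholder cost unit, not a quoted AWS price:

```python
RATE_PER_GB_SECOND = 1.0  # hypothetical cost unit -- real pricing varies by region/architecture

def invocation_cost(memory_mb: int, duration_sec: float) -> float:
    """Cost of one invocation under a simple GB-second billing model."""
    return (memory_mb / 1024) * duration_sec * RATE_PER_GB_SECOND

# Doubling memory from 512 MB to 1024 MB while (say) a CPU-bound function's
# duration drops from 1.0 s to 0.4 s lowers the per-invocation cost:
print(invocation_cost(512, 1.0))   # 0.5 GB-seconds
print(invocation_cost(1024, 0.4))  # 0.4 GB-seconds
```

Whether the duration actually drops depends on the workload, which is exactly why the passage recommends measuring with Lambda Power Tuning rather than guessing.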
There is no stopping Amazon Web Services (AWS) from innovating, improving, and ensuring the customer gets the best experience possible as a result. Providing a seamless user experience is a constant commitment for AWS, and their ongoing innovation allows the customer’s applications to be more innovative – creating a better customer experience.
AWS makes managing networking in the cloud one of the easiest parts of the cloud service experience. When managing your infrastructure on premises, you would have had to devote a significant amount of time to understanding how your networking stack works. It is important to note that AWS does not have a magic bullet that will make all issues go away, but they are constantly providing new exciting features that will enhance your ability to scale in the cloud, and the key to this is elasticity.
Elasticity is defined as “The ability to acquire resources as you need them and release resources when you no longer need them” – this is one of the biggest selling points of the cloud. The three networking features we are going to talk about today are all elastic in nature: the Elastic Network Interface (ENI), the Elastic Fabric Adapter (EFA), and the Elastic Network Adapter (ENA). Let’s compare and contrast these AWS features to gain a greater understanding of how AWS can help with our managed networking requirements.
AWS ENI (Elastic Network Interface)
You may be wondering: what is an ENI in AWS? The AWS ENI (Elastic Network Interface) is a virtual network card that can be attached to any Amazon Elastic Compute Cloud (EC2) instance. The purpose of these devices is to enable network connectivity for your instances. If you have more than one of them attached to your instance, it can communicate on two different subnets, offering a whole host of advantages.
For example, using multiple ENIs per instance allows you to decouple the ENI from the EC2 instance, in turn allowing you far more flexibility to design an elastic network which can adapt to failure and change.
As stated, you can connect several ENIs to the same EC2 instance and attach your single EC2 instance to many different subnets. You could for example have one ENI connected to a public-facing subnet, and another ENI connected to another internal private subnet.
You could also, for example, attach an ENI to a running EC2 instance, or have the ENI persist after the EC2 instance it was attached to is terminated.
Finally, ENIs can also be used as a crude form of high availability: attach an ENI to an EC2 instance; if that instance dies, launch another and attach the ENI to the new instance. Traffic flow is only affected for a short period of time.
AWS EFA (Elastic Fabric Adapter)
On Amazon EC2 instances, Elastic Fabric Adapters (EFAs) are network devices that accelerate high-performance computing (HPC) and machine learning workloads.
EFAs are Elastic Network Adapters (ENAs) with additional OS-bypass capabilities.
AWS Elastic Fabric Adapter (EFA) is a specialized network interface for Amazon EC2 instances that allows customers to run workloads requiring high levels of inter-instance communication, such as HPC applications, on AWS at scale.
Due to EFA’s support for libfabric APIs, applications using a supported MPI library can be easily migrated to AWS without having to make any changes to their existing code.
For this reason, AWS EFA is often used in conjunction with cluster placement groups – which place instances physically closer together within an AZ to decrease latency even further. Some use cases for EFA are weather modelling, semiconductor design, streaming a live sporting event, oil and gas simulations, genomics, finance, and engineering, amongst others.
AWS ENA (Elastic Network Adapter)
Finally, let’s discuss the AWS ENA (Elastic Network Adapter).
The Elastic Network Adapter (ENA) is designed to provide Enhanced Networking to your EC2 instances.
With ENA, you can expect high throughput and packet per second (PPS) performance, as well as consistently low latencies on Amazon EC2 instances. Using ENA, you can utilize up to 20 Gbps of network bandwidth on certain EC2 instance types – massively improving your networking throughput compared to other EC2 instances, or on premises machines. ENA-based Enhanced Networking is currently supported on X1 instances.
Key Differences
There are a number of differences between these three networking options.
Elastic Network Interface (ENI) is a logical networking component that represents a virtual networking card
Elastic Network Adapter (ENA) is an enhanced-networking device that provides high-end performance on certain specified and supported EC2 instance types (it is distinct from the older Intel 82599 Virtual Function (VF) interface, the other enhanced-networking option)
Elastic Fabric Adapter (EFA) is a network device which you can attach to your EC2 instance to accelerate High Performance Computing (HPC)
Elastic Network Adapter (ENA) is only available on the X1 instance type, Elastic Network Interfaces (ENIs) are ubiquitous across all EC2 instances, and Elastic Fabric Adapters (EFAs) are available only for certain instance types.
An ENA ENI provides the traditional IP networking features required to support VPC networking.
EFA ENIs provide all the functionality of ENA ENIs plus hardware support to allow applications to communicate directly with the EFA ENI without involving the instance kernel (OS-bypass communication).
Since the EFA ENI has advanced capabilities, it can only be attached to stopped instances or at launch.
Limitations
EFA has the following limitations:
p4d.24xlarge and dl1.24xlarge instances support up to four EFAs. All other supported instance types support only one EFA per instance.
It is not possible to send EFA (OS-bypass) traffic from one subnet to another; it is possible to send IP traffic from one subnet to another using the EFA.
EFA OS-bypass traffic cannot be routed. EFA IP traffic can be routed normally.
An EFA must belong to a security group that allows inbound and outbound traffic to and from the group.
ENA has the following limitations:
ENA is currently used only in the X1 instance type
ENI has the following limitations:
You lack the visibility of a physical networking card, due to virtualization
Only a few instance types support up to four network cards; the majority support only one
Pricing
You are not charged per ENI with EC2; you are only limited by how many ENIs your instance type supports. There is, however, a charge for additional public IPs on the same instance.
EFA is available as an optional EC2 networking feature that you can enable on any supported EC2 instance at no additional cost.
ENA pricing is absorbed into the cost of running an X1 instance
Thanks to all the people who posted their testing experiences here. It gave me a lot of perspective on the exam and how to prepare for the new version.
Stephane Maarek’s Udemy course and his practice tests on Udemy were the key to my success in this test. I did not use any other resource for my preparation.
I am a consultant and have been working on AWS for the last 5+ years, though without much hands-on work. My initial cert expired last year, so I wanted to renew.
Overall, the C03 version was very similar to the C02/C01 versions. I did not get a single question about AI/ML services; the questions were mostly about more fundamental services like VPC, SQS, Lambda, CloudWatch, EventBridge, and storage (S3, Glacier, lifecycle policies). Source: r/awscertification
Passed SAP-C01 AWS Certified Solutions Architect Professional
Resources used were:
Adrian (for the labs),
Jon (For the Test Bank),
and Stephane for a quick overview played on double speed.
Total time spent studying was about a month. I don’t do much hands on as a security compliance guy, but do work with AWS based applications everyday. It helps to know things to a very low level.
Passed SAA C03 in 38 Days
So I am sharing how I passed my certification SAA C03 in less than 40 Days without any prior experience in AWS, (my org asked me to do it)
Neal Davis Practice Tests: https://www.udemy.com/course-dashboard-redirect/?course_id=1878624 I highly recommend these, since Neal’s tests give you fewer hints in the questions, and after doing them you will have a solid understanding of how the actual exam questions will look.
After doing tests just make sure you know why the particular answer is wrong.
I scheduled my exam for 26th September and took the test at a Pearson center. The exam was extremely lengthy; I took all my time just doing the questions and did not have time to look back at my flagged ones (actually, while I was clicking the End Review button, time was up and the test ended itself). My results came 50 hours after completing the test, and those 50 hours were the most difficult of the whole journey.
Today I received my result: I scored 914 and got the badge and certification.
So how do you know you are ready? Once you start getting 80+ consistently on 2-3 tests, just book your exam.
Passed SAP-C01!
Just found out I passed the Solutions Architect Pro exam. It was a tough one, took me almost the full 3 hours to answer and review every question. At the end of the exam, I felt that it could have gone either way. Had to wait about 20 painful hours to get my final result (857/1000). I’m honestly amazed, I felt so unprepared. What made it worse is that I suddenly felt ill on the night of the exam. Only got about three hours sleep, realized it was too late to reschedule and had to drag myself to the test center. Was very tempted to bail and pay the $300 to resit, very glad I didn’t!
No formal cloud background, but have worked in IT/software for about 10 years as a software engineer. Some of my roles included network setup/switch configuration/Linux and Windows server admin, which definitely comes in useful (but isn’t required). I got my first cert in January (CCP), and have since got the other three associate certs (SAA, DVA, SOA).
People are not joking when they say this is an endurance test. You need to try and stay focused for the full three hours. It took me about two hours to answer every question, and a further hour to review my answers.
In terms of prep, I used a combination of Stephane Maarek (Udemy) and Adrian Cantrill (learn.cantrill.io). I found both courses worked well together (Adrian Cantrill for the theory/practical, and Stephane Maarek for the review/revision). I used Tutorials Dojo for practice exams and review (tutorialsdojo.com). Their practice questions are very close to the real thing, and the question summaries/explanations are extremely well written. My advice is to sit the practice exam, then carefully review each question (regardless of whether you got it right or wrong) and read/understand the explanations of why each answer is right or wrong. It takes time, but it will really prepare you for the real thing.
I’m particularly impressed with the Advanced Demos on the Adrian Cantrill course, some of those really helped out with having the knowledge to answer the exam questions. I particularly liked the Organizations, Active Directory, Hybrid DNS, Hybrid SSM, VPN and WordPress demos.
In terms of the exam, lots of questions on IAM (cross-account roles), Organizations (billing/SCP/RAM), Database performance issues, migrations, Transit Gateway, DX/VPN, containerisation (ECS/EKS), disaster recovery. Some of the scenario questions are quite tricky, all four answers appear valid but there will be subtle differences between them. So you have to work out what is different between each answer.
A tip I will leave you: a lot of the migration questions will ask you to pick between using Snow devices or uploading via the internet/DX. A quick way to work out whether uploading is feasible is to multiply the line speed in Mbps by 10,000 – this gives you the approximate number of megabytes that can be transferred in a day. E.g. a line speed of 50 Mbps will let you transfer about 500 GB in a day (assuming nothing else is using that link). So if you had to transfer 100 TB, then you would need to use Snow devices (unless you were happy waiting 200 days).
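The arithmetic behind that tip: Mbps ÷ 8 gives MB per second, and × 86,400 seconds gives MB per day, i.e. Mbps × 10,800, so rounding down to × 10,000 yields a slightly conservative daily figure. A quick sketch of the heuristic:

```python
def mb_per_day(line_speed_mbps: float) -> float:
    """Exam heuristic: megabytes transferable per day ~= Mbps x 10,000."""
    return line_speed_mbps * 10_000

def days_to_transfer(total_tb: float, line_speed_mbps: float) -> float:
    """Days to move total_tb terabytes over the link (using 1 TB = 1,000,000 MB)."""
    return total_tb * 1_000_000 / mb_per_day(line_speed_mbps)

print(mb_per_day(50))             # 500,000 MB, i.e. ~500 GB per day
print(days_to_transfer(100, 50))  # 200 days -> ship Snow devices instead
```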
Passed SAA-C03 – Feedback
Just passed the SAA-C03 exam (864) and wanted to provide some feedback since that was helpful for me when I was browsing here before the exam.
I come from an IT background and have strong knowledge of the VPC portion, so that section was a breeze for me in the preparation process (I had never used AWS before this, so everything else was new, but the concepts were somewhat familiar given my background). I started my preparation about a month ago and used the Maarek class on Udemy. Once I finished the class and reviewed my notes, I moved to Maarek’s 6 practice exams (on Udemy). I wasn’t doing extremely well on the PEs (I passed 4 of the 6 exams with grades in the 70s); I reviewed the exam questions after each exam and moved on to the next. I also purchased Tutorials Dojo’s 6-exam set but only ended up taking one of the 6 (which I passed).
Overall, the practice exams ended up being a lot harder than the real exam, which had mostly the regular/base topics: a LOT of S3 and storage in general, a decent number of migration questions, only a couple of questions on VPCs, and no ML/AI stuff.
Sharing the study guide that I followed when I prepared for the AWS Certified Solutions Architect Associate SAA-C03 exam. I passed this test and thought of sharing a real exam experience in taking this challenging test.
First off, my background: I have 8 years of development experience and have been doing AWS for several projects, both personally and at work. I studied for a total of 2 months, focused on the official Exam Guide, and carefully studied the Task Statements and related AWS services.
SAA-C03 Exam Prep
For my exam prep, I bought the Adrian Cantrill video course and the Tutorials Dojo (TD) video course and practice exams. Adrian’s course is just right and highly educational, but as others have said, the content is long and covers more than just the exam. I did all of the hands-on labs too and played around with some machine learning services in my AWS account.
The TD video course is short and a good overall summary of the topics you’ve just learned. One TD lesson covers multiple topics, so the content is highly concise. After completing Adrian’s video course, I used TD’s video course as a refresher, did a couple of their hands-on labs, then headed on to their practice exams.
For the TD practice exams, I took the tests in chronological order and didn’t jump back and forth until I had completed them all. I first tried all 7 timed-mode tests, reviewing every wrong answer after each attempt, then the 6 review-mode tests and the section/topic-based tests. I took the final-test mode roughly 3 times, and this is by far one of the most helpful features of the website IMO. The final-test mode generates a unique set from the whole TD question bank, so every attempt was challenging for me. I also noticed that the course progress doesn’t move if you fail a specific test, so I retook the tests I failed.
The Actual SAA-C03 Exam
The actual AWS exam is almost the same as the ones in the TD tests, where:
All of the questions are scenario-based
There are two (or more) valid solutions in each question, e.g.:
Need SSL: options are ACM and a self-signed certificate
Need to store DB credentials: options are SSM Parameter Store and Secrets Manager
The scenarios are long-winded and ask for:
MOST Operationally efficient solution
MOST cost-effective
LEAST amount of overhead
Overall, I enjoyed the exam and felt fully prepared while taking the test, thanks to Adrian and TD, but that doesn’t mean the whole darn thing is easy. You really need to put in some elbow grease and keep your headlights on when preparing for this exam. Good luck to all, and I hope my study guide helps out anyone who is struggling.
Another Passed SAA-C03?
Just another thread about passing the exam? I passed SAA-C03 yesterday and would like to share how I earned the certification.
Background:
– graduate with networking background
– working experience in on-premises infrastructure automation, mainly using Ansible, Python, Zabbix, etc.
– cloud experience, short period like 3-6 months with practice
– provisioned cloud application using terraform in azure and aws
Cantrill’s course has depth and a lot of practical knowledge (like email aliases, etc.) – check it out to learn more.
The TutorialsDojo practice exams helped me filter the answers and guided me to the correct ones. If I was wrong on a specific topic, I rewatched the Cantrill video. There are some topics not covered by Cantrill, but the guideline/review in the practice exam provides plenty of detail. I did all the other modes before the timed-based one, averaged 850 on the timed-based exams, and scored 63/65 on the final practice exam. However, the real examination is harder than the practice exams, in my opinion.
Udemy course and practice exams: I went through some of them, but I think the practice exams are quite hard compared to TutorialsDojo.
Labs – just get your hands dirty and the knowledge will stick in your brain. My advice is not to just copy-and-paste labs, but to really read the description of each parameter in the AWS portal.
Advice:
you need to know some general exam topics, like:
– s3 private access
– ec2 availability
– Kinesis products, including Firehose, Data Streams, etc.
– iam
My next target will be AWS SAP and CKA. I am still searching for suitable material for AWS SAP, but I plan to mainly use the ACloudGuru sandbox and a homelab to learn the subject, practicing with Cantrill’s labs on GitHub.
Good luck anyone!
Passed SAA
I wanted to give my personal experience. I have a background in IT, but I had never worked in AWS until 5 weeks ago. I got my Cloud Practitioner in a week and the SAA after another 4 weeks of studying (2-4 hours a day). I used Cantrill’s course and Tutorials Dojo practice exams. I highly, highly recommend this combo. I don’t think I would have passed without the practice exams, as they are quite difficult – in my opinion, much more difficult than the actual exam. They really hit the mark on what kind of content you will see. I got a 777, and that’s with getting 70-80% on the practice exams. I probably could have done better, but I had a really rough night of sleep and I came down with a cold. I was really on the struggle bus halfway through the test.
I only had a couple of questions on ML/AI, so make sure you know the differences between those services. Lots of S3 and EC2 – you really need to know these inside and out.
My company is offering stipends for each certification, so I’m going straight to Developer next.
Recently passed SAA-C03
Just passed my SAA-C03 yesterday with 961 points. My first time doing AWS certification. I used Cantrill’s course. Went through the course materials twice, and took around 6 months to study, but that’s mostly due to my busy schedule. I found his materials very detailed and probably go beyond what you’d need for the actual exam.
I also used Stephane’s practice exams on Udemy. I’d say doing these was instrumental in my passing – they get you used to the type of questions in the actual exam and help you review missing knowledge. I would not have passed otherwise.
Just a heads-up: a few things popped up that I did not see in the course materials or practice exams:
* Lake Formation: question about pooling data from RDS and S3, as well as controlling access.
* S3 Requester Pays: question about minimizing S3 data cost when sharing with a partner.
* Pinpoint journeys: a question about a customer replying to an SMS sent out and then storing their feedback.
Not sure if they are graded or Amazon testing out new parts.
I’ve spent the last 2 months of my life focusing on this exam and now it’s over! I wanted to write down some thoughts that I hope are informative to others. I’m also happy to answer any other questions.
APPROACH
I used Stephane’s courses to pass CCP, SAA, DVA… however I heard such great things about Adrian’s course that I purchased it and started there.
The detail and clarity that Adrian employs is amazing, and I was blown away by the informative diagrams that he includes with his lessons. His UDP joke made me lol. The course took a month to get through with many daily hours, and I made over 100 pages of study notes in a Google document. After finishing his course, I went through Stephane’s for redundancy.
As many have mentioned here, Stephane does a great job of summarizing concepts, and for me, I really value the slides that he provides with his courses. It helps to memorize and solidify concepts for the actual exam.
After I went through the courses, I bought TutorialsDojo practice exams and started practicing. As everyone says, these are almost a must-use resource before an AWS exam. I recognized three questions on the real exam, and the thought exercise of taking the mocks came in handy during the real exam.
Total preparation: 10 weeks
DIFFICULTY
I heard on this Subreddit that if this exam is a 10, then the associate-level exams are a 3. I was a bit skeptical, but I found the exam a bit harder than the practice exam questions. I just found a few obscure things referred to during the real exam, and some concepts combined in single questions. The Pro-level exams are *at least* 2 times as hard, in my opinion. You need to have Stephane’s slides (or the exam “power-ups” that Adrian points out)/the bolded parts down cold and really understand the fundamentals.
WHILE STUDYING
As my studying progressed, I found myself on this sub almost every day reading others’ experiences and questions. Very few people in my circle truly understand the dedication and hard work that is required to pass any AWS exam, so observing and occasionally interacting here with like-minded people was great. We’re all in this together!
POST-EXAM
I was waiting anxiously for my exam result. When I took the associate exams, I got a binary PASS/FAIL immediately… this time, I got my Credly email 17 hours after finishing the exam, and when I heard from AWS, my score was higher than expected, which feels great.
WHAT’S NEXT
I’m a developer and have to admit I’ve caught the AWS bug. I want to pursue more… I heard Adrian mention in another thread that some of his students take the Security specialty exam right after SAP, and I think I will do the same after some practice exams. Or DevOps Pro… Then I’m taking a break 🙂
I had a lot on S3, CloudFront, and DBs, plus a bunch of Lambda and container questions. Lots of “which is the most cost-effective solution” questions.
I think I did OK, but my online proctoring experience kind of messed with my mind a little bit (specifics in a separate thread); at one point I even got yelled at for thinking out loud to myself, which kind of sucked, as that’s one way I talk myself through situations :-/
For two weeks I used MANY practice exams on YouTube, Tutorials Dojo, and A Cloud Guru – and a shout-out to Cloud Guru Amit (YouTube), who has a keyword method that worked well for me – and I just read up on various whitepapers on stuff I wasn’t clear on or got wrong.
ONTO AWS-Security Specialty and CompTIA Sec+ for me.
Shout-out to Adrian – his course was great at preparing me for all the knowledge needed for the exam (with the exception of a question on Polly and Textract, which none of the resources – Adrian, Stephane for test review, and the Dojo practice exams – covered).
I got a 78 and went in person to a testing site close by to avoid potential hiccups with online testing. I studied over the course of 4 months but did the bulk of the course in 2 months.
I want to reiterate a common theme in these posts that should not be overlooked in case you are in deep in your journey and plan on taking the tests in the near 4 weeks out or 75% through the videos. BUY THE TUTORIALDOJO PRACTICE EXAMS AND TAKE THEM. EVEN BEFORE YOU ARE DONE WITH ALL THE COURSE.
I thought it would be smarter to finish the course and then do the tests to get a higher score, BUT you will inevitably strengthen your skills and knowledge through: 1) doing the tests to get used to the format, and 2) REVIEW REVIEW REVIEW – the questions fall into 4 categories, and afterwards you will see all the questions and why each answer is the right answer or almost the right answer. Knowing your weaknesses is crucial for intentional, intelligent, and efficient reviewing.
I took screenshots of all the questions I got wrong or wasn’t completely sure of why I got them right.
Passed SAA-C03 today!
Got a lot of questions based on CloudFront, S3, Secrets Manager, KMS, databases, and containers (ECS), plus an ML question based on Amazon Transcribe.
Passed SAA-C03 = Lots of networking + security questions!
Just passed the AWS Certified Solutions Architect Associate exam (SAA-C03), and thank God I allocated some time to improving my core networking knowledge. In my view, the exam is filled with networking and security questions, so make sure that you really focus on these two domains.
If you don’t know that the port number for MySQL is 3306 and the one for Microsoft SQL Server is 1433, you might get overwhelmed by the content of the SAA-C03 exam. Knowing how big or how small a particular VPC (or network) would be based on a given CIDR notation helps too. Integrating SSL/HTTPS into services like ALB, CloudFront, etc. also comes up in the exam.
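Python's standard ipaddress module makes the CIDR arithmetic easy to rehearse. The helper below also subtracts the five addresses AWS reserves in every subnet (the network address, the next three, and the broadcast address):

```python
import ipaddress

def subnet_sizes(cidr: str) -> tuple[int, int]:
    """Return (total addresses, usable addresses in an AWS subnet)."""
    net = ipaddress.ip_network(cidr)
    return net.num_addresses, net.num_addresses - 5  # AWS reserves 5 IPs per subnet

print(subnet_sizes("10.0.0.0/24"))  # (256, 251)
print(subnet_sizes("10.0.0.0/16"))  # (65536, 65531)
```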
Off the top of my head, these are the networking-related topics I encountered. Most of the things in this list are mentioned in the official exam guide:
Ports (e.g. 3306 = MySQL, 1433 = Microsoft SQL)
Regional API Gateway
DNS Resolution between On-Premises networks and AWS
As far as I know, AWS shuffles the content of their exam so you probably could get these topics too. Some feature questions could range from basic to advanced, so make sure you know each feature of all the AWS services mentioned in the exam guide. Here’s what i could remember:
Amazon MQ with active/standby
S3 Features (Requester Pays, Object Lock etc)
Data Lakes
Amazon Rekognition
Amazon Comprehend
For my exam prep, I started my study with Jon Bonso/TD’s SAA video course, then moved to Adrian Cantrill’s course. Both are very solid resources, and each instructor has a different style of teaching. Jon’s course is more like minimalist, modern YouTube-style teaching. He starts with an overview before going into the nitty-gritty technical details, with a fancy montage of videos to drive home the fundamental AWS concepts. I recommend his stuff as a crash course to learn the majority of SAA-related content. There is also a bunch of PlayCloud hands-on labs included in his course, which I find very helpful too.
Adrian’s course is much longer and includes the necessary networking/tech fundamentals. As other people in this sub say, the quality of his stuff is superb and very well delivered. If you are not in a rush and really want to learn the ropes of being a solutions architect, his course is definitely a must-have. He also has good videos on YouTube and mini-projects on GitHub that you can check out.
About halfway through Adrian’s course, I started doing mock exams from Tutorials Dojo (TD) and AWS Skill Builder just to reinforce my knowledge. I take a practice test first, then review my correct and incorrect answers. If I notice that I make a lot of mistakes on a particular service, I go back to Adrian’s course to make those concepts stick better.
I also recommend trying out the demo/sample/preview lessons before you buy any SAA course. From there, you can decide which teaching style would work best for you.
Thank you to all the helpful guys and gals in this community who shared tips!
Passed 3 AWS Exams – long post
About me:
My overall objective was to pivot more towards a cloud security role from a traditional cybersecurity role. I am a security professional with 10+ years of experience and certifications like CCIE Security, CISSP, OSCP, and others. Mostly I have worked in consulting environments doing deployment and pre-sales work.
My cloud Journey:
I started studying for AWS certifications in January 2022 and did SA Associate in March, SA Professional in August, and Security Specialty in September. I used a mix of Adrian’s, Stephane’s, and Neal’s videos, and TutorialsDojo for practice tests.
Preparation Material:
For videos, Adrian’s stood out for the level of effort this guy has put in. Had this been 6-8 years back, this kind of one-on-one bootcamp would sell for at minimum 5,000 USD. I used the videos at 1.25x speed, but because of its length it was difficult to come back to Adrian’s content when I wanted to recall or revise something. That’s why I had Stephane’s and Neal’s courses in my pocket; they usually go on sale for 12-13 USD, so there is no harm in having them. Neal did a better job than Stephane for SA Pro, as his slides were much more visually appealing, but I felt Stephane covered more concepts. Topics like VPC and Transit Gateway are better understood with better visuals. I never made any notes; I purchased Tutorial Dojo’s notes but I don’t think they were of much use. You can always find notes made by other people on GitHub, and I felt those were more helpful. You can also download the video slides from Udemy, and I did cut a few slides from there and paste them into my Google Doc for revision. For the practice tests, I felt Dojo’s wording was complex compared to the real exam, but it does give a very good idea of the difficulty of the exam. The real exam had crisper content.
About Exam:
The exams themselves were interesting because they helped me learn the new datacenter architecture. Concepts and technologies like Lambda, Step Functions, AWS Organizations, and SCPs were very interesting, and I feel way more confident now than I was 1 year back. Because I target security roles, I want to point out that not everything needed for these roles is covered by the AWS certifications. I had gone through the CSA 4.0 guide back in December 2021, before starting my AWS journey, and I think that helped me visualize many scenarios. Concepts like shadow IT, legal hold, vendor lock-in, SOC 2/3 reports, and portability and interoperability problems in cloud environments were very new to me. I wish AWS would include this material in the security exam. These concepts lean towards compliance and governance, but they are important to know if you are going to interview for cloud security architect roles. I also feel DevSecOps concepts should be covered more in the Security Specialty exam.
A bit of criticism here. The exam is very product-specific, and many people coming from deployment or research backgrounds will even call it a marketing exam; in fact, one L7 Principal Security SA from AWS told me he considers it one. On this forum there are often discussions about how difficult the AWS SA Pro is, but I disagree. These exams were nowhere near the difficulty level of the CCIE, CISSP, or OSCP, which I did in the past. The exam’s difficulty comes from the length of its questions, the reading fatigue they cause, and the lack of architecture diagrams; none of that is relevant to the real world if you work as a solution architect or security architect. Almost every SA Pro question reads like this: ‘A customer plans to migrate to the AWS cloud. The application will reside on EC2 with Auto Scaling enabled in a private subnet, behind an ALB in a public subnet. A replica should be created in an EU region, with Route 53 performing geolocation routing.’ In the real world, these scenarios are always communicated using Visio diagrams, i.e. a “current state architecture diagram” and a “future state architecture diagram”. On almost every question I had to read all of this and draw it on the provided sheet, which created extra work and reading fatigue. I bet non-English speakers who are experienced architects will find it irritating, even with the extra 30 minutes they are given. Turning these long paragraphs into diagrams would make the exam easier and better aligned with the real world, but I am not sure AWS would want that, since the difficulty would drop. Also, because SMEs are often paid per question they write, they have little incentive to put extra effort into creating diagrams. That is the problem when you outsource question creation to third-party SMEs: payment is based on the number of questions written, and I don’t think companies even pay for this.
Often it is voluntary work, in exchange for which the company grants some sort of free recertification or an exam voucher.
There seems to be quite a lot of noise about the Advanced Networking exam, which is considered the most difficult. While I haven’t looked into it, I would say that if it doesn’t include diagrams in each question, then it is not aligned with the real world. Networking challenges should never be communicated without diagrams. Again, the difficulty is high because of the reading fatigue it causes, which doesn’t happen in the life of a security architect.
Tips to be a successful consultant:
If you want to become a cloud security architect, I would still highly recommend the AWS SA Pro; the Security Specialty not so much. There was more KMS there, and a little extra here and there, but the Security Specialty was not the eye-opener for me that SA Pro was. Even the AWS job description for the L6 Security Architect (ProServe) role says the candidate must be able to complete the AWS SA Pro within three months of hiring, which suggests it is more relevant than the Security Specialty even for security roles. But these are all products, and you need knowledge beyond them for security roles. The driving force of security has mostly been compliance: you should be really good at things like PCI DSS, ISO 27001, and the Cloud Controls Matrix, because at the end of the day you need to map these controls to the products, so understanding the products is not even 50% of the job. Add Terraform or Pulumi if you want to communicate your ideas and PoCs as IaC, and some Python/boto3 SDK knowledge to help you build use cases (needed for ProServe roles, less so for SA roles). If you want to do threat modeling of cloud-native applications, you again need AWS knowledge plus secure SDLC processes, SAST/DAST, and then MITRE ATT&CK, the Cloud Controls Matrix, etc.
Similarly, if you want to be in networking roles, don’t assume AWS Advanced Networking will make you a good consultant. Networking is a very complex topic, and I would recommend looking beyond the cert by following the courses by Ivan Pepelnjak, who is himself a networking veteran: https://www.ipspace.net/Courses . This kind of material will make you a much more confident consultant.
I am starting my Python journey now, which will help me automate use cases. Feel free to ping me if you have any questions.
So I finally got my score: 886, which is definitely more than I expected. I have been working on AWS for about a year, but my company is slowly moving there, so I don’t have a ton of hands-on experience yet.
I got a lot of helpful information from so many people on this subreddit. Now it’s my turn to share my experience.
Study plan
Started with Cantrill’s SAA-C02 course and later switched to his SAA-C03 course. He does a great job of explaining everything: he covers every topic in great detail, and the demos are well structured. Worth every penny. It does take a long time to finish his course, so plan accordingly.
Tutorials Dojo study guide & cheat sheets – I liked this 300-odd-page PDF, where all the crucial topics are summarized. Bonso does a great job of comparing similar services and highlighting things that may confuse you during the exam. I took notes within the PDF and used the highlighter tool a lot. It helped me revise a couple of days before the exam.
Tutorials Dojo practice tests – These tests are the BEST. The questions are similar to what they ask in the exam, and the explanation under every question is very helpful. Read through these for every question you got wrong, and even for questions you got right but weren’t 100% sure about.
Official exam guide – I used this at the end to check my understanding of the knowledge and skill items. The consolidated list of services is really helpful. I took notes against each service and especially focused on services that look similar.
Labs – While Cantrill’s labs are great, if you are following along with him you may be going too fast and missing a few things. If you are new to a particular service, you should absolutely go back and go through every screen at your own pace. I did spend time doing labs, but not nearly as much as I had hoped.
Exam experience
The first few questions were easy. A lot of short questions, which definitely helped with my nerves.
The questions started getting longer, and the answers became confusing too. I flagged about 20-odd questions for review but could only review half of them before the timer ran out.
Remember that 15 questions are not scored. There is no point spending a lot of time on a question that may not even count towards your final score. Use the flag-for-review feature and come back to a question later if time permits.
Watch out for exactly what they are asking. As an architect, you might want to solve the problem differently from what the question is asking you to do.
Lots of the comments here about networking / VPC questions being prevalent are true. Also so many damn Aurora questions, it was like a presales chat.
The questions are actually quite detailed, as some have already mentioned, so pay close attention to the minute details. Some questions you definitely have to flag for re-review.
It is by far harder than the Developer Associate exam, despite having a broader scope. The DVA-C02 exam was like doing a speedrun, but this felt like finishing off Sigrun in GoW. Ya gotta take your time.
I took the TD practice exams. They somewhat helped, but having intimate knowledge of VPC and DB concepts would help more.
Passed AWS SAP-C01
Just passed the SAP this past weekend, and it was for sure a challenge. I already had some familiarity with AWS, having earned Cloud Practitioner and passed the SAA back in 2019. I originally wanted to pass the Professional version to keep my certs active, so I decided to cram to pass it before the changes in November. Overall, I passed on my first attempt after studying heavily for about six weeks, averaging about four hours of studying a day.
I used the following for studying:
A Cloud Guru video course and labs (this was OK in my opinion but didn’t go into as much detail as I think it should have)
Stephane Maarek’s video course was really awesome and hit on everything I needed for the test. I also took his practice tests a bunch of times.
Tutorials Dojo practice tests were worth every penny, and the review mode was perfect for practicing and going over material rapidly.
Overall, I would focus first on going through the full video course with Stephane and then tackling some practice tests, revisiting his videos often on subjects I needed to review. On test day I took the exam remotely, which honestly added a little more stress, with the proctor all over me at any movement. I ended up passing with a score of 811. Not the best score, but I honestly thought I did worse, as the test was challenging and time flew by.
Took the Ultimate AWS Certified Solutions Architect Associate SAA-C03 course by Stephane Maarek on Udemy and sat through all lectures and labs. I think Maarek’s course provides a good overview of all the necessary services, including hands-on labs that prepare you for real-world tasks.
Finished all practice exams by Tutorials Dojo. Did half of the tests in review mode first and the rest in timed mode.
For last-minute summary preparation, I used the Tutorials Dojo Study Guide eBook. It was around $4 and summarizes all the services in about 280 pages. It is a good ebook to go through before your exam. I only went through the summaries of services I was struggling with.
Exam Day and Details:
I opted for an in-person exam with Pearson since I live close to their testing centers and had heard about people running into issues with online exams. If you have a testing center nearby, I highly recommend going there. Unlike with online exams, you are free to use the bathroom and use blank sheets of paper. I just felt there was more freedom in person.
The exam questions were harder than TD’s. They were more detailed, and the correct answers usually involved a combination of multiple services. Read the questions very carefully and flag them for review if you aren’t sure.
Around 5-10 questions were exactly the same as TD’s, which was very helpful.
There were a lot of questions related to S3, EBS, EFS, RDS and DynamoDB. So focus on those.
I saw around five questions about AWS services I had never heard of before. I believe those were part of the 15 ungraded questions, so if you see unfamiliar services, I wouldn’t worry much about them.
It took me around 1.5 hours to finish the exam, including the review. I finished around 4 PM and got my results the next morning around 5 AM. I only got an email from Credly; however, I was able to download my exam report from https://www.aws.training/Certification immediately.
Tips:
Try to get at least 80% on a few TD tests before you take your exam.
Take half of the TD practice exams in review mode and go through the answers in detail (even for the right ones).
Opt for an in-person exam if possible.
If you see AWS services you haven’t seen before, don’t panic. They are likely part of the 15 ungraded questions.
Read questions very carefully.
Relax. It’s just a certification exam, and you can retake it in 14 days if you fail. But if you followed all of the above, there is very little chance that you will.
I passed the SAA-C03 AWS Certified Solutions Architect Associate exam this week, all thanks to this helpful Reddit sub! Thank you to everyone who shares tips and inspiration on a regular basis. Sharing my exam experience here:
Topics I encountered in the exam:
Lots of S3 features (ex: Object Lock, S3 Access Points)
Lots of advanced cloud designs. I remember the following:
AWS cloud only with 1 VPC
AWS cloud only with 3 VPCs connected using a Transit Gateway
AWS cloud only, with 3 VPCs plus a shared VPC containing shared resources that the other 2 VPCs can use
AWS cloud + on-prem via VPN
AWS cloud + on-prem via Direct Connect
AWS cloud + on-prem with an SD-WAN connection
Lots of networking (multicast via Transit Gateway, container networking, Route 53 resolvers)
If you’re not a newbie anymore, I recommend skipping the basic lessons in Adrian Cantrill’s course and focusing on the SAA-C03-specific material.
Do labs, labs, labs! Adrian has a collection of labs on his GitHub. TD has hands-on labs too, with a real AWS Console; I found the TD labs helpful for testing my actual knowledge of certain topics.
Take the TD practice exams at least twice and aim to get 90% on all tests.
Review the suitable use cases for each AWS service. The TD and Adrian video courses usually cover the use cases for every service; familiarize yourself with them and make notes.
Make sure that whenever you watch the videos, you create your own notes that you can review later on.
source: r/awscertifiations
Passed SAA-C03
Hi guys,
I successfully passed the SAA-C03 exam on Saturday with a score of 832. I felt the exam was pretty difficult and was wondering if I would pass… maybe I got a harder test set.
What I did to prepare:
Tom Carpenter’s SAA-C02 course on LinkedIn Learning. I started preparing for the exam last year but took a break in between. Meanwhile, AWS released the new version, so this course is no longer that relevant. They will probably update it for SAA-C03 in the future.
Tutorials Dojo practice tests and materials: Now these were great! I did a couple of their practice tests in review mode and a couple in timed mode. Overall (unpopular opinion), I felt the exam was harder than the practice tests, but the tests and their explanations prepared me pretty well.
Whizlabs SAA-C03 course: They have some practice tests which were fine, but they also have Labs which are great if you want to explore the AWS services in a guided environment.
Skillcertpro practice tests: The first 5 were fine, but the others were horrible. Stay away from them! They are full of typos and incorrect answers (S3 was ‘eventually consistent’ in one of the questions).
Are too many companies dependent on ONE cloud company? Amazon’s AWS outage impacted thousands of companies and the products they offer to consumers including doorbells, security cameras, refrigerators, 911 services, and productivity software.
AWS Glue is a pay-as-you-go service from Amazon that helps you with your ETL (extract, transform and load) needs. It automates time-consuming steps of data preparation for analytics. It extracts the data from different data sources, transforms it, and then saves it in the data warehouse. Today, we will explore AWS Glue in detail. Let’s start with the components of AWS Glue.
AWS Glue Components
Below, you’ll find some of the core components of AWS Glue.
Data Catalog
The Data Catalog is the persistent metadata store in AWS Glue. You have one Data Catalog per AWS account per region. It contains the metadata for all your data sources, table definitions, and job definitions used to manage the ETL process in AWS Glue.
Crawler
A crawler connects to your data sources and data targets, crawls the schema, and creates metadata tables in your AWS Glue Data Catalog.
Classifier
The classifier object determines the schema of a data store. AWS Glue has built-in classifiers for common data formats like CSV, JSON, and XML, and it also provides default classifiers for common relational database systems.
Data store
A data store is used to store the actual data in a persistent data storage system like S3 or a relational database management system.
Database
In AWS Glue terminology, a database is a collection of associated Data Catalog table definitions organized into a logical group.
AWS Glue Architecture
How AWS Glue Works
You will identify the data sources you will use.
You will define a crawler to point to each data source and populate the AWS Glue data catalog with the metadata table definitions. This metadata will be used when data is transformed during the ETL process.
Once your data has been catalogued, it is available for instant searching, querying, and ETL processing.
You will provide a script through the console or API so that the data can be transformed; AWS Glue can also generate this script for you.
You will run the job on demand or schedule it to run based on a trigger. A trigger can be a schedule or an event.
When a job is executed, the script extracts the data from the data source(s), transforms it, and loads the transformed data into the data target. The script is run in the Apache Spark environment in AWS Glue.
When To Use AWS Glue
Below are some of the top use cases for AWS Glue.
Build a data warehouse
If you want to build a data warehouse that will collect data from different sources, cleanse it, validate it, and transform it, then AWS Glue is an ideal fit. You can transform and move the AWS cloud data into your data store too.
Use Amazon S3 as a data lake
You can turn your S3 data into a data lake by cataloguing it in AWS Glue. The catalogued data then becomes available to Amazon Redshift and Amazon Athena for querying; both can query your S3 data directly through the AWS Glue Data Catalog.
Create event-driven ETL pipeline
AWS Glue is a perfect fit if you want to launch an ETL job as soon as fresh data is available in S3. You can use AWS Lambda along with AWS Glue to orchestrate the ETL process.
Features of AWS Glue
Below are some of the top features of AWS Glue.
Automatic schema recognition
Crawler is a very powerful component of AWS Glue that automatically recognizes the schema of your data. Users do not need to design the schema of each data source manually. Crawlers automatically identify the schema and parse the data.
Automatic ETL code generation
AWS Glue can create the ETL code automatically. You just specify the data source and the target data store, and AWS Glue generates the relevant Scala or Python code for the entire ETL pipeline.
Job scheduler
ETL jobs are very flexible in AWS Glue. You can execute jobs on demand, or have them triggered by a schedule or an event. Multiple jobs can run in parallel, and you can also specify job dependencies.
Developer endpoints
Developers can use developer endpoints to debug AWS Glue and to develop custom readers, writers, and transformations, which can later be imported into custom libraries.
Integrated data catalog
The Data Catalog is the most powerful component of AWS Glue: the central metadata store for all the diverse data sources in your pipeline. You maintain only one Data Catalog per AWS account per region.
Benefits of Using AWS Glue
Strong integrations
AWS Glue has strong integrations with other AWS services. It provides native support for Amazon RDS and Aurora databases. It also supports Amazon Redshift, S3, common database engines, and databases running on your EC2 instances. AWS Glue even supports NoSQL data sources like DynamoDB.
Built-in orchestration
You do not need to set up or maintain ETL pipeline infrastructure; AWS Glue handles the low-level complexities for you. The crawlers automate schema identification and parsing, freeing you from manually evaluating and parsing different complex data sources. AWS Glue also creates the ETL pipeline code automatically, and it has built-in features for logging, monitoring, alerting, and restarting failed jobs.
Serverless
AWS Glue is serverless, which means you do not need to worry about maintaining the underlying infrastructure. AWS Glue has built-in scaling capabilities, so it can automatically handle extra load. It automatically handles the setup, configuration, and scaling of the underlying resources.
Cost-effective
You only pay for what you use. You will only be charged for the time when your jobs are running. This is especially beneficial if your workload is unpredictable and you are not sure about the infrastructure to provision for your ETL jobs.
Drawbacks of Using AWS Glue
Here are some of the drawbacks of using AWS Glue.
Reliance on Apache Spark
As AWS Glue jobs run on Apache Spark, the team must have Spark expertise in order to customize the generated ETL jobs. AWS Glue also generates code in Python or Scala, so your engineers must know these programming languages too.
Complexity of some use cases
Apache Spark is not very efficient for use cases like advertising, gaming, and fraud detection, because these workloads need high-cardinality joins, which Spark does not handle well. You can handle these scenarios by adding extra components, although that makes your ETL pipeline more complex.
Similarly, if you need to combine stream and batch jobs, that is complex to handle in AWS Glue, because Glue requires batch and stream processes to be separate. As a result, you need to maintain extra code to make the two processes run together.
AWS Glue Pricing
For ETL jobs, you are charged only for the time the job runs. AWS charges an hourly rate based on the number of DPUs (Data Processing Units) needed to run your job; one DPU provides 4 vCPUs and 16 GB of memory. You also pay for storage of the metadata kept in the AWS Glue Data Catalog; the first million objects and the first million accesses are free. Crawlers and development endpoints are also charged at an hourly rate that depends on the number of DPUs.
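The pricing model reduces to simple arithmetic: cost = DPUs × hours × hourly rate per DPU. A back-of-the-envelope sketch; the $0.44/DPU-hour rate is an illustrative assumption (check current regional pricing), and real billing is per-second with a minimum duration, so treat this as an upper bound:

```python
DPU_HOUR_RATE_USD = 0.44  # assumed example rate, not an official price

def etl_job_cost(dpus, runtime_minutes, rate=DPU_HOUR_RATE_USD):
    """Estimate job cost as DPUs x hours x hourly rate per DPU."""
    return dpus * (runtime_minutes / 60) * rate

# A 10-DPU job that runs for 24 minutes:
print(round(etl_job_cost(10, 24), 2))  # → 1.76
```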
Frequently Asked Questions
How is AWS Glue different from AWS Lake Formation?
Lake Formation’s main focus is governance and data management, whereas AWS Glue is strong in ETL and data processing. They complement each other: Lake Formation is primarily a permission-management layer that uses the AWS Glue Data Catalog under the hood.
Can AWS Glue write to DynamoDB?
Yes, AWS Glue can write to DynamoDB. However, the write option is not available in the console; you need to customize the script to achieve it.
Can AWS Glue write to RDS?
Yes, AWS Glue can write to any RDS engine. In the ETL job wizard, select the “JDBC” target option and then create a connection to your RDS database.
Is AWS Glue in real-time?
AWS Glue can process data from Amazon Kinesis Data Streams in near real time using micro-batches. For large data sets there may be some delay. It can process petabytes of data both in batches and as streams.
Does AWS Glue Auto Scale?
AWS Glue provides auto scaling starting from version 3.0: it automatically adds or removes workers based on the workload.
Where is AWS Glue Data Catalog Stored?
The AWS Glue Data Catalog is a drop-in replacement for the Hive metastore, so the data is most probably stored in a MySQL-compatible database. However, this is not confirmed, as AWS has published no official information about it.
How Fast is AWS Glue?
AWS Glue 3.0 improved a lot in terms of speed: it is 2.4 times faster than version 2.0, thanks to vectorized readers and micro-parallel SIMD CPU instructions for faster data parsing, tokenization, and indexing.
Is AWS Glue Expensive?
AWS Glue is generally not expensive. Because it is serverless, you are charged only when it is actually used, and there is no permanent infrastructure cost.
Is AWS Glue a Database?
No. AWS Glue is a fully managed cloud service from Amazon through which you can prepare data for analysis through an automated ETL process.
Is AWS Glue difficult to learn?
AWS Glue is not particularly difficult to learn, because it provides a GUI through which you can easily author, run, and monitor ETL jobs.
What is The Difference Between AWS Glue and EMR?
AWS Glue and EMR are both AWS solutions for ETL processing. EMR can be slightly faster and cheaper, especially if you already have the required infrastructure. However, if you want a serverless solution or expect your workload to be inconsistent, then AWS Glue is the better option.
Definition 1: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
Definition 2: Amazon Elastic Compute Cloud (EC2) forms a central part of Amazon.com’s cloud-computing platform, Amazon Web Services (AWS), by allowing users to rent virtual computers on which to run their own computer applications.
AWS EC2 Facts and Summaries
Can users SSH to EC2 instances using their AWS user name and password? No. IAM user security credentials are not supported for direct authentication to customer EC2 instances. Managing EC2 SSH credentials (key pairs) is the customer’s responsibility.
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.
What is the difference between using the local instance store and Amazon Elastic Block Store (Amazon EBS) for the root device? When you launch your Amazon EC2 instances you have the ability to store your root device data on Amazon EBS or the local instance store. By using Amazon EBS, data on the root device will persist independently from the lifetime of the instance. This enables you to stop and restart the instance at a subsequent time, which is similar to shutting down your laptop and restarting it when you need it again. Alternatively, the local instance store only persists during the life of the instance. This is an inexpensive way to launch instances where data is not stored to the root device. For example, some customers use this option to run large web sites where each instance is a clone to handle web traffic.
How many instances can I run in Amazon EC2? You are limited to running up to 20 On-Demand instances across the instance family, purchasing 20 Reserved Instances, and requesting Spot Instances per your dynamic Spot limit, per region.
How quickly can I scale my capacity both up and down? Amazon EC2 provides a truly elastic computing environment. Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds or even thousands of server instances simultaneously. When you need more instances, you simply call RunInstances, and Amazon EC2 will typically set up your new instances in a matter of minutes. Of course, because this is all controlled with web service APIs, your application can automatically scale itself up and down depending on its needs.
When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions? Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
What is one key difference between an Amazon EBS-backed and an instance-store backed instance? Amazon EBS-backed instances can be stopped and restarted without losing data
How is the AWS EC2 service different from a plain hosting service? Traditional hosting services generally provide a pre-configured resource for a fixed amount of time and at a predetermined cost. Amazon EC2 differs fundamentally in the flexibility, control, and significant cost savings it offers developers, allowing them to treat Amazon EC2 as their own personal data center with the benefit of Amazon.com’s robust infrastructure. When computing requirements unexpectedly change (up or down), Amazon EC2 can respond instantly, meaning developers can control how many resources are in use at any given point in time. In contrast, traditional hosting services generally provide a fixed number of resources for a fixed amount of time, so users have limited ability to respond when their usage is rapidly changing, unpredictable, or known to experience large peaks at various intervals. Secondly, many hosting services don’t provide full control over the compute resources being provided. Using Amazon EC2, developers can not only initiate or shut down instances at any time, they can also completely customize the configuration of their instances to suit their needs, and change it at any time. Most hosting services cater to groups of users with similar system requirements and so offer limited ability to change these. Finally, with Amazon EC2 developers enjoy the benefit of paying only for their actual resource consumption, and at very low rates. Most hosting services require users to pay a fixed, up-front fee irrespective of the computing power actually used, so users risk overbuying resources to compensate for their inability to quickly scale up within a short time frame.
What load balancing options does the Elastic Load Balancing service offer? Elastic Load Balancing offers two types of load balancers that both feature high availability, automatic scaling, and robust security. These include the Classic Load Balancer that routes traffic based on either application or network level information, and the Application Load Balancer that routes traffic based on advanced application level information that includes the content of the request.
When should I use the Classic Load Balancer and when should I use the Application Load Balancer? The Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances, while the Application Load Balancer is ideal for applications needing advanced routing capabilities, microservices, and container-based architectures. Please visit Elastic Load Balancing for more information.
Can I get a history of all EC2 API calls made on my account for security analysis and operational troubleshooting purposes? Yes. To receive a history of all EC2 API calls (including VPC and EBS) made on your account, you simply turn on CloudTrail in the AWS Management Console. For more information, visit the CloudTrail home page.
Can I access the metrics data for a terminated Amazon EC2 instance or a deleted Elastic Load Balancer? Yes. Amazon CloudWatch stores metrics for terminated Amazon EC2 instances or deleted Elastic Load Balancers for 2 weeks.
Q0: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?
A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
B. Permanently assigning users to specific instances and always routing their traffic to those instances
C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance
A. Amazon ElastiCache for Memcached is a Memcached-compatible in-memory key-value store service that can be used as a cache or a data store. It delivers the performance, ease-of-use, and simplicity of Memcached. ElastiCache for Memcached is fully managed, scalable, and secure – making it an ideal candidate for use cases where frequently accessed data must be in-memory. It is a popular choice for use cases such as Web, Mobile Apps, Gaming, Ad-Tech, and E-Commerce.
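The pattern behind answer A can be sketched in pure Python. An in-memory dict stands in for the shared cache (ElastiCache running Redis or Memcached in practice); the point is that instances stay stateless and every request, whichever instance serves it, looks the session up in the same external store:

```python
class SessionCache:
    """Stand-in for a shared cache; in production this would be a
    Redis/Memcached client used by every EC2 instance behind the ELB."""
    def __init__(self):
        self._store = {}

    def get(self, session_id):
        return self._store.get(session_id)

    def put(self, session_id, data, ttl_seconds=1800):
        # A real cache would honour the TTL; this dict sketch ignores it.
        self._store[session_id] = data

def handle_request(cache, session_id):
    """Any instance can serve the request: session state comes from the
    shared cache, not from local instance memory."""
    session = cache.get(session_id) or {"visits": 0}
    session["visits"] += 1
    cache.put(session_id, session)
    return session

cache = SessionCache()
handle_request(cache, "abc")            # served by "instance 1"
result = handle_request(cache, "abc")   # "instance 2" sees the same session
print(result)  # → {'visits': 2}
```

This is why sticky sessions (options B-D) are weaker: if the pinned instance dies, its sessions die with it, whereas an external cache survives instance churn.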
Q2: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?
A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.
D. Security groups are stateful, so you do NOT need an explicit outbound rule for return traffic. However, NACLs are stateless, so you MUST configure an explicit outbound rule for return traffic.
Q4: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?
A. Autoscaling requires using Amazon EBS-backed instances
B. Virtual Private Cloud requires EBS backed instances
C. Amazon EBS-backed instances can be stopped and restarted without losing data
D. Instance-store backed instances can be stopped and restarted without losing data
C. Instance store-backed images use “ephemeral” (temporary) storage that is only available during the life of the instance. Rebooting an instance keeps ephemeral data intact, but stopping and starting an instance removes all ephemeral storage.
Q15: After creating a new Linux instance on Amazon EC2 and downloading the private key file (my_key.pem), you try to SSH into the instance at 52.2.222.22 using the following command: ssh -i my_key.pem ec2-user@52.2.222.22. However, you receive the following error: WARNING: UNPROTECTED PRIVATE KEY FILE! What is the most probable reason for this, and how can you fix it?
A. You do not have root access on your terminal and need to use the sudo option for this to work.
B. You do not have enough permissions to perform the operation.
C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.
D. You need to run something like: chmod 400 my_key.pem
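As an illustrative sketch, the same permission tightening can be done from Python with os.chmod (the temporary file below is just a stand-in for your .pem file):

```python
import os
import stat
import tempfile

# Illustrative only: create a stand-in key file, then restrict it to
# owner-read -- the equivalent of `chmod 400 my_key.pem`. SSH refuses
# private key files that other users can read.
fd, key_path = tempfile.mkstemp(suffix=".pem")
os.close(fd)
os.chmod(key_path, 0o400)

mode = stat.S_IMODE(os.stat(key_path).st_mode)
print(oct(mode))  # 0o400
```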
Q5: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?
A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.
D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.
Q6: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:
A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
B. Can only be used to launch EC2 instances in the same country as the AMI is stored
C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
D. Can be used to launch EC2 instances in any AWS region
C. AMIs are only available in the region in which they are created. Even the AWS-provided AMIs have been copied by AWS into each region for you. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another.
Q8: You are in charge of deploying an application that will be hosted on an EC2 instance and sit behind an Elastic Load Balancer. You have been asked to monitor the incoming connections to the Elastic Load Balancer. Which of the below options can satisfy this requirement?
A. Use AWS CloudTrail with your load balancer
B. Enable access logs on the load balancer
C. Use a CloudWatch Logs Agent
D. Create a custom metric CloudWatch filter on your load balancer
Answer – B Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues. Reference: Access Logs for Your Application Load Balancer
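As a sketch, an access log entry can be picked apart with shlex, which keeps quoted fields (such as the request line) intact; the sample entry below is illustrative and truncated to the leading fields, and the field positions assumed here are not authoritative:

```python
import shlex

# A sample (truncated) Application Load Balancer access log entry.
line = ('http 2018-07-02T22:23:00.186641Z app/my-loadbalancer/50dc6c495c0c9188 '
        '192.168.131.39:2817 10.0.0.1:80 0.000 0.001 0.000 200 200 34 366 '
        '"GET http://www.example.com:80/ HTTP/1.1" "curl/7.46.0"')

# shlex.split keeps the quoted request and user-agent as single tokens.
fields = shlex.split(line)
entry = {
    "time": fields[1],
    "client": fields[3],
    "elb_status_code": fields[8],
    "request": fields[12],
}
print(entry["elb_status_code"])  # 200
```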
IAM is a framework of policies and technologies for ensuring that the proper people in an enterprise have the appropriate access to technology resources. Identity management (IdM) systems fall under the overarching umbrella of IT security and data management.
Definition 2: AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
You can use AWS IAM to securely control individual and group access to your AWS resources. You can create and manage user identities (“IAM users”) and grant permissions for those IAM users to access your resources. You can also grant permissions for users outside of AWS (federated users).
How do users call AWS services? Users can make requests to AWS services using security credentials. Explicit permissions govern a user’s ability to call AWS services. By default, users have no ability to call service APIs on behalf of the account.
What kinds of security credentials can IAM users have? IAM users can have any combination of credentials that AWS supports, such as an AWS access key, X.509 certificate, SSH key, password for web app logins, or an MFA device.
What is the access level for newly created regular users in AWS? Default deny to all resources and actions. By default, new AWS users have no access to any AWS resources (an implicit default deny). That default deny does not prevent an explicit allow from granting them access. Keep in mind that explicit denies override explicit allows.
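The default-deny and explicit-deny behavior can be sketched with a toy evaluator (a simplification of the real policy engine, for illustration only):

```python
def evaluate(statements, action):
    """Toy model of IAM's decision logic: start from the implicit
    default deny, let a matching Allow flip the decision, and let a
    matching explicit Deny override everything."""
    decision = "implicit-deny"
    for stmt in statements:
        if action in stmt["Action"]:
            if stmt["Effect"] == "Deny":
                return "explicit-deny"  # explicit deny always wins
            decision = "allow"
    return decision

policy = [
    {"Effect": "Allow", "Action": ["s3:GetObject"]},
    {"Effect": "Deny",  "Action": ["s3:DeleteObject"]},
]
print(evaluate(policy, "s3:GetObject"))     # allow
print(evaluate(policy, "s3:PutObject"))     # implicit-deny
print(evaluate(policy, "s3:DeleteObject"))  # explicit-deny
```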
What is identity federation? AWS Identity and Access Management (IAM) supports identity federation for delegated access to the AWS Management Console or AWS APIs. With identity federation, external identities are granted secure access to resources in your AWS account without having to create IAM users. These external identities can come from your corporate identity provider (such as Microsoft Active Directory or from the AWS Directory Service) or from a web identity provider (such as Amazon Cognito, Login with Amazon, Facebook, Google, or any OpenID Connect-compatible provider).
Does AWS IAM support SAML? Yes, AWS supports the Security Assertion Markup Language (SAML) 2.0.
What SAML profiles does AWS support? The AWS single sign-on (SSO) endpoint supports the IdP-initiated HTTP-POST binding WebSSO SAML Profile. This enables a federated user to sign in to the AWS Management Console using a SAML assertion. A SAML assertion can also be used to request temporary security credentials using the AssumeRoleWithSAML API. For more information, see About SAML 2.0-Based Federation.
Can a temporary security credential be revoked prior to its expiration? No. When requesting temporary credentials, we recommend the following:
When creating temporary security credentials, set the expiration to a value that is appropriate for your application.
Because root account permissions cannot be restricted, use an IAM user and not the root account for creating temporary security credentials. You can revoke the permissions of the IAM user that issued the original request. This action almost immediately revokes privileges for all temporary security credentials issued by that IAM user.
Can I reactivate or extend the expiration of temporary security credentials? No. It is a good practice to actively check the expiration and request a new temporary security credential before the old one expires. This rotation process is automatically managed for you when temporary security credentials are used in roles for EC2 instances.
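That check-before-expiry pattern can be sketched as follows (the five-minute margin is an arbitrary illustrative choice):

```python
from datetime import datetime, timedelta, timezone

def needs_refresh(expiration, margin=timedelta(minutes=5)):
    # Temporary credentials cannot be extended, so request a fresh set
    # once the current ones are within `margin` of expiring.
    return datetime.now(timezone.utc) >= expiration - margin

soon = datetime.now(timezone.utc) + timedelta(minutes=2)
later = datetime.now(timezone.utc) + timedelta(hours=1)
print(needs_refresh(soon))   # True
print(needs_refresh(later))  # False
```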
What does a policy look like? The following policy grants access to add, update, and delete objects from a specific folder, example_folder, in a specific bucket, example_bucket.
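A policy matching that description — allowing objects to be added, updated, and deleted under example_folder in example_bucket — can be sketched as follows (illustrative, not necessarily the exact policy the original FAQ showed):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::example_bucket/example_folder/*"
    }
  ]
}
```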
What is the IAM policy simulator? The IAM policy simulator is a tool to help you understand, test, and validate the effects of your access control policies.
What can the policy simulator be used for? You can use the policy simulator in several ways. You can test policy changes to ensure they have the desired effect before committing them to production. You can validate existing policies attached to users, groups, and roles to verify and troubleshoot permissions. You can also use the policy simulator to understand how IAM policies and resource-based policies work together to grant or deny access to AWS resources.
Is there an authentication API to verify IAM user sign-ins? No. There is no programmatic way to verify user sign-ins.
Can users SSH to EC2 instances using their AWS user name and password? No. User security credentials created with IAM are not supported for direct authentication to customer EC2 instances. Managing EC2 SSH credentials is the customer’s responsibility within the EC2 console.
Are IAM actions logged for auditing purposes? Yes. You can log IAM actions, STS actions, and AWS Management Console sign-ins by activating AWS CloudTrail. To learn more about AWS logging, see AWS CloudTrail.
What is AWS MFA? AWS multi-factor authentication (AWS MFA) provides an extra level of security that you can apply to your AWS environment. You can enable AWS MFA for your AWS account and for individual AWS Identity and Access Management (IAM) users you create under your account.
What problems does IAM solve? IAM makes it easy to provide multiple users secure access to your AWS resources. IAM enables you to: Manage IAM users and their access: You can create users in AWS’s identity management system, assign users individual security credentials (such as access keys, passwords, multi-factor authentication devices), or request temporary security credentials to provide users access to AWS services and resources. You can specify permissions to control which operations a user can perform. Manage access for federated users: You can request security credentials with configurable expirations for users who you manage in your corporate directory, allowing you to provide your employees and applications secure access to resources in your AWS account without creating an IAM user account for them. You specify the permissions for these security credentials to control which operations a user can perform.
What is an IAM role? An IAM role is an IAM entity that defines a set of permissions for making AWS service requests. IAM roles are not associated with a specific user or group. Instead, trusted entities assume roles, such as IAM users, applications, or AWS services such as EC2.
What problems do IAM roles solve? IAM roles allow you to delegate access with defined permissions to trusted entities without having to share long-term access keys. You can use IAM roles to delegate access to IAM users managed within your account, to IAM users under a different AWS account, or to an AWS service such as EC2.
Q0: What are the main benefits of IAM groups? (Select two)
A. The ability to create custom permission policies.
B. Assigning IAM permission policies to more than one user at a time.
C. Easier user/policy management.
D. Allowing EC2 instances to gain access to S3.
B. and C.
An IAM group is a collection of IAM users. Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users. For example, you could have a group called Admins and give that group the types of permissions that administrators typically need. Any user in that group automatically has the permissions that are assigned to the group. If a new user joins your organization and needs administrator privileges, you can assign the appropriate permissions by adding the user to that group. Similarly, if a person changes jobs in your organization, instead of editing that user’s permissions, you can remove him or her from the old groups and add him or her to the appropriate new groups. Reference: IAM Groups
Q1: You would like to use STS to allow end users to authenticate from third-party providers such as Facebook, Google, and Amazon. What is this type of authentication called?
A. Web Identity Federation
B. Enterprise Identity Federation
C. Cross-Account Access
D. Commercial Federation
A. AWS Identity and Access Management (IAM) supports identity federation for delegated access to the AWS Management Console or AWS APIs. With identity federation, external identities are granted secure access to resources in your AWS account without having to create IAM users. These external identities can come from your corporate identity provider (such as Microsoft Active Directory or from the AWS Directory Service) or from a web identity provider (such as Amazon Cognito, Login with Amazon, Facebook, Google, or any OpenID Connect-compatible provider).
Q4: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below.
A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
C. Authenticate your users at the application level, and use AWS Security Token Service (STS) to grant token-based authorization to S3 objects.
D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.
Answer: C. The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3. You can then provide access to the objects based on key values generated from the user ID.
Q5: You’ve developed a Lambda function and are now in the process of debugging it. You add the necessary print statements in the code to assist in the debugging. You go to CloudWatch Logs, but you see no logs for the Lambda function. Which of the following could be the underlying issue?
A. You’ve not enabled versioning for the Lambda function
B. The IAM Role assigned to the Lambda function does not have the necessary permission to create Logs
C. There is not enough memory assigned to the function
D. There is not enough time assigned to the function
Answer: B “If your Lambda function code is executing, but you don’t see any log data being generated after several minutes, this could mean your execution role for the Lambda function did not grant permissions to write log data to CloudWatch Logs. For information about how to make sure that you have set up the execution role correctly to grant these permissions, see Manage Permissions: Using an IAM Role (Execution Role)”.
Q6: Your application must write to an SQS queue. Your corporate security policies require that AWS credentials are always encrypted and are rotated at least once a week. How can you securely provide credentials that allow your application to write to the queue?
A. Have the application fetch an access key from an Amazon S3 bucket at run time.
B. Launch the application’s Amazon EC2 instance with an IAM role.
C. Encrypt an access key in the application source code.
D. Enroll the instance in an Active Directory domain and use AD authentication.
Answer: B. IAM roles are based on temporary security tokens, so they are rotated automatically. Keys in the source code cannot be rotated (and are a very bad idea). It’s impossible to retrieve credentials from an S3 bucket if you don’t already have credentials for that bucket. Active Directory authorization will not grant access to AWS resources. Reference: AWS IAM FAQs
Q65: A corporate web application is deployed within an Amazon VPC and is connected to the corporate data center via an IPSec VPN. The application must authenticate against the on-premises LDAP server. Once authenticated, logged-in users can only access an S3 keyspace specific to the user. Which of the solutions below meet these requirements? (Choose 2)
A. The application authenticates against LDAP, and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM Role. The application can use the temporary credentials to access the S3 keyspace.
B. Develop an identity broker which authenticates against LDAP, and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 keyspace
C. Develop an identity broker which authenticates against IAM Security Token Service to assume an IAM Role to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the app
D. The application authenticates against LDAP. The application then calls the IAM Security Service to login to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate S3 bucket.
Answer: A. and B. The question clearly says “authenticate against LDAP”. Temporary credentials come from STS. Federated user credentials come from the identity broker. Reference: IAM FAQs
Definition 1: Serverless computing is a cloud-computing execution model in which the cloud provider runs the server, and dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity. It can be a form of utility computing. Definition 2: AWS Serverless is the native architecture of the cloud that enables you to shift more of your operational responsibilities to AWS, increasing your agility and innovation. Serverless allows you to build and run applications and services without thinking about servers. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning.
AWS Serverless Facts and summaries
The AWS Serverless Application Model (AWS SAM) is a model to define serverless applications. AWS SAM is natively supported by AWS CloudFormation and provides a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.
You can use AWS CodePipeline with the AWS Serverless Application Model to automate building, testing, and deploying serverless applications. AWS CodeBuild integrates with CodePipeline to provide automated builds. You can use AWS CodeDeploy to gradually roll out and test new Lambda function versions.
You can monitor and troubleshoot the performance of your serverless applications and AWS Lambda functions with AWS services and third-party tools. Amazon CloudWatch helps you see real-time reporting metrics and logs for your serverless applications. You can use AWS X-Ray to debug and trace your serverless applications and AWS Lambda.
The AWS Serverless Application Repository is a managed repository for serverless applications. It enables teams, organizations, and individual developers to store and share reusable applications, and easily assemble and deploy serverless architectures in powerful new ways. Using the Serverless Application Repository, you don’t need to clone, build, package, or publish source code to AWS before deploying it. Instead, you can use pre-built applications from the Serverless Application Repository in your serverless architectures, helping you and your teams reduce duplicated work, ensure organizational best practices, and get to market faster.
Anyone with an AWS account can publish a serverless application to the Serverless Application Repository. Applications can be privately shared with specific AWS accounts. Applications that are shared publicly include a link to the application’s source code so others can view what the application does and how it works.
What kinds of applications are available in the AWS Serverless Application Repository? The AWS Serverless Application Repository includes applications for Alexa Skills, chatbots, data processing, IoT, real time stream processing, web and mobile back-ends, social media trend analysis, image resizing, and more from publishers on AWS.
The AWS Serverless Application Repository enables developers to publish serverless applications developed in a GitHub repository. Using AWS CodePipeline to link a GitHub source with the AWS Serverless Application Repository can make the publishing process even easier, and the process can be set up in minutes.
What two arguments does a Python Lambda handler function require? Event, Context
A Lambda deployment package contains function code and libraries not included within the runtime environment.
When referencing the remaining time left for a Lambda function to run within the function’s code you would use The context object.
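The handler signature and context usage described above can be sketched as follows (FakeContext is a local test stand-in, not part of the Lambda runtime):

```python
def handler(event, context):
    # event: the invocation payload (a dict for JSON events)
    # context: runtime information, including the remaining execution time
    remaining_ms = context.get_remaining_time_in_millis()
    name = event.get("name", "world")
    return {"greeting": f"Hello, {name}", "remaining_ms": remaining_ms}

class FakeContext:
    """Stand-in for the runtime's context object, for local testing."""
    def get_remaining_time_in_millis(self):
        return 30000

print(handler({"name": "Alice"}, FakeContext()))
# {'greeting': 'Hello, Alice', 'remaining_ms': 30000}
```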
Long-running, memory-intensive workloads are least suited to AWS Lambda.
The maximum execution duration of your Lambda functions is Fifteen Minutes
Logs for Lambda functions are Stored in AWS CloudWatch
Docker Container Images are constructed using instructions in a file called Dockerfile
The ECS Task Agent is responsible for starting and stopping tasks. It runs inside the EC2 instance and reports information such as running tasks and resource utilization.
AWS ECR Stores Container Images.
Elastic Beanstalk is used to Deploy and scale web applications and services developed with a supported platform
When deploying a simple Python web application with Elastic Beanstalk, which of the following AWS resources will be created and managed for you by Elastic Beanstalk? An Elastic Load Balancer, an S3 bucket, and an EC2 instance.
When using Elastic Beanstalk you can deploy your web applications by:
Configuring a git repository with Elastic Beanstalk so that changes will be detected and your application will be updated.
Uploading code files to the Elastic Beanstalk service
Q00: You have created a serverless application which converts text into speech using a combination of S3, API Gateway, Lambda, Polly, DynamoDB and SNS. Your users complain that only some text is being converted, whereas longer passages of text do not get converted. What could be the cause of this problem?
A. Polly has built in censorship, so if you try and send it text that is deemed offensive, it will not generate an MP3.
B. You’ve placed your DynamoDB table in a single availability zone, which is currently down, causing an outage.
C. Your lambda function needs a longer execution time. You should check how long is needed in the fringe cases and increase the timeout inside the function to slightly longer than that.
D. AWS X-Ray service is interfering with the application and should be disabled.
C. The Lambda function is most likely timing out on longer passages of text, which take longer to convert. Check how long the fringe cases need and increase the timeout inside the function to slightly longer than that.
Q3: You have launched a new web application on AWS using API Gateway, Lambda and S3. Someone posts a thread about your application to Reddit and it starts to go viral. You start receiving 100,000 requests every second and you notice that most requests are similar. Your web application begins to struggle. What can you do to optimize the performance of your application?
A. Enable API Gateway Accelerator
B. Enable API Gateway caching to cache frequent requests.
C. Change your Route 53 alias record to point to AWS Neptune and then configure Neptune to filter your API requests to genuine requests only.
D. Migrate your API Gateway to a Network Load Balancer and enable session stickiness for all sessions.
Answer: B. Enabling API Gateway caching lets frequently repeated requests be served from the cache instead of invoking your backend each time, reducing load and improving latency.
Q4: Which of the following services does X-ray integrate with? (Choose 3)
A. Elastic Load Balancer
B. Lambda
C. S3
D. API Gateway
Answer: A. B. and D. AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. You can use X-Ray with applications running on EC2, ECS, Lambda, and Elastic Beanstalk. In addition, the X-Ray SDK automatically captures metadata for API calls made to AWS services using the AWS SDK. In addition, the X-Ray SDK provides add-ons for MySQL and PostgreSQL drivers.
Q5: You are a developer for a busy real estate company and you want to give other real estate agents the ability to show properties on your books, but skinned so that it looks like their own website. You decide the most efficient way to do this is to expose your API to the public. The project works well; however, one of your competitors starts abusing this, sending your API tens of thousands of requests per second. This generates an HTTP 429 error. Each agent connects to your API using an individual API key. What actions can you take to stop this behavior?
A. Use AWS Shield Advanced API protection to block the requests.
B. Deploy multiple API Gateways and give the agent access to another API Gateway.
C. Place an AWS Web Application Firewall in front of API gateway and filter requests.
D. Throttle the agents’ API access using the individual API keys
Answer: D. Throttling ensures that API traffic is controlled to help your backend services maintain performance and availability. How can I protect my backend systems and applications from traffic spikes? Amazon API Gateway provides throttling at multiple levels including global and by service call. Throttling limits can be set for standard rates and bursts. For example, API owners can set a rate limit of 1,000 requests per second for a specific method in their REST APIs, and also configure Amazon API Gateway to handle a burst of 2,000 requests per second for a few seconds. Amazon API Gateway tracks the number of requests per second. Any requests over the limit will receive a 429 HTTP response. The client SDKs generated by Amazon API Gateway retry calls automatically when met with this response.
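The retry-on-429 behavior that the generated client SDKs provide can be sketched with exponential backoff (fake_request below stands in for a real HTTP call, and the delays are scaled down for the demo):

```python
import time

def call_with_backoff(request, max_attempts=5, base_delay=0.01):
    # Retry while the API returns HTTP 429, sleeping exponentially
    # longer between attempts; give up after max_attempts.
    for attempt in range(max_attempts):
        status, body = request()
        if status != 429:
            return status, body
        time.sleep(base_delay * (2 ** attempt))
    return status, body

attempts = {"n": 0}
def fake_request():
    # Simulates a throttled API: 429 twice, then success.
    attempts["n"] += 1
    return (429, "throttled") if attempts["n"] <= 2 else (200, "ok")

status, body = call_with_backoff(fake_request)
print(status, body)  # 200 ok
```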
Q6: You are developing a new application using serverless infrastructure and are using services such as S3, DynamoDB, Lambda, API Gateway, CloudFront, CloudFormation and Polly. You deploy your application to production and your end users begin complaining about receiving an HTTP 429 error. What could be the cause of the error?
A. You enabled API throttling for a rate limit of 1000 requests per second while in development and now that you have deployed to production your API Gateway is being throttled.
B. Your CloudFormation stack is not valid and is failing to deploy properly, which is causing an HTTP 429 error.
C. Your lambda function does not have sufficient permissions to read to DynamoDB and this is generating a HTTP 429 error.
D. You have an S3 bucket policy which is preventing Lambda from being able to write to your bucket, generating an HTTP 429 error.
Answer: A. Amazon API Gateway provides throttling at multiple levels including global and by service call. Throttling limits can be set for standard rates and bursts. For example, API owners can set a rate limit of 1,000 requests per second for a specific method in their REST APIs, and also configure Amazon API Gateway to handle a burst of 2,000 requests per second for a few seconds. Amazon API Gateway tracks the number of requests per second. Any requests over the limit will receive a 429 HTTP response. The client SDKs generated by Amazon API Gateway retry calls automatically when met with this response.
Q7: What is the format of structured notification messages sent by Amazon SNS?
A. An XML object containing MessageId, UnsubscribeURL, Subject, Message and other values
B. A JSON object containing MessageId, DuplicateFlag, Message and other values
C. An XML object containing MessageId, DuplicateFlag, Message and other values
D. A JSON object containing MessageId, UnsubscribeURL, Subject, Message and other values
Answer: D.
The notification message sent by Amazon SNS for deliveries over HTTP, HTTPS, Email-JSON and SQS transport protocols will consist of a simple JSON object, which will include the following information: MessageId: A Universally Unique Identifier, unique for each notification published. Reference: Format of structured notification messages sent by Amazon SNS
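Parsing such a notification can be sketched as follows (all field values below are made up for illustration):

```python
import json

# A made-up example of the JSON object SNS delivers over HTTP/HTTPS,
# Email-JSON and SQS transports.
raw = """{
  "Type": "Notification",
  "MessageId": "11111111-2222-3333-4444-555555555555",
  "Subject": "example subject",
  "Message": "example message body",
  "UnsubscribeURL": "https://sns.us-east-1.amazonaws.com/?Action=Unsubscribe"
}"""

notification = json.loads(raw)
print(notification["MessageId"])  # 11111111-2222-3333-4444-555555555555
print(notification["Subject"])    # example subject
```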
Definition 1: The AWS Developer is responsible for designing, deploying, and developing cloud applications on the AWS platform.
Definition 2: The AWS Developer Tools is a set of services designed to enable developers and IT operations professionals practicing DevOps to rapidly and safely deliver software.
The AWS Certified Developer Associate certification is a widely recognized certification that validates a candidate’s expertise in developing and maintaining applications on the Amazon Web Services (AWS) platform.
The certification is about to undergo a major change with the introduction of the new exam version DVA-C02, replacing the current DVA-C01. In this article, we will discuss the differences between the two exams and what candidates should consider in terms of preparation for the new DVA-C02 exam.
Quick facts
The DVA-C01 exam is being replaced by the DVA-C02 exam.
When is this taking place?
The last day to take the current exam is February 27th, 2023 and the first day to take the new exam is February 28th, 2023.
What’s the difference?
The new exam features some new AWS services and features.
Main differences between DVA-C01 and DVA-C02
The table below details the differences between the DVA-C01 and DVA-C02 exams domains and weightings:
In terms of the exam content weightings, the DVA-C02 exam places a greater emphasis on deployment and management, with a slightly reduced emphasis on development and refactoring. This shift reflects the increased importance of operations and management in cloud computing, as well as the need for developers to have a strong understanding of how to deploy and maintain applications on the AWS platform.
One major difference between the two exams is the focus on the latest AWS services and features. The DVA-C02 exam covers around 57 services vs only 33 services in the DVA-C01. This reflects the rapidly evolving AWS ecosystem and the need for developers to be up-to-date with the latest services and features in order to effectively build and maintain applications on the platform.
In terms of preparation for the DVA-C02 exam, we strongly recommend enrolling in our on-demand training courses for the AWS Developer Associate certification. It is important for candidates to familiarize themselves with the latest AWS services and features, as well as the updated exam content weightings. Practical experience working with AWS services and hands-on experimentation with new services and features will be key to success on the exam. Candidates should also focus on their understanding of security best practices, access control, and compliance, as these topics will carry a greater weight in the new exam.
In conclusion, the change from the DVA-C01 to the DVA-C02 exam represents a major shift in the focus and content of the AWS Certified Developer Associate certification. Candidates preparing for the new exam should focus on familiarizing themselves with the latest AWS services and features, as well as the updated exam content weightings, and placing a strong emphasis on security, governance, and compliance.
With the right preparation and focus, candidates can successfully navigate the changes in the DVA-C02 exam and maintain their status as a certified AWS Developer Associate.
AWS Developer and Deployment Theory Facts and summaries
Continuous Integration is about integrating or merging the code changes frequently, at least once per day. It enables multiple devs to work on the same application.
Continuous delivery is all about automating the build, test, and deployment functions.
Continuous Deployment fully automates the entire release process, code is deployed into Production as soon as it has successfully passed through the release pipeline.
AWS CodePipeline is a continuous integration/continuous delivery service:
It automates your end-to-end software release process based on a user-defined workflow
It can be configured to automatically trigger your pipeline as soon as a change is detected in your source code repository
It integrates with other services from AWS like CodeBuild and CodeDeploy, as well as third party custom plug-ins.
AWS CodeBuild is a fully managed build service. It can build source code, run tests and produce software packages based on commands that you define yourself.
By default, the buildspec.yml file defines the build commands and settings used by CodeBuild to run your build.
AWS CodeDeploy is a fully managed automated deployment service and can be used as part of a Continuous Delivery or Continuous Deployment process.
There are 2 types of deployment approach:
In-place or rolling update: you stop the application on each host and deploy the latest code. EC2 and on-premises systems only. To roll back, you must re-deploy the previous version of the application.
Blue/Green: new instances are provisioned and the new application is deployed to these new instances. Traffic is routed to the new instances according to your own schedule. Supported for EC2, on-premises systems and Lambda functions. Rollback is easy: just route the traffic back to the original instances. Blue is the active deployment; green is the new release.
Docker allows you to package your software into containers, which you can run in Elastic Container Service (ECS).
A Docker container includes everything the software needs to run, including code, libraries, runtime, environment variables, etc.
A special file called Dockerfile is used to specify the instructions needed to assemble your Docker image.
Once built, Docker images can be stored in Elastic Container Registry (ECR) and ECS can then use the image to launch Docker Containers.
AWS CodeCommit is based on Git. It provides centralized repositories for all your code, binaries, images, and libraries.
CodeCommit tracks and manages code changes. It maintains version history.
CodeCommit manages updates from multiple sources and enables collaboration.
To support CORS, an API resource needs to implement an OPTIONS method that can respond to the OPTIONS preflight request with the following headers:
Access-Control-Allow-Headers
Access-Control-Allow-Origin
Access-Control-Allow-Methods
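A minimal sketch of what such an OPTIONS preflight response might look like; the allowed origin and method list here are placeholder values, not part of any AWS API:

```python
def preflight_response(allowed_origin="https://example.com",
                       allowed_methods=("GET", "POST", "OPTIONS")):
    """Build the headers an OPTIONS method must return for a CORS preflight."""
    return {
        "statusCode": 200,
        "headers": {
            "Access-Control-Allow-Headers": "Content-Type,Authorization",
            "Access-Control-Allow-Origin": allowed_origin,
            "Access-Control-Allow-Methods": ",".join(allowed_methods),
        },
        "body": "",
    }

resp = preflight_response()
print(resp["headers"])
```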
You have a legacy application that works via XML messages. You need to place the application behind API Gateway in order for customers to make API calls. Which of the following would you need to configure? You will need to work with the Request and Response Data mapping.
Your application currently points to several Lambda functions in AWS. A change is being made to one of the Lambda functions. You need to ensure that application traffic is shifted slowly from one Lambda function to the other. Which of the following steps would you carry out?
Create an ALIAS with the –routing-config parameter
Update the ALIAS with the –routing-config parameter
By default, an alias points to a single Lambda function version. When the alias is updated to point to a different function version, incoming request traffic in turn instantly points to the updated version. This exposes that alias to any potential instabilities introduced by the new version. To minimize this impact, you can implement the routing-config parameter of the Lambda alias that allows you to point to two different versions of the Lambda function and dictate what percentage of incoming traffic is sent to each version.
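As a sketch of the idea, the routing config is just a weights map pointing at the new version; the function and alias names in the commented-out boto3 call are placeholders:

```python
def build_routing_config(new_version: str, weight: float) -> dict:
    """Route `weight` (0.0-1.0) of traffic to `new_version`; the remainder
    stays on the version the alias already points to."""
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must be between 0 and 1")
    return {"AdditionalVersionWeights": {new_version: weight}}

routing = build_routing_config("2", 0.05)  # send 5% of traffic to version 2

# Applying it with boto3 would look roughly like this (names are placeholders):
# import boto3
# boto3.client("lambda").update_alias(
#     FunctionName="my-function", Name="PROD",
#     FunctionVersion="1", RoutingConfig=routing)
print(routing)
```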
AWS CodeDeploy: The AppSpec file defines all the parameters needed for the deployment e.g. location of application files and pre/post deployment validation tests to run.
For EC2/on-premises deployments, the appspec.yml file must be placed in the root directory of your revision (the same folder that contains your application code) and must be written in YAML.
For Lambda and ECS deployment, the AppSpec file can be YAML or JSON
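As a rough illustration, an appspec.yml for an EC2/on-premises deployment might look like the following; the destination path and script names are placeholders, not prescribed values:

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  AfterInstall:
    - location: scripts/configure_app.sh
      timeout: 300
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
```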
Visual workflows are automatically created when working with AWS Step Functions.
API Gateway stages store configuration for deployment. An API Gateway stage is a snapshot of your API.
AWS SWF guarantees the delivery order of messages/tasks.
Blue/Green Deployments with CodeDeploy on AWS Lambda can happen in multiple ways. Which of these is a potential option? Linear, All at once, Canary
X-Ray Filter Expressions allow you to search through request information using characteristics like URL Paths, Trace ID, Annotations
S3 has eventual consistency for overwrite PUTs and DELETEs.
What can you do to ensure the most recent version of your Lambda functions is in CodeDeploy? Specify the version to be deployed in AppSpec file.
Reference: AppSpec Files on an Amazon ECS Compute Platform (https://docs.aws.amazon.com/codedeploy/latest/userguide/application-specification-files.html)
If your application uses the Amazon ECS compute platform, the AppSpec file can be formatted with either YAML or JSON. It can also be typed directly into an editor in the console. The AppSpec file is used to specify:
The name of the Amazon ECS service and the container name and port used to direct traffic to the new task set. The functions to be used as validation tests. You can run validation Lambda functions after deployment lifecycle events. For more information, see AppSpec ‘hooks’ Section for an Amazon ECS Deployment, AppSpec File Structure for Amazon ECS Deployments , and AppSpec File Example for an Amazon ECS Deployment .
Q2: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?
Q5: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?
A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
C. Configure Notifications in the console; this will create a CloudWatch Events rule to send a notification to an SNS topic, which will trigger an email to be sent to the user.
D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.
Q8: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?
A. Transforms
B. Outputs
C. Resources
D. Instances
Answer: C. The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3.
Q9: Which AWS service can be used to fully automate your entire release process?
A. CodeDeploy
B. CodePipeline
C. CodeCommit
D. CodeBuild
Answer: B. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates
Q10: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?
A. Outputs
B. Transforms
C. Resources
D. Exports
Answer: A. Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack.
Q11: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?
A. Inputs
B. Resources
C. Transforms
D. Files
Answer: C. Transforms is used to reference code located in S3 and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments. For example:
Transform:
  Name: 'AWS::Include'
  Parameters:
    Location: 's3://MyAmazonS3BucketName/MyFileName.yaml'
Q12: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?
Q13: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?
A. Share the code using an EBS volume
B. Copy and paste the code into the template each time you need to use it
C. Use a CloudFormation nested stack
D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.
Q15: You need to set up a RESTful API service in AWS that would be serviced via the following URL: https://democompany.com/customers. Which of the following combination of services can be used for development and hosting of the RESTful service? Choose 2 answers from the options below
Q16: As a developer, you have created a Lambda function that is used to work with a bucket in Amazon S3. The Lambda function is not working as expected. You need to debug the issue and understand what the underlying issue is. How can you accomplish this in an easily understandable way?
A. Use AWS CloudWatch metrics
B. Put logging statements in your code
C. Set the Lambda function debugging level to verbose
D. Use AWS CloudTrail logs
Answer: B. You can insert logging statements into your code to help you validate that your code is working as expected. Lambda automatically integrates with Amazon CloudWatch Logs and pushes all logs from your code to a CloudWatch Logs group associated with a Lambda function (/aws/lambda/). Reference: Using Amazon CloudWatch
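As a sketch of option B, a handler might use the standard logging module; the handler and event shape below are hypothetical, but anything logged this way ends up in the function's CloudWatch Logs group:

```python
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # In Lambda, each of these statements is pushed to CloudWatch Logs.
    logger.info("received event with %d keys", len(event))
    bucket = event.get("bucket", "unknown")
    logger.info("working with bucket %s", bucket)
    return {"statusCode": 200, "bucket": bucket}

# Local smoke test (context is unused here, so None is fine):
result = lambda_handler({"bucket": "my-test-bucket"}, None)
```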
Q17: You have a Lambda function that is invoked asynchronously. You need a way to check and debug issues if the function fails. How could you accomplish this?
A. Use AWS CloudWatch metrics
B. Assign a dead letter queue
C. Configure SNS notifications
D. Use AWS CloudTrail logs
Answer: B. Any Lambda function invoked asynchronously is retried twice before the event is discarded. If the retries fail and you’re unsure why, use Dead Letter Queues (DLQ) to direct unprocessed events to an Amazon SQS queue or an Amazon SNS topic to analyze the failure. Reference: AWS Lambda Function Dead Letter Queues
Q18: You are developing an application that is going to make use of Amazon Kinesis. Due to the high throughput, you decide to have multiple shards for the streams. Which of the following is TRUE when it comes to processing data across multiple shards?
A. You cannot guarantee the order of data across multiple shards. It's possible only within a shard
B. Order of data is possible across all shards in a streams
C. Order of data is not possible at all in Kinesis streams
D. You need to use Kinesis firehose to guarantee the order of data
Answer: A. Kinesis Data Streams lets you order records and read and replay records in the same order to many Kinesis Data Streams applications. To enable write ordering, Kinesis Data Streams expects you to call the PutRecord API to write serially to a shard while using the sequenceNumberForOrdering parameter. Setting this parameter guarantees strictly increasing sequence numbers for puts from the same client and to the same partition key. Option A is correct as it cannot guarantee the ordering of records across multiple shards.
Q19: You’ve developed a Lambda function and are now in the process of debugging it. You add the necessary print statements in the code to assist in the debugging. You go to CloudWatch Logs, but you see no logs for the Lambda function. Which of the following could be the underlying issue?
A. You’ve not enabled versioning for the Lambda function
B. The IAM Role assigned to the Lambda function does not have the necessary permission to create Logs
C. There is not enough memory assigned to the function
D. There is not enough time assigned to the function
Answer: B “If your Lambda function code is executing, but you don’t see any log data being generated after several minutes, this could mean your execution role for the Lambda function did not grant permissions to write log data to CloudWatch Logs. For information about how to make sure that you have set up the execution role correctly to grant these permissions, see Manage Permissions: Using an IAM Role (Execution Role)”.
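The execution role needs roughly the following permissions; this is the standard CloudWatch Logs policy shape, with a wildcard resource ARN used here as a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```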
Q20: Your application is developed to pick up metrics from several servers and push them to CloudWatch. At times, the application gets 429 client errors. Which of the following can be done from the programming side to resolve such errors?
A. Use the AWS CLI instead of the SDK to push the metrics
B. Ensure that all metrics have a timestamp before sending them across
C. Use exponential backoff in your request
D. Enable encryption for the requests
Answer: C. The main reason for such errors is that throttling is occurring when many requests are sent via API calls. The best way to mitigate this is to stagger the rate at which you make the API calls. In addition to simple retries, each AWS SDK implements exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values and should be set based on the operation being performed, as well as other local factors, such as network latency. Reference: Error Retries and Exponential Backoff in AWS
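The backoff idea from the answer above can be sketched in Python; the `fn` and `is_throttled` callables are placeholders for your SDK call and its throttling check, not an AWS API:

```python
import random

def backoff_delays(max_retries=5, base=0.1, cap=5.0):
    """Yield progressively longer waits (full-jitter exponential backoff),
    capped at a maximum delay interval."""
    for attempt in range(max_retries):
        yield random.uniform(0, min(cap, base * 2 ** attempt))

def call_with_backoff(fn, is_throttled, max_retries=5):
    """Retry `fn` while its result looks throttled, up to max_retries."""
    last = None
    for delay in backoff_delays(max_retries):
        result = fn()
        if not is_throttled(result):
            return result
        last = result
        # time.sleep(delay) would go here in real code
    return last
```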
Q21: You have been instructed to use the CodePipeline service for the CI/CD automation in your company. Due to security reasons , the resources that would be part of the deployment are placed in another account. Which of the following steps need to be carried out to accomplish this deployment? Choose 2 answers from the options given below
A. Define a customer master key in KMS
B. Create a reference CodePipeline instance in the other account
C. Add a cross account role
D. Embed the access keys in the CodePipeline process
Answer: A. and C. You might want to create a pipeline that uses resources created or managed by another AWS account. For example, you might want to use one account for your pipeline and another for your AWS CodeDeploy resources. To do so, you must create an AWS Key Management Service (AWS KMS) key to use, add the key to the pipeline, and set up account policies and roles to enable cross-account access. Reference: Create a Pipeline in CodePipeline That Uses Resources from Another AWS Account
Q22: You are planning on deploying an application to the worker role in Elastic Beanstalk. Moreover, this worker application is going to run the periodic tasks. Which of the following is a must have as part of the deployment?
A. An appspec.yaml file
B. A cron.yaml file
C. A cron.config file
D. An appspec.json file
Answer: B. Create an Application Source Bundle When you use the AWS Elastic Beanstalk console to deploy a new application or an application version, you’ll need to upload a source bundle. Your source bundle must meet the following requirements: Consist of a single ZIP file or WAR file (you can include multiple WAR files inside your ZIP file) Not exceed 512 MB Not include a parent folder or top-level directory (subdirectories are fine) If you want to deploy a worker application that processes periodic background tasks, your application source bundle must also include a cron.yaml file. For more information, see Periodic Tasks.
Q23: An application needs to make use of an SQS queue for working with messages. An SQS queue has been created with the default settings. The application needs 60 seconds to process each message. Which of the following steps needs to be carried out by the application?
A. Change the VisibilityTimeout for each message and then delete the message after processing is completed
B. Delete the message and change the visibility timeout.
C. Process the message, change the visibility timeout, then delete the message
D. Process the message and delete the message
Answer: A. If the SQS queue is created with the default settings, then the default visibility timeout is 30 seconds. Since the application needs more time for processing, you first need to change the timeout and delete the message after it is processed. Reference: Amazon SQS Visibility Timeout
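That flow might be sketched as follows; the SQS client is passed in so the same logic would work against a boto3 SQS client, and the queue URL and handler are placeholders:

```python
def process_one_message(sqs, queue_url, handler, processing_seconds=60):
    """Receive a message, extend its visibility past the 30s default,
    process it, then delete it (option A above)."""
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    messages = resp.get("Messages", [])
    if not messages:
        return None
    msg = messages[0]
    # Extend the visibility timeout so the message is not redelivered to
    # another consumer while we are still working on it.
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=processing_seconds,
    )
    result = handler(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
    return result
```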
Q24: An AWS CodeDeploy deployment fails to start and generates the following error code: "HEALTH_CONSTRAINTS_INVALID". Which of the following can be used to eliminate this error?
A. Make sure the minimum number of healthy instances is equal to the total number of instances in the deployment group.
B. Increase the number of healthy instances required during deployment
C. Reduce number of healthy instances required during deployment
D. Make sure the number of healthy instances is equal to the specified minimum number of healthy instances.
Answer: C. AWS CodeDeploy generates the "HEALTH_CONSTRAINTS_INVALID" error when the minimum number of healthy instances defined in the deployment group is not available during deployment. To mitigate this error, make sure the required number of healthy instances is available during deployments. Reference: Error Codes for AWS CodeDeploy
B. A specific snapshot of all of your API’s settings, resources, and methods
C. A specific snapshot of your API’s resources
D. A specific snapshot of your API’s resources and methods
Answer: D. AWS API Gateway Deployments are a snapshot of all the resources and methods of your API and their configuration. Reference: Deploying a REST API in Amazon API Gateway
Q29: A SWF workflow task or task execution can live up to how long?
A. 1 Year
B. 14 days
C. 24 hours
D. 3 days
Answer: A. 1 Year Each workflow execution can run for a maximum of 1 year. Each workflow execution history can grow up to 25,000 events. If your use case requires you to go beyond these limits, you can use features Amazon SWF provides to continue executions and structure your applications using child workflow executions. Reference: Amazon SWF FAQs
Q30: With AWS Step Functions, all the work in your state machine is done by tasks. These tasks perform work by using what types of things? (Choose the best 3 answers)
A. An AWS Lambda Function Integration
B. Passing parameters to API actions of other services
In Amazon SWF, how are the decisions that determine the flow of a workflow made?
A. A decider program that is written in the language of the developer’s choice
B. A visual workflow created in the SWF visual workflow editor
C. A JSON-defined state machine that contains states within it to select the next step to take
D. SWF outsources all decisions to human deciders through the AWS Mechanical Turk service.
Answer: A. SWF allows the developer to write their own application logic to make decisions and determine how to evaluate incoming data. Q: What programming conveniences does Amazon SWF provide to write applications? Like other AWS services, Amazon SWF provides a core SDK for the web service APIs. Additionally, Amazon SWF offers an SDK called the AWS Flow Framework that enables you to develop Amazon SWF-based applications quickly and easily. AWS Flow Framework abstracts the details of task-level coordination with familiar programming constructs. While running your program, the framework makes calls to Amazon SWF, tracks your program’s execution state using the execution history kept by Amazon SWF, and invokes the relevant portions of your code at the right times. By offering an intuitive programming framework to access Amazon SWF, AWS Flow Framework enables developers to write entire applications as asynchronous interactions structured in a workflow. For more details, please see What is the AWS Flow Framework? Reference:
Q34: CodePipeline pipelines are workflows that deal with stages, actions, transitions, and artifacts. Which of the following statements is true about these concepts?
A. Stages contain at least two actions
B. Artifacts are never modified or iterated on when used inside of CodePipeline
C. Stages contain at least one action
D. Actions will have a deployment artifact as either an input, an output, or both
Q35: When deploying a simple Python web application with Elastic Beanstalk which of the following AWS resources will be created and managed for you by Elastic Beanstalk?
A. An Elastic Load Balancer
B. An S3 Bucket
C. A Lambda Function
D. An EC2 instance
Answer: A. B. and D. AWS Elastic Beanstalk uses proven AWS features and services, such as Amazon EC2, Amazon RDS, Elastic Load Balancing, Auto Scaling, Amazon S3, and Amazon SNS, to create an environment that runs your application. The current version of AWS Elastic Beanstalk uses the Amazon Linux AMI or the Windows Server 2012 R2 AMI. Reference: AWS Elastic Beanstalk FAQs
A. Deploy and scale web applications and services developed with a supported platform
B. Deploy and scale serverless applications
C. Deploy and scale applications based purely on EC2 instances
D. Manage the deployment of all AWS infrastructure resources of your AWS applications
Answer: A. Who should use AWS Elastic Beanstalk? Those who want to deploy and manage their applications within minutes in the AWS Cloud. You don’t need experience with cloud computing to get started. AWS Elastic Beanstalk supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker web applications. Reference:
Q41: How does using ElastiCache help to improve database performance?
A. It can store petabytes of data
B. It provides faster internet speeds
C. It can store the results of frequent or highly-taxing queries
D. It uses read replicas
Answer: C. With ElastiCache, customers get all of the benefits of a high-performance, in-memory cache with less of the administrative burden involved in launching and managing a distributed cache. The service makes setup, scaling, and cluster failure handling much simpler than in a self-managed cache deployment. Reference: Amazon ElastiCache
Q42: Which of the following best describes the Lazy Loading caching strategy?
A. Every time the underlying database is written to or updated the cache is updated with the new information.
B. Every miss to the cache is counted and when a specific number is reached a full copy of the database is migrated to the cache
C. A specific amount of time is set before the data in the cache is marked as expired. After expiration, a request for expired data will be made through to the backing database.
D. Data is added to the cache when a cache miss occurs (when there is no data in the cache and the request must go to the database for that data)
Answer: D. Amazon ElastiCache is an in-memory key/value store that sits between your application and the data store (database) that it accesses. Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests the data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. Reference: Lazy Loading
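The lazy loading strategy from option D reduces to a few lines; here a plain dict stands in for the cache and a dict stands in for the database, purely for illustration:

```python
cache = {}

def get_user(user_id, db):
    """Lazy loading: only populate the cache on a miss."""
    if user_id in cache:        # cache hit: serve from memory
        return cache[user_id]
    record = db[user_id]        # cache miss: go to the database
    cache[user_id] = record     # write it back for next time
    return record
```

In a real deployment the dict would be replaced by an ElastiCache (Redis or Memcached) client, and entries would typically be written with a TTL so stale data eventually expires.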
Q45: You have just set up a push notification service to send a message to an app installed on a device with the Apple Push Notification Service. It seems to work fine. You now want to send a message to an app installed on devices for multiple platforms, those being the Apple Push Notification Service(APNS) and Google Cloud Messaging for Android (GCM). What do you need to do first for this to be successful?
A. Request Credentials from Mobile Platforms, so that each device has the correct access control policies to access the SNS publisher
B. Create a Platform Application Object which will connect all of the mobile devices with your app to the correct SNS topic.
C. Request a Token from Mobile Platforms, so that each device has the correct access control policies to access the SNS publisher.
D. Get a set of credentials in order to be able to connect to the push notification service you are trying to setup.
Answer: D. To use Amazon SNS mobile push notifications, you need to establish a connection with a supported push notification service. This connection is established using a set of credentials. Reference: Add Device Tokens or Registration IDs
Q46: SNS message can be sent to different kinds of endpoints. Which of these is NOT currently a supported endpoint?
A. Slack Messages
B. SMS (text message)
C. HTTP/HTTPS
D. AWS Lambda
Answer: A. Slack messages are not directly integrated with SNS, though theoretically you could write a service to push messages to Slack from SNS. Reference:
Q47: Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?
A. Set the imaging queue VisibilityTimeout attribute to 20 seconds
B. Set the imaging queue MessageRetentionPeriod attribute to 20 seconds
C. Set the imaging queue ReceiveMessageWaitTimeSeconds Attribute to 20 seconds
D. Set the DelaySeconds parameter of a message to 20 seconds
Answer: C. Enabling long polling reduces the amount of false and empty responses from SQS service. It also reduces the number of calls that need to be made to a queue by staying connected to the queue until all messages have been received or until timeout. In order to enable long polling the ReceiveMessageWaitTimeSeconds attribute needs to be set to a number greater than 0. If it is set to 0 then short polling is enabled. Reference: Amazon SQS Long Polling
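Enabling long polling is just a queue attribute change. The attribute dict below is what matters; the boto3 call and the queue URL in it are commented out and hypothetical:

```python
# Any value greater than 0 (up to 20) enables long polling on the queue.
attributes = {"ReceiveMessageWaitTimeSeconds": "20"}

# import boto3
# boto3.client("sqs").set_queue_attributes(
#     QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/imaging-queue",
#     Attributes=attributes)

# Long polling can also be requested per call via receive_message's
# WaitTimeSeconds parameter.
print(attributes)
```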
Q48: Which of the following statements about SQS standard queues are true?
A. Message order can be indeterminate – you’re not guaranteed to get messages in the same order they were sent in
B. Messages will be delivered exactly once and messages will be delivered in First in, First out order
C. Messages will be delivered exactly once and message delivery order is indeterminate
D. Messages can be delivered one or more times
Answer: A. and D. A standard queue makes a best effort to preserve the order of messages, but more than one copy of a message might be delivered out of order. If your system requires that order be preserved, we recommend using a FIFO (First-In-First-Out) queue or adding sequencing information in each message so you can reorder the messages when they’re received. Reference: Amazon SQS Standard Queues
Q49: Which of the following is true if long polling is enabled?
A. If long polling is enabled, then each poll only polls a subset of SQS servers; in order for all messages to be received, polling must continuously occur
B. The reader will listen to the queue until timeout
C. Increases costs because each request lasts longer
D. The reader will listen to the queue until a message is available or until timeout
Q50: When dealing with session state in EC2-based applications using Elastic Load Balancers, which option is generally thought of as the best practice for managing user sessions?
A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
B. Permanently assigning users to specific instances and always routing their traffic to those instances
C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance
Q52: Your application must write to an SQS queue. Your corporate security policies require that AWS credentials are always encrypted and are rotated at least once a week. How can you securely provide credentials that allow your application to write to the queue?
A. Have the application fetch an access key from an Amazon S3 bucket at run time.
B. Launch the application’s Amazon EC2 instance with an IAM role.
C. Encrypt an access key in the application source code.
D. Enroll the instance in an Active Directory domain and use AD authentication.
Answer: B. IAM roles are based on temporary security tokens, so they are rotated automatically. Keys in the source code cannot be rotated (and are a very bad idea). It’s impossible to retrieve credentials from an S3 bucket if you don’t already have credentials for that bucket. Active Directory authorization will not grant access to AWS resources. Reference: AWS IAM FAQs
Q53: Your web application reads an item from your DynamoDB table, changes an attribute, and then writes the item back to the table. You need to ensure that one process doesn’t overwrite a simultaneous change from another process. How can you ensure concurrency?
A. Implement optimistic concurrency by using a conditional write.
B. Implement pessimistic concurrency by using a conditional write.
C. Implement optimistic concurrency by locking the item upon read.
D. Implement pessimistic concurrency by locking the item upon read.
Answer: A. Optimistic concurrency depends on checking a value upon save to ensure that it has not changed. Pessimistic concurrency prevents a value from changing by locking the item or row in the database. DynamoDB does not support item locking, and conditional writes are perfect for implementing optimistic concurrency. Reference: Optimistic Locking With Version Number
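The conditional-write pattern from answer A can be sketched in plain Python (an in-memory stand-in that mimics DynamoDB's ConditionExpression on a version attribute; the table and function names are illustrative, not a real DynamoDB API):

```python
# Minimal in-memory sketch of optimistic concurrency via a conditional write.

class ConditionalCheckFailed(Exception):
    """Stands in for DynamoDB's ConditionalCheckFailedException."""

table = {"item-1": {"price": 100, "version": 1}}

def conditional_put(key, new_item, expected_version):
    """Write only if the stored version still matches what we read."""
    if table[key]["version"] != expected_version:
        raise ConditionalCheckFailed("item changed since it was read")
    new_item["version"] = expected_version + 1
    table[key] = new_item

# Process A reads the item at version 1, then writes successfully.
read = dict(table["item-1"])
conditional_put("item-1", {"price": 90}, expected_version=read["version"])
assert table["item-1"] == {"price": 90, "version": 2}

# Process B, which also read version 1, now fails instead of silently
# overwriting A's change -- that is the optimistic-concurrency guarantee.
try:
    conditional_put("item-1", {"price": 80}, expected_version=1)
    raise AssertionError("should have raised")
except ConditionalCheckFailed:
    pass
```

In real DynamoDB the same check is expressed with a `ConditionExpression` such as `version = :expected` on the `PutItem` call.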
Answer: C. The intrinsic function Fn::FindInMap returns the value corresponding to keys in a two-level map that is declared in the Mappings section. You can use the Fn::FindInMap function to return a named value based on a specified key. The following example template contains an Amazon EC2 resource whose ImageId property is assigned by the FindInMap function. The FindInMap function specifies the key as the region where the stack is created (using the AWS::Region pseudo parameter) and HVM64 as the name of the value to map to.
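The pattern described above can be sketched as a minimal template fragment (the AMI IDs are placeholders for illustration, not recommendations):

```yaml
Mappings:
  RegionMap:
    us-east-1:
      HVM64: ami-0ff8a91507f77f867   # illustrative AMI ID
    eu-west-1:
      HVM64: ami-047bb4163c506cd98   # illustrative AMI ID
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      # Look up the HVM64 value for whichever region the stack runs in.
      ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", HVM64]
      InstanceType: t2.micro
```

Because AWS::Region is resolved at stack-creation time, the same template deploys the correct AMI in every mapped region.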
Q56: Your application triggers events that must be delivered to all your partners. The exact partner list is constantly changing: some partners run a highly available endpoint, and other partners’ endpoints are online only a few hours each night. Your application is mission-critical, and communication with your partners must not introduce delay in its operation. A delay in delivering the event to one partner cannot delay delivery to other partners.
What is an appropriate way to code this?
A. Implement an Amazon SWF task to deliver the message to each partner. Initiate an Amazon SWF workflow execution.
B. Send the event as an Amazon SNS message. Instruct your partners to create an HTTP. Subscribe their HTTP endpoint to the Amazon SNS topic.
C. Create one SQS queue per partner. Iterate through the queues and write the event to each one. Partners retrieve messages from their queue.
D. Send the event as an Amazon SNS message. Create one SQS queue per partner that subscribes to the Amazon SNS topic. Partners retrieve messages from their queue.
Answer: D. There are two challenges here: the command must be “fanned out” to a variable pool of partners, and your app must be decoupled from the partners because they are not highly available. Sending the command as an SNS message achieves the fan-out via its publication/subscribe model, and using an SQS queue for each partner decouples your app from the partners. Writing the message to each queue directly would cause more latency for your app and would require your app to monitor which partners were active. It would be difficult to write an Amazon SWF workflow for a rapidly changing set of partners.
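The fan-out behavior that makes answer D work can be shown with a toy in-memory model (the Topic class and partner names are purely illustrative; in AWS, SNS and SQS provide this machinery):

```python
# Toy illustration of the SNS -> per-partner SQS fan-out pattern: one
# publish places an independent copy of the event on every partner's
# queue, so a slow or offline partner never delays the others.
from collections import deque

class Topic:
    def __init__(self):
        self.subscriptions = []   # one queue per partner

    def subscribe(self, q):
        self.subscriptions.append(q)

    def publish(self, message):
        for q in self.subscriptions:
            q.append(message)     # each partner gets its own copy

topic = Topic()
partner_queues = {name: deque() for name in ("acme", "globex", "initech")}
for q in partner_queues.values():
    topic.subscribe(q)

topic.publish({"event": "order-created", "id": 7})

# Every partner's queue holds its own copy; partners drain at their own pace.
assert all(len(q) == 1 for q in partner_queues.values())
assert partner_queues["acme"].popleft()["id"] == 7
assert len(partner_queues["globex"]) == 1   # unaffected by acme's consumption
```

Adding or removing a partner is just a subscription change; the publishing application never has to know the current partner list.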
Q57: You have a three-tier web application (web, app, and data) in a single Amazon VPC. The web and app tiers each span two Availability Zones, are in separate subnets, and sit behind ELB Classic Load Balancers. The data tier is a Multi-AZ Amazon RDS MySQL database instance in database subnets. When you call the database tier from your app tier instances, you receive a timeout error. What could be causing this?
A. The IAM role associated with the app tier instances does not have rights to the MySQL database.
B. The security group for the Amazon RDS instance does not allow traffic on port 3306 from the app instances.
C. The Amazon RDS database instance does not have a public IP address.
D. There is no route defined between the app tier and the database tier in the Amazon VPC.
Answer: B. Security groups block all inbound traffic by default, so a misconfigured group on the Amazon RDS instance can cause a timeout error. Database user credentials, not IAM, control access to the MySQL database itself. All subnets in an Amazon VPC have routes to all other subnets, and internal traffic within an Amazon VPC does not require public IP addresses.
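A hedged CloudFormation sketch of the fix for answer B; the names `VpcId` and `AppTierSecurityGroup` are assumed parameters for illustration, not values from the question:

```yaml
Resources:
  DatabaseSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow MySQL only from the app tier
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        # Permit port 3306 traffic whose source is the app tier's
        # security group, rather than opening the port to a CIDR range.
        - IpProtocol: tcp
          FromPort: 3306
          ToPort: 3306
          SourceSecurityGroupId: !Ref AppTierSecurityGroup
```

Referencing the app tier's security group as the source means new app instances are covered automatically as they scale in and out.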
Q58: What type of block cipher does Amazon S3 offer for server side encryption?
A. RC5
B. Blowfish
C. Triple DES
D. Advanced Encryption Standard
Answer: D. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.
Q59: You have written an application that uses the Elastic Load Balancing service to spread traffic to several web servers. Your users complain that they are sometimes forced to log in again in the middle of using your application, after they have already logged in. This is not behavior you have designed. What is a possible solution to prevent this from happening?
A. Use instance memory to save session state.
B. Use instance storage to save session state.
C. Use EBS to save session state
D. Use ElastiCache to save session state.
E. Use Glacier to save session state.
Answer: D. You can cache a variety of objects using the service, from the content in persistent data stores (such as Amazon RDS, DynamoDB, or self-managed databases hosted on EC2) to dynamically generated web pages (with Nginx for example), or transient session data that may not require a persistent backing store. You can also use it to implement high-frequency counters to deploy admission control in high volume web applications.
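A minimal sketch of the externalized-session pattern, using a pure-Python stand-in for ElastiCache (a real deployment would use a Redis or Memcached client, but the idea is identical: any instance behind the ELB can look up the session by ID, so no user is forced to log in again when routed to a different server):

```python
# In-memory stand-in for a shared session cache with a TTL.
import time

class SessionStore:
    def __init__(self, ttl_seconds=1800):
        self.ttl = ttl_seconds
        self._data = {}            # session_id -> (expires_at, payload)

    def put(self, session_id, payload):
        self._data[session_id] = (time.monotonic() + self.ttl, payload)

    def get(self, session_id):
        entry = self._data.get(session_id)
        if entry is None or entry[0] < time.monotonic():
            return None            # missing or expired, like a cache TTL
        return entry[1]

store = SessionStore(ttl_seconds=1800)
store.put("sess-abc", {"user": "alice", "logged_in": True})

# Any instance handling the next request sees the same session.
assert store.get("sess-abc")["user"] == "alice"
assert store.get("sess-unknown") is None
```

Because the session lives outside the web servers, instances can be added, removed, or replaced without logging anyone out.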
Q60: You are writing to a DynamoDB table and receive the following exception: "ProvisionedThroughputExceededException". However, according to your CloudWatch metrics for the table, you are not exceeding your provisioned throughput. What could be an explanation for this?
A. You haven’t provisioned enough DynamoDB storage instances
B. You’re exceeding your capacity on a particular Range Key
C. You’re exceeding your capacity on a particular Hash Key
D. You’re exceeding your capacity on a particular Sort Key
E. You haven’t configured DynamoDB Auto Scaling triggers
Answer: C. The primary key that uniquely identifies each item in a DynamoDB table can be simple (a partition key only) or composite (a partition key combined with a sort key). Generally speaking, you should design your application for uniform activity across all logical partition keys in the table and its secondary indexes. You can determine the access patterns that your application requires, and estimate the total read capacity units and write capacity units that each table and secondary index requires.
As traffic starts to flow, DynamoDB automatically supports your access patterns using the throughput you have provisioned, as long as the traffic against a given partition key does not exceed 3000 read capacity units or 1000 write capacity units.
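The per-partition ceilings quoted above explain the symptom in the question, and can be sketched as a simple hot-key check (the helper name and example keys are illustrative; the 3000/1000 figures come from the text above):

```python
# Why answer C happens: throughput is enforced per partition key, so one
# hot key can be throttled even when table-wide metrics look healthy.
from collections import Counter

PARTITION_READ_LIMIT = 3000   # read capacity units per partition key
PARTITION_WRITE_LIMIT = 1000  # write capacity units per partition key

def hot_keys(writes_per_key):
    """Return partition keys whose write rate exceeds the per-key ceiling."""
    return [k for k, wcu in writes_per_key.items()
            if wcu > PARTITION_WRITE_LIMIT]

# One celebrity key absorbs most traffic while the table total stays modest.
observed = Counter({"user#popular": 1500, "user#a": 40, "user#b": 55})
assert hot_keys(observed) == ["user#popular"]
```

The table-level CloudWatch metric would show roughly 1,595 WCU of traffic, well under a large provisioned total, yet writes to "user#popular" would still be throttled.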
Q61: Which DynamoDB limits can be raised by contacting AWS support?
A. The number of hash keys per account
B. The maximum storage used per account
C. The number of tables per account
D. The number of local secondary indexes per account
E. The number of provisioned throughput units per account
Answer: C. and E.
For any AWS account, there is an initial limit of 256 tables per region. AWS places some default limits on the throughput you can provision. These are the limits unless you request a higher amount. To request a service limit increase, see https://aws.amazon.com/support. Reference: Limits in DynamoDB
Q62: AWS CodeBuild allows you to compile your source code, run unit tests, and produce deployment artifacts by:
A. Allowing you to provide an Amazon Machine Image to take these actions within
B. Allowing you to select an Amazon Machine Image and provide a User Data bootstrapping script to prepare an instance to take these actions within
C. Allowing you to provide a container image to take these actions within
D. Allowing you to select from pre-configured environments to take these actions within
Answer: C. and D. You can provide your own custom container image to build your deployment artifacts, or select from pre-configured build environments. You never pass a specific AMI to CodeBuild, though you can provide a custom Docker image which you could essentially "bootstrap" for the purposes of your build. Reference: AWS CodeBuild FAQs
Q63: Which of the following will not cause a CloudFormation stack deployment to rollback?
A. The template contains invalid JSON syntax
B. An AMI specified in the template exists in a different region than the one in which the stack is being deployed.
C. A subnet specified in the template does not exist
D. The template specifies an instance-store backed AMI and an incompatible EC2 instance type.
Answer: A. Invalid JSON syntax will cause an error message during template validation. Until the syntax is fixed, the template will not be able to deploy resources, so there will be neither a need nor an opportunity to roll back. Reference: AWS CloudFormation FAQs
Q64: Your team is using CodeDeploy to deploy an application which uses secure parameters that are stored in the AWS Systems Manager Parameter Store. Which two options below must be completed so CodeDeploy can deploy the application?
A. Use ssm get-parameters with the --with-decryption option
B. Add permissions using AWS access keys
C. Add permissions using AWS IAM role
D. Use ssm get-parameters with the --with-no-decryption option
Answer: A. and C. The instance needs an AWS IAM role with permission to read the parameters, and SecureString values must be retrieved with ssm get-parameters using the --with-decryption option so they are returned decrypted.
Q65: A corporate web application is deployed within an Amazon VPC and is connected to the corporate data center via an IPsec VPN. The application must authenticate against the on-premises LDAP server. Once authenticated, logged-in users can only access an S3 keyspace specific to the user. Which of the solutions below meet these requirements? (Choose 2)
A. The application authenticates against LDAP, and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM Role. The application can use the temporary credentials to access the S3 keyspace.
B. Develop an identity broker which authenticates against LDAP, and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 keyspace
C. Develop an identity broker which authenticates against IAM Security Token Service to assume an IAM Role to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the app
D. The application authenticates against LDAP. The application then calls the IAM Security Service to login to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate S3 bucket.
Answer: A. and B. The question clearly says "authenticate against LDAP": temporary credentials come from STS, and federated user credentials come from the identity broker. Reference: AWS IAM FAQs
Q67: When users are signing in to your application using Cognito, what do you need to do to make sure if the user has compromised credentials, they must enter a new password?
A. Create a user pool in Cognito
B. Block use for “Compromised credential” in the Basic security section
C. Block use for “Compromised credential” in the Advanced security section
D. Use secure remote password
Answer: A. and C. Amazon Cognito can detect if a user’s credentials (user name and password) have been compromised elsewhere. This can happen when users reuse credentials at more than one site, or when they use passwords that are easy to guess.
From the Advanced security page in the Amazon Cognito console, you can choose whether to allow, or block the user if compromised credentials are detected. Blocking requires users to choose another password. Choosing Allow publishes all attempted uses of compromised credentials to Amazon CloudWatch. For more information, see Viewing Advanced Security Metrics.
You can also choose whether Amazon Cognito checks for compromised credentials during sign-in, sign-up, and password changes.
Note: Currently, Amazon Cognito doesn't check for compromised credentials for sign-in operations with the Secure Remote Password (SRP) flow, which doesn't send the password during sign-in. Sign-ins that use the AdminInitiateAuth API with the ADMIN_NO_SRP_AUTH flow and the InitiateAuth API with the USER_PASSWORD_AUTH flow are checked for compromised credentials.
Q68: You work in a large enterprise that is currently evaluating options to migrate your 27 GB Subversion code base. Which of the following options is the best choice for your organization?
A. AWS CodeHost
B. AWS CodeCommit
C. AWS CodeStart
D. None of these
Answer: D. None of these. While CodeCommit is a good option for Git repositories, it cannot host Subversion source control.
Q69: You are on a development team and you need to migrate your Spring Application over to AWS. Your team is looking to build, modify, and test new versions of the application. What AWS services could help you migrate your app?
A. Elastic Beanstalk
B. SQS
C. Ec2
D. AWS CodeDeploy
Answer: A., C., and D. Elastic Beanstalk can deploy and scale Java web applications with minimal configuration, Amazon EC2 can host the application on your own AWS infrastructure, and AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, or serverless Lambda functions.
Q70: You are a developer responsible for managing a high volume API running in your company’s datacenter. You have been asked to implement a similar API, but one that has potentially higher volume. And you must do it in the most cost effective way, using as few services and components as possible. The API stores and fetches data from a key value store. Which services could you utilize in AWS?
A. DynamoDB
B. Lambda
C. API Gateway
D. EC2
Answer: A. and C. NoSQL databases like DynamoDB are designed for key value usage. DynamoDB can also handle incredible volumes and is cost effective. AWS API Gateway makes it easy for developers to create, publish, maintain, monitor, and secure APIs.
Q72: AWS X-Ray was recently implemented inside a service that you work on. Several weeks later, after a new marketing push, the service started seeing a large spike in traffic. You have been tasked with investigating a few issues that have come up, but when you review the X-Ray data you can't find enough information to draw conclusions, so you decide to:
A. Start passing in the X-Amzn-Trace-Id: True HTTP header from your upstream requests
B. Refactor the service to include additional calls to the X-Ray API using an AWS SDK
C. Update the sampling algorithm to increase the sample rate and instrument X-Ray to collect more pertinent information
D. Update your application to use the custom API Gateway TRACE method to send in data
Answer: C. This is a good way to solve the problem – by customizing the sampling so that you can get more relevant information.
Q75: Which of the following is the right sequence that gets called in CodeDeploy when you use Lambda hooks in an EC2/On-Premise Deployment?
A. BeforeInstall, AfterInstall, ValidateService, ApplicationStart
B. BeforeInstall, AfterInstall, ApplicationStop, ApplicationStart
C. BeforeInstall, ApplicationStop, ValidateService, ApplicationStart
D. ApplicationStop, BeforeInstall, AfterInstall, ApplicationStart
Answer: D. In an in-place EC2/On-Premises deployment, including the rollback of an in-place deployment, event hooks run in the following order: ApplicationStop, DownloadBundle, BeforeInstall, Install, AfterInstall, ApplicationStart, ValidateService.
Note: An AWS Lambda hook is one Lambda function specified with a string on a new line after the name of the lifecycle event. Each hook is executed once per deployment. The following are descriptions of the lifecycle events where you can run a hook during an Amazon ECS deployment:
BeforeInstall – Use to run tasks before the replacement task set is created. One target group is associated with the original task set. If an optional test listener is specified, it is associated with the original task set. A rollback is not possible at this point.
AfterInstall – Use to run tasks after the replacement task set is created and one of the target groups is associated with it. If an optional test listener is specified, it is associated with the original task set. The results of a hook function at this lifecycle event can trigger a rollback.
AfterAllowTestTraffic – Use to run tasks after the test listener serves traffic to the replacement task set. The results of a hook function at this point can trigger a rollback.
BeforeAllowTraffic – Use to run tasks after the second target group is associated with the replacement task set, but before traffic is shifted to the replacement task set. The results of a hook function at this lifecycle event can trigger a rollback.
AfterAllowTraffic – Use to run tasks after the second target group serves traffic to the replacement task set. The results of a hook function at this lifecycle event can trigger a rollback.
In an Amazon ECS deployment, event hooks run in the order BeforeInstall, AfterInstall, AfterAllowTestTraffic, BeforeAllowTraffic, AfterAllowTraffic.
For in-place deployments, the six hooks related to blocking and allowing traffic apply only if you specify a Classic Load Balancer, Application Load Balancer, or Network Load Balancer from Elastic Load Balancing in the deployment group. Note: The Start, DownloadBundle, Install, and End events in the deployment cannot be scripted, which is why they appeared in gray in the original diagram. However, you can edit the 'files' section of the AppSpec file to specify what's installed during the Install event.
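For an EC2/On-Premises in-place deployment, the hook order in answer D maps directly onto the hooks section of an AppSpec file. A minimal sketch, assuming a Linux deployment; the script paths are illustrative, not from the question:

```yaml
version: 0.0
os: linux
files:
  - source: /app
    destination: /var/www/app
hooks:
  ApplicationStop:            # runs first, before the new revision downloads
    - location: scripts/stop_server.sh
      timeout: 60
  BeforeInstall:              # prepare the instance for the new files
    - location: scripts/install_dependencies.sh
      timeout: 300
  AfterInstall:               # configure the application after files are copied
    - location: scripts/configure_app.sh
  ApplicationStart:           # bring the service back up
    - location: scripts/start_server.sh
  ValidateService:            # final health check before the deployment succeeds
    - location: scripts/health_check.sh
```

The unscripted Start, DownloadBundle, Install, and End events run between these hooks and are managed by CodeDeploy itself.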
Q76: Describe the process of registering a mobile device with SNS push notification service using GCM.
A. Receive Registration ID and token for each mobile device. Then, register the mobile application with Amazon SNS, and pass the GCM token credentials to Amazon SNS
B. Pass device token to SNS to create mobile subscription endpoint for each mobile device, then request the device token from each mobile device. SNS then communicates on your behalf to the GCM service
C. None of these are correct
D. Submit GCM notification credentials to Amazon SNS, then receive the Registration ID for each mobile device. After that, pass the device token to SNS, and SNS then creates a mobile subscription endpoint for each device and communicates with the GCM service on your behalf
Answer: D. When you first register an app and mobile device with a notification service, such as Apple Push Notification Service (APNS) or Google Cloud Messaging for Android (GCM), device tokens or registration IDs are returned from the notification service. When you add the device tokens or registration IDs to Amazon SNS, they are used, together with the PlatformApplicationArn, to create an endpoint for the app and device. When Amazon SNS creates the endpoint, an EndpointArn is returned. The EndpointArn is how Amazon SNS knows which app and mobile device to send the notification message to.
Q77: You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site. At some point you find out that other sites have been linking to the photos on your site, causing loss to your business. What is an effective method to mitigate this?
A. Store photos on an EBS volume of the web server.
B. Block the IPs of the offending websites in Security Groups.
C. Remove public read access and use signed URLs with expiry dates.
D. Use CloudFront distributions for static content.
Answer: C. This solves the issue, but does require you to modify your website. Your website already uses S3, so it doesn’t require a lot of changes. See the docs for details: http://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
CloudFront on its own doesn’t prevent unauthorized access and requires you to add a whole new layer to your stack (which may make sense anyway). You can serve private content, but you’d have to use signed URLs or similar mechanism. Here are the docs: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
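The signed-URL-with-expiry idea from answer C can be sketched with only the standard library (this is a conceptual stand-in, not the AWS SigV4 scheme the SDK actually uses; the key and paths are made up for illustration):

```python
# Conceptual model of an expiring signed URL: the URL carries an expiry
# timestamp and an HMAC signature over the path + expiry, so it cannot be
# hot-linked indefinitely or tampered with.
import hashlib
import hmac

SECRET = b"demo-signing-key"   # illustrative; real SDKs derive signing keys

def sign_url(path, expires_at):
    msg = f"{path}?Expires={expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?Expires={expires_at}&Signature={sig}"

def verify_url(url, now):
    path, _, query = url.partition("?")
    params = dict(p.split("=", 1) for p in query.split("&"))
    expires_at = int(params["Expires"])
    expected = sign_url(path, expires_at).rpartition("Signature=")[2]
    if not hmac.compare_digest(expected, params["Signature"]):
        return False               # tampered path or expiry
    return now <= expires_at       # reject expired links

url = sign_url("/photos/cat.jpg", expires_at=1_700_000_000)
assert verify_url(url, now=1_699_999_999)          # still valid
assert not verify_url(url, now=1_700_000_001)      # expired
assert not verify_url(url.replace("cat", "dog"), now=1_699_999_999)  # tampered
```

With S3, the SDK's pre-signed URL call produces the equivalent: a time-limited link that works without public read access on the bucket, so other sites cannot hot-link your photos once a URL expires.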
Q78: How can you control access to the API Gateway in your environment?
A. Cognito User Pools
B. Lambda Authorizers
C. API Methods
D. API Stages
Answer: A. and B. As an alternative to using IAM roles and policies or Lambda authorizers (formerly known as custom authorizers), you can use an Amazon Cognito user pool to control who can access your API in Amazon API Gateway. Reference: Control Access to a REST API Using Amazon Cognito User Pools as Authorizer
To use an Amazon Cognito user pool with your API, you must first create an authorizer of the COGNITO_USER_POOLS type and then configure an API method to use that authorizer. After the API is deployed, the client must first sign the user in to the user pool, obtain an identity or access token for the user, and then call the API method with one of the tokens, which is typically set in the request's Authorization header. The API call succeeds only if the required token is supplied and valid; otherwise, the client isn't authorized to make the call.
The identity token is used to authorize API calls based on identity claims of the signed-in user. The access token is used to authorize API calls based on the custom scopes of specified access-protected resources. For more information, see Using Tokens with User Pools and Resource Server and Custom Scopes.
Q80: Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?
A. Set the imaging queue MessageRetentionPeriod attribute to 20 seconds.
B. Set the imaging queue ReceiveMessageWaitTimeSeconds attribute to 20 seconds.
C. Set the imaging queue VisibilityTimeout attribute to 20 seconds.
D. Set the DelaySeconds parameter of a message to 20 seconds.
Answer: B. ReceiveMessageWaitTimeSeconds, when set to greater than zero, enables long polling. Long polling allows the Amazon SQS service to wait until a message is available in the queue before sending a response. Short polling continuously polls a queue and can return empty responses. Enabling long polling reduces the number of poll requests and empty responses. Reference: AWS SQS Long Polling
81: You’re using CloudFormation templates to build out staging environments. What section of the CloudFormation would you edit in order to allow the user to specify the PEM key-name at start time?
A. Resources Section
B. Parameters Section
C. Mappings Section
D. Declaration Section
Answer: B.
The Parameters section of a CloudFormation template lets you accept user input when launching the template, and you can reference that input as a variable throughout the template. Other examples include asking the user launching the template to provide domain admin passwords, instance size, PEM key name, region, and other dynamic options.
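A minimal sketch of the Parameters section for the scenario above (the resource names and AMI ID are illustrative placeholders):

```yaml
Parameters:
  KeyName:
    Description: Name of an existing EC2 key pair for SSH access
    Type: AWS::EC2::KeyPair::KeyName   # validated against real key pairs
  InstanceType:
    Description: EC2 instance size for the staging environment
    Type: String
    Default: t2.micro
Resources:
  StagingInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      KeyName: !Ref KeyName            # the user-supplied PEM key name
      ImageId: ami-0ff8a91507f77f867   # illustrative AMI ID
```

Using the AWS::EC2::KeyPair::KeyName parameter type means CloudFormation rejects the launch up front if the user enters a key pair name that doesn't exist in the account.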
Q82: You are writing an AWS CloudFormation template and you want to assign values to properties that will not be available until runtime. You know that you can use intrinsic functions to do this but are unsure as to which part of the template they can be used in. Which of the following is correct in describing how you can currently use intrinsic functions in an AWS CloudFormation template?
A. You can use intrinsic functions in any part of a template, except AWSTemplateFormatVersion and Description
B. You can use intrinsic functions in any part of a template.
C. You can use intrinsic functions only in the resource properties part of a template.
D. You can only use intrinsic functions in specific parts of a template. You can use intrinsic functions in resource properties, metadata attributes, and update policy attributes.
Answer: D.
You can use intrinsic functions only in specific parts of a template. Currently, you can use intrinsic functions in resource properties, outputs, metadata attributes, and update policy attributes. You can also use intrinsic functions to conditionally create stack resources.