What are the Top 100 AWS Solutions Architect Associate Certification Exam Questions and Answers Dump SAA-C03?
AWS Certified Solutions Architects are responsible for designing, deploying, and managing AWS cloud applications. The AWS Certified Solutions Architect – Associate exam validates an examinee's ability to effectively demonstrate knowledge of how to design and deploy secure and robust applications on AWS technologies. The AWS Solutions Architect Associate training provides an overview of key AWS services, security, architecture, pricing, and support.
The AWS Certified Solutions Architect – Associate (SAA-C03) examination is a natural stepping stone toward the AWS Certified Solutions Architect – Professional certification (the Associate certification is no longer a formal prerequisite for the Professional exam). Successful completion of this examination can lead to a salary raise or promotion for those in cloud roles. Below is the top 100 AWS Solutions Architect Associate exam prep dump of facts, summaries, questions, and answers.
With average increases in salary of over 25% for certified individuals, you’re going to be in a much better position to secure your dream job or promotion if you earn your AWS Certified Solutions Architect Associate certification. You’ll also develop strong hands-on skills by doing the guided hands-on lab exercises in our course which will set you up for successfully performing in a solutions architect role.
We recommend that you allocate at least 60 minutes of study time per day and you will then be able to complete the certification within 5 weeks (including taking the actual exam). Study times can vary based on your experience with AWS and how much time you have each day, with some students passing their exams much faster and others taking a little longer. Get our eBook here.
The AWS Solutions Architect Associate exam is an associate-level exam that requires a solid understanding of the AWS platform and a broad range of AWS services. The AWS Certified Solutions Architect Associate exam questions are scenario-based questions and can be challenging. Despite this, the AWS Solutions Architect Associate is often earned by beginners to cloud computing.
The AWS Certified Solutions Architect – Associate (SAA-C03) exam is intended for individuals who perform in a solutions architect role. The exam validates a candidate’s ability to use AWS technologies to design solutions based on the AWS Well-Architected Framework.
The SAA-C03 exam contains 65 questions in multiple-choice and multiple-response format. You can take the exam in a testing center or as an online proctored exam from your home or office. You have 130 minutes to complete your exam and the passing mark is 720 points out of 1000 points (72%). If English is not your first language, you can request an accommodation when booking your exam that will grant you an additional 30-minute exam extension.
The exam also validates a candidate’s ability to complete the following tasks: • Design solutions that incorporate AWS services to meet current business requirements and future projected needs • Design architectures that are secure, resilient, high-performing, and cost-optimized • Review existing solutions and determine improvements
Unscored content: The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.
Target candidate description: The target candidate should have at least 1 year of hands-on experience designing cloud solutions that use AWS services.
All AWS certification exam results are reported as a score from 100 to 1000. Your score shows how you performed on the examination as a whole and whether or not you passed. The passing score for the AWS Certified Solutions Architect Associate is 720 (72%).
Yes, you can now take all AWS Certification exams with online proctoring using Pearson Vue or PSI. Here’s a detailed guide on how to book your AWS exam.
There are no prerequisites for taking AWS exams. You do not need any programming knowledge or experience working with AWS. Everything you need to know is included in our courses. We do recommend that you have a basic understanding of fundamental computing concepts such as compute, storage, networking, and databases.
AWS Certified Solutions Architects are IT professionals who design cloud solutions with AWS services to meet given technical requirements. An AWS Solutions Architect Associate is expected to design and implement distributed systems on AWS that are high-performing, scalable, secure, and cost-optimized.
Domain 1: Design Secure Architectures This exam domain is focused on securing your architectures on AWS and comprises 30% of the exam. Task statements include:
Task Statement 1: Design secure access to AWS resources. Knowledge of: • Access controls and management across multiple accounts • AWS federated access and identity services (for example, AWS Identity and Access Management [IAM], AWS Single Sign-On [AWS SSO]) • AWS global infrastructure (for example, Availability Zones, AWS Regions) • AWS security best practices (for example, the principle of least privilege) • The AWS shared responsibility model
Skills in: • Applying AWS security best practices to IAM users and root users (for example, multi-factor authentication [MFA]) • Designing a flexible authorization model that includes IAM users, groups, roles, and policies • Designing a role-based access control strategy (for example, AWS Security Token Service [AWS STS], role switching, cross-account access) • Designing a security strategy for multiple AWS accounts (for example, AWS Control Tower, service control policies [SCPs]) • Determining the appropriate use of resource policies for AWS services • Determining when to federate a directory service with IAM roles
Task Statement 2: Design secure workloads and applications.
Knowledge of: • Application configuration and credentials security • AWS service endpoints • Control ports, protocols, and network traffic on AWS • Secure application access • Security services with appropriate use cases (for example, Amazon Cognito, Amazon GuardDuty, Amazon Macie) • Threat vectors external to AWS (for example, DDoS, SQL injection)
Skills in: • Designing VPC architectures with security components (for example, security groups, route tables, network ACLs, NAT gateways) • Determining network segmentation strategies (for example, using public subnets and private subnets) • Integrating AWS services to secure applications (for example, AWS Shield, AWS WAF, AWS SSO, AWS Secrets Manager) • Securing external network connections to and from the AWS Cloud (for example, VPN, AWS Direct Connect)
Task Statement 3: Determine appropriate data security controls.
Knowledge of: • Data access and governance • Data recovery • Data retention and classification • Encryption and appropriate key management
Skills in: • Aligning AWS technologies to meet compliance requirements • Encrypting data at rest (for example, AWS Key Management Service [AWS KMS]) • Encrypting data in transit (for example, AWS Certificate Manager [ACM] using TLS) • Implementing access policies for encryption keys • Implementing data backups and replications • Implementing policies for data access, lifecycle, and protection • Rotating encryption keys and renewing certificates
Domain 2: Design Resilient Architectures This exam domain is focused on designing resilient architectures on AWS and comprises 26% of the exam. Task statements include:
Task Statement 1: Design scalable and loosely coupled architectures. Knowledge of: • API creation and management (for example, Amazon API Gateway, REST API) • AWS managed services with appropriate use cases (for example, AWS Transfer Family, Amazon Simple Queue Service [Amazon SQS], Secrets Manager) • Caching strategies • Design principles for microservices (for example, stateless workloads compared with stateful workloads) • Event-driven architectures • Horizontal scaling and vertical scaling • How to appropriately use edge accelerators (for example, content delivery network [CDN]) • How to migrate applications into containers • Load balancing concepts (for example, Application Load Balancer) • Multi-tier architectures • Queuing and messaging concepts (for example, publish/subscribe) • Serverless technologies and patterns (for example, AWS Fargate, AWS Lambda) • Storage types with associated characteristics (for example, object, file, block) • The orchestration of containers (for example, Amazon Elastic Container Service [Amazon ECS], Amazon Elastic Kubernetes Service [Amazon EKS]) • When to use read replicas • Workflow orchestration (for example, AWS Step Functions)
Skills in: • Designing event-driven, microservice, and/or multi-tier architectures based on requirements • Determining scaling strategies for components used in an architecture design • Determining the AWS services required to achieve loose coupling based on requirements • Determining when to use containers • Determining when to use serverless technologies and patterns • Recommending appropriate compute, storage, networking, and database technologies based on requirements • Using purpose-built AWS services for workloads
Task Statement 2: Design highly available and/or fault-tolerant architectures. Knowledge of: • AWS global infrastructure (for example, Availability Zones, AWS Regions, Amazon Route 53) • AWS managed services with appropriate use cases (for example, Amazon Comprehend, Amazon Polly) • Basic networking concepts (for example, route tables) • Disaster recovery (DR) strategies (for example, backup and restore, pilot light, warm standby, active-active failover, recovery point objective [RPO], recovery time objective [RTO]) • Distributed design patterns • Failover strategies • Immutable infrastructure • Load balancing concepts (for example, Application Load Balancer) • Proxy concepts (for example, Amazon RDS Proxy) • Service quotas and throttling (for example, how to configure the service quotas for a workload in a standby environment) • Storage options and characteristics (for example, durability, replication) • Workload visibility (for example, AWS X-Ray)
Skills in: • Determining automation strategies to ensure infrastructure integrity • Determining the AWS services required to provide a highly available and/or fault-tolerant architecture across AWS Regions or Availability Zones • Identifying metrics based on business requirements to deliver a highly available solution • Implementing designs to mitigate single points of failure • Implementing strategies to ensure the durability and availability of data (for example, backups) • Selecting an appropriate DR strategy to meet business requirements • Using AWS services that improve the reliability of legacy applications and applications not built for the cloud (for example, when application changes are not possible) • Using purpose-built AWS services for workloads
Domain 3: Design High-Performing Architectures This exam domain is focused on designing high-performing architectures on AWS and comprises 24% of the exam. Task statements include:
Task Statement 1: Determine high-performing and/or scalable storage solutions. Knowledge of: • Hybrid storage solutions to meet business requirements • Storage services with appropriate use cases (for example, Amazon S3, Amazon Elastic File System [Amazon EFS], Amazon Elastic Block Store [Amazon EBS]) • Storage types with associated characteristics (for example, object, file, block)
Skills in: • Determining storage services and configurations that meet performance demands • Determining storage services that can scale to accommodate future needs
Task Statement 2: Design high-performing and elastic compute solutions. Knowledge of: • AWS compute services with appropriate use cases (for example, AWS Batch, Amazon EMR, Fargate) • Distributed computing concepts supported by AWS global infrastructure and edge services • Queuing and messaging concepts (for example, publish/subscribe) • Scalability capabilities with appropriate use cases (for example, Amazon EC2 Auto Scaling, AWS Auto Scaling) • Serverless technologies and patterns (for example, Lambda, Fargate) • The orchestration of containers (for example, Amazon ECS, Amazon EKS)
Skills in: • Decoupling workloads so that components can scale independently • Identifying metrics and conditions to perform scaling actions • Selecting the appropriate compute options and features (for example, EC2 instance types) to meet business requirements • Selecting the appropriate resource type and size (for example, the amount of Lambda memory) to meet business requirements
Task Statement 3: Determine high-performing database solutions. Knowledge of: • AWS global infrastructure (for example, Availability Zones, AWS Regions) • Caching strategies and services (for example, Amazon ElastiCache) • Data access patterns (for example, read-intensive compared with write-intensive) • Database capacity planning (for example, capacity units, instance types, Provisioned IOPS) • Database connections and proxies • Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations) • Database replication (for example, read replicas) • Database types and services (for example, serverless, relational compared with non-relational, in-memory)
Skills in: • Configuring read replicas to meet business requirements • Designing database architectures • Determining an appropriate database engine (for example, MySQL compared with PostgreSQL) • Determining an appropriate database type (for example, Amazon Aurora, Amazon DynamoDB) • Integrating caching to meet business requirements
Task Statement 4: Determine high-performing and/or scalable network architectures. Knowledge of: • Edge networking services with appropriate use cases (for example, Amazon CloudFront, AWS Global Accelerator) • How to design network architecture (for example, subnet tiers, routing, IP addressing) • Load balancing concepts (for example, Application Load Balancer) • Network connection options (for example, AWS VPN, Direct Connect, AWS PrivateLink)
Skills in: • Creating a network topology for various architectures (for example, global, hybrid, multi-tier) • Determining network configurations that can scale to accommodate future needs • Determining the appropriate placement of resources to meet business requirements • Selecting the appropriate load balancing strategy
Task Statement 5: Determine high-performing data ingestion and transformation solutions. Knowledge of: • Data analytics and visualization services with appropriate use cases (for example, Amazon Athena, AWS Lake Formation, Amazon QuickSight) • Data ingestion patterns (for example, frequency) • Data transfer services with appropriate use cases (for example, AWS DataSync, AWS Storage Gateway) • Data transformation services with appropriate use cases (for example, AWS Glue) • Secure access to ingestion access points • Sizes and speeds needed to meet business requirements • Streaming data services with appropriate use cases (for example, Amazon Kinesis)
Skills in: • Building and securing data lakes • Designing data streaming architectures • Designing data transfer solutions • Implementing visualization strategies • Selecting appropriate compute options for data processing (for example, Amazon EMR) • Selecting appropriate configurations for ingestion • Transforming data between formats (for example, .csv to .parquet)
Domain 4: Design Cost-Optimized Architectures This exam domain is focused on optimizing solutions for cost-effectiveness on AWS and comprises 20% of the exam. Task statements include:
Task Statement 1: Design cost-optimized storage solutions. Knowledge of: • Access options (for example, an S3 bucket with Requester Pays object storage) • AWS cost management service features (for example, cost allocation tags, multi-account billing) • AWS cost management tools with appropriate use cases (for example, AWS Cost Explorer, AWS Budgets, AWS Cost and Usage Report) • AWS storage services with appropriate use cases (for example, Amazon FSx, Amazon EFS, Amazon S3, Amazon EBS) • Backup strategies • Block storage options (for example, hard disk drive [HDD] volume types, solid state drive [SSD] volume types) • Data lifecycles • Hybrid storage options (for example, DataSync, Transfer Family, Storage Gateway) • Storage access patterns • Storage tiering (for example, cold tiering for object storage) • Storage types with associated characteristics (for example, object, file, block)
Skills in: • Designing appropriate storage strategies (for example, batch uploads to Amazon S3 compared with individual uploads) • Determining the correct storage size for a workload • Determining the lowest cost method of transferring data for a workload to AWS storage • Determining when storage auto scaling is required • Managing S3 object lifecycles • Selecting the appropriate backup and/or archival solution • Selecting the appropriate service for data migration to storage services • Selecting the appropriate storage tier • Selecting the correct data lifecycle for storage • Selecting the most cost-effective storage service for a workload
Task Statement 2: Design cost-optimized compute solutions. Knowledge of: • AWS cost management service features (for example, cost allocation tags, multi-account billing) • AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report) • AWS global infrastructure (for example, Availability Zones, AWS Regions) • AWS purchasing options (for example, Spot Instances, Reserved Instances, Savings Plans) • Distributed compute strategies (for example, edge processing) • Hybrid compute options (for example, AWS Outposts, AWS Snowball Edge) • Instance types, families, and sizes (for example, memory optimized, compute optimized, virtualization) • Optimization of compute utilization (for example, containers, serverless computing, microservices) • Scaling strategies (for example, auto scaling, hibernation)
Skills in: • Determining an appropriate load balancing strategy (for example, Application Load Balancer [Layer 7] compared with Network Load Balancer [Layer 4] compared with Gateway Load Balancer) • Determining appropriate scaling methods and strategies for elastic workloads (for example, horizontal compared with vertical, EC2 hibernation) • Determining cost-effective AWS compute services with appropriate use cases (for example, Lambda, Amazon EC2, Fargate) • Determining the required availability for different classes of workloads (for example, production workloads, non-production workloads) • Selecting the appropriate instance family for a workload • Selecting the appropriate instance size for a workload
Task Statement 3: Design cost-optimized database solutions. Knowledge of: • AWS cost management service features (for example, cost allocation tags, multi-account billing) • AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report) • Caching strategies • Data retention policies • Database capacity planning (for example, capacity units) • Database connections and proxies • Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations) • Database replication (for example, read replicas) • Database types and services (for example, relational compared with non-relational, Aurora, DynamoDB)
Skills in: • Designing appropriate backup and retention policies (for example, snapshot frequency) • Determining an appropriate database engine (for example, MySQL compared with PostgreSQL) • Determining cost-effective AWS database services with appropriate use cases (for example, DynamoDB compared with Amazon RDS, serverless) • Determining cost-effective AWS database types (for example, time series format, columnar format) • Migrating database schemas and data to different locations and/or different database engines
Task Statement 4: Design cost-optimized network architectures. Knowledge of: • AWS cost management service features (for example, cost allocation tags, multi-account billing) • AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report) • Load balancing concepts (for example, Application Load Balancer) • NAT gateways (for example, NAT instance costs compared with NAT gateway costs) • Network connectivity (for example, private lines, dedicated lines, VPNs) • Network routing, topology, and peering (for example, AWS Transit Gateway, VPC peering) • Network services with appropriate use cases (for example, DNS)
Skills in: • Configuring appropriate NAT gateway types for a network (for example, a single shared NAT gateway compared with NAT gateways for each Availability Zone) • Configuring appropriate network connections (for example, Direct Connect compared with VPN compared with internet) • Configuring appropriate network routes to minimize network transfer costs (for example, Region to Region, Availability Zone to Availability Zone, private to public, Global Accelerator, VPC endpoints) • Determining strategic needs for content delivery networks (CDNs) and edge caching • Reviewing existing workloads for network optimizations • Selecting an appropriate throttling strategy • Selecting the appropriate bandwidth allocation for a network device (for example, a single VPN compared with multiple VPNs, Direct Connect speed)
Which key tools, technologies, and concepts might be covered on the exam? The following is a non-exhaustive list of the tools and technologies that could appear on the exam. This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam. The general tools and technologies in this list appear in no particular order. AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance: • Compute • Cost management • Database • Disaster recovery • High performance • Management and governance • Microservices and component decoupling • Migration and data transfer • Networking, connectivity, and content delivery • Resiliency • Security • Serverless and event-driven design principles • Storage
AWS Services and Features There are lots of new services and feature updates in scope for the new AWS Certified Solutions Architect Associate certification! Here’s a list of some of the new services that will be in scope for the new version of the exam:
Analytics: • Amazon Athena • AWS Data Exchange • AWS Data Pipeline • Amazon EMR • AWS Glue • Amazon Kinesis • AWS Lake Formation • Amazon Managed Streaming for Apache Kafka (Amazon MSK) • Amazon OpenSearch Service (Amazon Elasticsearch Service) • Amazon QuickSight • Amazon Redshift
Management and Governance: • AWS Auto Scaling • AWS CloudFormation • AWS CloudTrail • Amazon CloudWatch • AWS Command Line Interface (AWS CLI) • AWS Compute Optimizer • AWS Config • AWS Control Tower • AWS License Manager • Amazon Managed Grafana • Amazon Managed Service for Prometheus • AWS Management Console • AWS Organizations • AWS Personal Health Dashboard • AWS Proton • AWS Service Catalog • AWS Systems Manager • AWS Trusted Advisor • AWS Well-Architected Tool
Media Services: • Amazon Elastic Transcoder • Amazon Kinesis Video Streams
Migration and Transfer: • AWS Application Discovery Service • AWS Application Migration Service (CloudEndure Migration) • AWS Database Migration Service (AWS DMS) • AWS DataSync • AWS Migration Hub • AWS Server Migration Service (AWS SMS) • AWS Snow Family • AWS Transfer Family
Out-of-scope AWS services and features The following is a non-exhaustive list of AWS services and features that are not covered on the exam. These services and features do not represent every AWS offering that is excluded from the exam content.
AWS solutions architect associate exam prep facts and summaries questions and answers dump – Solution Architecture Definition 1:
Solution architecture is the practice of defining and describing the architecture of a system delivered in the context of a specific solution; as such, it may encompass a description of an entire system or only its specific parts. Definition of a solution architecture is typically led by a solution architect.
AWS solutions architect associate exam prep facts and summaries questions and answers dump – Solution Architecture Definition 2:
The AWS Certified Solutions Architect – Associate examination is intended for individuals who perform a solutions architect role and have one or more years of hands-on experience designing available, cost-efficient, fault-tolerant, and scalable distributed systems on AWS.
If you are running an application in a production environment and must add a new EBS volume with data from a snapshot, what could you do to avoid degraded performance during the volume's first use? Initialize the data by reading each storage block on the volume. Volumes created from an EBS snapshot must be initialized: initialization occurs the first time a storage block on the volume is read, and performance can drop by up to 50 percent during that first access. You can avoid this impact in production environments by pre-warming the volume, that is, by reading all of its blocks before putting it into service.
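As a minimal sketch (assuming the restored volume shows up as /dev/xvdf on a Linux instance; the device name is an assumption), you could pre-warm it by reading every block with dd or fio:
# Read every block once so the initialization penalty is paid before production use
sudo dd if=/dev/xvdf of=/dev/null bs=1M status=progress
# Or use fio for a faster, parallel read of the whole device
sudo fio --filename=/dev/xvdf --rw=read --bs=1M --iodepth=32 --ioengine=libaio --direct=1 --name=volume-initialize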
If you are running a legacy application that has hard-coded static IP addresses and it is running on an EC2 instance, what is the best failover solution that allows you to keep the same IP address on a new instance? Elastic IP addresses (EIPs) are designed to be attached/detached and moved from one EC2 instance to another. They are a great solution for keeping a static IP address and moving it to a new instance if the current instance fails. This will reduce or eliminate any downtime users may experience.
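For illustration (the allocation ID and instance ID below are placeholders), remapping an EIP to a replacement instance is a single call:
# Move the Elastic IP to the new instance, even if it is currently associated elsewhere
aws ec2 associate-address --allocation-id eipalloc-0abc1234def567890 --instance-id i-0fedcba9876543210 --allow-reassociation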
Which feature of Intel processors helps to encrypt data without significant impact on performance? AES-NI (Advanced Encryption Standard New Instructions).
You can mount EFS from which two of the following?
On-prem servers running Linux
EC2 instances running Linux
EFS is not compatible with Windows operating systems.
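A rough sketch of mounting an EFS file system from a Linux host (the file system ID, Region, and mount point are placeholders); the same NFSv4.1 mount also works from on-premises Linux servers connected over AWS Direct Connect or VPN:
sudo mkdir -p /mnt/efs
# Mount over NFSv4.1 with the options AWS recommends for EFS
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs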
When a file is encrypted and the stored data is not in transit, it's known as encryption at rest. What is an example of encryption at rest? Encrypting objects stored in Amazon S3 with server-side encryption is one example.
When would vertical scaling be necessary? When an application is built entirely as one codebase, otherwise known as a monolithic application.
Fault-Tolerance allows for continuous operation throughout a failure, which can lead to a low Recovery Time Objective. RPO vs RTO
High-Availability means automating tasks so that an instance will quickly recover, which can lead to a low Recovery Time Objective. RPO vs. RTO
Frequent backups reduce the time between the last backup and recovery point, otherwise known as the Recovery Point Objective. RPO vs. RTO
Which represents the difference between Fault-Tolerance and High-Availability? High-Availability means the system will quickly recover from a failure event, and Fault-Tolerance means the system will maintain operations during a failure.
From a security perspective, what is a principal? An anonymous user falls under the definition of a principal. A principal can be an anonymous user acting on a system.
An authenticated user falls under the definition of a principal. A principal can be an authenticated user acting on a system.
What are two types of session data saving for an Application Session State? Stateless and Stateful.
It is the customer's responsibility to patch the operating system on an EC2 instance.
In designing an environment, what four main points should a Solutions Architect keep in mind? Cost-efficient, secure, application session state, undifferentiated heavy lifting: these four main points should be the framework when designing an environment.
In the context of disaster recovery, what does RPO stand for? RPO is the abbreviation for Recovery Point Objective.
What are the benefits of horizontal scaling?
Vertical scaling can be costly while horizontal scaling is cheaper.
Horizontal scaling suffers from none of the size limitations of vertical scaling.
Having horizontal scaling means you can easily route traffic to another instance of a server.
Top AWS solutions architect associate exam prep facts and summaries questions and answers dump – Quizzes
Q1: A Solutions Architect is designing a critical business application with a relational database that runs on an EC2 instance. It requires a single EBS volume that can support up to 16,000 IOPS. Which Amazon EBS volume type can meet the performance requirements of this application?
A. EBS Provisioned IOPS SSD
B. EBS Throughput Optimized HDD
C. EBS General Purpose SSD
D. EBS Cold HDD
Answer: A. EBS Provisioned IOPS SSD provides sustained performance for mission-critical, low-latency workloads. EBS General Purpose SSD (gp2) can provide bursts of performance up to 3,000 IOPS and only reaches its maximum baseline performance of 16,000 IOPS at very large volume sizes, so it cannot guarantee sustained 16,000 IOPS for a critical database. The two HDD options are lower-cost, high-throughput volumes.
Q2: An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk. Which solution will resolve the security concern?
A. Access the data through an Internet Gateway.
B. Access the data through a VPN connection.
C. Access the data through a NAT Gateway.
D. Access the data through a VPC endpoint for Amazon S3
Answer: D. VPC endpoints for Amazon S3 provide secure connections to S3 buckets that do not require a gateway or NAT instances. NAT Gateways and Internet Gateways still route traffic over the Internet to the public endpoint for Amazon S3. There is no way to connect to Amazon S3 via VPN.
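As a sketch (the VPC ID, route table ID, and Region are placeholders), creating a gateway endpoint for S3 keeps this traffic on the AWS network instead of the public Internet:
aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-0123456789abcdef0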
Q3: An organization is building an Amazon Redshift cluster in their shared services VPC. The cluster will host sensitive data. How can the organization control which networks can access the cluster?
A. Run the cluster in a different VPC and connect through VPC peering.
B. Create a database user inside the Amazon Redshift cluster only for users on the network.
C. Define a cluster security group for the cluster that allows access from the allowed networks.
D. Only allow access to networks that connect with the shared services network via VPN.
Answer: C. A security group can grant access to traffic from the allowed networks via the CIDR range for each network. VPC peering and VPN are connectivity services and cannot control traffic for security. Amazon Redshift user accounts address authentication and authorization at the user level and have no control over network traffic.
Q4: A web application allows customers to upload orders to an S3 bucket. The resulting Amazon S3 events trigger a Lambda function that inserts a message to an SQS queue. A single EC2 instance reads messages from the queue, processes them, and stores them in a DynamoDB table partitioned by unique order ID. Next month, traffic is expected to increase by a factor of 10, and a Solutions Architect is reviewing the architecture for possible scaling problems. Which component is MOST likely to need re-architecting to be able to scale to accommodate the new traffic?
A. Lambda function
B. SQS queue
C. EC2 instance
D. DynamoDB table
Answer: C. A single EC2 instance will not scale and is a single point of failure in the architecture. A much better solution would be to have EC2 instances in an Auto Scaling group across 2 availability zones read messages from the queue. The other responses are all managed services that can be configured to scale or will scale automatically.
Q5: An application requires a highly available relational database with an initial storage capacity of 8 TB. The database will grow by 8 GB every day. To support expected traffic, at least eight read replicas will be required to handle database reads. Which option will meet these requirements?
A. DynamoDB
B. Amazon S3
C. Amazon Aurora
D. Amazon Redshift
Answer: C. Amazon Aurora is a relational database that will automatically scale to accommodate data growth and supports up to 15 read replicas. Amazon Redshift does not support read replicas and will not automatically scale. DynamoDB is a NoSQL service, not a relational database. Amazon S3 is object storage, not a relational database.
C. Divide your file system into multiple smaller file systems.
D. Provision higher IOPs for your EFS.
Answer: B. Amazon EFS now allows you to instantly provision the throughput required for your applications independent of the amount of data stored in your file system. This allows you to optimize throughput for your application's performance needs.
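A minimal sketch of switching an existing file system to provisioned throughput (the file system ID and the MiB/s value are placeholders):
aws efs update-file-system --file-system-id fs-0123456789abcdef0 --throughput-mode provisioned --provisioned-throughput-in-mibps 128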
Q7: If you are designing an application that requires fast (10 – 25Gbps), low-latency connections between EC2 instances, what EC2 feature should you use?
A. Snapshots
B. Instance store volumes
C. Placement groups
D. IOPS provisioned instances.
Answer: C. A cluster placement group is a grouping of EC2 instances in one Availability Zone with fast (up to 25 Gbps) connections between them. This feature is used for applications that need extremely low-latency connections between instances.
Q8: A Solution Architect is designing an online shopping application running in a VPC on EC2 instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application tier must read and write data to a customer managed database cluster. There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet.
Which VPC design meets these requirements?
A. Public subnets for both the application tier and the database cluster
B. Public subnets for the application tier, and private subnets for the database cluster
C. Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster
D. Public subnets for the application tier, and private subnets for the database cluster and NAT Gateway
Answer: C. The online application must be in public subnets to allow access from clients’ browsers. The database cluster must be in private subnets to meet the requirement that there be no access from the Internet. A NAT Gateway is required to give the database cluster the ability to download patches from the Internet. NAT Gateways must be deployed in public subnets.
Q9: What command should you run on a running instance if you want to view its user data (that is used at launch)?
A. curl http://254.169.254.169/latest/user-data
B. curl http://localhost/latest/meta-data/bootstrap
C. curl http://localhost/latest/user-data
D. curl http://169.254.169.254/latest/user-data
Answer: D. Retrieve instance user data: to retrieve user data from within a running instance, use the following URI: http://169.254.169.254/latest/user-data
Q10: A company is developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? (Select TWO.)
A. CloudWatch
B. DynamoDB
C. Elastic Load Balancing
D. ElastiCache
E. Storage Gateway
Answer: B and D. Both DynamoDB and ElastiCache provide high-performance storage of key-value pairs. CloudWatch and ELB are not storage services. Storage Gateway is a storage service, but it is a hybrid storage service that enables on-premises applications to use cloud storage.
A stateful web service will keep track of the "state" of a client's connection and data over several requests. So for example, the client might log in, select a user's account data, update their address, attach a photo, and change the status flag, then disconnect.
In a stateless web service, the server doesn't keep any information from one request to the next. The client needs to do its work in a series of simple transactions, and the client has to keep track of what happens between requests. So in the above example, the client needs to do each operation separately: connect and update the address, then disconnect; connect and attach the photo, then disconnect; connect and change the status flag, then disconnect.
A stateless web service is much simpler to implement and can handle a greater volume of clients.
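To make the stateless pattern concrete, a hypothetical Sessions table keyed on SessionId could hold the state between requests (the table and attribute names here are illustrative, not from the question):
# Persist the client's state after a request
aws dynamodb put-item --table-name Sessions --item '{"SessionId": {"S": "abc-123"}, "UserId": {"S": "user-42"}, "LastStep": {"S": "photo-attached"}}'
# Any web server can pick the session back up on the next request
aws dynamodb get-item --table-name Sessions --key '{"SessionId": {"S": "abc-123"}}'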
Q11: From a security perspective, what is a principal?
A. An identity
B. An anonymous user
C. An authenticated user
D. A resource
Answer: B and C
An anonymous user falls under the definition of a principal. A principal can be an anonymous user acting on a system. An authenticated user falls under the definition of a principal. A principal can be an authenticated user acting on a system.
Q12: What are the characteristics of a tiered application?
A. All three application layers are on the same instance
B. The presentation tier is on an instance isolated from the logic layer
C. None of the tiers can be cloned
D. The logic layer is on an instance isolated from the data layer
E. Additional machines can be added to help the application by implementing horizontal scaling
F. Incapable of horizontal scaling
Answer: B, D, and E.
In a tiered application, the presentation layer is separate from the logic layer; the logic layer is separate from the data layer. Since parts of the application are isolated, they can scale horizontally.
Q17: You lead a team to develop a new online game application in AWS EC2. The application will have a large number of users globally. For a great user experience, this application requires very low network latency and jitter. If the network speed is not fast enough, you will lose customers. Which tool would you choose to improve the application performance? (Select TWO.)
A. AWS VPN
B. AWS Global Accelerator
C. Direct Connect
D. API Gateway
E. CloudFront
Answer: B and E
Notes: This online game application has global users and needs low latency. Both CloudFront and Global Accelerator can speed up the distribution of content over the AWS global network. AWS Global Accelerator works at the network layer and is able to direct traffic to optimal endpoints. See the AWS Global Accelerator documentation for reference. CloudFront delivers content through edge locations, and users are routed to the edge location that has the lowest time delay.
Q18: A company has a media processing application deployed in a local data center. Its file storage is built on a Microsoft Windows file server. The application and file server need to be migrated to AWS. You want to quickly set up the file server in AWS and the application code should continue working to access the file systems. Which method should you choose to create the file server?
A. Create a Windows File Server from Amazon WorkSpaces.
B. Configure a high performance Windows File System in Amazon EFS.
C. Create a Windows File Server in Amazon FSx.
D. Configure a secure enterprise storage through Amazon WorkDocs.
Answer: C
Notes: In this question, a Windows file server is required in AWS and the application should continue to work unchanged. Amazon FSx for Windows File Server is the correct answer as it is backed by a fully native Windows file system.
Q19: You are developing an application using AWS SDK to get objects from AWS S3. The objects have big sizes and sometimes there are failures when getting objects especially when the network connectivity is poor. You want to get a specific range of bytes in a single GET request and retrieve the whole object in parts. Which method can achieve this?
A. Enable multipart upload in the AWS SDK.
B. Use the “Range” HTTP header in a GET request to download the specified range bytes of an object.
C. Reduce the retry requests and enlarge the retry timeouts through AWS SDK when fetching S3 objects.
D. Retrieve the whole S3 object through a single GET operation.
Answer: B
Notes: With byte-range fetches, users can establish concurrent connections to Amazon S3 to fetch different parts from within the same object.
Through the "Range" header in the HTTP GET request, a specified portion of the object can be downloaded instead of the whole object. See the explanation here.
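A rough sketch of a ranged GET with the AWS CLI (the bucket, key, and byte range are placeholders); each part can be fetched independently and retried on its own:
# Download only the first 1 MiB of the object into a local part file
aws s3api get-object --bucket my-example-bucket --key backups/large-object.bin --range "bytes=0-1048575" part-00.bin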
Q20: You have an application hosted in an Auto Scaling group and an application load balancer distributes traffic to the ASG. You want to add a scaling policy that keeps the average aggregate CPU utilization of the Auto Scaling group to be 60 percent. The capacity of the Auto Scaling group should increase or decrease based on this target value. Which scaling policy does it belong to?
A. Target tracking scaling policy.
B. Step scaling policy.
C. Simple scaling policy.
D. Scheduled scaling policy.
Answer: A
Notes: A target tracking scaling policy can be applied to track the ASGAverageCPUUtilization metric. In an ASG, you can add a target tracking scaling policy based on a target value. See the documentation for the different scaling policies.
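A minimal sketch of such a policy with the AWS CLI (the Auto Scaling group and policy names are placeholders):
aws autoscaling put-scaling-policy --auto-scaling-group-name my-asg --policy-name keep-cpu-at-60 --policy-type TargetTrackingScaling --target-tracking-configuration '{"PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"}, "TargetValue": 60.0}'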
Q21: You need to launch a number of EC2 instances to run Cassandra. There are large distributed and replicated workloads in Cassandra and you plan to launch instances using EC2 placement groups. The traffic should be distributed evenly across several partitions and each partition should contain multiple instances. Which strategy would you use when launching the placement groups?
A. Cluster placement strategy
B. Spread placement strategy.
C. Partition placement strategy.
D. Network placement strategy.
Answer: C
Notes: Placement groups offer the placement strategies Cluster, Partition, and Spread. With the Partition placement strategy, instances in one partition do not share the underlying hardware with other partitions. This strategy is suitable for distributed and replicated workloads such as Cassandra. For details, refer to the placement group documentation on partition limits.
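As a sketch (the group name, AMI, and instance type are placeholders), you would create the partition placement group first and then launch the Cassandra nodes into it:
aws ec2 create-placement-group --group-name cassandra-pg --strategy partition --partition-count 3
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type r5.xlarge --count 3 --placement "GroupName=cassandra-pg"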
Q22: To improve the network performance, you launch a C5 EC2 Amazon Linux instance and enable enhanced networking by modifying the instance attribute with "aws ec2 modify-instance-attribute --instance-id instance_id --ena-support". Which mechanism does the EC2 instance use to enhance the networking capabilities?
A. Intel 82599 Virtual Function (VF) interface.
B. Elastic Fabric Adapter (EFA).
C. Elastic Network Adapter (ENA).
D. Elastic Network Interface (ENI).
Answer: C
Notes: Enhanced networking has two mechanisms: the Elastic Network Adapter (ENA) and the Intel 82599 Virtual Function (VF) interface. For ENA, users can enable it with --ena-support. References can be found here.
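To verify the setting afterwards (the instance ID is a placeholder), you can check the EnaSupport attribute from the CLI and confirm the driver on the instance itself:
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query "Reservations[].Instances[].EnaSupport"
# On the instance, confirm the ena kernel module is present
modinfo ena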
Q23: You work for an online retailer where any downtime at all can cause a significant loss of revenue. You have architected your application to be deployed on an Auto Scaling Group of EC2 instances behind a load balancer. You have configured and deployed these resources using a CloudFormation template. The Auto Scaling Group is configured with default settings and a simple CPU utilization scaling policy. You have also set up multiple Availability Zones for high availability. The Load Balancer does health checks against an HTML file generated by a script. When you begin performing load testing on your application, you notice in CloudWatch that the load balancer is not sending traffic to one of your EC2 instances. What could be the problem?
A. The EC2 instance has failed the load balancer health check.
B. The instance has not been registered with CloudWatch.
C. The EC2 instance has failed EC2 status checks.
D. You are load testing at a moderate traffic level and not all instances are needed.
Answer: A
Notes: The load balancer will route the incoming requests only to the healthy instances. The EC2 instance may have passed its status checks and be considered healthy by the Auto Scaling Group, but the ELB will not use it if the ELB health check has not been met. The ELB health check has a default of 30 seconds between checks, and a default of 3 checks before making a decision. Therefore the instance could be visually available but unused for at least 90 seconds before the GUI would show it as failed. In CloudWatch, where the issue was noticed, it would appear to be a healthy EC2 instance but with no traffic, which is what was observed.
References: ELB HealthCheck
Q24: Your company is using a hybrid configuration because there are some legacy applications which are not easily converted and migrated to AWS. And with this configuration comes a typical scenario where the legacy apps must maintain the same private IP address and MAC address. You are attempting to convert the application to the cloud and have configured an EC2 instance to house the application. What you are currently testing is removing the ENI from the legacy instance and attaching it to the EC2 instance. You want to attempt a cold attach. What does this mean?
A. Attach ENI when it’s stopped.
B. Attach ENI before the public IP address is assigned.
C. Attach ENI to an instance when it’s running.
D. Attach ENI when the instance is being launched.
Answer: D
Notes: Best practices for configuring network interfaces: You can attach a network interface to an instance when it's running (hot attach), when it's stopped (warm attach), or when the instance is being launched (cold attach). You can detach secondary network interfaces when the instance is running or stopped. However, you can't detach the primary network interface. You can move a network interface from one instance to another, if the instances are in the same Availability Zone and VPC but in different subnets. When launching an instance using the CLI, API, or an SDK, you can specify the primary network interface and additional network interfaces. Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance. A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly. Instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves. Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance. If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing. If possible, use a secondary private IPv4 address on the primary network interface instead. For more information, see Assigning a secondary private IPv4 address.
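For illustration (all IDs here are placeholders), a hot or warm attach is done against an existing instance, while a cold attach supplies the ENI in the launch request itself:
# Hot/warm attach: add an existing ENI as a secondary interface (device index 1)
aws ec2 attach-network-interface --network-interface-id eni-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device-index 1
# Cold attach: specify the ENI when the instance is launched
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.micro --network-interfaces "NetworkInterfaceId=eni-0123456789abcdef0,DeviceIndex=0"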
Q25: Your company has recently converted to a hybrid cloud environment and will slowly be migrating to a fully AWS cloud environment. The AWS side is in need of some steps to prepare for disaster recovery. A disaster recovery plan needs drawn up and disaster recovery drills need to be performed for compliance reasons. The company wants to establish Recovery Time and Recovery Point Objectives. The RTO and RPO can be pretty relaxed. The main point is to have a plan in place, with as much cost savings as possible. Which AWS disaster recovery pattern will best meet these requirements?
A. Warm Standby
B. Backup and restore
C. Multi Site
D. Pilot Light
Answer: B
Notes: Backup and Restore: This is the least expensive option and cost is the overriding factor.
Q26: An international travel company has an application which provides travel information and alerts to users all over the world. The application is hosted on groups of EC2 instances in Auto Scaling Groups in multiple AWS Regions. There are also load balancers routing traffic to these instances. In two countries, Ireland and Australia, there are compliance rules in place that dictate users connect to the application in eu-west-1 and ap-southeast-1. Which service can you use to meet this requirement?
A. Use Route 53 weighted routing.
B. Use Route 53 geolocation routing.
C. Configure CloudFront and the users will be routed to the nearest edge location.
D. Configure the load balancers to route users to the proper region.
Answer: B
Notes: Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from. For example, you might want all queries from Europe to be routed to an ELB in the Frankfurt region. When you use geolocation routing, you can localize your content and present some or all of your website in the language of your users. You can also use geolocation routing to restrict distribution of content to only the locations in which you have distribution rights. Another possible use is for balancing load across endpoints in a predictable, easy-to-manage way, so that each user location is consistently routed to the same endpoint.
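A sketch of one such geolocation record for the Irish users (the hosted zone ID, record name, and target are placeholders; a similar record with CountryCode "AU" would cover Australia):
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789EXAMPLE --change-batch '{"Changes": [{"Action": "UPSERT", "ResourceRecordSet": {"Name": "travel.example.com", "Type": "CNAME", "TTL": 60, "SetIdentifier": "ireland-users", "GeoLocation": {"CountryCode": "IE"}, "ResourceRecords": [{"Value": "my-alb-eu-west-1.example.com"}]}}]}'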
Q26: You have taken over management of several instances in the company's AWS environment. You want to quickly review scripts used to bootstrap the instances at runtime. A curl command against the instance metadata URL can be used to do this. What can you append to the URL http://169.254.169.254/latest/ to retrieve this data?
A. user-data/
B. instance-demographic-data/
C. meta-data/
D. instance-data/
Answer: A
Notes: When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives.
Q27: A software company has created an application to capture service requests from users and also enhancement requests. The application is deployed on an Auto Scaling group of EC2 instances fronted by an Application Load Balancer. The Auto Scaling group has scaled to maximum capacity, but there are still requests being lost. The cost of these instances is becoming an issue. What step can the company take to ensure requests aren’t lost?
A. Use larger instances in the Auto Scaling group.
B. Use spot instances to save money.
C. Use an SQS queue with the Auto Scaling group to capture all requests.
D. Use a Network Load Balancer instead for faster throughput.
Answer: C
Notes: There are some scenarios where you might think about scaling in response to activity in an Amazon SQS queue. For example, suppose that you have a web app that lets users upload images and use them online. In this scenario, each image requires resizing and encoding before it can be published. The app runs on EC2 instances in an Auto Scaling group, and it's configured to handle your typical upload rates. Unhealthy instances are terminated and replaced to maintain current instance levels at all times. The app places the raw bitmap data of the images in an SQS queue for processing. It processes the images and then publishes the processed images where they can be viewed by users. The architecture for this scenario works well if the number of image uploads doesn't vary over time. But if the number of uploads changes over time, you might consider using dynamic scaling to scale the capacity of your Auto Scaling group.
Q28: A company has an auto scaling group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. The company has a very aggressive Recovery Time Objective (RTO) in case of disaster. How long will a failover typically take to complete?
A. Under 10 minutes
B. Within an hour
C. Almost instantly
D. one to two minutes
Answer: D
Notes: What happens during Multi-AZ failover and how long does it take? Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention. When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary. We encourage you to follow best practices and implement database connection retry at the application layer. Failovers, as defined by the interval between the detection of the failure on the primary and the resumption of transactions on the standby, typically complete within one to two minutes. Failover time can also be affected by whether large uncommitted transactions must be recovered; the use of adequately large instance types is recommended with Multi-AZ for best results. AWS also recommends the use of Provisioned IOPS with Multi-AZ instances for fast, predictable, and consistent throughput performance.
Q29: You have two EC2 instances running in the same VPC, but in different subnets. You are removing the secondary ENI from an EC2 instance and attaching it to another EC2 instance. You want this to be fast and with limited disruption. So you want to attach the ENI to the EC2 instance when it’s running. What is this called?
Answer: This is a hot attach.
Notes: You can attach a network interface to an instance when it's running (hot attach), when it's stopped (warm attach), or when the instance is being launched (cold attach). You can detach secondary network interfaces when the instance is running or stopped; however, you can't detach the primary network interface. A hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly, although instances running Amazon Linux or Windows Server automatically recognize the hot attach and configure themselves.
Q30: You suspect that one of the AWS services your company is using has gone down. How can you check on the status of this service?
A. AWS Trusted Advisor
B. Amazon Inspector
C. AWS Personal Health Dashboard
D. AWS Organizations
Answer: C
Notes: AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view of the performance and availability of the AWS services underlying your AWS resources. The dashboard displays relevant and timely information to help you manage events in progress, and provides proactive notification to help you plan for scheduled activities. With Personal Health Dashboard, alerts are triggered by changes in the health of AWS resources, giving you event visibility and guidance to help quickly diagnose and resolve issues.
Q31: You have configured an Auto Scaling Group of EC2 instances fronted by an Application Load Balancer and backed by an RDS database. You want to begin monitoring the EC2 instances using CloudWatch metrics. Which metric is not readily available out of the box?
Notes: Memory utilization is not available as an out of the box metric in CloudWatch. You can, however, collect memory metrics when you configure a custom metric for CloudWatch.
Types of custom metrics that you can set up with the CloudWatch agent (or your own scripts) include memory utilization, disk swap utilization, disk space utilization, page file utilization, and any application-level metrics you publish yourself.
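A minimal boto3 sketch of publishing memory utilization as a custom CloudWatch metric; the namespace, dimension, and sample value are illustrative assumptions (in practice the CloudWatch agent collects and publishes this for you):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Publish a single memory-utilization data point for one instance.
    cloudwatch.put_metric_data(
        Namespace="Custom/EC2",                     # placeholder namespace
        MetricData=[{
            "MetricName": "MemoryUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
            "Value": 63.5,                          # percent used (sample value)
            "Unit": "Percent",
        }],
    )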
Q32: Several instances you are creating have a specific data requirement. The requirement states that the data on the root device needs to persist independently from the lifetime of the instance. After considering AWS storage options, which is the simplest way to meet these requirements?
A. Store your root device data on Amazon EBS.
B. Store the data on the local instance store.
C. Create a cron job to migrate the data to S3.
D. Send the data to S3 using S3 lifecycle rules.
Answer: A
Notes: By using Amazon EBS, data on the root device will persist independently from the lifetime of the instance. This enables you to stop and restart the instance at a subsequent time, which is similar to shutting down your laptop and restarting it when you need it again.
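If you also want the root EBS volume itself to survive instance termination, you can override DeleteOnTermination at launch. A hedged boto3 sketch, where the AMI ID and root device name are placeholders (the device name varies by AMI):

    import boto3

    ec2 = boto3.client("ec2")

    # Launch an EBS-backed instance whose root volume is kept on termination.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        BlockDeviceMappings=[{
            "DeviceName": "/dev/xvda",     # root device name for this AMI
            "Ebs": {"DeleteOnTermination": False},
        }],
    )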
Q33: A company has an Auto Scaling Group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. What will happen to preserve high availability if the primary database fails?
A. A Lambda function kicks off a CloudFormation template to deploy a backup database.
B. The CNAME is switched from the primary db instance to the secondary.
C. Route 53 points the CNAME to the secondary database instance.
D. The Elastic IP address for the primary database is moved to the secondary database.
Answer: B
Notes: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention. When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary.
Q34: After several issues with your application and unplanned downtime, your recommendation to migrate your application to AWS is approved. You have set up high availability on the front end with a load balancer and an Auto Scaling Group. What step can you take with your database to configure high-availability and ensure minimal downtime (under five minutes)?
A. Create a read replica.
B. Enable Multi-AZ failover on the database.
C. Take frequent snapshots of your database.
D. Create your database using CloudFormation and save the template for reuse.
Answer: B
Notes: In the event of a planned or unplanned outage of your DB instance, Amazon RDS automatically switches to a standby replica in another Availability Zone if you have enabled Multi-AZ. The time it takes for the failover to complete depends on the database activity and other conditions at the time the primary DB instance became unavailable. Failover times are typically 60–120 seconds. However, large transactions or a lengthy recovery process can increase failover time. When the failover is complete, it can take additional time for the RDS console to reflect the new Availability Zone. Note the caveat about large transactions: they could push recovery past five minutes, but Multi-AZ is clearly the best of the available choices for this requirement. Move through exam questions quickly, but always evaluate all the answers for the best possible solution.
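Enabling Multi-AZ on an existing RDS instance is a single modification call. A minimal boto3 sketch, assuming a placeholder DB instance identifier:

    import boto3

    rds = boto3.client("rds")

    # Convert an existing DB instance to a Multi-AZ deployment.
    rds.modify_db_instance(
        DBInstanceIdentifier="prod-mysql-db",  # placeholder identifier
        MultiAZ=True,
        ApplyImmediately=True,                 # apply now, not at the next window
    )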
Q35: A new startup is considering the advantages of using DynamoDB versus a traditional relational database in AWS RDS. The NoSQL nature of DynamoDB presents a small learning curve to the team members who all have experience with traditional databases. The company will have multiple databases, and the decision will be made on a case-by-case basis. Which of the following use cases would favour DynamoDB? Select two.
Notes: DynamoDB is a NoSQL database that supports key-value and document data structures. A key-value store is a database service that provides support for storing, querying, and updating collections of objects that are identified using a key and values that contain the actual content being stored. Meanwhile, a document data store provides support for storing, querying, and updating items in a document format such as JSON, XML, and HTML. DynamoDB’s fast and predictable performance characteristics make it a great match for handling session data. Plus, since it’s a fully-managed NoSQL database service, you avoid all the work of maintaining and operating a separate session store.
Storing metadata for Amazon S3 objects is correct because Amazon DynamoDB stores structured data indexed by primary key and allows low-latency read and write access to items ranging from 1 byte up to 400 KB. Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB. To optimize your costs across AWS services, large objects or infrequently accessed data sets should be stored in Amazon S3, while smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in Amazon DynamoDB.
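A minimal boto3 sketch of using DynamoDB as a session store; the table name, key schema, and attributes are assumptions for illustration:

    import boto3

    dynamodb = boto3.resource("dynamodb")
    sessions = dynamodb.Table("user-sessions")   # assumed table, partition key "session_id"

    # Write a small session item keyed by session ID.
    sessions.put_item(Item={
        "session_id": "abc123",
        "user_id": "u-42",
        "cart": ["sku-1", "sku-2"],
    })

    # Low-latency read of the same session on a later request.
    item = sessions.get_item(Key={"session_id": "abc123"}).get("Item")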
Q36: You have been tasked with designing a strategy for backing up EBS volumes attached to an instance-store-backed EC2 instance. You have been asked for an executive summary on your design, and the executive summary should include an answer to the question, “What can an EBS volume do when snapshotting the volume is in progress”?
A. The volume can be used normally while the snapshot is in progress.
B. The volume can only accommodate writes while a snapshot is in progress.
C. The volume cannot be used while a snapshot is in progress.
D. The volume can only accommodate reads while a snapshot is in progress.
Answer: A
Notes: You can create a point-in-time snapshot of an EBS volume and use it as a baseline for new volumes or for data backup. If you make periodic snapshots of a volume, the snapshots are incremental; the new snapshot saves only the blocks that have changed since your last snapshot. Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume.
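Creating the point-in-time snapshot is a single call. A minimal boto3 sketch with a placeholder volume ID; the call returns immediately while the snapshot completes in the background:

    import boto3

    ec2 = boto3.client("ec2")

    # Kick off an incremental, point-in-time snapshot of the volume.
    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",            # placeholder volume ID
        Description="Nightly backup of data volume",
    )
    print(snapshot["SnapshotId"], snapshot["State"])  # State starts as "pending"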
Q37: You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling Groups that you need to create. One requirement is that you need to reuse some software licenses and therefore need to use dedicated hosts on EC2 instances in your Auto Scaling Groups. What step must you take to meet this requirement?
A. Create your launch configuration, but manually change the instances to Dedicated Hosts in the EC2 console.
B. Use a launch template with your Auto Scaling Group.
C. Create the Dedicated Host EC2 instances, then add them to an existing Auto Scaling Group.
D. Make sure your launch configurations are using Dedicated Hosts.
Answer: B
Notes: In addition to the features of Amazon EC2 Auto Scaling that you can configure by using launch templates, launch templates provide more advanced Amazon EC2 configuration options. For example, you must use launch templates to use Amazon EC2 Dedicated Hosts. Dedicated Hosts are physical servers with EC2 instance capacity that are dedicated to your use. While Amazon EC2 Dedicated Instances also run on dedicated hardware, the advantage of using Dedicated Hosts over Dedicated Instances is that you can bring eligible software licenses from external vendors and use them on EC2 instances. If you currently use launch configurations, you can specify a launch template when you update an Auto Scaling group that was created using a launch configuration. To create a launch template to use with an Auto Scaling Group, create the template from scratch, create a new version of an existing template, or copy the parameters from a launch configuration, running instance, or other template.
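A hedged boto3 sketch of the key piece: a launch template whose placement tenancy is "host", referenced by the Auto Scaling Group. The names, AMI, instance type, and subnet IDs are placeholders.

    import boto3

    ec2 = boto3.client("ec2")
    autoscaling = boto3.client("autoscaling")

    # Launch template that requests Dedicated Host tenancy.
    ec2.create_launch_template(
        LaunchTemplateName="licensed-app-template",      # placeholder name
        LaunchTemplateData={
            "ImageId": "ami-0123456789abcdef0",          # placeholder AMI
            "InstanceType": "m5.large",
            "Placement": {"Tenancy": "host"},            # Dedicated Hosts
        },
    )

    # Auto Scaling Group that uses the launch template (not a launch configuration).
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="licensed-app-asg",
        LaunchTemplate={"LaunchTemplateName": "licensed-app-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=6,
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # placeholder subnets
    )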
Q38: Your organization uses AWS CodeDeploy for deployments. Now you are starting a project on the AWS Lambda platform. For your deployments, you’ve been given a requirement of performing blue-green deployments. When you perform deployments, you want to split traffic, sending a small percentage of the traffic to the new version of your application. Which deployment configuration will allow this splitting of traffic?
Notes: With canary, traffic is shifted in two increments. You can choose from predefined canary options that specify the percentage of traffic shifted to your updated Lambda function version in the first increment and the interval, in minutes, before the remaining traffic is shifted in the second increment.
Q39: A financial institution has an application that produces huge amounts of actuary data, which is ultimately expected to be in the terabyte range. There is a need to run complex analytic queries against terabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Which storage service will best meet this requirement?
A. RDS
B. DynamoDB
C. Redshift
D. ElastiCache
Answer: C
Notes: Amazon Redshift is a fast, fully-managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It enables you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds. With Redshift, you can start small for just $0.25 per hour with no commitments and scale-out to petabytes of data for $1,000 per terabyte per year, less than a tenth of the cost of traditional on-premises solutions. Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to run SQL queries directly against exabytes of unstructured data in Amazon S3 data lakes. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Amazon Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, Sequence, Text, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data retrieved, so queries against Amazon S3 run fast, regardless of data set size.
Q40: A company has an application for sharing static content, such as photos. The popularity of the application has grown, and the company is now sharing content worldwide. This worldwide service has caused some issues with latency. What AWS services can be used to host a static website, serve content to globally dispersed users, and address latency issues, while keeping cost under control? Choose two.
Notes: Amazon S3 is an object storage built to store and retrieve any amount of data from anywhere on the Internet. It’s a simple storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs. AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2, or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.
Q41: You have just been hired by a large organization which uses many different AWS services in their environment. Some of the services which handle data include: RDS, Redshift, ElastiCache, DynamoDB, S3, and Glacier. You have been instructed to configure a web application using stateless web servers. Which services can you use to handle session state data? Choose two.
Answers: ElastiCache and DynamoDB
Notes: Stateless web servers should keep session state outside the instance. Both ElastiCache and DynamoDB provide fast key-value access that is well suited to storing and retrieving session data.
Q42: After an IT Steering Committee meeting you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies based on the requirements you are given. Your primary requirement is the necessity for a private, dedicated connection, which bypasses the Internet and can provide throughput of 10 Gbps. Which option will you select?
A. AWS Direct Connect
B. VPC Peering
C. AWS VPN
D. AWS Direct Gateway
Answer: A
Notes: AWS Direct Connect can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections. It uses industry-standard 802.1q VLANs to connect to Amazon VPC using private IP addresses. You can choose from an ecosystem of WAN service providers for integrating your AWS Direct Connect endpoint in an AWS Direct Connect location with your remote networks. AWS Direct Connect lets you establish 1 Gbps or 10 Gbps dedicated network connections (or multiple connections) between AWS networks and one of the AWS Direct Connect locations. You can also work with your provider to create a sub-1 Gbps connection, or use a link aggregation group (LAG) to aggregate multiple 1 Gbps or 10 Gbps connections at a single AWS Direct Connect endpoint, allowing you to treat them as a single, managed connection. A Direct Connect gateway is a globally available resource to enable connections to multiple Amazon VPCs across different Regions or AWS accounts.
Q43: An application is hosted on an EC2 instance in a VPC. The instance is in a subnet in the VPC, and the instance has a public IP address. There is also an internet gateway and a security group with the proper ingress configured. But your testers are unable to access the instance from the Internet. What could be the problem?
A. Make sure the instance has a private IP address.
B. Add a route to the route table, from the subnet containing the instance, to the Internet Gateway.
C. A NAT gateway needs to be configured.
D. A Virtual private gateway needs to be configured.
Answer: B
Notes: The question doesn’t state whether the subnet containing the instance is public or private. An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet. To enable access to or from the internet for instances in a subnet in a VPC, you must do the following:
Attach an internet gateway to your VPC.
Add a route to your subnet’s route table that directs internet-bound traffic to the internet gateway. If a subnet is associated with a route table that has a route to an internet gateway, it’s known as a public subnet. If a subnet is associated with a route table that does not have a route to an internet gateway, it’s known as a private subnet.
Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance.
In your subnet route table, you can specify a route for the internet gateway to all destinations not explicitly known to the route table (0.0.0.0/0 for IPv4 or ::/0 for IPv6). Alternatively, you can scope the route to a narrower range of IP addresses. For example, the public IPv4 addresses of your company’s public endpoints outside of AWS, or the elastic IP addresses of other Amazon EC2 instances outside your VPC. To enable communication over the Internet for IPv4, your instance must have a public IPv4 address or an Elastic IP address that’s associated with a private IPv4 address on your instance. Your instance is only aware of the private (internal) IP address space defined within the VPC and subnet. The internet gateway logically provides the one-to-one NAT on behalf of your instance so that when traffic leaves your VPC subnet and goes to the Internet, the reply address field is set to the public IPv4 address or elastic IP address of your instance and not its private IP address. Conversely, traffic that’s destined for the public IPv4 address or elastic IP address of your instance has its destination address translated into the instance’s private IPv4 address before the traffic is delivered to the VPC. To enable communication over the Internet for IPv6, your VPC and subnet must have an associated IPv6 CIDR block, and your instance must be assigned an IPv6 address from the range of the subnet. IPv6 addresses are globally unique, and therefore public by default.
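A minimal boto3 sketch of the missing piece in this scenario, the default route to the internet gateway; all resource IDs are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Attach the internet gateway to the VPC (skip this if it is already attached).
    ec2.attach_internet_gateway(InternetGatewayId="igw-0123456789abcdef0",
                                VpcId="vpc-0123456789abcdef0")

    # Add a default route so the subnet's route table sends internet-bound
    # traffic to the internet gateway, making the subnet public.
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",        # route table of the subnet
        DestinationCidrBlock="0.0.0.0/0",
        GatewayId="igw-0123456789abcdef0",
    )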
Q44: A data company has implemented a subscription service for storing video files. There are two levels of subscription: personal and professional use. The personal users can upload a total of 5 GB of data, and professional users can upload as much as 5 TB of data. The application can upload files of size up to 1 TB to an S3 Bucket. What is the best way to upload files of this size?
A. Multipart upload
B. Single-part Upload
C. AWS Snowball
D. AWS SnowMobile
Answers: A
Notes: The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object (see Operations on Objects). Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket. You can list all of your in-progress multipart uploads or get a list of the parts that you have uploaded for a specific multipart upload. Each of these operations is explained in this section.
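With the AWS SDKs you rarely call the multipart API directly. A boto3 sketch that lets the transfer manager split a large file into parts automatically; the file name, bucket, key, and thresholds are placeholders:

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Use multipart upload for anything above 100 MB, in 100 MB parts.
    config = TransferConfig(multipart_threshold=100 * 1024 * 1024,
                            multipart_chunksize=100 * 1024 * 1024)

    # upload_file transparently initiates, uploads, and completes the
    # multipart upload (and retries individual parts on failure).
    s3.upload_file("render-output.mp4", "video-uploads-bucket",
                   "renders/output.mp4", Config=config)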
Q45: You have multiple EC2 instances housing applications in a VPC in a single Availability Zone. The applications need to communicate at extremely high throughputs to avoid latency for end users. The average throughput needs to be 6 Gbps. What is the best step you can take to ensure this throughput?
Answer: Place the instances in a cluster placement group.
Notes: Amazon Web Services’ (AWS) solution to reducing latency between instances involves the use of placement groups. As the name implies, a placement group is just that — a group. AWS instances that exist within a common availability zone can be grouped into a placement group. Group members are able to communicate with one another in a way that provides low latency and high throughput. A cluster placement group is a logical grouping of instances within a single Availability Zone. A cluster placement group can span peered VPCs in the same Region. Instances in the same cluster placement group enjoy a higher per-flow throughput limit of up to 10 Gbps for TCP/IP traffic and are placed in the same high-bisection bandwidth segment of the network.
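A minimal boto3 sketch of creating a cluster placement group and launching instances into it; the group name, AMI, and instance type are placeholders (ideally an instance type with enhanced networking):

    import boto3

    ec2 = boto3.client("ec2")

    # Cluster placement groups pack instances close together in one AZ.
    ec2.create_placement_group(GroupName="low-latency-apps", Strategy="cluster")

    # Launch the application instances into the placement group.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",       # placeholder AMI
        InstanceType="c5n.4xlarge",            # network-optimized type (example)
        MinCount=2,
        MaxCount=2,
        Placement={"GroupName": "low-latency-apps"},
    )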
Q46: A team member has been tasked to configure four EC2 instances for four separate applications. These are not high-traffic apps, so there is no need for an Auto Scaling Group. The instances are all in the same public subnet and each instance has an EIP address, and all of the instances have the same Security Group. But none of the instances can send or receive internet traffic. You verify that all the instances have a public IP address. You also verify that an internet gateway has been configured. What is the most likely issue?
A. There is no route in the route table to the internet gateway (or it has been deleted).
B. Each instance needs its own security group.
C. The route table is corrupt.
D. You are using the default nacl.
Answers: A
Notes: The question details all of the configuration needed for internet access, except for a route to the IGW in the route table. This is definitely a key step in any checklist for internet connectivity. It is quite possible to have a subnet with the ‘Public’ attribute set but no route to the Internet in the assigned Route table. (test it yourself). This may have been a setup error, or someone may have thoughtlessly altered the shared Route table for a special case instead of creating a new Route table for the special case.
Q47: You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. The application to be deployed on these instances is a life insurance application which requires path-based and host-based routing. Which type of load balancer will you need to use?
A. Any type of load balancer will meet these requirements.
B. Classic Load Balancer
C. Network Load Balancer
D. Application Load Balancer
Answers: D
Notes: Only the Application Load Balancer supports path-based and host-based routing (a listener-rule sketch follows the list below). Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:
Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
Support for host-based routing. You can configure rules for your listener that forward requests based on the host field in the HTTP header. This enables you to route requests to multiple domains using a single load balancer.
Support for routing based on fields in the request, such as standard and custom HTTP headers and methods, query parameters, and source IP addresses.
Support for routing requests to multiple applications on a single EC2 instance. You can register each instance or IP address with the same target group using multiple ports.
Support for redirecting requests from one URL to another.
Support for returning a custom HTTP response.
Support for registering targets by IP address, including targets outside the VPC for the load balancer.
Support for registering Lambda functions as targets.
Support for the load balancer to authenticate users of your applications through their corporate or social identities before routing requests.
Support for containerized applications. Amazon Elastic Container Service (Amazon ECS) can select an unused port when scheduling a task and register the task with a target group using this port. This enables you to make efficient use of your clusters.
Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level. Attaching a target group to an Auto Scaling group enables you to scale each service dynamically based on demand.
Access logs contain additional information and are stored in compressed format.
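A hedged boto3 sketch of the path-based and host-based listener rules mentioned above; the listener and target group ARNs, priorities, paths, and hostnames are all placeholders:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Path-based rule: send /quotes/* requests to the quoting service.
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/insurance-alb/...",
        Priority=10,
        Conditions=[{"Field": "path-pattern", "Values": ["/quotes/*"]}],
        Actions=[{"Type": "forward",
                  "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/quotes/..."}],
    )

    # Host-based rule: send claims.example.com requests to the claims service.
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/insurance-alb/...",
        Priority=20,
        Conditions=[{"Field": "host-header", "Values": ["claims.example.com"]}],
        Actions=[{"Type": "forward",
                  "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/claims/..."}],
    )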
Q48: You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. You were considering using an Application Load Balancer, but some of the requirements you have been given seem to point to a Classic Load Balancer. Which requirement would be better served by an Application Load Balancer?
A. Support for EC2-Classic
B. Path-based routing
C. Support for sticky sessions using application-generated cookies
D. Support for TCP and SSL listeners
Answers: B
Notes:
Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:
Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
Q49: You have been tasked to review your company disaster recovery plan due to some new requirements. The driving factor is that the Recovery Time Objective has become very aggressive. Because of this, it has been decided to configure Multi-AZ deployments for the RDS MySQL databases. Unrelated to DR, it has been determined that some read traffic needs to be offloaded from the master database. What step can be taken to meet this requirement?
A. Convert to Aurora to allow the standby to serve read traffic.
B. Redirect some of the read traffic to the standby database.
C. Add DAX to the solution to alleviate excess read traffic.
D. Add read replicas to offload some read traffic.
Answer: D
Notes: Amazon RDS Read Replicas for MySQL and MariaDB now support Multi-AZ deployments. Combining Read Replicas with Multi-AZ enables you to build a resilient disaster recovery strategy and simplify your database engine upgrade process. Amazon RDS Read Replicas enable you to create one or more read-only copies of your database instance within the same AWS Region or in a different AWS Region. Updates made to the source database are then asynchronously copied to your Read Replicas. In addition to providing scalability for read-heavy workloads, Read Replicas can be promoted to become a standalone database instance when needed.
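Adding a read replica is a single call against the source instance. A minimal boto3 sketch with placeholder identifiers and instance class:

    import boto3

    rds = boto3.client("rds")

    # Create a read replica of the Multi-AZ source to offload read traffic.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="prod-mysql-replica-1",     # new replica name
        SourceDBInstanceIdentifier="prod-mysql-db",      # existing source instance
        DBInstanceClass="db.r5.large",                   # can differ from the source
    )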
Q50: A gaming company is designing several new games which focus heavily on player-game interaction. The player makes a certain move and the game has to react very quickly to change the environment based on that move and to present the next decision for the player in real-time. A tool is needed to continuously collect data about player-game interactions and feed the data into the gaming platform in real-time. Which AWS service can best meet this need?
A. AWS Lambda
B. Kinesis Data Streams
C. Kinesis Data Analytics
D. AWS IoT
Answers: B
Notes: Kinesis Data Streams can be used to continuously collect data about player-game interactions and feed the data into your gaming platform. With Kinesis Data Streams, you can design a game that provides engaging and dynamic experiences based on players’ actions and behaviors.
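On the producer side, each player-game interaction becomes a record on the stream. A minimal boto3 sketch with a placeholder stream name and payload:

    import boto3
    import json

    kinesis = boto3.client("kinesis")

    # Push one interaction event; records with the same partition key
    # (here, the player ID) land on the same shard in order.
    kinesis.put_record(
        StreamName="player-interactions",                   # placeholder stream
        Data=json.dumps({"player": "p-42", "move": "left", "ts": 1700000000}),
        PartitionKey="p-42",
    )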
Q51: You are designing an architecture for a financial company which provides a day trading application to customers. After viewing the traffic patterns for the existing application you notice that traffic is fairly steady throughout the day, with the exception of large spikes at the opening of the market in the morning and at closing around 3 pm. Your architecture will include an Auto Scaling Group of EC2 instances. How can you configure the Auto Scaling Group to ensure that system performance meets the increased demands at opening and closing of the market?
A. Configure a Dynamic Scaling Policy to scale based on CPU Utilization.
B. Use a load balancer to ensure that the load is distributed evenly during high-traffic periods.
C. Configure your Auto Scaling Group to have a desired size which will be able to meet the demands of the high-traffic periods.
D. Use a predictive scaling policy on the Auto Scaling Group to meet opening and closing spikes.
Answer: D
Notes: Use a predictive scaling policy on the Auto Scaling Group to meet opening and closing spikes. Using data collected from your actual EC2 usage, further informed by billions of data points drawn from AWS's own observations, well-trained machine learning models predict your expected traffic (and EC2 usage), including daily and weekly patterns. The model needs at least one day of historical data to start making predictions; it is re-evaluated every 24 hours to create a forecast for the next 48 hours. What we can gather from the question is that the spikes at the beginning and end of the day can potentially affect performance. We could use dynamic scaling, but scaling out takes a little time. We have the information to be proactive, so use predictive scaling and be ready for these spikes at opening and closing.
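A hedged boto3 sketch of attaching a predictive scaling policy to the group; the group name and target value are placeholders, and the configuration shown targets average CPU via the predefined ASGCPUUtilization metric pair:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Predictive scaling: forecast the daily open/close spikes and scale ahead of them.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="day-trading-asg",          # placeholder ASG name
        PolicyName="market-open-close-predictive",
        PolicyType="PredictiveScaling",
        PredictiveScalingConfiguration={
            "MetricSpecifications": [{
                "TargetValue": 60.0,                     # target average CPU (%)
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization",
                },
            }],
            "Mode": "ForecastAndScale",                  # forecast and act on it
        },
    )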
Q52: A software gaming company has produced an online racing game which uses CloudFront for fast delivery to worldwide users. The game also uses DynamoDB for storing in-game and historical user data. The DynamoDB table has a preconfigured read and write capacity. Users have been reporting slow down issues, and an analysis has revealed that the DynamoDB table has begun throttling during peak traffic times. Which step can you take to improve game performance?
A. Add a load balancer in front of the web servers.
B. Add ElastiCache to cache frequently accessed data in memory.
C. Add an SQS Queue to queue requests which could be lost.
D. Make sure DynamoDB Auto Scaling is turned on.
Answers: D
Notes: Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic, without throttling. When the workload decreases, Application Auto Scaling decreases the throughput so that you don’t pay for unused provisioned capacity. Note that if you use the AWS Management Console to create a table or a global secondary index, DynamoDB auto scaling is enabled by default. You can modify your auto scaling settings at any time.
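Under the hood this is Application Auto Scaling. A hedged boto3 sketch registering the table's read capacity as a scalable target and attaching a target-tracking policy; the table name and capacity limits are placeholders:

    import boto3

    aas = boto3.client("application-autoscaling")

    # Register the table's read capacity as a scalable target.
    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/game-data",                      # placeholder table
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=25,
        MaxCapacity=1000,
    )

    # Target-tracking policy: keep consumed/provisioned reads near 70%.
    aas.put_scaling_policy(
        PolicyName="game-data-read-scaling",
        ServiceNamespace="dynamodb",
        ResourceId="table/game-data",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization",
            },
        },
    )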
Q53: You have configured an Auto Scaling Group of EC2 instances. You have begun testing the scaling of the Auto Scaling Group by using a stress tool to drive up the CPU utilization metric and force scale-out actions, and then removing the stress to force a scale-in. But you notice that these actions only take place at five-minute intervals. What is happening?
A. Auto Scaling Groups can only scale in intervals of five minutes or greater.
B. The Auto Scaling Group is following the default cooldown procedure.
C. A load balancer is managing the load and limiting the effectiveness of stressing the servers.
D. The stress tool is configured to run for five minutes.
Answer: B
Notes: The cooldown period helps you prevent your Auto Scaling group from launching or terminating additional instances before the effects of previous activities are visible. You can configure the length of time based on your instance startup time or other application needs. When you use simple scaling, after the Auto Scaling group scales using a simple scaling policy, it waits for a cooldown period to complete before any further scaling activities due to simple scaling policies can start. An adequate cooldown period helps to prevent the initiation of an additional scaling activity based on stale metrics. By default, all simple scaling policies use the default cooldown period associated with your Auto Scaling Group, but you can configure a different cooldown period for certain policies, as described in the following sections. Note that Amazon EC2 Auto Scaling honors cooldown periods when using simple scaling policies, but not when using other scaling policies or scheduled scaling. A default cooldown period automatically applies to any scaling activities for simple scaling policies, and you can optionally request to have it apply to your manual scaling activities. When you use the AWS Management Console to update an Auto Scaling Group, or when you use the AWS CLI or an AWS SDK to create or update an Auto Scaling Group, you can set the optional default cooldown parameter. If a value for the default cooldown period is not provided, its default value is 300 seconds.
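The default cooldown is a property of the group itself. A minimal boto3 sketch of shortening it; the group name and value are placeholders:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Reduce the default cooldown from 300 seconds so simple scaling
    # policies can react more often during testing.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="stress-test-asg",   # placeholder ASG name
        DefaultCooldown=120,
    )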
Q54: A team of architects is designing a new AWS environment for a company which wants to migrate to the Cloud. The architects are considering the use of EC2 instances with instance store volumes. The architects realize that the data on the instance store volumes is ephemeral. Which action will not cause the data to be deleted on an instance store volume?
A. Reboot
B. The underlying disk drive fails.
C. Hardware disk failure.
D. Instance is stopped
Answers: A
Notes: Some Amazon Elastic Compute Cloud (Amazon EC2) instance types come with a form of directly attached, block-device storage known as the instance store. The instance store is ideal for temporary storage, because the data stored in instance store volumes is not persistent through instance stops, terminations, or hardware failures.
Q55: You work for an advertising company that has a real-time bidding application. You are also using CloudFront on the front end to accommodate a worldwide user base. Your users begin complaining about response times and pauses in real-time bidding. Which service can be used to reduce DynamoDB response times by an order of magnitude (milliseconds to microseconds)?
A. DAX
B. DynamoDB Auto Scaling
C. ElastiCache
D. CloudFront Edge Caches
Answers: A
Notes: Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second. While DynamoDB offers consistent single-digit millisecond latency, DynamoDB with DAX takes performance to the next level with response times in microseconds for millions of requests per second for read-heavy workloads. With DAX, your applications remain fast and responsive, even when a popular event or news story drives unprecedented request volumes your way. No tuning required.
Q56: A travel company has deployed a website which serves travel updates to users all over the world. The traffic to the site's database is very read-heavy and can have latency issues at certain times of the year. What can you do to alleviate these latency issues?
Notes: Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server as well as Amazon Aurora.
Q57: A large financial institution is gradually moving their infrastructure and applications to AWS. The company has data needs that will utilize all of RDS, DynamoDB, Redshift, and ElastiCache. Which description best describes Amazon Redshift?
A. Key-value and document database that delivers single-digit millisecond performance at any scale.
B. Cloud-based relational database.
C. Can be used to significantly improve latency and throughput for many read-heavy application workloads.
D. Near real-time complex querying on massive data sets.
Answers: D
Notes: Amazon Redshift is a fast, fully-managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds. With Redshift, you can start small for just $0.25 per hour with no commitments and scale out to petabytes of data for $1,000 per terabyte per year, less than a tenth the cost of traditional on-premises solutions. Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to run SQL queries directly against exabytes of unstructured data in Amazon S3 data lakes. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Amazon Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, Sequence, Text, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data retrieved, so queries against Amazon S3 run fast, regardless of data set size.
Q58: You are designing an architecture which will house an Auto Scaling Group of EC2 instances. The application hosted on the instances is expected to be an extremely popular social networking site. Forecasts for traffic to this site expect very high traffic and you will need a load balancer to handle tens of millions of requests per second while maintaining high throughput at ultra low latency. You need to select the type of load balancer to front your Auto Scaling Group to meet this high traffic requirement. Which load balancer will you select?
A. You will need an Application Load Balancer to meet this requirement.
B. All the AWS load balancers meet the requirement and perform the same.
C. You will select a Network Load Balancer to meet this requirement.
D. You will need a Classic Load Balancer to meet this requirement.
Answers: C
Notes: Network Load Balancer Overview: A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration. When you enable an Availability Zone for the load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. By default, each load balancer node distributes traffic across the registered targets in its Availability Zone only. If you enable cross-zone load balancing, each load balancer node distributes traffic across the registered targets in all enabled Availability Zones. It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part. The Network Load Balancer is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:
Static IP Addresses – Each Network Load Balancer provides a single IP address for each Availability Zone in its purview. If you have targets in us-west-2a and other targets in us-west-2c, NLB will create and manage two IP addresses (one per AZ); connections to that IP address will spread traffic across the instances in all the VPC subnets in the AZ. You can also specify an existing Elastic IP for each AZ for even greater control. With full control over your IP addresses, a Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.
Zonality – The IP-per-AZ feature reduces latency with improved performance, improves availability through isolation and fault tolerance, and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single AZ while still providing automatic failover should those targets become unavailable.
Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.
Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.
Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.
Q59: An organization of about 100 employees has performed the initial setup of users in IAM. All users except administrators have the same basic privileges. But now it has been determined that 50 employees will have extra restrictions on EC2. They will be unable to launch new instances or alter the state of existing instances. What will be the quickest way to implement these restrictions?
A. Create an IAM Role for the restrictions. Attach it to the EC2 instances.
B. Create the appropriate policy. Place the restricted users in the new policy.
C. Create the appropriate policy. With only 50 users, attach the policy to each user.
D. Create the appropriate policy. Create a new group for the restricted users. Place the restricted users in the new group and attach the policy to the group.
Answer: D
Notes: You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, ACLs, and session policies. IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, if a policy allows the GetUser action, then a user with that policy can get user information from the AWS Management Console, the AWS CLI, or the AWS API. When you create an IAM user, you can choose to allow console or programmatic access. If console access is allowed, the IAM user can sign in to the console using a user name and password. Or if programmatic access is allowed, the user can use access keys to work with the CLI or API.
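A hedged boto3 sketch of option D: create a managed policy that denies launching and changing the state of instances, create a group, attach the policy, and add the restricted users. The policy, group, and user names are placeholders.

    import boto3
    import json

    iam = boto3.client("iam")

    # Deny launching new instances and altering the state of existing ones.
    deny_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": [
                "ec2:RunInstances",
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:RebootInstances",
                "ec2:TerminateInstances",
            ],
            "Resource": "*",
        }],
    }

    policy = iam.create_policy(PolicyName="DenyEC2StateChanges",
                               PolicyDocument=json.dumps(deny_policy))

    iam.create_group(GroupName="restricted-ec2-users")
    iam.attach_group_policy(GroupName="restricted-ec2-users",
                            PolicyArn=policy["Policy"]["Arn"])

    # Add each of the restricted users to the group.
    for user in ["user1", "user2"]:                      # placeholder user names
        iam.add_user_to_group(GroupName="restricted-ec2-users", UserName=user)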
Q60: You are managing S3 buckets in your organization. This management of S3 extends to Amazon Glacier. For auditing purposes you would like to be informed if an object is restored to S3 from Glacier. What is the most efficient way you can do this?
A. Create a CloudWatch event for uploads to S3
B. Create an SNS notification for any upload to S3.
C. Configure S3 notifications for restore operations from Glacier.
D. Create a Lambda function which is triggered by restoration of object from Glacier to S3.
Answers: C
Notes: The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. An S3 notification can be set up to notify you when objects are restored from Glacier to S3.
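A hedged boto3 sketch of wiring the bucket's restore-completed events to an SNS topic; the bucket name and topic ARN are placeholders, and the topic policy must already allow S3 to publish:

    import boto3

    s3 = boto3.client("s3")

    # Notify an SNS topic whenever a Glacier restore finishes for any object.
    s3.put_bucket_notification_configuration(
        Bucket="audit-archive-bucket",                   # placeholder bucket
        NotificationConfiguration={
            "TopicConfigurations": [{
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:glacier-restores",
                "Events": ["s3:ObjectRestore:Completed"],
            }],
        },
    )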
Q61: Your company has gotten back results from an audit. One of the mandates from the audit is that your application, which is hosted on EC2, must encrypt the data before writing this data to storage. Which service could you use to meet this requirement?
A. AWS Cloud HSM
B. Security Token Service
C. EBS encryption
D. AWS KMS
Answers: D
Notes: You can configure your application to use the AWS KMS API to encrypt all data before saving it to disk. The AWS documentation on choosing an encryption service explains which option fits each use case.
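A minimal boto3 sketch of encrypting data with a KMS key before writing it to storage; the key alias and payload are placeholders, and for large payloads you would typically generate a data key for envelope encryption instead:

    import boto3

    kms = boto3.client("kms")

    # Encrypt the plaintext under a customer-managed KMS key.
    result = kms.encrypt(
        KeyId="alias/app-data-key",                  # placeholder key alias
        Plaintext=b"sensitive record contents",
    )
    ciphertext = result["CiphertextBlob"]            # write this to storage

    # Later, decrypt without naming the key (it is encoded in the blob).
    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]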
Q62: Recent worldwide events have dictated that you perform your duties as a Solutions Architect from home. You need to be able to manage several EC2 instances while working from home and have been testing the ability to ssh into these instances. One instance in particular has been a problem and you cannot ssh into this instance. What should you check first to troubleshoot this issue?
A. Make sure that the security group for the instance has ingress on port 80 from your home IP address.
B. Make sure that your VPC has a connected Virtual Private Gateway.
C. Make sure that the security group for the instance has ingress on port 22 from your home IP address.
D. Make sure that the Security Group for the instance has ingress on port 443 from your home IP address.
Answer: C
Notes: The rules of a security group control the inbound traffic that’s allowed to reach the instances that are associated with the security group. The rules also control the outbound traffic that’s allowed to leave them. The following are the characteristics of security group rules:
By default, security groups allow all outbound traffic.
Security group rules are always permissive; you can’t create rules that deny access.
Security groups are stateful. If you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. For VPC security groups, this also means that responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules. For more information, see Connection tracking.
You can add and remove rules at any time. Your changes are automatically applied to the instances that are associated with the security group. The effect of some rule changes can depend on how the traffic is tracked. For more information, see Connection tracking. When you associate multiple security groups with an instance, the rules from each security group are effectively aggregated to create one set of rules. Amazon EC2 uses this set of rules to determine whether to allow access. You can assign multiple security groups to an instance. Therefore, an instance can have hundreds of rules that apply. This might cause problems when you access the instance. We recommend that you condense your rules as much as possible.
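A minimal boto3 sketch of option C, adding an SSH ingress rule scoped to a single home IP; the security group ID and address are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Allow SSH (TCP 22) only from the architect's home IP address.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",              # placeholder security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.25/32", "Description": "Home IP"}],
        }],
    )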
Q62: A consultant is hired by a small company to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within the VPC. The initial instances will be placed in a public subnet. The consultant begins to create security groups. What is true of the default security group?
A. You can delete this group, however, you can’t change the group’s rules.
B. You can delete this group or you can change the group’s rules.
C. You can’t delete this group, nor can you change the group’s rules.
D. You can’t delete this group, however, you can change the group’s rules.
Answers: D
Notes: Your VPC includes a default security group. You can’t delete this group, however, you can change the group’s rules. The procedure is the same as modifying any other security group. For more information, see Adding, removing, and updating rules.
Q63: You are evaluating the security setting within the main company VPC. There are several NACLs and security groups to evaluate and possibly edit. What is true regarding NACLs and security groups?
A. Network ACLs and security groups are both stateful.
B. Network ACLs and security groups are both stateless.
C. Network ACLs are stateless, and security groups are stateful.
D. Network ACLs are stateful, and security groups are stateless.
Answer: C
Notes: Network ACLs are stateless, which means that responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
The following are the basic characteristics of security groups for your VPC:
There are quotas on the number of security groups that you can create per VPC, the number of rules that you can add to each security group, and the number of security groups that you can associate with a network interface. For more information, see Amazon VPC quotas.
You can specify allow rules, but not deny rules.
You can specify separate rules for inbound and outbound traffic.
When you create a security group, it has no inbound rules. Therefore, no inbound traffic originating from another host to your instance is allowed until you add inbound rules to the security group.
By default, a security group includes an outbound rule that allows all outbound traffic. You can remove the rule and add outbound rules that allow specific outbound traffic only. If your security group has no outbound rules, no outbound traffic originating from your instance is allowed.
Security groups are stateful. If you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. Responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.
Q64: Your company needs to deploy an application in the company AWS account. The application will reside on EC2 instances in an Auto Scaling Group fronted by an Application Load Balancer. The company has been using Elastic Beanstalk to deploy the application due to limited AWS experience within the organization. The application now needs upgrades and a small team of subcontractors have been hired to perform these upgrades. What can be used to provide the subcontractors with short-lived access tokens that act as temporary security credentials to the company AWS account?
A. IAM Roles
B. AWS STS
C. IAM user accounts
D. AWS SSO
Answers: B
Notes: You can use the AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use, with the following differences: Temporary security credentials are short-term, as the name implies. They can be configured to last for anywhere from a few minutes to several hours. After the credentials expire, AWS no longer recognizes them or allows any kind of access from API requests made with them. Temporary security credentials are not stored with the user but are generated dynamically and provided to the user when requested. When (or even before) the temporary security credentials expire, the user can request new credentials, as long as the user requesting them still has permissions to do so.
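A hedged boto3 sketch: the subcontractors assume a role in the company account and receive short-lived credentials from STS. The role ARN, session name, and duration are placeholders.

    import boto3

    sts = boto3.client("sts")

    # Exchange the caller's identity for temporary credentials (1 hour here).
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/SubcontractorUpgradeRole",
        RoleSessionName="upgrade-session",
        DurationSeconds=3600,
    )
    creds = resp["Credentials"]   # AccessKeyId, SecretAccessKey, SessionToken, Expiration

    # Use the temporary credentials for subsequent calls.
    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )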
Q65: The company you work for has reshuffled teams a bit and you’ve been moved from the AWS IAM team to the AWS Network team. One of your first assignments is to review the subnets in the main VPCs. What are two key concepts regarding subnets?
A. A subnet spans all the Availability Zones in a Region.
B. Private subnets can only hold databases.
C. Each subnet maps to a single Availability Zone.
D. Every subnet you create is associated with the main route table for the VPC.
E. Each subnet is associated with one security group.
Answers: C and D
Notes: A VPC spans all the Availability Zones in the Region. After creating a VPC, you can add one or more subnets in each Availability Zone. When you create a subnet, you specify the CIDR block for the subnet, which is a subset of the VPC CIDR block; each subnet is assigned a unique ID. Each subnet must reside entirely within one Availability Zone and cannot span zones, and every subnet you create is automatically associated with the VPC's main route table until you explicitly associate it with a different route table. Availability Zones are distinct locations that are engineered to be isolated from failures in other Availability Zones, so launching instances in separate Availability Zones protects your applications from the failure of a single location. You can optionally add subnets in a Local Zone, which is an AWS infrastructure deployment that places compute, storage, database, and other select services closer to your end users. A Local Zone enables your end users to run applications that require single-digit millisecond latencies. For information about the Regions that support Local Zones, see Available Regions in the Amazon EC2 User Guide for Linux Instances.
Q66: Amazon Web Services offers 4 different levels of support. Which of the following are valid support levels? Choose 3
A. Enterprise
B. Developer
C. Corporate
D. Business
E. Free Tier
Answer: A B D Notes: The correct answers are Enterprise, Business, Developer. References: https://docs.aws.amazon.com/
Q67: You are reviewing Change Control requests, and you note that there is a change designed to reduce wasted CPU cycles by increasing the value of your Amazon SQS “VisibilityTimeout” attribute. What does this mean?
A. While processing a message, a consumer instance can amend the message visibility counter by a fixed amount.
B. When a consumer instance retrieves a message, that message will be hidden from other consumer instances for a fixed period.
C. When the consumer instance polls for new work the SQS service will allow it to wait a certain time for a message to be available before closing the connection.
D. While processing a message, a consumer instance can reset the message visibility by restarting the preset timeout counter.
E. When the consumer instance polls for new work, the consumer instance will wait a certain time until it has a full workload before closing the connection.
F. When a new message is added to the SQS queue, it will be hidden from consumer instances for a fixed period.
Answer: B Notes: Poor timing of SQS processes can significantly impact the cost effectiveness of the solution. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. References: https://docs.aws.amazon.com/sqs
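A minimal boto3 sketch showing the visibility timeout in action; the queue URL is a placeholder. The received message stays hidden from other consumers for the timeout, and the consumer can extend it while still working.

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # placeholder

    # Receive one message; it is hidden from other consumers for 60 seconds.
    msgs = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                               VisibilityTimeout=60)

    for msg in msgs.get("Messages", []):
        # Still processing? Extend the visibility timeout instead of letting
        # the message reappear and be processed twice.
        sqs.change_message_visibility(QueueUrl=queue_url,
                                      ReceiptHandle=msg["ReceiptHandle"],
                                      VisibilityTimeout=120)
        # ... do the work, then delete the message ...
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])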
Q68: You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP. However, when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect?
A. After a few minutes.
B. Immediately.
C. Straight away, but to the new instances only.
D. Straight away to the new instances, but old instances must be stopped and restarted before the new rules apply.
Answer: B Notes: Changes to security group rules take effect immediately and apply to all instances associated with the security group, without needing to restart them. References: https://docs.aws.amazon.com/vpc
Q69: Amazon SQS keeps track of all tasks and events in an application.
A. True
B. False
Answer: B Notes: Amazon SWF (not Amazon SQS) keeps track of all tasks and events in an application. Amazon SQS requires you to implement your own application-level tracking, especially if your application uses multiple queues. See the Amazon SWF FAQs. References: https://docs.aws.amazon.com/sqs
Q70: Your Security Manager has hired a security contractor to audit your network and firewall configurations. The consultant doesn’t have access to an AWS account. You need to provide the required access for the auditing tasks, and answer a question about login details for the official AWS firewall appliance. Which of the following might you do? Choose 2
A. Create an IAM User with a policy that can Read Security Group and NACL settings.
B. Explain that AWS implements network security differently and that there is no such thing as an official AWS firewall appliance. Security Groups and NACLs are used instead.
C. Create an IAM Role with a policy that can Read Security Group and NACL settings.
D. Explain that AWS is a cloud service and that AWS manages the Network appliances.
E. Create an IAM Role with a policy that can Read Security Group and Route settings.
Answer: A and B Notes: Create an IAM user for the auditor and explain that the firewall functionality is implemented as stateful Security Groups, and stateless subnet NACLs. AWS has removed the Firewall appliance from the hub of the network and implemented the firewall functionality as stateful Security Groups, and stateless subnet NACLs. This is not a new concept in networking, but rarely implemented at this scale. References: https://docs.aws.amazon.com/iam
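A minimal boto3 sketch of option A, assuming hypothetical user and policy names; the policy grants only the describe calls needed to read Security Group and NACL settings.

import json
import boto3

iam = boto3.client("iam")

# Hypothetical read-only policy covering Security Group and NACL settings.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:DescribeSecurityGroups",
            "ec2:DescribeNetworkAcls",
        ],
        "Resource": "*",
    }],
}

policy = iam.create_policy(
    PolicyName="AuditorReadSgNacl",              # hypothetical policy name
    PolicyDocument=json.dumps(policy_doc),
)
iam.create_user(UserName="external-auditor")     # hypothetical user name
iam.attach_user_policy(
    UserName="external-auditor",
    PolicyArn=policy["Policy"]["Arn"],
)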
Q71: How many internet gateways can I attach to my custom VPC?
A. 5 B. 3 C. 2 D. 1
Answer: D Notes: 1 References: https://docs.aws.amazon.com/vpc
Q72: How long can a message be retained in an SQS Queue?
Answer: The message retention period is configurable from 60 seconds up to 14 days; the default retention period is 4 days. References: https://docs.aws.amazon.com/sqs
Q73: Although your application customarily runs at 30% usage, you have identified a recurring usage spike (>90%) between 8pm and midnight daily. What is the most cost-effective way to scale your application to meet this increased need?
A. Manually deploy Reactive Event-based Scaling each night at 7:45.
B. Deploy additional EC2 instances to meet the demand.
C. Use scheduled scaling to boost your capacity at a fixed interval.
D. Increase the size of the Resource Group to meet demand.
Answer: C Notes: Scheduled scaling allows you to set your own scaling schedule. For example, let’s say that every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling actions based on the predictable traffic patterns of your web application. Scaling actions are performed automatically as a function of time and date. Reference: Scheduled scaling for Amazon EC2 Auto Scaling.
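A hedged boto3 sketch of scheduled scaling for this scenario, assuming a hypothetical Auto Scaling group name and illustrative capacity numbers; cron recurrences are evaluated in UTC unless a TimeZone is supplied.

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out shortly before the nightly spike and back in after midnight.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",            # hypothetical group name
    ScheduledActionName="nightly-scale-out",
    Recurrence="45 19 * * *",                  # cron expression, UTC by default
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="nightly-scale-in",
    Recurrence="15 0 * * *",
    MinSize=2,
    MaxSize=12,
    DesiredCapacity=2,
)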
Q74: To save money, you quickly stored some data in one of the attached volumes of an EC2 instance and stopped it for the weekend. When you returned on Monday and restarted your instance, you discovered that your data was gone. Why might that be?
A. The EBS volume was not large enough to store your data.
B. The instance failed to connect to the root volume on Monday.
C. The elastic block-level storage service failed over the weekend.
D. The volume was ephemeral, block-level storage. Data on an instance store volume is lost if an instance is stopped.
Answer: D Notes: The EC2 instance had an instance store volume attached to it. Instance store volumes are ephemeral, meaning that data in attached instance store volumes is lost if the instance stops. Reference: Instance store lifetime
Q75: Select all the true statements on S3 URL styles: Choose 2
A. Virtual hosted-style URLs will eventually be deprecated in favor of Path-Style URLs for S3 bucket access.
B. Virtual-host-style URLs (such as: https://bucket-name.s3.Region.amazonaws.com/key name) are supported by AWS.
C. Path-Style URLs (such as https://s3.Region.amazonaws.com/bucket-name/key name) are supported by AWS.
D. DNS compliant names are NOT recommended for the URLs to access S3.
Answer: B and C Notes: Virtual-host-style URLs and Path-Style URLs (soon to be retired) are supported by AWS. DNS compliant names are recommended for the URLs to access S3. References: https://docs.aws.amazon.com/s3
Q76: With EBS, I can ____. Choose 2
A. Create an encrypted snapshot from an unencrypted snapshot by creating an encrypted copy of the unencrypted snapshot.
B. Create an unencrypted volume from an encrypted snapshot.
C. Create an encrypted volume from a snapshot of another encrypted volume.
D. Encrypt an existing volume.
Answer: A and C Notes: Although there is no direct way to encrypt an existing unencrypted volume or snapshot, you can encrypt them by creating either a volume or a snapshot. Reference: Encrypting unencrypted resources. You can create an encrypted volume from a snapshot of another encrypted volume. References: https://docs.aws.amazon.com/ebs
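As a sketch of the encrypt-by-copying approach, assuming hypothetical snapshot and Availability Zone identifiers:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Copy an unencrypted snapshot, enabling encryption on the copy (default KMS key).
copy = ec2.copy_snapshot(
    SourceSnapshotId="snap-0123456789abcdef0",   # hypothetical unencrypted snapshot
    SourceRegion="us-east-1",
    Encrypted=True,
)

# Volumes created from an encrypted snapshot are automatically encrypted.
ec2.create_volume(
    SnapshotId=copy["SnapshotId"],
    AvailabilityZone="us-east-1a",
)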
Q77: You have been engaged by a company to design and lead a migration to an AWS environment. The team is concerned about the capabilities of the new environment, especially when it comes to high availability and cost-effectiveness. The design calls for about 20 instances (c3.2xlarge) pulling jobs/messages from SQS. Network traffic per instance is estimated to be around 500 Mbps at the beginning and end of each job. Which configuration should you plan on deploying?
A. Use a 2nd Network Interface to separate the SQS traffic from the storage traffic.
B. Choose a different instance type that better matched the traffic demand.
C. Spread the Instances over multiple AZs to minimize the traffic concentration and maximize fault-tolerance.
D. Deploy as a Cluster Placement Group as the aggregated burst traffic could be around 10 Gbps.
Answer: C Notes: With a multi-AZ configuration, the entire Availability Zone is ruled out as a single point of failure, which ensures high availability. Wherever possible, use simple solutions such as spreading the load out rather than expensive high-tech solutions. Reference: Availability Zones
Q78: You are a solutions architect working for a cosmetics company. Your company has a busy Magento online store that consists of a two-tier architecture. The web servers are on EC2 instances deployed across multiple AZs, and the database is on a Multi-AZ RDS MySQL database instance. Your store is having a Black Friday sale in five days, and having reviewed the performance for the last sale you expect the site to start running very slowly during the peak load. You investigate and you determine that the database was struggling to keep up with the number of reads that the store was generating. Which solution would you implement to improve the application read performance the most?
A. Deploy an Amazon ElastiCache cluster with nodes running in each AZ.
B. Upgrade your RDS MySQL instance to use provisioned IOPS.
C. Add an RDS Read Replica in each AZ.
D. Upgrade the RDS MySQL instance to a larger type.
Answer: C Notes: RDS Read Replicas can substantially increase the read performance of your database, and multiple read replicas can be created to increase performance further. This option also requires the fewest code modifications and can generally be implemented within the specified timeframe. Reference: Amazon RDS Read Replicas
Q79: Which native AWS service will act as a file system mounted on an S3 bucket?
A. Amazon Elastic Block Store
B. File Gateway
C. Amazon S3
D. Amazon Elastic File System
Answer: B Notes: A file gateway supports a file interface into Amazon Simple Storage Service (Amazon S3) and combines a service and a virtual software appliance. By using this combination, you can store and retrieve objects in Amazon S3 using industry-standard file protocols such as Network File System (NFS) and Server Message Block (SMB). The software appliance, or gateway, is deployed into your on-premises environment as a virtual machine (VM) running on VMware ESXi, Microsoft Hyper-V, or Linux Kernel-based Virtual Machine (KVM) hypervisor. The gateway provides access to objects in S3 as files or file share mount points. You can manage your S3 data using lifecycle policies, cross-region replication, and versioning. You can think of a file gateway as a file system mount on S3. Reference: What is AWS Storage Gateway? .
Q80: You have been evaluating the NACLs in your company. Most of the NACLs are configured the same way: rule 100 allows all traffic, rule 200 denies all traffic, and the default ‘*’ rule denies all traffic. If a request comes in, how will it be evaluated?
A. The default will deny traffic.
B. The request will be allowed.
C. The highest numbered rule will be used, a deny.
D. All rules will be evaluated and the end result will be Deny.
Answer: B
Notes: Rules are evaluated starting with the lowest-numbered rule. As soon as a rule matches traffic, it is applied immediately, regardless of any higher-numbered rule that may contradict it. In this case rule 100 (Allow) matches first, so the request is allowed. The following are the basic things that you need to know about network ACLs:
Your VPC automatically comes with a modifiable default network ACL. By default, it allows all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic.
You can create a custom network ACL and associate it with a subnet. By default, each custom network ACL denies all inbound and outbound traffic until you add rules.
Each subnet in your VPC must be associated with a network ACL. If you don’t explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL.
You can associate a network ACL with multiple subnets; however, a subnet can be associated with only one network ACL at a time. When you associate a network ACL with a subnet, the previous association is removed.
A network ACL contains a numbered list of rules, evaluated in order starting with the lowest-numbered rule, to determine whether traffic is allowed in or out of any subnet associated with the network ACL. The highest number that you can use for a rule is 32766. Start by creating rules in increments (for example, increments of 10 or 100) so that you can insert new rules later where you need them.
A network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic. Network ACLs are stateless, which means that responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
Q81: You have been given an assignment to configure Network ACLs in your VPC. Before configuring the NACLs, you need to understand how the NACLs are evaluated. How are NACL rules evaluated?
A. NACL rules are evaluated by rule number from lowest to highest and executed immediately when a matching rule is found.
B. NACL rules are evaluated by rule number from highest to lowest, and executed immediately when a matching rule is found.
C. All NACL rules that you configure are evaluated before traffic is passed through.
D. NACL rules are evaluated by rule number from highest to lowest, and all are evaluated before traffic is passed through.
Answer: A
Notes: NACL rules are evaluated by rule number from lowest to highest and executed immediately when a matching rule is found.
You can add or remove rules from the default network ACL, or create additional network ACLs for your VPC. When you add or remove rules from a network ACL, the changes are automatically applied to the subnets that it’s associated with. A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC. The following are the parts of a network ACL rule:
Rule number. Rules are evaluated starting with the lowest-numbered rule. As soon as a rule matches traffic, it’s applied regardless of any higher-numbered rule that might contradict it.
Type. The type of traffic, for example, SSH. You can also specify all traffic or a custom range.
Protocol. You can specify any protocol that has a standard protocol number. For more information, see Protocol Numbers. If you specify ICMP as the protocol, you can specify any or all of the ICMP types and codes.
Port range. The listening port or port range for the traffic. For example, 80 for HTTP traffic.
Source. [Inbound rules only] The source of the traffic (CIDR range).
Destination. [Outbound rules only] The destination for the traffic (CIDR range).
Allow/Deny. Whether to allow or deny the specified traffic.
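For illustration, the rule parts above map directly onto the CreateNetworkAclEntry API. A minimal boto3 sketch, assuming a hypothetical NACL ID; because NACLs are stateless, a matching outbound rule for the ephemeral response ports would also be needed.

import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTPS with a low rule number so it is evaluated before broader deny rules.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # hypothetical NACL ID
    RuleNumber=110,
    Protocol="6",                          # TCP (standard protocol number)
    RuleAction="allow",
    Egress=False,                          # inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)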
Q82: Your company has gone through an audit with a focus on data storage. You are currently storing historical data in Amazon Glacier. One of the results of the audit is that a portion of the infrequently-accessed historical data must be able to be accessed immediately upon request. Where can you store this data to meet this requirement?
A. S3 Standard
B. Leave infrequently-accessed data in Glacier.
C. S3 Standard-IA
D. Store the data in EBS
Answer: C
Notes: S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low-per-GB storage price and per GB retrieval fee. This combination of low cost and high performance make S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files. S3 Storage Classes can be configured at the object level and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.
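A short boto3 sketch of such a lifecycle transition, assuming a hypothetical bucket and prefix; note that objects must remain in S3 Standard for at least 30 days before they can transition to S3 Standard-IA.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-historical-data",          # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-standard-ia",
            "Filter": {"Prefix": "historical/"},   # hypothetical prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
            ],
        }]
    },
)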
Q84: After an IT Steering Committee meeting, you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies, such as VPN and Direct Connect, and based on the requirements you have decided to configure a VPN connection. What features and advantages can a VPN connection provide?
Answer: A VPN provides a connection between an on-premises network and a VPC, using a secure and private connection with IPsec and TLS.
A VPC/VPN Connection utilizes IPSec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN Connections can be configured in minutes and are a good solution if you have an immediate need, have low-to-modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity.
AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources or your on-premises network. With AWS Client VPN, you configure an endpoint to which your users can connect to establish a secure TLS VPN session. This enables clients to access resources in AWS or on-premises from any location using an OpenVPN-based VPN client.
Q86: Your company has decided to go to a hybrid cloud environment. Part of this effort will be to move a large data warehouse to the cloud. The warehouse is 50TB, and will take over a month to migrate given the current bandwidth available. What is the best option available to perform this migration considering both cost and performance aspects?
Answer: Use AWS Snowball Edge. Notes: The AWS Snowball Edge is a type of Snowball device with on-board storage and compute power for select AWS capabilities. Snowball Edge can undertake local processing and edge-computing workloads in addition to transferring data between your local environment and the AWS Cloud.
Each Snowball Edge device can transport data at speeds faster than the internet. This transport is done by shipping the data in the appliances through a regional carrier. The appliances are rugged shipping containers, complete with E Ink shipping labels. The AWS Snowball Edge device differs from the standard Snowball because it can bring the power of the AWS Cloud to your on-premises location, with local storage and compute functionality.
Snowball Edge devices have three options for device configurations: storage optimized, compute optimized, and with GPU. When this guide refers to Snowball Edge devices, it’s referring to all options of the device. Whenever specific information applies to only one or more optional configurations of devices, like how the Snowball Edge with GPU has an on-board GPU, it will be called out. For more information, see Snowball Edge Device Options.
Q87: You have been assigned the review of the security in your company AWS cloud environment. Your final deliverable will be a report detailing potential security issues. One of the first things that you need to describe is the responsibilities of the company under the shared responsibility model. Which measure is the customer’s responsibility?
Answer: EC2 instance OS patching.
Notes: Security and compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The customer assumes responsibility for, and management of, the guest operating system (including updates and security patches), other associated application software, and the configuration of the AWS provided security group firewall. Customers should carefully consider the services they choose, as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations. The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment. This differentiation of responsibility is commonly referred to as Security “of” the Cloud versus Security “in” the Cloud.
Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.
Q88: You work for a busy real estate company, and you need to protect your data stored on S3 from accidental deletion. Which of the following actions might you take to achieve this? Choose 2
A. Create a bucket policy that prohibits anyone from deleting things from the bucket. B. Enable S3 – Infrequent Access Storage (S3 – IA). C. Enable versioning on the bucket. If a file is accidentally deleted, delete the delete marker. D. Configure MFA-protected API access. E. Use pre-signed URL’s so that users will not be able to accidentally delete data.
Answer: C and D Notes: The best answers are to enable versioning on the bucket and to protect the objects by configuring MFA-protected API access. Reference: https://docs.aws.amazon.com/s3
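A minimal boto3 sketch of answers C and D, assuming a hypothetical bucket and MFA device; note that MFA Delete can only be enabled with the bucket owner's root credentials.

import boto3

s3 = boto3.client("s3")

# Enable versioning; an accidentally deleted object can then be recovered
# by removing its delete marker.
s3.put_bucket_versioning(
    Bucket="real-estate-docs",                        # hypothetical bucket
    VersioningConfiguration={"Status": "Enabled"},
)

# Optionally require MFA for permanent deletes; the MFA parameter is the device
# serial (or ARN) followed by a space and a current token code.
s3.put_bucket_versioning(
    Bucket="real-estate-docs",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",  # hypothetical
)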
Q89: AWS intends to shut down your spot instance; which of these scenarios is possible? Choose 3
A. AWS sends a notification of termination and you receive it 120 seconds before the intended forced shutdown.
B. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, and you delay it by sending a ‘Delay300’ instruction before the forced shutdown takes effect.
C. AWS sends a notification of termination and you receive it 120 seconds before the intended forced shutdown, but AWS does not action the shutdown.
D. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, but you block the shutdown because you used ‘Termination Protection’ when you initialized the instance.
E. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, but the defined duration period (also known as Spot blocks) hasn’t ended yet.
F. AWS sends a notification of termination, but you do not receive it within the 120 seconds and the instance is shut down.
Answer: A E and F Notes: When Amazon EC2 is going to interrupt your Spot Instance, it emits an event two minutes prior to the actual interruption (except for hibernation, which gets the interruption notice, but not two minutes in advance because hibernation begins immediately).
In rare situations, Spot blocks may be interrupted due to Amazon EC2 capacity needs. In these cases, AWS provides a two-minute warning before the instance is terminated, and customers are not charged for the terminated instances even if they have used them.
It is possible that your Spot Instance is terminated before the warning can be made available. Reference: https://docs.aws.amazon.com/ec2
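A worker on the Spot Instance can watch for the two-minute interruption notice through the instance metadata service. A minimal Python sketch using IMDSv2 (the endpoint returns 404 until an interruption is scheduled); the checkpointing step is left as a placeholder.

import time
import urllib.request
import urllib.error

METADATA = "http://169.254.169.254/latest"

def imds_token():
    # IMDSv2 requires a short-lived session token obtained with a PUT request.
    req = urllib.request.Request(
        f"{METADATA}/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    return urllib.request.urlopen(req, timeout=2).read().decode()

def interruption_notice(token):
    req = urllib.request.Request(
        f"{METADATA}/meta-data/spot/instance-action",
        headers={"X-aws-ec2-metadata-token": token},
    )
    try:
        return urllib.request.urlopen(req, timeout=2).read().decode()
    except urllib.error.HTTPError:
        return None  # 404 until an interruption is scheduled

token = imds_token()
while True:
    notice = interruption_notice(token)
    if notice:
        print("Interruption notice received, checkpointing work:", notice)
        break
    time.sleep(5)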
Q90: What does the “EAR” in a policy document stand for?
A. Effects, APIs, Roles B. Effect, Action, Resource C. Ewoks, Always, Romanticize D. Every, Action, Reasonable
Answer: B. Notes: The elements included in a policy document that make up the “EAR” are effect, action, and resource. Reference: Policies and Permissions in IAM
Q91: _____ provides real-time streaming of data.
A. Kinesis Data Analytics B. Kinesis Data Firehose C. Kinesis Data Streams D. SQS
Answer: C Notes: Kinesis Data Streams provides real-time streaming of data. Reference: Amazon Kinesis Data Streams
Q92: You can use _ to build a schema for your data, and _ to query the data that’s stored in S3.
A. Glue, Athena B. EC2, SQS C. EC2, Glue D. Athena, Lambda
Answer: A Notes: AWS Glue can crawl the data in S3 and build a schema in the Glue Data Catalog, and Amazon Athena can then query that data in place using standard SQL. Reference: AWS Glue and Amazon Athena
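A short boto3 sketch of querying S3 data with Athena, assuming a hypothetical Glue database, table, and results bucket; a Glue crawler would typically have built the schema beforehand.

import boto3

athena = boto3.client("athena")

query = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) FROM events GROUP BY event_type",  # hypothetical table
    QueryExecutionContext={"Database": "clickstream_db"},                       # hypothetical Glue database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},     # hypothetical bucket
)

status = athena.get_query_execution(QueryExecutionId=query["QueryExecutionId"])
print(status["QueryExecution"]["Status"]["State"])  # QUEUED / RUNNING / SUCCEEDED / FAILED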
Q93: What type of work does EMR perform?
A. Data processing information (DPI) jobs. B. Big data (BD) jobs. C. Extract, transform, and load (ETL) jobs. D. Huge amounts of data (HAD) jobs
Answer: C Notes: EMR excels at extract, transform, and load (ETL) jobs. Reference: Amazon EMR – https://aws.amazon.com/emr/
Q94: _____ allows you to transform data using SQL as it’s being passed through Kinesis.
A. RDS B. Kinesis Data Analytics C. Redshift D. DynamoDB
Answer: B Notes: Kinesis Data Analytics allows you to transform data using SQL as it passes through a Kinesis stream. Reference: Amazon Kinesis Data Analytics
Q95 [SAA-C03]: A company runs a public-facing three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances for the application tier running in private subnets need to download software patches from the internet. However, the EC2 instances cannot be directly accessible from the internet. Which actions should be taken to allow the EC2 instances to download the needed patches? (Select TWO.)
A. Configure a NAT gateway in a public subnet. B. Define a custom route table with a route to the NAT gateway for internet traffic and associate it with the private subnets for the application tier. C. Assign Elastic IP addresses to the EC2 instances. D. Define a custom route table with a route to the internet gateway for internet traffic and associate it with the private subnets for the application tier. E. Configure a NAT instance in a private subnet.
Answer: A. B. Notes: – A NAT gateway forwards traffic from the EC2 instances in the private subnet to the internet or other AWS services, and then sends the response back to the instances. After a NAT gateway is created, the route tables for private subnets must be updated to point internet traffic to the NAT gateway.
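A minimal boto3 sketch of answers A and B, assuming hypothetical subnet, Elastic IP allocation, and route table IDs.

import boto3

ec2 = boto3.client("ec2")

# The NAT gateway lives in a public subnet and needs an Elastic IP allocation.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111public",            # hypothetical public subnet
    AllocationId="eipalloc-0123456789abcdef0",   # hypothetical Elastic IP allocation
)["NatGateway"]

# Custom route table for the private application subnets:
# the default route (0.0.0.0/0) points at the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0bbb2222private",          # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGatewayId"],
)
ec2.associate_route_table(
    RouteTableId="rtb-0bbb2222private",
    SubnetId="subnet-0ccc3333appA",              # hypothetical private subnet
)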
Q96 [SAA-C03]: A solutions architect wants to design a solution to save costs for Amazon EC2 instances that do not need to run during a 2-week company shutdown. The applications running on the EC2 instances store data in instance memory that must be present when the instances resume operation. Which approach should the solutions architect recommend to shut down and resume the EC2 instances?
A. Modify the application to store the data on instance store volumes. Reattach the volumes while restarting them. B. Snapshot the EC2 instances before stopping them. Restore the snapshot after restarting the instances. C. Run the applications on EC2 instances enabled for hibernation. Hibernate the instances before the 2-week company shutdown. D. Note the Availability Zone for each EC2 instance before stopping it. Restart the instances in the same Availability Zones after the 2-week company shutdown.
Answer: C. Notes: Hibernating EC2 instances save the contents of instance memory to an Amazon Elastic Block Store (Amazon EBS) root volume. When the instances restart, the instance memory contents are reloaded.
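A hedged boto3 sketch of the hibernation approach, assuming hypothetical AMI and instance IDs; hibernation must be enabled at launch and needs an encrypted EBS root volume large enough to hold the instance memory.

import boto3

ec2 = boto3.client("ec2")

# Hibernation is enabled at launch time.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",           # hypothetical AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",             # hypothetical root device name
        "Ebs": {"VolumeSize": 30, "Encrypted": True},
    }],
)

# Before the shutdown, hibernate instead of a plain stop so RAM is saved to the root volume.
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], Hibernate=True)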
Q97 [SAA-C03]: A company plans to run a monitoring application on an Amazon EC2 instance in a VPC. Connections are made to the EC2 instance using the instance’s private IPv4 address. A solutions architect needs to design a solution that will allow traffic to be quickly directed to a standby EC2 instance if the application fails and becomes unreachable. Which approach will meet these requirements?
A) Deploy an Application Load Balancer configured with a listener for the private IP address and register the primary EC2 instance with the load balancer. Upon failure, de-register the instance and register the standby EC2 instance. B) Configure a custom DHCP option set. Configure DHCP to assign the same private IP address to the standby EC2 instance when the primary EC2 instance fails. C) Attach a secondary elastic network interface to the EC2 instance configured with the private IP address. Move the network interface to the standby EC2 instance if the primary EC2 instance becomes unreachable. D) Associate an Elastic IP address with the network interface of the primary EC2 instance. Disassociate the Elastic IP from the primary instance upon failure and associate it with a standby EC2 instance.
Answer: C. Notes: A secondary elastic network interface can be added to an EC2 instance. While primary network interfaces cannot be detached from an instance, secondary network interfaces can be detached and attached to a different EC2 instance.
Q98 [SAA-C03]: An analytics company is planning to offer a web analytics service to its users. The service will require that the users’ webpages include a JavaScript script that makes authenticated GET requests to the company’s Amazon S3 bucket. What must a solutions architect do to ensure that the script will successfully execute?
A. Enable cross-origin resource sharing (CORS) on the S3 bucket. B. Enable S3 Versioning on the S3 bucket. C. Provide the users with a signed URL for the script. D. Configure an S3 bucket policy to allow public execute privileges.
Answer: A. Notes: Web browsers will block running a script that originates from a server with a domain name that is different from the webpage. Amazon S3 can be configured with CORS to send HTTP headers that allow the script to run
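A minimal boto3 sketch of enabling CORS on the bucket, assuming a hypothetical bucket name and customer origin.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="analytics-assets",                       # hypothetical bucket
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["https://customer-site.example.com"],  # hypothetical origin
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["Authorization"],
            "MaxAgeSeconds": 3000,
        }]
    },
)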
Q99 [SAA-C03]: A company’s security team requires that all data stored in the cloud be encrypted at rest at all times using encryption keys stored on premises. Which encryption options meet these requirements? (Select TWO.)
A. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). B. Use server-side encryption with AWS KMS managed encryption keys (SSE-KMS). C. Use server-side encryption with customer-provided encryption keys (SSE-C). D. Use client-side encryption to provide at-rest encryption. E. Use an AWS Lambda function invoked by Amazon S3 events to encrypt the data using the customer’s keys.
Answer: C. D. Notes: Server-side encryption with customer-provided keys (SSE-C) enables Amazon S3 to encrypt objects on the server side using an encryption key provided in the PUT request. The same key must be provided in the GET requests for Amazon S3 to decrypt the object. Customers also have the option to encrypt data on the client side before uploading it to Amazon S3, and then they can decrypt the data after downloading it. AWS software development kits (SDKs) provide an S3 encryption client that streamlines the process.
Q100 [SAA-C03]: A company uses Amazon EC2 Reserved Instances to run its data processing workload. The nightly job typically takes 7 hours to run and must finish within a 10-hour time window. The company anticipates temporary increases in demand at the end of each month that will cause the job to run over the time limit with the capacity of the current resources. Once started, the processing job cannot be interrupted before completion. The company wants to implement a solution that would provide increased resource capacity as cost-effectively as possible. What should a solutions architect do to accomplish this?
A) Deploy On-Demand Instances during periods of high demand. B) Create a second EC2 reservation for additional instances. C) Deploy Spot Instances during periods of high demand. D) Increase the EC2 instance size in the EC2 reservation to support the increased workload.
Answer: A. Notes: While Spot Instances would be the least costly option, they are not suitable for jobs that cannot be interrupted or must complete within a certain time period. On-Demand Instances would be billed for the number of seconds they are running.
Q101 [SAA-C03]: A company runs an online voting system for a weekly live television program. During broadcasts, users submit hundreds of thousands of votes within minutes to a front-end fleet of Amazon EC2 instances that run in an Auto Scaling group. The EC2 instances write the votes to an Amazon RDS database. However, the database is unable to keep up with the requests that come from the EC2 instances. A solutions architect must design a solution that processes the votes in the most efficient manner and without downtime. Which solution meets these requirements?
A. Migrate the front-end application to AWS Lambda. Use Amazon API Gateway to route user requests to the Lambda functions. B. Scale the database horizontally by converting it to a Multi-AZ deployment. Configure the front-end application to write to both the primary and secondary DB instances. C. Configure the front-end application to send votes to an Amazon Simple Queue Service (Amazon SQS) queue. Provision worker instances to read the SQS queue and write the vote information to the database. D. Use Amazon EventBridge (Amazon CloudWatch Events) to create a scheduled event to re-provision the database with larger, memory optimized instances during voting periods. When voting ends, re-provision the database to use smaller instances.
Answer: C. Notes: – Decouple the ingestion of votes from the database to allow the voting system to continue processing votes without waiting for the database writes. Add dedicated workers to read from the SQS queue to allow votes to be entered into the database at a controllable rate. The votes will be added to the database as fast as the database can process them, but no votes will be lost.
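A rough Python sketch of the decoupled design, assuming a hypothetical queue URL and a caller-supplied database write function.

import json
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/votes"  # hypothetical queue

# Front-end tier: enqueue the vote instead of writing to the database directly.
def submit_vote(user_id, contestant):
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps({"user": user_id, "vote": contestant}),
    )

# Worker fleet: drain the queue at a rate the database can sustain.
def worker_loop(write_to_database):
    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,                      # long polling
        )
        for msg in resp.get("Messages", []):
            write_to_database(json.loads(msg["Body"]))   # caller-supplied DB write
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])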
Q102 [SAA-C03]: A company has a two-tier application architecture that runs in public and private subnets. Amazon EC2 instances running the web application are in the public subnet and an EC2 instance for the database runs on the private subnet. The web application instances and the database are running in a single Availability Zone (AZ). Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO.)
A. Create new public and private subnets in the same AZ. B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs for the web application instances. C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer. D. Create new public and private subnets in a new AZ. Create a database using an EC2 instance in the public subnet in the new AZ. Migrate the old database contents to the new database. E. Create new public and private subnets in the same VPC, each in a new AZ. Create an Amazon RDS Multi-AZ DB instance in the private subnets. Migrate the old database contents to the new DB instance.
Answer: B. E. Notes: Create new subnets in a new Availability Zone (AZ) to provide a redundant network. Create an Auto Scaling group with instances in two AZs behind the load balancer to ensure high availability of the web application and redistribution of web traffic between the two public AZs. Create an RDS DB instance in the two private subnets to make the database tier highly available too.
Q103 [SAA-C03]: A website runs a custom web application that receives a burst of traffic each day at noon. The users upload new pictures and content daily, but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the application consistently takes 1 minute to initiate upon boot up before responding to user requests. How should a solutions architect redesign the architecture to better respond to changing traffic?
A. Configure a Network Load Balancer with a slow start configuration. B. Configure Amazon ElastiCache for Redis to offload direct requests from the EC2 instances. C. Configure an Auto Scaling step scaling policy with an EC2 instance warmup condition. D. Configure Amazon CloudFront to use an Application Load Balancer as the origin.
Answer: C. Notes: The current configuration puts new EC2 instances into service before they are able to respond to transactions. This could also cause the instances to overscale. With a step scaling policy, you can specify the number of seconds that it takes for a newly launched instance to warm up. Until its specified warm-up time has expired, an EC2 instance is not counted toward the aggregated metrics of the Auto Scaling group. While scaling out, the Auto Scaling logic does not consider EC2 instances that are warming up as part of the current capacity of the Auto Scaling group. Therefore, multiple alarm breaches that fall in the range of the same step adjustment result in a single scaling activity. This ensures that you do not add more instances than you need.
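A hedged boto3 sketch of a step scaling policy with an instance warmup, assuming a hypothetical Auto Scaling group and illustrative step boundaries; a CloudWatch alarm must still be configured to trigger the policy.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",            # hypothetical group
    PolicyName="scale-out-on-traffic",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    EstimatedInstanceWarmup=60,                # seconds; matches the 1-minute application boot time
    StepAdjustments=[
        {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 20, "ScalingAdjustment": 1},
        {"MetricIntervalLowerBound": 20, "ScalingAdjustment": 3},
    ],
)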
Q104 [SAA-C03]: An application running on AWS uses an Amazon Aurora Multi-AZ DB cluster deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database. What should the solutions architect do to separate the read requests from the write requests?
A. Enable read-through caching on the Aurora database. B. Update the application to read from the Multi-AZ standby instance. C. Create an Aurora replica and modify the application to use the appropriate endpoints. D. Create a second Aurora database and link it to the primary database as a read replica.
Answer: C. Notes: Aurora Replicas provide a way to offload read traffic. Aurora Replicas share the same underlying storage as the main database, so lag time is generally very low. Aurora Replicas have their own endpoints, so the application will need to be configured to direct read traffic to the new endpoints. Reference: Aurora Replicas
Question 106: A company plans to migrate its on-premises workload to AWS. The current architecture is composed of a Microsoft SharePoint server that uses Windows shared file storage. The Solutions Architect needs to use a cloud storage solution that is highly available and can be integrated with Active Directory for access control and authentication. Which of the following options can satisfy the given requirement?
A. Create a file system using Amazon EFS and join it to an Active Directory domain. B. Create a file system using Amazon FSx for Windows File Server and join it to an Active Directory domain in AWS. C. Create a Network File System (NFS) file share using AWS Storage Gateway. D. Launch an Amazon EC2 Windows Server to mount a new S3 bucket as a file volume.
Answer: B. Notes: Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. Amazon FSx is accessible from Windows, Linux, and MacOS compute instances and devices. Thousands of compute instances and devices can access a file system concurrently. Reference: FSx Category: Design Resilient Architectures
Question 108: A Forex trading platform, which frequently processes and stores global financial data every minute, is hosted in your on-premises data center and uses an Oracle database. Due to a recent cooling problem in their data center, the company urgently needs to migrate their infrastructure to AWS to improve the performance of their applications. As the Solutions Architect, you are responsible for ensuring that the database is properly migrated and remains available in case of a database server failure in the future. Which of the following is the most suitable solution to meet the requirement?
A. Create an Oracle database in RDS with Multi-AZ deployments. B. Launch an Oracle database instance in RDS with Recovery Manager (RMAN) enabled. C. Launch an Oracle Real Application Clusters (RAC) in RDS. D. Convert the database schema using the AWS Schema Conversion Tool and AWS Database Migration Service. Migrate the Oracle database to a non-cluster Amazon Aurora with a single instance.
Answer: A. Notes: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Reference: RDS Multi AZ Category: Design Resilient Architectures
Question 109: A data analytics company, which uses machine learning to collect and analyze consumer data, is using Redshift cluster as their data warehouse. You are instructed to implement a disaster recovery plan for their systems to ensure business continuity even in the event of an AWS region outage. Which of the following is the best approach to meet this requirement?
A. Do nothing because Amazon Redshift is a highly available, fully-managed data warehouse which can withstand an outage of an entire AWS region. B. Enable Cross-Region Snapshots Copy in your Amazon Redshift Cluster. C. Create a scheduled job that will automatically take the snapshot of your Redshift Cluster and store it to an S3 bucket. Restore the snapshot in case of an AWS region outage. D. Use Automated snapshots of your Redshift Cluster.
Answer: B. Notes: You can configure Amazon Redshift to copy snapshots for a cluster to another region. To configure cross-region snapshot copy, you need to enable this copy feature for each cluster and configure where to copy snapshots and how long to keep copied automated snapshots in the destination region. When cross-region copy is enabled for a cluster, all new manual and automatic snapshots are copied to the specified region. Reference: Redshift Snapshots
Category: Design Resilient Architectures
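A minimal boto3 sketch of enabling cross-region snapshot copy, assuming a hypothetical cluster identifier and destination Region.

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Copy every new automated and manual snapshot to a second Region for disaster recovery.
redshift.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",   # hypothetical cluster
    DestinationRegion="us-west-2",
    RetentionPeriod=7,                       # days to keep copied automated snapshots
)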
Question 109: A start-up company has an EC2 instance that is hosting a web application. The volume of users is expected to grow in the coming months and hence, you need to add more elasticity and scalability in your AWS architecture to cope with the demand. Which of the following options can satisfy the above requirement for the given scenario? (Select TWO.)
A. Set up two EC2 instances and then put them behind an Elastic Load balancer (ELB). B. Set up two EC2 instances deployed using Launch Templates and integrated with AWS Glue. C. Set up an S3 Cache in front of the EC2 instance. D. Set up two EC2 instances and use Route 53 to route traffic based on a Weighted Routing Policy. E. Set up an AWS WAF behind your EC2 Instance.
Answer: A. D. Notes: Using an Elastic Load Balancer is an ideal solution for adding elasticity to your application. Alternatively, you can also create a policy in Route 53, such as a Weighted routing policy, to evenly distribute the traffic to 2 or more EC2 instances. Hence, setting up two EC2 instances and then putting them behind an Elastic Load Balancer (ELB) and setting up two EC2 instances and using Route 53 to route traffic based on a Weighted Routing Policy are the correct answers. Reference: Elastic Load Balancing Category: Design Resilient Architectures
Question 110: A company plans to deploy a Docker-based batch application in AWS. The application will be used to process both mission-critical data as well as non-essential batch jobs. Which of the following is the most cost-effective option to use in implementing this architecture?
A. Use ECS as the container management service then set up Reserved EC2 Instances for processing both mission-critical and non-essential batch jobs. B. Use ECS as the container management service then set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively. C. Use ECS as the container management service then set up On-Demand EC2 Instances for processing both mission-critical and non-essential batch jobs. D. Use ECS as the container management service then set up Spot EC2 Instances for processing both mission-critical and non-essential batch jobs.
Answer: B. Notes: Amazon ECS lets you run batch workloads with managed or custom schedulers on Amazon EC2 On-Demand Instances, Reserved Instances, or Spot Instances. You can launch a combination of EC2 instances to set up a cost-effective architecture depending on your workload: Reserved EC2 instances to process the mission-critical data and Spot EC2 instances for processing non-essential batch jobs. There are two different charge models for Amazon Elastic Container Service (ECS): the Fargate launch type and the EC2 launch type. With Fargate, you pay for the amount of vCPU and memory resources that your containerized application requests, while with the EC2 launch type there is no additional charge; you pay for the AWS resources (e.g. EC2 instances or EBS volumes) you create to store and run your application. You only pay for what you use, as you use it; there are no minimum fees and no upfront commitments. In this scenario, the most cost-effective solution is to use ECS as the container management service and then set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively. You can use Scheduled Reserved Instances (Scheduled Instances), which enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. This ensures that you have uninterrupted compute capacity to process your mission-critical batch jobs. Reference: Amazon ECS
Category: Design Resilient Architectures
Question 111: A company has recently adopted a hybrid cloud architecture and is planning to migrate a database hosted on-premises to AWS. The database currently has over 50 TB of consumer data, handles highly transactional (OLTP) workloads, and is expected to grow. The Solutions Architect should ensure that the database is ACID-compliant and can handle complex queries of the application. Which type of database service should the Architect use?
A. Amazon DynamoDB B. Amazon RDS C. Amazon Redshift D. Amazon Aurora
Answer: D. Notes: Amazon Aurora (Aurora) is a fully managed relational database engine that’s compatible with MySQL and PostgreSQL. You already know how MySQL and PostgreSQL combine the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. The code, tools, and applications you use today with your existing MySQL and PostgreSQL databases can be used with Aurora. With some workloads, Aurora can deliver up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. Aurora includes a high-performance storage subsystem. Its MySQL- and PostgreSQL-compatible database engines are customized to take advantage of that fast distributed storage. The underlying storage grows automatically as needed, up to 64 tebibytes (TiB). Aurora also automates and standardizes database clustering and replication, which are typically among the most challenging aspects of database configuration and administration. Reference: Aurora Category: Design Resilient Architectures
Question 112: An online stocks trading application that stores financial data in an S3 bucket has a lifecycle policy that moves older data to Glacier every month. There is a strict compliance requirement where a surprise audit can happen at any time and you should be able to retrieve the required data in under 15 minutes under all circumstances. Your manager instructed you to ensure that retrieval capacity is available when you need it and that it can handle up to 150 MB/s of retrieval throughput. Which of the following should you do to meet the above requirement? (Select TWO.)
A. Retrieve the data using Amazon Glacier Select. B. Use Bulk Retrieval to access the financial data. C. Purchase provisioned retrieval capacity. D. Use Expedited Retrieval to access the financial data. E. Specify a range, or portion, of the financial data archive to retrieve.
Answer: C. D. Notes: Expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required. For all but the largest archives (250 MB+), data accessed using Expedited retrievals are typically made available within 1–5 minutes. Provisioned Capacity ensures that retrieval capacity for Expedited retrievals is available when you need it. To make an Expedited, Standard, or Bulk retrieval, set the Tier parameter in the Initiate Job (POST jobs) REST API request to the option you want, or the equivalent in the AWS CLI or AWS SDKs. If you have purchased provisioned capacity, then all expedited retrievals are automatically served through your provisioned capacity. Provisioned capacity ensures that your retrieval capacity for expedited retrievals is available when you need it. Each unit of capacity provides that at least three expedited retrievals can be performed every five minutes and provides up to 150 MB/s of retrieval throughput. You should purchase provisioned retrieval capacity if your workload requires highly reliable and predictable access to a subset of your data in minutes. Without provisioned capacity Expedited retrievals are accepted, except for rare situations of unusually high demand. However, if you require access to Expedited retrievals under all circumstances, you must purchase provisioned retrieval capacity. Reference: Amazon Glacier Category: Design Resilient Architectures
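A short boto3 sketch of answers C and D, assuming a hypothetical archive bucket and key: purchase provisioned retrieval capacity, then restore an archived object with the Expedited tier.

import boto3

s3 = boto3.client("s3")
glacier = boto3.client("glacier")

# Reserve Expedited retrieval capacity (each unit supports at least 3 expedited
# retrievals every 5 minutes and up to 150 MB/s of retrieval throughput).
glacier.purchase_provisioned_capacity(accountId="-")

# Restore an archived object with the Expedited tier (typically available in 1-5 minutes).
s3.restore_object(
    Bucket="trading-archive",                # hypothetical bucket
    Key="2023/q4/ledger.csv",                # hypothetical key
    RestoreRequest={
        "Days": 1,
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)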
Question 113: An organization stores and manages financial records of various companies in its on-premises data center, which is almost out of space. The management decided to move all of their existing records to a cloud storage service. All future financial records will also be stored in the cloud. For additional security, all records must be prevented from being deleted or overwritten. Which of the following should you do to meet the above requirement?
A. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon S3 and enable object lock. B. Use AWS DataSync to move the data. Store all of your data in Amazon EFS and enable object lock. C. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon EBS and enable object lock. D. Use AWS DataSync to move the data. Store all of your data in Amazon S3 and enable object lock.
Answer: D. Notes: AWS DataSync allows you to copy large datasets with millions of files, without having to build custom solutions with open source tools, or license and manage expensive commercial network acceleration software. You can use DataSync to migrate active data to AWS, transfer data to the cloud for analysis and processing, archive data to free up on-premises storage capacity, or replicate data to AWS for business continuity. AWS DataSync enables you to migrate your on-premises data to Amazon S3, Amazon EFS, and Amazon FSx for Windows File Server. You can configure DataSync to make an initial copy of your entire dataset, and schedule subsequent incremental transfers of changing data towards Amazon S3. Enabling S3 Object Lock prevents your existing and future records from being deleted or overwritten. AWS DataSync is primarily used to migrate existing data to Amazon S3. On the other hand, AWS Storage Gateway is more suitable if you still want to retain access to the migrated data and for ongoing updates from your on-premises file-based applications. Reference: AWS DataSync, https://aws.amazon.com/datasync/faqs/ Category: Design Secure Applications and Architectures
Question 114: A solutions architect is designing a solution to run a containerized web application by using Amazon Elastic Container Service (Amazon ECS). The solutions architect wants to minimize cost by running multiple copies of a task on each container instance. The number of task copies must scale as the load increases and decreases. Which routing solution distributes the load to the multiple tasks?
A. Configure an Application Load Balancer to distribute the requests by using path-based routing. B. Configure an Application Load Balancer to distribute the requests by using dynamic host port mapping. C. Configure an Amazon Route 53 alias record set to distribute the requests with a failover routing policy. D. Configure an Amazon Route 53 alias record set to distribute the requests with a weighted routing policy.
Answer: B. Notes: With dynamic host port mapping, multiple tasks from the same service are allowed for each container instance. You can use weighted routing policies to route traffic to instances at proportions that you specify, but you cannot use weighted routing policies to manage multiple tasks on a single container instance. Reference: Choosing a routing policy Category: Design Cost-Optimized Architectures
Question 115: A Solutions Architect needs to deploy a mobile application that can collect votes for a popular singing competition. Millions of users from around the world will submit votes using their mobile phones. These votes must be collected and stored in a highly scalable and highly available data store which will be queried for real-time ranking. Which of the following combination of services should the architect use to meet this requirement?
A. Amazon Redshift and AWS Mobile Hub B. Amazon DynamoDB and AWS AppSync C. Amazon Relational Database Service (RDS) and Amazon MQ D. Amazon Aurora and Amazon Cognito
Answer: B. Notes: Amazon DynamoDB is a durable, highly scalable, and highly available data store that can be used for real-time tabulation. You can also use AWS AppSync with DynamoDB to make it easy to build collaborative apps that keep shared data updated in real time: you just specify the data for your app with simple code statements, and AWS AppSync manages everything needed to keep the app data updated in real time. This allows your app to access data in Amazon DynamoDB, trigger AWS Lambda functions, or run Amazon Elasticsearch queries, and combine data from these services to provide the exact data you need for your app.
Question 116: The usage of a company’s image-processing application is increasing suddenly with no set pattern. The application’s processing time grows linearly with the size of the image. The processing can take up to 20 minutes for large image files. The architecture consists of a web tier, an Amazon Simple Queue Service (Amazon SQS) standard queue, and message consumers that process the images on Amazon EC2 instances. When a high volume of requests occurs, the message backlog in Amazon SQS increases. Users are reporting the delays in processing. A solutions architect must improve the performance of the application in compliance with cloud best practices. Which solution will meet these requirements?
A. Purchase enough Dedicated Instances to meet the peak demand. Deploy the instances for the consumers. B. Convert the existing SQS standard queue to an SQS FIFO queue. Increase the visibility timeout. C. Configure a scalable AWS Lambda function as the consumer of the SQS messages. D. Create a message consumer that is an Auto Scaling group of instances. Configure the Auto Scaling group to scale based upon the ApproximateNumberOfMessages Amazon CloudWatch metric.
Answer: D. Notes: Scaling the consumer fleet with an Auto Scaling group driven by the ApproximateNumberOfMessages CloudWatch metric lets processing capacity track the queue backlog, which is the recommended pattern for variable, long-running workloads. AWS Lambda is not suitable here because processing can take up to 20 minutes, which exceeds the Lambda timeout. FIFO queues solve problems that occur when messages are processed out of order, but they will not improve performance during sudden volume increases; additionally, you cannot convert an existing SQS standard queue to a FIFO queue after you create it. Reference: FIFO Queues
Question 117: An application is hosted on an EC2 instance with multiple EBS Volumes attached and uses Amazon Neptune as its database. To improve data security, you encrypted all of the EBS volumes attached to the instance to protect the confidential data stored in the volumes. Which of the following statements are true about encrypted Amazon Elastic Block Store volumes? (Select TWO.)
A. All data moving between the volume and the instance are encrypted. B. Snapshots are automatically encrypted. C. The volumes created from the encrypted snapshot are not encrypted. D. Snapshots are not automatically encrypted. E. Only the data in the volume is encrypted and not all the data moving between the volume and the instance.
Answer: A. B. Notes: Amazon Elastic Block Store (Amazon EBS) provides block level storage volumes for use with EC2 instances. EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone. EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance. Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. You can encrypt both the boot and data volumes of an EC2 instance. Reference: EBS
Question 118: A reporting application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. For complex reports, the application can take up to 15 minutes to respond to a request. A solutions architect is concerned that users will receive HTTP 5xx errors if a report request is in process during a scale-in event. What should the solutions architect do to ensure that user requests will be completed before instances are terminated?
A. Enable sticky sessions (session affinity) for the target group of the instances. B. Increase the instance size in the Application Load Balancer target group. C. Increase the cooldown period for the Auto Scaling group to a greater amount of time than the time required for the longest running responses. D. Increase the deregistration delay timeout for the target group of the instances to greater than 900 seconds.
Answer: D. Notes: By default, Elastic Load Balancing waits 300 seconds before the completion of the deregistration process, which can help in-flight requests to the target become complete. To change the amount of time that Elastic Load Balancing waits, update the deregistration delay value. Reference: Deregistration Delay.
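A minimal boto3 sketch of raising the deregistration delay, assuming a hypothetical target group ARN; the value is specified in seconds and the maximum is 3600.

import boto3

elbv2 = boto3.client("elbv2")

# Give in-flight report requests (up to 15 minutes) time to finish before deregistration.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                   "targetgroup/reports/0123456789abcdef",   # hypothetical ARN
    Attributes=[
        {"Key": "deregistration_delay.timeout_seconds", "Value": "1200"},
    ],
)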
Question 119: A company used Amazon EC2 Spot Instances for a demonstration that is now complete. A solutions architect must remove the Spot Instances to stop them from incurring cost. What should the solutions architect do to meet this requirement?
A. Cancel the Spot request only. B. Terminate the Spot Instances only. C. Cancel the Spot request. Terminate the Spot Instances. D. Terminate the Spot Instances. Cancel the Spot request.
Answer: C. Notes: To remove the Spot Instances, the appropriate steps are to cancel the Spot request and then to terminate the Spot Instances. Reference:Spot Instances
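A hedged boto3 sketch of the two steps, with placeholder IDs: look up the instances launched by the Spot requests, cancel the requests, then terminate the instances.

```python
# Hypothetical sketch: cancel Spot requests, then terminate the Spot Instances.
import boto3

ec2 = boto3.client("ec2")

request_ids = ["sir-abc12345"]  # placeholder Spot request IDs

# Find the instances launched by the Spot requests before cancelling.
described = ec2.describe_spot_instance_requests(SpotInstanceRequestIds=request_ids)
instance_ids = [
    r["InstanceId"] for r in described["SpotInstanceRequests"] if "InstanceId" in r
]

ec2.cancel_spot_instance_requests(SpotInstanceRequestIds=request_ids)

if instance_ids:
    ec2.terminate_instances(InstanceIds=instance_ids)
```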
Question 120: Which components are required to build a site-to-site VPN connection on AWS? (Select TWO.) A. An Internet Gateway B. A NAT gateway C. A customer Gateway D. A Virtual Private Gateway E. Amazon API Gateway
Answer: C. D. Notes: A virtual private gateway is attached to a VPC to create a site-to-site VPN connection on AWS. You can accept private encrypted network traffic from an on-premises data center into your VPC without the need to traverse the open public internet. A customer gateway is required for the VPN connection to be established. A customer gateway device is set up and configured in the customer’s data center. Reference: What is AWS Site-to-Site VPN?
Question 121: A company runs its website on Amazon EC2 instances behind an Application Load Balancer that is configured as the origin for an Amazon CloudFront distribution. The company wants to protect against cross-site scripting and SQL injection attacks. Which approach should a solutions architect recommend to meet these requirements?
A. Enable AWS Shield Advanced. List the CloudFront distribution as a protected resource. B. Define an AWS Shield Advanced policy in AWS Firewall Manager to block cross-site scripting and SQL injection attacks. C. Set up AWS WAF on the CloudFront distribution. Use conditions and rules that block cross-site scripting and SQL injection attacks. D. Deploy AWS Firewall Manager on the EC2 instances. Create conditions and rules that block cross-site scripting and SQL injection attacks.
Answer: C. Notes: AWS WAF can detect the presence of SQL code that is likely to be malicious (known as SQL injection). AWS WAF also can detect the presence of a script that is likely to be malicious (known as cross-site scripting). Reference: AWS WAF.
Question 122: A media company is designing a new solution for graphic rendering. The application requires up to 400 GB of storage for temporary data that is discarded after the frames are rendered. The application requires approximately 40,000 random IOPS to perform the rendering. What is the MOST cost-effective storage option for this rendering application? A. A storage optimized Amazon EC2 instance with instance store storage B. A storage optimized Amazon EC2 instance with a Provisioned IOPS SSD (io1 or io2) Amazon Elastic Block Store (Amazon EBS) volume C. A burstable Amazon EC2 instance with a Throughput Optimized HDD (st1) Amazon Elastic Block Store (Amazon EBS) volume D. A burstable Amazon EC2 instance with Amazon S3 storage over a VPC endpoint
Answer: A. Notes: SSD-Backed Storage Optimized (i2) instances provide more than 365,000 random IOPS. The instance store has no additional cost, compared with the regular hourly cost of the instance. Reference: Amazon EC2 pricing.
Question 123: A company is deploying a new application that will consist of an application layer and an online transaction processing (OLTP) relational database. The application must be available at all times. However, the application will have periods of inactivity. The company wants to pay the minimum for compute costs during these idle periods. Which solution meets these requirements MOST cost-effectively? A. Run the application in containers with Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Use Amazon Aurora Serverless for the database. B. Run the application on Amazon EC2 instances by using a burstable instance type. Use Amazon Redshift for the database. C. Deploy the application and a MySQL database to Amazon EC2 instances by using AWS CloudFormation. Delete the stack at the beginning of the idle periods. D. Deploy the application on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. Use Amazon RDS for MySQL for the database.
Answer: A. Notes: When Amazon ECS uses Fargate for compute, it incurs no costs when the application is idle. Aurora Serverless also incurs no compute costs when it is idle. Reference: AWS Fargate Pricing.
Question 124:Which options best describe characteristics of events in event-driven design? (Select THREE.)
A. Events are usually processed asynchronously
B. Events usually expect an immediate reply
C. Events are used to share information about a change in state
D. Events are observable
E. Events direct the actions of targets
Answer: A. C. D. Notes: Events are used to share information about a change in state. Events are observable and usually processed asynchronously. Events do not direct the actions of targets, and events do not expect a reply. Events can be used to trigger synchronous communications, and in this case, an event source like API Gateway might wait for a response. Reference:Event Driven Design on AWS
Questions 125: Which of these scenarios would lead you to choose AWS AppSync and GraphQL APIs over API Gateway and REST APIs? Choose THREE.
A. You need a strongly typed schema for developers.
B. You need a server-controlled response.
C. You need multiple authentication options to the same API.
D. You need to integrate with existing clients.
E. You need client-specific responses that require data from many backend resources.
Answer: A. C. E. Notes: With GraphQL, you define the schema and data types in advance. If it’s not in the schema, you can’t query for it. Developers can download the schema and generate source code from it to work with the API. Consider GraphQL for applications that need a client-specific response drawing on data from many backend sources. When you need a server-controlled response, choose REST. AWS AppSync allows you to use multiple authentication options on the same API, whereas API Gateway allows you to associate only one authentication option per resource. When you need to integrate with existing clients, REST is much more mature and has more tooling available; most clients are written for REST. Reference: GraphQL vs. REST
Question 126: Which options are TRUE statements about serverless security? (Select THREE.)
A. Logging and metrics are especially critical because you can’t go back to the server to see what happened when something fails.
B. Because you aren’t responsible for the operating system and the network itself, you don’t need to worry about mitigating external attacks.
C. The distributed perimeter means your code needs to defend each of the potential paths that might be used to reach your functions.
D. You can use Lambda’s fine-grained controls to scope its reach with a much smaller set of permissions as opposed to traditional approaches.
E. You may use the same tooling as with your server-based applications, but the best practices you follow will be different.
Answer: A. C. and D.
Notes: In Lambda’s ephemeral environment, logging and metrics are more critical because once the code runs, you can no longer go back to the server to find out what has happened. The security perimeter you are defending has to consider the different services that might trigger a function, and your code needs to defend each of those potential paths. You can use Lambda’s fine-grained controls to scope its reach with a much smaller set of permissions as opposed to traditional approaches where you may give broad permissions for your application on its servers. Scope your functions to limit permission sharing between any unrelated components. Security best practices don’t change with serverless, but the tooling you’ll use will change. For example, techniques such as installing agents on your host may not be relevant any more. While you aren’t responsible for the operating system or the network itself, you do need to protect your network boundaries and mitigate external attacks.
Question 127: Which options are examples of steps you take to protect your serverless application from attacks? (Select FOUR.)
A. Update your operating system with the latest patches.
B. Configure geoblocking on Amazon CloudFront in front of regional API endpoints.
C. Disable origin access identity on Amazon S3.
D. Disable CORS on your APIs.
E. Use resource policies to limit access to your APIs to users from a specified account.
F. Filter out specific traffic patterns with AWS WAF.
G. Parameterize queries so that your Lambda function expects a single input.
Answer: B. E. F. G
Notes: You aren’t responsible for the operating system or network configuration where your functions run, and AWS is ensuring the security of the data within those managed services. You are responsible for protecting data entering your application and limiting access to your AWS resources. You still need to protect data that originates client-side or that travels to or from endpoints outside AWS.
When integrating CloudFront with regional API endpoints, CloudFront also supports geoblocking, which you can use to prevent requests from being served from particular geographic locations.
Use origin access identity with Amazon S3 to allow bucket access only through CloudFront.
CORS is a browser security feature that restricts cross-origin HTTP requests that are initiated from scripts running in the browser. It is enforced by the browser. If your APIs will receive cross-origin requests, you should enable CORS support in API Gateway.
IAM resource policies can be used to limit access to your APIs. For example, you can restrict access to users from a specified AWS account or deny traffic from a specified source IP address or CIDR block.
AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits. AWS WAF lets you create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define.
Lambda functions are triggered by events. These events submit an event parameter to the Lambda function and could be exploited for SQL injection. You can prevent this type of attack by parameterizing queries so that your Lambda function expects a single input.
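As a sketch only (not AWS’s reference code), here is a Lambda handler using the pymysql library with a parameterized query; the table, columns, and environment variables are assumptions for illustration.

```python
# Hypothetical sketch: user-supplied input is passed as a bound parameter,
# never concatenated into the SQL string. Uses the third-party pymysql library.
import os
import pymysql

connection = pymysql.connect(
    host=os.environ["DB_HOST"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database=os.environ["DB_NAME"],
)

def handler(event, context):
    customer_id = event["customer_id"]  # untrusted input from the event
    with connection.cursor() as cursor:
        # %s placeholders are escaped by the driver, defeating SQL injection.
        cursor.execute(
            "SELECT name, email FROM customers WHERE id = %s", (customer_id,)
        )
        return cursor.fetchone()
```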
Question 128:Which options reflect best practices for automating your deployment pipeline with serverless applications? (Select TWO.)
A. Select one deployment framework and use it for all of your deployments for consistency.
B. Use different AWS accounts for each environment in your deployment pipeline.
C. Use AWS SAM to configure safe deployments and include pre- and post-traffic tests.
D. Create a specific AWS SAM template to match each environment to keep them distinct.
Answer: B. and C.
Notes: You may use multiple deployment frameworks for an application so that you can use the framework that best suits the type of deployment. For example, you might use the AWS SAM framework to define your application stack and deployment preferences and then use AWS CDK to provision any infrastructure-related resources, such as the CI/CD pipeline.
It is a best practice to use different AWS accounts for each environment. This approach limits the blast radius of issues that occur and makes it less complex to differentiate which resources are associated with each environment. Because of the way costs are calculated with serverless, spinning up additional environments doesn’t add much to your cost.
AWS SAM lets you configure safe deployment preferences so that you can run code before and after the deployment, and roll back if there is a problem. You can also specify a method for shifting traffic to the new version a little at a time.
It is a best practice to use one AWS SAM template across environments and use options to parameterize values that are different per environment. This helps ensure that the environment is built with exactly the same stack.
Question 129: Your application needs to connect to an Amazon RDS instance on the backend. What is the best recommendation to the developer whose function must read from and write to the Amazon RDS instance?
A. Use reserved concurrency to limit the number of concurrent functions that would try to write to the database
B. Use the database proxy feature to provide connection pooling for the functions
C. Initialize the number of connections you want outside of the handler
D. Use the database TTL setting to clean up connections
Answer: B. Notes: Use the database proxy feature (Amazon RDS Proxy) to provide connection pooling for the functions, so many concurrent Lambda invocations share a small pool of database connections instead of exhausting the database’s connection limit.
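A minimal sketch of option B, assuming an RDS Proxy endpoint exposed to the function through an environment variable and the pymysql library; the connection is created outside the handler so it is reused across invocations, while the proxy handles pooling to the database.

```python
# Hypothetical sketch: connect to the RDS Proxy endpoint (placeholder host)
# instead of the database directly; the proxy pools connections.
import os
import pymysql

connection = pymysql.connect(
    host=os.environ["RDS_PROXY_ENDPOINT"],  # e.g. my-proxy.proxy-xxxx.us-east-1.rds.amazonaws.com (placeholder)
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database=os.environ["DB_NAME"],
    connect_timeout=5,
)

def handler(event, context):
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")
        return cursor.fetchone()
```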
Question 130: A company runs a cron job on an Amazon EC2 instance on a predefined schedule. The cron job calls a bash script that encrypts a 2 KB file. A security engineer creates an AWS Key Management Service (AWS KMS) CMK with a key policy.
The key policy and the EC2 instance role have the necessary configuration for this job.
Which process should the bash script use to encrypt the file?
A) Use the aws kms encrypt command to encrypt the file by using the existing CMK.
B) Use the aws kms create-grant command to generate a grant for the existing CMK.
C) Use the aws kms encrypt command to generate a data key. Use the plaintext data key to encrypt the file.
D) Use the aws kms generate-data-key command to generate a data key. Use the encrypted data key to encrypt the file.
Answer: D
Notes: AWS KMS can encrypt raw data up to 4 KB directly, so option A is possible, but it is not good practice. create-grant is a permissions operation, not an encryption operation. The encrypt command does not generate a data key. Only option D generates a data key, returned in both plaintext and encrypted form. You then encrypt the file with the plaintext data key and store the encrypted data key in the encrypted file’s metadata for later decryption.
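A hedged Python sketch of the envelope-encryption flow the notes describe. The question uses a bash script with the AWS CLI; boto3 and Fernet (from the third-party cryptography package) are stand-ins here, and the key alias and file names are placeholders.

```python
# Hypothetical sketch of envelope encryption: generate a data key under the
# CMK, encrypt the file with the plaintext data key, store the encrypted key.
import base64
import boto3
from cryptography.fernet import Fernet  # assumption: cryptography package is available

kms = boto3.client("kms")

resp = kms.generate_data_key(KeyId="alias/my-cmk", KeySpec="AES_256")  # placeholder alias
plaintext_key = resp["Plaintext"]        # use for encryption, then discard
encrypted_key = resp["CiphertextBlob"]   # safe to store alongside the file

fernet = Fernet(base64.urlsafe_b64encode(plaintext_key))

with open("report.txt", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("report.txt.enc", "wb") as f:
    f.write(ciphertext)
with open("report.txt.key", "wb") as f:
    f.write(encrypted_key)

# To decrypt later: kms.decrypt(CiphertextBlob=encrypted_key) returns the
# plaintext data key, which decrypts report.txt.enc.
```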
Question 131: A Security engineer must develop an AWS Identity and Access Management (IAM) strategy for a company’s organization in AWS Organizations. The company needs to give developers autonomy to develop and test their applications on AWS, but the company also needs to implement security guardrails to help protect itself. The company creates and distributes applications with different levels of data classification and types. The solution must maximize scalability.
Which combination of steps should the security engineer take to meet these requirements? (Choose three.)
A) Create an SCP to restrict access to highly privileged or unauthorized actions to specific IAM principals. Assign the SCP to the appropriate AWS accounts.
B) Create an IAM permissions boundary to allow access to specific actions and IAM principals. Assign the IAM permissions boundary to all IAM principals within the organization.
C) Create a delegated IAM role that has capabilities to create other IAM roles. Use the delegated IAM role to provision IAM principals by following the principle of least privilege.
D) Create OUs based on data classification and type. Add the AWS accounts to the appropriate OU. Provide developers access to the AWS accounts based on business need.
E) Create IAM groups based on data classification and type. Add only the required developers’ IAM role to the IAM groups within each AWS account.
F) Create IAM policies based on data classification and type. Add the minimum required IAM policies to the developers’ IAM role within each AWS account.
Answer: A B and C
Notes:
If you look at the choices, three relate to SCPs, which control access to services, and three relate to IAM roles, policies, and permissions boundaries.
Limiting services alone doesn’t address data classification – using boundaries, policies, and roles gives you the scalability needed to solve the problem.
Question 132: A company is ready to deploy a public web application. The company will use AWS and will host the application on an Amazon EC2 instance. The company must use SSL/TLS encryption. The company is already using AWS Certificate Manager (ACM) and will export a certificate for use with the deployment.
How can a security engineer deploy the application to meet these requirements?
A) Put the EC2 instance behind an Application Load Balancer (ALB). In the EC2 console, associate the certificate with the ALB by choosing HTTPS and 443.
B) Put the EC2 instance behind a Network Load Balancer. Associate the certificate with the EC2 instance.
C) Put the EC2 instance behind a Network Load Balancer (NLB). In the EC2 console, associate the certificate with the NLB by choosing HTTPS and 443.
D) Put the EC2 instance behind an Application Load Balancer. Associate the certificate with the EC2 instance.
Answer: A
Notes: You can’t directly install Amazon-issued certificates on Amazon Elastic Compute Cloud (EC2) instances. Instead, use the certificate with a load balancer, and then register the EC2 instance behind the load balancer.
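A minimal boto3 sketch of option A with placeholder ARNs: create an HTTPS:443 listener on the ALB that uses the ACM certificate and forwards to the instance’s target group.

```python
# Hypothetical sketch: attach the ACM certificate to an HTTPS listener on the ALB.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/web/abc",  # placeholder
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:...:certificate/1234"}],        # placeholder
    DefaultActions=[
        {"Type": "forward",
         "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/web/def"}  # placeholder
    ],
)
```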
What are the 6 pillars of the AWS Well-Architected Framework?
AWS Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. Based on six pillars — operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability — AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures, and implement designs that can scale over time.
1. Operational Excellence
The operational excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. You can find prescriptive guidance on implementation in the Operational Excellence Pillar whitepaper.
2. Security The security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. You can find prescriptive guidance on implementation in the Security Pillar whitepaper.
3. Reliability The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. You can find prescriptive guidance on implementation in the Reliability Pillar whitepaper.
4. Performance Efficiency The performance efficiency pillar includes the ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve. You can find prescriptive guidance on implementation in the Performance Efficiency Pillar whitepaper.
5. Cost Optimization The cost optimization pillar includes the ability to avoid or eliminate unneeded cost or suboptimal resources. You can find prescriptive guidance on implementation in the Cost Optimization Pillar whitepaper.
6. Sustainability
The ability to increase efficiency across all components of a workload by maximizing the benefits from the provisioned resources.
Best practice areas for sustainability in the cloud include:
Region Selection – AWS Global Infrastructure
User Behavior Patterns – Auto Scaling, Elastic Load Balancing
Software and Architecture Patterns – AWS Design Principles
The AWS Well-Architected Framework provides architectural best practices across the six pillars for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems in the cloud. The framework provides a set of questions that allows you to review an existing or proposed architecture. It also provides a set of AWS best practices for each pillar. Using the Framework in your architecture helps you produce stable and efficient systems, which allows you to focus on functional requirements.
Other AWS Facts and Summaries and Questions/Answers Dump
The reality, of course, today is that if you come up with a great idea you don’t get to go quickly to a successful product. There’s a lot of undifferentiated heavy lifting that stands between your idea and that success. The kinds of things that I’m talking about when I say undifferentiated heavy lifting are things like these: figuring out which servers to buy, how many of them to buy, what time line to buy them.
Eventually you end up with heterogeneous hardware and you have to match that. You have to think about backup scenarios if you lose your data center or lose connectivity to a data center. Eventually you have to move facilities. There’s negotiations to be done. It’s a very complex set of activities that really is a big driver of ultimate success.
But they are undifferentiated from, it’s not the heart of, your idea. We call this muck. And it gets worse because what really happens is you don’t have to do this one time. You have to drive this loop. After you get your first version of your idea out into the marketplace, you’ve done all that undifferentiated heavy lifting, you find out that you have to cycle back. Change your idea. The winners are the ones that can cycle this loop the fastest.
On every cycle of this loop you have this undifferentiated heavy lifting, or muck, that you have to contend with. I believe that for most companies, and it’s certainly true at Amazon, that 70% of your time, energy, and dollars go into the undifferentiated heavy lifting and only 30% of your energy, time, and dollars gets to go into the core kernel of your idea.
I think what people are excited about is that they’re going to get a chance they see a future where they may be able to invert those two. Where they may be able to spend 70% of their time, energy and dollars on the differentiated part of what they’re doing.
So my exam was yesterday and I got the results in 24 hours. I think that’s how they review all SAA exams now, not showing the results right away anymore.
I scored 858. I was practicing with Stephane’s Udemy lectures and Bonso’s exam tests. My practice test results were as follows: Test 1: 63%, 93%; Test 2: 67%, 87%; Test 3: 81%; Test 4: 72%; Test 5: 75%; Test 6: 81%; Stephane’s test: 80%.
I was reading all question explanations (even the ones I got correct)
The actual exam was pretty much similar to these. The topics I got were:
A lot of S3 (make sure you know all of it from head to toes)
VPC peering
DataSync and Database Migration Service in the same questions. Make sure you know the difference
One EKS question
2-3 KMS questions
Security group question
A lot of RDS Multi-AZ
SQS + SNS fan out pattern
ECS microservice architecture question
Route 53
NAT gateway
And that’s all I can remember)
I took extra 30 minutes, because English is not my native language and I had plenty of time to think and then review flagged questions.
Hey guys, just giving my update so all of you guys working towards your certs can stay motivated as these success stories drove me to reach this goal.
Background: 12 years of military IT experience, never worked with the cloud. I’ve done 7 deployments (that is a lot in 12 years), at which point I came home from the last one burnt out with a family that barely knew me. I knew I needed a change, but had no clue where to start or what I wanted to do. I wasn’t really interested in IT but I knew it’d pay the bills. After seeing videos about people in IT working from home (which after 8+ years of being gone from home really appealed to me), I stumbled across a video about a Solutions Architect’s daily routine working from home, and that got me interested in AWS.
AWS Solutions Architect SAA Certification Preparation time: It took me 68 days straight of hard work to pass this exam with confidence. No rest days, more than 120 pages of hand-written notes and hundreds and hundreds of flash cards.
In the beginning, I hopped on Stephane Maarek’s course for the CCP exam just to see if it was for me. I did the course in about a week and then, after doing some research on here, got the CCP practice exams from tutorialsdojo.com. Two weeks after starting the Udemy course, I passed the exam. By that point, I’d already done lots of research on the different career paths and the best way to study, etc.
Cantrill(10/10) – That same day, I hopped onto Cantrill’s course for the SAA and got to work. Somebody had mentioned that by doing his courses you’d be over-prepared for the exam. While I think a combination of material is really important for passing the certification with confidence, I can say without a doubt Cantrill’s courses got me 85-90% of the way there. His forum is also amazing, and has directly contributed to me talking with somebody who works at AWS to land me a job, which makes the money I spent on all of his courses A STEAL. As I continue my journey (up next is SA Pro), I will be using all of his courses.
Neal Davis(8/10) – After completing Cantrill’s course, I found myself needing a resource to reinforce all the material I’d just learned. AWS is an expansive platform and the many intricacies of the different services can be tricky. For this portion, I relied on Neal Davis’s Training Notes series. These training notes are a very condensed version of the information you’ll need to pass the exam, and with the proper context are very useful to find the things you may have missed in your initial learnings. I will be using his other Training Notes for my other exams as well.
TutorialsDojo(10/10) – These tests filled in the gaps and allowed me to spot my weaknesses and shore them up. I actually think my real exam was harder than these, but because I’d spent so much time on the material I got wrong, I was able to pass the exam with a safe score.
As I said, I was surprised at how difficult the exam was. A lot of my questions were related to DBs, and a lot of them gave no context as to whether the data being loaded into them was SQL or NoSQL, which made the choice selection a little frustrating. A lot of the questions have 2 VERY SIMILAR answers, and often the wording of the answers could be easy to misinterpret (such as when you are creating a Read Replica, do you attach it to the primary application DB that is slowing down because of read issues or attach it to the service that is causing the primary DB to slow down). For context, I was scoring 95-100% on the TD exams prior to taking the test and managed an 823 on the exam, so I don’t know if I got unlucky with a hard test or if I’m not as prepared as I thought I was (i.e. over-thinking questions).
Anyways, up next is going back over the practical parts of the course as I gear up for the SA Pro exam. I will be taking my time with this one, and re-learning the Linux CLI in preparation for finding a new job.
PS if anybody on here is hiring, I’m looking! I’m the hardest worker I know and my goal is to make your company as streamlined and profitable as possible. 🙂
Testimonial: How did you prepare for AWS Certified Solutions Architect – Associate Level certification?
Best way to prepare for aws solution architect associate certification
Practical knowledge accounts for roughly 30% of what you need; the rest comes from Jayendra’s blog and practice dumps.
Buying Udemy courses alone doesn’t make you pass. I can say with confidence that without the practice dumps and Jayendra’s blog it is not easy to clear the certification.
Read FAQs of S3, IAM, EC2, VPC, SQS, Autoscaling, Elastic Load Balancer, EBS, RDS, Lambda, API Gateway, ECS.
Read the Security Whitepaper and Shared Responsibility model.
Also important: expect basic questions on the topics most recently added to the exam, such as Amazon Kinesis.
Whitepapers contain important information about each service and are published by Amazon on its website. If you are preparing for the AWS certifications, it is very important to read some of the most recommended whitepapers before writing the exam.
Data security questions can be among the more challenging, and it’s worth noting that you need a good understanding of the security processes described in the whitepaper titled “Overview of Security Processes”.
Of these, the most important whitepapers are Overview of Security Processes and Storage Options in the AWS Cloud. Read more here…
Stephane Maarek’s Udemy course, and his 6 practice exams
Adrian Cantrill’s online course (about 60% done)
TutorialsDojo’s exams
(My company has a Udemy Business account, so I was able to use Stephane’s course and exams)
I scheduled my exam at the end of March, and started with Adrian’s. But I was dumb thinking that I could go through his course within 3 weeks… I stopped at around 12% of his course, went to the textbook, and finished reading the all-in-one exam guide within a weekend. Then I started going through Stephane’s course. While working through it, I pushed the exam back to the end of April, because I knew I wouldn’t be ready by the time the exam came along.
Five days before the exam, I finished Stephane’s course, and then did the final exam on the course. I failed miserably (around 50%). So I did one of Stephane’s practice exams and did worse (42%). I thought maybe it might be his exams that are slightly difficult, so I went and bought Jon Bonso’s exams and got 60% on the first one. Then I realized, based on all the questions on those exams, that I was definitely lacking some fundamentals. I went back to Adrian’s course and things were definitely sticking more – I think it has to do with his explanations + more practical stuff. Unfortunately, I could not finish his course before the exam (because I was cramming), and on the day of the exam, I could only do four of Bonso’s six exams, barely passing one of them.
Please don’t do what I did. I was desperate to get this thing over with. I wanted to move on and work on other things for my job search, but if you’re not in that situation, please don’t do this. I can’t for the love of god tell you about OAI and CloudFront and why that’s different from an S3 URL. The only things I remember are the practical exercises I did in Adrian’s course. I’ll never forget how to create a VPC, because he makes you go through it manually. I’m not against Stephane’s course – each course is different in its own way (see the tips below).
So here’s what I recommend doing before writing for aws exam:
Don’t schedule your exam in advance. Work through your study materials first, and make sure you get at least 80% on all of Jon Bonso’s exams (I’d recommend aiming for 90% or higher).
If you like to learn things practically, I do recommend Adrian’s course. If you like to learn things conceptually, go with Stephane Maarek’s course. I found Stephane’s course more detailed when going through different architectures, but I can’t say that definitively because I never finished Adrian’s course.
Jon Bonso’s exams were about the same difficulty as the actual exam, but slightly more tricky. For example, many of the questions give you two different situations, and you really have to figure out what is being asked because the situations might contradict each other while the actual question asks one specific thing. However, there were a few questions that were definitely obvious if you knew the service.
I’m upset that even though I passed the exam, I’m still lacking some practical stuff, so I’m just going to go through Adrian’s Developer course, but without cramming this time. If you actually learn the material and practice it, it is definitely useful in the real world. I hope this helps you pass and actually learn the stuff.
P.S I vehemently disagree with Adrian in one thing in his course. doggogram.io is definitely better than catagram.io, although his cats are pretty cool
I sat the exam at a PearsonVUE test centre and scored 816.
The exam had lots of questions around S3, RDS and storage. To be honest it was a bit of a blur but they are the ones I remember.
I was a bit worried before sitting the exam as I only hit 76% in the official AWS practice exam the night before, but it turned out alright in the end!
I have around 8 years of experience in IT but AWS was relatively new to me around 5 weeks ago.
Training Material Used
Firstly I ran through the u/stephanemaarek course which I found to pretty much cover all that was required!
I then used the u/Tutorials_Dojo practice exams. I took one before starting Stephane’s course to see where I was at with no training. I got 46% but I suppose a few of them were lucky guesses!
I then finished the course and took another test and hit around 65%. TD was great as they gave explanations for the answers, which I used to go back to the course and go over my weak areas again.
I then couldn’t seem to get higher than the low 70%s on the exams, so I went through u/neal-davis’s course; this was also great as it had an “Exam Cram” video at the end of each topic.
I also set up flashcards on BrainScape which helped me remember AWS services and what their function is.
All in all it was a great learning experience and I look forward to putting my skills into action!
S3 Use cases, storage tiers, cloudfront were pretty prominent too
Only got one “figure out what’s wrong with this IAM policy” question
A handful of dynamodb questions and a handful for picking use cases between different database types or caching layers.
Other typical tips: When you’re unclear on which answer you should pick, or if they seem very similar, work on eliminating answers first. “It can’t be X because of Y” can help a lot.
Testimonial: Passed the AWS Solutions Architect Associate exam! I prepared mostly from freely available resources as my basics were strong. Bought Jon Bonso’s tests on Udemy and they turned out to be super important while preparing for those particular type of questions (i.e. the questions which feel subjective, but they aren’t), understanding line of questioning and most suitable answers for some common scenarios.
Created a Notion notebook to note down those common scenarios, exceptions, what supports what, integrations etc. Used that notebook and cheat sheets on Tutorials Dojo website for revision on final day.
Found the exam a little tougher than Jon Bonso’s, but his practice tests on Udemy were crucial. Wouldn’t have passed it without them.
Piece of advice for upcoming test aspirants: Get your basics right, especially networking. Understand properly how different services interact in VPC. Focus more on the last line of the question. It usually gives you a hint upon what exactly is needed. Whether you need cost optimization, performance efficiency or high availability. Little to no operational effort means serverless. Understand all serverless services thoroughly.
I have almost no experience with AWS, except for completing the Certified Cloud Practitioner earlier this year. My work is pushing all IT employees to complete some cloud training and certifications, which is why I chose to do this.
How I Studied: My company pays for acloudguru subscriptions for its employees, so I used that for the bulk of my learning. I took notes on 3×5 notecards on the key terms and concepts for review.
Once I scored passing grades on the ACG practice tests, I took the Jon Bonso tests on Udemy, which are much more difficult and fairly close to the difficulty of the actual exam. I scored 45%-74% on every Bonso practice test, and spent 1-2 hours after each test reviewing what I missed, supplementing my note cards, and taking time to understand my weak spots. I only took these tests once each, but in between each practice test, I would review all my note cards until I had the content largely memorized.
The Test: This was one of the most difficult certification tests I’ve ever done. The exam was remote proctored with PearsonVUE (I used PSI for the CCP and didn’t like it as much). I felt like I was failing half the time. I marked about 25% of the questions for review, and I used up the entire allotted time. The questions are mostly about understanding which services interact with which other services, or which services are incompatible with the scenario. It was important for me to read through each response and eliminate the ones that don’t make sense. A lot of the responses mentioned AWS services that sound good but don’t actually work together (i.e. if it doesn’t make sense to have service X querying database Y, that probably isn’t the right answer). I can’t point to one domain that really needs to be studied more than any other. You need to know all of the content for the exam.
Final Thoughts: The ACG practice tests are not a good metric for success for the actual SAA exam, and I would not have passed without Bonso’s tests showing me my weak spots. PearsonVUE is better than PSI. Make sure to study everything thoroughly and review excessively. You don’t necessarily need 5 different study sources and years of experience to be able to pass (although both of those definitely help) and good luck to anyone that took the time to read!
AWS Certified Solutions Architect Associate So glad to pass my first AWS certification after 6 weeks of preparation.
My Preparation:
After some trial and error in picking the appropriate learning content, I eventually went with the community’s advice and took the course presented by the amazing u/stephanemaarek, in addition to the practice exams by Jon Bonso. At this point, I can’t say anything that hasn’t been said already about how helpful they are. It’s a great combination of learning material, and I appreciate the instructor’s work and the community’s help in this sub.
Review:
Throughout the course I noted down the important points, and used the course slides as a reference in the first review iteration. Before resorting to Udemy’s practice exams, I purchased a practice exam from another website, which I regret (not to defame the other vendor, I would simply recommend Udemy). Udemy’s practice exams were incredible, in that they made me aware of the points I hadn’t understood clearly. After each exam, I would go through both the incorrect answers and the questions I had marked for review, write down the topic for review, and read the explanation thoroughly. The explanations point to the respective documentation in AWS, which is a recommended read, especially if you don’t feel confident with the service. What I want to note is that I didn’t get satisfying marks on the first go at the practice exams (I got an average of ~70%). Throughout the 6 practice exams, I aggregated a long list of topics to review, then went back to the course slides and practice-exam explanations, in addition to the AWS documentation for the respective service. On the second go I averaged 85%. The second attempt at the exams was important as a confidence boost, as I made sure I understood the services more clearly.
The take away:
Don’t feel disappointed if you get bad results at your practice-exams. Make sure to review the topics and give it another shot.
The AWS documentation is your friend! It is very clear and concise. My only regret is not having referenced the documentation enough after learning new services.
The exam:
I scheduled the exam using PSI. I was very confident going into the exam, but going through such an exam environment for the first time made me feel under pressure. Partly because I didn’t feel comfortable being monitored (I was afraid I’d be disqualified if I moved or covered my mouth), but mostly because there was a lot at stake for me, and I had to pass on the first go. The questions were harder than expected, but I tried to analyze the questions more and eliminate the invalid answers. I was very nervous and kept reviewing flagged questions up to the last minute. Luckily, I pulled through.
The take away:
The proctors are friendly; just make sure you feel comfortable in the exam place, and use the practice exams to prepare for the actual exam’s environment. That includes sitting in a straight posture and not talking, whispering, or looking away.
Make sure to organize the time dedicated to each question well, and don’t let yourself get distracted by being monitored like I did.
Don’t skip questions that you are not sure of. Try to select the most probable answer, then flag the question. This will make the very stressful, last-minute review easier.
You have been engaged by a company to design and lead a migration to an AWS environment. The team is concerned about the capabilities of the new environment, especially when it comes to high availability and cost-effectiveness. The design calls for about 20 instances (c3.2xlarge) pulling jobs/messages from SQS. Network traffic per instance is estimated to be around 500 Mbps at the beginning and end of each job. Which configuration should you plan on deploying?
Spread the Instances over multiple AZs to minimize the traffic concentration and maximize fault-tolerance. With a multi-AZ configuration, an additional reliability point is scored as the entire Availability Zone itself is ruled out as a single point of failure. This ensures high availability. Wherever possible, use simple solutions such as spreading the load out rather than expensive high tech solutions
To save money, you quickly stored some data in one of the attached volumes of an EC2 instance and stopped it for the weekend. When you returned on Monday and restarted your instance, you discovered that your data was gone. Why might that be?
The volume was ephemeral, block-level storage. Data on an instance store volume is lost if an instance is stopped.
The most likely answer is that the EC2 instance had an instance store volume attached to it. Instance store volumes are ephemeral, meaning that data in attached instance store volumes is lost if the instance stops.
Your company likes the idea of storing files on AWS. However, low-latency service of the last few days of files is important to customer service. Which Storage Gateway configuration would you use to achieve both of these ends?
A file gateway simplifies file storage in Amazon S3, integrates to existing applications through industry-standard file system protocols, and provides a cost-effective alternative to on-premises storage. It also provides low-latency access to data through transparent local caching.
Cached volumes allow you to store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data.
You’ve been commissioned to develop a high-availability application with a stateless web tier. Identify the most cost-effective means of reaching this end.
Use an Elastic Load Balancer, a multi-AZ deployment of an Auto-Scaling group of EC2 Spot instances (primary) running in tandem with an Auto-Scaling group of EC2 On-Demand instances (secondary), and DynamoDB.
With proper scripting and scaling policies, running EC2 On-Demand instances behind the Spot instances will deliver the most cost-effective solution because On-Demand instances will only spin up if the Spot instances are not available. DynamoDB lends itself to supporting stateless web/app installations better than RDS does.
You are building a NAT Instance in an m3.medium using the AWS Linux2 distro with amazon-linux-extras installed. Which of the following do you need to set?
Ensure that “Source/Destination Checks” is disabled on the NAT instance. With a NAT instance, the most common oversight is forgetting to disable Source/Destination Checks. Note: This is a legacy topic, and while it may appear on the AWS exam, it will only do so infrequently.
You are reviewing Change Control requests and you note that there is a proposed change designed to reduce errors due to SQS Eventual Consistency by updating the “DelaySeconds” attribute. What does this mean?
When a new message is added to the SQS queue, it will be hidden from consumer instances for a fixed period.
Delay queues let you postpone the delivery of new messages to a queue for a number of seconds, for example, when your consumer application needs additional time to process messages. If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes. To set delay seconds on individual messages, rather than on an entire queue, use message timers to allow Amazon SQS to use the message timer’s DelaySeconds value instead of the delay queue’s DelaySeconds value. Reference: Amazon SQS delay queues.
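A short boto3 sketch, with a placeholder queue name, showing both a queue-level DelaySeconds and a per-message message timer.

```python
# Hypothetical sketch: a delay queue plus a per-message timer that overrides it.
import boto3

sqs = boto3.client("sqs")

queue_url = sqs.create_queue(
    QueueName="jobs-queue",                 # placeholder
    Attributes={"DelaySeconds": "60"},      # delay queue: hide new messages for 60s
)["QueueUrl"]

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"job": 42}',
    DelaySeconds=30,                        # message timer overrides the queue delay
)
```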
Amazon SQS keeps track of all tasks and events in an application: True or False?
False. Amazon SWF (not Amazon SQS) keeps track of all tasks and events in an application. Amazon SQS requires you to implement your own application-level tracking, especially if your application uses multiple queues. Amazon SWF FAQs.
You work for a company, and you need to protect your data stored on S3 from accidental deletion. Which actions might you take to achieve this?
Enable versioning on the bucket and protect the objects by configuring MFA-protected API access.
Your Security Manager has hired a security contractor to audit your network and firewall configurations. The consultant doesn’t have access to an AWS account. You need to provide the required access for the auditing tasks, and answer a question about login details for the official AWS firewall appliance. Which actions might you do?
AWS has removed the Firewall appliance from the hub of the network and implemented the firewall functionality as stateful Security Groups, and stateless subnet NACLs. This is not a new concept in networking, but rarely implemented at this scale.
Create an IAM user for the auditor and explain that the firewall functionality is implemented as stateful Security Groups, and stateless subnet NACLs
Amazon ElastiCache can fulfill a number of roles. Which operations can be implemented using ElastiCache for Redis?
Amazon ElastiCache offers a fully managed Memcached and Redis service. Although the name only suggests caching functionality, the Redis service in particular can offer a number of operations such as Pub/Sub, Sorted Sets and an In-Memory Data Store. However, Amazon ElastiCache for Redis doesn’t support multithreaded architectures.
You have been asked to deploy an application on a small number of EC2 instances. The application must be placed across multiple Availability Zones and should also minimize the chance of underlying hardware failure. Which actions would provide this solution?
Deploy the EC2 servers in a Spread Placement Group.
Spread Placement Groups are recommended for applications that have a small number of critical instances which need to be kept separate from each other. Launching instances in a Spread Placement Group reduces the risk of simultaneous failures that might occur when instances share the same underlying hardware. Spread Placement Groups provide access to distinct hardware, and are therefore suitable for mixing instance types or launching instances over time. In this case, deploying the EC2 instances in a Spread Placement Group is the only correct option.
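A minimal boto3 sketch with a placeholder AMI and names: create a spread placement group and launch the instances into it so each lands on distinct underlying hardware.

```python
# Hypothetical sketch: spread placement group for a small set of critical instances.
import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="critical-spread", Strategy="spread")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m5.large",
    MinCount=3,
    MaxCount=3,
    Placement={"GroupName": "critical-spread"},
)
```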
You manage a NodeJS messaging application that lives on a cluster of EC2 instances. Your website occasionally experiences brief, strong, and entirely unpredictable spikes in traffic that overwhelm your EC2 instances’ resources and freeze the application. As a result, you’re losing recently submitted messages from end-users. You use Auto Scaling to deploy additional resources to handle the load during spikes, but the new instances don’t spin-up fast enough to prevent the existing application servers from freezing. Can you provide the most cost-effective solution in preventing the loss of recently submitted messages?
Use Amazon SQS to decouple the application components and keep the messages in queue until the extra Auto-Scaling instances are available.
Neither increasing the size of your EC2 instances nor maintaining additional EC2 instances is cost-effective, and pre-warming an ELB signifies that these spikes in traffic are predictable. The cost-effective solution to the unpredictable spike in traffic is to use SQS to decouple the application components.
True statements on S3 URL styles
Virtual-host-style URLs (such as: https://bucket-name.s3.Region.amazonaws.com/key name) are supported by AWS.
Path-Style URLs (such as https://s3.Region.amazonaws.com/bucket-name/key name) are supported by AWS.
You run an automobile reselling company that has a popular online store on AWS. The application sits behind an Auto Scaling group and requires new instances of the Auto Scaling group to identify their public and private IP addresses. How can you achieve this?
Using a Curl or Get Command to get the latest meta-data from http://169.254.169.254/latest/meta-data/
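A standard-library Python sketch of the same metadata lookup (using an IMDSv2 token, which newer instances require); it only works when run on an EC2 instance.

```python
# Hypothetical sketch: fetch an IMDSv2 token, then read the instance's
# public and private IP addresses from the metadata service.
import urllib.request

BASE = "http://169.254.169.254/latest"

def _request(path, method="GET", headers=None):
    req = urllib.request.Request(BASE + path, method=method, headers=headers or {})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

token = _request(
    "/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
auth = {"X-aws-ec2-metadata-token": token}

public_ip = _request("/meta-data/public-ipv4", headers=auth)
private_ip = _request("/meta-data/local-ipv4", headers=auth)
print(public_ip, private_ip)
```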
What data formats are used to create CloudFormation templates?
JSON and YAML.
You have launched a NAT instance into a public subnet, and you have configured all relevant security groups, network ACLs, and routing policies to allow this NAT to function. However, EC2 instances in the private subnet still cannot communicate out to the internet. What troubleshooting steps should you take to resolve this issue?
Disable the Source/Destination Check on your NAT instance.
A NAT instance sends and retrieves traffic on behalf of instances in a private subnet. As a result, source/destination checks on the NAT instance must be disabled to allow the sending and receiving traffic for the private instances. Route 53 resolves DNS names, so it would not help here. Traffic that is originating from your NAT instance will not pass through an ELB. Instead, it is sent directly from the public IP address of the NAT Instance out to the Internet.
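A one-call boto3 sketch of the fix, with a placeholder instance ID.

```python
# Hypothetical sketch: disable the source/destination check on the NAT instance
# so it can forward traffic for the private subnet.
import boto3

ec2 = boto3.client("ec2")

ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",   # placeholder NAT instance ID
    SourceDestCheck={"Value": False},
)
```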
You need a storage service that delivers the lowest-latency access to data for a database running on a single EC2 instance. Which of the following AWS storage services is suitable for this use case?
Amazon EBS is a block level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.
What are DynamoDB use cases?
Use cases include storing JSON data, BLOB data and storing web session data.
You are reviewing Change Control requests, and you note that there is a change designed to reduce costs by updating the Amazon SQS “WaitTimeSeconds” attribute. What does this mean?
When the consumer instance polls for new work, the SQS service will allow it to wait a certain time for one or more messages to be available before closing the connection.
Poor timing of SQS processes can significantly impact the cost effectiveness of the solution.
Long polling helps reduce the cost of using Amazon SQS by eliminating the number of empty responses (when there are no messages available for a ReceiveMessage request) and false empty responses (when messages are available but aren’t included in a response).
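A minimal boto3 sketch of long polling on the consumer side, with a placeholder queue URL; WaitTimeSeconds can also be set on the queue itself via the ReceiveMessageWaitTimeSeconds attribute.

```python
# Hypothetical sketch: long polling with WaitTimeSeconds on ReceiveMessage.
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs-queue"  # placeholder

messages = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,   # long poll: wait up to 20s instead of returning empty
).get("Messages", [])
```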
You have been asked to decouple an application by utilizing SQS. The application dictates that messages on the queue CAN be delivered more than once, but must be delivered in the order they have arrived while reducing the number of empty responses. Which option is most suitable?
Configure a FIFO SQS queue and enable long polling.
You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP. However, when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect?
Immediately.
You need to restrict access to an S3 bucket. Which methods can you use to do so?
There are two ways of restricting access to an S3 bucket: Access Control Lists (Permissions) and bucket policies.
You are reviewing Change Control requests, and you note that there is a change designed to reduce wasted CPU cycles by increasing the value of your Amazon SQS “VisibilityTimeout” attribute. What does this mean?
When a consumer instance retrieves a message, that message will be hidden from other consumer instances for a fixed period.
Poor timing of SQS processes can significantly impact the cost effectiveness of the solution. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.
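A short boto3 sketch, with a placeholder queue URL, that extends the visibility timeout of a message a consumer is still processing.

```python
# Hypothetical sketch: hide an in-flight message from other consumers for longer.
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs-queue"  # placeholder

resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=300,   # hide the message for another 5 minutes
    )
```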
With EBS, I can ____.
Create an encrypted volume from a snapshot of another encrypted volume.
Create an encrypted snapshot from an unencrypted snapshot by creating an encrypted copy of the unencrypted snapshot.
You can create an encrypted volume from a snapshot of another encrypted volume.
Although there is no direct way to encrypt an existing unencrypted volume or snapshot, you can encrypt the data by creating an encrypted copy of the snapshot, or by creating a new encrypted volume from the snapshot. Reference: Encrypting unencrypted resources.
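A hedged boto3 sketch of the copy-and-encrypt path, with placeholder IDs and the default EBS KMS key alias.

```python
# Hypothetical sketch: make an encrypted copy of an unencrypted snapshot,
# then restore an encrypted volume from that copy.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

copy = ec2.copy_snapshot(
    SourceSnapshotId="snap-0123456789abcdef0",  # placeholder
    SourceRegion="us-east-1",
    Encrypted=True,
    KmsKeyId="alias/aws/ebs",
)

# Wait for the encrypted copy to complete before creating a volume from it.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])

ec2.create_volume(
    SnapshotId=copy["SnapshotId"],
    AvailabilityZone="us-east-1a",
)
```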
Following advice from your consultant, you have configured your VPC to use dedicated hosting tenancy. Your VPC has an Amazon EC2 Auto Scaling designed to launch or terminate Amazon EC2 instances on a regular basis, in order to meet workload demands. A subsequent change to your application has rendered the performance gains from dedicated tenancy superfluous, and you would now like to recoup some of these greater costs. How do you revert your instance tenancy attribute of a VPC to default for new launched EC2 instances?
Modify the instance tenancy attribute of your VPC from dedicated to default using the AWS CLI, an AWS SDK, or the Amazon EC2 API.
You can change the instance tenancy attribute of a VPC from dedicated to default. Modifying the instance tenancy of the VPC does not affect the tenancy of any existing instances in the VPC. The next time you launch an instance in the VPC, it has a tenancy of default, unless you specify otherwise during launch. You can modify the instance tenancy attribute of a VPC using the AWS CLI, an AWS SDK, or the Amazon EC2 API only. Reference: Change the tenancy of a VPC.
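A one-call boto3 sketch with a placeholder VPC ID; the API only accepts changing the tenancy attribute to default.

```python
# Hypothetical sketch: revert the VPC's instance tenancy attribute to "default".
# Existing instances keep their current tenancy.
import boto3

ec2 = boto3.client("ec2")

ec2.modify_vpc_tenancy(VpcId="vpc-0123456789abcdef0", InstanceTenancy="default")
```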
Amazon DynamoDB is a fast, fully managed NoSQL database service. DynamoDB makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic.
DynamoDB is used to create tables that store and retrieve any amount of data.
DynamoDB uses SSDs to store data.
Provides automatic and synchronous data replication across multiple Availability Zones.
Maximum item size is 400KB
Supports cross-region replication.
DynamoDB Core Concepts:
The fundamental concepts around DynamoDB are:
Tables – collections of data.
Items – the individual entries in a table.
Attributes – the properties associated with each entry.
Primary Keys.
Secondary Indexes.
DynamoDB streams.
Secondary Indexes:
The Secondary index is a data structure that contains a subset of attributes from the table, along with an alternate key that supports Query operations.
Every secondary index is related to only one table, from which it obtains data. This is called the base table of the index.
When you create an index, you define an alternate key for it (a partition key and a sort key). DynamoDB copies these attributes into the index, along with the primary key attributes derived from the table.
After this is done, you can use Query and Scan on the index in the same way as you would on a table.
Every secondary index is automatically maintained by DynamoDB.
DynamoDB Indexes: DynamoDB supports two indexes:
Local Secondary Index (LSI) – the index has the same partition key as the base table but a different sort key.
Global Secondary Index (GSI) – the index has a partition key and sort key that are different from those of the base table.
When creating more than one table with secondary indexes, you must do so sequentially. Create the tables one after another: create the first table and wait for it to become active.
Once that table is active, create the next table and wait for it to become active, and so on. If you try to create multiple tables concurrently, DynamoDB returns a LimitExceededException.
You must specify the following, for every secondary index:
Type – you must specify whether you are creating a Global Secondary Index or a Local Secondary Index.
Name – you must specify a name for the index. The naming rules are the same as those for the table it is associated with. You can use the same index name with different base tables.
Key – every attribute in the index key schema must be a top-level attribute of type string, number, or binary. Other data types, including documents and sets, are not allowed. Other requirements depend on the type of index you choose.
For a GSI – the partition key can be any scalar attribute of the base table.
The sort key is optional and can also be any scalar attribute of the base table.
For an LSI – the partition key must be the same as the base table’s partition key.
The sort key must be a non-key table attribute.
Additional Attributes: The additional attributes are in addition to the tables key attributes. They are automatically projected into every index. You can use attributes for any data type, including scalars, documents and sets.
Throughput – the throughput settings for the index, if required:
GSI – specify read and write capacity unit settings. These provisioned throughput settings are independent of the base table's settings.
LSI – you do not specify read and write capacity unit settings; read and write operations on a local secondary index consume the base table's provisioned throughput.
You can create up to 20 global secondary indexes (the default quota) and 5 local secondary indexes per table. When you delete a table, all of its indexes are deleted with it.
You can use the Query or Scan operation to fetch data from a table or an index. Query returns results sorted by the sort key, in ascending order by default; set ScanIndexForward to false to get descending order.
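To tie the index concepts above together, here is a minimal boto3 sketch (table, attribute, and index names are illustrative) that creates a table with one LSI and one GSI, then waits for it to become ACTIVE before any further tables are created:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Hypothetical "Orders" table: partition key CustomerId, sort key OrderId,
# plus one LSI (same partition key, different sort key) and one GSI.
dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "OrderId", "AttributeType": "S"},
        {"AttributeName": "OrderDate", "AttributeType": "S"},
        {"AttributeName": "ProductId", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "CustomerId", "KeyType": "HASH"},   # partition key
        {"AttributeName": "OrderId", "KeyType": "RANGE"},     # sort key
    ],
    LocalSecondaryIndexes=[{
        "IndexName": "OrderDateIndex",
        "KeySchema": [
            {"AttributeName": "CustomerId", "KeyType": "HASH"},  # must match the base table
            {"AttributeName": "OrderDate", "KeyType": "RANGE"},  # non-key table attribute
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    GlobalSecondaryIndexes=[{
        "IndexName": "ProductIndex",
        "KeySchema": [
            {"AttributeName": "ProductId", "KeyType": "HASH"},   # any scalar attribute
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
        # GSI throughput is independent of the base table's settings.
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    # Base table throughput; the LSI draws from these settings.
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)

# Wait for the table (and its indexes) to become ACTIVE before creating more
# tables with secondary indexes, to avoid a LimitExceededException.
dynamodb.get_waiter("table_exists").wait(TableName="Orders")
```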
Network Load Balancer Overview: A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration. When you enable an Availability Zone for the load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. By default, each load balancer node distributes traffic across the registered targets in its Availability Zone only. If you enable cross-zone load balancing, each load balancer node distributes traffic across the registered targets in all enabled Availability Zones. It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part. The Network Load Balancer is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:
Static IP Addresses – Each Network Load Balancer provides a single IP address for each Availability Zone in its purview. If you have targets in us-west-2a and other targets in us-west-2c, NLB will create and manage two IP addresses (one per AZ); connections to that IP address will spread traffic across the instances in all the VPC subnets in the AZ. You can also specify an existing Elastic IP for each AZ for even greater control. With full control over your IP addresses, a Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.
Zonality – The IP-per-AZ feature reduces latency with improved performance, improves availability through isolation and fault tolerance, and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single AZ while still providing automatic failover should those targets become unavailable.
Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.
Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.
Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.
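As an illustration of the concepts above, the following boto3 sketch creates an internet-facing Network Load Balancer with a TCP listener and target group; all names, subnet IDs, and the VPC ID are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Internet-facing NLB spanning two Availability Zones (one node and one IP per AZ).
nlb = elbv2.create_load_balancer(
    Name="example-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"],
)

# TCP target group; source IPs are preserved for the registered instances.
tg = elbv2.create_target_group(
    Name="example-nlb-targets",
    Protocol="TCP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

# Layer 4 listener forwarding TCP/80 to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="TCP",
    Port=80,
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)
```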
There are two types of VPC endpoints: (1) interface endpoints and (2) gateway endpoints. Interface endpoints enable connectivity to services over AWS PrivateLink.
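A hedged boto3 sketch of creating both endpoint types (service names shown for S3 and Systems Manager in us-east-1; all IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint (S3): routes are added to the chosen route tables.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)

# Interface endpoint (e.g. SSM), powered by AWS PrivateLink: an ENI with a
# private IP address is created in the chosen subnet(s).
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.ssm",
    SubnetIds=["subnet-0aaa1111bbbb2222c"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
```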
Amazon EC2 uses key pairs to encrypt and decrypt login information.
A sender uses the public key to encrypt data, which the recipient decrypts using the corresponding private key. Together, these two keys are known as a key pair.
You need a key pair to be able to connect to your instances. The way this works on Linux and Windows instances is different.
First, when you launch a new instance, you assign a key pair to it. Then, when you log in to it, you use the private key.
The difference between Linux and Windows instances is that Linux instances do not have a password already set and you must use the key pair to log in to Linux instances. On the other hand, on Windows instances, you need the key pair to decrypt the administrator password. Using the decrypted password, you can use RDP and then connect to your Windows instance.
Amazon EC2 stores only the public key, and you can either generate it inside Amazon EC2 or you can import it. Since the private key is not stored by Amazon, it’s advisable to store it in a secure place as anyone who has this private key can log in on your behalf.
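A minimal boto3 sketch of both options, creating a key pair in EC2 or importing your own public key (names and file paths are illustrative):

```python
import boto3

ec2 = boto3.client("ec2")

# Create a key pair: EC2 stores only the public key and returns the private
# key material exactly once, so save it securely right away.
key = ec2.create_key_pair(KeyName="example-key")
with open("example-key.pem", "w") as f:
    f.write(key["KeyMaterial"])

# You can then connect to a Linux instance with, for example:
#   ssh -i example-key.pem ec2-user@<instance-public-ip>

# Alternatively, import a public key you generated yourself:
# ec2.import_key_pair(KeyName="example-imported",
#                     PublicKeyMaterial=open("id_rsa.pub", "rb").read())
```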
AWS PrivateLink provides private connectivity between VPCs and services hosted on AWS or on-premises, securely on the Amazon network. By providing a private endpoint to access your services, AWS PrivateLink ensures your traffic is not exposed to the public internet.
There are two types of security groups, depending on where you launch your instance. When you launch an instance on EC2-Classic, you specify an EC2-Classic security group; when you launch an instance in a VPC, you specify an EC2-VPC security group. The main differences are that EC2-Classic security groups control inbound traffic only and cannot be changed on a running instance, whereas EC2-VPC security groups control both inbound and outbound traffic and can be added or removed while the instance is running.
I think this is historical in nature. S3 and DynamoDB were the first services to support VPC endpoints. The release of those VPC endpoint features pre-dates two important services that subsequently enabled interface endpoints: Network Load Balancer and AWS PrivateLink.
Take advantage of execution context reuse to improve the performance of your function. Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations processed by the same instance of your function can reuse these resources, which saves execution time. To avoid potential data leaks across invocations, don't use the execution context to store user data, events, or other information with security implications. If your function relies on mutable state that can't be stored in memory within the handler, consider creating a separate function or separate versions of a function for each user.
Use AWS Lambda Environment Variables to pass operational parameters to your function. For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable.
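A small Python Lambda handler illustrating both recommendations; the BUCKET_NAME environment variable and the object key layout are assumptions for the example:

```python
import json
import os

import boto3

# Initialized once per execution environment and reused across warm invocations.
s3 = boto3.client("s3")
BUCKET_NAME = os.environ["BUCKET_NAME"]  # configured as a Lambda environment variable

def handler(event, context):
    # Per-invocation work only; no user data is kept in the execution context.
    s3.put_object(
        Bucket=BUCKET_NAME,
        Key=f"events/{context.aws_request_id}.json",
        Body=json.dumps(event),
    )
    return {"status": "stored"}
```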
You can use VPC Flow Logs. The steps would be the following:
Enable VPC Flow Logs for the VPC your EC2 instance lives in. You can do this from the VPC console
Having VPC Flow Logs enabled will create a CloudWatch Logs log group
Find the Elastic Network Interface assigned to your EC2 instance. Also, get the private IP of your EC2 instance. You can do this from the EC2 console.
Find the CloudWatch Logs log stream for that ENI.
Search the log stream for records where your Windows instance’s IP is the destination IP, make sure the port is the one you’re looking for. You’ll see records that tell you if someone has been connecting to your EC2 instance. For example, there are bytes transferred, status=ACCEPT, log-status=OK. You will also know the source IP that connected to your instance.
I recommend using CloudWatch Logs Metric Filters, so you don’t have to do all this manually. Metric Filters will find the patterns I described in your CloudWatch Logs entries and will publish a CloudWatch metric. Then you can trigger an alarm that notifies you when someone logs in to your instance.
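A boto3 sketch of those steps, enabling flow logs and adding a metric filter that counts accepted RDP (port 3389) connections to a hypothetical private IP; the VPC ID, role ARN, log group name, and IP address are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
logs = boto3.client("logs")

# 1. Enable VPC Flow Logs, delivered to a CloudWatch Logs log group.
ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],
    TrafficType="ALL",
    LogGroupName="my-vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)

# 2. Metric filter matching accepted connections to the instance's private IP
#    on port 3389, so a CloudWatch alarm can notify you of logins.
logs.put_metric_filter(
    logGroupName="my-vpc-flow-logs",
    filterName="rdp-connections-to-instance",
    filterPattern='[version, account, eni, source, destination=10.0.1.25, '
                  'srcport, destport=3389, protocol, packets, bytes, start, '
                  'end, action=ACCEPT, logstatus]',
    metricTransformations=[{
        "metricName": "RdpConnections",
        "metricNamespace": "VpcFlowLogs",
        "metricValue": "1",
    }],
)
```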
Here are more details from the AWS Official Blog and the AWS documentation for VPC Flow Logs records:
Also, there are third-party tools that simplify all these steps for you and give you very good visibility and alerts into what's happening in your AWS network resources; Observable Networks is one example I've tried and liked.
Typically outbound traffic is not blocked by NAT on any port, so you would not need to explicitly allow those, since they should already be allowed. Your firewall generally would have a rule to allow return traffic that was initiated outbound from inside your office.
Packet sniffing by other tenants. It is not possible for a virtual instance running in promiscuous mode to receive or “sniff” traffic that is intended for a different virtual instance. While you can place your interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer located on the same physical host cannot listen to each other’s traffic. Attacks such as ARP cache poisoning do not work within Amazon EC2 and Amazon VPC. While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another’s data, as a standard practice you should encrypt sensitive traffic.
But as you can see, they still recommend that you should maintain encryption inside your network. We have taken the approach of terminating SSL at the external interface of the ELB, but then initiating SSL from the ELB to our back-end servers, and even further, to our (RDS) databases. It’s probably belt-and-suspenders, but in my industry it’s needed. Heck, we have some interfaces that require HTTPS and a VPN.
What’s the use case for S3 Pre-signed URL for uploading objects?
I get the use case of allowing access to private/premium content in S3 using a pre-signed URL that can be used to view or download the file until the expiration time set. But what's a real-life scenario in which a web app would need to generate a URL giving users temporary credentials to upload an object? Can't the same be done by using the SDK and exposing a REST API at the backend?
Asking this since I want to build a POC for this functionality in Java, but I'm struggling to find a real-world use case for it.
Pre-signed URLs are used to provide short-term access to a private object in your S3 bucket. They work by appending an AWS access key, an expiration time, and a SigV4 signature as query parameters to the S3 object URL. There are two common use cases when you may want to use them:
Simple, occasional sharing of private files.
Frequent, programmatic access to view or upload a file in an application.
Imagine you may want to share a confidential presentation with a business partner, or you want to allow a friend to download a video file you’re storing in your S3 bucket. In both situations, you could generate a URL, and share it to allow the recipient short-term access.
There are a couple of different approaches for generating these URLs in an ad-hoc, one-off fashion, including the Amazon S3 console, the AWS CLI (aws s3 presign), and the AWS SDKs.
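For the programmatic case, a minimal boto3 sketch (bucket and key names are illustrative); the upload variant is exactly the scenario asked about, since the browser can PUT the file straight to S3 and the file bytes never pass through your application servers:

```python
import boto3

s3 = boto3.client("s3")

# Short-lived download link for a private object.
download_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-private-bucket", "Key": "reports/q3.pdf"},
    ExpiresIn=3600,  # seconds
)

# Upload case: the backend returns this URL so the client can PUT the file
# directly to S3 without routing the payload through the application servers.
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "example-private-bucket", "Key": "uploads/user-123/photo.jpg"},
    ExpiresIn=300,
)

print(download_url)
print(upload_url)
```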
First time going there; I'd like to know the dos and don'ts in advance from people with previous experience.
Pre-plan as much as you can, but don't sweat it in the moment if it doesn't work out. The experience and networking are as valuable as the sessions, if not more so.
Deliberately know where your exits are. Most of Vegas is designed to keep you inside — when you’re burned out from the crowds and knowledge deluge is not the time to be trying to figure out how the hell you get out of wherever you are.
Study maps of how the properties interconnect before you go. You can get a lot of places without ever going outside. Be able to make a deliberate decision of what route to take. Same thing for the outdoor escalators and pedestrian bridges — they’re not necessarily intuitive, but if you know where they go, they’re a life saver running between events.
Drink more water and eat less food than you think you need to. Your mind and body will thank you.
Be prepared for all of the other Vegasisms if you ever plan on leaving the con boundaries (like to walk down the street to another venue) — you will likely be propositioned by mostly naked showgirls, see overt advertisement for or even be directly propositioned by prostitutes and their business associates, witness some pretty awful homelessness, and be “accidentally bumped into” pretty regularly by amateur pickpockets.
Switching gears between “work/AWS” and “surviving Vegas” multiple times a day can be seriously mentally taxing. I haven’t found any way to prevent that, just know it’s going to happen.
Take a burner laptop and not your production access work machine. You don’t want to accidentally crater your production environment because you gave the wrong cred as part of a lab.
There are helpful staffers everywhere around the con — don’t be afraid to leverage them — they tend to be much better informed than the ushers/directors/crowd wranglers at other cons.
Plan on getting Covid or at very least Con Crud. If you’re not used to being around a million sick people in the desert, it’s going to take its toll on your body one way or another.
Don’t set morning alarms. If your body needs to sleep in, that was more important than whatever morning session you wanted to catch. Watch the recording later on your own time and enjoy your mental clarity for the rest of the day.
Wander the expo floor when you’re bored to get a big picture of the ecosystem, but don’t expect anything too deep. The partner booths are all fun and games and don’t necessarily align with reality. Hang out at the “Ask AWS” booths — people ask some fun interesting questions and AWS TAMs/SAs and the other folks staffing the booth tend not to suck.
Listen to The Killers / Brandon Flowers when walking around outside — he grew up in Las Vegas and a lot of his music has subtle (and not so subtle) hints on how to survive and thrive there.
I’m sure there’s more, but that’s what I can think of off the top of my head.
This is more Vegas-advice than pure Re:Invent advice, but if you’re going to be in the city for more than 3 days try to either:
Find a way off/out of the strip for an afternoon. A hike out at Red Rocks is a great option.
Get a pass to the spa at your hotel so that you can escape the casino/event/hotel room trap. It’s amazing how shitty you feel without realizing it until you do a quick workout and steam/sauna/ice bath routine.
I’ve also seen a whole variety of issues that people run into during hands-on workshops where for one reason or another their corporate laptop/email/security won’t let them sign up and log into a new AWS account. Make sure you don’t have any restrictions there, as that’ll be a big hassle. The workshops have been some of the best and most memorable sessions for me.
More tips:
Sign up for all the parties! Try to get your sessions booked too, it’s a pain to be on waitlists. Don’t do one session at Venetian followed by a session at MGM. You’ll never make it in time. Try to group your sessions by location/day.
We catalog all the parties, keep a list of the latest (and older) guides, the Expo floor plan, drawings, etc. On Twitter as well @reInventParties
Hidden gem if you’re into that sort of thing, the Pinball Museum is a great place to hang for a bit with some friends.
Bring sunscreen, a water bottle you like, really comfortable shoes, and lip balm.
Get at least one cert if you don’t already have one. The Cert lounge is a wonderful place to chill and the swag there is top tier.
Check the partner parties, they have good food and good swag.
Register with an alt email address (something like yourname+reinvent@domain.com) so you can set an email rule for all the spam.
If your workplace has an SA, coordinate with them for schedules and info. They will also curate calendars for you and get you insider info if you want them to.
Prioritize workshops and chalk talks. Partner talks are long advertisements, take them with a grain of salt.
Even if you are an introvert, network. There are folks there with valuable insights and skills. You are one of those.
Don’t underestimate the distance between venues. Getting from MGM to Venetian can take forever.
Bring very comfortable walking shoes and be prepared to spend a LOT of time on your feet, walking 25,000-30,000 steps a day. All of the other comments and ideas are awesome. The most important thing to remember, especially for your very first year, is to have fun. Don't just sit in breakouts all day and then go back to your hotel. Go to the after-dark events. Don't get too hung up if you don't make it to all the breakout sessions you want to attend. Let your first year be a learning curve in how to experience and enjoy re:Invent. It is the most epic week in Vegas you will ever experience. Maybe we will bump into each other. Love meeting new people.
Join Peter DeSantis, Senior Vice President, Utility Computing and Apps, to learn how AWS has optimized its cloud infrastructure to run some of the world’s most demanding workloads and give your business a competitive edge.
Join Dr. Werner Vogels, CTO, Amazon.com, as he goes behind the scenes to show how Amazon is solving today’s hardest technology problems. Based on his experience working with some of the largest and most successful applications in the world, Dr. Vogels shares his insights on building truly resilient architectures and what that means for the future of software development.
Applied artificial intelligence (AI) solutions, such as contact center intelligence (CCI), intelligent document processing (IDP), and media intelligence (MI), have had a significant market and business impact for customers, partners, and AWS. This session details how partners can collaborate with AWS to differentiate their products and solutions with AI and machine learning (ML). It also shares partner and customer success stories and discusses opportunities to help customers who are looking for turnkey solutions.
An implication of applying the microservices architectural style is that a lot of communication between components is done over the network. In order to achieve the full capabilities of microservices, this communication needs to happen in a loosely coupled manner. In this session, explore some fundamental application integration patterns based on messaging and connect them to real-world use cases in a microservices scenario. Also, learn some of the benefits that asynchronous messaging can have over REST APIs for communication between microservices.
Avoiding unexpected user behavior and maintaining reliable performance is crucial. This session is for application developers who want to learn how to maintain application availability and performance to improve the end user experience. Also, discover the latest on Amazon CloudWatch.
Amazon is transforming customer experiences through the practical application of AI and machine learning (ML) at scale. This session is for senior business and technology decision-makers who want to understand Amazon.com’s approach to launching and scaling ML-enabled innovations in its core business operations and toward new customer opportunities. See specific examples from various Amazon businesses to learn how Amazon applies AI/ML to shape its customer experience while improving efficiency, increasing speed, and lowering cost. Also hear the lessons the Amazon teams have learned from the cultural, process, and technical aspects of building and scaling ML capabilities across the organization.
Data has become a strategic asset. Customers of all sizes are moving data to the cloud to gain operational efficiencies and fuel innovation. This session details how partners can create repeatable and scalable solutions to help their customers derive value from their data, win new customers, and grow their business. It also discusses how to drive partner-led data migrations using AWS services, tools, resources, and programs, such as the AWS Migration Acceleration Program (MAP). Also, this session shares customer success stories from partners who have used MAP and other resources to help customers migrate to AWS and improve business outcomes.
User-facing web and mobile applications are the primary touchpoint between organizations and their customers. To meet the ever-rising bar for customer experience, developers must deliver high-quality apps with both foundational and differentiating features. AWS Amplify helps front-end web and mobile developers build faster front to back. In this session, review Amplify’s core capabilities like authentication, data, and file storage and explore new capabilities, such as Amplify Geo and extensibility features for easier app customization with AWS services and better integration with existing deployment pipelines. Also learn how customers have been successful using Amplify to innovate in their businesses.
AWS Amplify is a set of tools and services that makes it quick and easy for front-end web and mobile developers to build full-stack applications on AWS.
Amplify DataStore provides a programming model for leveraging shared and distributed data without writing additional code for offline and online scenarios, which makes working with distributed, cross-user data just as simple as working with local-only data
AWS AppSync is a managed GraphQL API service
Amazon DynamoDB is a serverless key-value and document database that’s highly scalable
While DevOps has not changed much, the industry has fundamentally transformed over the last decade. Monolithic architectures have evolved into microservices. Containers and serverless have become the default. Applications are distributed on cloud infrastructure across the globe. The technical environment and tooling ecosystem has changed radically from the original conditions in which DevOps was created. So, what’s next? In this session, learn about the next phase of DevOps: a distributed model that emphasizes swift development, observable systems, accountable engineers, and resilient applications.
Innovation Day
Innovation Day is a virtual event that brings together organizations and thought leaders from around the world to share how cloud technology has helped them capture new business opportunities, grow revenue, and solve the big problems facing us today, and in the future. Featured topics include building the first human basecamp on the moon, the next generation F1 car, manufacturing in space, the Climate Pledge from Amazon, and building the city of the future at the foot of Mount Fuji.
Latest AWS Products and Services announced at re:invent 2021
Graviton 3: AWS today announced the newest generation of its Arm-based Graviton processors: the Graviton 3. The company promises that the new chip will be 25 percent faster than the last-generation chips, with 2x faster floating-point performance and a 3x speedup for machine-learning workloads. AWS also promises that the new chips will use 60 percent less power.
Trn1 to train models for various applications
AWS Mainframe Modernization: Cut mainframe migration time by 2/3
AWS Private 5G: Deploy and manage your own private 5G network (Set up and scale a private mobile network in days)
Transaction for Governed tables in Lake Formation: Automatically manages conflicts and error
Serverless and On-Demand Analytics for Redshift, EMR, MSK, and Kinesis.
Amazon Sagemaker Canvas: Create ML predictions without any ML experience or writing any code
AWS IoT TwinMaker: Real Time system that makes it easy to create and use digital twins of real-world systems.
Amazon DevOps Guru for RDS: Automatically detect, diagnose, and resolve hard-to-find database issues.
Amazon DynamoDB Standard-Infrequent Access table class: Reduce costs by up to 60% while maintaining the same performance, durability, scaling, and availability as Standard.
AWS Database Migration Service Fleet Advisor: Accelerate database migration with automated inventory and migration. This service makes it easier and faster to get your data to the cloud and match it with the correct database service. “DMS Fleet Advisor automatically builds an inventory of your on-prem database and analytics services by streaming data from on premises to Amazon S3. From there, we take it over. We analyze [the data] to match it with the appropriate AWS data store and then provide customized migration plans.”
Amazon Sagemaker Ground Truth Plus: Deliver high-quality training datasets fast, and reduce data labeling cost.
Amazon SageMaker Training Compiler: Accelerate model training by 50%
Amazon SageMaker Inference Recommender: Reduce time to deploy from weeks to hours
Amazon SageMaker Serverless Inference: Lower cost of ownership with pay-per-use pricing
Amazon Kendra Experience Builder: Deploy Intelligent search applications powered by Amazon Kendra with a few clicks.
Amazon Lex Automated Chatbot Designer: Drastically Simplifies bot design with advanced natural language understanding
Amazon SageMaker Studio Lab: A no cost, no setup access to powerful machine learning technology
AWS Cloud WAN: Build, manage and monitor global wide area networks
AWS Amplify Studio: Visually build complete, feature-rich apps in hours instead of weeks, with full control over the application code.
AWS Carbon Footprint Tool: Don’t forget to turn off the lights.
AWS Well-Architected Sustainability Pillar: Learn, measure, and improve your workloads using environmental best practices in cloud computing
AWS re:Post: Get Answers from AWS experts. A Reimagined Q&A Experience for the AWS Community
You can automate any task that involves interaction with AWS and on-premises resources, including in multi-account and multi-Region environments, with AWS Systems Manager. In this session, learn more about three new Systems Manager launches at re:Invent—Change Manager, Fleet Manager, and Application Manager. In addition, learn how Systems Manager Automation can be used across multiple Regions and accounts, integrate with other AWS services, and extend to on-premises. This session takes a deep dive into how to author a custom runbook using an automation document, and how to execute automation anywhere.
Learn about the performance improvements made in Amazon EMR for Apache Spark and Presto, giving Amazon EMR one of the fastest runtimes for analytics workloads in the cloud. This session dives deep into how AWS generates smart query plans in the absence of accurate table statistics. It also covers adaptive query execution—a technique to dynamically collect statistics during query execution—and how AWS uses dynamic partition pruning to generate query predicates for speeding up table joins. You also learn about execution improvements such as data prefetching and pruning of nested data types.
Explore how state-of-the-art algorithms built into Amazon SageMaker are used to detect declines in machine learning (ML) model quality. One of the big factors that can affect the accuracy of models is the difference in the data used to generate predictions and what was used for training. For example, changing economic conditions could drive new interest rates affecting home purchasing predictions. Amazon SageMaker Model Monitor automatically detects drift in deployed models and provides detailed alerts that help you identify the source of the problem so you can be more confident in your ML applications.
Amazon Lightsail is AWS’s simple, virtual private server. In this session, learn more about Lightsail and its newest launches. Lightsail is designed for simple web apps, websites, and dev environments. This session reviews core product features, such as preconfigured blueprints, managed databases, load balancers, networking, and snapshots, and includes a demo of the most recent launches. Attend this session to learn more about how you can get up and running on AWS in the easiest way possible.
This session dives into the security model behind AWS Lambda functions, looking at how you can isolate workloads, build multiple layers of protection, and leverage fine-grained authorization. You learn about the implementation, the open-source Firecracker technology that provides one of the most important layers, and what this means for how you build on Lambda. You also see how AWS Lambda securely runs your functions packaged and deployed as container images. Finally, you learn about SaaS, customization, and safe patterns for running your own customers’ code in your Lambda functions.
Unauthorized users and financially motivated third parties also have access to advanced cloud capabilities. This causes concerns and creates challenges for customers responsible for the security of their cloud assets. Join us as Roy Feintuch, chief technologist of cloud products, and Maya Horowitz, director of threat intelligence and research, face off in an epic battle of defense against unauthorized cloud-native attacks. In this session, Roy uses security analytics, threat hunting, and cloud intelligence solutions to dissect and analyze some sneaky cloud breaches so you can strengthen your cloud defense. This presentation is brought to you by Check Point Software, an AWS Partner.
AWS provides services and features that your organization can leverage to improve the security of a serverless application. However, as organizations grow and developers deploy more serverless applications, how do you know if all of the applications are in compliance with your organization’s security policies? This session walks you through serverless security, and you learn about protections and guardrails that you can build to avoid misconfigurations and catch potential security risks.
The Amazon Cash application service matches incoming customer payments with accounts and open invoices, while an email ingestion service (EIS) processes more than 1 million semi-structured and unstructured remittance emails monthly. In this session, learn how this EIS classifies the emails, extracts invoice data from the emails, and then identifies the right invoices to close on Amazon financial platforms. Dive deep on how these services automated 89.5% of cash applications using AWS AI & ML services. Hear about how these services will eliminate the manual effort of 1000 cash application analysts in the next 10 years.
Dive into the details of using Amazon Kinesis Data Streams and Amazon DynamoDB Streams as event sources for AWS Lambda. This session walks you through how AWS Lambda scales along with these two event sources. It also covers best practices and challenges, including how to tune streaming sources for optimum performance and how to effectively monitor them.
Build real-time applications using Apache Flink with Apache Kafka and Amazon Kinesis Data Streams. Apache Flink is a framework and engine for building streaming applications for use cases such as real-time analytics and complex event processing. This session covers best practices for building low-latency applications with Apache Flink when reading data from either Amazon MSK or Amazon Kinesis Data Streams. It also covers best practices for running low-latency Apache Flink applications using Amazon Kinesis Data Analytics and discusses AWS’s open-source contributions to this use case.
Learn how you can accelerate application modernization and benefit from the open-source Apache Kafka ecosystem by connecting your legacy, on-premises systems to the cloud. In this session, hear real customer stories about timely insights gained from event-driven applications built on an event streaming platform from Confluent Cloud running on AWS, which stores and processes historical data and real-time data streams. Confluent makes Apache Kafka enterprise-ready using infinite Kafka storage with Amazon S3 and multiple private networking options including AWS PrivateLink, along with self-managed encryption keys for storage volume encryption with AWS Key Management Service (AWS KMS).
Data-driven business intelligence (BI) decision making is more important than ever in this age of remote work. An increasing number of organizations are investing in data transformation initiatives, including migrating data to the cloud, modernizing data warehouses, and building data lakes. But what about the last mile—connecting the dots for end users with dashboards and visualizations? Come to this session to learn how Amazon QuickSight allows you to connect to your AWS data and quickly build rich and interactive dashboards with self-serve and advanced analytics capabilities that can scale from tens to hundreds of thousands of users, without managing any infrastructure and only paying for what you use.
Is there an Updated SAA-C03 Practice Exam?
Yes, as of August 2022. This SAA-C03 sample exam PDF can give you a hint of what the real SAA-C03 exam will look like in your upcoming test. The sample questions also include explanations and reference links that you can study.
In this AWS tutorial, we are going to discuss how we can make the best use of AWS services to build a highly scalable and fault-tolerant configuration of EC2 instances. The use of load balancers and Auto Scaling groups supports a number of AWS best practices, including performance efficiency, reliability, and high availability.
Before we dive into this hands-on tutorial on how exactly we can build this solution, let’s have a brief recap on what an Auto Scaling group is, and what a Load balancer is.
Auto Scaling group (ASG)
An Auto Scaling group (ASG) is a logical grouping of EC2 instances that can scale out and scale in based on pre-configured settings. By setting scaling policies for your ASG, you control how many EC2 instances are launched and terminated in response to your application's load, using manual, dynamic, scheduled, or predictive scaling.
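As an illustration of dynamic scaling (the walkthrough below uses a fixed desired capacity instead), here is a hedged boto3 sketch of a target-tracking policy; the group name matches the one created later in this tutorial, and the 50% CPU target is arbitrary:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target-tracking policy: keep average CPU around 50% by automatically
# adding or removing instances in the group.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="ExampleASG",
    PolicyName="keep-cpu-at-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```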
Elastic Load Balancer (ELB)
An Elastic Load Balancer (ELB) is a name describing a number of services within AWS designed to distribute traffic across multiple EC2 instances in order to provide enhanced scalability, availability, security and more. The particular type of Load Balancer we will be using today is an Application Load Balancer (ALB). The ALB is a Layer 7 Load Balancer designed to distribute HTTP/HTTPS traffic across multiple nodes – with added features such as TLS termination, Sticky Sessions and Complex routing configurations.
Getting Started
First of all, we open our AWS management console and head to the EC2 management console.
We scroll down on the left-hand side and select ‘Launch Templates’. A Launch Template is a configuration template which defines the settings for EC2 instances launched by the ASG.
Under Launch Templates, we will select “Create launch template”.
We specify the name ‘MyTestTemplate’ and use the same text in the description.
Under the ‘Auto Scaling guidance’ box, tick the box which says ‘Provide guidance to help me set up a template that I can use with EC2 Auto Scaling’ and scroll down to launch template contents.
When it comes to choosing our AMI (Amazon Machine Image) we can choose the Amazon Linux 2 under ‘Quick Start’.
The Amazon Linux 2 AMI is free tier eligible, and easy to use for our demonstration purposes.
Next, we select the ‘t2.micro’ under instance types, as this is also free tier eligible.
Under Network Settings, we create a new Security Group called ExampleSG in our default VPC, allowing HTTP access to everyone. It should look like this.
We can then add our IAM Role we created earlier. Under Advanced Details, select your IAM instance profile.
Then we need to include some user data which will load a simple web server and web page onto our Launch Template when the EC2 instance launches.
Under ‘Advanced details’, paste your user data script into the ‘User data’ box; a minimal example follows.
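The original post's script isn't reproduced here, so below is a minimal sketch consistent with the ‘Hello World’ page described later (Apache on Amazon Linux 2, echoing the host name). It is shown embedded in an equivalent boto3 create_launch_template call; the AMI and security group IDs are placeholders, and you can paste just the shell script into the console's User data box instead:

```python
import base64
import boto3

ec2 = boto3.client("ec2")

# Shell script equivalent to the user data pasted into the console box:
# install Apache and serve a "Hello World" page that includes the host name.
USER_DATA = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "<h1>Hello World from $(hostname -f)</h1>" > /var/www/html/index.html
"""

ec2.create_launch_template(
    LaunchTemplateName="MyTestTemplate",
    VersionDescription="MyTestTemplate",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",            # Amazon Linux 2 AMI for your Region
        "InstanceType": "t2.micro",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # the ExampleSG created above
        "UserData": base64.b64encode(USER_DATA.encode()).decode(),
    },
)
```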
Then simply click ‘Create Launch Template’ and we are done!
We are now able to build an Auto Scaling Group from our launch template.
On the same console page, select ‘Auto Scaling Groups’, and Create Auto Scaling Group.
We will call our Auto Scaling Group ‘ExampleASG’, and select the Launch Template we just created, then select next.
On the next page, keep the default VPC and select any default AZ and Subnet from the list and click next.
Under ‘Configure Advanced Options’ select ‘Attach to a new load balancer’.
You will notice the settings below will change and we will now build our load balancer directly on the same page.
Select the Application Load Balancer, and leave the default Load Balancer name.
Choose an ‘Internet Facing’ Load balancer, select another AZ and leave all of the other defaults the same. It should look something like the following.
Under ‘Listeners and routing’, select ‘Create a target group’ and select the target group which was just created. It will be called something like ‘ExampleASG-1’. Click next.
Now we get to Group Size. This is where we specify the desired, minimum and maximum capacity of our Auto Scaling Group.
Set the capacities as follows:
Click ‘skip to review’, and click ‘Create Auto Scaling Group’.
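For reference, an equivalent boto3 call to the console steps above; the subnet IDs and target group ARN are placeholders, the desired capacity of 2 matches the walkthrough, and the minimum/maximum values are assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Create the Auto Scaling group from the launch template and attach it to the
# ALB target group created in the console walkthrough.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="ExampleASG",
    LaunchTemplate={"LaunchTemplateName": "MyTestTemplate", "Version": "$Latest"},
    MinSize=1,            # assumption
    MaxSize=4,            # assumption
    DesiredCapacity=2,    # matches the walkthrough
    VPCZoneIdentifier="subnet-0aaa1111bbbb2222c,subnet-0ddd3333eeee4444f",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                     "targetgroup/ExampleASG-1/0123456789abcdef"],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=120,
)
```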
You will now see the Auto Scaling Group building, and the capacity is updating.
After a short while, navigate to the EC2 Dashboard, and you will see that two EC2 instances have been launched!
To make sure our Auto Scaling group is working as it should – select any instance, and terminate the instance. After one instance has been terminated you should see another instance pending and go into a running state – bringing capacity back to 2 instances (as per our desired capacity).
If we also head over to the Load Balancer console, you will find our Application Load Balancer has been created.
If you select the load balancer, and scroll down, you will find the DNS name of your ALB – it will look something like ‘ExampleASG-1-1435567571.us-east-1.elb.amazonaws.com’.
If you enter the DNS name into your browser's address bar, you should see the demo web page load:
The message will display a ‘Hello World’ message including the IP address of the EC2 instance which is serving up the webpage behind the load balancer.
If you refresh the page a few times, you should see that the IP address listed will change. This is because the load balancer is routing you to the other EC2 instance, validating that our simple webpage is being served from behind our ALB.
The final step is to delete all of the resources you configured. Start by deleting the Auto Scaling group, then delete your load balancer as well; this will ensure you don't incur any ongoing charges.
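If you prefer to clean up programmatically, here is a hedged boto3 sketch (the ARNs and names are placeholders for the resources created above):

```python
import boto3

autoscaling = boto3.client("autoscaling")
elbv2 = boto3.client("elbv2")
ec2 = boto3.client("ec2")

# Delete the ASG; ForceDelete terminates the instances it launched.
autoscaling.delete_auto_scaling_group(
    AutoScalingGroupName="ExampleASG",
    ForceDelete=True,
)

# Delete the load balancer and its target group.
elbv2.delete_load_balancer(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                    "loadbalancer/app/ExampleALB/0123456789abcdef"
)
elbv2.delete_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                   "targetgroup/ExampleASG-1/0123456789abcdef"
)

# Finally, remove the launch template.
ec2.delete_launch_template(LaunchTemplateName="MyTestTemplate")
```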
Architectural Diagram
Below, you’ll find the architectural diagram of what we have built.
Learn how to Master AWS Cloud
Ultimate Training Packages – Our popular training bundles (on-demand video course + practice exams + ebook) will maximize your chances of passing your AWS certification the first time.
Membership – For unlimited access to our cloud training catalog, enroll in our monthly or annual membership program.
Challenge Labs – Build hands-on cloud skills in a secure sandbox environment. Learn, build, test and fail forward without risking unexpected cloud bills.
This post originally appeared on: https://digitalcloud.training/load-balancing-ec2-instances-in-an-autoscaling-group/
There are significant protections provided to you natively when you are building your networking stack on AWS. This wide range of services and features can become difficult to manage, and becoming knowledgeable about what tools to use in which area can be challenging.
The two main security components which can be confused within VPC networking are the Security Group and the Network Access Control List (NACL). When you compare a Security Group vs NACL, you will find that although they are fairly similar in general, there is a distinct difference in the use cases for each of these security features.
In this blog post, we are going to explain the main differences between Security Group vs NACL and talk about the use cases and some best practices.
First of all, what do they have in common?
The main thing that a security group and a NACL have in common is that both are firewalls. So, what is a firewall?
Firewalls in computing monitor and control incoming and outgoing network traffic based on predetermined security rules. Firewalls provide a barrier between trusted and untrusted networks. The network layer which we are talking about in this instance is an Amazon Virtual Private Cloud – aka a VPC.
In the AWS cloud, a VPC is a logically isolated virtual network in which you launch your resources, providing a degree of isolation between different organizations and between different teams within an account.
First, let’s talk about the particulars of a Security Group.
Security Group Key Features
Where do they live?
Security groups are tied to an instance (more precisely, to its network interfaces). This can be an EC2 instance, an ECS task, or an RDS database instance. A security group acts as a virtual firewall that controls inbound and outbound traffic for the resources it is associated with. You have to purposely assign a security group to an instance if you don't want it to use the default security group.
A newly created security group allows all outbound traffic by default and no inbound traffic. (The VPC's default security group additionally allows inbound traffic from other resources associated with that same default security group.)
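A minimal boto3 sketch of creating a security group and adding an inbound HTTP rule; outbound traffic is already covered by the default egress rule (the VPC ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Create a security group in the VPC; it starts with no inbound rules.
sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow inbound HTTP",
    VpcId="vpc-0123456789abcdef0",
)

# Explicitly allow inbound HTTP from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTP from anywhere"}],
    }],
)
```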