What are the corresponding Azure and Google Cloud services for each of the AWS services?
What are the unique distinctions and similarities between AWS, Azure, and Google Cloud services? For each AWS service, what is the equivalent Azure and Google Cloud service? For each Azure service, what is the corresponding Google service? Below is a side-by-side comparison of AWS, Azure, and Google Cloud services.
Category: Marketplace Description: Easy-to-deploy and automatically configured third-party applications, including single virtual machine or multiple virtual machine solutions. References: [AWS]:AWS Marketplace [Azure]:Azure Marketplace [Google]:Google Cloud Marketplace Tags: #AWSMarketplace, #AzureMarketplace, #GoogleMarketplace Differences: All three are digital catalogs with thousands of software listings from independent software vendors, making it easy to find, test, buy, and deploy software that runs on the respective cloud platform.
Tags: #AmazonLex, #CognitiveServices, #AzureSpeech, #Api.ai, #DialogFlow, #Tensorflow Differences: Api.ai (now Dialogflow) provides a platform that is easy to learn yet comprehensive enough to develop conversational actions. It is a good example of a simple approach to solving complex human-to-machine communication problems using natural language processing backed by machine learning. Api.ai supports context-based conversations, which reduces the overhead of handling user context in session parameters; in Lex, by contrast, context has to be managed in the session. Api.ai can also be used for both voice- and text-based conversations (assistant actions can be easily created with it).
Category: Serverless Description: Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers. References: [AWS]:AWS Lambda [Azure]:Azure Functions [Google]:Google Cloud Functions Tags: #AWSLambda, #AzureFunctions, #GoogleCloudFunctions Differences: AWS Lambda, Azure Functions, and Google Cloud Functions all offer dynamic, configurable triggers that you can use to invoke your functions on their platforms, and all three support Node.js, Python, and C#. The beauty of serverless development is that, with minor changes, the code you write for one service should be portable to another with little effort: simply modify some interfaces and handle any input/output transforms, and an AWS Lambda Node.js function is nearly indistinguishable from an Azure Node.js Function. AWS Lambda provides further support for Java, while Azure Functions provides support for F# and PHP. AWS Lambda functions are built from an Amazon Machine Image (AMI) and run on Linux, while Azure Functions can run in a Windows environment. Lambda's lightweight execution environment reduces the scope of containerization, letting you spin up and tear down individual pieces of functionality in your application at will.
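The portability claim above can be sketched locally: a minimal Lambda-style Python handler that is a pure function of its event, invoked directly the way a unit test would exercise it before deployment. The event shape and greeting here are illustrative, not a fixed AWS contract.

```python
import json

def handler(event, context=None):
    """Minimal Lambda-style handler: echoes a greeting from the event payload.

    Because it is just a function of its input, the same shape ports to Azure
    Functions or Google Cloud Functions by swapping the trigger wrapper.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation, no cloud account required.
result = handler({"name": "cloud"})
print(result["body"])
```

Keeping handlers free of provider-specific SDK calls is what makes this kind of cross-cloud portability realistic in practice.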
Category: Caching Description: An in-memory, distributed caching service that provides a high-performance store typically used to offload non-transactional work from a database. References: [AWS]:AWS ElastiCache (works as an in-memory data store and cache to support the most demanding applications requiring sub-millisecond response times.) [Azure]:Azure Cache for Redis (based on the popular software Redis. It is typically used as a cache to improve the performance and scalability of systems that rely heavily on backend data-stores.) [Google]:Memcache (in-memory key-value store, originally intended for caching) Tags: #Redis, #Memcached Differences: They all support horizontal scaling via sharding, and they all improve the performance of web applications by allowing you to retrieve information from fast, in-memory caches instead of relying on slower disk-based databases. ElastiCache supports both Memcached and Redis. Redis offers persistence to disk; Memcache does not. This can be very helpful if you cache lots of data, since you avoid the slowness of a fully cold cache. Redis also offers several extra data structures that Memcache doesn't (Lists, Sets, Sorted Sets, etc.), while Memcache only has key/value pairs. Memcache is multi-threaded; Redis is single-threaded and event-driven. Redis is very fast, but it will never be multi-threaded. At high scale, you can squeeze more connections and transactions out of Memcache, and Memcache tends to be more memory efficient, which can make a big difference at the magnitude of tens or hundreds of millions of keys. Memcached Cloud additionally provides various data persistence options as well as remote backups for disaster recovery purposes.
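The typical way applications use these services is the cache-aside pattern: try the cache, fall back to the database on a miss, then populate the cache. A tiny in-process sketch (a stand-in for a real Redis/Memcached client; the TTL and data are illustrative):

```python
import time

class TTLCache:
    """Tiny in-process stand-in for an ElastiCache/Memcache client."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() > expires:
            del self.store[key]  # lazily expire stale entries
            return None
        return value

    def set(self, key, value):
        self.store[key] = (value, time.monotonic() + self.ttl)

def get_user(cache, db, user_id):
    """Cache-aside: cache hit returns immediately; a miss reads the slower
    'database' and populates the cache for subsequent reads."""
    cached = cache.get(user_id)
    if cached is not None:
        return cached
    value = db[user_id]        # slow path: disk-based database
    cache.set(user_id, value)  # warm the cache
    return value

cache = TTLCache(ttl_seconds=60)
db = {"u1": {"name": "Ada"}}
print(get_user(cache, db, "u1"))  # first call: miss, reads db
print(get_user(cache, db, "u1"))  # second call: served from cache
```

The "cold cache" slowness mentioned above is exactly what happens when every `get` misses and the slow path runs for each request.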
Category: Enterprise application services Description: Fully integrated cloud service providing communications, email, and document management in the cloud, available on a wide variety of devices. References: [AWS]:Amazon WorkMail, Amazon WorkDocs, Amazon Kendra (Sync and Index) [Azure]:Office 365 [Google]:G Suite Tags: #AmazonWorkDocs, #Office365, #GoogleGSuite Differences: G Suite document-processing applications like Google Docs are far behind Office 365's popular Word and Excel software, but the G Suite user interface is intuitive, simple, and easy to navigate, while Office 365 can feel clunky.
Category: Management Description: A unified management console that simplifies building, deploying, and operating your cloud resources. References: [AWS]:AWS Management Console, Trusted Advisor, AWS Usage and Billing Report, AWS Application Discovery Service, Amazon EC2 Systems Manager, AWS Personal Health Dashboard, AWS Compute Optimizer (identify optimal AWS Compute resources) [Azure]:Azure portal, Azure Advisor, Azure Billing API, Azure Migrate, Azure Monitor, Azure Resource Health [Google]:Google Cloud Console, Cost Management, Security Command Center, Stackdriver Tags: #AWSConsole, #AzurePortal, #GoogleCloudConsole, #TrustedAdvisor, #AzureMonitor, #SecurityCommandCenter Differences: AWS Console categorizes its Infrastructure as a Service offerings into Compute, Storage and Content Delivery Network (CDN), Database, and Networking to help businesses and individuals grow. Azure excels in the hybrid cloud space, allowing companies to integrate on-premises servers with cloud offerings. Google has a strong offering in containers, since Google developed the Kubernetes standard that AWS and Azure now offer. GCP specializes in high-compute offerings like big data, analytics, and machine learning. It also offers considerable scale and load balancing: Google knows data centers and fast response times.
Enables both speech-to-text and text-to-speech capabilities. The Speech Services are the unification of speech-to-text, text-to-speech, and speech translation into a single Azure subscription. It's easy to speech-enable your applications, tools, and devices with the Speech SDK, Speech Devices SDK, or REST APIs. Amazon Polly is a Text-to-Speech (TTS) service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice. With dozens of lifelike voices across a variety of languages, you can select the ideal voice and build speech-enabled applications that work in many different countries. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.
Computer Vision: Extract information from images to categorize and process visual data. Amazon Rekognition is a simple and easy to use API that can quickly analyze any image or video file stored in Amazon S3. Amazon Rekognition is always learning from new data, and we are continually adding new labels and facial recognition features to the service.
Face: Detect, identify, and analyze faces in photos.
The Virtual Assistant Template brings together a number of best practices we’ve identified through the building of conversational experiences and automates integration of components that we’ve found to be highly beneficial to Bot Framework developers.
Redeploy and extend your VMware-based enterprise workloads to Azure with Azure VMware Solution by CloudSimple. Keep using the VMware tools you already know to manage workloads on Azure without disrupting network, security, or data protection policies.
Fully managed service that enables developers to deploy microservices applications without managing virtual machines, storage, or networking. AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. App Mesh standardizes how your services communicate, giving you end-to-end visibility and ensuring high-availability for your applications.
Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers. AWS Lambda is an event-driven, serverless computing platform provided by Amazon as a part of the Amazon Web Services. It is a computing service that runs code in response to events and automatically manages the computing resources required by that code
Managed relational database service where resiliency, scale, and maintenance are primarily handled by the platform. Amazon Relational Database Service is a distributed relational database service by Amazon Web Services. It is a web service running “in the cloud” designed to simplify the setup, operation, and scaling of a relational database for use in applications. Administration processes like patching the database software, backing up databases and enabling point-in-time recovery are managed automatically. Scaling storage and compute resources can be performed by a single API call as AWS does not offer an ssh connection to RDS instances.
An in-memory–based, distributed caching service that provides a high-performance store typically used to offload non transactional work from a database. Amazon ElastiCache is a fully managed in-memory data store and cache service by Amazon Web Services. The service improves the performance of web applications by retrieving information from managed in-memory caches, instead of relying entirely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines: Memcached and Redis.
Migration of database schema and data from one database format to a specific database technology in the cloud. AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.
Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. AWS X-Ray is an application performance management service that enables a developer to analyze and debug applications in the Amazon Web Services (AWS) public cloud. A developer can use AWS X-Ray to visualize how a distributed application is performing during development or production, and across multiple AWS regions and accounts.
A cloud service for collaborating on code development. AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. AWS CodeCommit is a source code storage and version-control service for Amazon Web Services’ public cloud customers. CodeCommit was designed to help IT teams collaborate on software development, including continuous integration and application delivery.
Collection of tools for building, debugging, deploying, diagnosing, and managing multiplatform scalable apps and services. The AWS Developer Tools are designed to help you build software like Amazon. They facilitate practices such as continuous delivery and infrastructure as code for serverless, containers, and Amazon EC2.
Built on top of the native REST API across all cloud services, various programming language-specific wrappers provide easier ways to create solutions. The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
Configures and operates applications of all shapes and sizes, and provides templates to create and manage a collection of resources. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers.
Provides a way for users to automate the manual, long-running, error-prone, and frequently repeated IT tasks. AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.
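CloudFormation's "simple text file" model can be sketched programmatically: a template is just declarative data. Below, a minimal template is built as a Python dict and serialized with the stdlib; real templates are usually authored directly in YAML or JSON, and the logical ID `LogsBucket` is illustrative.

```python
import json

# Minimal CloudFormation template: one versioned S3 bucket, declared rather
# than scripted. CloudFormation diffs this desired state against reality.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "One S3 bucket, declared rather than scripted.",
    "Resources": {
        "LogsBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"VersioningConfiguration": {"Status": "Enabled"}},
        }
    },
}
print(json.dumps(template, indent=2))
```

The point of the declarative form is repeatability: the same file provisions identical stacks across regions and accounts, which is what the paragraph above means by "automated and secure".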
Provides an isolated, private environment in the cloud. Users have control over their virtual networking environment, including selection of their own IP address range, creation of subnets, and configuration of route tables and network gateways.
Azure Digital Twins is an IoT service that helps you create comprehensive models of physical environments. Create spatial intelligence graphs to model the relationships and interactions between people, places, and devices. Query data from a physical space rather than disparate sensors.
Allows users to securely control access to services and resources while offering data security and protection. Create and manage users and groups, and use permissions to allow and deny access to resources.
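Those allow/deny permissions are expressed as JSON policy documents. A minimal least-privilege example, built in Python for illustration (the bucket name `example-logs` and statement ID are hypothetical):

```python
import json

# Least-privilege identity policy: read-only access to a single bucket.
# Anything not explicitly allowed is implicitly denied.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyLogsBucket",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-logs",    # bucket-level actions
                "arn:aws:s3:::example-logs/*",  # object-level actions
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Scoping `Action` and `Resource` this tightly, rather than granting `s3:*` on `*`, is the principle of least privilege in practice.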
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements.
Azure management groups provide a level of scope above subscriptions. You organize subscriptions into containers called “management groups” and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Management groups give you enterprise-grade management at a large scale, no matter what type of subscriptions you have.
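The inheritance described above is a simple tree walk: every subscription accumulates the conditions of all its ancestors. A sketch with hypothetical group and policy names:

```python
# Toy management-group hierarchy: each node lists children and the policies
# assigned directly to it. Names are illustrative.
hierarchy = {
    "root": {"children": ["prod-group"], "policies": ["require-tags"]},
    "prod-group": {"children": ["sub-a", "sub-b"], "policies": ["deny-public-ip"]},
    "sub-a": {"children": [], "policies": []},
    "sub-b": {"children": [], "policies": []},
}

def effective_policies(node, parent_policies=()):
    """Walk the tree, carrying inherited policies down to every descendant."""
    policies = list(parent_policies) + hierarchy[node]["policies"]
    result = {node: policies}
    for child in hierarchy[node]["children"]:
        result.update(effective_policies(child, policies))
    return result

print(effective_policies("root")["sub-a"])
# sub-a assigns nothing itself, yet inherits from both ancestors
```

This is why assigning a condition once at a high-level group scales to any number of subscriptions beneath it.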
Easily join your distributed microservice architectures into a single global application using HTTP load balancing and path-based routing rules. Automate turning up new regions and scale-out with API-driven global actions, and independent fault-tolerance to your back end microservices in Azure—or anywhere.
Azure Stack is a hybrid cloud platform that enables you to run Azure services in your company’s or service provider’s datacenter. As a developer, you can build apps on Azure Stack. You can then deploy them to either Azure Stack or Azure, or you can build truly hybrid apps that take advantage of connectivity between an Azure Stack cloud and Azure.
Basically, it all comes down to what your organizational needs are and whether there's a particular area that's especially important to your business (e.g., serverless, or integration with Microsoft applications).
The main factors it comes down to are compute options, pricing, and purchasing options.
Here’s a brief comparison of the compute option features across cloud providers:
Here’s an example of a few instances’ costs (all are Linux OS):
Each provider offers a variety of options to lower costs from the listed On-Demand prices. These can fall under reservations, spot and preemptible instances and contracts.
Both AWS and Azure offer a way for customers to purchase compute capacity in advance in exchange for a discount: AWS Reserved Instances and Azure Reserved Virtual Machine Instances. There are a few interesting variations between the instances across the cloud providers which could affect which is more appealing to a business.
Another discounting mechanism is the idea of spot instances in AWS and low-priority VMs in Azure. These options allow users to purchase unused capacity for a steep discount.
With AWS and Azure, enterprise contracts are available. These are typically aimed at enterprise customers, and encourage large companies to commit to specific levels of usage and spend in exchange for an across-the-board discount – for example, AWS EDPs and Azure Enterprise Agreements.
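The effect of these purchasing options is straightforward arithmetic. A sketch with hypothetical rates (the 40% reserved and 70% spot discounts below are illustrative, not published prices):

```python
# Compare monthly spend for one instance under three purchasing models.
on_demand_hourly = 0.10   # hypothetical on-demand rate, $/hour
hours_per_month = 730     # average hours in a month

on_demand = on_demand_hourly * hours_per_month
reserved = on_demand * (1 - 0.40)  # e.g. ~40% off for a 1-year commitment
spot = on_demand * (1 - 0.70)      # e.g. ~70% off, but interruptible

print(round(on_demand, 2), round(reserved, 2), round(spot, 2))
```

The trade-off is flexibility: reservations lock in spend for a term, and spot/low-priority capacity can be reclaimed by the provider, so the steepest discount only suits interruption-tolerant workloads.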
You can read more about the differences between AWS and Azure, to help decide which your business should use, in this blog post.
What are the Top 100 AWS Solutions Architect Associate Certification Exam Questions and Answers Dump SAA-C03?
AWS Certified Solutions Architects are responsible for designing, deploying, and managing AWS cloud applications. The AWS Cloud Solutions Architect Associate exam validates an examinee’s ability to effectively demonstrate knowledge of how to design and deploy secure and robust applications on AWS technologies. The AWS Solutions Architect Associate training provides an overview of key AWS services, security, architecture, pricing, and support.
The AWS Certified Solutions Architect – Associate (SAA-C03) examination is a recommended stepping stone toward the AWS Certified Solutions Architect – Professional level. Successful completion of this examination can lead to a salary raise or promotion for those in cloud roles. Below are the top 100 AWS Solutions Architect Associate exam prep facts, summaries, questions, and answers.
With average increases in salary of over 25% for certified individuals, you’re going to be in a much better position to secure your dream job or promotion if you earn your AWS Certified Solutions Architect Associate certification. You’ll also develop strong hands-on skills by doing the guided hands-on lab exercises in our course which will set you up for successfully performing in a solutions architect role.
We recommend that you allocate at least 60 minutes of study time per day and you will then be able to complete the certification within 5 weeks (including taking the actual exam). Study times can vary based on your experience with AWS and how much time you have each day, with some students passing their exams much faster and others taking a little longer. Get our eBook here.
The AWS Solutions Architect Associate exam is an associate-level exam that requires a solid understanding of the AWS platform and a broad range of AWS services. The AWS Certified Solutions Architect Associate exam questions are scenario-based questions and can be challenging. Despite this, the AWS Solutions Architect Associate is often earned by beginners to cloud computing.
The AWS Certified Solutions Architect – Associate (SAA-C03) exam is intended for individuals who perform in a solutions architect role. The exam validates a candidate’s ability to use AWS technologies to design solutions based on the AWS Well-Architected Framework.
The SAA-C03 exam is a multiple-choice examination that is 65 questions in length. You can take the exam at a testing center or via an online proctored exam from your home or office. You have 130 minutes to complete the exam, and the passing mark is 720 points out of 1000 (72%). If English is not your first language, you can request an accommodation when booking your exam that will qualify you for an additional 30-minute exam extension.
Unscored content The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.
Target candidate description: The target candidate should have at least 1 year of hands-on experience designing cloud solutions that use AWS services.
All AWS certification exam results are reported as a score from 100 to 1000. Your score shows how you performed on the examination as a whole and whether or not you passed. The passing score for the AWS Certified Solutions Architect Associate is 720 (72%).
There are no prerequisites for taking AWS exams. You do not need any programming knowledge or experience working with AWS. Everything you need to know is included in our courses. We do recommend that you have a basic understanding of fundamental computing concepts such as compute, storage, networking, and databases.
AWS Certified Solutions Architects are IT professionals who design cloud solutions with AWS services to meet given technical requirements. An AWS Solutions Architect Associate is expected to design and implement distributed systems on AWS that are high-performing, scalable, secure and cost optimized.
Task Statement 1: Design secure access to AWS resources. Knowledge of: • Access controls and management across multiple accounts • AWS federated access and identity services (for example, AWS Identity and Access Management [IAM], AWS Single Sign-On [AWS SSO]) • AWS global infrastructure (for example, Availability Zones, AWS Regions) • AWS security best practices (for example, the principle of least privilege) • The AWS shared responsibility model
Skills in: • Applying AWS security best practices to IAM users and root users (for example, multi-factor authentication [MFA]) • Designing a flexible authorization model that includes IAM users, groups, roles, and policies • Designing a role-based access control strategy (for example, AWS Security Token Service [AWS STS], role switching, cross-account access) • Designing a security strategy for multiple AWS accounts (for example, AWS Control Tower, service control policies [SCPs]) • Determining the appropriate use of resource policies for AWS services • Determining when to federate a directory service with IAM roles
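The cross-account, role-based pattern in the skills above hinges on a role's trust policy: the document stating who may call `sts:AssumeRole` on it. A sketch built in Python (the account IDs are placeholders; the MFA condition shows one of the best practices listed):

```python
import json

# Trust policy for a role in account 222222222222 that principals in account
# 111111111111 may assume, but only with MFA present.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
            "Action": "sts:AssumeRole",
            # Require multi-factor authentication on the calling session.
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }
    ],
}
print(json.dumps(trust_policy, indent=2))
```

The assumed role then carries its own permissions policy; the trust policy only controls who may switch into it, which is what makes role switching auditable across accounts.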
Task Statement 2: Design secure workloads and applications.
Knowledge of: • Application configuration and credentials security • AWS service endpoints • Control ports, protocols, and network traffic on AWS • Secure application access • Security services with appropriate use cases (for example, Amazon Cognito, Amazon GuardDuty, Amazon Macie) • Threat vectors external to AWS (for example, DDoS, SQL injection)
Skills in: • Designing VPC architectures with security components (for example, security groups, route tables, network ACLs, NAT gateways) • Determining network segmentation strategies (for example, using public subnets and private subnets) • Integrating AWS services to secure applications (for example, AWS Shield, AWS WAF, AWS SSO, AWS Secrets Manager) • Securing external network connections to and from the AWS Cloud (for example, VPN, AWS Direct Connect)
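Security groups, the first component listed above, are deny-by-default allow-lists: traffic passes only if some rule matches its protocol, port, and source. A toy evaluator using only the stdlib (the rule set is illustrative):

```python
import ipaddress

# Two hypothetical inbound rules: HTTPS open to the world, SSH only from
# inside a 10.0.0.0/16 VPC CIDR.
rules = [
    {"protocol": "tcp", "from_port": 443, "to_port": 443, "cidr": "0.0.0.0/0"},
    {"protocol": "tcp", "from_port": 22, "to_port": 22, "cidr": "10.0.0.0/16"},
]

def is_allowed(rules, protocol, port, source_ip):
    """Security groups hold only Allow rules; no match means denied."""
    src = ipaddress.ip_address(source_ip)
    return any(
        r["protocol"] == protocol
        and r["from_port"] <= port <= r["to_port"]
        and src in ipaddress.ip_network(r["cidr"])
        for r in rules
    )

print(is_allowed(rules, "tcp", 443, "203.0.113.7"))  # HTTPS from the internet
print(is_allowed(rules, "tcp", 22, "203.0.113.7"))   # SSH from the internet
```

Network ACLs differ in that they are stateless and can contain explicit Deny rules; this sketch models only the security-group half of the pairing.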
Task Statement 3: Determine appropriate data security controls.
Knowledge of: • Data access and governance • Data recovery • Data retention and classification • Encryption and appropriate key management
Skills in: • Aligning AWS technologies to meet compliance requirements • Encrypting data at rest (for example, AWS Key Management Service [AWS KMS]) • Encrypting data in transit (for example, AWS Certificate Manager [ACM] using TLS) • Implementing access policies for encryption keys • Implementing data backups and replications • Implementing policies for data access, lifecycle, and protection • Rotating encryption keys and renewing certificates
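On the client side, "encrypting data in transit" mostly means refusing connections that fail certificate validation or use legacy protocol versions. Python's stdlib `ssl` module shows the enforcement posture (TLS 1.2 as a floor is a common compliance baseline, not an AWS mandate):

```python
import ssl

# A default context already enforces the essentials for transit encryption:
# certificate-chain verification and hostname matching.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # certificates are validated
print(context.check_hostname)                    # hostname must match the cert

# Refuse legacy protocol versions below TLS 1.2.
context.minimum_version = ssl.TLSVersion.TLSv1_2
```

On AWS, the server side of this handshake would typically present an ACM-issued certificate; the client-side checks above are what make that certificate meaningful.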
Domain 2: Design Resilient Architectures This exam domain is focused on designing resilient architectures on AWS and comprises 26% of the exam. Task statements include:
Task Statement 1: Design scalable and loosely coupled architectures. Knowledge of: • API creation and management (for example, Amazon API Gateway, REST API) • AWS managed services with appropriate use cases (for example, AWS Transfer Family, Amazon Simple Queue Service [Amazon SQS], Secrets Manager) • Caching strategies • Design principles for microservices (for example, stateless workloads compared with stateful workloads) • Event-driven architectures • Horizontal scaling and vertical scaling • How to appropriately use edge accelerators (for example, content delivery network [CDN]) • How to migrate applications into containers • Load balancing concepts (for example, Application Load Balancer) • Multi-tier architectures • Queuing and messaging concepts (for example, publish/subscribe) • Serverless technologies and patterns (for example, AWS Fargate, AWS Lambda) • Storage types with associated characteristics (for example, object, file, block) • The orchestration of containers (for example, Amazon Elastic Container Service [Amazon ECS],Amazon Elastic Kubernetes Service [Amazon EKS]) • When to use read replicas • Workflow orchestration (for example, AWS Step Functions)
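The queuing and publish/subscribe concepts in the list above reduce to a simple fan-out: each subscriber gets its own copy of every message, so producers and consumers never talk directly. An in-memory sketch of what SNS-plus-SQS (or EventBridge) provide as managed services:

```python
from collections import defaultdict

class Broker:
    """In-memory publish/subscribe sketch of the managed fan-out pattern."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of queues

    def subscribe(self, topic):
        queue = []
        self.subscribers[topic].append(queue)
        return queue

    def publish(self, topic, message):
        # Every subscriber receives an independent copy, so consumers can
        # fail, lag, or scale without affecting the producer or each other.
        for queue in self.subscribers[topic]:
            queue.append(message)

broker = Broker()
billing = broker.subscribe("orders")
shipping = broker.subscribe("orders")
broker.publish("orders", {"order_id": 42})
print(billing, shipping)
```

That independence is the loose coupling the task statement asks for: adding a third consumer of `orders` requires no change to the publisher.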
Skills in: • Designing event-driven, microservice, and/or multi-tier architectures based on requirements • Determining scaling strategies for components used in an architecture design • Determining the AWS services required to achieve loose coupling based on requirements • Determining when to use containers • Determining when to use serverless technologies and patterns • Recommending appropriate compute, storage, networking, and database technologies based on requirements • Using purpose-built AWS services for workloads
Task Statement 2: Design highly available and/or fault-tolerant architectures. Knowledge of: • AWS global infrastructure (for example, Availability Zones, AWS Regions, Amazon Route 53) • AWS managed services with appropriate use cases (for example, Amazon Comprehend, Amazon Polly) • Basic networking concepts (for example, route tables) • Disaster recovery (DR) strategies (for example, backup and restore, pilot light, warm standby, active-active failover, recovery point objective [RPO], recovery time objective [RTO]) • Distributed design patterns • Failover strategies • Immutable infrastructure • Load balancing concepts (for example, Application Load Balancer) • Proxy concepts (for example, Amazon RDS Proxy) • Service quotas and throttling (for example, how to configure the service quotas for a workload in a standby environment) • Storage options and characteristics (for example, durability, replication) • Workload visibility (for example, AWS X-Ray)
Skills in: • Determining automation strategies to ensure infrastructure integrity • Determining the AWS services required to provide a highly available and/or fault-tolerant architecture across AWS Regions or Availability Zones • Identifying metrics based on business requirements to deliver a highly available solution • Implementing designs to mitigate single points of failure • Implementing strategies to ensure the durability and availability of data (for example, backups) • Selecting an appropriate DR strategy to meet business requirements • Using AWS services that improve the reliability of legacy applications and applications not built for the cloud (for example, when application changes are not possible) • Using purpose-built AWS services for workloads
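The failover strategies above boil down to one decision loop: route traffic to the highest-priority endpoint that passes its health check. A sketch of the behavior a Route 53 failover record set automates (endpoint names are illustrative):

```python
# Pick the first healthy endpoint in priority order; exhausting the list is
# the signal to escalate to a disaster-recovery runbook.
def pick_endpoint(endpoints, is_healthy):
    for name in endpoints:
        if is_healthy(name):
            return name
    raise RuntimeError("no healthy endpoint: trigger DR runbook")

# Simulated health-check results: the primary is failing its checks.
health = {"primary.example.com": False, "standby.example.com": True}
print(pick_endpoint(["primary.example.com", "standby.example.com"], health.get))
```

The RPO/RTO terms in the knowledge list describe how much data loss and downtime this switchover is allowed to incur, which in turn dictates how warm the standby must be kept.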
Domain 3: Design High-Performing Architectures This exam domain is focused on designing high-performing architectures on AWS and comprises 24% of the exam. Task statements include:
Knowledge of: • Hybrid storage solutions to meet business requirements • Storage services with appropriate use cases (for example, Amazon S3, Amazon Elastic File System [Amazon EFS], Amazon Elastic Block Store [Amazon EBS]) • Storage types with associated characteristics (for example, object, file, block)
Skills in: • Determining storage services and configurations that meet performance demands • Determining storage services that can scale to accommodate future needs
Task Statement 2: Design high-performing and elastic compute solutions. Knowledge of: • AWS compute services with appropriate use cases (for example, AWS Batch, Amazon EMR, Fargate) • Distributed computing concepts supported by AWS global infrastructure and edge services • Queuing and messaging concepts (for example, publish/subscribe) • Scalability capabilities with appropriate use cases (for example, Amazon EC2 Auto Scaling, AWS Auto Scaling) • Serverless technologies and patterns (for example, Lambda, Fargate) • The orchestration of containers (for example, Amazon ECS, Amazon EKS)
Skills in: • Decoupling workloads so that components can scale independently • Identifying metrics and conditions to perform scaling actions • Selecting the appropriate compute options and features (for example, EC2 instance types) to meet business requirements • Selecting the appropriate resource type and size (for example, the amount of Lambda memory) to meet business requirements
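The "identify metrics and conditions to perform scaling actions" skill can be made concrete with the proportional rule behind target-tracking scaling: desired capacity grows with the ratio of observed metric to target. A sketch (the formula mirrors the target-tracking idea; the numbers are illustrative):

```python
import math

def desired_capacity(current_capacity, metric_value, target_value):
    """Scale capacity in proportion to how far the metric is from target,
    rounding up and never dropping below one instance."""
    return max(1, math.ceil(current_capacity * metric_value / target_value))

# 4 instances at 80% average CPU against a 50% target: scale out.
print(desired_capacity(4, metric_value=80, target_value=50))
# 4 instances at 20% average CPU against the same target: scale in.
print(desired_capacity(4, metric_value=20, target_value=50))
```

Real policies add cooldowns and warm-up periods around this calculation so the fleet does not thrash on noisy metrics.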
Task Statement 3: Determine high-performing database solutions. Knowledge of: • AWS global infrastructure (for example, Availability Zones, AWS Regions) • Caching strategies and services (for example, Amazon ElastiCache) • Data access patterns (for example, read-intensive compared with write-intensive) • Database capacity planning (for example, capacity units, instance types, Provisioned IOPS) • Database connections and proxies • Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations) • Database replication (for example, read replicas) • Database types and services (for example, serverless, relational compared with non-relational, in-memory)
Skills in: • Configuring read replicas to meet business requirements • Designing database architectures • Determining an appropriate database engine (for example, MySQL compared with PostgreSQL) • Determining an appropriate database type (for example, Amazon Aurora, Amazon DynamoDB) • Integrating caching to meet business requirements
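Configuring read replicas only pays off if the application splits its traffic: writes to the primary, reads spread across replicas. A minimal router sketch (endpoint names are illustrative, and real drivers do this with more robust SQL inspection):

```python
import itertools

class ConnectionRouter:
    """Read/write splitting: the pattern applications use to exploit
    RDS or Aurora read replicas."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replica_cycle = itertools.cycle(replicas)  # round-robin reads

    def route(self, sql):
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._replica_cycle)  # read traffic fans out
        return self.primary                   # writes must hit the primary

router = ConnectionRouter("primary-db", ["replica-1", "replica-2"])
print(router.route("SELECT * FROM users"))
print(router.route("SELECT * FROM orders"))
print(router.route("INSERT INTO users VALUES (1)"))
```

One caveat worth designing for: replication is asynchronous, so a read routed to a replica immediately after a write may see slightly stale data.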
Task Statement 4: Determine high-performing and/or scalable network architectures. Knowledge of: • Edge networking services with appropriate use cases (for example, Amazon CloudFront, AWS Global Accelerator) • How to design network architecture (for example, subnet tiers, routing, IP addressing) • Load balancing concepts (for example, Application Load Balancer) • Network connection options (for example, AWS VPN, Direct Connect, AWS PrivateLink)
Skills in: • Creating a network topology for various architectures (for example, global, hybrid, multi-tier) • Determining network configurations that can scale to accommodate future needs • Determining the appropriate placement of resources to meet business requirements • Selecting the appropriate load balancing strategy
Task Statement 5: Determine high-performing data ingestion and transformation solutions. Knowledge of: • Data analytics and visualization services with appropriate use cases (for example, Amazon Athena, AWS Lake Formation, Amazon QuickSight) • Data ingestion patterns (for example, frequency) • Data transfer services with appropriate use cases (for example, AWS DataSync, AWS Storage Gateway) • Data transformation services with appropriate use cases (for example, AWS Glue) • Secure access to ingestion access points • Sizes and speeds needed to meet business requirements • Streaming data services with appropriate use cases (for example, Amazon Kinesis)
Skills in: • Building and securing data lakes • Designing data streaming architectures • Designing data transfer solutions • Implementing visualization strategies • Selecting appropriate compute options for data processing (for example, Amazon EMR) • Selecting appropriate configurations for ingestion • Transforming data between formats (for example, .csv to .parquet)
Domain 4: Design Cost-Optimized Architectures This exam domain is focused on optimizing solutions for cost-effectiveness on AWS and comprises 20% of the exam. Task statements include:
Task Statement 1: Design cost-optimized storage solutions. Knowledge of: • Access options (for example, an S3 bucket with Requester Pays object storage) • AWS cost management service features (for example, cost allocation tags, multi-account billing) • AWS cost management tools with appropriate use cases (for example, AWS Cost Explorer, AWS Budgets, AWS Cost and Usage Report) • AWS storage services with appropriate use cases (for example, Amazon FSx, Amazon EFS, Amazon S3, Amazon EBS) • Backup strategies • Block storage options (for example, hard disk drive [HDD] volume types, solid state drive [SSD] volume types) • Data lifecycles • Hybrid storage options (for example, DataSync, Transfer Family, Storage Gateway) • Storage access patterns • Storage tiering (for example, cold tiering for object storage) • Storage types with associated characteristics (for example, object, file, block)
Skills in: • Designing appropriate storage strategies (for example, batch uploads to Amazon S3 compared with individual uploads) • Determining the correct storage size for a workload • Determining the lowest cost method of transferring data for a workload to AWS storage • Determining when storage auto scaling is required • Managing S3 object lifecycles • Selecting the appropriate backup and/or archival solution • Selecting the appropriate service for data migration to storage services • Selecting the appropriate storage tier • Selecting the correct data lifecycle for storage • Selecting the most cost-effective storage service for a workload
Task Statement 2: Design cost-optimized compute solutions. Knowledge of: • AWS cost management service features (for example, cost allocation tags, multi-account billing) • AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report) • AWS global infrastructure (for example, Availability Zones, AWS Regions) • AWS purchasing options (for example, Spot Instances, Reserved Instances, Savings Plans) • Distributed compute strategies (for example, edge processing) • Hybrid compute options (for example, AWS Outposts, AWS Snowball Edge) • Instance types, families, and sizes (for example, memory optimized, compute optimized, virtualization) • Optimization of compute utilization (for example, containers, serverless computing, microservices) • Scaling strategies (for example, auto scaling, hibernation)
Skills in: • Determining an appropriate load balancing strategy (for example, Application Load Balancer [Layer 7] compared with Network Load Balancer [Layer 4] compared with Gateway Load Balancer) • Determining appropriate scaling methods and strategies for elastic workloads (for example, horizontal compared with vertical, EC2 hibernation) • Determining cost-effective AWS compute services with appropriate use cases (for example, Lambda, Amazon EC2, Fargate) • Determining the required availability for different classes of workloads (for example, production workloads, non-production workloads) • Selecting the appropriate instance family for a workload • Selecting the appropriate instance size for a workload
Task Statement 3: Design cost-optimized database solutions. Knowledge of: • AWS cost management service features (for example, cost allocation tags, multi-account billing) • AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report) • Caching strategies • Data retention policies • Database capacity planning (for example, capacity units) • Database connections and proxies • Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations) • Database replication (for example, read replicas) • Database types and services (for example, relational compared with non-relational, Aurora, DynamoDB)
Skills in: • Designing appropriate backup and retention policies (for example, snapshot frequency) • Determining an appropriate database engine (for example, MySQL compared with PostgreSQL) • Determining cost-effective AWS database services with appropriate use cases (for example, DynamoDB compared with Amazon RDS, serverless) • Determining cost-effective AWS database types (for example, time series format, columnar format) • Migrating database schemas and data to different locations and/or different database engines
Task Statement 4: Design cost-optimized network architectures. Knowledge of: • AWS cost management service features (for example, cost allocation tags, multi-account billing) • AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report) • Load balancing concepts (for example, Application Load Balancer) • NAT gateways (for example, NAT instance costs compared with NAT gateway costs) • Network connectivity (for example, private lines, dedicated lines, VPNs) • Network routing, topology, and peering (for example, AWS Transit Gateway, VPC peering) • Network services with appropriate use cases (for example, DNS)
Skills in: • Configuring appropriate NAT gateway types for a network (for example, a single shared NAT gateway compared with NAT gateways for each Availability Zone) • Configuring appropriate network connections (for example, Direct Connect compared with VPN compared with internet) • Configuring appropriate network routes to minimize network transfer costs (for example, Region to Region, Availability Zone to Availability Zone, private to public, Global Accelerator, VPC endpoints) • Determining strategic needs for content delivery networks (CDNs) and edge caching • Reviewing existing workloads for network optimizations • Selecting an appropriate throttling strategy • Selecting the appropriate bandwidth allocation for a network device (for example, a single VPN compared with multiple VPNs, Direct Connect speed)
Which key tools, technologies, and concepts might be covered on the exam? The following is a non-exhaustive list of the tools and technologies that could appear on the exam. This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam. The general tools and technologies in this list appear in no particular order. AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance: • Compute • Cost management • Database • Disaster recovery • High performance • Management and governance • Microservices and component decoupling • Migration and data transfer • Networking, connectivity, and content delivery • Resiliency • Security • Serverless and event-driven design principles • Storage
AWS Services and Features There are lots of new services and feature updates in scope for the new AWS Certified Solutions Architect Associate certification! Here’s a list of some of the new services that will be in scope for the new version of the exam:
Analytics: • Amazon Athena • AWS Data Exchange • AWS Data Pipeline • Amazon EMR • AWS Glue • Amazon Kinesis • AWS Lake Formation • Amazon Managed Streaming for Apache Kafka (Amazon MSK) • Amazon OpenSearch Service (Amazon Elasticsearch Service) • Amazon QuickSight • Amazon Redshift
Management and Governance: • AWS Auto Scaling • AWS CloudFormation • AWS CloudTrail • Amazon CloudWatch • AWS Command Line Interface (AWS CLI) • AWS Compute Optimizer • AWS Config • AWS Control Tower • AWS License Manager • Amazon Managed Grafana • Amazon Managed Service for Prometheus • AWS Management Console • AWS Organizations • AWS Personal Health Dashboard • AWS Proton • AWS Service Catalog • AWS Systems Manager • AWS Trusted Advisor • AWS Well-Architected Tool
Media Services: • Amazon Elastic Transcoder • Amazon Kinesis Video Streams
Migration and Transfer: • AWS Application Discovery Service • AWS Application Migration Service (CloudEndure Migration) • AWS Database Migration Service (AWS DMS) • AWS DataSync • AWS Migration Hub • AWS Server Migration Service (AWS SMS) • AWS Snow Family • AWS Transfer Family
Out-of-scope AWS services and features The following is a non-exhaustive list of AWS services and features that are not covered on the exam. These services and features do not represent every AWS offering that is excluded from the exam content.
AWS solutions architect associate exam prep facts and summaries questions and answers dump – Solution Architecture Definition 1:
Solution architecture is a practice of defining and describing an architecture of a system delivered in context of a specific solution and as such it may encompass description of an entire system or only its specific parts. Definition of a solution architecture is typically led by a solution architect.
AWS solutions architect associate exam prep facts and summaries questions and answers dump – Solution Architecture Definition 2:
If you are running an application in a production environment and must add a new EBS volume with data from a snapshot, what could you do to avoid degraded performance during the volume’s first use? Initialize the data by reading each storage block on the volume. Volumes created from an EBS snapshot must be initialized. Initializing occurs the first time a storage block on the volume is read, and performance can be degraded by up to 50%. You can avoid this impact in production environments by pre-warming the volume, i.e. reading all of its blocks.
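Pre-warming is just a sequential read of every block. On a Linux instance you would normally do this with dd or fio against the device (for example /dev/xvdf); the sketch below shows the same idea in Python, with the device path as an assumed placeholder:

```python
# Sketch: initialize ("pre-warm") a restored EBS volume by reading every
# block once. On a real instance you would run this as root against the
# block device (e.g. /dev/xvdf); dd or fio are the usual tools.

def warm_volume(path, block_size=1024 * 1024):
    """Sequentially read every block of `path`; returns total bytes read."""
    total = 0
    with open(path, "rb") as dev:
        while True:
            chunk = dev.read(block_size)
            if not chunk:
                break  # end of device/file reached: every block has been read
            total += len(chunk)
    return total
```

The function works on any readable path, so it can be tried on an ordinary file before pointing it at a device.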
If you are running a legacy application that has hard-coded static IP addresses and it is running on an EC2 instance; what is the best failover solution that allows you to keep the same IP address on a new instance? Elastic IP addresses (EIPs) are designed to be attached/detached and moved from one EC2 instance to another. They are a great solution for keeping a static IP address and moving it to a new instance if the current instance fails. This will reduce or eliminate any downtime users may experience.
Which feature of Intel processors help to encrypt data without significant impact on performance? AES-NI
You can mount EFS from which two of the following?
On-prem servers running Linux
EC2 instances running Linux
EFS is not compatible with Windows operating systems.
When a file is encrypted and the stored data is not in transit, it’s known as encryption at rest. What is an example of encryption at rest? Data stored on an encrypted EBS volume, or objects stored in S3 with server-side encryption enabled.
When would vertical scaling be necessary? When an application is built entirely into one source code, otherwise known as a monolithic application.
Fault-Tolerance allows for continuous operation throughout a failure, which can lead to a low Recovery Time Objective. RPO vs RTO
High-Availability means automating tasks so that an instance will quickly recover, which can lead to a low Recovery Time Objective.
Frequent backups reduce the time between the last backup and recovery point, otherwise known as the Recovery Point Objective.
Which represents the difference between Fault-Tolerance and High-Availability? High-Availability means the system will quickly recover from a failure event, and Fault-Tolerance means the system will maintain operations during a failure.
From a security perspective, what is a principal? A principal is an entity that can act on a system. Both anonymous users and authenticated users fall under the definition of a principal.
It is the customer’s responsibility to patch the operating system on an EC2 instance.
In designing an environment, what four main points should a Solutions Architect keep in mind? Cost efficiency, security, application session state, and undifferentiated heavy lifting: these four main points should be the framework when designing an environment.
In the context of disaster recovery, what does RPO stand for? RPO is the abbreviation for Recovery Point Objective.
What are the benefits of horizontal scaling?
Vertical scaling can be costly while horizontal scaling is cheaper.
Horizontal scaling suffers from none of the size limitations of vertical scaling.
Having horizontal scaling means you can easily route traffic to another instance of a server.
Q1: A Solutions Architect is designing a critical business application with a relational database that runs on an EC2 instance. It requires a single EBS volume that can support up to 16,000 IOPS. Which Amazon EBS volume type can meet the performance requirements of this application?
A. EBS Provisioned IOPS SSD
B. EBS Throughput Optimized HDD
C. EBS General Purpose SSD
D. EBS Cold HDD
Answer: A ( Get the SAA Exam Prep for More: iOS – Android – Windows )
EBS Provisioned IOPS SSD provides sustained performance for mission-critical, low-latency workloads. EBS General Purpose SSD can burst to 3,000 IOPS, but its baseline performance scales with volume size and is not guaranteed to sustain 16,000 IOPS. The two HDD options are lower-cost, high-throughput volumes.
Q2: An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk. Which solution will resolve the security concern?
A. Access the data through an Internet Gateway.
B. Access the data through a VPN connection.
C. Access the data through a NAT Gateway.
D.Access the data through a VPC endpoint for Amazon S3
Answer: D
VPC endpoints for Amazon S3 provide secure connections to S3 buckets that do not require a gateway or NAT instances. NAT Gateways and Internet Gateways still route traffic over the Internet to the public endpoint for Amazon S3. There is no way to connect to Amazon S3 via VPN.
Q3: An organization is building an Amazon Redshift cluster in their shared services VPC. The cluster will host sensitive data. How can the organization control which networks can access the cluster?
A. Run the cluster in a different VPC and connect through VPC peering.
B. Create a database user inside the Amazon Redshift cluster only for users on the network.
C. Define a cluster security group for the cluster that allows access from the allowed networks.
D. Only allow access to networks that connect with the shared services network via VPN.
Answer: C. A security group can grant access to traffic from the allowed networks via the CIDR range for each network. VPC peering and VPN are connectivity services and cannot control traffic for security. Amazon Redshift user accounts address authentication and authorization at the user level and have no control over network traffic.
Q4: A web application allows customers to upload orders to an S3 bucket. The resulting Amazon S3 events trigger a Lambda function that inserts a message to an SQS queue. A single EC2 instance reads messages from the queue, processes them, and stores them in an DynamoDB table partitioned by unique order ID. Next month traffic is expected to increase by a factor of 10 and a Solutions Architect is reviewing the architecture for possible scaling problems. Which component is MOST likely to need re-architecting to be able to scale to accommodate the new traffic?
A. Lambda function
B. SQS queue
C. EC2 instance
D. DynamoDB table
Answer: C. A single EC2 instance will not scale and is a single point of failure in the architecture. A much better solution would be to have EC2 instances in an Auto Scaling group across 2 availability zones read messages from the queue. The other responses are all managed services that can be configured to scale or will scale automatically.
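The re-architected consumer side — several workers draining the same queue so capacity scales horizontally — can be sketched locally. Here an in-memory queue.Queue stands in for SQS and each thread stands in for an EC2 worker in an Auto Scaling group; the names are illustrative, not an AWS API:

```python
# Sketch: scale queue processing horizontally by adding consumers.
# queue.Queue stands in for SQS; each thread stands in for an EC2 worker
# in an Auto Scaling group reading from the same queue.
import queue
import threading

def process_all(messages, workers=4):
    q = queue.Queue()
    for m in messages:
        q.put(m)

    processed = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                msg = q.get_nowait()
            except queue.Empty:
                return  # queue drained: this worker exits
            with lock:
                processed.append(msg)  # stand-in for "store in DynamoDB"
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return processed
```

Because every worker pulls from the same queue, doubling the worker count roughly doubles throughput without any single consumer becoming a bottleneck.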
Q5: An application requires a highly available relational database with an initial storage capacity of 8 TB. The database will grow by 8 GB every day. To support expected traffic, at least eight read replicas will be required to handle database reads. Which option will meet these requirements?
A. Amazon DynamoDB
B. Amazon S3
C. Amazon Aurora
D. Amazon Redshift
Answer: C. Amazon Aurora is a relational database that will automatically scale to accommodate data growth. Amazon Redshift does not support read replicas and will not automatically scale. DynamoDB is a NoSQL service, not a relational database. Amazon S3 is object storage, not a relational database.
C. Divide your file system into multiple smaller file systems.
D. Provision higher IOPS for your EFS.
Answer: B. Amazon EFS now allows you to instantly provision the throughput required for your applications independent of the amount of data stored in your file system. This allows you to optimize throughput for your application’s performance needs.
Q8: A Solution Architect is designing an online shopping application running in a VPC on EC2 instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application tier must read and write data to a customer managed database cluster. There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet.
Which VPC design meets these requirements?
A. Public subnets for both the application tier and the database cluster
B. Public subnets for the application tier, and private subnets for the database cluster
C. Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster
D. Public subnets for the application tier, and private subnets for the database cluster and NAT Gateway
Answer: C. The online application must be in public subnets to allow access from clients’ browsers. The database cluster must be in private subnets to meet the requirement that there be no access from the Internet.
A NAT Gateway is required to give the database cluster the ability to download patches from the Internet. NAT Gateways must be deployed in public subnets.
Q9: What command should you run on a running instance if you want to view its user data (that is used at launch)?
A. curl http://254.169.254.169/latest/user-data
B. curl http://localhost/latest/meta-data/bootstrap
C. curl http://localhost/latest/user-data
D. curl http://169.254.169.254/latest/user-data
Answer: D. To retrieve user data from within a running instance, use the following URI: http://169.254.169.254/latest/user-data
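Note that instances are now commonly configured to require IMDSv2, which adds a session-token step before the metadata GET. The sketch below builds the two requests; the actual urlopen calls only succeed from inside a running EC2 instance, so they are shown as comments:

```python
# Sketch: fetch instance user data with IMDSv2 (token-based). This only
# works from inside a running EC2 instance, where 169.254.169.254 is the
# instance metadata service endpoint.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token_request(ttl_seconds=21600):
    """Build the PUT request that obtains an IMDSv2 session token."""
    return urllib.request.Request(
        IMDS + "/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def user_data_request(token):
    """Build the GET request for user data, authenticated with the token."""
    return urllib.request.Request(
        IMDS + "/latest/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )

# On an instance you would then do:
#   token = urllib.request.urlopen(imds_token_request()).read().decode()
#   data  = urllib.request.urlopen(user_data_request(token)).read()
```

With IMDSv1 the plain `curl http://169.254.169.254/latest/user-data` shown in option D still works where it is enabled.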
Q10: A company is developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? (Select TWO.)
A. Amazon CloudWatch
B. Amazon DynamoDB
C. Elastic Load Balancing
D. Amazon ElastiCache
E. Storage Gateway
Answer: B. and D. Both DynamoDB and ElastiCache provide high-performance storage of key-value pairs. CloudWatch and ELB are not storage services. Storage Gateway is a storage service, but it is a hybrid storage service that enables on-premises applications to use cloud storage.
A stateful web service will keep track of the “state” of a client’s connection and data over several requests. So for example, the client might login, select a users account data, update their address, attach a photo, and change the status flag, then disconnect.
In a stateless web service, the server doesn’t keep any information from one request to the next. The client needs to do its work in a series of simple transactions, and the client has to keep track of what happens between requests. So in the above example, the client needs to do each operation separately: connect and update the address, then disconnect. Connect and attach the photo, then disconnect. Connect and change the status flag, then disconnect.
A stateless web service is much simpler to implement, and can handle a greater volume of clients.
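The stateless pattern from Q10 — keep session state in a shared store such as DynamoDB or ElastiCache so any server in the fleet can serve any request — can be sketched like this. A plain dict stands in for the external store; the handler and action names are illustrative:

```python
# Sketch: a stateless request handler. Per-user session state lives in an
# external store (a dict here, DynamoDB or ElastiCache in practice), so no
# state is held on the web server itself and any server can serve any request.

def handle_request(store, session_id, action, value=None):
    session = store.get(session_id, {})  # load state from the shared store
    if action == "get_address":
        return session.get("address")
    if action == "set_address":
        session["address"] = value
    store[session_id] = session          # write state back to the shared store

# Usage: two separate calls (think: two different servers behind the load
# balancer) share the same store, so the second call sees the first's write.
```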
Q12: What are the characteristics of a tiered application?
A. All three application layers are on the same instance
B. The presentation tier is on a separate instance from the logic layer
C. None of the tiers can be cloned
D. The logic layer is on a separate instance from the data layer
E. Additional machines can be added to help the application by implementing horizontal scaling
F. Incapable of horizontal scaling
Answer: B, D, and E.
In a tiered application, the presentation layer is separate from the logic layer; the logic layer is separate from the data layer. Since parts of the application are isolated, they can scale horizontally.
Q17: You lead a team to develop a new online game application in AWS EC2. The application will have a large number of users globally. For a great user experience, this application requires very low network latency and jitter. If the network speed is not fast enough, you will lose customers. Which tool would you choose to improve the application performance? (Select TWO.)
A. AWS VPN
B. AWS Global Accelerator
C. Direct Connect
D. API Gateway
E. CloudFront
Answer: B and E
Notes: This online game application has global users and needs low latency. Both CloudFront and Global Accelerator can speed up the distribution of content over the AWS global network. AWS Global Accelerator works at the network layer and is able to direct traffic to optimal endpoints. CloudFront delivers content through edge locations, and users are routed to the edge location that has the lowest time delay.
Q18: A company has a media processing application deployed in a local data center. Its file storage is built on a Microsoft Windows file server. The application and file server need to be migrated to AWS. You want to quickly set up the file server in AWS and the application code should continue working to access the file systems. Which method should you choose to create the file server?
A. Create a Windows File Server from Amazon WorkSpaces.
B. Configure a high performance Windows File System in Amazon EFS.
C. Create a Windows File Server in Amazon FSx.
D. Configure a secure enterprise storage through Amazon WorkDocs.
Answer: C
Notes: In this question, a Windows file server is required in AWS and the application should continue to work unchanged. Amazon FSx for Windows File Server is the correct answer as it is backed by a fully native Windows file system.
Q19: You are developing an application using AWS SDK to get objects from AWS S3. The objects have big sizes and sometimes there are failures when getting objects especially when the network connectivity is poor. You want to get a specific range of bytes in a single GET request and retrieve the whole object in parts. Which method can achieve this?
A. Enable multipart upload in the AWS SDK.
B. Use the “Range” HTTP header in a GET request to download the specified range bytes of an object.
C. Reduce the retry requests and enlarge the retry timeouts through AWS SDK when fetching S3 objects.
D. Retrieve the whole S3 object through a single GET operation.
Answer: B
Notes: With byte-range fetches, users can establish concurrent connections to Amazon S3 to fetch different parts from within the same object.
Through the “Range” header in the HTTP GET request, a specified portion of the object can be downloaded instead of the whole object.
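A byte-range download has two parts: splitting the object size into ranges, and issuing a GET with a `Range` header per part. The sketch below uses only the standard library; `fetch_range` would be pointed at a presigned S3 URL (or any server that honors Range requests) and is not executed here:

```python
# Sketch: download a large object in parts with HTTP Range requests.
# byte_ranges() is pure logic; fetch_range() would be used against a
# presigned S3 URL (S3 answers Range requests with 206 Partial Content).
import urllib.request

def byte_ranges(object_size, part_size):
    """Yield inclusive (start, end) byte ranges covering the whole object."""
    for start in range(0, object_size, part_size):
        yield start, min(start + part_size, object_size) - 1

def fetch_range(url, start, end):
    """GET the bytes start..end (inclusive) of the object at `url`."""
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Each range can be fetched on its own connection, which is what makes concurrent part downloads (and retrying just a failed part) possible.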
Q20: You have an application hosted in an Auto Scaling group and an application load balancer distributes traffic to the ASG. You want to add a scaling policy that keeps the average aggregate CPU utilization of the Auto Scaling group to be 60 percent. The capacity of the Auto Scaling group should increase or decrease based on this target value. Which scaling policy does it belong to?
Answer: Target tracking scaling policy.
Notes: A target tracking scaling policy can be applied to track the ASGAverageCPUUtilization metric. In an ASG, you can add a target tracking scaling policy based on a target value. See the Auto Scaling documentation for the different scaling policies.
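Target tracking scales capacity roughly in proportion to how far the metric is from the target. The sketch below is a simplified model of that calculation, not the exact AWS algorithm, but it shows why a 90% actual CPU against a 60% target grows a 10-instance group to about 15:

```python
# Sketch: the proportional calculation behind target tracking (simplified).
# If average CPU is above target, capacity grows in proportion to the
# overshoot; the ASG clamps the result to its min/max capacity.
import math

def target_tracking_capacity(current_capacity, actual_util, target_util,
                             min_cap=1, max_cap=100):
    desired = math.ceil(current_capacity * actual_util / target_util)
    return max(min_cap, min(max_cap, desired))
```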
Q21: You need to launch a number of EC2 instances to run Cassandra. There are large distributed and replicated workloads in Cassandra and you plan to launch instances using EC2 placement groups. The traffic should be distributed evenly across several partitions and each partition should contain multiple instances. Which strategy would you use when launching the placement groups?
A. Cluster placement strategy
B. Spread placement strategy.
C. Partition placement strategy.
D. Network placement strategy.
Answer: C
Notes: Placement groups have the placement strategies of Cluster, Partition and Spread. With the Partition placement strategy, instances in one partition do not share the underlying hardware with other partitions. This strategy is suitable for distributed and replicated workloads such as Cassandra. For details, refer to the placement groups documentation.
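The point of the partition strategy is that replicas land on separate hardware. A simple round-robin assignment models how you would spread Cassandra nodes evenly across partitions (the function and names are illustrative, not an AWS API):

```python
# Sketch: spread instances evenly across the partitions of a partition
# placement group (each partition = isolated racks/hardware), the layout
# you want for replicated workloads like Cassandra.

def assign_to_partitions(instance_ids, partition_count):
    partitions = [[] for _ in range(partition_count)]
    for i, instance in enumerate(instance_ids):
        partitions[i % partition_count].append(instance)  # round-robin
    return partitions
```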
Q22: To improve the network performance, you launch a C5 EC2 Amazon Linux instance and enable enhanced networking by modifying the instance attribute with “aws ec2 modify-instance-attribute --instance-id instance_id --ena-support”. Which mechanism does the EC2 instance use to enhance the networking capabilities?
A. Intel 82599 Virtual Function (VF) interface.
B. Elastic Fabric Adapter (EFA).
C. Elastic Network Adapter (ENA).
D. Elastic Network Interface (ENI).
Answer: C
Notes: Enhanced networking has two mechanisms: the Elastic Network Adapter (ENA) and the Intel 82599 Virtual Function (VF) interface. ENA is the mechanism enabled with --ena-support.
Q23: You work for an online retailer where any downtime at all can cause a significant loss of revenue. You have architected your application to be deployed on an Auto Scaling Group of EC2 instances behind a load balancer. You have configured and deployed these resources using a CloudFormation template. The Auto Scaling Group is configured with default settings and a simple CPU utilization scaling policy. You have also set up multiple Availability Zones for high availability. The Load Balancer does health checks against an HTML file generated by a script. When you begin performing load testing on your application, you notice in CloudWatch that the load balancer is not sending traffic to one of your EC2 instances. What could be the problem?
A. The EC2 instance has failed the load balancer health check.
B. The instance has not been registered with CloudWatch.
C. The EC2 instance has failed EC2 status checks.
D. You are load testing at a moderate traffic level and not all instances are needed.
Answer: A
Notes: The load balancer will route incoming requests only to healthy instances. The EC2 instance may have passed its status checks and be considered healthy by the Auto Scaling Group, but the ELB will not use it if the ELB health check has not been met. The ELB health check has a default of 30 seconds between checks, and a default of 3 checks before making a decision. Therefore the instance could be available but unused for at least 90 seconds before it would show as failed. In CloudWatch, where the issue was noticed, it would appear to be a healthy EC2 instance but with no traffic, which is what was observed.
References: ELB HealthCheck
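The "at least 90 seconds" in the notes falls directly out of the health-check settings: interval × unhealthy threshold. A one-line helper makes the arithmetic explicit (parameter names are illustrative, mirroring the ELB health-check settings):

```python
# Sketch: how long a failing instance can sit "healthy-looking but unused".
# The ELB only marks a target unhealthy after `unhealthy_threshold`
# consecutive failed checks, spaced `interval` seconds apart.

def seconds_until_unhealthy(interval=30, unhealthy_threshold=3):
    return interval * unhealthy_threshold
```

Tightening either setting (e.g. a 10-second interval with 2 checks) shortens the window, at the cost of more aggressive health-check traffic and a higher chance of flapping.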
Q24: Your company is using a hybrid configuration because there are some legacy applications which are not easily converted and migrated to AWS. And with this configuration comes a typical scenario where the legacy apps must maintain the same private IP address and MAC address. You are attempting to convert the application to the cloud and have configured an EC2 instance to house the application. What you are currently testing is removing the ENI from the legacy instance and attaching it to the EC2 instance. You want to attempt a cold attach. What does this mean?
A. Attach ENI when it’s stopped.
B. Attach ENI before the public IP address is assigned.
C. Attach ENI to an instance when it’s running.
D. Attach ENI when the instance is being launched.
Answer: D
Notes: Best practices for configuring network interfaces:
• You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach).
• You can detach secondary network interfaces when the instance is running or stopped. However, you can’t detach the primary network interface.
• You can move a network interface from one instance to another, if the instances are in the same Availability Zone and VPC but in different subnets.
• When launching an instance using the CLI, API, or an SDK, you can specify the primary network interface and additional network interfaces.
• Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance.
• A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly. Instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves.
• Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance.
• If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing. If possible, use a secondary private IPv4 address on the primary network interface instead. For more information, see Assigning a secondary private IPv4 address.
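Since a cold attach means the ENI is specified at launch, in code it shows up as the NetworkInterfaces parameter of a launch call rather than a separate attach call. The helper below builds that parameter in the shape boto3's ec2.run_instances expects; the ENI ID is a placeholder and the actual AWS call is left as a comment so the sketch runs without credentials:

```python
# Sketch: a cold attach = specifying the existing ENI at launch time.
# This builds the NetworkInterfaces argument for boto3's
# ec2.run_instances(...); eni_id is a hypothetical placeholder.

def cold_attach_spec(eni_id, device_index=0):
    return [{
        "DeviceIndex": device_index,      # 0 = the primary network interface
        "NetworkInterfaceId": eni_id,     # the ENI detached from the legacy instance
    }]

# Usage (not executed here — requires AWS credentials):
#   import boto3
#   ec2 = boto3.client("ec2")
#   ec2.run_instances(ImageId="ami-...", InstanceType="t3.micro",
#                     MinCount=1, MaxCount=1,
#                     NetworkInterfaces=cold_attach_spec("eni-0abc123"))
```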
Q25: Your company has recently converted to a hybrid cloud environment and will slowly be migrating to a fully AWS cloud environment. The AWS side is in need of some steps to prepare for disaster recovery. A disaster recovery plan needs to be drawn up, and disaster recovery drills need to be performed for compliance reasons. The company wants to establish Recovery Time and Recovery Point Objectives. The RTO and RPO can be pretty relaxed. The main point is to have a plan in place, with as much cost savings as possible. Which AWS disaster recovery pattern will best meet these requirements?
A. Warm Standby
B. Backup and restore
C. Multi Site
D. Pilot Light
Notes: Backup and Restore: This is the least expensive option, and cost is the overriding factor here. Data is simply backed up (for example, to S3 or Glacier) and restored only if disaster strikes, trading a longer RTO and RPO for minimal ongoing cost. Pilot Light, Warm Standby, and Multi Site each keep progressively more infrastructure running, improving recovery times at increasing expense.
Q26: An international travel company has an application which provides travel information and alerts to users all over the world. The application is hosted on groups of EC2 instances in Auto Scaling Groups in multiple AWS Regions. There are also load balancers routing traffic to these instances. In two countries, Ireland and Australia, there are compliance rules in place that dictate users connect to the application in eu-west-1 and ap-southeast-1. Which service can you use to meet this requirement?
A. Use Route 53 weighted routing.
B. Use Route 53 geolocation routing.
C. Configure CloudFront and the users will be routed to the nearest edge location.
D. Configure the load balancers to route users to the proper region.
Notes: Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from. For example, you might want all queries from Europe to be routed to an ELB in the Frankfurt region. When you use geolocation routing, you can localize your content and present some or all of your website in the language of your users. You can also use geolocation routing to restrict distribution of content to only the locations in which you have distribution rights. Another possible use is for balancing load across endpoints in a predictable, easy-to-manage way, so that each user location is consistently routed to the same endpoint.
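The geolocation records for Ireland and Australia could be expressed as a Route 53 ChangeBatch. This is a boto3-style sketch only; the domain name, set identifiers, and ELB DNS names are invented for illustration:

```python
# Sketch of Route 53 geolocation record changes. The record name,
# SetIdentifier values, and ELB endpoints below are hypothetical.
def geolocation_record(name, country_code, set_id, elb_dns):
    """Build one UPSERT change for a geolocation-routed CNAME record."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "CNAME",
            "SetIdentifier": set_id,
            "GeoLocation": {"CountryCode": country_code},
            "TTL": 60,
            "ResourceRecords": [{"Value": elb_dns}],
        },
    }

changes = [
    geolocation_record("app.example.com", "IE", "ireland-users",
                       "my-elb.eu-west-1.elb.amazonaws.com"),
    geolocation_record("app.example.com", "AU", "australia-users",
                       "my-elb.ap-southeast-1.elb.amazonaws.com"),
]
# These would be passed to route53.change_resource_record_sets(
#     HostedZoneId=..., ChangeBatch={"Changes": changes})
```

Queries originating in Ireland resolve to the eu-west-1 load balancer and queries from Australia to ap-southeast-1, satisfying the compliance rule.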
Q26: You have taken over management of several instances in the company AWS environment. You want to quickly review scripts used to bootstrap the instances at runtime. A URL command can be used to do this. What can you append to the URL http://169.254.169.254/latest/ to retrieve this data?
Notes: When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. Appending user-data to the URL (http://169.254.169.254/latest/user-data) retrieves the user data supplied at launch.
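The instance metadata service is only reachable from inside the instance itself, so the sketch below just builds the URL; actually fetching it would require running on EC2:

```python
# Instance metadata/user data lives under a fixed link-local address.
# This only builds the URL; the request itself works only from within
# an EC2 instance.
METADATA_BASE = "http://169.254.169.254/latest/"

def metadata_url(path: str) -> str:
    """Build a URL under the instance metadata service root."""
    return METADATA_BASE + path

user_data_url = metadata_url("user-data")
print(user_data_url)  # http://169.254.169.254/latest/user-data
# On the instance: curl http://169.254.169.254/latest/user-data
```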
Q27: A software company has created an application to capture service requests from users and also enhancement requests. The application is deployed on an Auto Scaling group of EC2 instances fronted by an Application Load Balancer. The Auto Scaling group has scaled to maximum capacity, but there are still requests being lost. The cost of these instances is becoming an issue. What step can the company take to ensure requests aren’t lost?
A. Use larger instances in the Auto Scaling group.
B. Use spot instances to save money.
C. Use an SQS queue with the Auto Scaling group to capture all requests.
D. Use a Network Load Balancer instead for faster throughput.
Notes: There are some scenarios where you might think about scaling in response to activity in an Amazon SQS queue. For example, suppose that you have a web app that lets users upload images and use them online. In this scenario, each image requires resizing and encoding before it can be published. The app runs on EC2 instances in an Auto Scaling group, and it’s configured to handle your typical upload rates. Unhealthy instances are terminated and replaced to maintain current instance levels at all times. The app places the raw bitmap data of the images in an SQS queue for processing. It processes the images and then publishes the processed images where they can be viewed by users. The architecture for this scenario works well if the number of image uploads doesn’t vary over time. But if the number of uploads changes over time, you might consider using dynamic scaling to scale the capacity of your Auto Scaling group.
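The scaling pattern AWS documents for this scenario is a custom "backlog per instance" metric: queue depth divided by in-service instances, compared against an acceptable backlog derived from your latency target. A minimal sketch (the processing rate and latency target are illustrative):

```python
# Backlog-per-instance: the custom metric used to scale an Auto Scaling
# group on SQS queue depth. The numbers below are illustrative.
def backlog_per_instance(queue_depth: int, running_instances: int) -> float:
    """ApproximateNumberOfMessages divided by in-service instances."""
    if running_instances == 0:
        return float(queue_depth)
    return queue_depth / running_instances

# If each instance processes ~10 messages/minute and the latency target
# is 5 minutes, the acceptable backlog per instance is 10 * 5 = 50.
TARGET_BACKLOG = 10 * 5

def should_scale_out(queue_depth: int, running_instances: int) -> bool:
    return backlog_per_instance(queue_depth, running_instances) > TARGET_BACKLOG

print(should_scale_out(1500, 10))  # True: 150 messages per instance
print(should_scale_out(300, 10))   # False: 30 messages per instance
```

Because the queue absorbs bursts, no request is lost even while the group is scaling, which is why answer C solves the problem where simply using larger instances would not.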
Q28: A company has an Auto Scaling group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. The company has a very aggressive Recovery Time Objective (RTO) in case of disaster. How long will a failover typically take to complete?
A. Under 10 minutes
B. Within an hour
C. Almost instantly
D. One to two minutes
Notes: What happens during Multi-AZ failover and how long does it take? Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention. When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary. We encourage you to follow best practices and implement database connection retry at the application layer. Failovers, as defined by the interval between the detection of the failure on the primary and the resumption of transactions on the standby, typically complete within one to two minutes. Failover time can also be affected by whether large uncommitted transactions must be recovered; the use of adequately large instance types is recommended with Multi-AZ for best results. AWS also recommends the use of Provisioned IOPS with Multi-AZ instances for fast, predictable, and consistent throughput performance.
Q29: You have two EC2 instances running in the same VPC, but in different subnets. You are removing the secondary ENI from an EC2 instance and attaching it to another EC2 instance. You want this to be fast and with limited disruption. So you want to attach the ENI to the EC2 instance when it’s running. What is this called?
Notes: You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach). Attaching the ENI while the instance is running, as described here, is a hot attach. Note that you can only detach secondary network interfaces this way; the primary network interface cannot be detached. The full list of network interface best practices appears in the notes for Q24.
Q30: You suspect that one of the AWS services your company is using has gone down. How can you check on the status of this service?
A. AWS Trusted Advisor
B. Amazon Inspector
C. AWS Personal Health Dashboard
D. AWS Organizations
Notes: AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view of the performance and availability of the AWS services underlying your AWS resources. The dashboard displays relevant and timely information to help you manage events in progress, and provides proactive notification to help you plan for scheduled activities. With Personal Health Dashboard, alerts are triggered by changes in the health of AWS resources, giving you event visibility and guidance to help quickly diagnose and resolve issues.
Q31: You have configured an Auto Scaling Group of EC2 instances fronted by an Application Load Balancer and backed by an RDS database. You want to begin monitoring the EC2 instances using CloudWatch metrics. Which metric is not readily available out of the box?
Notes: By default, EC2 sends CPU utilization, disk I/O, and network metrics to CloudWatch. Memory utilization is not available out of the box; it requires installing the CloudWatch agent (or a custom script) on the instance.
Q32: Several instances you are creating have a specific data requirement. The requirement states that the data on the root device needs to persist independently from the lifetime of the instance. After considering AWS storage options, which is the simplest way to meet these requirements?
A. Store your root device data on Amazon EBS.
B. Store the data on the local instance store.
C. Create a cron job to migrate the data to S3.
D. Send the data to S3 using S3 lifecycle rules.
Notes: By using Amazon EBS, data on the root device will persist independently from the lifetime of the instance. This enables you to stop and restart the instance at a subsequent time, which is similar to shutting down your laptop and restarting it when you need it again.
Q33: A company has an Auto Scaling Group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. What will happen to preserve high availability if the primary database fails?
A. A Lambda function kicks off a CloudFormation template to deploy a backup database.
B. The CNAME is switched from the primary db instance to the secondary.
C. Route 53 points the CNAME to the secondary database instance.
D. The Elastic IP address for the primary database is moved to the secondary database.
Notes: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention. When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary.
Q34: After several issues with your application and unplanned downtime, your recommendation to migrate your application to AWS is approved. You have set up high availability on the front end with a load balancer and an Auto Scaling Group. What step can you take with your database to configure high-availability and ensure minimal downtime (under five minutes)?
A. Create a read replica.
B. Enable Multi-AZ failover on the database.
C. Take frequent snapshots of your database.
D. Create your database using CloudFormation and save the template for reuse.
Notes: In the event of a planned or unplanned outage of your DB instance, Amazon RDS automatically switches to a standby replica in another Availability Zone if you have enabled Multi-AZ. The time it takes for the failover to complete depends on the database activity and other conditions at the time the primary DB instance became unavailable. Failover times are typically 60–120 seconds. However, large transactions or a lengthy recovery process can increase failover time. When the failover is complete, it can take additional time for the RDS console to reflect the new Availability Zone. Note the above sentences. Large transactions could cause a problem in getting back up within five minutes, but this is clearly the best of the available choices to attempt to meet this requirement. We must move through our questions on the exam quickly, but always evaluate all the answers for the best possible solution.
Q35: A new startup is considering the advantages of using DynamoDB versus a traditional relational database in AWS RDS. The NoSQL nature of DynamoDB presents a small learning curve to the team members who all have experience with traditional databases. The company will have multiple databases, and the decision will be made on a case-by-case basis. Which of the following use cases would favour DynamoDB? Select two.
Notes: DynamoDB is a NoSQL database that supports key-value and document data structures. A key-value store is a database service that provides support for storing, querying, and updating collections of objects that are identified using a key and values that contain the actual content being stored. Meanwhile, a document data store provides support for storing, querying, and updating items in a document format such as JSON, XML, and HTML. DynamoDB’s fast and predictable performance characteristics make it a great match for handling session data. Plus, since it’s a fully-managed NoSQL database service, you avoid all the work of maintaining and operating a separate session store.
Storing metadata for Amazon S3 objects is correct because the Amazon DynamoDB stores structured data indexed by primary key and allows low-latency read and write access to items ranging from 1 byte up to 400KB. Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB. In order to optimize your costs across AWS services, large objects or infrequently accessed data sets should be stored in Amazon S3, while smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in Amazon DynamoDB.
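The S3-metadata use case could look like the sketch below, which builds a DynamoDB item in the low-level attribute-value wire format. The table design (partition key, attribute names) is hypothetical:

```python
# Hypothetical DynamoDB item storing metadata for an S3 object, in the
# low-level attribute-value format the DynamoDB API uses.
def s3_metadata_item(bucket: str, key: str, size_bytes: int,
                     content_type: str) -> dict:
    """Build a small metadata item pointing at a (possibly huge) S3 object."""
    return {
        "ObjectId": {"S": f"s3://{bucket}/{key}"},  # partition key
        "Bucket": {"S": bucket},
        "Key": {"S": key},
        # Numbers are transmitted as strings in the DynamoDB wire format.
        "SizeBytes": {"N": str(size_bytes)},
        "ContentType": {"S": content_type},
    }

item = s3_metadata_item("media-bucket", "photos/cat.jpg", 482113, "image/jpeg")
# Would be passed to dynamodb.put_item(TableName="s3-metadata", Item=item)
```

The item stays well under DynamoDB’s 400 KB item limit while the object it points to can be up to 5 TB in S3, which is exactly the cost split the notes describe.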
Q36: You have been tasked with designing a strategy for backing up EBS volumes attached to an instance-store-backed EC2 instance. You have been asked for an executive summary on your design, and the executive summary should include an answer to the question, “What can an EBS volume do when snapshotting the volume is in progress”?
A. The volume can be used normally while the snapshot is in progress.
B. The volume can only accommodate writes while a snapshot is in progress.
C. The volume can not be used while a snapshot is in progress.
D. The volume can only accommodate reads while a snapshot is in progress.
Notes: You can create a point-in-time snapshot of an EBS volume and use it as a baseline for new volumes or for data backup. If you make periodic snapshots of a volume, the snapshots are incremental; the new snapshot saves only the blocks that have changed since your last snapshot. Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume.
Q37: You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling Groups that you need to create. One requirement is that you need to reuse some software licenses and therefore need to use dedicated hosts on EC2 instances in your Auto Scaling Groups. What step must you take to meet this requirement?
A. Create your launch configuration, but manually change the instances to Dedicated Hosts in the EC2 console.
B. Use a launch template with your Auto Scaling Group.
C. Create the Dedicated Host EC2 instances, then add them to an existing Auto Scaling Group.
D. Make sure your launch configurations are using Dedicated Hosts.
Notes: In addition to the features of Amazon EC2 Auto Scaling that you can configure by using launch templates, launch templates provide more advanced Amazon EC2 configuration options. For example, you must use launch templates to use Amazon EC2 Dedicated Hosts. Dedicated Hosts are physical servers with EC2 instance capacity that are dedicated to your use. While Amazon EC2 Dedicated Instances also run on dedicated hardware, the advantage of using Dedicated Hosts over Dedicated Instances is that you can bring eligible software licenses from external vendors and use them on EC2 instances. If you currently use launch configurations, you can specify a launch template when you update an Auto Scaling group that was created using a launch configuration. To create a launch template to use with an Auto Scaling Group, create the template from scratch, create a new version of an existing template, or copy the parameters from a launch configuration, running instance, or other template.
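The piece of the launch template that selects Dedicated Hosts is the placement tenancy. A sketch of the launch template data, with a placeholder AMI ID and instance type:

```python
# Sketch of EC2 launch template data for an Auto Scaling group that must
# run on Dedicated Hosts. The AMI ID and instance type are placeholders.
def dedicated_host_launch_template_data(ami_id: str,
                                        instance_type: str) -> dict:
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        # Tenancy "host" places instances on Dedicated Hosts, which is
        # what allows eligible BYOL licenses to be reused.
        "Placement": {"Tenancy": "host"},
    }

data = dedicated_host_launch_template_data("ami-0123456789abcdef0", "m5.large")
# Would be passed to ec2.create_launch_template(
#     LaunchTemplateName="dedicated-host-asg", LaunchTemplateData=data)
```

Launch configurations cannot express this, which is why the question’s answer is to use a launch template.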
Q38: Your organization uses AWS CodeDeploy for deployments. Now you are starting a project on the AWS Lambda platform. For your deployments, you’ve been given a requirement of performing blue-green deployments. When you perform deployments, you want to split traffic, sending a small percentage of the traffic to the new version of your application. Which deployment configuration will allow this splitting of traffic?
Notes: With canary, traffic is shifted in two increments. You can choose from predefined canary options that specify the percentage of traffic shifted to your updated Lambda function version in the first increment and the interval, in minutes, before the remaining traffic is shifted in the second increment.
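The two-increment split can be sketched as simple arithmetic. The helper below is illustrative, not a CodeDeploy API; it just shows what a predefined option such as Canary10Percent5Minutes means:

```python
# Traffic split for a canary deployment: a fixed percentage shifts to
# the new Lambda version first, and the remainder after an interval.
def canary_split(canary_percent: int):
    """Return (first_increment, second_increment) as percentages."""
    if not 0 < canary_percent < 100:
        raise ValueError("canary percentage must be between 1 and 99")
    return canary_percent, 100 - canary_percent

# e.g. the predefined Canary10Percent5Minutes configuration shifts 10%
# immediately and the remaining 90% five minutes later.
first, second = canary_split(10)
print(first, second)  # 10 90
```

Linear configurations, by contrast, shift the same percentage repeatedly at fixed intervals; only canary matches the "small percentage first" requirement in the question.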
Q39: A financial institution has an application that produces huge amounts of actuary data, which is ultimately expected to be in the terabyte range. There is a need to run complex analytic queries against terabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Which storage service will best meet this requirement?
Notes: Amazon Redshift is a fast, fully-managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It enables you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds. With Redshift, you can start small for just $0.25 per hour with no commitments and scale-out to petabytes of data for $1,000 per terabyte per year, less than a tenth of the cost of traditional on-premises solutions. Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to run SQL queries directly against exabytes of unstructured data in Amazon S3 data lakes. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Amazon Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, Sequence, Text, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data retrieved, so queries against Amazon S3 run fast, regardless of data set size.
Q40: A company has an application for sharing static content, such as photos. The popularity of the application has grown, and the company is now sharing content worldwide. This worldwide service has caused some issues with latency. What AWS services can be used to host a static website, serve content to globally dispersed users, and address latency issues, while keeping cost under control? Choose two.
Notes: Amazon S3 is an object storage built to store and retrieve any amount of data from anywhere on the Internet. It’s a simple storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs. AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2, or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.
Q41: You have just been hired by a large organization which uses many different AWS services in their environment. Some of the services which handle data include: RDS, Redshift, ElastiCache, DynamoDB, S3, and Glacier. You have been instructed to configure a web application using stateless web servers. Which services can you use to handle session state data? Choose two.
Q42: After an IT Steering Committee meeting you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies based on the requirements you are given. Your primary requirement is the necessity for a private, dedicated connection, which bypasses the Internet and can provide throughput of 10 Gbps. Which option will you select?
A. AWS Direct Connect
B. VPC Peering
C. AWS VPN
D. AWS Direct Connect Gateway
Notes: AWS Direct Connect can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections. It uses industry-standard 802.1q VLANs to connect to Amazon VPC using private IP addresses. You can choose from an ecosystem of WAN service providers for integrating your AWS Direct Connect endpoint in an AWS Direct Connect location with your remote networks. AWS Direct Connect lets you establish 1 Gbps or 10 Gbps dedicated network connections (or multiple connections) between AWS networks and one of the AWS Direct Connect locations. You can also work with your provider to create sub-1G connection or use link aggregation group (LAG) to aggregate multiple 1 gigabit or 10 gigabit connections at a single AWS Direct Connect endpoint, allowing you to treat them as a single, managed connection. A Direct Connect gateway is a globally available resource to enable connections to multiple Amazon VPCs across different regions or AWS accounts.
Q43: An application is hosted on an EC2 instance in a VPC. The instance is in a subnet in the VPC, and the instance has a public IP address. There is also an internet gateway and a security group with the proper ingress configured. But your testers are unable to access the instance from the Internet. What could be the problem?
A. Make sure the instance has a private IP address.
B. Add a route to the route table, from the subnet containing the instance, to the Internet Gateway.
C. A NAT gateway needs to be configured.
D. A Virtual private gateway needs to be configured.
Notes: The question doesn’t state whether the subnet containing the instance is public or private. An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet. To enable access to or from the internet for instances in a subnet in a VPC, you must do the following:
Attach an internet gateway to your VPC.
Add a route to your subnet’s route table that directs internet-bound traffic to the internet gateway. If a subnet is associated with a route table that has a route to an internet gateway, it’s known as a public subnet. If a subnet is associated with a route table that does not have a route to an internet gateway, it’s known as a private subnet.
Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance.
In your subnet route table, you can specify a route for the internet gateway to all destinations not explicitly known to the route table (0.0.0.0/0 for IPv4 or ::/0 for IPv6). Alternatively, you can scope the route to a narrower range of IP addresses. For example, the public IPv4 addresses of your company’s public endpoints outside of AWS, or the elastic IP addresses of other Amazon EC2 instances outside your VPC.
To enable communication over the Internet for IPv4, your instance must have a public IPv4 address or an Elastic IP address that’s associated with a private IPv4 address on your instance. Your instance is only aware of the private (internal) IP address space defined within the VPC and subnet. The internet gateway logically provides the one-to-one NAT on behalf of your instance so that when traffic leaves your VPC subnet and goes to the Internet, the reply address field is set to the public IPv4 address or elastic IP address of your instance and not its private IP address. Conversely, traffic that’s destined for the public IPv4 address or elastic IP address of your instance has its destination address translated into the instance’s private IPv4 address before the traffic is delivered to the VPC.
To enable communication over the Internet for IPv6, your VPC and subnet must have an associated IPv6 CIDR block, and your instance must be assigned an IPv6 address from the range of the subnet. IPv6 addresses are globally unique, and therefore public by default.
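The route-table logic above can be modeled locally. The toy sketch below (route tables as dicts, with a hypothetical IGW ID) shows both the public-subnet test and the longest-prefix matching that VPC routing uses:

```python
import ipaddress

# Toy model of a VPC route table: destination CIDR -> target.
# The IGW ID below is hypothetical.
def is_public_subnet(route_table: dict) -> bool:
    """A subnet is public if its route table has a route to an IGW."""
    return any(target.startswith("igw-") for target in route_table.values())

def route_target(route_table: dict, dest_ip: str):
    """Longest-prefix match, the way VPC routing chooses a route."""
    ip = ipaddress.ip_address(dest_ip)
    best = None
    for cidr, target in route_table.items():
        net = ipaddress.ip_network(cidr)
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1] if best else None

private_rt = {"10.0.0.0/16": "local"}
public_rt = {"10.0.0.0/16": "local", "0.0.0.0/0": "igw-0abc123"}

print(is_public_subnet(private_rt))          # False: testers are blocked
print(route_target(public_rt, "8.8.8.8"))    # igw-0abc123
print(route_target(public_rt, "10.0.1.5"))   # local (more specific prefix)
```

With only the local 10.0.0.0/16 route, internet-bound traffic has nowhere to go, which is the symptom described in the question; adding the 0.0.0.0/0 route to the internet gateway fixes it.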
Q44: A data company has implemented a subscription service for storing video files. There are two levels of subscription: personal and professional use. The personal users can upload a total of 5 GB of data, and professional users can upload as much as 5 TB of data. The application can upload files of size up to 1 TB to an S3 Bucket. What is the best way to upload files of this size?
A. Multipart upload
B. Single-part Upload
C. AWS Snowball
D. AWS Snowmobile
Notes: The multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: you initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket. You can list all of your in-progress multipart uploads or get a list of the parts that you have uploaded for a specific multipart upload.
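S3 constrains multipart uploads to at most 10,000 parts, with each part between 5 MiB and 5 GiB (the last part may be smaller). The sketch below picks a part size for a 1 TiB upload under those limits; the MiB alignment is a convenience choice, not an S3 requirement:

```python
import math

# S3 multipart limits: 5 MiB minimum part (except the last), 5 GiB
# maximum part, and at most 10,000 parts per upload.
MiB, GiB = 1024**2, 1024**3
MIN_PART, MAX_PART, MAX_PARTS = 5 * MiB, 5 * GiB, 10_000

def choose_part_size(object_size: int) -> int:
    """Smallest MiB-aligned part size keeping the upload within 10,000 parts."""
    part = max(MIN_PART, math.ceil(object_size / MAX_PARTS))
    part = math.ceil(part / MiB) * MiB  # round up to a whole MiB
    if part > MAX_PART:
        raise ValueError("object too large for a single multipart upload")
    return part

one_tib = 1024**4
part = choose_part_size(one_tib)
parts = math.ceil(one_tib / part)
print(part // MiB, parts)  # 105 MiB parts, 9987 parts for a 1 TiB object
```

A single-part PUT tops out at 5 GB, so multipart upload is the only way to get a 1 TB file into S3, which rules out answer B (and the Snow family devices are for offline data transfer, not application uploads).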
Q45: You have multiple EC2 instances housing applications in a VPC in a single Availability Zone. The applications need to communicate at extremely high throughputs to avoid latency for end users. The average throughput needs to be 6 Gbps. What is the best step you can take to ensure this throughput?
Notes: Amazon Web Services’ (AWS) solution to reducing latency between instances involves the use of placement groups. As the name implies, a placement group is just that: a group. AWS instances that exist within a common Availability Zone can be grouped into a placement group. Group members are able to communicate with one another in a way that provides low latency and high throughput. A cluster placement group is a logical grouping of instances within a single Availability Zone. A cluster placement group can span peered VPCs in the same Region. Instances in the same cluster placement group enjoy a higher per-flow throughput limit of up to 10 Gbps for TCP/IP traffic and are placed in the same high-bisection bandwidth segment of the network.
Q46: A team member has been tasked to configure four EC2 instances for four separate applications. These are not high-traffic apps, so there is no need for an Auto Scaling Group. The instances are all in the same public subnet and each instance has an EIP address, and all of the instances have the same Security Group. But none of the instances can send or receive internet traffic. You verify that all the instances have a public IP address. You also verify that an internet gateway has been configured. What is the most likely issue?
A. There is no route in the route table to the internet gateway (or it has been deleted).
B. Each instance needs its own security group.
C. The route table is corrupt.
D. You are using the default NACL.
Notes: The question details all of the configuration needed for internet access, except for a route to the IGW in the route table. This is definitely a key step in any checklist for internet connectivity. It is quite possible to have a subnet with the ‘Public’ attribute set but no route to the Internet in the assigned Route table. (test it yourself). This may have been a setup error, or someone may have thoughtlessly altered the shared Route table for a special case instead of creating a new Route table for the special case.
Q47: You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. The application to be deployed on these instances is a life insurance application which requires path-based and host-based routing. Which type of load balancer will you need to use?
A. Any type of load balancer will meet these requirements.
B. Classic Load Balancer
C. Network Load Balancer
D. Application Load Balancer
Notes: Only the Application Load Balancer can support path-based and host-based routing. Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:
Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
Support for host-based routing. You can configure rules for your listener that forward requests based on the host field in the HTTP header. This enables you to route requests to multiple domains using a single load balancer.
Support for routing based on fields in the request, such as standard and custom HTTP headers and methods, query parameters, and source IP addresses.
Support for routing requests to multiple applications on a single EC2 instance. You can register each instance or IP address with the same target group using multiple ports.
Support for redirecting requests from one URL to another.
Support for returning a custom HTTP response.
Support for registering targets by IP address, including targets outside the VPC for the load balancer.
Support for registering Lambda functions as targets.
Support for the load balancer to authenticate users of your applications through their corporate or social identities before routing requests.
Support for containerized applications. Amazon Elastic Container Service (Amazon ECS) can select an unused port when scheduling a task and register the task with a target group using this port. This enables you to make efficient use of your clusters.
Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level. Attaching a target group to an Auto Scaling group enables you to scale each service dynamically based on demand.
Access logs contain additional information and are stored in compressed format.
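Path-based and host-based routing map to listener rule conditions on the ALB. As a sketch (hostnames and path patterns are hypothetical), this is the shape of the Conditions list a boto3 `elbv2_client.create_rule(**kwargs)` call would take to combine both routing types in one rule:

```python
# Sketch: ALB listener rule conditions combining host-based and path-based
# routing. Values are hypothetical; no AWS call is made here.
def alb_rule_conditions(host, path_pattern):
    return [
        {"Field": "host-header", "Values": [host]},           # host-based routing
        {"Field": "path-pattern", "Values": [path_pattern]},  # path-based routing
    ]

conditions = alb_rule_conditions("claims.example.com", "/policies/*")
print([c["Field"] for c in conditions])  # → ['host-header', 'path-pattern']
```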
Q48: You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. You were considering using an Application Load Balancer, but some of the requirements you have been given seem to point to a Classic Load Balancer. Which requirement would be better served by an Application Load Balancer?
A. Support for EC2-Classic
B. Path-based routing
C. Support for sticky sessions using application-generated cookies
D. Support for TCP and SSL listeners
Notes: Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:
Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
Q49: You have been tasked to review your company disaster recovery plan due to some new requirements. The driving factor is that the Recovery Time Objective has become very aggressive. Because of this, it has been decided to configure Multi-AZ deployments for the RDS MySQL databases. Unrelated to DR, it has been determined that some read traffic needs to be offloaded from the master database. What step can be taken to meet this requirement?
A. Convert to Aurora to allow the standby to serve read traffic.
B. Redirect some of the read traffic to the standby database.
C. Add DAX to the solution to alleviate excess read traffic.
D. Add read replicas to offload some read traffic.
Notes: Amazon RDS Read Replicas for MySQL and MariaDB now support Multi-AZ deployments. Combining Read Replicas with Multi-AZ enables you to build a resilient disaster recovery strategy and simplify your database engine upgrade process. Amazon RDS Read Replicas enable you to create one or more read-only copies of your database instance within the same AWS Region or in a different AWS Region. Updates made to the source database are then asynchronously copied to your Read Replicas. In addition to providing scalability for read-heavy workloads, Read Replicas can be promoted to become a standalone database instance when needed.
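As a minimal sketch (instance identifiers are hypothetical), offloading reads in Q49 comes down to one RDS API call, `create_db_instance_read_replica`; the helper below only builds the request parameters:

```python
# Sketch: parameters for a boto3 rds_client.create_db_instance_read_replica(**params)
# call that offloads read traffic from the Multi-AZ master. IDs are hypothetical.
def read_replica_params(replica_id, source_id):
    return {
        "DBInstanceIdentifier": replica_id,       # name of the new read-only copy
        "SourceDBInstanceIdentifier": source_id,  # the master being offloaded
    }

print(read_replica_params("orders-replica-1", "orders-master"))
```

Note the standby in a Multi-AZ deployment cannot serve reads; only read replicas can, which is why option D is correct.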
Q50: A gaming company is designing several new games which focus heavily on player-game interaction. The player makes a certain move and the game has to react very quickly to change the environment based on that move and to present the next decision for the player in real-time. A tool is needed to continuously collect data about player-game interactions and feed the data into the gaming platform in real-time. Which AWS service can best meet this need?
A. AWS Lambda
B. Kinesis Data Streams
C. Kinesis Data Analytics
D. AWS IoT
Notes: Kinesis Data Streams can be used to continuously collect data about player-game interactions and feed the data into your gaming platform. With Kinesis Data Streams, you can design a game that provides engaging and dynamic experiences based on players’ actions and behaviors.
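A sketch of how a game client might shape a boto3 `kinesis_client.put_record(**params)` request (stream name and payload are hypothetical). Partitioning by player ID keeps each player's moves ordered within a shard:

```python
import json

# Sketch: a Kinesis Data Streams record for one player-game interaction.
# Stream name and payload are hypothetical; no AWS call is made here.
def player_move_record(stream, player_id, move):
    return {
        "StreamName": stream,
        "PartitionKey": player_id,  # same player -> same shard -> ordered moves
        "Data": json.dumps({"player": player_id, "move": move}).encode(),
    }

rec = player_move_record("player-moves", "player-42", "turn-left")
print(rec["PartitionKey"])  # → player-42
```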
Q51: You are designing an architecture for a financial company which provides a day trading application to customers. After viewing the traffic patterns for the existing application you notice that traffic is fairly steady throughout the day, with the exception of large spikes at the opening of the market in the morning and at closing around 3 pm. Your architecture will include an Auto Scaling Group of EC2 instances. How can you configure the Auto Scaling Group to ensure that system performance meets the increased demands at opening and closing of the market?
A. Configure a Dynamic Scaling Policy to scale based on CPU Utilization.
B. Use a load balancer to ensure that the load is distributed evenly during high-traffic periods.
C. Configure your Auto Scaling Group to have a desired size which will be able to meet the demands of the high-traffic periods.
D. Use a predictive scaling policy on the Auto Scaling Group to meet opening and closing spikes.
Notes: Use a predictive scaling policy on the Auto Scaling Group to meet opening and closing spikes: Using data collected from your actual EC2 usage and further informed by billions of data points drawn from our own observations, we use well-trained Machine Learning models to predict your expected traffic (and EC2 usage) including daily and weekly patterns. The model needs at least one day of historical data to start making predictions; it is re-evaluated every 24 hours to create a forecast for the next 48 hours. What we can gather from the question is that the spikes at the beginning and end of day can potentially affect performance. Sure, we can use dynamic scaling, but remember, scaling up takes a little bit of time. We have the information to be proactive, use predictive scaling, and be ready for these spikes at opening and closing.
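As a sketch (parameter names as assumed from the EC2 Auto Scaling API; the group name, policy name, and target value are hypothetical), this is roughly the shape of a boto3 `autoscaling_client.put_scaling_policy(**params)` request enabling predictive scaling:

```python
# Sketch of a predictive scaling policy for the trading app's Auto Scaling
# Group. Names and the CPU target are hypothetical; no AWS call is made here.
def predictive_policy_params(asg_name, target_cpu=60.0):
    return {
        "AutoScalingGroupName": asg_name,
        "PolicyName": "market-open-close-forecast",  # hypothetical name
        "PolicyType": "PredictiveScaling",
        "PredictiveScalingConfiguration": {
            "MetricSpecifications": [{
                "TargetValue": target_cpu,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization",
                },
            }],
            "Mode": "ForecastAndScale",  # forecast AND act on the forecast
        },
    }

print(predictive_policy_params("day-trading-asg")["PolicyType"])
```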
Q52: A software gaming company has produced an online racing game which uses CloudFront for fast delivery to worldwide users. The game also uses DynamoDB for storing in-game and historical user data. The DynamoDB table has a preconfigured read and write capacity. Users have been reporting slow down issues, and an analysis has revealed that the DynamoDB table has begun throttling during peak traffic times. Which step can you take to improve game performance?
A. Add a load balancer in front of the web servers.
B. Add ElastiCache to cache frequently accessed data in memory.
C. Add an SQS Queue to queue requests which could be lost.
D. Make sure DynamoDB Auto Scaling is turned on.
Notes: Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic, without throttling. When the workload decreases, Application Auto Scaling decreases the throughput so that you don’t pay for unused provisioned capacity. Note that if you use the AWS Management Console to create a table or a global secondary index, DynamoDB auto scaling is enabled by default. You can modify your auto scaling settings at any time.
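DynamoDB auto scaling is driven by the Application Auto Scaling service. As a sketch (table name and capacity limits are hypothetical), the parameters below are the shape of a boto3 application-autoscaling `register_scalable_target(**params)` request for the table's read capacity:

```python
# Sketch: registering a DynamoDB table's read capacity as a scalable target
# with Application Auto Scaling. Table name and limits are hypothetical.
def dynamodb_scaling_target(table, min_rcu, max_rcu):
    return {
        "ServiceNamespace": "dynamodb",
        "ResourceId": f"table/{table}",
        "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
        "MinCapacity": min_rcu,
        "MaxCapacity": max_rcu,
    }

print(dynamodb_scaling_target("game-state", 5, 500)["ResourceId"])  # → table/game-state
```

A matching target for `WriteCapacityUnits` (and a scaling policy per target) would complete the setup.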
Q53: You have configured an Auto Scaling Group of EC2 instances. You have begun testing the scaling of the Auto Scaling Group, using a stress tool to drive up the CPU utilization metric and force scale-out actions, then removing the stress to force a scale-in. But you notice that these actions are only taking place in five-minute intervals. What is happening?
A. Auto Scaling Groups can only scale in intervals of five minutes or greater.
B. The Auto Scaling Group is following the default cooldown procedure.
C. A load balancer is managing the load and limiting the effectiveness of stressing the servers.
D. The stress tool is configured to run for five minutes.
Notes: The cooldown period helps you prevent your Auto Scaling group from launching or terminating additional instances before the effects of previous activities are visible. You can configure the length of time based on your instance startup time or other application needs. When you use simple scaling, after the Auto Scaling group scales using a simple scaling policy, it waits for a cooldown period to complete before any further scaling activities due to simple scaling policies can start. An adequate cooldown period helps to prevent the initiation of an additional scaling activity based on stale metrics. By default, all simple scaling policies use the default cooldown period associated with your Auto Scaling Group, but you can configure a different cooldown period for certain policies. Note that Amazon EC2 Auto Scaling honors cooldown periods when using simple scaling policies, but not when using other scaling policies or scheduled scaling. A default cooldown period automatically applies to any scaling activities for simple scaling policies, and you can optionally request to have it apply to your manual scaling activities. When you use the AWS Management Console to update an Auto Scaling Group, or when you use the AWS CLI or an AWS SDK to create or update an Auto Scaling Group, you can set the optional default cooldown parameter. If a value for the default cooldown period is not provided, its default value is 300 seconds.
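The five-minute interval in the question can be modeled in a few lines. This is a minimal simulation of the default cooldown described above, not AWS code: a new simple scaling activity may start only once 300 seconds have passed since the last one.

```python
# Minimal model of the default simple-scaling cooldown (300 seconds).
# Timestamps are plain seconds for illustration.
DEFAULT_COOLDOWN = 300

def can_scale(now, last_activity, cooldown=DEFAULT_COOLDOWN):
    """Return True if a new simple-scaling activity may start."""
    return last_activity is None or (now - last_activity) >= cooldown

assert can_scale(0, None)        # nothing has happened yet: scale freely
assert not can_scale(400, 200)   # only 200s elapsed: wait out the cooldown
assert can_scale(500, 200)       # 300s elapsed: the next action may fire
```

This is exactly why the stress-driven scale-outs and scale-ins in Q53 fire no more often than every five minutes.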
Q54: A team of architects is designing a new AWS environment for a company which wants to migrate to the Cloud. The architects are considering the use of EC2 instances with instance store volumes. The architects realize that the data on the instance store volumes are ephemeral. Which action will not cause the data to be deleted on an instance store volume?
B. The underlying disk drive fails.
C. Hardware disk failure.
D. Instance is stopped
Notes: Some Amazon Elastic Compute Cloud (Amazon EC2) instance types come with a form of directly attached, block-device storage known as the instance store. The instance store is ideal for temporary storage, because the data stored in instance store volumes is not persistent through instance stops, terminations, or hardware failures.
Q55: You work for an advertising company that has a real-time bidding application. You are also using CloudFront on the front end to accommodate a worldwide user base. Your users begin complaining about response times and pauses in real-time bidding. Which service can be used to reduce DynamoDB response times by an order of magnitude (milliseconds to microseconds)?
B. DynamoDB Auto Scaling
D. CloudFront Edge Caches
Notes: Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second. While DynamoDB offers consistent single-digit millisecond latency, DynamoDB with DAX takes performance to the next level with response times in microseconds for millions of requests per second for read-heavy workloads. With DAX, your applications remain fast and responsive, even when a popular event or news story drives unprecedented request volumes your way. No tuning required.
Q56: A travel company has deployed a website which serves travel updates to users all over the world. The traffic this database serves is very read heavy and can have some latency issues at certain times of the year. What can you do to alleviate these latency issues?
Notes: Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server as well as Amazon Aurora.
Q57: A large financial institution is gradually moving their infrastructure and applications to AWS. The company has data needs that will utilize all of RDS, DynamoDB, Redshift, and ElastiCache. Which description best describes Amazon Redshift?
A. Key-value and document database that delivers single-digit millisecond performance at any scale.
B. Cloud-based relational database.
C. Can be used to significantly improve latency and throughput for many read-heavy application workloads.
D. Near real-time complex querying on massive data sets.
Notes: Amazon Redshift is a fast, fully-managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds. With Redshift, you can start small for just $0.25 per hour with no commitments and scale out to petabytes of data for $1,000 per terabyte per year, less than a tenth the cost of traditional on-premises solutions. Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to run SQL queries directly against exabytes of unstructured data in Amazon S3 data lakes. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Amazon Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, Sequence, Text, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data retrieved, so queries against Amazon S3 run fast, regardless of data set size.
Q58: You are designing an architecture which will house an Auto Scaling Group of EC2 instances. The application hosted on the instances is expected to be an extremely popular social networking site. Forecasts for traffic to this site expect very high traffic and you will need a load balancer to handle tens of millions of requests per second while maintaining high throughput at ultra low latency. You need to select the type of load balancer to front your Auto Scaling Group to meet this high traffic requirement. Which load balancer will you select?
A. You will need an Application Load Balancer to meet this requirement.
B. All the AWS load balancers meet the requirement and perform the same.
C. You will select a Network Load Balancer to meet this requirement.
D. You will need a Classic Load Balancer to meet this requirement.
Notes: Network Load Balancer Overview: A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration. When you enable an Availability Zone for the load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. By default, each load balancer node distributes traffic across the registered targets in its Availability Zone only. If you enable cross-zone load balancing, each load balancer node distributes traffic across the registered targets in all enabled Availability Zones. It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part. The Network Load Balancer is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:
Static IP Addresses – Each Network Load Balancer provides a single IP address for each Availability Zone in its purview. If you have targets in us-west-2a and other targets in us-west-2c, NLB will create and manage two IP addresses (one per AZ); connections to that IP address will spread traffic across the instances in all the VPC subnets in the AZ. You can also specify an existing Elastic IP for each AZ for even greater control. With full control over your IP addresses, a Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.
Zonality – The IP-per-AZ feature reduces latency with improved performance, improves availability through isolation and fault tolerance, and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single AZ while still providing automatic failover should those targets become unavailable.
Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.
Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.
Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.
Q59: An organization of about 100 employees has performed the initial setup of users in IAM. All users except administrators have the same basic privileges. But now it has been determined that 50 employees will have extra restrictions on EC2. They will be unable to launch new instances or alter the state of existing instances. What will be the quickest way to implement these restrictions?
A. Create an IAM Role for the restrictions. Attach it to the EC2 instances.
B. Create the appropriate policy. Place the restricted users in the new policy.
C. Create the appropriate policy. With only 50 users, attach the policy to each user.
D. Create the appropriate policy. Create a new group for the restricted users. Place the restricted users in the new group and attach the policy to the group.
Notes: You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, ACLs, and session policies. IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, if a policy allows the GetUser action, then a user with that policy can get user information from the AWS Management Console, the AWS CLI, or the AWS API. When you create an IAM user, you can choose to allow console or programmatic access. If console access is allowed, the IAM user can sign in to the console using a user name and password. Or if programmatic access is allowed, the user can use access keys to work with the CLI or API.
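A sketch of the identity-based policy the new group could carry. The action list is chosen to match "unable to launch new instances or alter the state of existing instances"; the `Resource` scope is illustrative and would normally be narrowed:

```python
import json

# Sketch: an IAM policy document denying EC2 launch and state-change actions.
# Attach it to a group and place the 50 restricted users in that group.
restricted_ec2_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": [
            "ec2:RunInstances",       # no launching new instances
            "ec2:StartInstances",     # no altering instance state...
            "ec2:StopInstances",
            "ec2:RebootInstances",
            "ec2:TerminateInstances",
        ],
        "Resource": "*",  # illustrative; scope down in practice
    }],
}

print(json.dumps(restricted_ec2_policy, indent=2)[:40])
```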
Q60: You are managing S3 buckets in your organization. This management of S3 extends to Amazon Glacier. For auditing purposes you would like to be informed if an object is restored to S3 from Glacier. What is the most efficient way you can do this?
A. Create a CloudWatch event for uploads to S3
B. Create an SNS notification for any upload to S3.
C. Configure S3 notifications for restore operations from Glacier.
D. Create a Lambda function which is triggered by restoration of object from Glacier to S3.
Notes: The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. An S3 notification can be set up to notify you when objects are restored from Glacier to S3.
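As a sketch (bucket and SNS topic ARN are hypothetical), this is the shape of the notification configuration that tells S3 to publish restore events: `s3:ObjectRestore:Post` fires when a restore is initiated, and `s3:ObjectRestore:Completed` when it finishes.

```python
# Sketch: an S3 bucket notification configuration for Glacier restore events.
# Topic ARN is hypothetical; no AWS call is made here.
def restore_notification(topic_arn):
    return {
        "TopicConfigurations": [{
            "TopicArn": topic_arn,
            "Events": ["s3:ObjectRestore:Post", "s3:ObjectRestore:Completed"],
        }]
    }

cfg = restore_notification("arn:aws:sns:us-east-1:123456789012:restores")
print(cfg["TopicConfigurations"][0]["Events"])
```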
Q61: Your company has gotten back results from an audit. One of the mandates from the audit is that your application, which is hosted on EC2, must encrypt the data before writing this data to storage. Which service could you use to meet this requirement?
A. AWS Cloud HSM
B. Security Token Service
C. EBS encryption
D. AWS KMS
Notes: You can configure your application to use the KMS API to encrypt all data before saving it to disk. The AWS documentation details how to choose an encryption service for various use cases.
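A sketch of the encrypt-before-write pattern: the application asks KMS to encrypt the data under a key before the ciphertext is written to storage. The key alias is hypothetical, and only the request shape for a boto3 `kms_client.encrypt(**params)` call is built here:

```python
# Sketch: parameters for a KMS Encrypt request made by the application before
# writing data to storage. Key alias is hypothetical; no AWS call is made.
def kms_encrypt_params(plaintext: bytes, key_id="alias/app-data-key"):
    return {
        "KeyId": key_id,
        "Plaintext": plaintext,  # KMS would return a CiphertextBlob to store instead
    }

print(kms_encrypt_params(b"policy record")["KeyId"])  # → alias/app-data-key
```

For payloads larger than the KMS Encrypt limit (4 KB), the usual pattern is envelope encryption: request a data key with `GenerateDataKey` and encrypt locally with it.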
Q62: Recent worldwide events have dictated that you perform your duties as a Solutions Architect from home. You need to be able to manage several EC2 instances while working from home and have been testing the ability to ssh into these instances. One instance in particular has been a problem and you cannot ssh into this instance. What should you check first to troubleshoot this issue?
A. Make sure that the security group for the instance has ingress on port 80 from your home IP address.
B. Make sure that your VPC has a connected Virtual Private Gateway.
C. Make sure that the security group for the instance has ingress on port 22 from your home IP address.
D. Make sure that the Security Group for the instance has ingress on port 443 from your home IP address.
Notes: The rules of a security group control the inbound traffic that’s allowed to reach the instances that are associated with the security group. The rules also control the outbound traffic that’s allowed to leave them. The following are the characteristics of security group rules:
By default, security groups allow all outbound traffic.
Security group rules are always permissive; you can’t create rules that deny access.
Security groups are stateful. If you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. For VPC security groups, this also means that responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules. For more information, see Connection tracking.
You can add and remove rules at any time. Your changes are automatically applied to the instances that are associated with the security group. The effect of some rule changes can depend on how the traffic is tracked. For more information, see Connection tracking. When you associate multiple security groups with an instance, the rules from each security group are effectively aggregated to create one set of rules. Amazon EC2 uses this set of rules to determine whether to allow access. You can assign multiple security groups to an instance. Therefore, an instance can have hundreds of rules that apply. This might cause problems when you access the instance. We recommend that you condense your rules as much as possible.
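The aggregation behavior described above can be shown with a tiny illustration, not AWS code: when multiple security groups are attached to one instance, the effective rule set is the union of all their (always permissive) rules.

```python
# Tiny model: the effective rules of an instance with several security groups
# are the union of each group's rules. Rules are (protocol, port, cidr) tuples;
# the CIDRs are hypothetical.
def effective_rules(*groups):
    merged = set()
    for g in groups:
        merged |= set(g)
    return merged

ssh_admin = {("tcp", 22, "203.0.113.0/24")}                       # office SSH
web = {("tcp", 80, "0.0.0.0/0"), ("tcp", 443, "0.0.0.0/0")}       # public web
print(len(effective_rules(ssh_admin, web)))  # → 3
```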
Q63: A consultant is hired by a small company to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within the VPC. The initial instances will be placed in a public subnet. The consultant begins to create security groups. What is true of the default security group?
A. You can delete this group, however, you can’t change the group’s rules.
B. You can delete this group or you can change the group’s rules.
C. You can’t delete this group, nor can you change the group’s rules.
D. You can’t delete this group, however, you can change the group’s rules.
Notes: Your VPC includes a default security group. You can’t delete this group, however, you can change the group’s rules. The procedure is the same as modifying any other security group. For more information, see Adding, removing, and updating rules.
Notes: Network ACLs are stateless, which means that responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
The following are the basic characteristics of security groups for your VPC:
There are quotas on the number of security groups that you can create per VPC, the number of rules that you can add to each security group, and the number of security groups that you can associate with a network interface. For more information, see Amazon VPC quotas.
You can specify allow rules, but not deny rules.
You can specify separate rules for inbound and outbound traffic.
When you create a security group, it has no inbound rules. Therefore, no inbound traffic originating from another host to your instance is allowed until you add inbound rules to the security group.
By default, a security group includes an outbound rule that allows all outbound traffic. You can remove the rule and add outbound rules that allow specific outbound traffic only. If your security group has no outbound rules, no outbound traffic originating from your instance is allowed.
Security groups are stateful. If you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. Responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.
Q64: Your company needs to deploy an application in the company AWS account. The application will reside on EC2 instances in an Auto Scaling Group fronted by an Application Load Balancer. The company has been using Elastic Beanstalk to deploy the application due to limited AWS experience within the organization. The application now needs upgrades and a small team of subcontractors have been hired to perform these upgrades. What can be used to provide the subcontractors with short-lived access tokens that act as temporary security credentials to the company AWS account?
A. IAM Roles
B. AWS STS
C. IAM user accounts
D. AWS SSO
Notes: AWS Security Token Service (AWS STS) is the service that you can use to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use, with the following differences: Temporary security credentials are short-term, as the name implies. They can be configured to last for anywhere from a few minutes to several hours. After the credentials expire, AWS no longer recognizes them or allows any kind of access from API requests made with them. Temporary security credentials are not stored with the user but are generated dynamically and provided to the user when requested. When (or even before) the temporary security credentials expire, the user can request new credentials, as long as the user requesting them still has permissions to do so.
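As a sketch (role ARN and session name are hypothetical), this is the shape of a boto3 `sts_client.assume_role(**params)` request that would hand a subcontractor short-lived credentials; `DurationSeconds` bounds the token lifetime:

```python
# Sketch: STS AssumeRole parameters for a subcontractor's temporary session.
# Role ARN and session naming are hypothetical; no AWS call is made here.
def contractor_session_params(role_arn, contractor, duration=3600):
    return {
        "RoleArn": role_arn,
        "RoleSessionName": f"upgrade-{contractor}",
        "DurationSeconds": duration,  # credentials expire after this many seconds
    }

p = contractor_session_params("arn:aws:iam::123456789012:role/DeployUpgrades", "alice")
print(p["RoleSessionName"])  # → upgrade-alice
```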
Q65: The company you work for has reshuffled teams a bit and you’ve been moved from the AWS IAM team to the AWS Network team. One of your first assignments is to review the subnets in the main VPCs. What are two key concepts regarding subnets?
A. A subnet spans all the Availability Zones in a Region.
B. Private subnets can only hold database.
C. Each subnet maps to a single Availability Zone.
D. Every subnet you create is associated with the main route table for the VPC.
E. Each subnet is associated with one security group.
Notes: A VPC spans all the Availability Zones in the Region. After creating a VPC, you can add one or more subnets in each Availability Zone. When you create a subnet, you specify the CIDR block for the subnet, which is a subset of the VPC CIDR block, and we assign a unique ID to each subnet. Each subnet must reside entirely within one Availability Zone and cannot span zones. Availability Zones are distinct locations that are engineered to be isolated from failures in other Availability Zones. By launching instances in separate Availability Zones, you can protect your applications from the failure of a single location. You can optionally add subnets in a Local Zone, which is an AWS infrastructure deployment that places compute, storage, database, and other select services closer to your end users. A Local Zone enables your end users to run applications that require single-digit millisecond latencies. For information about the Regions that support Local Zones, see Available Regions in the Amazon EC2 User Guide for Linux Instances.
Q67: You are reviewing Change Control requests, and you note that there is a change designed to reduce wasted CPU cycles by increasing the value of your Amazon SQS “VisibilityTimeout” attribute. What does this mean?
A. While processing a message, a consumer instance can amend the message visibility counter by a fixed amount.
B. When a consumer instance retrieves a message, that message will be hidden from other consumer instances for a fixed period.
C. When the consumer instance polls for new work the SQS service will allow it to wait a certain time for a message to be available before closing the connection.
D. While processing a message, a consumer instance can reset the message visibility by restarting the preset timeout counter.
E. When the consumer instance polls for new work, the consumer instance will wait a certain time until it has a full workload before closing the connection.
F. When a new message is added to the SQS queue, it will be hidden from consumer instances for a fixed period.
Notes: Poor timing of SQS processes can significantly impact the cost effectiveness of the solution. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.
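The visibility timeout can be captured in a minimal model, not AWS code: receiving a message hides it from other consumers until the timeout elapses; if the message is not deleted in time, it becomes visible (and processable) again.

```python
# Minimal model of the SQS visibility timeout. Real SQS defaults to 30 seconds
# (min 0, max 12 hours); timestamps here are plain seconds for illustration.
class Queue:
    def __init__(self, visibility_timeout=30):
        self.visibility_timeout = visibility_timeout
        self.hidden_until = {}  # message -> time it becomes visible again

    def receive(self, msg, now):
        if now < self.hidden_until.get(msg, 0):
            return None  # still hidden from all consumers
        self.hidden_until[msg] = now + self.visibility_timeout
        return msg

q = Queue(visibility_timeout=30)
assert q.receive("job-1", now=0) == "job-1"   # first consumer takes it
assert q.receive("job-1", now=10) is None     # hidden during processing
assert q.receive("job-1", now=31) == "job-1"  # not deleted in time: redelivered
```

Raising the timeout (as in the change request) gives a consumer longer to finish before the message reappears, which is what reduces wasted duplicate processing.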
Q68: You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP. However, when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect?
A. After a few minutes.
C. Straight away, but to the new instances only.
D. Straight away to the new instances, but old instances must be stopped and restarted before the new rules apply.
Q69: True or False: Amazon SQS keeps track of all tasks and events in an application.
Notes: Amazon SWF (not Amazon SQS) keeps track of all tasks and events in an application. Amazon SQS requires you to implement your own application-level tracking, especially if your application uses multiple queues. Amazon SWF FAQs.
Q70: Your Security Manager has hired a security contractor to audit your network and firewall configurations. The consultant doesn’t have access to an AWS account. You need to provide the required access for the auditing tasks, and answer a question about login details for the official AWS firewall appliance. Which of the following might you do? Choose 2
A. Create an IAM User with a policy that can Read Security Group and NACL settings.
B. Explain that AWS implements network security differently and that there is no such thing as an official AWS firewall appliance. Security Groups and NACLs are used instead.
C. Create an IAM Role with a policy that can Read Security Group and NACL settings.
D. Explain that AWS is a cloud service and that AWS manages the Network appliances.
E. Create an IAM Role with a policy that can Read Security Group and Route settings.
Answer: A and B
Notes: Create an IAM user for the auditor and explain that the firewall functionality is implemented as stateful Security Groups, and stateless subnet NACLs. AWS has removed the Firewall appliance from the hub of the network and implemented the firewall functionality as stateful Security Groups, and stateless subnet NACLs. This is not a new concept in networking, but rarely implemented at this scale.
Q71: How many internet gateways can I attach to my custom VPC?
A. 5 B. 3 C. 2 D. 1
Answer: D. Notes: A VPC can have only one internet gateway attached to it at a time.
Q72: How long can a message be retained in an SQS Queue?
Notes: The SQS message retention period is configurable from 60 seconds to 14 days; the default is 4 days.
Q73: Although your application customarily runs at 30% usage, you have identified a recurring usage spike (>90%) between 8pm and midnight daily. What is the most cost-effective way to scale your application to meet this increased need?
A. Manually deploy Reactive Event-based Scaling each night at 7:45.
B. Deploy additional EC2 instances to meet the demand.
C. Use scheduled scaling to boost your capacity at a fixed interval.
D. Increase the size of the Resource Group to meet demand.
Notes: Scheduled scaling allows you to set your own scaling schedule. For example, let’s say that every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling actions based on the predictable traffic patterns of your web application. Scaling actions are performed automatically as a function of time and date.
Reference: Scheduled scaling for Amazon EC2 Auto Scaling.
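The scheduled-scaling idea behind the answer can be sketched as a simple capacity-by-time function; the commented CLI calls show how the same recurring schedule would be expressed as Auto Scaling scheduled actions (the group name and capacities are illustrative):

```python
from datetime import datetime

# Desired capacity by hour of day for the Q73 pattern: a small baseline,
# boosted during the recurring 20:00-24:00 usage spike.
def desired_capacity(ts: datetime, baseline=2, boosted=6):
    return boosted if 20 <= ts.hour < 24 else baseline

# In Amazon EC2 Auto Scaling this schedule is expressed declaratively as two
# scheduled actions with cron recurrences, e.g. (AWS CLI, illustrative names):
#   aws autoscaling put-scheduled-update-group-action \
#     --auto-scaling-group-name my-asg --scheduled-action-name scale-out \
#     --recurrence "45 19 * * *" --desired-capacity 6
#   aws autoscaling put-scheduled-update-group-action \
#     --auto-scaling-group-name my-asg --scheduled-action-name scale-in \
#     --recurrence "0 0 * * *" --desired-capacity 2

peak = desired_capacity(datetime(2024, 1, 1, 21, 30))      # during the spike
off_peak = desired_capacity(datetime(2024, 1, 1, 9, 0))    # normal load
```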
Q74: To save money, you quickly stored some data in one of the attached volumes of an EC2 instance and stopped it for the weekend. When you returned on Monday and restarted your instance, you discovered that your data was gone. Why might that be?
A. The EBS volume was not large enough to store your data.
B. The instance failed to connect to the root volume on Monday.
C. The elastic block-level storage service failed over the weekend.
D. The volume was ephemeral, block-level storage. Data on an instance store volume is lost if an instance is stopped.
Notes: The EC2 instance had an instance store volume attached to it. Instance store volumes are ephemeral, meaning that data in attached instance store volumes is lost if the instance stops.
Reference: Instance store lifetime
Q75: Select all the true statements on S3 URL styles: Choose 2
A. Virtual hosted-style URLs will eventually be deprecated in favor of Path-Style URLs for S3 bucket access.
B. Virtual-host-style URLs (such as: https://bucket-name.s3.Region.amazonaws.com/key name) are supported by AWS.
C. Path-Style URLs (such as https://s3.Region.amazonaws.com/bucket-name/key name) are supported by AWS.
D. DNS compliant names are NOT recommended for the URLs to access S3.
Answer: B and C
Notes: Virtual-host-style URLs and Path-Style URLs (soon to be retired) are supported by AWS. DNS compliant names are recommended for the URLs to access S3.
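The two URL styles can be constructed mechanically from the bucket, Region, and key (a small sketch; virtual-hosted-style addressing assumes a DNS-compliant bucket name):

```python
def s3_urls(bucket: str, region: str, key: str):
    """Build both S3 URL styles for the same object."""
    # Virtual-hosted-style: the bucket name is part of the hostname,
    # which is why DNS-compliant bucket names are recommended.
    virtual_hosted = f"https://{bucket}.s3.{region}.amazonaws.com/{key}"
    # Path-style: the bucket name is the first path segment (being retired).
    path_style = f"https://s3.{region}.amazonaws.com/{bucket}/{key}"
    return virtual_hosted, path_style

vh, ps = s3_urls("my-bucket", "us-east-1", "report.csv")
```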
Q76: With EBS, I can ____. Choose 2
A. Create an encrypted snapshot from an unencrypted snapshot by creating an encrypted copy of the unencrypted snapshot.
B. Create an unencrypted volume from an encrypted snapshot.
C. Create an encrypted volume from a snapshot of another encrypted volume.
D. Encrypt an existing volume.
Answer: A and C Notes: Although there is no direct way to encrypt an existing unencrypted volume or snapshot, you can encrypt them by creating an encrypted copy, either as a volume or as a snapshot. You can also create an encrypted volume from a snapshot of another encrypted volume. Reference: Encrypting unencrypted resources (https://docs.aws.amazon.com/ebs)
Q77: You have been engaged by a company to design and lead a migration to an AWS environment. The team is concerned about the capabilities of the new environment, especially when it comes to high availability and cost-effectiveness. The design calls for about 20 instances (c3.2xlarge) pulling jobs/messages from SQS. Network traffic per instance is estimated to be around 500 Mbps at the beginning and end of each job. Which configuration should you plan on deploying?
A. Use a 2nd Network Interface to separate the SQS traffic from the storage traffic.
B. Choose a different instance type that better matches the traffic demand.
C. Spread the Instances over multiple AZs to minimize the traffic concentration and maximize fault-tolerance.
D. Deploy as a Cluster Placement Group as the aggregated burst traffic could be around 10 Gbps.
Answer: C Notes: With a multi-AZ configuration, an additional reliability point is scored as the entire Availability Zone itself is ruled out as a single point of failure. This ensures high availability. Wherever possible, use simple solutions such as spreading the load out rather than expensive high tech solutions. References:AZ
Q78: You are a solutions architect working for a cosmetics company. Your company has a busy Magento online store that consists of a two-tier architecture. The web servers are on EC2 instances deployed across multiple AZs, and the database is on a Multi-AZ RDS MySQL database instance. Your store is having a Black Friday sale in five days, and having reviewed the performance for the last sale you expect the site to start running very slowly during the peak load. You investigate and you determine that the database was struggling to keep up with the number of reads that the store was generating. Which solution would you implement to improve the application read performance the most?
A. Deploy an Amazon ElastiCache cluster with nodes running in each AZ.
B. Upgrade your RDS MySQL instance to use provisioned IOPS.
C. Add an RDS Read Replica in each AZ.
D. Upgrade the RDS MySQL instance to a larger type.
Answer: C Notes: RDS Replicas can substantially increase the Read performance of your database. Multiple read replicas can be made to increase performance further. It will also require the least modifications to any code, and is generally possible to be implemented in the timeframe specified References:RDS
Q79: Which native AWS service will act as a file system mounted on an S3 bucket?
A. Amazon Elastic Block Store
B. File Gateway
C. Amazon S3
D. Amazon Elastic File System
Answer: B Notes: A file gateway supports a file interface into Amazon Simple Storage Service (Amazon S3) and combines a service and a virtual software appliance. By using this combination, you can store and retrieve objects in Amazon S3 using industry-standard file protocols such as Network File System (NFS) and Server Message Block (SMB). The software appliance, or gateway, is deployed into your on-premises environment as a virtual machine (VM) running on VMware ESXi, Microsoft Hyper-V, or Linux Kernel-based Virtual Machine (KVM) hypervisor. The gateway provides access to objects in S3 as files or file share mount points. You can manage your S3 data using lifecycle policies, cross-region replication, and versioning. You can think of a file gateway as a file system mount on S3.
Reference: What is AWS Storage Gateway? .
Q80: You have been evaluating the NACLs in your company. Most of the NACLs are configured the same: 100 All Traffic Allow, 200 All Traffic Deny, ‘*’ All Traffic Deny. If a request comes in, how will it be evaluated?
A. The default will deny traffic.
B. The request will be allowed.
C. The highest numbered rule will be used, a deny.
D. All rules will be evaluated and the end result will be Deny.
Answer: B. Notes: Rules are evaluated starting with the lowest numbered rule. As soon as a rule matches traffic, it’s applied immediately regardless of any higher-numbered rule that may contradict it, so rule 100 allows the request. The following are the basic things that you need to know about network ACLs:
Your VPC automatically comes with a modifiable default network ACL. By default, it allows all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic.
You can create a custom network ACL and associate it with a subnet. By default, each custom network ACL denies all inbound and outbound traffic until you add rules.
Each subnet in your VPC must be associated with a network ACL. If you don’t explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL. You can associate a network ACL with multiple subnets; however, a subnet can be associated with only one network ACL at a time. When you associate a network ACL with a subnet, the previous association is removed.
A network ACL contains a numbered list of rules. We evaluate the rules in order, starting with the lowest-numbered rule, to determine whether traffic is allowed in or out of any subnet associated with the network ACL. The highest number that you can use for a rule is 32766. We recommend that you start by creating rules in increments (for example, increments of 10 or 100) so that you can insert new rules where you need to later on.
A network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic. Network ACLs are stateless, which means that responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
Q81: You have been given an assignment to configure Network ACLs in your VPC. Before configuring the NACLs, you need to understand how the NACLs are evaluated. How are NACL rules evaluated?
A. NACL rules are evaluated by rule number from lowest to highest and executed immediately when a matching rule is found.
B. NACL rules are evaluated by rule number from highest to lowest, and executed immediately when a matching rule is found.
C. All NACL rules that you configure are evaluated before traffic is passed through.
D. NACL rules are evaluated by rule number from highest to lowest, and all are evaluated before traffic is passed through.
Notes: NACL rules are evaluated by rule number from lowest to highest and executed immediately when a matching rule is found.
You can add or remove rules from the default network ACL, or create additional network ACLs for your VPC. When you add or remove rules from a network ACL, the changes are automatically applied to the subnets that it’s associated with. A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC. The following are the parts of a network ACL rule:
Rule number. Rules are evaluated starting with the lowest-numbered rule. As soon as a rule matches traffic, it’s applied regardless of any higher-numbered rule that might contradict it.
Type. The type of traffic, for example, SSH. You can also specify all traffic or a custom range.
Protocol. You can specify any protocol that has a standard protocol number. For more information, see Protocol Numbers. If you specify ICMP as the protocol, you can specify any or all of the ICMP types and codes.
Port range. The listening port or port range for the traffic. For example, 80 for HTTP traffic.
Source. [Inbound rules only] The source of the traffic (CIDR range).
Destination. [Outbound rules only] The destination for the traffic (CIDR range).
Allow/Deny. Whether to allow or deny the specified traffic.
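The lowest-to-highest, first-match-wins evaluation described above can be sketched as a toy simulation (illustrative code, not the VPC API):

```python
def evaluate_nacl(rules, packet_matches):
    """Evaluate NACL rules lowest-numbered first; the first match wins.

    rules: list of (rule_number, action); '*' is the implicit default rule
           and always sorts last.
    packet_matches: set of rule numbers whose type/protocol/port/CIDR filter
                    matches this packet.
    """
    ordered = sorted(rules, key=lambda r: float("inf") if r[0] == "*" else r[0])
    for number, action in ordered:
        if number == "*" or number in packet_matches:
            return action  # applied immediately; later rules are ignored
    return "DENY"

# The Q80 rule set: all three rules match all traffic.
rules = [(100, "ALLOW"), (200, "DENY"), ("*", "DENY")]
result = evaluate_nacl(rules, packet_matches={100, 200})
# Rule 100 matches first, so the request is allowed despite the later denies.
```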
Q82: Your company has gone through an audit with a focus on data storage. You are currently storing historical data in Amazon Glacier. One of the results of the audit is that a portion of the infrequently-accessed historical data must be able to be accessed immediately upon request. Where can you store this data to meet this requirement?
A. S3 Standard
B. Leave infrequently-accessed data in Glacier.
C. S3 Standard-IA
D. Store the data in EBS
Notes: S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low-per-GB storage price and per GB retrieval fee. This combination of low cost and high performance make S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files. S3 Storage Classes can be configured at the object level and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.
Q84: After an IT Steering Committee meeting, you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies, such as VPN and Direct Connect, and based on the requirements you have decided to configure a VPN connection. What features and advantages can a VPN connection provide?
A VPN provides a connection between an on-premises network and a VPC, using a secure and private connection with IPsec and TLS.
A VPC/VPN Connection utilizes IPSec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN Connections can be configured in minutes and are a good solution if you have an immediate need, have low-to-modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity.
AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources or your on-premises network. With AWS Client VPN, you configure an endpoint to which your users can connect to establish a secure TLS VPN session. This enables clients to access resources in AWS or on-premises from any location using an OpenVPN-based VPN client.
Q86: Your company has decided to go to a hybrid cloud environment. Part of this effort will be to move a large data warehouse to the cloud. The warehouse is 50TB, and will take over a month to migrate given the current bandwidth available. What is the best option available to perform this migration considering both cost and performance aspects?
The AWS Snowball Edge is a type of Snowball device with on-board storage and compute power for select AWS capabilities. Snowball Edge can undertake local processing and edge-computing workloads in addition to transferring data between your local environment and the AWS Cloud.
Each Snowball Edge device can transport data at speeds faster than the internet. This transport is done by shipping the data in the appliances through a regional carrier. The appliances are rugged shipping containers, complete with E Ink shipping labels. The AWS Snowball Edge device differs from the standard Snowball because it can bring the power of the AWS Cloud to your on-premises location, with local storage and compute functionality.
Snowball Edge devices have three options for device configurations: storage optimized, compute optimized, and with GPU. When this guide refers to Snowball Edge devices, it’s referring to all options of the device. Whenever specific information applies to only one or more optional configurations of devices, like how the Snowball Edge with GPU has an on-board GPU, it will be called out. For more information, see Snowball Edge Device Options.
Q87: You have been assigned the review of the security in your company AWS cloud environment. Your final deliverable will be a report detailing potential security issues. One of the first things that you need to describe is the responsibilities of the company under the shared responsibility module. Which measure is the customer’s responsibility?
EC2 instance OS Patching
Notes:Security and compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The customer assumes responsibility for, and management of, the guest operating system (including updates and security patches), other associated application software, and the configuration of the AWS provided security group firewall. Customers should carefully consider the services they choose, as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations. The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment. As shown in the chart below, this differentiation of responsibility is commonly referred to as Security “of” the Cloud versus Security “in” the Cloud.
Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.
Q88: You work for a busy real estate company, and you need to protect your data stored on S3 from accidental deletion. Which of the following actions might you take to achieve this? Choose 2
A. Create a bucket policy that prohibits anyone from deleting things from the bucket. B. Enable S3 – Infrequent Access Storage (S3 – IA). C. Enable versioning on the bucket. If a file is accidentally deleted, delete the delete marker. D. Configure MFA-protected API access. E. Use pre-signed URL’s so that users will not be able to accidentally delete data.
Answer: C and D Notes: The best answers are to allow versioning on the bucket and to protect the objects by configuring MFA-protected API access. Reference:https://docs.aws.amazon.com/s3
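The versioning behaviour behind answer C — a delete only adds a delete marker, and removing the marker restores the object — can be sketched with a toy model (pure Python, not the S3 API):

```python
class VersionedBucket:
    """Toy model of S3 versioning: a delete inserts a marker, not an erase."""
    DELETE_MARKER = object()

    def __init__(self):
        self.versions = {}  # key -> stack of object versions / delete markers

    def put(self, key, body):
        self.versions.setdefault(key, []).append(body)

    def delete(self, key):
        # In a versioned bucket, DELETE adds a marker on top of the stack.
        self.versions.setdefault(key, []).append(self.DELETE_MARKER)

    def get(self, key):
        stack = self.versions.get(key, [])
        if not stack or stack[-1] is self.DELETE_MARKER:
            return None  # the object appears deleted
        return stack[-1]

    def undelete(self, key):
        # Deleting the delete marker brings the previous version back.
        stack = self.versions.get(key, [])
        if stack and stack[-1] is self.DELETE_MARKER:
            stack.pop()

b = VersionedBucket()
b.put("listing.pdf", "v1")
b.delete("listing.pdf")      # accidental deletion adds a marker
gone = b.get("listing.pdf")  # object now appears deleted
b.undelete("listing.pdf")    # remove the delete marker
restored = b.get("listing.pdf")
```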
Q89: AWS intends to shut down your spot instance; which of these scenarios is possible? Choose 3
A. AWS sends a notification of termination and you receive it 120 seconds before the intended forced shutdown.
B. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, and you delay it by sending a ‘Delay300’ instruction before the forced shutdown takes effect.
C. AWS sends a notification of termination and you receive it 120 seconds before the intended forced shutdown, but AWS does not action the shutdown.
D. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, but you block the shutdown because you used ‘Termination Protection’ when you initialized the instance.
E. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, but the defined duration period (also known as Spot blocks) hasn’t ended yet.
F. AWS sends a notification of termination, but you do not receive it within the 120 seconds and the instance is shutdown.
Answer: A, E and F Notes: When Amazon EC2 is going to interrupt your Spot Instance, it emits an event two minutes prior to the actual interruption (except for hibernation, which gets the interruption notice, but not two minutes in advance, because hibernation begins immediately).
In rare situations, Spot blocks may be interrupted due to Amazon EC2 capacity needs. In these cases, AWS provides a two-minute warning before the instance is terminated, and customers are not charged for the terminated instances even if they have used them.
It is possible that your Spot Instance is terminated before the warning can be made available. Reference: https://docs.aws.amazon.com/ec2
Q90: What does the “EAR” in a policy document stand for?
A. Effects, APIs, Roles B. Effect, Action, Resource C. Ewoks, Always, Romanticize D. Every, Action, Reasonable
Answer: B. Notes: The elements included in a policy document that make up the “EAR” are effect, action, and resource.
Reference: Policies and Permissions in IAM
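A minimal policy document showing the Effect/Action/Resource triad; the actions shown are illustrative read-only EC2 calls in the spirit of the auditor scenario from Q70:

```python
import json

# Minimal IAM policy document illustrating the "EAR" elements:
# Effect (Allow/Deny), Action (the API calls), Resource (what they apply to).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:DescribeSecurityGroups", "ec2:DescribeNetworkAcls"],
            "Resource": "*",
        }
    ],
}

# Collect the element names used across all statements.
ear = {element for stmt in policy["Statement"] for element in stmt}
document = json.dumps(policy, indent=2)  # what you would paste into IAM
```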
Q91: _____ provides real-time streaming of data.
A. Kinesis Data Analytics
B. Kinesis Data Firehose
C. Kinesis Data Streams
Answer: C. Notes: Kinesis Data Streams provides real-time streaming of data. Kinesis Data Firehose delivers streaming data to destinations such as Amazon S3 and Amazon Redshift in near real time, and Kinesis Data Analytics runs queries over streaming data.
Q95 [SAA-C03]: A company runs a public-facing three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances for the application tier running in private subnets need to download software patches from the internet. However, the EC2 instances cannot be directly accessible from the internet. Which actions should be taken to allow the EC2 instances to download the needed patches? (Select TWO.)
A. Configure a NAT gateway in a public subnet. B. Define a custom route table with a route to the NAT gateway for internet traffic and associate it with the private subnets for the application tier. C. Assign Elastic IP addresses to the EC2 instances. D. Define a custom route table with a route to the internet gateway for internet traffic and associate it with the private subnets for the application tier. E. Configure a NAT instance in a private subnet.
Answer: A. B.
Notes: – A NAT gateway forwards traffic from the EC2 instances in the private subnet to the internet or other AWS services, and then sends the response back to the instances. After a NAT gateway is created, the route tables for private subnets must be updated to point internet traffic to the NAT gateway.
Q96 [SAA-C03]: A solutions architect wants to design a solution to save costs for Amazon EC2 instances that do not need to run during a 2-week company shutdown. The applications running on the EC2 instances store data in instance memory that must be present when the instances resume operation. Which approach should the solutions architect recommend to shut down and resume the EC2 instances?
A. Modify the application to store the data on instance store volumes. Reattach the volumes while restarting them. B. Snapshot the EC2 instances before stopping them. Restore the snapshot after restarting the instances. C. Run the applications on EC2 instances enabled for hibernation. Hibernate the instances before the 2-week company shutdown. D. Note the Availability Zone for each EC2 instance before stopping it. Restart the instances in the same Availability Zones after the 2-week company shutdown.
Notes: Hibernating EC2 instances save the contents of instance memory to an Amazon Elastic Block Store (Amazon EBS) root volume. When the instances restart, the instance memory contents are reloaded.
Q97 [SAA-C03]: A company plans to run a monitoring application on an Amazon EC2 instance in a VPC. Connections are made to the EC2 instance using the instance’s private IPv4 address. A solutions architect needs to design a solution that will allow traffic to be quickly directed to a standby EC2 instance if the application fails and becomes unreachable. Which approach will meet these requirements?
A) Deploy an Application Load Balancer configured with a listener for the private IP address and register the primary EC2 instance with the load balancer. Upon failure, de-register the instance and register the standby EC2 instance. B) Configure a custom DHCP option set. Configure DHCP to assign the same private IP address to the standby EC2 instance when the primary EC2 instance fails. C) Attach a secondary elastic network interface to the EC2 instance configured with the private IP address. Move the network interface to the standby EC2 instance if the primary EC2 instance becomes unreachable. D) Associate an Elastic IP address with the network interface of the primary EC2 instance. Disassociate the Elastic IP from the primary instance upon failure and associate it with a standby EC2 instance.
Answer: C. Notes: A secondary elastic network interface can be added to an EC2 instance. While primary network interfaces cannot be detached from an instance, secondary network interfaces can be detached and attached to a different EC2 instance.
A. Enable cross-origin resource sharing (CORS) on the S3 bucket. B. Enable S3 Versioning on the S3 bucket. C. Provide the users with a signed URL for the script. D. Configure an S3 bucket policy to allow public execute privileges.
Notes: Web browsers will block running a script that originates from a server with a domain name that is different from the webpage. Amazon S3 can be configured with CORS to send HTTP headers that allow the script to run
Q99 [SAA-C03]: A company’s security team requires that all data stored in the cloud be encrypted at rest at all times using encryption keys stored on premises. Which encryption options meet these requirements? (Select TWO.)
A. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). B. Use server-side encryption with AWS KMS managed encryption keys (SSE-KMS). C. Use server-side encryption with customer-provided encryption keys (SSE-C). D. Use client-side encryption to provide at-rest encryption. E. Use an AWS Lambda function invoked by Amazon S3 events to encrypt the data using the customer’s keys.
Answer: C. D.
Notes: Server-side encryption with customer-provided keys (SSE-C) enables Amazon S3 to encrypt objects on the server side using an encryption key provided in the PUT request. The same key must be provided in the GET requests for Amazon S3 to decrypt the object. Customers also have the option to encrypt data on the client side before uploading it to Amazon S3, and then they can decrypt the data after downloading it. AWS software development kits (SDKs) provide an S3 encryption client that streamlines the process.
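With SSE-C, every PUT and GET must carry the customer-provided key (base64-encoded) and an MD5 digest of it in dedicated request headers; a sketch of deriving those header values for a 256-bit key (the on-premises key itself never leaves the customer's control between requests, since S3 discards it after use):

```python
import base64
import hashlib
import os

def sse_c_headers(key: bytes):
    """Derive the SSE-C request headers S3 expects for a 256-bit customer key."""
    assert len(key) == 32  # AES-256 key supplied by the customer
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key":
            base64.b64encode(key).decode(),
        "x-amz-server-side-encryption-customer-key-MD5":
            base64.b64encode(hashlib.md5(key).digest()).decode(),
    }

headers = sse_c_headers(os.urandom(32))
```

The same headers must accompany the later GET, which is how S3 can decrypt the object without ever storing the key.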
Q100 [SAA-C03]: A company uses Amazon EC2 Reserved Instances to run its data processing workload. The nightly job typically takes 7 hours to run and must finish within a 10-hour time window. The company anticipates temporary increases in demand at the end of each month that will cause the job to run over the time limit with the capacity of the current resources. Once started, the processing job cannot be interrupted before completion. The company wants to implement a solution that would provide increased resource capacity as cost-effectively as possible. What should a solutions architect do to accomplish this?
A) Deploy On-Demand Instances during periods of high demand. B) Create a second EC2 reservation for additional instances. C) Deploy Spot Instances during periods of high demand. D) Increase the EC2 instance size in the EC2 reservation to support the increased workload.
Notes: While Spot Instances would be the least costly option, they are not suitable for jobs that cannot be interrupted or must complete within a certain time period. On-Demand Instances would be billed for the number of seconds they are running.
Q101 [SAA-C03]: A company runs an online voting system for a weekly live television program. During broadcasts, users submit hundreds of thousands of votes within minutes to a front-end fleet of Amazon EC2 instances that run in an Auto Scaling group. The EC2 instances write the votes to an Amazon RDS database. However, the database is unable to keep up with the requests that come from the EC2 instances. A solutions architect must design a solution that processes the votes in the most efficient manner and without downtime. Which solution meets these requirements?
A. Migrate the front-end application to AWS Lambda. Use Amazon API Gateway to route user requests to the Lambda functions. B. Scale the database horizontally by converting it to a Multi-AZ deployment. Configure the front-end application to write to both the primary and secondary DB instances. C. Configure the front-end application to send votes to an Amazon Simple Queue Service (Amazon SQS) queue. Provision worker instances to read the SQS queue and write the vote information to the database. D. Use Amazon EventBridge (Amazon CloudWatch Events) to create a scheduled event to re-provision the database with larger, memory optimized instances during voting periods. When voting ends, re-provision the database to use smaller instances.
Notes: – Decouple the ingestion of votes from the database to allow the voting system to continue processing votes without waiting for the database writes. Add dedicated workers to read from the SQS queue to allow votes to be entered into the database at a controllable rate. The votes will be added to the database as fast as the database can process them, but no votes will be lost.
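The decoupling pattern in the notes — a queue absorbing the burst while workers drain it at a sustainable rate — can be sketched with Python's standard library standing in for SQS and the database:

```python
import queue
import threading

votes = queue.Queue()   # stands in for the SQS queue
db = []                 # stands in for the RDS database
db_lock = threading.Lock()

def worker():
    # Each worker reads votes at a rate the database can sustain.
    while True:
        vote = votes.get()
        if vote is None:
            break  # poison pill: shut down
        with db_lock:
            db.append(vote)  # the "slow" database write
        votes.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

# The front end enqueues a burst of votes instantly; none are lost,
# even though the "database" absorbs them at its own pace.
for i in range(1000):
    votes.put(f"vote-{i}")
for _ in threads:
    votes.put(None)  # one poison pill per worker
for t in threads:
    t.join()
```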
Q102 [SAA-C03]: A company has a two-tier application architecture that runs in public and private subnets. Amazon EC2 instances running the web application are in the public subnet and an EC2 instance for the database runs on the private subnet. The web application instances and the database are running in a single Availability Zone (AZ). Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO.)
A. Create new public and private subnets in the same AZ. B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs for the web application instances. C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer. D. Create new public and private subnets in a new AZ. Create a database using an EC2 instance in the public subnet in the new AZ. Migrate the old database contents to the new database. E. Create new public and private subnets in the same VPC, each in a new AZ. Create an Amazon RDS Multi-AZ DB instance in the private subnets. Migrate the old database contents to the new DB instance.
Answer: B. E.
Notes: Create new subnets in a new Availability Zone (AZ) to provide a redundant network. Create an Auto Scaling group with instances in two AZs behind the load balancer to ensure high availability of the web application and redistribution of web traffic between the two public AZs. Create an RDS DB instance in the two private subnets to make the database tier highly available too.
Q103 [SAA-C03]: A website runs a custom web application that receives a burst of traffic each day at noon. The users upload new pictures and content daily, but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the application consistently takes 1 minute to initiate upon boot up before responding to user requests. How should a solutions architect redesign the architecture to better respond to changing traffic?
A. Configure a Network Load Balancer with a slow start configuration. B. Configure Amazon ElastiCache for Redis to offload direct requests from the EC2 instances. C. Configure an Auto Scaling step scaling policy with an EC2 instance warmup condition. D. Configure Amazon CloudFront to use an Application Load Balancer as the origin.
Notes: The current configuration puts new EC2 instances into service before they are able to respond to transactions. This could also cause the instances to overscale. With a step scaling policy, you can specify the number of seconds that it takes for a newly launched instance to warm up. Until its specified warm-up time has expired, an EC2 instance is not counted toward the aggregated metrics of the Auto Scaling group. While scaling out, the Auto Scaling logic does not consider EC2 instances that are warming up as part of the current capacity of the Auto Scaling group. Therefore, multiple alarm breaches that fall in the range of the same step adjustment result in a single scaling activity. This ensures that you do not add more instances than you need.
Q104 [SAA-C03]: An application running on AWS uses an Amazon Aurora Multi-AZ DB cluster deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database. What should the solutions architect do to separate the read requests from the write requests?
A. Enable read-through caching on the Aurora database. B. Update the application to read from the Multi-AZ standby instance. C. Create an Aurora replica and modify the application to use the appropriate endpoints. D. Create a second Aurora database and link it to the primary database as a read replica.
Notes: Aurora Replicas provide a way to offload read traffic. Aurora Replicas share the same underlying storage as the main database, so lag time is generally very low. Aurora Replicas have their own endpoints, so the application will need to be configured to direct read traffic to the new endpoints.
Reference: Aurora Replicas
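The application change in option C amounts to directing reads and writes to different endpoints. A minimal sketch of that routing decision, with hypothetical endpoint names (real Aurora clusters expose a writer `cluster-` endpoint and a read-only `cluster-ro-` endpoint):

```python
# Illustrative only: route SQL to Aurora's writer (cluster) endpoint or its
# reader endpoint. The endpoint hostnames below are made-up examples.
WRITER_ENDPOINT = "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "mycluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(sql: str) -> str:
    """Send SELECT statements to the reader endpoint, everything else to the writer."""
    first_word = sql.lstrip().split(None, 1)[0].upper()
    return READER_ENDPOINT if first_word == "SELECT" else WRITER_ENDPOINT

assert endpoint_for("SELECT * FROM trades") == READER_ENDPOINT
assert endpoint_for("INSERT INTO trades VALUES (1)") == WRITER_ENDPOINT
```

In practice many drivers and proxies (such as RDS Proxy) can do this split for you, but the principle is the same: reads go to replicas, writes to the primary.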
Question 106: A company plans to migrate its on-premises workload to AWS. The current architecture is composed of a Microsoft SharePoint server that uses a Windows shared file storage. The Solutions Architect needs to use a cloud storage solution that is highly available and can be integrated with Active Directory for access control and authentication. Which of the following options can satisfy the given requirement?
A. Create a file system using Amazon EFS and join it to an Active Directory domain. B. Create a file system using Amazon FSx for Windows File Server and join it to an Active Directory domain in AWS. C. Create a Network File System (NFS) file share using AWS Storage Gateway. D. Launch an Amazon EC2 Windows Server to mount a new S3 bucket as a file volume.
Notes: Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. Amazon FSx is accessible from Windows, Linux, and macOS compute instances and devices. Thousands of compute instances and devices can access a file system concurrently.
Category: Design Resilient Architectures
Question 108: A Forex trading platform, which frequently processes and stores global financial data every minute, is hosted in your on-premises data center and uses an Oracle database. Due to a recent cooling problem in their data center, the company urgently needs to migrate their infrastructure to AWS to improve the performance of their applications. As the Solutions Architect, you are responsible for ensuring that the database is properly migrated and remains available in case of database server failure in the future. Which of the following is the most suitable solution to meet the requirement?
A. Create an Oracle database in RDS with Multi-AZ deployments. B. Launch an Oracle database instance in RDS with Recovery Manager (RMAN) enabled. C. Launch an Oracle Real Application Clusters (RAC) in RDS. D. Convert the database schema using the AWS Schema Conversion Tool and AWS Database Migration Service. Migrate the Oracle database to a non-cluster Amazon Aurora with a single instance.
Notes: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable.
Reference: RDS Multi AZ
Category: Design Resilient Architectures
Question 109: A data analytics company, which uses machine learning to collect and analyze consumer data, is using a Redshift cluster as its data warehouse. You are instructed to implement a disaster recovery plan for their systems to ensure business continuity even in the event of an AWS region outage. Which of the following is the best approach to meet this requirement?
A. Do nothing because Amazon Redshift is a highly available, fully-managed data warehouse which can withstand an outage of an entire AWS region. B. Enable Cross-Region Snapshots Copy in your Amazon Redshift Cluster. C. Create a scheduled job that will automatically take the snapshot of your Redshift Cluster and store it to an S3 bucket. Restore the snapshot in case of an AWS region outage. D. Use Automated snapshots of your Redshift Cluster.
Notes: You can configure Amazon Redshift to copy snapshots for a cluster to another region. To configure cross-region snapshot copy, you need to enable this copy feature for each cluster and configure where to copy snapshots and how long to keep copied automated snapshots in the destination region. When cross-region copy is enabled for a cluster, all new manual and automatic snapshots are copied to the specified region.
Reference: Redshift Snapshots
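Enabling cross-region snapshot copy is a single API call per cluster. The sketch below only builds the request parameters for Redshift's `EnableSnapshotCopy` operation (as exposed by boto3's `enable_snapshot_copy`); the cluster and region names are hypothetical and no AWS call is made.

```python
# Sketch: parameters for Redshift's EnableSnapshotCopy API. Names are
# example values; a real call would need boto3 and valid credentials.

def snapshot_copy_params(cluster_id: str, dest_region: str, retention_days: int) -> dict:
    """Build the request that enables cross-region snapshot copy for a cluster."""
    return {
        "ClusterIdentifier": cluster_id,
        "DestinationRegion": dest_region,
        # How long copied automated snapshots are kept in the destination region.
        "RetentionPeriod": retention_days,
    }

params = snapshot_copy_params("analytics-cluster", "us-west-2", 7)
# Real call would be: boto3.client("redshift").enable_snapshot_copy(**params)
assert params["DestinationRegion"] == "us-west-2"
```

Once enabled, all new manual and automated snapshots are copied to the destination region, which is what provides the cross-region recovery point.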
Category: Design Resilient Architectures
Question 109: A start-up company has an EC2 instance that is hosting a web application. The volume of users is expected to grow in the coming months and hence, you need to add more elasticity and scalability in your AWS architecture to cope with the demand. Which of the following options can satisfy the above requirement for the given scenario? (Select TWO.)
A. Set up two EC2 instances and then put them behind an Elastic Load balancer (ELB). B. Set up two EC2 instances deployed using Launch Templates and integrated with AWS Glue. C. Set up an S3 Cache in front of the EC2 instance. D. Set up two EC2 instances and use Route 53 to route traffic based on a Weighted Routing Policy. E. Set up an AWS WAF behind your EC2 Instance.
Answer: A. D. Notes: Using an Elastic Load Balancer is an ideal solution for adding elasticity to your application. Alternatively, you can also create a policy in Route 53, such as a Weighted routing policy, to evenly distribute the traffic to 2 or more EC2 instances. Hence, setting up two EC2 instances behind an Elastic Load Balancer (ELB), and setting up two EC2 instances with Route 53 routing traffic based on a Weighted Routing Policy, are the correct answers.
Reference: Elastic Load Balancing
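The weighted-routing half of the answer is easy to simulate: Route 53 returns each record with probability proportional to its weight. This is an illustrative model, not the Route 53 API, and the instance names are invented.

```python
import random

# Sketch of Route 53 weighted routing: each record is selected with
# probability weight / sum(weights). Record names are hypothetical.
def pick_record(records: dict, rng: random.Random) -> str:
    names = list(records)
    weights = [records[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

records = {"ec2-a": 1, "ec2-b": 1}  # equal weights -> roughly even split
rng = random.Random(0)
counts = {"ec2-a": 0, "ec2-b": 0}
for _ in range(10_000):
    counts[pick_record(records, rng)] += 1

# Both instances receive close to half the traffic.
assert abs(counts["ec2-a"] - counts["ec2-b"]) < 500
```

Changing the weights to, say, `{"ec2-a": 3, "ec2-b": 1}` would skew the split 75/25, which is how weighted routing supports gradual rollouts as well as plain load distribution.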
Category: Design Resilient Architectures
Question 110: A company plans to deploy a Docker-based batch application in AWS. The application will be used to process both mission-critical data as well as non-essential batch jobs. Which of the following is the most cost-effective option to use in implementing this architecture?
A. Use ECS as the container management service then set up Reserved EC2 Instances for processing both mission-critical and non-essential batch jobs. B. Use ECS as the container management service then set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively. C. Use ECS as the container management service then set up On-Demand EC2 Instances for processing both mission-critical and non-essential batch jobs. D. Use ECS as the container management service then set up Spot EC2 Instances for processing both mission-critical and non-essential batch jobs.
Answer: B. Notes: Amazon ECS lets you run batch workloads with managed or custom schedulers on Amazon EC2 On-Demand Instances, Reserved Instances, or Spot Instances. You can launch a combination of EC2 instances to set up a cost-effective architecture depending on your workload. You can launch Reserved EC2 instances to process the mission-critical data and Spot EC2 instances for processing non-essential batch jobs. There are two different charge models for Amazon Elastic Container Service (ECS): Fargate Launch Type Model and EC2 Launch Type Model. With Fargate, you pay for the amount of vCPU and memory resources that your containerized application requests while for EC2 launch type model, there is no additional charge. You pay for AWS resources (e.g. EC2 instances or EBS volumes) you create to store and run your application. You only pay for what you use, as you use it; there are no minimum fees and no upfront commitments. In this scenario, the most cost-effective solution is to use ECS as the container management service then set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively. You can use Scheduled Reserved Instances (Scheduled Instances) which enables you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. This will ensure that you have an uninterrupted compute capacity to process your mission-critical batch jobs.
Reference: Amazon ECS
Category: Design Resilient Architectures
Question 111: A company has recently adopted a hybrid cloud architecture and is planning to migrate a database hosted on-premises to AWS. The database currently has over 50 TB of consumer data, handles highly transactional (OLTP) workloads, and is expected to grow. The Solutions Architect should ensure that the database is ACID-compliant and can handle complex queries of the application. Which type of database service should the Architect use?
A. Amazon DynamoDB B. Amazon RDS C. Amazon Redshift D. Amazon Aurora
Answer: D. Notes: Amazon Aurora (Aurora) is a fully managed relational database engine that’s compatible with MySQL and PostgreSQL. You already know how MySQL and PostgreSQL combine the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. The code, tools, and applications you use today with your existing MySQL and PostgreSQL databases can be used with Aurora. With some workloads, Aurora can deliver up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. Aurora includes a high-performance storage subsystem. Its MySQL- and PostgreSQL-compatible database engines are customized to take advantage of that fast distributed storage. The underlying storage grows automatically as needed, up to 64 tebibytes (TiB). Aurora also automates and standardizes database clustering and replication, which are typically among the most challenging aspects of database configuration and administration.
Category: Design Resilient Architectures
Question 112: An online stocks trading application that stores financial data in an S3 bucket has a lifecycle policy that moves older data to Glacier every month. There is a strict compliance requirement where a surprise audit can happen at any time and you should be able to retrieve the required data in under 15 minutes under all circumstances. Your manager instructed you to ensure that retrieval capacity is available when you need it and should handle up to 150 MB/s of retrieval throughput. Which of the following should you do to meet the above requirement? (Select TWO.)
A. Retrieve the data using Amazon Glacier Select. B. Use Bulk Retrieval to access the financial data. C. Purchase provisioned retrieval capacity. D. Use Expedited Retrieval to access the financial data. E. Specify a range, or portion, of the financial data archive to retrieve.
Answer: C. D. Notes: Expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required. For all but the largest archives (250 MB+), data accessed using Expedited retrievals are typically made available within 1–5 minutes. To make an Expedited, Standard, or Bulk retrieval, set the Tier parameter in the Initiate Job (POST jobs) REST API request to the option you want, or the equivalent in the AWS CLI or AWS SDKs. If you have purchased provisioned capacity, then all expedited retrievals are automatically served through your provisioned capacity. Provisioned capacity ensures that your retrieval capacity for expedited retrievals is available when you need it. Each unit of capacity ensures that at least three expedited retrievals can be performed every five minutes and provides up to 150 MB/s of retrieval throughput. You should purchase provisioned retrieval capacity if your workload requires highly reliable and predictable access to a subset of your data in minutes. Without provisioned capacity, Expedited retrievals are accepted, except for rare situations of unusually high demand. However, if you require access to Expedited retrievals under all circumstances, you must purchase provisioned retrieval capacity.
Reference: Amazon Glacier
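Sizing provisioned capacity is simple arithmetic from the figures in the notes above (150 MB/s and three expedited retrievals per five minutes per unit). A back-of-the-envelope calculator, purely illustrative:

```python
import math

# Sketch: each unit of Glacier provisioned capacity supports at least
# 3 expedited retrievals every 5 minutes and up to 150 MB/s of throughput
# (figures from the scenario above).

def units_needed(throughput_mbps: float, retrievals_per_5min: int) -> int:
    by_throughput = math.ceil(throughput_mbps / 150)
    by_requests = math.ceil(retrievals_per_5min / 3)
    return max(by_throughput, by_requests, 1)

# 150 MB/s and a handful of audit retrievals fit in a single unit.
assert units_needed(150, 3) == 1
# 400 MB/s of sustained retrieval would need 3 units.
assert units_needed(400, 3) == 3
```

For this question one unit suffices, since the requirement tops out at exactly 150 MB/s.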
Category: Design Resilient Architectures
Question 113: An organization stores and manages financial records of various companies in its on-premises data center, which is almost out of space. The management decided to move all of their existing records to a cloud storage service. All future financial records will also be stored in the cloud. For additional security, all records must be prevented from being deleted or overwritten. Which of the following should you do to meet the above requirement? A. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon S3 and enable object lock. B. Use AWS DataSync to move the data. Store all of your data in Amazon EFS and enable object lock. C. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon EBS and enable object lock. D. Use AWS DataSync to move the data. Store all of your data in Amazon S3 and enable object lock.
Answer: D. Notes: AWS DataSync allows you to copy large datasets with millions of files, without having to build custom solutions with open source tools, or license and manage expensive commercial network acceleration software. You can use DataSync to migrate active data to AWS, transfer data to the cloud for analysis and processing, archive data to free up on-premises storage capacity, or replicate data to AWS for business continuity. AWS DataSync enables you to migrate your on-premises data to Amazon S3, Amazon EFS, and Amazon FSx for Windows File Server. You can configure DataSync to make an initial copy of your entire dataset, and schedule subsequent incremental transfers of changing data to Amazon S3. Enabling S3 Object Lock prevents your existing and future records from being deleted or overwritten. AWS DataSync is primarily used to migrate existing data to Amazon S3. On the other hand, AWS Storage Gateway is more suitable if you still want to retain access to the migrated data and for ongoing updates from your on-premises file-based applications. Reference: AWS DataSync (https://aws.amazon.com/datasync/faqs/) Category: Design Secure Applications and Architectures
Question 114: A solutions architect is designing a solution to run a containerized web application by using Amazon Elastic Container Service (Amazon ECS). The solutions architect wants to minimize cost by running multiple copies of a task on each container instance. The number of task copies must scale as the load increases and decreases. Which routing solution distributes the load to the multiple tasks?
A. Configure an Application Load Balancer to distribute the requests by using path-based routing. B. Configure an Application Load Balancer to distribute the requests by using dynamic host port mapping. C. Configure an Amazon Route 53 alias record set to distribute the requests with a failover routing policy. D. Configure an Amazon Route 53 alias record set to distribute the requests with a weighted routing policy.
Answer: B. Notes: With dynamic host port mapping, multiple tasks from the same service are allowed for each container instance. You can use weighted routing policies to route traffic to instances at proportions that you specify. You cannot use weighted routing policies to manage multiple tasks on a single container instance. Reference: Choosing a routing policy. Category: Design Cost-Optimized Architectures
Question 115: A Solutions Architect needs to deploy a mobile application that can collect votes for a popular singing competition. Millions of users from around the world will submit votes using their mobile phones. These votes must be collected and stored in a highly scalable and highly available data store which will be queried for real-time ranking. Which of the following combination of services should the architect use to meet this requirement? A. Amazon Redshift and AWS Mobile Hub B. Amazon DynamoDB and AWS AppSync C. Amazon Relational Database Service (RDS) and Amazon MQ D. Amazon Aurora and Amazon Cognito
Answer: B. Notes: When the word durability pops out, the first service that should come to your mind is Amazon S3. Since this service is not available in the answer options, we can look at the other data store available which is Amazon DynamoDB. DynamoDB is durable, scalable, and highly available data store which can be used for real-time tabulation. You can also use AppSync with DynamoDB to make it easy for you to build collaborative apps that keep shared data updated in real time. You just specify the data for your app with simple code statements and AWS AppSync manages everything needed to keep the app data updated in real time. This will allow your app to access data in Amazon DynamoDB, trigger AWS Lambda functions, or run Amazon Elasticsearch queries and combine data from these services to provide the exact data you need for your app.
Question 116: The usage of a company’s image-processing application is increasing suddenly with no set pattern. The application’s processing time grows linearly with the size of the image. The processing can take up to 20 minutes for large image files. The architecture consists of a web tier, an Amazon Simple Queue Service (Amazon SQS) standard queue, and message consumers that process the images on Amazon EC2 instances. When a high volume of requests occurs, the message backlog in Amazon SQS increases. Users are reporting the delays in processing. A solutions architect must improve the performance of the application in compliance with cloud best practices. Which solution will meet these requirements?
A. Purchase enough Dedicated Instances to meet the peak demand. Deploy the instances for the consumers. B. Convert the existing SQS standard queue to an SQS FIFO queue. Increase the visibility timeout. C. Configure a scalable AWS Lambda function as the consumer of the SQS messages. D. Create a message consumer that is an Auto Scaling group of instances. Configure the Auto Scaling group to scale based upon the ApproximateNumberOfMessages Amazon CloudWatch metric.
Answer: D. Notes: Scaling the consumer Auto Scaling group on the ApproximateNumberOfMessages CloudWatch metric lets capacity track the queue backlog, so consumers scale out during traffic bursts and scale in once the backlog drains. FIFO queues solve problems that occur when messages are processed out of order, but they will not improve performance during sudden volume increases; additionally, you cannot convert an existing SQS standard queue to a FIFO queue after you create it. Reference: FIFO Queues
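The scale-out math behind option D can be sketched directly: size the consumer fleet from the queue backlog. The "acceptable backlog per instance" figure is an assumption you would derive from your own latency target, not an AWS default.

```python
import math

# Sketch: derive the desired consumer count from the SQS
# ApproximateNumberOfMessages backlog. All thresholds are assumptions.

def desired_consumers(backlog: int, msgs_per_instance: int,
                      min_size: int = 1, max_size: int = 20) -> int:
    """Instances needed so each carries at most msgs_per_instance messages."""
    wanted = math.ceil(backlog / msgs_per_instance) if backlog else min_size
    return max(min_size, min(max_size, wanted))

# 900 queued images, 100 per instance keeps latency acceptable -> 9 instances.
assert desired_consumers(900, 100) == 9
# Empty queue scales in to the minimum.
assert desired_consumers(0, 100) == 1
```

In a real Auto Scaling group you would express this as a target tracking or step scaling policy on the published queue metric rather than computing it yourself; the sketch just shows why the backlog metric is the right signal.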
Question 117: An application is hosted on an EC2 instance with multiple EBS Volumes attached and uses Amazon Neptune as its database. To improve data security, you encrypted all of the EBS volumes attached to the instance to protect the confidential data stored in the volumes. Which of the following statements are true about encrypted Amazon Elastic Block Store volumes? (Select TWO.)
A. All data moving between the volume and the instance are encrypted. B. Snapshots are automatically encrypted. C. The volumes created from the encrypted snapshot are not encrypted. D. Snapshots are not automatically encrypted. E. Only the data in the volume is encrypted and not all the data moving between the volume and the instance. Answer: A. B. Notes: Amazon Elastic Block Store (Amazon EBS) provides block-level storage volumes for use with EC2 instances. EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone. EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance. Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. You can encrypt both the boot and data volumes of an EC2 instance. Reference: EBS
Question 118: A reporting application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. For complex reports, the application can take up to 15 minutes to respond to a request. A solutions architect is concerned that users will receive HTTP 5xx errors if a report request is in process during a scale-in event. What should the solutions architect do to ensure that user requests will be completed before instances are terminated?
A. Enable sticky sessions (session affinity) for the target group of the instances. B. Increase the instance size in the Application Load Balancer target group. C. Increase the cooldown period for the Auto Scaling group to a greater amount of time than the time required for the longest running responses. D. Increase the deregistration delay timeout for the target group of the instances to greater than 900 seconds.
Answer: D. Notes: By default, Elastic Load Balancing waits 300 seconds before the completion of the deregistration process, which can help in-flight requests to the target become complete. To change the amount of time that Elastic Load Balancing waits, update the deregistration delay value. Reference: Deregistration Delay.
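Raising the deregistration delay is a single target-group attribute change. The sketch below only builds the request for the ELBv2 `ModifyTargetGroupAttributes` API (boto3's `modify_target_group_attributes`); the target group ARN is a made-up placeholder and no AWS call is made.

```python
# Sketch: request parameters that raise the deregistration delay above the
# 900-second report runtime. The ARN below is a placeholder example.

def deregistration_delay_params(target_group_arn: str, seconds: int) -> dict:
    return {
        "TargetGroupArn": target_group_arn,
        "Attributes": [
            {"Key": "deregistration_delay.timeout_seconds", "Value": str(seconds)},
        ],
    }

params = deregistration_delay_params(
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/reports/abc",
    1200,  # comfortably above the 15-minute (900 s) report time
)
# Real call would be: boto3.client("elbv2").modify_target_group_attributes(**params)
```

During the delay, the load balancer stops sending new requests to a deregistering target but lets in-flight requests finish, which is exactly what the long-running reports need.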
Question 119: A company used Amazon EC2 Spot Instances for a demonstration that is now complete. A solutions architect must remove the Spot Instances to stop them from incurring cost. What should the solutions architect do to meet this requirement?
A. Cancel the Spot request only. B. Terminate the Spot Instances only. C. Cancel the Spot request. Terminate the Spot Instances. D. Terminate the Spot Instances. Cancel the Spot request.
Answer: C. Notes: To remove the Spot Instances, the appropriate steps are to cancel the Spot request and then to terminate the Spot Instances. Reference: Spot Instances
Question 120: Which components are required to build a site-to-site VPN connection on AWS? (Select TWO.) A. An Internet Gateway B. A NAT gateway C. A customer Gateway D. A Virtual Private Gateway E. Amazon API Gateway
Answer: C. D. Notes: A virtual private gateway is attached to a VPC to create a site-to-site VPN connection on AWS. You can accept private encrypted network traffic from an on-premises data center into your VPC without the need to traverse the open public internet. A customer gateway is required for the VPN connection to be established. A customer gateway device is set up and configured in the customer’s data center. Reference: What is AWS Site-to-Site VPN?
Question 121: A company runs its website on Amazon EC2 instances behind an Application Load Balancer that is configured as the origin for an Amazon CloudFront distribution. The company wants to protect against cross-site scripting and SQL injection attacks. Which approach should a solutions architect recommend to meet these requirements?
A. Enable AWS Shield Advanced. List the CloudFront distribution as a protected resource. B. Define an AWS Shield Advanced policy in AWS Firewall Manager to block cross-site scripting and SQL injection attacks. C. Set up AWS WAF on the CloudFront distribution. Use conditions and rules that block cross-site scripting and SQL injection attacks. D. Deploy AWS Firewall Manager on the EC2 instances. Create conditions and rules that block cross-site scripting and SQL injection attacks.
Answer: C. Notes: AWS WAF can detect the presence of SQL code that is likely to be malicious (known as SQL injection). AWS WAF also can detect the presence of a script that is likely to be malicious (known as cross-site scripting). Reference: AWS WAF.
Question 122: A media company is designing a new solution for graphic rendering. The application requires up to 400 GB of storage for temporary data that is discarded after the frames are rendered. The application requires approximately 40,000 random IOPS to perform the rendering. What is the MOST cost-effective storage option for this rendering application? A. A storage optimized Amazon EC2 instance with instance store storage B. A storage optimized Amazon EC2 instance with a Provisioned IOPS SSD (io1 or io2) Amazon Elastic Block Store (Amazon EBS) volume C. A burstable Amazon EC2 instance with a Throughput Optimized HDD (st1) Amazon Elastic Block Store (Amazon EBS) volume D. A burstable Amazon EC2 instance with Amazon S3 storage over a VPC endpoint
Answer: A. Notes: SSD-Backed Storage Optimized (i2) instances provide more than 365,000 random IOPS. The instance store has no additional cost, compared with the regular hourly cost of the instance. Reference: Amazon EC2 pricing.
Question 123: A company is deploying a new application that will consist of an application layer and an online transaction processing (OLTP) relational database. The application must be available at all times. However, the application will have periods of inactivity. The company wants to pay the minimum for compute costs during these idle periods. Which solution meets these requirements MOST cost-effectively? A. Run the application in containers with Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Use Amazon Aurora Serverless for the database. B. Run the application on Amazon EC2 instances by using a burstable instance type. Use Amazon Redshift for the database. C. Deploy the application and a MySQL database to Amazon EC2 instances by using AWS CloudFormation. Delete the stack at the beginning of the idle periods. D. Deploy the application on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. Use Amazon RDS for MySQL for the database.
Answer: A. Notes: When Amazon ECS uses Fargate for compute, it incurs no costs when the application is idle. Aurora Serverless also incurs no compute costs when it is idle. Reference: AWS Fargate Pricing.
Question 124: Which options best describe characteristics of events in event-driven design? (Select THREE.)
A. Events are usually processed asynchronously
B. Events usually expect an immediate reply
C. Events are used to share information about a change in state
D. Events are observable
E. Events direct the actions of targets
Answer: A. C. D. Notes: Events are used to share information about a change in state. Events are observable and usually processed asynchronously. Events do not direct the actions of targets, and events do not expect a reply. Events can be used to trigger synchronous communications, and in this case, an event source like API Gateway might wait for a response. Reference:Event Driven Design on AWS
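The three correct traits can be shown in a minimal event-bus sketch (illustrative only, not an AWS service): an event records a change in state, observers subscribe to it, and the producer neither directs the consumers' actions nor waits for a reply.

```python
from collections import defaultdict

# Minimal illustration of event-driven design: publishers emit facts about
# state changes; subscribers decide for themselves how to react.
class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Fire-and-forget: nothing flows back to the producer.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
seen = []
bus.subscribe("order_placed", lambda e: seen.append(e["order_id"]))
bus.publish("order_placed", {"order_id": 42})  # producer doesn't expect a reply
assert seen == [42]
```

Note that `publish` returns nothing and the producer has no knowledge of what the handler did, which is precisely why options B and E are wrong.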
Question 125: Which of these scenarios would lead you to choose AWS AppSync and GraphQL APIs over API Gateway and REST APIs? Choose THREE.
A. You need a strongly typed schema for developers.
B. You need a server-controlled response.
C. You need multiple authentication options to the same API.
D. You need to integrate with existing clients.
E. You need client-specific responses that require data from many backend resources.
Answer: A. C. E. Notes:
With GraphQL, you define the schema and data types in advance. If it’s not in the schema, you can’t query for it. Developers can download the schema and generate source code off the schema to work with it.
Consider GraphQL for applications where you need a client-specific response that needs data from lots of backend sources. When you need a server-controlled response, choose REST.
AWS AppSync allows you to use multiple authentication options on the same API, but API Gateway allows you to associate only one authentication option per resource.
When you need to integrate with existing clients, REST is much more mature, and there are more tools in which to use it. Most clients are written for REST.
Question 126: Which options are TRUE statements about serverless security? (Select THREE.)
A. Logging and metrics are especially critical because you can’t go back to the server to see what happened when something fails.
B. Because you aren’t responsible for the operating system and the network itself, you don’t need to worry about mitigating external attacks.
C. The distributed perimeter means your code needs to defend each of the potential paths that might be used to reach your functions.
D. You can use Lambda’s fine-grained controls to scope its reach with a much smaller set of permissions as opposed to traditional approaches.
E. You may use the same tooling as with your server-based applications, but the best practices you follow will be different.
Answer: A. C. and D.
Notes: In Lambda’s ephemeral environment, logging and metrics are more critical because once the code runs, you can no longer go back to the server to find out what has happened.
The security perimeter you are defending has to consider the different services that might trigger a function, and your code needs to defend each of those potential paths.
You can use Lambda’s fine-grained controls to scope its reach with a much smaller set of permissions as opposed to traditional approaches where you may give broad permissions for your application on its servers. Scope your functions to limit permission sharing between any unrelated components.
Security best practices don’t change with serverless, but the tooling you’ll use will change. For example, techniques such as installing agents on your host may not be relevant any more.
While you aren’t responsible for the operating system or the network itself, you do need to protect your network boundaries and mitigate external attacks.
Question 127: Which options are examples of steps you take to protect your serverless application from attacks? (Select FOUR.)
A. Update your operating system with the latest patches.
B. Configure geoblocking on Amazon CloudFront in front of regional API endpoints.
C. Disable origin access identity on Amazon S3.
D. Disable CORS on your APIs.
E. Use resource policies to limit access to your APIs to users from a specified account.
F. Filter out specific traffic patterns with AWS WAF.
G. Parameterize queries so that your Lambda function expects a single input.
Answer: B. E. F. G
Notes: You aren’t responsible for the operating system or network configuration where your functions run, and AWS is ensuring the security of the data within those managed services. You are responsible for protecting data entering your application and limiting access to your AWS resources. You still need to protect data that originates client-side or that travels to or from endpoints outside AWS.
When integrating CloudFront with regional API endpoints, CloudFront also supports geoblocking, which you can use to prevent requests from being served from particular geographic locations.
Use origin access identity with Amazon S3 to allow bucket access only through CloudFront.
CORS is a browser security feature that restricts cross-origin HTTP requests that are initiated from scripts running in the browser. It is enforced by the browser. If your APIs will receive cross-origin requests, you should enable CORS support in API Gateway.
IAM resource policies can be used to limit access to your APIs. For example, you can restrict access to users from a specified AWS account or deny traffic from a specified source IP address or CIDR block.
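As a sketch, an API Gateway resource policy that limits invocation to principals from a single account might look like the following (the account ID, region, and API ID are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5/*"
    }
  ]
}
```

A Deny statement with a `NotIpAddress` condition could similarly block a source IP range.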
AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits. AWS WAF lets you create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define.
Lambda functions are triggered by events. These events submit an event parameter to the Lambda function and could be exploited for SQL injection. You can prevent this type of attack by parameterizing queries so that your Lambda function expects a single input.
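As a sketch of the idea, using Python's stdlib sqlite3 driver as a stand-in for whatever database your function talks to (the event shape and table contents are hypothetical), a handler that parameterizes its query treats the event payload strictly as data:

```python
import sqlite3

# Module-level setup so the example is self-contained; a real function
# would connect to its actual database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'dev')")

def handler(event, context=None):
    """Look up a user by name, taken from the triggering event."""
    name = event["username"]
    # BAD: f"SELECT role FROM users WHERE name = '{name}'" -- an attacker
    # could submit "x' OR '1'='1" and read every row.
    # GOOD: a parameterized query; the driver binds the value as data,
    # so the injection string matches no rows.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()
    return rows

# A legitimate lookup returns one row; the injection attempt returns none.
print(handler({"username": "alice"}))          # [('admin',)]
print(handler({"username": "x' OR '1'='1"}))   # []
```

The same single-input discipline applies whatever driver you use: let the driver do the binding rather than building SQL strings from event data.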
Question 128: Which options reflect best practices for automating your deployment pipeline with serverless applications? (Select TWO.)
A. Select one deployment framework and use it for all of your deployments for consistency.
B. Use different AWS accounts for each environment in your deployment pipeline.
C. Use AWS SAM to configure safe deployments and include pre- and post-traffic tests.
D. Create a specific AWS SAM template to match each environment to keep them distinct.
Answer: B and C.
Notes: You may use multiple deployment frameworks for an application so that you can use the framework that best suits the type of deployment. For example, you might use the AWS SAM framework to define your application stack and deployment preferences and then use AWS CDK to provision any infrastructure-related resources, such as the CI/CD pipeline.
It is a best practice to use different AWS accounts for each environment. This approach limits the blast radius of issues that occur and makes it less complex to differentiate which resources are associated with each environment. Because of the way costs are calculated with serverless, spinning up additional environments doesn’t add much to your cost.
AWS SAM lets you configure safe deployment preferences so that you can run code before and after the deployment, and roll back if there is a problem. You can also specify a method for shifting traffic to the new version a little at a time.
It is a best practice to use one AWS SAM template across environments and use options to parameterize values that are different per environment. This helps ensure that the environment is built with exactly the same stack.
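A minimal sketch of what both practices look like in one SAM template (the function name, hook references, and Stage parameter are hypothetical, and the hook functions are assumed to be defined elsewhere in the template): the DeploymentPreference block enables gradual traffic shifting with pre- and post-traffic test hooks, and a template parameter keeps a single template usable across environments:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Parameters:
  Stage:                      # one template, parameterized per environment
    Type: String
    AllowedValues: [dev, staging, prod]

Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      AutoPublishAlias: live    # required for DeploymentPreference
      Environment:
        Variables:
          STAGE: !Ref Stage
      DeploymentPreference:
        Type: Canary10Percent5Minutes     # shift traffic a little at a time
        Hooks:
          PreTraffic: !Ref PreTrafficCheck    # runs before traffic shifts
          PostTraffic: !Ref PostTrafficCheck  # runs after; failure rolls back
```

Deploying the same template with `--parameter-overrides Stage=prod` versus `Stage=dev` builds each environment from exactly the same stack definition.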
Question 129: Your application needs to connect to an Amazon RDS instance on the backend. What is the best recommendation to the developer whose function must read from and write to the Amazon RDS instance?
A. Use reserved concurrency to limit the number of concurrent functions that would try to write to the database
B. Use the database proxy feature to provide connection pooling for the functions
C. Initialize the number of connections you want outside of the handler
D. Use the database TTL setting to clean up connections
Answer: B. Notes: Use the database proxy feature (Amazon RDS Proxy) to provide connection pooling for the functions. The proxy pools and shares database connections, so a burst of concurrent Lambda invocations does not exhaust the database's connection limit.
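The connection-handling pattern behind options B and C can be sketched as follows. This is a pure-Python stand-in: `connect` here fakes a driver call such as `pymysql.connect`, and `DB_PROXY_ENDPOINT` is a hypothetical environment variable that would point at the RDS Proxy endpoint rather than the database itself. The key point is that the connection is created once, outside the handler, and reused across warm invocations:

```python
import os
import itertools

_ids = itertools.count(1)

def connect(endpoint):
    """Stand-in for a real driver call, e.g. pymysql.connect(host=endpoint)."""
    return {"id": next(_ids), "endpoint": endpoint}

# Created once per execution environment, at init time -- not per invocation.
# Pointing this at an RDS Proxy endpoint adds server-side connection pooling
# on top, so many concurrent functions share a bounded set of DB connections.
ENDPOINT = os.environ.get("DB_PROXY_ENDPOINT", "example-proxy.rds.amazonaws.com")
connection = connect(ENDPOINT)

def handler(event, context=None):
    # Reuses the module-level connection instead of opening a new one.
    return connection["id"]

# Two warm invocations share the same connection (same id).
print(handler({}), handler({}))   # 1 1
```

Initializing outside the handler helps each execution environment reuse its connection; the proxy is what protects the database when thousands of environments do this at once.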
Question 130: A company runs a cron job on an Amazon EC2 instance on a predefined schedule. The cron job calls a bash script that encrypts a 2 KB file. A security engineer creates an AWS Key Management Service (AWS KMS) CMK with a key policy.
The key policy and the EC2 instance role have the necessary configuration for this job.
Which process should the bash script use to encrypt the file?
A) Use the aws kms encrypt command to encrypt the file by using the existing CMK.
B) Use the aws kms create-grant command to generate a grant for the existing CMK.
C) Use the aws kms encrypt command to generate a data key. Use the plaintext data key to encrypt the file.
D) Use the aws kms generate-data-key command to generate a data key. Use the encrypted data key to encrypt the file.
Notes: Answer D. KMS can directly encrypt raw data up to 4 KB, so option A would technically work for a 2 KB file, but it is not good practice. A grant is an access-control mechanism, not an encryption operation. The aws kms encrypt command does not generate data keys. Only option D generates a data key, which is returned in both plaintext and encrypted form: you encrypt the file with the plaintext data key, then store the encrypted data key in the encrypted file's metadata for later decryption.
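The envelope-encryption flow behind option D can be sketched with stdlib stand-ins. The `fake_kms_*` functions and the XOR keystream below are illustrative placeholders for `aws kms generate-data-key`, `aws kms decrypt`, and a real cipher such as AES-GCM; do not use this toy construction for actual encryption:

```python
import hashlib
import secrets

MASTER_KEY = secrets.token_bytes(32)   # stands in for the CMK held inside KMS

def _xor(data: bytes, key: bytes) -> bytes:
    # Toy keystream "cipher": stretch the key with SHA-256 in counter mode.
    stream = b"".join(
        hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        for i in range(len(data) // 32 + 1)
    )
    return bytes(a ^ b for a, b in zip(data, stream))

def fake_kms_generate_data_key():
    """Stand-in for `aws kms generate-data-key`: returns the data key
    in both plaintext and CMK-encrypted form."""
    plaintext_key = secrets.token_bytes(32)
    return plaintext_key, _xor(plaintext_key, MASTER_KEY)

def fake_kms_decrypt(encrypted_key):
    """Stand-in for `aws kms decrypt`: recovers the plaintext data key."""
    return _xor(encrypted_key, MASTER_KEY)

# Encrypt: generate a data key, use the PLAINTEXT key on the file, discard
# it, and store the ENCRYPTED key alongside the ciphertext (e.g. metadata).
plaintext_key, encrypted_key = fake_kms_generate_data_key()
file_bytes = b"2 KB of sensitive report data..."
ciphertext = _xor(file_bytes, plaintext_key)
del plaintext_key   # never persist the plaintext data key

# Decrypt later: ask KMS for the data key back, then decrypt the file.
recovered = _xor(ciphertext, fake_kms_decrypt(encrypted_key))
assert recovered == file_bytes
```

The master key never leaves KMS; only the small data key makes the round trip, which is why this scales to files far larger than the 4 KB direct-encryption limit.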
Question 131: A Security engineer must develop an AWS Identity and Access Management (IAM) strategy for a company’s organization in AWS Organizations. The company needs to give developers autonomy to develop and test their applications on AWS, but the company also needs to implement security guardrails to help protect itself. The company creates and distributes applications with different levels of data classification and types. The solution must maximize scalability.
Which combination of steps should the security engineer take to meet these requirements? (Choose three.)
A) Create an SCP to restrict access to highly privileged or unauthorized actions to specific IAM principals. Assign the SCP to the appropriate AWS accounts.
B) Create an IAM permissions boundary to allow access to specific actions and IAM principals. Assign the IAM permissions boundary to all IAM principals within the organization.
C) Create a delegated IAM role that has capabilities to create other IAM roles. Use the delegated IAM role to provision IAM principals by following the principle of least privilege.
D) Create OUs based on data classification and type. Add the AWS accounts to the appropriate OU. Provide developers access to the AWS accounts based on business need.
E) Create IAM groups based on data classification and type. Add only the required developers’ IAM role to the IAM groups within each AWS account.
F) Create IAM policies based on data classification and type. Add the minimum required IAM policies to the developers’ IAM role within each AWS account.
Answer: A, B, and C.
Notes: If you look at the choices, three relate to SCPs, which control which services can be used, and three relate to IAM and permissions boundaries.
Limiting services alone doesn't address data classification; permissions boundaries, delegated roles, and least-privilege policies provide the scalability and solve the problem.
Question 132: A company is ready to deploy a public web application. The company will use AWS and will host the application on an Amazon EC2 instance. The company must use SSL/TLS encryption. The company is already using AWS Certificate Manager (ACM) and will export a certificate for use with the deployment.
How can a security engineer deploy the application to meet these requirements?
A) Put the EC2 instance behind an Application Load Balancer (ALB). In the EC2 console, associate the certificate with the ALB by choosing HTTPS and 443.
B) Put the EC2 instance behind a Network Load Balancer. Associate the certificate with the EC2 instance.
C) Put the EC2 instance behind a Network Load Balancer (NLB). In the EC2 console, associate the certificate with the NLB by choosing HTTPS and 443.
D) Put the EC2 instance behind an Application Load Balancer. Associate the certificate with the EC2 instance.
Notes: You can’t directly install Amazon-issued certificates on Amazon Elastic Compute Cloud (EC2) instances. Instead, use the certificate with a load balancer, and then register the EC2 instance behind the load balancer.
What are the 6 pillars of a well architected framework:
AWS Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. Based on six pillars — operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability — AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures and implement designs that can scale over time.
1. Operational Excellence
The operational excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. You can find prescriptive guidance on implementation in the Operational Excellence Pillar whitepaper.
2. Security The security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. You can find prescriptive guidance on implementation in the Security Pillar whitepaper.
3. Reliability The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. You can find prescriptive guidance on implementation in the Reliability Pillar whitepaper.
4. Performance Efficiency The performance efficiency pillar includes the ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve. You can find prescriptive guidance on implementation in the Performance Efficiency Pillar whitepaper.
5. Cost Optimization The cost optimization pillar includes the ability to avoid or eliminate unneeded cost or suboptimal resources. You can find prescriptive guidance on implementation in the Cost Optimization Pillar whitepaper.
6. Sustainability The sustainability pillar includes the ability to increase efficiency across all components of a workload by maximizing the benefits from the provisioned resources and minimizing the total resources required. You can find prescriptive guidance on implementation in the Sustainability Pillar whitepaper.
There are six best practice areas for sustainability in the cloud:
Region Selection – AWS Global Infrastructure
User Behavior Patterns – Auto Scaling, Elastic Load Balancing
Software and Architecture Patterns – AWS Design Principles
Data Patterns – data lifecycle and storage management
Hardware Patterns – right-sizing and managed services
Development and Deployment Process – efficient build, test, and deployment practices
The AWS Well-Architected Framework provides architectural best practices across the six pillars for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. The framework provides a set of questions that allows you to review an existing or proposed architecture. It also provides a set of AWS best practices for each pillar. Using the Framework in your architecture helps you produce stable and efficient systems, which allows you to focus on functional requirements.
Other AWS Facts and Summaries and Questions/Answers Dump
The reality, of course, today is that if you come up with a great idea you don't get to go quickly to a successful product. There's a lot of undifferentiated heavy lifting that stands between your idea and that success. The kinds of things that I'm talking about when I say undifferentiated heavy lifting are things like these: figuring out which servers to buy, how many of them to buy, and on what timeline to buy them.
Eventually you end up with heterogeneous hardware and you have to match that. You have to think about backup scenarios if you lose your data center or lose connectivity to a data center. Eventually you have to move facilities. There’s negotiations to be done. It’s a very complex set of activities that really is a big driver of ultimate success.
But they are undifferentiated from, it’s not the heart of, your idea. We call this muck. And it gets worse because what really happens is you don’t have to do this one time. You have to drive this loop. After you get your first version of your idea out into the marketplace, you’ve done all that undifferentiated heavy lifting, you find out that you have to cycle back. Change your idea. The winners are the ones that can cycle this loop the fastest.
On every cycle of this loop you have this undifferentiated heavy lifting, or muck, that you have to contend with. I believe that for most companies, and it’s certainly true at Amazon, that 70% of your time, energy, and dollars go into the undifferentiated heavy lifting and only 30% of your energy, time, and dollars gets to go into the core kernel of your idea.
I think what people are excited about is that they’re going to get a chance they see a future where they may be able to invert those two. Where they may be able to spend 70% of their time, energy and dollars on the differentiated part of what they’re doing.
So my exam was yesterday and I got the results in 24 hours. I think that's how they review all SAA exams now; they're not showing the results right away anymore.
I scored 858. I was practicing with Stephane's Udemy lectures and Bonso's exam tests. My test results were as follows: Test 1: 63%, 93%; Test 2: 67%, 87%; Test 3: 81%; Test 4: 72%; Test 5: 75%; Test 6: 81%; Stephane's test: 80%.
I was reading all question explanations (even the ones I got correct)
The actual exam was pretty much similar to these. The topics I got were:
A lot of S3 (make sure you know all of it from head to toes)
DataSync and Database Migration Service in same questions. Make sure you know the difference
One EKS question
2-3 KMS questions
Security group question
A lot of RDS Multi-AZ
SQS + SNS fan out pattern
ECS microservice architecture question
And that's all I can remember.
I took extra 30 minutes, because English is not my native language and I had plenty of time to think and then review flagged questions.
Hey guys, just giving my update so all of you guys working towards your certs can stay motivated as these success stories drove me to reach this goal.
Background: 12 years of military IT experience, never worked with the cloud. I’ve done 7 deployments (that is a lot in 12 years), at which point I came home from the last one burnt out with a family that barely knew me. I knew I needed a change, but had no clue where to start or what I wanted to do. I wasn’t really interested in IT but I knew it’d pay the bills. After seeing videos about people in IT working from home(which after 8+ years of being gone from home really appealed to me), I stumbled across a video about a Solutions Architect’s daily routine working from home and got me interested in AWS.
AWS Solutions Architect SAA Certification Preparation time: It took me 68 days straight of hard work to pass this exam with confidence. No rest days, more than 120 pages of hand-written notes and hundreds and hundreds of flash cards.
In the beginning, I hopped on Stephane Maarek's course for the CCP exam just to see if it was for me. I did the course in about a week, and then, after doing some research on here, got the CCP practice exams from tutorialsdojo.com. Two weeks after starting the Udemy course, I passed the exam. By that point, I'd already done lots of research on the different career paths and the best way to study, etc.
Cantrill (10/10) – That same day, I hopped onto Cantrill's course for the SAA and got to work. Somebody had mentioned that by doing his courses you'd be over-prepared for the exam. While I think a combination of material is really important for passing the certification with confidence, I can say without a doubt Cantrill's courses got me 85-90% of the way there. His forum is also amazing, and has directly contributed to me talking with somebody who works at AWS to land me a job, which makes the money I spent on all of his courses A STEAL. As I continue my journey (up next is SA Pro), I will be using all of his courses.
Neal Davis (8/10) – After completing Cantrill's course, I found myself needing a resource to reinforce all the material I'd just learned. AWS is an expansive platform and the many intricacies of the different services can be tricky. For this portion, I relied on Neal Davis's Training Notes series. These training notes are a very condensed version of the information you'll need to pass the exam, and with the proper context are very useful to find the things you may have missed in your initial learnings. I will be using his other Training Notes for my other exams as well.
TutorialsDojo (10/10) – These tests filled in the gaps and allowed me to spot my weaknesses and shore them up. I actually think my real exam was harder than these, but because I'd spent so much time on the material I got wrong, I was able to pass the exam with a safe score.
As I said, I was surprised at how difficult the exam was. A lot of my questions were related to DBs, and many gave no context as to whether the data being loaded was SQL or NoSQL, which made choice selection a little frustrating. A lot of the questions have two VERY SIMILAR answers, and often the wording of the answers can be easy to misinterpret (such as when you are creating a Read Replica: do you attach it to the primary application DB that is slowing down because of read issues, or to the service that is causing the primary DB to slow down?). For context, I was scoring 95-100% on the TD exams prior to taking the test and managed an 823 on the exam, so I don't know if I got unlucky with a hard test or if I'm not as prepared as I thought I was (i.e., over-thinking questions).
Anyways, up next is going back over the practical parts of the course as I gear up for the SA Pro exam. I will be taking my time with this one, and re-learning the Linux CLI in preparation for finding a new job.
PS if anybody on here is hiring, I’m looking! I’m the hardest worker I know and my goal is to make your company as streamlined and profitable as possible. 🙂
Whitepapers are detailed documents about each service, published by Amazon on its website. If you are preparing for the AWS certifications, it is very important to read some of the most recommended whitepapers before taking the exam.
Data security questions can be the most challenging, and it's worth noting that you need a good understanding of the security processes described in the whitepaper titled “Overview of Security Processes”.
In the above list, the most important whitepapers are Overview of Security Processes and Storage Options in the Cloud. Read more here…
Stephane Maarek's Udemy course, and his six practice exams
Adrian Cantrill's online course (about 60% done)
(My company has a Udemy Business account, so I was able to use Stephane's course and exams.)
I scheduled my exam at the end of March and started with Adrian's course. But I was dumb to think I could get through his course within 3 weeks… I stopped around 12% of the way through, went to the textbook, and finished reading the all-in-one exam guide over a weekend. Then I started going through Stephane's course. While working through it, I pushed the exam back to the end of April, because I knew I wouldn't be ready by the time it came along.
Five days before the exam, I finished Stephane's course and then took the final exam included in it. I failed miserably (around 50%). So I took one of Stephane's practice exams and did even worse (42%). I thought maybe his exams were just slightly more difficult, so I went and bought Jon Bonso's exams and got 60% on the first one. And then I realized, based on all the questions in those exams, that I was definitely lacking some fundamentals. I went back to Adrian's course and things were definitely sticking more; I think it has to do with his explanations plus the more practical material. Unfortunately, I could not finish his course before the exam (because I was cramming), and by the day of the exam I had only done four of Bonso's six exams, barely passing one of them.
Please, don't do what I did. I was desperate to get this thing over with. I wanted to move on and work on other things for my job search, but if you're not in this situation, please don't do this. I can't for the love of god tell you about OAI and CloudFront and why that's different from an S3 URL. The only thing I can remember is all the practical stuff I did in Adrian's course. I'll never forget how to create a VPC, because he makes you go through it manually. I'm not against Stephane's course; each is different in its own way (see the tips below).
So here’s what I recommend doing before writing for aws exam:
Don't schedule your exam beforehand. Go through the materials you are studying, and make sure you get at least 80% on all of Jon Bonso's exams (I'd recommend 90% or higher).
If you like to learn things practically, I recommend Adrian's course. If you like to learn things conceptually, go with Stephane Maarek's course. I found Stephane's course more detailed when going through different architectures, but I can't say that for sure because I didn't finish Adrian's course.
Jon Bonso's exams were about the same difficulty as the actual exam, but slightly more tricky. For example, many of the questions give you two different situations, and you really have to figure out what is being asked, because the situations might contradict each other while the actual question asks for one specific thing. However, there were a few questions that were definitely obvious if you knew the service.
I'm upset that even though I passed the exam, I'm still lacking some practical skills, so I'm just going to go through Adrian's Developer course, but without cramming this time. If you actually l