AWS Certified Solutions Architects are responsible for designing, deploying, and managing AWS cloud applications. The AWS Certified Solutions Architect – Associate exam validates a candidate's ability to design and deploy secure and robust applications on AWS technologies. AWS Solutions Architect Associate training provides an overview of key AWS services, security, architecture, pricing, and support. The AWS Certified Solutions Architect – Associate (SAA-C03) exam is also a natural stepping stone toward the AWS Certified Solutions Architect – Professional certification, and passing it can lead to a salary raise or promotion for those in cloud roles.
AWS Certified Solutions Architect – Associate average salary
The average salary for an AWS Certified Solutions Architect – Associate is $149,446 per year.
In this blog, we will help you prepare for the AWS Solutions Architect Associate certification exam, give you some facts and summaries, and provide a dump of the top AWS Solutions Architect Associate questions and answers.

The popular AWS Certified Solutions Architect Associate exam will have its new version, SAA-C03, in August 2022.
AWS Certified Solutions Architect – Associate (SAA-C03) Exam Guide

The AWS Certified Solutions Architect – Associate (SAA-C03) exam is intended for individuals who perform in a solutions architect role.
The exam validates a candidate’s ability to use AWS technologies to design solutions based on the AWS Well-Architected Framework.
The exam also validates a candidate’s ability to complete the following tasks:
• Design solutions that incorporate AWS services to meet current business requirements and future projected needs
• Design architectures that are secure, resilient, high-performing, and cost-optimized
• Review existing solutions and determine improvements
Unscored content
The exam includes 15 unscored questions that do not affect your score.
AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.
Target candidate description
The target candidate should have at least 1 year of hands-on experience designing cloud solutions that use AWS services.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether or not you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Content outline:
Domain 1: Design Secure Architectures 30%
Domain 2: Design Resilient Architectures 26%
Domain 3: Design High-Performing Architectures 24%
Domain 4: Design Cost-Optimized Architectures 20%
Domain 1: Design Secure Architectures
This exam domain is focused on securing your architectures on AWS and comprises 30% of the exam. Task statements include:
Task Statement 1: Design secure access to AWS resources.
Knowledge of:
• Access controls and management across multiple accounts
• AWS federated access and identity services (for example, AWS Identity and Access Management [IAM], AWS Single Sign-On [AWS SSO])
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• AWS security best practices (for example, the principle of least privilege)
• The AWS shared responsibility model
Skills in:
• Applying AWS security best practices to IAM users and root users (for example, multi-factor authentication [MFA])
• Designing a flexible authorization model that includes IAM users, groups, roles, and policies
• Designing a role-based access control strategy (for example, AWS Security Token Service [AWS STS], role switching, cross-account access)
• Designing a security strategy for multiple AWS accounts (for example, AWS Control Tower, service control policies [SCPs])
• Determining the appropriate use of resource policies for AWS services
• Determining when to federate a directory service with IAM roles
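For example, the cross-account, role-based access pattern above is typically implemented with AWS STS. A minimal boto3 sketch, assuming a hypothetical role named CrossAccountAuditRole in a placeholder account:

```python
import boto3

# Assume a role in another account via AWS STS, then use the
# temporary credentials it returns. The account ID and role name
# below are placeholders for illustration.
sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/CrossAccountAuditRole",
    RoleSessionName="audit-session",
    DurationSeconds=3600,
)
creds = resp["Credentials"]

# Any client built from these credentials acts in the target account
# with only the permissions the role grants (least privilege).
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```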
Task Statement 2: Design secure workloads and applications.
Knowledge of:
• Application configuration and credentials security
• AWS service endpoints
• Control ports, protocols, and network traffic on AWS
• Secure application access
• Security services with appropriate use cases (for example, Amazon Cognito, Amazon GuardDuty, Amazon Macie)
• Threat vectors external to AWS (for example, DDoS, SQL injection)
Skills in:
• Designing VPC architectures with security components (for example, security groups, route tables, network ACLs, NAT gateways)
• Determining network segmentation strategies (for example, using public subnets and private subnets)
• Integrating AWS services to secure applications (for example, AWS Shield, AWS WAF, AWS SSO, AWS Secrets Manager)
• Securing external network connections to and from the AWS Cloud (for example, VPN, AWS Direct Connect)
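To illustrate controlling ports and protocols, here is a minimal boto3 sketch that creates a security group permitting only HTTPS ingress; the VPC ID and group name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a security group in a placeholder VPC.
sg = ec2.create_security_group(
    GroupName="web-tier-sg",
    Description="Allow HTTPS only",
    VpcId="vpc-0123456789abcdef0",
)

# Security groups deny all inbound traffic by default; open just
# TCP 443 so the web tier accepts nothing but HTTPS.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)
```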
Task Statement 3: Determine appropriate data security controls.
Knowledge of:
• Data access and governance
• Data recovery
• Data retention and classification
• Encryption and appropriate key management
Skills in:
• Aligning AWS technologies to meet compliance requirements
• Encrypting data at rest (for example, AWS Key Management Service [AWS KMS])
• Encrypting data in transit (for example, AWS Certificate Manager [ACM] using TLS)
• Implementing access policies for encryption keys
• Implementing data backups and replications
• Implementing policies for data access, lifecycle, and protection
• Rotating encryption keys and renewing certificates
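As a concrete example of encryption at rest, S3 can encrypt an object with a KMS key at upload time. A minimal boto3 sketch; the bucket name and key alias are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Server-side encryption with AWS KMS: S3 encrypts the object at
# rest using the named customer managed key.
s3.put_object(
    Bucket="example-secure-bucket",   # placeholder bucket
    Key="reports/q1.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-app-key",   # placeholder key alias
)
```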
Domain 2: Design Resilient Architectures
This exam domain is focused on designing resilient architectures on AWS and comprises 26% of the exam. Task statements include:
Task Statement 1: Design scalable and loosely coupled architectures.
Knowledge of:
• API creation and management (for example, Amazon API Gateway, REST API)
• AWS managed services with appropriate use cases (for example, AWS Transfer Family, Amazon Simple Queue Service [Amazon SQS], Secrets Manager)
• Caching strategies
• Design principles for microservices (for example, stateless workloads compared with stateful workloads)
• Event-driven architectures
• Horizontal scaling and vertical scaling
• How to appropriately use edge accelerators (for example, content delivery network [CDN])
• How to migrate applications into containers
• Load balancing concepts (for example, Application Load Balancer)
• Multi-tier architectures
• Queuing and messaging concepts (for example, publish/subscribe)
• Serverless technologies and patterns (for example, AWS Fargate, AWS Lambda)
• Storage types with associated characteristics (for example, object, file, block)
• The orchestration of containers (for example, Amazon Elastic Container Service [Amazon ECS], Amazon Elastic Kubernetes Service [Amazon EKS])
• When to use read replicas
• Workflow orchestration (for example, AWS Step Functions)
Skills in:
• Designing event-driven, microservice, and/or multi-tier architectures based on requirements
• Determining scaling strategies for components used in an architecture design
• Determining the AWS services required to achieve loose coupling based on requirements
• Determining when to use containers
• Determining when to use serverless technologies and patterns
• Recommending appropriate compute, storage, networking, and database technologies based on requirements
• Using purpose-built AWS services for workloads
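Loose coupling often comes down to putting a queue between producers and consumers so each side can scale and fail independently. A minimal boto3 sketch with Amazon SQS; the queue name is a placeholder:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]

# Producer: enqueue work instead of calling the consumer directly.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": "1234"}')

# Consumer: long-poll for messages, process, then delete.
resp = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
)
for msg in resp.get("Messages", []):
    print("processing", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```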
Task Statement 2: Design highly available and/or fault-tolerant architectures.
Knowledge of:
• AWS global infrastructure (for example, Availability Zones, AWS Regions, Amazon Route 53)
• AWS managed services with appropriate use cases (for example, Amazon Comprehend, Amazon Polly)
• Basic networking concepts (for example, route tables)
• Disaster recovery (DR) strategies (for example, backup and restore, pilot light, warm standby, active-active failover, recovery point objective [RPO], recovery time objective [RTO])
• Distributed design patterns
• Failover strategies
• Immutable infrastructure
• Load balancing concepts (for example, Application Load Balancer)
• Proxy concepts (for example, Amazon RDS Proxy)
• Service quotas and throttling (for example, how to configure the service quotas for a workload in a standby environment)
• Storage options and characteristics (for example, durability, replication)
• Workload visibility (for example, AWS X-Ray)
Skills in:
• Determining automation strategies to ensure infrastructure integrity
• Determining the AWS services required to provide a highly available and/or fault-tolerant architecture across AWS Regions or Availability Zones
• Identifying metrics based on business requirements to deliver a highly available solution
• Implementing designs to mitigate single points of failure
• Implementing strategies to ensure the durability and availability of data (for example, backups)
• Selecting an appropriate DR strategy to meet business requirements
• Using AWS services that improve the reliability of legacy applications and applications not built for the cloud (for example, when application changes are not possible)
• Using purpose-built AWS services for workloads
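One common failover strategy from the list above is DNS failover with Amazon Route 53: a health-checked primary record and a secondary that takes over when the check fails. A hedged boto3 sketch; the hosted zone ID, domain, IP addresses, and health check ID are all placeholders:

```python
import boto3

route53 = boto3.client("route53")

# The primary record answers while its health check passes; Route 53
# serves the secondary record when the primary is unhealthy.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "A",
            "SetIdentifier": "primary", "Failover": "PRIMARY",
            "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.10"}],
            "HealthCheckId": "11111111-2222-3333-4444-555555555555",
        }},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "A",
            "SetIdentifier": "secondary", "Failover": "SECONDARY",
            "TTL": 60,
            "ResourceRecords": [{"Value": "198.51.100.10"}],
        }},
    ]},
)
```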
Domain 3: Design High-Performing Architectures
This exam domain is focused on designing high-performing architectures on AWS and comprises 24% of the exam. Task statements include:
Task Statement 1: Determine high-performing and/or scalable storage solutions.
Knowledge of:
• Hybrid storage solutions to meet business requirements
• Storage services with appropriate use cases (for example, Amazon S3, Amazon Elastic File System [Amazon EFS], Amazon Elastic Block Store [Amazon EBS])
• Storage types with associated characteristics (for example, object, file, block)
Skills in:
• Determining storage services and configurations that meet performance demands
• Determining storage services that can scale to accommodate future needs
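As an example of matching a storage configuration to performance demands, gp3 EBS volumes let you provision IOPS and throughput independently of volume size. A minimal boto3 sketch; the Availability Zone and numbers are illustrative:

```python
import boto3

ec2 = boto3.client("ec2")

# gp3 decouples performance from capacity: a 100 GiB volume can
# still be provisioned with 6,000 IOPS and 250 MiB/s throughput.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,          # GiB
    VolumeType="gp3",
    Iops=6000,
    Throughput=250,    # MiB/s
)
```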
Task Statement 2: Design high-performing and elastic compute solutions.
Knowledge of:
• AWS compute services with appropriate use cases (for example, AWS Batch, Amazon EMR, Fargate)
• Distributed computing concepts supported by AWS global infrastructure and edge services
• Queuing and messaging concepts (for example, publish/subscribe)
• Scalability capabilities with appropriate use cases (for example, Amazon EC2 Auto Scaling, AWS Auto Scaling)
• Serverless technologies and patterns (for example, Lambda, Fargate)
• The orchestration of containers (for example, Amazon ECS, Amazon EKS)
Skills in:
• Decoupling workloads so that components can scale independently
• Identifying metrics and conditions to perform scaling actions
• Selecting the appropriate compute options and features (for example, EC2 instance types) to meet business requirements
• Selecting the appropriate resource type and size (for example, the amount of Lambda memory) to meet business requirements
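Scaling actions are typically driven by a metric target. A minimal boto3 sketch of a target tracking policy that keeps the group's average CPU utilization near 60%; the Auto Scaling group name is a placeholder:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking: the group adds or removes instances automatically
# to hold average CPU utilization at the target value.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",   # placeholder group name
    PolicyName="cpu-60-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```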
Task Statement 3: Determine high-performing database solutions.
Knowledge of:
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• Caching strategies and services (for example, Amazon ElastiCache)
• Data access patterns (for example, read-intensive compared with write-intensive)
• Database capacity planning (for example, capacity units, instance types, Provisioned IOPS)
• Database connections and proxies
• Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations)
• Database replication (for example, read replicas)
• Database types and services (for example, serverless, relational compared with non-relational, in-memory)
Skills in:
• Configuring read replicas to meet business requirements
• Designing database architectures
• Determining an appropriate database engine (for example, MySQL compared with PostgreSQL)
• Determining an appropriate database type (for example, Amazon Aurora, Amazon DynamoDB)
• Integrating caching to meet business requirements
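Configuring a read replica to offload read-heavy traffic is a single API call in RDS. A hedged boto3 sketch with placeholder instance identifiers and class:

```python
import boto3

rds = boto3.client("rds")

# Create an asynchronous read replica of an existing RDS instance;
# read traffic can then be directed at the replica endpoint.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="appdb-replica-1",      # placeholder
    SourceDBInstanceIdentifier="appdb-primary",  # placeholder
    DBInstanceClass="db.r6g.large",
)
```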
Task Statement 4: Determine high-performing and/or scalable network architectures.
Knowledge of:
• Edge networking services with appropriate use cases (for example, Amazon CloudFront, AWS Global Accelerator)
• How to design network architecture (for example, subnet tiers, routing, IP addressing)
• Load balancing concepts (for example, Application Load Balancer)
• Network connection options (for example, AWS VPN, Direct Connect, AWS PrivateLink)
Skills in:
• Creating a network topology for various architectures (for example, global, hybrid, multi-tier)
• Determining network configurations that can scale to accommodate future needs
• Determining the appropriate placement of resources to meet business requirements
• Selecting the appropriate load balancing strategy
Task Statement 5: Determine high-performing data ingestion and transformation solutions.
Knowledge of:
• Data analytics and visualization services with appropriate use cases (for example, Amazon Athena, AWS Lake Formation, Amazon QuickSight)
• Data ingestion patterns (for example, frequency)
• Data transfer services with appropriate use cases (for example, AWS DataSync, AWS Storage Gateway)
• Data transformation services with appropriate use cases (for example, AWS Glue)
• Secure access to ingestion access points
• Sizes and speeds needed to meet business requirements
• Streaming data services with appropriate use cases (for example, Amazon Kinesis)
Skills in:
• Building and securing data lakes
• Designing data streaming architectures
• Designing data transfer solutions
• Implementing visualization strategies
• Selecting appropriate compute options for data processing (for example, Amazon EMR)
• Selecting appropriate configurations for ingestion
• Transforming data between formats (for example, .csv to .parquet)
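Transforming data between formats, such as .csv to .parquet, is commonly handled by AWS Glue jobs; the same conversion can be sketched locally with pandas (file names are placeholders, and the pyarrow or fastparquet package must be installed):

```python
import pandas as pd

# Read row-oriented CSV and rewrite it as columnar Parquet, which
# analytics engines such as Athena scan far more cheaply.
df = pd.read_csv("orders.csv")   # placeholder input file
df.to_parquet("orders.parquet", index=False)
```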
Domain 4: Design Cost-Optimized Architectures
This exam domain is focused on optimizing solutions for cost-effectiveness on AWS and comprises 20% of the exam. Task statements include:
Task Statement 1: Design cost-optimized storage solutions.
Knowledge of:
• Access options (for example, an S3 bucket with Requester Pays object storage)
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, AWS Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• AWS storage services with appropriate use cases (for example, Amazon FSx, Amazon EFS, Amazon S3, Amazon EBS)
• Backup strategies
• Block storage options (for example, hard disk drive [HDD] volume types, solid state drive [SSD] volume types)
• Data lifecycles
• Hybrid storage options (for example, DataSync, Transfer Family, Storage Gateway)
• Storage access patterns
• Storage tiering (for example, cold tiering for object storage)
• Storage types with associated characteristics (for example, object, file, block)
Skills in:
• Designing appropriate storage strategies (for example, batch uploads to Amazon S3 compared with individual uploads)
• Determining the correct storage size for a workload
• Determining the lowest cost method of transferring data for a workload to AWS storage
• Determining when storage auto scaling is required
• Managing S3 object lifecycles
• Selecting the appropriate backup and/or archival solution
• Selecting the appropriate service for data migration to storage services
• Selecting the appropriate storage tier
• Selecting the correct data lifecycle for storage
• Selecting the most cost-effective storage service for a workload
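Managing S3 object lifecycles means codifying transitions and expirations per prefix or tag. A minimal boto3 sketch that moves objects under a placeholder prefix to S3 Glacier after 90 days and deletes them after a year:

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule: transition cold data to Glacier, then expire it,
# so storage cost falls as data ages.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",   # placeholder bucket
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-then-expire",
        "Status": "Enabled",
        "Filter": {"Prefix": "logs/"},
        "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": 365},
    }]},
)
```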
Task Statement 2: Design cost-optimized compute solutions.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• AWS purchasing options (for example, Spot Instances, Reserved Instances, Savings Plans)
• Distributed compute strategies (for example, edge processing)
• Hybrid compute options (for example, AWS Outposts, AWS Snowball Edge)
• Instance types, families, and sizes (for example, memory optimized, compute optimized, virtualization)
• Optimization of compute utilization (for example, containers, serverless computing, microservices)
• Scaling strategies (for example, auto scaling, hibernation)
Skills in:
• Determining an appropriate load balancing strategy (for example, Application Load Balancer [Layer 7] compared with Network Load Balancer [Layer 4] compared with Gateway Load Balancer)
• Determining appropriate scaling methods and strategies for elastic workloads (for example, horizontal compared with vertical, EC2 hibernation)
• Determining cost-effective AWS compute services with appropriate use cases (for example, Lambda, Amazon EC2, Fargate)
• Determining the required availability for different classes of workloads (for example, production workloads, non-production workloads)
• Selecting the appropriate instance family for a workload
• Selecting the appropriate instance size for a workload
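Purchasing options are a major compute cost lever. A hedged boto3 sketch that launches a Spot Instance, a good fit for fault-tolerant, interruptible workloads such as batch jobs; the AMI ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# Spot capacity is billed at a steep discount but can be reclaimed
# by AWS, so use it only for interruption-tolerant workloads.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={"MarketType": "spot"},
)
```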
Task Statement 3: Design cost-optimized database solutions.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• Caching strategies
• Data retention policies
• Database capacity planning (for example, capacity units)
• Database connections and proxies
• Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations)
• Database replication (for example, read replicas)
• Database types and services (for example, relational compared with non-relational, Aurora, DynamoDB)
Skills in:
• Designing appropriate backup and retention policies (for example, snapshot frequency)
• Determining an appropriate database engine (for example, MySQL compared with PostgreSQL)
• Determining cost-effective AWS database services with appropriate use cases (for example, DynamoDB compared with Amazon RDS, serverless)
• Determining cost-effective AWS database types (for example, time series format, columnar format)
• Migrating database schemas and data to different locations and/or different database engines
Task Statement 4: Design cost-optimized network architectures.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• Load balancing concepts (for example, Application Load Balancer)
• NAT gateways (for example, NAT instance costs compared with NAT gateway costs)
• Network connectivity (for example, private lines, dedicated lines, VPNs)
• Network routing, topology, and peering (for example, AWS Transit Gateway, VPC peering)
• Network services with appropriate use cases (for example, DNS)
Skills in:
• Configuring appropriate NAT gateway types for a network (for example, a single shared NAT gateway compared with NAT gateways for each Availability Zone)
• Configuring appropriate network connections (for example, Direct Connect compared with VPN compared with internet)
• Configuring appropriate network routes to minimize network transfer costs (for example, Region to Region, Availability Zone to Availability Zone, private to public, Global Accelerator, VPC endpoints)
• Determining strategic needs for content delivery networks (CDNs) and edge caching
• Reviewing existing workloads for network optimizations
• Selecting an appropriate throttling strategy
• Selecting the appropriate bandwidth allocation for a network device (for example, a single VPN compared with multiple VPNs, Direct Connect speed)
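One way to minimize network transfer costs from the list above is a gateway VPC endpoint for S3, which routes S3 traffic privately and avoids NAT gateway data processing charges. A minimal boto3 sketch with placeholder VPC and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoints for S3 (and DynamoDB) are free; traffic to S3
# no longer traverses the NAT gateway or the public internet.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",             # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],   # placeholder
)
```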
Which key tools, technologies, and concepts might be covered on the exam?
The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
• Compute
• Cost management
• Database
• Disaster recovery
• High performance
• Management and governance
• Microservices and component decoupling
• Migration and data transfer
• Networking, connectivity, and content delivery
• Resiliency
• Security
• Serverless and event-driven design principles
• Storage
AWS Services and Features
There are lots of new services and feature updates in scope for the new AWS Certified Solutions Architect Associate certification! Here's the list of AWS services and features in scope for the new version of the exam:
Analytics:
• Amazon Athena
• AWS Data Exchange
• AWS Data Pipeline
• Amazon EMR
• AWS Glue
• Amazon Kinesis
• AWS Lake Formation
• Amazon Managed Streaming for Apache Kafka (Amazon MSK)
• Amazon OpenSearch Service (Amazon Elasticsearch Service)
• Amazon QuickSight
• Amazon Redshift
Application Integration:
• Amazon AppFlow
• AWS AppSync
• Amazon EventBridge (Amazon CloudWatch Events)
• Amazon MQ
• Amazon Simple Notification Service (Amazon SNS)
• Amazon Simple Queue Service (Amazon SQS)
• AWS Step Functions
AWS Cost Management:
• AWS Budgets
• AWS Cost and Usage Report
• AWS Cost Explorer
• Savings Plans
Compute:
• AWS Batch
• Amazon EC2
• Amazon EC2 Auto Scaling
• AWS Elastic Beanstalk
• AWS Outposts
• AWS Serverless Application Repository
• VMware Cloud on AWS
• AWS Wavelength
Containers:
• Amazon Elastic Container Registry (Amazon ECR)
• Amazon Elastic Container Service (Amazon ECS)
• Amazon ECS Anywhere
• Amazon Elastic Kubernetes Service (Amazon EKS)
• Amazon EKS Anywhere
• Amazon EKS Distro
Database:
• Amazon Aurora
• Amazon Aurora Serverless
• Amazon DocumentDB (with MongoDB compatibility)
• Amazon DynamoDB
• Amazon ElastiCache
• Amazon Keyspaces (for Apache Cassandra)
• Amazon Neptune
• Amazon Quantum Ledger Database (Amazon QLDB)
• Amazon RDS
• Amazon Redshift
• Amazon Timestream
Developer Tools:
• AWS X-Ray
Front-End Web and Mobile:
• AWS Amplify
• Amazon API Gateway
• AWS Device Farm
• Amazon Pinpoint
Machine Learning:
• Amazon Comprehend
• Amazon Forecast
• Amazon Fraud Detector
• Amazon Kendra
• Amazon Lex
• Amazon Polly
• Amazon Rekognition
• Amazon SageMaker
• Amazon Textract
• Amazon Transcribe
• Amazon Translate
Management and Governance:
• AWS Auto Scaling
• AWS CloudFormation
• AWS CloudTrail
• Amazon CloudWatch
• AWS Command Line Interface (AWS CLI)
• AWS Compute Optimizer
• AWS Config
• AWS Control Tower
• AWS License Manager
• Amazon Managed Grafana
• Amazon Managed Service for Prometheus
• AWS Management Console
• AWS Organizations
• AWS Personal Health Dashboard
• AWS Proton
• AWS Service Catalog
• AWS Systems Manager
• AWS Trusted Advisor
• AWS Well-Architected Tool
Media Services:
• Amazon Elastic Transcoder
• Amazon Kinesis Video Streams
Migration and Transfer:
• AWS Application Discovery Service
• AWS Application Migration Service (CloudEndure Migration)
• AWS Database Migration Service (AWS DMS)
• AWS DataSync
• AWS Migration Hub
• AWS Server Migration Service (AWS SMS)
• AWS Snow Family
• AWS Transfer Family
Networking and Content Delivery:
• Amazon CloudFront
• AWS Direct Connect
• Elastic Load Balancing (ELB)
• AWS Global Accelerator
• AWS PrivateLink
• Amazon Route 53
• AWS Transit Gateway
• Amazon VPC
• AWS VPN
Security, Identity, and Compliance:
• AWS Artifact
• AWS Audit Manager
• AWS Certificate Manager (ACM)
• AWS CloudHSM
• Amazon Cognito
• Amazon Detective
• AWS Directory Service
• AWS Firewall Manager
• Amazon GuardDuty
• AWS Identity and Access Management (IAM)
• Amazon Inspector
• AWS Key Management Service (AWS KMS)
• Amazon Macie
• AWS Network Firewall
• AWS Resource Access Manager (AWS RAM)
• AWS Secrets Manager
• AWS Security Hub
• AWS Shield
• AWS Single Sign-On
• AWS WAF
Serverless:
• AWS AppSync
• AWS Fargate
• AWS Lambda
Storage:
• AWS Backup
• Amazon Elastic Block Store (Amazon EBS)
• Amazon Elastic File System (Amazon EFS)
• Amazon FSx (for all types)
• Amazon S3
• Amazon S3 Glacier
• AWS Storage Gateway
Out-of-scope AWS services and features
The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.
Analytics:
• Amazon CloudSearch
Application Integration:
• Amazon Managed Workflows for Apache Airflow (Amazon MWAA)
AR and VR:
• Amazon Sumerian
Blockchain:
• Amazon Managed Blockchain
Compute:
• Amazon Lightsail
Database:
• Amazon RDS on VMware
Developer Tools:
• AWS Cloud9
• AWS Cloud Development Kit (AWS CDK)
• AWS CloudShell
• AWS CodeArtifact
• AWS CodeBuild
• AWS CodeCommit
• AWS CodeDeploy
• Amazon CodeGuru
• AWS CodeStar
• Amazon Corretto
• AWS Fault Injection Simulator (AWS FIS)
• AWS Tools and SDKs
Front-End Web and Mobile:
• Amazon Location Service
Game Tech:
• Amazon GameLift
• Amazon Lumberyard
Internet of Things:
• All services
Which new AWS services will be covered in the SAA-C03?
• AWS Data Exchange
• AWS Data Pipeline
• AWS Lake Formation
• Amazon Managed Streaming for Apache Kafka (Amazon MSK)
• Amazon AppFlow
• AWS Outposts
• VMware Cloud on AWS
• AWS Wavelength
• Amazon Neptune
• Amazon Quantum Ledger Database (Amazon QLDB)
• Amazon Timestream
• AWS Amplify
• Amazon Comprehend
• Amazon Forecast
• Amazon Fraud Detector
• Amazon Kendra
• AWS License Manager
• Amazon Managed Grafana
• Amazon Managed Service for Prometheus
• AWS Proton
• Amazon Elastic Transcoder
• Amazon Kinesis Video Streams
• AWS Application Discovery Service
• AWS WAF
• AWS AppSync

Solution Architecture Definition 1:
Solution architecture is the practice of defining and describing the architecture of a system delivered in the context of a specific solution; as such, it may encompass a description of an entire system or only its specific parts. Definition of a solution architecture is typically led by a solution architect.
Solution Architecture Definition 2:
The AWS Certified Solutions Architect – Associate examination is intended for individuals who perform a solutions architect role and have one or more years of hands-on experience designing available, cost-efficient, fault-tolerant, and scalable distributed systems on AWS.
AWS Solution Architect Associate Exam Facts and Summaries (SAA-C03)
- Take an AWS Training Class
- Study AWS Whitepapers and FAQs: AWS Well-Architected webpage (various whitepapers linked)
- If you are running an application in a production environment and must add a new EBS volume with data from a snapshot, what can you do to avoid degraded performance during the volume's first use? Initialize the data by reading each storage block on the volume. Volumes created from an EBS snapshot must be initialized: initialization occurs the first time a storage block on the volume is read, and performance can be impacted by up to 50%. You can avoid this impact in production environments by pre-warming the volume, that is, reading all of its blocks (see the sketch after this list).
- If you are running a legacy application that has hard-coded static IP addresses and is running on an EC2 instance, what is the best failover solution that allows you to keep the same IP address on a new instance? Elastic IP addresses (EIPs) are designed to be attached/detached and moved from one EC2 instance to another. They are a great solution for keeping a static IP address and moving it to a new instance if the current instance fails, which reduces or eliminates any downtime users may experience.
- Which feature of Intel processors helps to encrypt data without significant impact on performance? AES-NI.
- You can mount EFS from which two of the following? On-premises servers running Linux, and EC2 instances running Linux. EFS is not compatible with Windows operating systems.
- When a file is encrypted and the stored data is not in transit, it's known as encryption at rest. An example is data encrypted on an EBS volume or in an S3 bucket.
- When would vertical scaling be necessary? When an application is built as a single codebase, otherwise known as a monolithic application.
- Fault tolerance allows for continuous operation throughout a failure, which can lead to a low Recovery Time Objective.
- High availability means automating tasks so that an instance will quickly recover, which can lead to a low Recovery Time Objective.
- Frequent backups reduce the time between the last backup and the recovery point, otherwise known as the Recovery Point Objective.
- What is the difference between fault tolerance and high availability? High availability means the system will quickly recover from a failure event, while fault tolerance means the system will maintain operations during a failure.
- From a security perspective, what is a principal? A principal can be an anonymous user or an authenticated user acting on a system.
- What are the two types of session data saving for an application session state? Stateless and stateful.
- It is the customer's responsibility to patch the operating system on an EC2 instance.
- In designing an environment, what four main points should a Solutions Architect keep in mind? Cost efficiency, security, application session state, and undifferentiated heavy lifting: these four points should frame the design of an environment.
- In the context of disaster recovery, what does RPO stand for? Recovery Point Objective.
- What are the benefits of horizontal scaling? Vertical scaling can be costly while horizontal scaling is cheaper; horizontal scaling suffers from none of the size limitations of vertical scaling; and horizontal scaling means you can easily route traffic to another instance of a server.
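A minimal sketch of the pre-warming step from the first item in the list above. Initialization is normally done with dd or fio; this Python equivalent reads every block of a placeholder device and requires root on the instance:

```python
# Read every block of an EBS volume restored from a snapshot so that
# first-use reads don't pay the initialization penalty.
CHUNK = 1024 * 1024  # read 1 MiB at a time

# /dev/xvdf is a placeholder device path; run as root.
with open("/dev/xvdf", "rb", buffering=0) as dev:
    while dev.read(CHUNK):
        pass  # the read itself initializes each block
```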
Reference: AWS Solution Architect Associate Exam Prep
Top 100 AWS Solution Architect Associate Exam Prep Questions and Answers Dump – SAA-C03


Q1: A Solutions Architect is designing a critical business application with a relational database that runs on an EC2 instance. It requires a single EBS volume that can support up to 16,000 IOPS.
Which Amazon EBS volume type can meet the performance requirements of this application?
- A. EBS Provisioned IOPS SSD
- B. EBS Throughput Optimized HDD
- C. EBS General Purpose SSD
- D. EBS Cold HDD
Q2: An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk.
Which solution will resolve the security concern?
- A. Access the data through an Internet Gateway.
- B. Access the data through a VPN connection.
- C. Access the data through a NAT Gateway.
- D. Access the data through a VPC endpoint for Amazon S3.
Q3: An organization is building an Amazon Redshift cluster in their shared services VPC. The cluster will host sensitive data.
How can the organization control which networks can access the cluster?
- A. Run the cluster in a different VPC and connect through VPC peering.
- B. Create a database user inside the Amazon Redshift cluster only for users on the network.
- C. Define a cluster security group for the cluster that allows access from the allowed networks.
- D. Only allow access to networks that connect with the shared services network via VPN.
Q4: A web application allows customers to upload orders to an S3 bucket. The resulting Amazon S3 events trigger a Lambda function that inserts a message into an SQS queue. A single EC2 instance reads messages from the queue, processes them, and stores them in a DynamoDB table partitioned by unique order ID. Next month, traffic is expected to increase by a factor of 10, and a Solutions Architect is reviewing the architecture for possible scaling problems.
Which component is MOST likely to need re-architecting to be able to scale to accommodate the new traffic?
- A. Lambda function
- B. SQS queue
- C. EC2 instance
- D. DynamoDB table
Q5: An application requires a highly available relational database with an initial storage capacity of 8 TB. The database will grow by 8 GB every day. To support expected traffic, at least eight read replicas will be required to handle database reads.
Which option will meet these requirements?
- A. DynamoDB
- B. Amazon S3
- C. Amazon Aurora
- D. Amazon Redshift
Q6: How can you improve the performance of EFS?
- A. Use an instance-store backed EC2 instance.
- B. Provision more throughput than is required.
- C. Divide your file system into multiple smaller file systems.
- D. Provision higher IOPs for your EFS.
Q7:
If you are designing an application that requires fast (10–25 Gbps), low-latency connections between EC2 instances, what EC2 feature should you use?
- A. Snapshots
- B. Instance store volumes
- C. Placement groups
- D. IOPS provisioned instances.
Q8: A Solution Architect is designing an online shopping application running in a VPC on EC2 instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application tier must read and write data to a customer managed database cluster. There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet.
Which VPC design meets these requirements?
- A. Public subnets for both the application tier and the database cluster
- B. Public subnets for the application tier, and private subnets for the database cluster
- C. Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster
- D. Public subnets for the application tier, and private subnets for the database cluster and NAT Gateway
Q9: What command should you run on a running instance if you want to view its user data (that is used at launch)?
- A. curl http://254.169.254.169/latest/user-data
- B. curl http://localhost/latest/meta-data/bootstrap
- C. curl http://localhost/latest/user-data
- D. curl http://169.254.169.254/latest/user-data

Q10: A company is developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? (Select TWO.)
- A. CloudWatch
- B. DynamoDB
- C. Elastic Load Balancing
- D. ElastiCache
- E. Storage Gateway

Q11: From a security perspective, what is a principal?
- A. An identity
- B. An anonymous user
- C. An authenticated user
- D. A resource
Q12: What are the characteristics of a tiered application?
- A. All three application layers are on the same instance
- B. The presentation tier is on a separate instance from the logic layer
- C. None of the tiers can be cloned
- D. The logic layer is on a separate instance from the data layer
- E. Additional machines can be added to help the application by implementing horizontal scaling
- F. Incapable of horizontal scaling
Q13: When using horizontal scaling, how can a server's capacity closely match its rising demand?
A. By frequently purchasing additional instances and smaller resources
B. By purchasing more resources very far in advance
C. By purchasing more resources after demand has risen
D. It is not possible to predict demand
Q14: What is the concept behind AWS’ Well-Architected Framework?
A. It’s a set of best practice areas, principles, and concepts that can help you implement effective AWS solutions.
B. It’s a set of best practice areas, principles, and concepts that can help you implement effective solutions tailored to your specific business.
C. It’s a set of best practice areas, principles, and concepts that can help you implement effective solutions from another web host.
D. It’s a set of best practice areas, principles, and concepts that can help you implement effective E-Commerce solutions.
Q15: Select the true statements regarding AWS Regions.
A. Availability Zones are isolated locations within regions
B. Region codes identify specific regions (example: US-EAST-2)
C. All AWS Regions contain the full set of AWS services.
D. An AWS Region is assigned based on the user’s location when creating an AWS account.
Q16: Which is not one of the five pillars of a well-architected framework?
A. Reliability
B. Performance Efficiency
C. Structural Simplicity
D. Security
E. Operational Excellence
Q17: You lead a team to develop a new online game application in AWS EC2. The application will have a large number of users globally. For a great user experience, this application requires very low network latency and jitter. If the network speed is not fast enough, you will lose customers. Which tool would you choose to improve the application performance? (Select TWO.)
A. AWS VPN
B. AWS Global Accelerator
C. Direct Connect
D. API Gateway
E. CloudFront
Q18: A company has a media processing application deployed in a local data center. Its file storage is built on a Microsoft Windows file server. The application and file server need to be migrated to AWS. You want to quickly set up the file server in AWS, and the application code should continue to work when accessing the file systems. Which method should you choose to create the file server?
A. Create a Windows File Server from Amazon WorkSpaces.
B. Configure a high performance Windows File System in Amazon EFS.
C. Create a Windows File Server in Amazon FSx.
D. Configure a secure enterprise storage through Amazon WorkDocs.

Q19: You are developing an application using the AWS SDK to get objects from AWS S3. The objects are large, and failures sometimes occur when getting them, especially when network connectivity is poor. You want to get a specific range of bytes in a single GET request and retrieve the whole object in parts. Which method can achieve this?
A. Enable multipart upload in the AWS SDK.
B. Use the “Range” HTTP header in a GET request to download the specified range bytes of an object.
C. Reduce the retry requests and enlarge the retry timeouts through AWS SDK when fetching S3 objects.
D. Retrieve the whole S3 object through a single GET operation.
Q20: You have an application hosted in an Auto Scaling group, and an Application Load Balancer distributes traffic to the ASG. You want to add a scaling policy that keeps the average aggregate CPU utilization of the Auto Scaling group at 60 percent. The capacity of the Auto Scaling group should increase or decrease based on this target value. Which scaling policy does it belong to?
A. Target tracking scaling policy.
B. Step scaling policy.
C. Simple scaling policy.
D. Scheduled scaling policy.
Q21: You need to launch a number of EC2 instances to run Cassandra. There are large distributed and replicated workloads in Cassandra and you plan to launch instances using EC2 placement groups. The traffic should be distributed evenly across several partitions and each partition should contain multiple instances. Which strategy would you use when launching the placement groups?
A. Cluster placement strategy
B. Spread placement strategy.
C. Partition placement strategy.
D. Network placement strategy.
Q22: To improve the network performance, you launch a C5 EC2 Amazon Linux instance and enable enhanced networking by modifying the instance attribute with “aws ec2 modify-instance-attribute --instance-id instance_id --ena-support”. Which mechanism does the EC2 instance use to enhance the networking capabilities?
A. Intel 82599 Virtual Function (VF) interface.
B. Elastic Fabric Adapter (EFA).
C. Elastic Network Adapter (ENA).
D. Elastic Network Interface (ENI).
Q23: You work for an online retailer where any downtime at all can cause a significant loss of revenue. You have architected your application to be deployed on an Auto Scaling Group of EC2 instances behind a load balancer. You have configured and deployed these resources using a CloudFormation template. The Auto Scaling Group is configured with default settings and a simple CPU utilization scaling policy. You have also set up multiple Availability Zones for high availability. The load balancer does health checks against an HTML file generated by a script. When you begin performing load testing on your application, you notice in CloudWatch that the load balancer is not sending traffic to one of your EC2 instances. What could be the problem?
A. The EC2 instance has failed the load balancer health check.
B. The instance has not been registered with CloudWatch.
C. The EC2 instance has failed EC2 status checks.
D. You are load testing at a moderate traffic level and not all instances are needed.
Q24: Your company is using a hybrid configuration because there are some legacy applications which are not easily converted and migrated to AWS. And with this configuration comes a typical scenario where the legacy apps must maintain the same private IP address and MAC address. You are attempting to convert the application to the cloud and have configured an EC2 instance to house the application. What you are currently testing is removing the ENI from the legacy instance and attaching it to the EC2 instance. You want to attempt a cold attach. What does this mean?
A. Attach ENI when it’s stopped.
B. Attach ENI before the public IP address is assigned.
C. Attach ENI to an instance when it’s running.
D. Attach ENI when the instance is being launched.
Q25: Your company has recently converted to a hybrid cloud environment and will slowly be migrating to a fully AWS cloud environment. The AWS side needs some steps to prepare for disaster recovery. A disaster recovery plan needs to be drawn up and disaster recovery drills need to be performed for compliance reasons. The company wants to establish Recovery Time and Recovery Point Objectives, and the RTO and RPO can be pretty relaxed. The main point is to have a plan in place, with as much cost savings as possible. Which AWS disaster recovery pattern will best meet these requirements?
A. Warm Standby
B. Backup and restore
C. Multi Site
D. Pilot Light
Q26: An international travel company has an application which provides travel information and alerts to users all over the world. The application is hosted on groups of EC2 instances in Auto Scaling Groups in multiple AWS Regions. There are also load balancers routing traffic to these instances. In two countries, Ireland and Australia, there are compliance rules in place that dictate users connect to the application in eu-west-1 and ap-southeast-1. Which service can you use to meet this requirement?
A. Use Route 53 weighted routing.
B. Use Route 53 geolocation routing.
C. Configure CloudFront and the users will be routed to the nearest edge location.
D. Configure the load balancers to route users to the proper region.
Q26: You have taken over management of several instances in the company AWS environment. You want to quickly review scripts used to bootstrap the instances at runtime. A URL command can be used to do this. What can you append to the URL http://169.254.169.254/latest/ to retrieve this data?
A. user-data/
B. instance-demographic-data/
C. meta-data/
D. instance-data/

Q27: A software company has created an application to capture service requests from users and also enhancement requests. The application is deployed on an Auto Scaling group of EC2 instances fronted by an Application Load Balancer. The Auto Scaling group has scaled to maximum capacity, but there are still requests being lost. The cost of these instances is becoming an issue. What step can the company take to ensure requests aren’t lost?
A. Use larger instances in the Auto Scaling group.
B. Use spot instances to save money.
C. Use an SQS queue with the Auto Scaling group to capture all requests.
D. Use a Network Load Balancer instead for faster throughput.

Q28: A company has an Auto Scaling group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. The company has a very aggressive Recovery Time Objective (RTO) in case of disaster. How long does a failover typically take to complete?
A. Under 10 minutes
B. Within an hour
C. Almost instantly
D. One to two minutes

Q29: You have two EC2 instances running in the same VPC, but in different subnets. You are removing the secondary ENI from an EC2 instance and attaching it to another EC2 instance. You want this to be fast and with limited disruption. So you want to attach the ENI to the EC2 instance when it’s running. What is this called?
A. hot attach
B. warm attach
C. cold attach
D. synchronous attach
Q30: You suspect that one of the AWS services your company is using has gone down. How can you check on the status of this service?
A. AWS Trusted Advisor
B. Amazon Inspector
C. AWS Personal Health Dashboard
D. AWS Organizations
Q31: You have configured an Auto Scaling Group of EC2 instances fronted by an Application Load Balancer and backed by an RDS database. You want to begin monitoring the EC2 instances using CloudWatch metrics. Which metric is not readily available out of the box?
A. CPU utilization
B. DiskReadOps
C. NetworkIn
D. Memory utilization
Q32: Several instances you are creating have a specific data requirement. The requirement states that the data on the root device needs to persist independently from the lifetime of the instance. After considering AWS storage options, which is the simplest way to meet these requirements?
A. Store your root device data on Amazon EBS.
B. Store the data on the local instance store.
C. Create a cron job to migrate the data to S3.
D. Send the data to S3 using S3 lifecycle rules.
Q33: A company has an Auto Scaling Group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. What will happen to preserve high availability if the primary database fails?
A. A Lambda function kicks off a CloudFormation template to deploy a backup database.
B. The CNAME is switched from the primary db instance to the secondary.
C. Route 53 points the CNAME to the secondary database instance.
D. The Elastic IP address for the primary database is moved to the secondary database.
Q34: After several issues with your application and unplanned downtime, your recommendation to migrate your application to AWS is approved. You have set up high availability on the front end with a load balancer and an Auto Scaling Group. What step can you take with your database to configure high-availability and ensure minimal downtime (under five minutes)?
A. Create a read replica.
B. Enable Multi-AZ failover on the database.
C. Take frequent snapshots of your database.
D. Create your database using CloudFormation and save the template for reuse.
Q35: A new startup is considering the advantages of using DynamoDB versus a traditional relational database in AWS RDS. The NoSQL nature of DynamoDB presents a small learning curve to the team members who all have experience with traditional databases. The company will have multiple databases, and the decision will be made on a case-by-case basis. Which of the following use cases would favour DynamoDB? Select two.
A. Strong referential integrity between tables
B. Storing BLOB data
C. Storing infrequently accessed data
D. Managing web session data
E. Storing metadata for S3 objects
Q36: You have been tasked with designing a strategy for backing up EBS volumes attached to an instance-store-backed EC2 instance. You have been asked for an executive summary on your design, and the executive summary should include an answer to the question, “What can an EBS volume do while a snapshot is in progress?”
A. The volume can be used normally while the snapshot is in progress.
B. The volume can only accommodate writes while a snapshot is in progress.
C. The volume can not be used while a snapshot is in progress.
D. The volume can only accommodate reads while a snapshot is in progress.
Q37: You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling Groups that you need to create. One requirement is that you need to reuse some software licenses and therefore need to use dedicated hosts on EC2 instances in your Auto Scaling Groups. What step must you take to meet this requirement?
A. Create your launch configuration, but manually change the instances to Dedicated Hosts in the EC2 console.
B. Use a launch template with your Auto Scaling Group.
C. Create the Dedicated Host EC2 instances, then add them to an existing Auto Scaling Group.
D. Make sure your launch configurations are using Dedicated Hosts.

Q38: Your organization uses AWS CodeDeploy for deployments. Now you are starting a project on the AWS Lambda platform. For your deployments, you’ve been given a requirement of performing blue-green deployments. When you perform deployments, you want to split traffic, sending a small percentage of the traffic to the new version of your application. Which deployment configuration will allow this splitting of traffic?
A. Canary
B. All at Once
C. Linear
D. Weighted routing

Q39: A financial institution has an application that produces huge amounts of actuary data, which is ultimately expected to be in the terabyte range. There is a need to run complex analytic queries against terabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Which storage service will best meet this requirement?
A. RDS
B. DynamoDB
C. Redshift
D. ElastiCache

Q40: A company has an application for sharing static content, such as photos. The popularity of the application has grown, and the company is now sharing content worldwide. This worldwide service has caused some issues with latency. What AWS services can be used to host a static website, serve content to globally dispersed users, and address latency issues, while keeping cost under control? Choose two.
A. EC2 placement group
B. S3
C. CloudFront
D. AWS Global Accelerator
E. AWS CloudFormation
Q41: You have just been hired by a large organization which uses many different AWS services in their environment. Some of the services which handle data include: RDS, Redshift, ElastiCache, DynamoDB, S3, and Glacier. You have been instructed to configure a web application using stateless web servers. Which services can you use to handle session state data? Choose two.
A. RDS
B. Glacier
C. Redshift
D. ElastiCache
E. DynamoDB
Q42: After an IT Steering Committee meeting you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies based on the requirements you are given. Your primary requirement is the necessity for a private, dedicated connection, which bypasses the Internet and can provide throughput of 10 Gbps. Which option will you select?
A. AWS Direct Connect
B. VPC Peering
C. AWS VPN
D. AWS Direct Gateway
Q43: An application is hosted on an EC2 instance in a VPC. The instance is in a subnet in the VPC, and the instance has a public IP address. There is also an internet gateway and a security group with the proper ingress configured. But your testers are unable to access the instance from the Internet. What could be the problem?
A. Make sure the instance has a private IP address.
B. Add a route to the route table, from the subnet containing the instance, to the Internet Gateway.
C. A NAT gateway needs to be configured.
D. A Virtual private gateway needs to be configured.
Q44: A data company has implemented a subscription service for storing video files. There are two levels of subscription: personal and professional use. The personal users can upload a total of 5 GB of data, and professional users can upload as much as 5 TB of data. The application can upload files of size up to 1 TB to an S3 Bucket. What is the best way to upload files of this size?
A. Multipart upload
B. Single-part Upload
C. AWS Snowball
D. AWS Snowmobile

Q45: You have multiple EC2 instances housing applications in a VPC in a single Availability Zone. The applications need to communicate at extremely high throughputs to avoid latency for end users. The average throughput needs to be 6 Gbps. What’s the best measure you can do to ensure this throughput?
A. Put the instances in a placement group
B. Use Elastic Network Interfaces
C. Use Auto Scaling Groups
D. Increase the size of the instances
Q46: A team member has been tasked to configure four EC2 instances for four separate applications. These are not high-traffic apps, so there is no need for an Auto Scaling Group. The instances are all in the same public subnet and each instance has an EIP address, and all of the instances have the same Security Group. But none of the instances can send or receive internet traffic. You verify that all the instances have a public IP address. You also verify that an internet gateway has been configured. What is the most likely issue?
A. There is no route in the route table to the internet gateway (or it has been deleted).
B. Each instance needs its own security group.
C. The route table is corrupt.
D. You are using the default NACL.
Q47: You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. The application to be deployed on these instances is a life insurance application which requires path-based and host-based routing. Which type of load balancer will you need to use?
A. Any type of load balancer will meet these requirements.
B. Classic Load Balancer
C. Network Load Balancer
D. Application Load Balancer
Q48: You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. You were considering using an Application Load Balancer, but some of the requirements you have been given seem to point to a Classic Load Balancer. Which requirement would be better served by an Application Load Balancer?
A. Support for EC2-Classic
B. Path-based routing
C. Support for sticky sessions using application-generated cookies
D. Support for TCP and SSL listeners
Q49: You have been tasked to review your company disaster recovery plan due to some new requirements. The driving factor is that the Recovery Time Objective has become very aggressive. Because of this, it has been decided to configure Multi-AZ deployments for the RDS MySQL databases. Unrelated to DR, it has been determined that some read traffic needs to be offloaded from the master database. What step can be taken to meet this requirement?
A. Convert to Aurora to allow the standby to serve read traffic.
B. Redirect some of the read traffic to the standby database.
C. Add DAX to the solution to alleviate excess read traffic.
D. Add read replicas to offload some read traffic.
Q50: A gaming company is designing several new games which focus heavily on player-game interaction. The player makes a certain move and the game has to react very quickly to change the environment based on that move and to present the next decision for the player in real-time. A tool is needed to continuously collect data about player-game interactions and feed the data into the gaming platform in real-time. Which AWS service can best meet this need?
A. AWS Lambda
B. Kinesis Data Streams
C. Kinesis Data Analytics
D. AWS IoT
Q51: You are designing an architecture for a financial company which provides a day trading application to customers. After viewing the traffic patterns for the existing application you notice that traffic is fairly steady throughout the day, with the exception of large spikes at the opening of the market in the morning and at closing around 3 pm. Your architecture will include an Auto Scaling Group of EC2 instances. How can you configure the Auto Scaling Group to ensure that system performance meets the increased demands at opening and closing of the market?
A. Configure a Dynamic Scaling Policy to scale based on CPU Utilization.
B. Use a load balancer to ensure that the load is distributed evenly during high-traffic periods.
C. Configure your Auto Scaling Group to have a desired size which will be able to meet the demands of the high-traffic periods.
D. Use a predictive scaling policy on the Auto Scaling Group to meet opening and closing spikes.
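The intended answer is D: predictive scaling learns the daily pattern and provisions capacity ahead of the opening and closing spikes. A minimal boto3 sketch, with hypothetical group and policy names:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Attach a predictive scaling policy to the group.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="day-trading-asg",
    PolicyName="market-open-close-predictive",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,  # target average CPU utilization
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",  # forecast load and scale capacity ahead of the spikes
    },
)
```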
Q52: A software gaming company has produced an online racing game which uses CloudFront for fast delivery to worldwide users. The game also uses DynamoDB for storing in-game and historical user data. The DynamoDB table has a preconfigured read and write capacity. Users have been reporting slow down issues, and an analysis has revealed that the DynamoDB table has begun throttling during peak traffic times. Which step can you take to improve game performance?
A. Add a load balancer in front of the web servers.
B. Add ElastiCache to cache frequently accessed data in memory.
C. Add an SQS Queue to queue requests which could be lost.
D. Make sure DynamoDB Auto Scaling is turned on.
Q53: You have configured an Auto Scaling Group of EC2 instances. You have begun testing the scaling of the Auto Scaling Group, using a stress tool to drive up the CPU utilization metric that triggers scale-out actions, and then removing the stress to force a scale-in. But you notice that these actions only take place at five-minute intervals. What is happening?
A. Auto Scaling Groups can only scale in intervals of five minutes or greater.
B. The Auto Scaling Group is following the default cooldown procedure.
C. A load balancer is managing the load and limiting the effectiveness of stressing the servers.
D. The stress tool is configured to run for five minutes.
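The answer is B: scaling activities wait out the Auto Scaling default cooldown, which is 300 seconds. If you need faster test cycles, a minimal boto3 sketch (group name is hypothetical) can shorten it:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Shorten the default cooldown from 300 seconds so test scaling events can occur sooner.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="stress-test-asg",  # hypothetical group name
    DefaultCooldown=120,                     # seconds between scaling activities
)
```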
Q54: A team of architects is designing a new AWS environment for a company which wants to migrate to the Cloud. The architects are considering the use of EC2 instances with instance store volumes. The architects realize that the data on the instance store volumes is ephemeral. Which action will not cause the data to be deleted on an instance store volume?
A. Reboot
B. The underlying disk drive fails.
C. Hardware disk failure.
D. Instance is stopped
Q55: You work for an advertising company that has a real-time bidding application. You are also using CloudFront on the front end to accommodate a worldwide user base. Your users begin complaining about response times and pauses in real-time bidding. Which service can be used to reduce DynamoDB response times by an order of magnitude (milliseconds to microseconds)?
A. DAX
B. DynamoDB Auto Scaling
C. ElastiCache
D. CloudFront Edge Caches
Q56: A travel company has deployed a website which serves travel updates to users all over the world. The database behind the site serves very read-heavy traffic and can have latency issues at certain times of the year. What can you do to alleviate these latency issues?
A. Place CloudFront in front of the Database.
B. Add read replicas
C. Configure RDS Multi-AZ
D. Configure multi-Region RDS

Q57: A large financial institution is gradually moving their infrastructure and applications to AWS. The company has data needs that will utilize all of RDS, DynamoDB, Redshift, and ElastiCache. Which description best describes Amazon Redshift?
A. Key-value and document database that delivers single-digit millisecond performance at any scale.
B. Cloud-based relational database.
C. Can be used to significantly improve latency and throughput for many read-heavy application workloads.
D. Near real-time complex querying on massive data sets.

Q58: You are designing an architecture which will house an Auto Scaling Group of EC2 instances. The application hosted on the instances is expected to be an extremely popular social networking site. Forecasts for traffic to this site expect very high traffic and you will need a load balancer to handle tens of millions of requests per second while maintaining high throughput at ultra low latency. You need to select the type of load balancer to front your Auto Scaling Group to meet this high traffic requirement. Which load balancer will you select?
A. You will need an Application Load Balancer to meet this requirement.
B. All the AWS load balancers meet the requirement and perform the same.
C. You will select a Network Load Balancer to meet this requirement.
D. You will need a Classic Load Balancer to meet this requirement.

Q59: An organization of about 100 employees has performed the initial setup of users in IAM. All users except administrators have the same basic privileges. But now it has been determined that 50 employees will have extra restrictions on EC2. They will be unable to launch new instances or alter the state of existing instances. What will be the quickest way to implement these restrictions?
A. Create an IAM Role for the restrictions. Attach it to the EC2 instances.
B. Create the appropriate policy. Place the restricted users in the new policy.
C. Create the appropriate policy. With only 50 users, attach the policy to each user.
D. Create the appropriate policy. Create a new group for the restricted users. Place the restricted users in the new group and attach the policy to the group.
Q60: You are managing S3 buckets in your organization. This management of S3 extends to Amazon Glacier. For auditing purposes you would like to be informed if an object is restored to S3 from Glacier. What is the most efficient way you can do this?
A. Create a CloudWatch event for uploads to S3
B. Create an SNS notification for any upload to S3.
C. Configure S3 notifications for restore operations from Glacier.
D. Create a Lambda function which is triggered by restoration of object from Glacier to S3.
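The most efficient option is C: S3 event notifications support Glacier restore events natively. A minimal boto3 sketch, assuming a hypothetical bucket and SNS topic:

```python
import boto3

s3 = boto3.client("s3")

# Publish an SNS notification whenever a restore from Glacier completes.
s3.put_bucket_notification_configuration(
    Bucket="my-audit-bucket",  # hypothetical bucket
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:glacier-restores",
                "Events": ["s3:ObjectRestore:Completed"],  # fired when a restore finishes
            }
        ]
    },
)
```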
Q61: Your company has gotten back results from an audit. One of the mandates from the audit is that your application, which is hosted on EC2, must encrypt the data before writing this data to storage. Which service could you use to meet this requirement?
A. AWS CloudHSM
B. Security Token Service
C. EBS encryption
D. AWS KMS
Q62: Recent worldwide events have dictated that you perform your duties as a Solutions Architect from home. You need to be able to manage several EC2 instances while working from home and have been testing the ability to ssh into these instances. One instance in particular has been a problem and you cannot ssh into this instance. What should you check first to troubleshoot this issue?
A. Make sure that the security group for the instance has ingress on port 80 from your home IP address.
B. Make sure that your VPC has a connected Virtual Private Gateway.
C. Make sure that the security group for the instance has ingress on port 22 from your home IP address.
D. Make sure that the Security Group for the instance has ingress on port 443 from your home IP address.
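Check C first: SSH uses TCP port 22, so the security group needs an inbound rule for it from your home IP. A minimal boto3 sketch with a hypothetical group ID and IP address:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH (TCP 22) from a single home IP.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.25/32", "Description": "home IP"}],
        }
    ],
)
```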

Q62: A consultant is hired by a small company to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within the VPC. The initial instances will be placed in a public subnet. The consultant begins to create security groups. What is true of the default security group?
A. You can delete this group, however, you can’t change the group’s rules.
B. You can delete this group or you can change the group’s rules.
C. You can’t delete this group, nor can you change the group’s rules.
D. You can’t delete this group, however, you can change the group’s rules.
Q63: You are evaluating the security setting within the main company VPC. There are several NACLs and security groups to evaluate and possibly edit. What is true regarding NACLs and security groups?
A. Network ACLs and security groups are both stateful.
B. Network ACLs and security groups are both stateless.
C. Network ACLs are stateless, and security groups are stateful.
D. Network ACLs are stateful, and security groups are stateless.
Q64: Your company needs to deploy an application in the company AWS account. The application will reside on EC2 instances in an Auto Scaling Group fronted by an Application Load Balancer. The company has been using Elastic Beanstalk to deploy the application due to limited AWS experience within the organization. The application now needs upgrades and a small team of subcontractors have been hired to perform these upgrades. What can be used to provide the subcontractors with short-lived access tokens that act as temporary security credentials to the company AWS account?
A. IAM Roles
B. AWS STS
C. IAM user accounts
D. AWS SSO
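AWS STS (option B) issues exactly these short-lived tokens. A minimal boto3 sketch of assuming a role, with a hypothetical role ARN:

```python
import boto3

sts = boto3.client("sts")

# The subcontractor assumes a scoped-down role for one hour.
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/SubcontractorUpgradeRole",  # hypothetical
    RoleSessionName="upgrade-session",
    DurationSeconds=3600,
)
creds = response["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration
```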
Q65: The company you work for has reshuffled teams a bit and you’ve been moved from the AWS IAM team to the AWS Network team. One of your first assignments is to review the subnets in the main VPCs. What are two key concepts regarding subnets?
A. A subnet spans all the Availability Zones in a Region.
B. Private subnets can only hold databases.
C. Each subnet maps to a single Availability Zone.
D. Every subnet you create is associated with the main route table for the VPC.
E. Each subnet is associated with one security group.
Q66: Amazon Web Services offers 4 different levels of support. Which of the following are valid support levels? Choose 3
A. Enterprise
B. Developer
C. Corporate
D. Business
E. Free Tier
Q67: You are reviewing Change Control requests, and you note that there is a change designed to reduce wasted CPU cycles by increasing the value of your Amazon SQS “VisibilityTimeout” attribute. What does this mean?
A. While processing a message, a consumer instance can amend the message visibility counter by a fixed amount.
B. When a consumer instance retrieves a message, that message will be hidden from other consumer instances for a fixed period.
C. When the consumer instance polls for new work the SQS service will allow it to wait a certain time for a message to be available before closing the connection.
D. While processing a message, a consumer instance can reset the message visibility by restarting the preset timeout counter.
E. When the consumer instance polls for new work, the consumer instance will wait a certain time until it has a full workload before closing the connection.
F. When a new message is added to the SQS queue, it will be hidden from consumer instances for a fixed period.
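The correct reading is B: the visibility timeout hides an in-flight message from other consumers. A minimal boto3 sketch, with a hypothetical queue URL, showing a consumer extending that window while it works:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # hypothetical

# Receiving a message makes it invisible to other consumers for the visibility timeout.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in messages.get("Messages", []):
    # Extend the invisibility window to 5 minutes while this consumer keeps working,
    # so no other consumer wastes CPU cycles processing the same message.
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=300,
    )
```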
Q68: You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP. However, when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect?
A. After a few minutes.
B. Immediately.
C. Straight away, but to the new instances only.
D. Straight away to the new instances, but old instances must be stopped and restarted before the new rules apply.
Q69: Amazon SQS keeps track of all tasks and events in an application.
A. True
B. False
Q70: Your Security Manager has hired a security contractor to audit your network and firewall configurations. The consultant doesn’t have access to an AWS account. You need to provide the required access for the auditing tasks, and answer a question about login details for the official AWS firewall appliance. Which of the following might you do? Choose 2
A. Create an IAM User with a policy that can Read Security Group and NACL settings.
B. Explain that AWS implements network security differently and that there is no such thing as an official AWS firewall appliance. Security Groups and NACLs are used instead.
C. Create an IAM Role with a policy that can Read Security Group and NACL settings.
D. Explain that AWS is a cloud service and that AWS manages the Network appliances.
E. Create an IAM Role with a policy that can Read Security Group and Route settings.
Q71: How many internet gateways can I attach to my custom VPC?
A. 5
B. 3
C. 2
D. 1
Q72: How long can a message be retained in an SQS Queue?
A. 14 days
B. 1 day
C. 7 days
D. 30 days
Q73: Although your application customarily runs at 30% usage, you have identified a recurring usage spike (>90%) between 8pm and midnight daily. What is the most cost-effective way to scale your application to meet this increased need?
A. Manually deploy Reactive Event-based Scaling each night at 7:45.
B. Deploy additional EC2 instances to meet the demand.
C. Use scheduled scaling to boost your capacity at a fixed interval.
D. Increase the size of the Resource Group to meet demand.
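Scheduled scaling (option C) fits a spike that recurs at the same time every day. A minimal boto3 sketch, with hypothetical names and sizes:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out ahead of the 8pm-to-midnight spike.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",              # hypothetical group
    ScheduledActionName="evening-spike-scale-out",
    Recurrence="45 19 * * *",  # 7:45pm daily (cron; UTC unless a TimeZone is supplied)
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)
```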
Q74: To save money, you quickly stored some data in one of the attached volumes of an EC2 instance and stopped it for the weekend. When you returned on Monday and restarted your instance, you discovered that your data was gone. Why might that be?
A. The EBS volume was not large enough to store your data.
B. The instance failed to connect to the root volume on Monday.
C. The elastic block-level storage service failed over the weekend.
D. The volume was ephemeral, block-level storage. Data on an instance store volume is lost if an instance is stopped.
Q75: Select all the true statements on S3 URL styles: Choose 2
A. Virtual hosted-style URLs will eventually be deprecated in favor of Path-Style URLs for S3 bucket access.
B. Virtual-host-style URLs (such as: https://bucket-name.s3.Region.amazonaws.com/key name) are supported by AWS.
C. Path-Style URLs (such as https://s3.Region.amazonaws.com/bucket-name/key name) are supported by AWS.
D. DNS compliant names are NOT recommended for the URLs to access S3.
Q76: With EBS, I can ____. Choose 2
A. Create an encrypted snapshot from an unencrypted snapshot by creating an encrypted copy of the unencrypted snapshot.
B. Create an unencrypted volume from an encrypted snapshot.
C. Create an encrypted volume from a snapshot of another encrypted volume.
D. Encrypt an existing volume.
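A and C are the true statements. The encrypted-copy path from option A might look like this boto3 sketch (snapshot ID is hypothetical):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an encrypted copy of an unencrypted snapshot.
ec2.copy_snapshot(
    SourceSnapshotId="snap-0123456789abcdef0",  # hypothetical unencrypted snapshot
    SourceRegion="us-east-1",
    Encrypted=True,             # the copy is encrypted even though the source was not
    KmsKeyId="alias/aws/ebs",   # optional; defaults to the account's EBS key
)
```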
Q77: You have been engaged by a company to design and lead a migration to an AWS environment. The team is concerned about the capabilities of the new environment, especially when it comes to high availability and cost-effectiveness. The design calls for about 20 instances (c3.2xlarge) pulling jobs/messages from SQS. Network traffic per instance is estimated to be around 500 Mbps at the beginning and end of each job. Which configuration should you plan on deploying?
A. Use a 2nd Network Interface to separate the SQS traffic from the storage traffic.
B. Choose a different instance type that better matched the traffic demand.
C. Spread the Instances over multiple AZs to minimize the traffic concentration and maximize fault tolerance.
D. Deploy as a Cluster Placement Group as the aggregated burst traffic could be around 10 Gbps.
Q78: You are a solutions architect working for a cosmetics company. Your company has a busy Magento online store that consists of a two-tier architecture. The web servers are on EC2 instances deployed across multiple AZs, and the database is on a Multi-AZ RDS MySQL database instance. Your store is having a Black Friday sale in five days, and having reviewed the performance for the last sale you expect the site to start running very slowly during the peak load. You investigate and you determine that the database was struggling to keep up with the number of reads that the store was generating. Which solution would you implement to improve the application read performance the most?
A. Deploy an Amazon ElastiCache cluster with nodes running in each AZ.
B. Upgrade your RDS MySQL instance to use provisioned IOPS.
C. Add an RDS Read Replica in each AZ.
D. Upgrade the RDS MySQL instance to a larger type.
Q79: Which native AWS service will act as a file system mounted on an S3 bucket?
A. Amazon Elastic Block Store
B. File Gateway
C. Amazon S3
D. Amazon Elastic File System
Q80: You have been evaluating the NACLs in your company. Most of the NACLs are configured the same way:
- Rule 100: All traffic, ALLOW
- Rule 200: All traffic, DENY
- Rule *: All traffic, DENY
If a request comes in, how will it be evaluated?
A. The default will deny traffic.
B. The request will be allowed.
C. The highest numbered rule will be used, a deny.
D. All rules will be evaluated and the end result will be Deny.
Q81: You have been given an assignment to configure Network ACLs in your VPC. Before configuring the NACLs, you need to understand how the NACLs are evaluated. How are NACL rules evaluated?
A. NACL rules are evaluated by rule number from lowest to highest and executed immediately when a matching rule is found.
B. NACL rules are evaluated by rule number from highest to lowest, and executed immediately when a matching rule is found.
C. All NACL rules that you configure are evaluated before traffic is passed through.
D. NACL rules are evaluated by rule number from highest to lowest, and all are evaluated before traffic is passed through.
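The answer is A: rules are evaluated in ascending rule-number order, and the first match wins. A minimal boto3 sketch of creating a low-numbered allow rule (NACL ID is hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Lower rule numbers are evaluated first; this allow rule (100) wins over any later deny.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # hypothetical NACL
    RuleNumber=100,
    Protocol="-1",          # all protocols
    RuleAction="allow",
    Egress=False,           # inbound rule
    CidrBlock="0.0.0.0/0",
)
```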

Q82: Your company has gone through an audit with a focus on data storage. You are currently storing historical data in Amazon Glacier. One of the results of the audit is that a portion of the infrequently-accessed historical data must be able to be accessed immediately upon request. Where can you store this data to meet this requirement?
A. S3 Standard
B. Leave infrequently-accessed data in Glacier.
C. S3 Standard-IA
D. Store the data in EBS
Q84: After an IT Steering Committee meeting, you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies, such as VPN and Direct Connect, and based on the requirements you have decided to configure a VPN connection. What features and advantages can a VPN connection provide?
Q86: Your company has decided to go to a hybrid cloud environment. Part of this effort will be to move a large data warehouse to the cloud. The warehouse is 50TB, and will take over a month to migrate given the current bandwidth available. What is the best option available to perform this migration considering both cost and performance aspects?
Q87: You have been assigned the review of the security in your company AWS cloud environment. Your final deliverable will be a report detailing potential security issues. One of the first things that you need to describe is the responsibilities of the company under the shared responsibility module. Which measure is the customer’s responsibility?
Q88: You work for a busy real estate company, and you need to protect your data stored on S3 from accidental deletion. Which of the following actions might you take to achieve this? Choose 2
A. Create a bucket policy that prohibits anyone from deleting things from the bucket.
B. Enable S3 – Infrequent Access Storage (S3 – IA).
C. Enable versioning on the bucket. If a file is accidentally deleted, delete the delete marker.
D. Configure MFA-protected API access.
E. Use pre-signed URL’s so that users will not be able to accidentally delete data.
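C and D are the intended picks. The versioning half of that, plus removing a delete marker to “undelete” an object, might look like this boto3 sketch (bucket, key, and version ID are hypothetical):

```python
import boto3

s3 = boto3.client("s3")
bucket = "real-estate-data"  # hypothetical bucket

# Enable versioning so deletes only add a delete marker instead of removing data.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# To "undelete" an object, delete the delete marker's specific version.
s3.delete_object(Bucket=bucket, Key="listing.csv", VersionId="<delete-marker-version-id>")
```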
Q89: AWS intends to shut down your Spot Instance; which of these scenarios is possible? Choose 3
A. AWS sends a notification of termination and you receive it 120 seconds before the intended forced shutdown.
B. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, and you delay it by sending a ‘Delay300’ instruction before the forced shutdown takes effect.
C. AWS sends a notification of termination and you receive it 120 seconds before the intended forced shutdown, but AWS does not action the shutdown.
D. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, but you block the shutdown because you used ‘Termination Protection’ when you initialized the instance.
E. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, but the defined duration period (also known as Spot blocks) hasn’t ended yet.
F. AWS sends a notification of termination, but you do not receive it within the 120 seconds and the instance is shut down.
Q90: What does the “EAR” in a policy document stand for?
A. Effects, APIs, Roles
B. Effect, Action, Resource
C. Ewoks, Always, Romanticize
D. Every, Action, Reasonable
Q92: You can use _ to build a schema for your data, and _ to query the data that’s stored in S3.
A. Glue, Athena
B. EC2, SQS
C. EC2, Glue
D. Athena, Lambda
Q93: What type of work does EMR perform?
A. Data processing information (DPI) jobs.
B. Big data (BD) jobs.
C. Extract, transform, and load (ETL) jobs.
D. Huge amounts of data (HAD) jobs
Q94: _____ allows you to transform data using SQL as it’s being passed through Kinesis.
A. RDS
B. Kinesis Data Analytics
C. Redshift
D. DynamoDB
Q95 [SAA-C03]: A company runs a public-facing three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances for the application tier running in private subnets need to download software patches from the internet. However, the EC2 instances cannot be directly accessible from the internet. Which actions should be taken to allow the EC2 instances to download the needed patches? (Select TWO.)
A. Configure a NAT gateway in a public subnet.
B. Define a custom route table with a route to the NAT gateway for internet traffic and associate it with the private subnets for the application tier.
C. Assign Elastic IP addresses to the EC2 instances.
D. Define a custom route table with a route to the internet gateway for internet traffic and associate it with the private subnets for the application tier.
E. Configure a NAT instance in a private subnet.
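A and B are the intended pair: a NAT gateway in a public subnet, plus a private route table pointing at it. A minimal boto3 sketch with hypothetical IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# The NAT gateway lives in a public subnet and uses an Elastic IP allocation.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0a1b2c3d4e5f67890",          # hypothetical public subnet
    AllocationId="eipalloc-0123456789abcdef0",    # hypothetical EIP allocation
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Route internet-bound traffic from the private subnets through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```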
Q96 [SAA-C03]: A solutions architect wants to design a solution to save costs for Amazon EC2 instances that do not need to run during a 2-week company shutdown. The applications running on the EC2 instances store data in instance memory that must be present when the instances resume operation. Which approach should the solutions architect recommend to shut down and resume the EC2 instances?
A. Modify the application to store the data on instance store volumes. Reattach the volumes while restarting them.
B. Snapshot the EC2 instances before stopping them. Restore the snapshot after restarting the instances.
C. Run the applications on EC2 instances enabled for hibernation. Hibernate the instances before the 2-week company shutdown.
D. Note the Availability Zone for each EC2 instance before stopping it. Restart the instances in the same Availability Zones after the 2-week company shutdown.

Q97 [SAA-C03]: A company plans to run a monitoring application on an Amazon EC2 instance in a VPC. Connections are made to the EC2 instance using the instance’s private IPv4 address. A solutions architect needs to design a solution that will allow traffic to be quickly directed to a standby EC2 instance if the application fails and becomes unreachable. Which approach will meet these requirements?
A. Deploy an Application Load Balancer configured with a listener for the private IP address and register the primary EC2 instance with the load balancer. Upon failure, de-register the instance and register the standby EC2 instance.
B. Configure a custom DHCP option set. Configure DHCP to assign the same private IP address to the standby EC2 instance when the primary EC2 instance fails.
C. Attach a secondary elastic network interface to the EC2 instance configured with the private IP address. Move the network interface to the standby EC2 instance if the primary EC2 instance becomes unreachable.
D. Associate an Elastic IP address with the network interface of the primary EC2 instance. Disassociate the Elastic IP from the primary instance upon failure and associate it with a standby EC2 instance.
Q98 [SAA-C03]: An analytics company is planning to offer a web analytics service to its users. The service will require that the users’ webpages include a JavaScript script that makes authenticated GET requests to the company’s Amazon S3 bucket. What must a solutions architect do to ensure that the script will successfully execute?
A. Enable cross-origin resource sharing (CORS) on the S3 bucket.
B. Enable S3 Versioning on the S3 bucket.
C. Provide the users with a signed URL for the script.
D. Configure an S3 bucket policy to allow public execute privileges.
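Enabling CORS (option A) is what lets a script served from the users’ domains call the bucket. A minimal boto3 sketch, with a hypothetical bucket and origin list:

```python
import boto3

s3 = boto3.client("s3")

# Allow browsers on the users' pages to make GET requests against the bucket.
s3.put_bucket_cors(
    Bucket="analytics-assets",  # hypothetical bucket
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["*"],            # or a list of specific customer domains
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["Authorization"],
                "MaxAgeSeconds": 3000,              # how long browsers cache the preflight
            }
        ]
    },
)
```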
Q99 [SAA-C03]: A company’s security team requires that all data stored in the cloud be encrypted at rest at all times using encryption keys stored on premises. Which encryption options meet these requirements? (Select TWO.)
A. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3).
B. Use server-side encryption with AWS KMS managed encryption keys (SSE-KMS).
C. Use server-side encryption with customer-provided encryption keys (SSE-C).
D. Use client-side encryption to provide at-rest encryption.
E. Use an AWS Lambda function invoked by Amazon S3 events to encrypt the data using the customer’s keys.
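C and D both keep key material on premises. The SSE-C path could look like this boto3 sketch (bucket and key name are hypothetical; the random key stands in for a real on-premises key):

```python
import os
import boto3

s3 = boto3.client("s3")

# A 256-bit customer-provided key; in practice this comes from the on-premises
# key store, not os.urandom.
customer_key = os.urandom(32)

s3.put_object(
    Bucket="secure-data",           # hypothetical bucket
    Key="records/2022.bin",
    Body=b"sensitive payload",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,    # boto3 base64-encodes the key and adds its MD5 header
)
```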
Q100 [SAA-C03]: A company uses Amazon EC2 Reserved Instances to run its data processing workload. The nightly job typically takes 7 hours to run and must finish within a 10-hour time window. The company anticipates temporary increases in demand at the end of each month that will cause the job to run over the time limit with the capacity of the current resources. Once started, the processing job cannot be interrupted before completion. The company wants to implement a solution that would provide increased resource capacity as cost-effectively as possible. What should a solutions architect do to accomplish this?
A. Deploy On-Demand Instances during periods of high demand.
B. Create a second EC2 reservation for additional instances.
C. Deploy Spot Instances during periods of high demand.
D. Increase the EC2 instance size in the EC2 reservation to support the increased workload.
Q101 [SAA-C03]: A company runs an online voting system for a weekly live television program. During broadcasts, users submit hundreds of thousands of votes within minutes to a front-end fleet of Amazon EC2 instances that run in an Auto Scaling group. The EC2 instances write the votes to an Amazon RDS database. However, the database is unable to keep up with the requests that come from the EC2 instances. A solutions architect must design a solution that processes the votes in the most efficient manner and without downtime. Which solution meets these requirements?
A. Migrate the front-end application to AWS Lambda. Use Amazon API Gateway to route user requests to the Lambda functions.
B. Scale the database horizontally by converting it to a Multi-AZ deployment. Configure the front-end application to write to both the primary and secondary DB instances.
C. Configure the front-end application to send votes to an Amazon Simple Queue Service (Amazon SQS) queue. Provision worker instances to read the SQS queue and write the vote information to the database.
D. Use Amazon EventBridge (Amazon CloudWatch Events) to create a scheduled event to re-provision the database with larger, memory optimized instances during voting periods. When voting ends, re-provision the database to use smaller instances.
Q102 [SAA-C03]: A company has a two-tier application architecture that runs in public and private subnets. Amazon EC2 instances running the web application are in the public subnet and an EC2 instance for the database runs on the private subnet. The web application instances and the database are running in a single Availability Zone (AZ). Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO.)
A. Create new public and private subnets in the same AZ.
B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs for the web application instances.
C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer.
D. Create new public and private subnets in a new AZ. Create a database using an EC2 instance in the public subnet in the new AZ. Migrate the old database contents to the new database.
E. Create new public and private subnets in the same VPC, each in a new AZ. Create an Amazon RDS Multi-AZ DB instance in the private subnets. Migrate the old database contents to the new DB instance.
Q103 [SAA-C03]: A website runs a custom web application that receives a burst of traffic each day at noon. The users upload new pictures and content daily, but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the application consistently takes 1 minute to initiate upon boot up before responding to user requests. How should a solutions architect redesign the architecture to better respond to changing traffic?
A. Configure a Network Load Balancer with a slow start configuration.
B. Configure Amazon ElastiCache for Redis to offload direct requests from the EC2 instances.
C. Configure an Auto Scaling step scaling policy with an EC2 instance warmup condition.
D. Configure Amazon CloudFront to use an Application Load Balancer as the origin.
Q104 [SAA-C03]: An application running on AWS uses an Amazon Aurora Multi-AZ DB cluster deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database. What should the solutions architect do to separate the read requests from the write requests?
A. Enable read-through caching on the Aurora database.
B. Update the application to read from the Multi-AZ standby instance.
C. Create an Aurora replica and modify the application to use the appropriate endpoints.
D. Create a second Aurora database and link it to the primary database as a read replica.
Question 106: A company plans to migrate its on-premises workload to AWS. The current architecture is composed of a Microsoft SharePoint server that uses a Windows shared file storage. The Solutions Architect needs to use a cloud storage solution that is highly available and can be integrated with Active Directory for access control and authentication. Which of the following options can satisfy the given requirement?
A. Create a file system using Amazon EFS and join it to an Active Directory domain.
B. Create a file system using Amazon FSx for Windows File Server and join it to an Active Directory domain in AWS.
C. Create a Network File System (NFS) file share using AWS Storage Gateway.
D. Launch an Amazon EC2 Windows Server to mount a new S3 bucket as a file volume.
Question 108: A Forex trading platform, which frequently processes and stores global financial data every minute, is hosted in your on-premises data center and uses an Oracle database. Due to a recent cooling problem in their data center, the company urgently needs to migrate their infrastructure to AWS to improve the performance of their applications. As the Solutions Architect, you are responsible for ensuring that the database is properly migrated and remains available in case of database server failure in the future. Which of the following is the most suitable solution to meet the requirement?
A. Create an Oracle database in RDS with Multi-AZ deployments.
B. Launch an Oracle database instance in RDS with Recovery Manager (RMAN) enabled.
C. Launch an Oracle Real Application Clusters (RAC) in RDS.
D. Convert the database schema using the AWS Schema Conversion Tool and AWS Database Migration Service. Migrate the Oracle database to a non-cluster Amazon Aurora with a single instance.
Question 109: A data analytics company, which uses machine learning to collect and analyze consumer data, is using a Redshift cluster as its data warehouse. You are instructed to implement a disaster recovery plan for their systems to ensure business continuity even in the event of an AWS region outage. Which of the following is the best approach to meet this requirement?
A. Do nothing because Amazon Redshift is a highly available, fully-managed data warehouse which can withstand an outage of an entire AWS region.
B. Enable Cross-Region Snapshots Copy in your Amazon Redshift Cluster.
C. Create a scheduled job that will automatically take the snapshot of your Redshift Cluster and store it to an S3 bucket. Restore the snapshot in case of an AWS region outage.
D. Use Automated snapshots of your Redshift Cluster.
Question 109: A start-up company has an EC2 instance that is hosting a web application. The volume of users is expected to grow in the coming months and hence, you need to add more elasticity and scalability in your AWS architecture to cope with the demand. Which of the following options can satisfy the above requirement for the given scenario? (Select TWO.)
A. Set up two EC2 instances and then put them behind an Elastic Load balancer (ELB).
B. Set up two EC2 instances deployed using Launch Templates and integrated with AWS Glue.
C. Set up an S3 Cache in front of the EC2 instance.
D. Set up two EC2 instances and use Route 53 to route traffic based on a Weighted Routing Policy.
E. Set up an AWS WAF behind your EC2 Instance.
Question 110: A company plans to deploy a Docker-based batch application in AWS. The application will be used to process both mission-critical data as well as non-essential batch jobs. Which of the following is the most cost-effective option to use in implementing this architecture?
A. Use ECS as the container management service then set up Reserved EC2 Instances for processing both mission-critical and non-essential batch jobs.
B. Use ECS as the container management service then set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively.
C. Use ECS as the container management service then set up On-Demand EC2 Instances for processing both mission-critical and non-essential batch jobs.
D. Use ECS as the container management service then set up Spot EC2 Instances for processing both mission-critical and non-essential batch jobs.
Question 112: An online stocks trading application that stores financial data in an S3 bucket has a lifecycle policy that moves older data to Glacier every month. There is a strict compliance requirement where a surprise audit can happen at any time and you should be able to retrieve the required data in under 15 minutes under all circumstances. Your manager instructed you to ensure that retrieval capacity is available when you need it and that it can handle up to 150 MB/s of retrieval throughput. Which of the following should you do to meet the above requirement? (Select TWO.)
A. Retrieve the data using Amazon Glacier Select.
B. Use Bulk Retrieval to access the financial data.
C. Purchase provisioned retrieval capacity.
D. Use Expedited Retrieval to access the financial data.
E. Specify a range, or portion, of the financial data archive to retrieve.
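C and D together guarantee the 15-minute window: Expedited retrievals plus purchased provisioned retrieval capacity. A minimal boto3 sketch of an Expedited restore (bucket and key are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Restore an archived object with the Expedited tier (typically 1-5 minutes).
# Provisioned retrieval capacity is purchased separately to guarantee throughput.
s3.restore_object(
    Bucket="stock-trading-archive",  # hypothetical bucket
    Key="2021/trades.csv",
    RestoreRequest={
        "Days": 1,  # how long the temporary restored copy stays available
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```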
Question 113: An organization stores and manages financial records of various companies in its on-premises data center, which is almost out of space. The management decided to move all of their existing records to a cloud storage service. All future financial records will also be stored in the cloud. For additional security, all records must be prevented from being deleted or overwritten. Which of the following should you do to meet the above requirement?
A. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon S3 and enable object lock.
B. Use AWS DataSync to move the data. Store all of your data in Amazon EFS and enable object lock.
C. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon EBS and enable object lock.
D. Use AWS DataSync to move the data. Store all of your data in Amazon S3 and enable object lock.
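Whichever migration path is chosen, the “prevent deletion or overwrite” requirement maps to S3 Object Lock. A minimal boto3 sketch of a default compliance-mode retention rule (bucket name and retention period are hypothetical; Object Lock must be enabled when the bucket is created):

```python
import boto3

s3 = boto3.client("s3")

# Apply a default retention rule; COMPLIANCE mode blocks deletion and overwrite
# by all users, including the root account, until retention expires.
s3.put_object_lock_configuration(
    Bucket="financial-records",  # hypothetical bucket created with Object Lock enabled
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {
            "DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}
        },
    },
)
```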
Question 114: A solutions architect is designing a solution to run a containerized web application by using Amazon Elastic Container Service (Amazon ECS). The solutions architect wants to minimize cost by running multiple copies of a task on each container instance. The number of task copies must scale as the load increases and decreases. Which routing solution distributes the load to the multiple tasks?
A. Configure an Application Load Balancer to distribute the requests by using path-based routing.
B. Configure an Application Load Balancer to distribute the requests by using dynamic host port mapping.
C. Configure an Amazon Route 53 alias record set to distribute the requests with a failover routing policy.
D. Configure an Amazon Route 53 alias record set to distribute the requests with a weighted routing policy.
Question 115: A Solutions Architect needs to deploy a mobile application that can collect votes for a popular singing competition. Millions of users from around the world will submit votes using their mobile phones. These votes must be collected and stored in a highly scalable and highly available data store which will be queried for real-time ranking. Which of the following combination of services should the architect use to meet this requirement?
A. Amazon Redshift and AWS Mobile Hub
B. Amazon DynamoDB and AWS AppSync
C. Amazon Relational Database Service (RDS) and Amazon MQ
D. Amazon Aurora and Amazon Cognito
Question 116: The usage of a company’s image-processing application is increasing suddenly with no set pattern. The application’s processing time grows linearly with the size of the image. The processing can take up to 20 minutes for large image files. The architecture consists of a web tier, an Amazon Simple Queue Service (Amazon SQS) standard queue, and message consumers that process the images on Amazon EC2 instances. When a high volume of requests occurs, the message backlog in Amazon SQS increases. Users are reporting the delays in processing. A solutions architect must improve the performance of the application in compliance with cloud best practices. Which solution will meet these requirements?
A. Purchase enough Dedicated Instances to meet the peak demand. Deploy the instances for the consumers.
B. Convert the existing SQS standard queue to an SQS FIFO queue. Increase the visibility timeout.
C. Configure a scalable AWS Lambda function as the consumer of the SQS messages.
D. Create a message consumer that is an Auto Scaling group of instances. Configure the Auto Scaling group to scale based upon the ApproximateNumberOfMessages Amazon CloudWatch metric.
Question 117: An application is hosted on an EC2 instance with multiple EBS Volumes attached and uses Amazon Neptune as its database. To improve data security, you encrypted all of the EBS volumes attached to the instance to protect the confidential data stored in the volumes. Which of the following statements are true about encrypted Amazon Elastic Block Store volumes? (Select TWO.)
Question 118: A reporting application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. For complex reports, the application can take up to 15 minutes to respond to a request. A solutions architect is concerned that users will receive HTTP 5xx errors if a report request is in process during a scale-in event. What should the solutions architect do to ensure that user requests will be completed before instances are terminated?
A. Enable sticky sessions (session affinity) for the target group of the instances.
B. Increase the instance size in the Application Load Balancer target group.
C. Increase the cooldown period for the Auto Scaling group to a greater amount of time than the time required for the longest running responses.
D. Increase the deregistration delay timeout for the target group of the instances to greater than 900 seconds.
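Raising the target group’s deregistration delay (option D) gives in-flight requests time to complete before a scaled-in instance is removed. A minimal boto3 sketch with a hypothetical target group ARN:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Give in-flight report requests up to 920 seconds (> 15 minutes) to finish
# before the target is fully deregistered during scale-in.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                   "targetgroup/reports/0123456789abcdef",  # hypothetical ARN
    Attributes=[
        {"Key": "deregistration_delay.timeout_seconds", "Value": "920"},
    ],
)
```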
Question 119: A company used Amazon EC2 Spot Instances for a demonstration that is now complete. A solutions architect must remove the Spot Instances to stop them from incurring cost. What should the solutions architect do to meet this requirement?
A. Cancel the Spot request only.
B. Terminate the Spot Instances only.
C. Cancel the Spot request. Terminate the Spot Instances.
D. Terminate the Spot Instances. Cancel the Spot request.
Question 120: Which components are required to build a site-to-site VPN connection on AWS? (Select TWO.)
A. An Internet Gateway
B. A NAT gateway
C. A Customer Gateway
D. A Virtual Private Gateway
E. Amazon API Gateway
Question 121: A company runs its website on Amazon EC2 instances behind an Application Load Balancer that is configured as the origin for an Amazon CloudFront distribution. The company wants to protect against cross-site scripting and SQL injection attacks. Which approach should a solutions architect recommend to meet these requirements?
A. Enable AWS Shield Advanced. List the CloudFront distribution as a protected resource.
B. Define an AWS Shield Advanced policy in AWS Firewall Manager to block cross-site scripting and SQL injection attacks.
C. Set up AWS WAF on the CloudFront distribution. Use conditions and rules that block cross-site scripting and SQL injection attacks.
D. Deploy AWS Firewall Manager on the EC2 instances. Create conditions and rules that block cross-site scripting and SQL injection attacks.
Question 122: A media company is designing a new solution for graphic rendering. The application requires up to 400 GB of storage for temporary data that is discarded after the frames are rendered. The application requires approximately 40,000 random IOPS to perform the rendering. What is the MOST cost-effective storage option for this rendering application?
A. A storage optimized Amazon EC2 instance with instance store storage
B. A storage optimized Amazon EC2 instance with a Provisioned IOPS SSD (io1 or io2) Amazon Elastic Block Store (Amazon EBS) volume
C. A burstable Amazon EC2 instance with a Throughput Optimized HDD (st1) Amazon Elastic Block Store (Amazon EBS) volume
D. A burstable Amazon EC2 instance with Amazon S3 storage over a VPC endpoint
Question 123: A company is deploying a new application that will consist of an application layer and an online transaction processing (OLTP) relational database. The application must be available at all times. However, the application will have periods of inactivity. The company wants to pay the minimum for compute costs during these idle periods. Which solution meets these requirements MOST cost-effectively?
A. Run the application in containers with Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Use Amazon Aurora Serverless for the database.
B. Run the application on Amazon EC2 instances by using a burstable instance type. Use Amazon Redshift for the database.
C. Deploy the application and a MySQL database to Amazon EC2 instances by using AWS CloudFormation. Delete the stack at the beginning of the idle periods.
D. Deploy the application on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. Use Amazon RDS for MySQL for the database.

What are the 6 pillars of the AWS Well-Architected Framework?
AWS Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. Based on six pillars — operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability — AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures, and implement designs that can scale over time.
1. Operational Excellence
The operational excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. You can find prescriptive guidance on implementation in the Operational Excellence Pillar whitepaper.
2. Security
The security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. You can find prescriptive guidance on implementation in the Security Pillar whitepaper.
3. Reliability
The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. You can find prescriptive guidance on implementation in the Reliability Pillar whitepaper.
4. Performance Efficiency
The performance efficiency pillar includes the ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve. You can find prescriptive guidance on implementation in the Performance Efficiency Pillar whitepaper.
5. Cost Optimization
The cost optimization pillar includes the ability to avoid or eliminate unneeded cost or suboptimal resources. You can find prescriptive guidance on implementation in the Cost Optimization Pillar whitepaper.
6. Sustainability
The sustainability pillar includes the ability to increase efficiency across all components of a workload by maximizing the benefits from the provisioned resources. There are six best practice areas for sustainability in the cloud:
- Region Selection – AWS Global Infrastructure
- User Behavior Patterns – Auto Scaling, Elastic Load Balancing
- Software and Architecture Patterns – AWS Design Principles
- Data Patterns – Amazon EBS, Amazon EFS, Amazon FSx, Amazon S3
- Hardware Patterns – Amazon EC2, AWS Elastic Beanstalk
- Development and Deployment Process – AWS CloudFormation
Key AWS service: Amazon EC2 Auto Scaling
Source: 6 Pillars of the AWS Well-Architected Framework
The AWS Well-Architected Framework provides architectural best practices across the six pillars for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. The framework provides a set of questions that allows you to review an existing or proposed architecture. It also provides a set of AWS best practices for each pillar.
Using the Framework in your architecture helps you produce stable and efficient systems, which allows you to focus on functional requirements.
Other AWS Facts and Summaries and Questions/Answers Dump
- AWS Certified Solution Architect Associate Exam Prep App
- AWS S3 facts and summaries and Q&A Dump
- AWS DynamoDB facts and summaries and Questions and Answers Dump
- AWS EC2 facts and summaries and Questions and Answers Dump
- AWS Serverless facts and summaries and Questions and Answers Dump
- AWS Developer and Deployment Theory facts and summaries and Questions and Answers Dump
- AWS IAM facts and summaries and Questions and Answers Dump
- AWS Lambda facts and summaries and Questions and Answers Dump
- AWS SQS facts and summaries and Questions and Answers Dump
- AWS RDS facts and summaries and Questions and Answers Dump
- AWS ECS facts and summaries and Questions and Answers Dump
- AWS CloudWatch facts and summaries and Questions and Answers Dump
- AWS SES facts and summaries and Questions and Answers Dump
- AWS EBS facts and summaries and Questions and Answers Dump
- AWS ELB facts and summaries and Questions and Answers Dump
- AWS Autoscaling facts and summaries and Questions and Answers Dump
- AWS VPC facts and summaries and Questions and Answers Dump
- AWS KMS facts and summaries and Questions and Answers Dump
- AWS Elastic Beanstalk facts and summaries and Questions and Answers Dump
- AWS CodeBuild facts and summaries and Questions and Answers Dump
- AWS CodeDeploy facts and summaries and Questions and Answers Dump
- AWS CodePipeline facts and summaries and Questions and Answers Dump

What does undifferentiated heavy lifting mean?
The reality, of course, today is that if you come up with a great idea you don’t get to go quickly to a successful product. There’s a lot of undifferentiated heavy lifting that stands between your idea and that success. The kinds of things that I’m talking about when I say undifferentiated heavy lifting are things like these: figuring out which servers to buy, how many of them to buy, and on what timeline to buy them.
Eventually you end up with heterogeneous hardware and you have to match that. You have to think about backup scenarios if you lose your data center or lose connectivity to a data center. Eventually you have to move facilities. There are negotiations to be done. It’s a very complex set of activities that really is a big driver of ultimate success.
But they are undifferentiated from, it’s not the heart of, your idea. We call this muck. And it gets worse because what really happens is you don’t have to do this one time. You have to drive this loop. After you get your first version of your idea out into the marketplace, you’ve done all that undifferentiated heavy lifting, you find out that you have to cycle back. Change your idea. The winners are the ones that can cycle this loop the fastest.
On every cycle of this loop you have this undifferentiated heavy lifting, or muck, that you have to contend with. I believe that for most companies, and it’s certainly true at Amazon, that 70% of your time, energy, and dollars go into the undifferentiated heavy lifting and only 30% of your energy, time, and dollars gets to go into the core kernel of your idea.
I think what people are excited about is that they’re going to get a chance they see a future where they may be able to invert those two. Where they may be able to spend 70% of their time, energy and dollars on the differentiated part of what they’re doing.

AWS Certified Solutions Architect Associate Questions and Answers around the web.
Testimonial: Passed SAA-C02!
So my exam was yesterday and I got the results in 24 hours. I think that’s how they review all SAA exams now, not showing the results right away anymore.
I scored 858. I was practicing with Stephane’s Udemy lectures and Bonso’s exam tests. My test results were as follows: Test 1: 63%, 93%; Test 2: 67%, 87%; Test 3: 81%; Test 4: 72%; Test 5: 75%; Test 6: 81%; Stephane’s test: 80%.
I was reading all question explanations (even the ones I got correct)
The actual exam was pretty much similar to these. The topics I got were:
- A lot of S3 (make sure you know all of it from head to toes)
- VPC peering
- DataSync and Database Migration Service in the same questions. Make sure you know the difference
- One EKS question
- 2-3 KMS questions
- Security group question
- A lot of RDS Multi-AZ
- SQS + SNS fan-out pattern
- ECS microservice architecture question
- Route 53
- NAT gateway
And that’s all I can remember.
I took an extra 30 minutes because English is not my native language, and I had plenty of time to think and then review flagged questions.
Good luck with your exams guys!
Testimonial: Passed SAA-C02

Hey guys, just giving my update so all of you guys working towards your certs can stay motivated as these success stories drove me to reach this goal.
Background: 12 years of military IT experience, never worked with the cloud. I’ve done 7 deployments (that is a lot in 12 years), at which point I came home from the last one burnt out with a family that barely knew me. I knew I needed a change, but had no clue where to start or what I wanted to do. I wasn’t really interested in IT but I knew it’d pay the bills. After seeing videos about people in IT working from home (which after 8+ years of being gone from home really appealed to me), I stumbled across a video about a Solutions Architect’s daily routine working from home, which got me interested in AWS.
It took me 68 days straight of hard work to pass this exam with confidence. No rest days, more than 120 pages of hand-written notes and hundreds and hundreds of flash cards.
In the beginning, I hopped on Stephane Maarek’s course for the CCP exam just to see if it was for me. I did the course in about a week and then, after doing some research on here, got the CCP practice exams from tutorialsdojo.com. Two weeks after starting the Udemy course, I passed the exam. By that point, I’d already done lots of research on the different career paths and the best way to study, etc.
Cantrill (10/10) – That same day, I hopped onto Cantrill’s course for the SAA and got to work. Somebody had mentioned that by doing his courses you’d be over-prepared for the exam. While I think a combination of material is really important for passing the certification with confidence, I can say without a doubt Cantrill’s courses got me 85-90% of the way there. His forum is also amazing, and has directly contributed to me talking with somebody who works at AWS to land me a job, which makes the money I spent on all of his courses A STEAL. As I continue my journey (up next is SA Pro), I will be using all of his courses.
Neal Davis (8/10) – After completing Cantrill’s course, I found myself needing a resource to reinforce all the material I’d just learned. AWS is an expansive platform and the many intricacies of the different services can be tricky. For this portion, I relied on Neal Davis’s Training Notes series. These training notes are a very condensed version of the information you’ll need to pass the exam, and with the proper context are very useful to find the things you may have missed in your initial learnings. I will be using his other Training Notes for my other exams as well.
TutorialsDojo (10/10) – These tests filled in the gaps and allowed me to spot my weaknesses and shore them up. I actually think my real exam was harder than these, but because I’d spent so much time on the material I got wrong, I was able to pass the exam with a safe score.
As I said, I was surprised at how difficult the exam was. A lot of my questions were related to DBs, and a lot of them gave no context as to whether the data being loaded into them was SQL or NoSQL, which made the choice selection a little frustrating. A lot of the questions have 2 VERY SIMILAR answers, and oftentimes the wording of the answers could be easy to misinterpret (such as when you are creating a Read Replica, do you attach it to the primary application DB that is slowing down because of read issues or attach it to the service that is causing the primary DB to slow down). For context, I was scoring 95-100% on the TD exams prior to taking the test and managed an 823 on the exam, so I don’t know if I got unlucky with a hard test or if I’m not as prepared as I thought I was (i.e. over-thinking questions).
Anyways, up next is going back over the practical parts of the course as I gear up for the SA Pro exam. I will be taking my time with this one, and re-learning the Linux CLI in preparation for finding a new job.
PS if anybody on here is hiring, I’m looking! I’m the hardest worker I know and my goal is to make your company as streamlined and profitable as possible. 🙂
Testimonial: How did you prepare for AWS Certified Solutions Architect – Associate Level certification?
Practical knowledge is 30% of it; the rest is Jayendra’s blog and dumps.
Buying Udemy courses alone doesn’t make you pass. I can say with confidence that without the dumps and Jayendra’s blog, it is not easy to clear the certification.
Read FAQs of S3, IAM, EC2, VPC, SQS, Autoscaling, Elastic Load Balancer, EBS, RDS, Lambda, API Gateway, ECS.
Read the Security Whitepaper and Shared Responsibility model.
The most important thing: basic questions on the topics most recently introduced to the exam, like Amazon Kinesis, are very important.
– ACloudGuru course with practice tests
– Created my own cheat sheet in Excel
– Practice questions on various websites
– A few AWS service FAQs
– Some questions tested your understanding of which service to pick for the use case.
– Many questions on VPC
– A couple of unexpected questions on AWS CloudHSM, AWS Systems Manager, and Amazon Athena
– Encryption at rest and in transit services
– Migration from on-premises to AWS
– Backing up data in an AZ vs. regionally
I believe the time was sufficient.
Overall I feel AWS SAA was more challenging in theory than GCP Associate CE.
some resources I bookmarked:
- Comparison of AWS Services
- Solutions Architect – Associate | Qwiklabs
- okeeffed/cheat-sheets
- A curated list of AWS resources to prepare for the AWS Certifications
- AWS Cheat Sheet
Whitepapers contain important information about each service and are published by AWS on its website. If you are preparing for the AWS certifications, it is very important to read some of the most recommended whitepapers before taking the exam.
The following is the list of whitepapers that are useful for preparing for the Solutions Architect exam. You will also find the list of whitepapers in the exam blueprint.
- Overview of Security Processes
- Storage Options in the Cloud
- Defining Fault Tolerant Applications in the AWS Cloud
- Overview of Amazon Web Services
- Compliance Whitepaper
- Architecting for the AWS Cloud
Data security questions can be among the more challenging, and it’s worth noting that you need a good understanding of the security processes described in the whitepaper titled “Overview of Security Processes”.
In the above list, the most important whitepapers are Overview of Security Processes and Storage Options in the Cloud. Read more here…
Big thanks to /u/acantril for his amazing course – AWS Certified Solutions Architect – Associate (SAA-C02) – the best IT course I’ve ever had – and I’ve done many on various other platforms:
- CBTNuggets
- LinuxAcademy
- ACloudGuru
- Udemy
- LinkedIn
- O’Reilly

If you’re on the fence about buying one of his courses, stop thinking and buy it; I guarantee you won’t regret it! Other materials used for study:
- Jon Bonso Practice Exams for SAA-C02 @ Tutorialsdojo (amazing practice exams!)
- Random YouTube videos (example)
- Official AWS Documentation (example)
- TechStudySlack (learning community)
Study duration: approximately 3 months with the following regimen:
- Daily study from 30 min to 2 hrs
- Usually early morning before work
- Sometimes on the train when commuting from/to work
- Sometimes in the evening
- Due to being a father/husband, study wasn’t always possible
- All learned topics reviewed weekly

Testimonial: I passed SAA-C02 … But don’t do what I did to pass it

I’ve been following this subreddit for a while and have gotten some helpful tips, so I’d like to give back with my two cents. FYI, I passed the exam with a 788.
The exam materials that I used were the following:
- AWS Certified Solutions Architect Associate All-in-One Exam Guide (Banerjee)
- Stephane Maarek’s Udemy course, and his 6 practice exams
- Adrian Cantrill’s online course (about 60% done)
- TutorialDojo’s exams
(My company has a Udemy business account, so I was able to use Stephane’s course/exams.)
I scheduled my exam for the end of March, and started with Adrian’s course. But I was dumb to think that I could get through it within 3 weeks… I stopped around 12% of the way in, turned to the textbook, and finished reading the all-in-one exam guide within a weekend. Then I started going through Stephane’s course. While working through it, I pushed the exam back to the end of April, because I knew I wouldn’t be ready by the time it came along.
Five days before the exam, I finished Stephane’s course and then did the final exam in the course. I failed miserably (around 50%). So I did one of Stephane’s practice exams and did worse (42%). I thought maybe his exams were just slightly harder, so I went and bought Jon Bonso’s exams and got 60% on the first one. And then I realized, based on all the questions on the exams, that I was definitely lacking some fundamentals. I went back to Adrian’s course and things were definitely sticking more – I think it has to do with his explanations + more practical stuff. Unfortunately, I could not finish his course before the exam (because I was cramming), and by the day of the exam, I had only done four of Bonso’s six exams, barely passing one of them.
Please, don’t do what I did. I was desperate to get this thing over with. I wanted to move on and work on other things for my job search, but if you’re not in that situation, please don’t do this. I can’t for the love of god tell you about OAI and CloudFront and why that’s different from an S3 URL. The only thing I can remember is all the practical stuff I did in Adrian’s course. I’ll never forget how to create a VPC, because he makes you manually go through it. I’m not against Stephane’s course – each is different in its own way (see the tips below).
So here’s what I recommend doing before sitting the AWS exam:
- Don’t schedule your exam beforehand. Go through the materials you are using, and make sure you get at least 80% on all of Jon Bonso’s exams (I’d recommend 90% or higher).
- If you like to learn things practically, I recommend Adrian’s course. If you like to learn things conceptually, go with Stephane Maarek’s course. I find Stephane’s course more detailed when going through different architectures, but I can’t really say for sure because I didn’t finish Adrian’s course.
- Jon Bonso’s exams were about the same difficulty as the actual exam, but slightly more tricky. For example, many of the questions will give you two different situations, and you really have to figure out what they are asking, because the situations might seem to contradict each other while the actual question is asking one specific thing. However, there were a few questions that were definitely obvious if you knew the service.
I’m upset that even though I passed the exam, I’m still lacking some practical skills, so I’m going to go through Adrian’s Developer course, but without cramming this time. If you actually learn the materials and practice them, they are definitely useful in the real world. I hope this helps you pass and actually learn the stuff.
P.S. I vehemently disagree with Adrian on one thing in his course: doggogram.io is definitely better than catagram.io, although his cats are pretty cool.
Testimonial: I passed the SAA-C02 exam!

I sat the exam at a PearsonVUE test centre and scored 816.
The exam had lots of questions around S3, RDS and storage. To be honest it was a bit of a blur but they are the ones I remember.
I was a bit worried before sitting the exam, as I had only hit 76% in the official AWS practice exam the night before, but it turned out alright in the end!
I have around 8 years of experience in IT but AWS was relatively new to me around 5 weeks ago.
Training Material Used
Firstly, I ran through the u/stephanemaarek course, which I found covered pretty much all that was required!
I then used the u/Tutorials_Dojo practice exams. I took one before starting Stephane’s course to see where I was at with no training. I got 46% but I suppose a few of them were lucky guesses!
I then finished the course and took another test and hit around 65%. TD was great as they gave explanations of the answers. I then used this to go back to the course and go over my weak areas again.
I then couldn’t seem to get higher than the low 70s on the practice exams, so I went through the u/neal-davis course, which was also great as it had an “Exam Cram” video at the end of each topic.
I also set up flashcards on BrainScape which helped me remember AWS services and what their function is.
All in all it was a great learning experience and I look forward to putting my skills into action!

Testimonial: I passed SAA with (799), had about an hour left on the clock.
Many FSx / EFS / Lustre questions
S3 Use cases, storage tiers, cloudfront were pretty prominent too
Only got one “figure out what’s wrong with this IAM policy” question
A handful of dynamodb questions and a handful for picking use cases between different database types or caching layers.
Other typical tips: when you’re unclear on which answer you should pick, or if they seem very similar, work on eliminating answers first. “It can’t be X because of Y” can help a lot.
Testimonial: Passed the AWS Solutions Architect Associate exam!
I prepared mostly from freely available resources as my basics were strong. Bought Jon Bonso’s tests on Udemy, and they turned out to be super important while preparing for those particular types of questions (i.e. the questions which feel subjective, but aren’t), understanding the line of questioning, and learning the most suitable answers for some common scenarios.
Created a Notion notebook to note down those common scenarios, exceptions, what supports what, integrations etc. Used that notebook and cheat sheets on Tutorials Dojo website for revision on final day.
Found the exam a little tougher than Jon Bonso’s, but his practice tests on Udemy were crucial. I wouldn’t have passed without them.
Piece of advice for upcoming test aspirants: get your basics right, especially networking. Understand properly how different services interact in a VPC. Focus on the last line of the question. It usually gives you a hint about what exactly is needed: whether you need cost optimization, performance efficiency, or high availability. Little to no operational effort means serverless. Understand all serverless services thoroughly.

Testimonial: Passed Solutions Architect Associate (SAA-C02) Today!
I have almost no experience with AWS, except for completing the Certified Cloud Practitioner earlier this year. My work is pushing all IT employees to complete some cloud training and certifications, which is why I chose to do this.
How I Studied:
My company pays for acloudguru subscriptions for its employees, so I used that for the bulk of my learning. I took notes on 3×5 notecards on the key terms and concepts for review.
Once I scored passing grades on the ACG practice tests, I took the Jon Bonso tests on Udemy, which are much more difficult and fairly close to the difficulty of the actual exam. I scored 45%-74% on every Bonso practice test, and spent 1-2 hours after each test reviewing what I missed, supplementing my note cards, and taking time to understand my weak spots. I only took these tests once each, but in between each practice test, I would review all my note cards until I had the content largely memorized.
The Test:
This was one of the most difficult certification tests I’ve ever done. The exam was remote proctored with PearsonVUE (I used PSI for the CCP and didn’t like it as much). I felt like I was failing half the time. I marked about 25% of the questions for review, and I used up the entire allotted time. The questions are mostly about understanding which services interact with which other services, or which services are incompatible with the scenario. It was important for me to read through each response and eliminate the ones that don’t make sense. A lot of the responses mentioned AWS services that sound good but don’t actually work together (e.g. if it doesn’t make sense to have service X querying database Y, that probably isn’t the right answer). I can’t point to one domain that really needs to be studied more than any other. You need to know all of the content for the exam.
Final Thoughts:
The ACG practice tests are not a good metric for success for the actual SAA exam, and I would not have passed without Bonso’s tests showing me my weak spots. PearsonVUE is better than PSI. Make sure to study everything thoroughly and review excessively. You don’t necessarily need 5 different study sources and years of experience to be able to pass (although both of those definitely help) and good luck to anyone that took the time to read!

Testimonial: Passed AWS CSAA today!
AWS Certified Solutions Architect Associate
So glad to pass my first AWS certification after 6 weeks of preparation.
My Preparation:
After some trial and error in picking the appropriate learning content, I eventually went with the community’s advice and took the course presented by the amazing u/stephanemaarek, in addition to the practice exams by Jon Bonso.
At this point, I can’t say anything that hasn’t been said already about how helpful they are. It’s a great combination of learning material, I appreciate the instructor’s work, and the community’s help in this sub.
Review:
Throughout the course I noted down the important points, and used the course slides as a reference in the first review iteration.
Before resorting to Udemy’s practice exams, I purchased a practice exam from another website, which I regret (not to defame the other vendor; I would simply recommend Udemy).
Udemy’s practice exams were incredible, in that they made me aware of the points I hadn’t understood clearly. After each exam, I would go through both the incorrect answers and the questions I had marked for review, write down the topics to revisit, and read the explanations thoroughly. The explanations point to the respective AWS documentation, which is a recommended read, especially if you don’t feel confident with the service.
What I want to note is that I didn’t get satisfying marks on my first go at the practice exams (I averaged ~70%).
Throughout the 6 practice exams, I aggregated a long list of topics to review, went back to the course slides and practice-exams explanations, in addition to the AWS documentation for the respective service.
On the second go I averaged 85%. The second attempt at the exams was important as a confidence boost, as I made sure I understood the services more clearly.
The take away:
Don’t feel disappointed if you get bad results at your practice-exams. Make sure to review the topics and give it another shot.
The AWS documentation is your friend! It is very clear and concise. My only regret is not having referenced the documentation enough after learning new services.
The exam:
I scheduled the exam using PSI.
I was very confident going into the exam, but going through such an exam environment for the first time made me feel under pressure. Partly because I didn’t feel comfortable being monitored (I was afraid I’d be disqualified if I moved or covered my mouth), but mostly because there was a lot at stake for me, and I had to pass on the first go.
The questions were harder than expected, but I tried to analyze them more carefully and eliminate the invalid answers.
I was very nervous and kept reviewing flagged questions up to the last minute. Luckily, I pulled through.
The take away:
The proctors are friendly; just make sure you feel comfortable in the exam place, and use the practice exams to prepare for the actual exam’s environment. That includes sitting in a straight posture, not talking/whispering, and not looking away.
Make sure to organize the time dedicated to each question well, and don’t let yourself get distracted by being monitored like I did.
Don’t skip the question that you are not sure of. Try to select the most probable answer, then flag the question. This will make the very-stressful, last-minute review easier.
You have been engaged by a company to design and lead a migration to an AWS environment. The team is concerned about the capabilities of the new environment, especially when it comes to high availability and cost-effectiveness. The design calls for about 20 instances (c3.2xlarge) pulling jobs/messages from SQS. Network traffic per instance is estimated to be around 500 Mbps at the beginning and end of each job. Which configuration should you plan on deploying?
Spread the Instances over multiple AZs to minimize the traffic concentration and maximize fault-tolerance. With a multi-AZ configuration, an additional reliability point is scored as the entire Availability Zone itself is ruled out as a single point of failure. This ensures high availability. Wherever possible, use simple solutions such as spreading the load out rather than expensive high tech solutions
To save money, you quickly stored some data in one of the attached volumes of an EC2 instance and stopped it for the weekend. When you returned on Monday and restarted your instance, you discovered that your data was gone. Why might that be?
The volume was ephemeral, block-level storage. Data on an instance store volume is lost if an instance is stopped.
The most likely answer is that the EC2 instance had an instance store volume attached to it. Instance store volumes are ephemeral, meaning that data in attached instance store volumes is lost if the instance stops.
Reference: Instance store lifetime
Your company likes the idea of storing files on AWS. However, low-latency service of the last few days of files is important to customer service. Which Storage Gateway configuration would you use to achieve both of these ends?
A file gateway simplifies file storage in Amazon S3, integrates to existing applications through industry-standard file system protocols, and provides a cost-effective alternative to on-premises storage. It also provides low-latency access to data through transparent local caching.
Cached volumes allow you to store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data.
You’ve been commissioned to develop a high-availability application with a stateless web tier. Identify the most cost-effective means of reaching this end.
Use an Elastic Load Balancer, a multi-AZ deployment of an Auto-Scaling group of EC2 Spot instances (primary) running in tandem with an Auto-Scaling group of EC2 On-Demand instances (secondary), and DynamoDB.
With proper scripting and scaling policies, running EC2 On-Demand instances behind the Spot instances will deliver the most cost-effective solution, because On-Demand instances will only spin up if the Spot instances are not available. DynamoDB lends itself to supporting stateless web/app installations better than RDS.
You are building a NAT Instance in an m3.medium using the AWS Linux2 distro with amazon-linux-extras installed. Which of the following do you need to set?
Ensure that “Source/Destination Checks” is disabled on the NAT instance. With a NAT instance, the most common oversight is forgetting to disable Source/Destination Checks. Note: This is a legacy topic; while it may appear on the AWS exam, it will only do so infrequently.
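For illustration, disabling the check is a one-line API call. A minimal boto3 sketch (the instance ID is hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Disable source/destination checking so the NAT instance can forward
# traffic that is not addressed to itself (instance ID is hypothetical).
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    SourceDestCheck={"Value": False},
)
```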
You are reviewing Change Control requests and you note that there is a proposed change designed to reduce errors due to SQS Eventual Consistency by updating the “DelaySeconds” attribute. What does this mean?
When a new message is added to the SQS queue, it will be hidden from consumer instances for a fixed period.
Delay queues let you postpone the delivery of new messages to a queue for a number of seconds, for example, when your consumer application needs additional time to process messages. If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes. To set delay seconds on individual messages, rather than on an entire queue, use message timers to allow Amazon SQS to use the message timer’s DelaySeconds value instead of the delay queue’s DelaySeconds value. Reference: Amazon SQS delay queues.
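As a sketch of both levels of delay with boto3 (the queue name is hypothetical):

```python
import boto3

sqs = boto3.client("sqs")

# Queue-level delay: every new message stays invisible for 45 seconds.
queue_url = sqs.create_queue(
    QueueName="jobs-delay-queue",
    Attributes={"DelaySeconds": "45"},
)["QueueUrl"]

# Message-level delay: a per-message timer (0-900 seconds, i.e. up to the
# 15-minute maximum) overrides the queue's DelaySeconds for that message.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody="process-order-42",
    DelaySeconds=120,
)
```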
Amazon SQS keeps track of all tasks and events in an application: True or False?
False. Amazon SWF (not Amazon SQS) keeps track of all tasks and events in an application. Amazon SQS requires you to implement your own application-level tracking, especially if your application uses multiple queues. Amazon SWF FAQs.
You work for a company, and you need to protect your data stored on S3 from accidental deletion. Which actions might you take to achieve this?
Allow versioning on the bucket and to protect the objects by configuring MFA-protected API access.
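A minimal boto3 sketch of that configuration (bucket name and MFA device values are hypothetical; note that MFA delete can only be configured with the bucket owner's root credentials):

```python
import boto3

s3 = boto3.client("s3")

# Enable versioning and MFA delete in one call. The MFA argument is the
# device serial number followed by the current code, separated by a space.
s3.put_bucket_versioning(
    Bucket="my-important-bucket",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
)
```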
Your Security Manager has hired a security contractor to audit your network and firewall configurations. The consultant doesn’t have access to an AWS account. You need to provide the required access for the auditing tasks, and answer a question about login details for the official AWS firewall appliance. Which actions might you do?
AWS has removed the Firewall appliance from the hub of the network and implemented the firewall functionality as stateful Security Groups, and stateless subnet NACLs. This is not a new concept in networking, but rarely implemented at this scale.
Create an IAM user for the auditor and explain that the firewall functionality is implemented as stateful Security Groups, and stateless subnet NACLs
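A hedged boto3 sketch of that setup, using the AWS managed SecurityAudit policy (the user name is hypothetical):

```python
import boto3

iam = boto3.client("iam")

# Create a user for the auditor and attach the AWS managed SecurityAudit
# policy, which grants read-only access to security configuration
# (Security Groups, network ACLs, IAM policies, and so on).
iam.create_user(UserName="external-auditor")
iam.attach_user_policy(
    UserName="external-auditor",
    PolicyArn="arn:aws:iam::aws:policy/SecurityAudit",
)
```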
Amazon ElastiCache can fulfill a number of roles. Which operations can be implemented using ElastiCache for Redis?
Amazon ElastiCache offers a fully managed Memcached and Redis service. Although the name only suggests caching functionality, the Redis service in particular can offer a number of operations such as Pub/Sub, Sorted Sets and an In-Memory Data Store. However, Amazon ElastiCache for Redis doesn’t support multithreaded architectures.
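As a sketch of those Redis-side operations using the redis-py client (the cluster endpoint is hypothetical):

```python
import redis  # redis-py client

# Connect to a (hypothetical) ElastiCache for Redis primary endpoint.
r = redis.Redis(host="my-cluster.abc123.use1.cache.amazonaws.com", port=6379)

# In-memory data store / cache: a session key that expires in 5 minutes.
r.set("session:42", "logged-in", ex=300)

# Sorted set, e.g. a game leaderboard.
r.zadd("leaderboard", {"alice": 1200, "bob": 950})
top_ten = r.zrevrange("leaderboard", 0, 9, withscores=True)

# Pub/Sub.
r.publish("notifications", "new high score!")
```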
You have been asked to deploy an application on a small number of EC2 instances. The application must be placed across multiple Availability Zones and should also minimize the chance of underlying hardware failure. Which actions would provide this solution?
Deploy the EC2 servers in a Spread Placement Group.
Spread Placement Groups are recommended for applications that have a small number of critical instances which need to be kept separate from each other. Launching instances in a Spread Placement Group reduces the risk of simultaneous failures that might occur when instances share the same underlying hardware. Spread Placement Groups provide access to distinct hardware, and are therefore suitable for mixing instance types or launching instances over time. In this case, deploying the EC2 instances in a Spread Placement Group is the only correct option.
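A minimal boto3 sketch of creating the group and launching into it (the AMI ID and group name are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Create a spread placement group, then launch instances into it so each
# instance lands on distinct underlying hardware.
ec2.create_placement_group(GroupName="critical-spread", Strategy="spread")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "critical-spread"},
)
```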
You manage a NodeJS messaging application that lives on a cluster of EC2 instances. Your website occasionally experiences brief, strong, and entirely unpredictable spikes in traffic that overwhelm your EC2 instances’ resources and freeze the application. As a result, you’re losing recently submitted messages from end-users. You use Auto Scaling to deploy additional resources to handle the load during spikes, but the new instances don’t spin-up fast enough to prevent the existing application servers from freezing. Can you provide the most cost-effective solution in preventing the loss of recently submitted messages?
Use Amazon SQS to decouple the application components and keep the messages in queue until the extra Auto-Scaling instances are available.
Neither increasing the size of your EC2 instances nor maintaining additional EC2 instances is cost-effective, and pre-warming an ELB signifies that these spikes in traffic are predictable. The cost-effective solution to the unpredictable spike in traffic is to use SQS to decouple the application components.
True statements on S3 URL styles
Virtual-host-style URLs (such as: https://bucket-name.s3.Region.amazonaws.com/key name) are supported by AWS.
Path-Style URLs (such as https://s3.Region.amazonaws.com/bucket-name/key name) are supported by AWS.
You run an automobile reselling company that has a popular online store on AWS. The application sits behind an Auto Scaling group and requires new instances of the Auto Scaling group to identify their public and private IP addresses. How can you achieve this?
Use a curl or GET command to retrieve the latest instance metadata from http://169.254.169.254/latest/meta-data/
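In Python, a minimal IMDSv2-style sketch (this only works when run on the instance itself; the metadata paths shown are the standard ones):

```python
import urllib.request

# IMDSv2: fetch a session token first, then use it to read metadata paths.
token_req = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

def metadata(path):
    # Read one metadata path using the session token.
    req = urllib.request.Request(
        f"http://169.254.169.254/latest/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req).read().decode()

print("private:", metadata("local-ipv4"))
print("public:", metadata("public-ipv4"))
```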
What data formats are used to create CloudFormation templates?
JSON and YAML.
You have launched a NAT instance into a public subnet, and you have configured all relevant security groups, network ACLs, and routing policies to allow this NAT to function. However, EC2 instances in the private subnet still cannot communicate out to the internet. What troubleshooting steps should you take to resolve this issue?
Disable the Source/Destination Check on your NAT instance.
A NAT instance sends and retrieves traffic on behalf of instances in a private subnet. As a result, source/destination checks on the NAT instance must be disabled to allow the sending and receiving traffic for the private instances. Route 53 resolves DNS names, so it would not help here. Traffic that is originating from your NAT instance will not pass through an ELB. Instead, it is sent directly from the public IP address of the NAT Instance out to the Internet.
You need a storage service that delivers the lowest-latency access to data for a database running on a single EC2 instance. Which of the following AWS storage services is suitable for this use case?
Amazon EBS is a block level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.
What are DynamoDB use cases?
Use cases include storing JSON data, BLOB data and storing web session data.
You are reviewing Change Control requests, and you note that there is a change designed to reduce costs by updating the Amazon SQS “WaitTimeSeconds” attribute. What does this mean?
When the consumer instance polls for new work, the SQS service will allow it to wait a certain time for one or more messages to be available before closing the connection.
Poor timing of SQS processes can significantly impact the cost effectiveness of the solution.
Long polling helps reduce the cost of using Amazon SQS by eliminating the number of empty responses (when there are no messages available for a ReceiveMessage request) and false empty responses (when messages are available but aren’t included in a response).
Reference: Here
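A small boto3 sketch of long polling on the consumer side (the queue URL is hypothetical):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # hypothetical

# Long poll: wait up to 20 seconds for messages instead of returning
# immediately, which reduces billable empty ReceiveMessage responses.
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
for msg in resp.get("Messages", []):
    print(msg["Body"])
```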
You have been asked to decouple an application by utilizing SQS. The application dictates that messages on the queue CAN be delivered more than once, but must be delivered in the order they have arrived while reducing the number of empty responses. Which option is most suitable?
Configure a FIFO SQS queue and enable long polling.
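A hedged boto3 sketch of that configuration (the queue name is hypothetical; FIFO queue names must end in ".fifo"):

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue with long polling enabled queue-wide via
# ReceiveMessageWaitTimeSeconds (max 20 seconds).
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",   # dedupe without explicit IDs
        "ReceiveMessageWaitTimeSeconds": "20", # long polling by default
    },
)["QueueUrl"]

# Messages sharing a MessageGroupId are delivered in order.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody="order-123",
    MessageGroupId="customer-42",
)
```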
You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP. However, when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect?
Immediately.
You need to restrict access to an S3 bucket. Which methods can you use to do so?
There are two ways of securing S3: using Access Control Lists (Permissions) or using bucket policies.
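For the bucket policy route, a minimal boto3 sketch (the bucket name is hypothetical) that denies any request not made over HTTPS:

```python
import json
import boto3

s3 = boto3.client("s3")

# Example restriction: deny all S3 actions on the bucket and its objects
# unless the request uses TLS.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::my-restricted-bucket",
            "arn:aws:s3:::my-restricted-bucket/*",
        ],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket="my-restricted-bucket", Policy=json.dumps(policy))
```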
You are reviewing Change Control requests, and you note that there is a change designed to reduce wasted CPU cycles by increasing the value of your Amazon SQS “VisibilityTimeout” attribute. What does this mean?
When a consumer instance retrieves a message, that message will be hidden from other consumer instances for a fixed period.
Poor timing of SQS processes can significantly impact the cost effectiveness of the solution. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.
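Both the queue-wide default and a per-message extension can be set via the API. A hedged boto3 sketch (the queue URL is hypothetical):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # hypothetical

# Raise the queue-wide default (30 seconds) to 2 minutes.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"VisibilityTimeout": "120"},
)

# Or extend the timeout for a single in-flight message while processing it.
messages = sqs.receive_message(QueueUrl=queue_url).get("Messages", [])
for msg in messages:
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=300,
    )
```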
With EBS, I can ____.
Create an encrypted volume from a snapshot of another encrypted volume.
Create an encrypted snapshot from an unencrypted snapshot by creating an encrypted copy of the unencrypted snapshot.
You can create an encrypted volume from a snapshot of another encrypted volume.
Although there is no direct way to encrypt an existing unencrypted volume or snapshot, you can encrypt them by creating either a volume or a snapshot. Reference: Encrypting unencrypted resources.
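A minimal boto3 sketch of the encrypted-copy approach (the snapshot ID and KMS alias are hypothetical):

```python
import boto3

# The copy is created in the client's region; the source region is named
# explicitly.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Copying an unencrypted snapshot with Encrypted=True produces an
# encrypted copy.
ec2.copy_snapshot(
    SourceSnapshotId="snap-0123456789abcdef0",
    SourceRegion="us-east-1",
    Encrypted=True,
    KmsKeyId="alias/my-ebs-key",
    Description="Encrypted copy of unencrypted snapshot",
)
```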
Following advice from your consultant, you have configured your VPC to use dedicated hosting tenancy. Your VPC has an Amazon EC2 Auto Scaling designed to launch or terminate Amazon EC2 instances on a regular basis, in order to meet workload demands. A subsequent change to your application has rendered the performance gains from dedicated tenancy superfluous, and you would now like to recoup some of these greater costs. How do you revert your instance tenancy attribute of a VPC to default for new launched EC2 instances?
Modify the instance tenancy attribute of your VPC from dedicated to default using the AWS CLI, an AWS SDK, or the Amazon EC2 API.
You can change the instance tenancy attribute of a VPC from dedicated to default. Modifying the instance tenancy of the VPC does not affect the tenancy of any existing instances in the VPC. The next time you launch an instance in the VPC, it has a tenancy of default, unless you specify otherwise during launch. You can modify the instance tenancy attribute of a VPC using the AWS CLI, an AWS SDK, or the Amazon EC2 API only. Reference: Change the tenancy of a VPC.
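As a sketch of that call with boto3 (the VPC ID is hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Tenancy can only be changed from "dedicated" to "default", and only via
# the CLI/SDK/API. Existing instances keep their tenancy; new launches in
# the VPC default to shared tenancy afterwards.
ec2.modify_vpc_tenancy(VpcId="vpc-0123456789abcdef0", InstanceTenancy="default")
```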
How do DynamoDB indices work?
What is Amazon DynamoDB?
Amazon DynamoDB is a fast, fully managed NoSQL database service. DynamoDB makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic.
You use DynamoDB to create tables that can store and retrieve any amount of data and serve any level of request traffic.
- DynamoDB uses SSDs to store data.
- Provides automatic and synchronous data replication across multiple Availability Zones.
- Maximum item size is 400 KB.
- Supports cross-region replication.
DynamoDB Core Concepts:
The fundamental concepts around DynamoDB are:
- Tables – collections of data.
- Items – the individual entries in a table.
- Attributes – the properties associated with the entries.
- Primary keys.
- Secondary indexes.
- DynamoDB streams.
Secondary Indexes:
- A secondary index is a data structure that contains a subset of attributes from a table, along with an alternate key that supports Query operations.
- Every secondary index is related to only one table, from which it obtains its data. This is called the base table of the index.
- When you create an index, you define an alternate key for the index (partition key and sort key). DynamoDB creates a copy of the attributes into the index, including the primary key attributes derived from the table.
- After this is done, you query/scan the index in the same way you would query a table.
Every secondary index is automatically maintained by DynamoDB.
DynamoDB Indexes: DynamoDB supports two indexes:
- Local Secondary Index (LSI) – the index has the same partition key as the base table but a different sort key.
- Global Secondary Index (GSI) – the index has a partition key and sort key that can be different from those on the base table.
When creating more than one table with secondary indexes, you must do so sequentially: create one table and wait for it to become active, then create the next and wait for it to become active, and so on. If you try to create the tables concurrently, DynamoDB returns a LimitExceededException.
You must specify the following for every secondary index:
- Type – whether you are creating a Global Secondary Index or a Local Secondary Index.
- Name – the name for the index. The rules for naming indexes are the same as those for the table it is connected with. You can use the same name for indexes that are connected with different base tables.
- Key – the key schema for the index; every key attribute in the index must be a top-level attribute of type string, number, or binary. Other data types, including documents and sets, are not allowed. Other requirements depend on the type of index:
- For a GSI: the partition key can be any scalar attribute of the base table. The sort key is optional and can also be any scalar attribute of the base table.
- For an LSI: the partition key must be the same as the base table’s partition key. The sort key must be a non-key table attribute.
- Additional attributes – these are in addition to the table’s key attributes and are projected into the index. You can use attributes of any data type, including scalars, documents, and sets.
- Throughput – the throughput settings for the index, if necessary:
- GSI: specify read and write capacity unit settings. These provisioned throughput settings are independent of the base table’s settings.
- LSI: you do not need to specify read and write capacity unit settings. Any read and write operations on a local secondary index are drawn from the provisioned throughput settings of the base table.
You can create up to 5 global and 5 local secondary indexes per table. When a table is deleted, all the indexes connected with it are also deleted.
You can use the Scan or Query operation to fetch data from a table, and DynamoDB returns the results in ascending or descending order.
(Source)
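To make the index mechanics concrete, here is a hedged boto3 sketch (table, index, and attribute names are hypothetical) that creates a table with a GSI, waits for it to become active as described above, and then queries the index:

```python
import boto3

ddb = boto3.client("dynamodb")

# Table keyed on CustomerId/OrderId, plus a GSI keyed on OrderStatus so we
# can query by status.
ddb.create_table(
    TableName="Orders",
    AttributeDefinitions=[
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "OrderId", "AttributeType": "S"},
        {"AttributeName": "OrderStatus", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "CustomerId", "KeyType": "HASH"},
        {"AttributeName": "OrderId", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[{
        "IndexName": "StatusIndex",
        "KeySchema": [{"AttributeName": "OrderStatus", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "ALL"},
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)

# Wait for the table to become ACTIVE before creating further tables,
# as noted above.
ddb.get_waiter("table_exists").wait(TableName="Orders")

# Query the index just like a table, passing IndexName.
resp = ddb.query(
    TableName="Orders",
    IndexName="StatusIndex",
    KeyConditionExpression="OrderStatus = :s",
    ExpressionAttributeValues={":s": {"S": "SHIPPED"}},
)
print(resp["Items"])
```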
What is NLB in AWS?
An NLB is a Network Load Balancer.
Network Load Balancer Overview: A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration. When you enable an Availability Zone for the load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. By default, each load balancer node distributes traffic across the registered targets in its Availability Zone only. If you enable cross-zone load balancing, each load balancer node distributes traffic across the registered targets in all enabled Availability Zones. It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part. The Network Load Balancer is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:
- Static IP Addresses – Each Network Load Balancer provides a single IP address for each Availability Zone in its purview. If you have targets in us-west-2a and other targets in us-west-2c, NLB will create and manage two IP addresses (one per AZ); connections to that IP address will spread traffic across the instances in all the VPC subnets in the AZ. You can also specify an existing Elastic IP for each AZ for even greater control. With full control over your IP addresses, a Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.
- Zonality – The IP-per-AZ feature reduces latency with improved performance, improves availability through isolation and fault tolerance, and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single AZ while still providing automatic failover should those targets become unavailable.
- Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.
- Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.
- Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.
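As an illustration of the API compatibility mentioned above, a minimal boto3 sketch that stands up an NLB with a TCP listener (all IDs and names are hypothetical):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create an internet-facing NLB, a TCP target group, and a listener that
# forwards TCP :80 to the targets.
nlb = elbv2.create_load_balancer(
    Name="my-network-lb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

tg = elbv2.create_target_group(
    Name="tcp-80-targets",
    Protocol="TCP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP",
    Port=80,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)
```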
How many types of VPC endpoints are available?
There are two types of VPC endpoints: (1) interface endpoints and (2) gateway endpoints. Interface endpoints enable connectivity to services over AWS PrivateLink.
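A hedged boto3 sketch creating one of each type (service names follow the com.amazonaws.<region>.<service> convention; all IDs are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3: adds routes to the given route tables.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)

# Interface endpoint (AWS PrivateLink) for SQS: places an ENI in the subnet.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sqs",
    VpcEndpointType="Interface",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
```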
What is the purpose of key pair with Amazon AWS EC2?
Amazon AWS uses key pairs to encrypt and decrypt login information.
A sender uses the public key to encrypt data, which the receiver then decrypts using the private key. These two keys, public and private, are known as a key pair.
You need a key pair to be able to connect to your instances. The way this works on Linux and Windows instances is different.
First, when you launch a new instance, you assign a key pair to it. Then, when you log in to it, you use the private key.
The difference between Linux and Windows instances is that Linux instances do not have a password already set and you must use the key pair to log in to Linux instances. On the other hand, on Windows instances, you need the key pair to decrypt the administrator password. Using the decrypted password, you can use RDP and then connect to your Windows instance.
Amazon EC2 stores only the public key, and you can either generate it inside Amazon EC2 or you can import it. Since the private key is not stored by Amazon, it’s advisable to store it in a secure place as anyone who has this private key can log in on your behalf.
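A small boto3 sketch of generating a key pair and saving the private key (the key name is hypothetical):

```python
import os
import boto3

ec2 = boto3.client("ec2")

# Generate the key pair in EC2. AWS keeps only the public key, so the
# private key material returned here must be saved now or it is lost.
resp = ec2.create_key_pair(KeyName="my-key-pair")

with open("my-key-pair.pem", "w") as f:
    f.write(resp["KeyMaterial"])
os.chmod("my-key-pair.pem", 0o400)  # SSH refuses keys with open permissions
```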
What is the difference between a VPC SG and an EC2 security group?
There are two types of Security Groups based on where you launch your instance. When you launch your instance on EC2-Classic, you have to specify an EC2-Classic Security Group. On the other hand, when you launch an instance in a VPC, you will have to specify an EC2-VPC Security Group. Now that we have a clear understanding of what we are comparing, let’s see their main differences:
- With both types, when the instance is launched, you can only choose a Security Group that resides in the same region as the instance.
For EC2-Classic Security Groups:
- You cannot change the Security Group after the instance has launched (you may edit the rules).
- They are not IPv6 capable.
For EC2-VPC Security Groups:
- You can change the Security Group after the instance has launched.
- They are IPv6 capable.
Generally speaking, they are not interchangeable and there are more capabilities on the EC2-VPC SGs. You may read more about them on Differences Between Security Groups for EC2-Classic and EC2-VPC
Why do AWS DynamoDB and S3 use gateway VPC endpoints rather than interface endpoints?
I think this is historical in nature. S3 and DynamoDB were the first services to support VPC endpoints. The release of those VPC endpoint features pre-dates two important services that subsequently enabled interface endpoints: Network Load Balancer and AWS PrivateLink.
What is the best way to develop AWS Lambda functions locally on your laptop?
- Separate the Lambda handler from your core logic.
- Take advantage of execution context reuse to improve the performance of your function. Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations processed by the same instance of your function can reuse these resources, which saves execution time. To avoid potential data leaks across invocations, don’t use the execution context to store user data, events, or other information with security implications. If your function relies on mutable state that can’t be stored in memory within the handler, consider creating a separate function or separate versions of a function for each user.
- Use AWS Lambda environment variables to pass operational parameters to your function. For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name, configure it as an environment variable.
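A minimal sketch of these three practices together (the environment variable name and function names are hypothetical):

```python
import os
import boto3

# Initialized outside the handler so warm invocations reuse the client
# and its connections.
s3 = boto3.client("s3")
BUCKET = os.environ["OUTPUT_BUCKET"]  # operational parameter via env var

def handler(event, context):
    # Thin handler: parse the event, delegate to core logic.
    return save_report(event["report_id"], event["body"])

def save_report(report_id, body):
    # Core logic kept separate so it can be unit-tested without Lambda.
    s3.put_object(Bucket=BUCKET, Key=f"reports/{report_id}.json", Body=body)
    return {"saved": report_id}
```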
How can I see if/when someone logs into my AWS Windows instance?
You can use VPC Flow Logs. The steps would be the following:
- Enable VPC Flow Logs for the VPC your EC2 instance lives in. You can do this from the VPC console
- Having VPC Flow Logs enabled will create a CloudWatch Logs log group
- Find the Elastic Network Interface assigned to your EC2 instance. Also, get the private IP of your EC2 instance. You can do this from the EC2 console.
- Find the CloudWatch Logs log stream for that ENI.
- Search the log stream for records where your Windows instance’s IP is the destination IP, make sure the port is the one you’re looking for. You’ll see records that tell you if someone has been connecting to your EC2 instance. For example, there are bytes transferred, status=ACCEPT, log-status=OK. You will also know the source IP that connected to your instance.
I recommend using CloudWatch Logs Metric Filters, so you don’t have to do all this manually. Metric Filters will find the patterns I described in your CloudWatch Logs entries and will publish a CloudWatch metric. Then you can trigger an alarm that notifies you when someone logs in to your instance.
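As a sketch of such a metric filter, assuming a hypothetical flow log group and the default space-delimited flow log record format (the bracketed pattern names each field in order):

```python
import boto3

logs = boto3.client("logs")

# Publish a metric whenever a flow log record shows an ACCEPTed connection
# to RDP port 3389. The log group name is hypothetical.
logs.put_metric_filter(
    logGroupName="my-vpc-flow-logs",
    filterName="rdp-accepts",
    filterPattern='[version, account, eni, source, destination, srcport, '
                  'destport="3389", protocol, packets, bytes, windowstart, '
                  'windowend, action="ACCEPT", flowlogstatus]',
    metricTransformations=[{
        "metricName": "RdpConnectionsAccepted",
        "metricNamespace": "VPCFlowLogs",
        "metricValue": "1",
    }],
)
```

An alarm on the RdpConnectionsAccepted metric can then notify you when someone logs in.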
Here are more details from the AWS Official Blog and the AWS documentation for VPC Flow Logs records:
VPC Flow Logs – Log and View Network Traffic Flows
Also, there are 3rd-party tools that simplify all these steps for you and give you very nice visibility and alerts into what’s happening in your AWS network resources. I’ve tried Observable Networks and it’s great: Observable Networks
While enabling ports on an AWS NAT gateway, when you allow inbound traffic on ports 80/443, do you need to allow outbound traffic on the same ports, or is it sufficient to allow outbound traffic on ephemeral ports (1024-65535)?
Typically outbound traffic is not blocked by NAT on any port, so you would not need to explicitly allow those, since they should already be allowed. Your firewall generally would have a rule to allow return traffic that was initiated outbound from inside your office.
Is AWS traffic between EC2 nodes in the same availability zone secure with respect to sending sensitive data?
According to Amazon’s documentation, it is impossible for one instance to sniff traffic bound for a different instance.
https://d0.awsstatic.com/whitepapers/aws-security-whitepaper.pdf
- Packet sniffing by other tenants. It is not possible for a virtual instance running in promiscuous mode to receive or “sniff” traffic that is intended for a different virtual instance. While you can place your interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer located on the same physical host cannot listen to each other’s traffic. Attacks such as ARP cache poisoning do not work within Amazon EC2 and Amazon VPC. While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another’s data, as a standard practice you should encrypt sensitive traffic.
But as you can see, they still recommend that you should maintain encryption inside your network. We have taken the approach of terminating SSL at the external interface of the ELB, but then initiating SSL from the ELB to our back-end servers, and even further, to our (RDS) databases. It’s probably belt-and-suspenders, but in my industry it’s needed. Heck, we have some interfaces that require HTTPS and a VPN.
What’s the use case for S3 Pre-signed URL for uploading objects?
I get the use case of allowing access to private/premium content in S3 using a pre-signed URL that can be used to view or download a file until the expiration time set. But what’s a real-life scenario in which a web app would need to generate a URI to give users temporary credentials to upload an object? Can’t the same be done by using the SDK and exposing a REST API at the backend?
I’m asking because I want to build a POC for this functionality in Java, but I’m struggling to find a real-world use case for it.
Pre-signed URLs are used to provide short-term access to a private object in your S3 bucket. They work by appending an AWS access key, expiration time, and SigV4 signature as query parameters to the S3 object URL. There are two common use cases when you may want to use them:
- Simple, occasional sharing of private files.
- Frequent, programmatic access to view or upload a file in an application.
Imagine you may want to share a confidential presentation with a business partner, or you want to allow a friend to download a video file you’re storing in your S3 bucket. In both situations, you could generate a URL, and share it to allow the recipient short-term access.
There are a couple of different approaches for generating these URLs in an ad-hoc, one-off fashion, including:
- Using the AWS Tools for Powershell.
- Using the AWS CLI.
Source: Here
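To make the upload case concrete, a minimal boto3 sketch (bucket and key are hypothetical). The point is that the client PUTs the file directly to S3, so your backend never proxies large uploads or hands out AWS credentials:

```python
import boto3

s3 = boto3.client("s3")

# Backend generates a short-lived upload URL; the client then PUTs the
# file straight to S3 within the expiration window.
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-uploads-bucket", "Key": "incoming/report.pdf"},
    ExpiresIn=900,  # 15 minutes
)
print(url)
# A browser or script can now upload with a plain HTTP PUT to this URL.
```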

FROM AWS:REINVENT 2021:
AWS on Air
Peter DeSantis Keynote
Join Peter DeSantis, Senior Vice President, Utility Computing and Apps, to learn how AWS has optimized its cloud infrastructure to run some of the world’s most demanding workloads and give your business a competitive edge.
Werner Vogels Keynote
Join Dr. Werner Vogels, CTO, Amazon.com, as he goes behind the scenes to show how Amazon is solving today’s hardest technology problems. Based on his experience working with some of the largest and most successful applications in the world, Dr. Vogels shares his insights on building truly resilient architectures and what that means for the future of software development.
Accelerating innovation with AI and ML
Applied artificial intelligence (AI) solutions, such as contact center intelligence (CCI), intelligent document processing (IDP), and media intelligence (MI), have had a significant market and business impact for customers, partners, and AWS. This session details how partners can collaborate with AWS to differentiate their products and solutions with AI and machine learning (ML). It also shares partner and customer success stories and discusses opportunities to help customers who are looking for turnkey solutions.
Application integration patterns for microservices
An implication of applying the microservices architectural style is that a lot of communication between components is done over the network. In order to achieve the full capabilities of microservices, this communication needs to happen in a loosely coupled manner. In this session, explore some fundamental application integration patterns based on messaging and connect them to real-world use cases in a microservices scenario. Also, learn some of the benefits that asynchronous messaging can have over REST APIs for communication between microservices.
Maintain application availability and performance with Amazon CloudWatch
Avoiding unexpected user behavior and maintaining reliable performance is crucial. This session is for application developers who want to learn how to maintain application availability and performance to improve the end user experience. Also, discover the latest on Amazon CloudWatch.
How Amazon.com transforms customer experiences through AI/ML
Amazon is transforming customer experiences through the practical application of AI and machine learning (ML) at scale. This session is for senior business and technology decision-makers who want to understand Amazon.com’s approach to launching and scaling ML-enabled innovations in its core business operations and toward new customer opportunities. See specific examples from various Amazon businesses to learn how Amazon applies AI/ML to shape its customer experience while improving efficiency, increasing speed, and lowering cost. Also hear the lessons the Amazon teams have learned from the cultural, process, and technical aspects of building and scaling ML capabilities across the organization.
Accelerating data-led migrations
Data has become a strategic asset. Customers of all sizes are moving data to the cloud to gain operational efficiencies and fuel innovation. This session details how partners can create repeatable and scalable solutions to help their customers derive value from their data, win new customers, and grow their business. It also discusses how to drive partner-led data migrations using AWS services, tools, resources, and programs, such as the AWS Migration Acceleration Program (MAP). Also, this session shares customer success stories from partners who have used MAP and other resources to help customers migrate to AWS and improve business outcomes.
Accelerate front-end web and mobile development with AWS Amplify
User-facing web and mobile applications are the primary touchpoint between organizations and their customers. To meet the ever-rising bar for customer experience, developers must deliver high-quality apps with both foundational and differentiating features. AWS Amplify helps front-end web and mobile developers build faster front to back. In this session, review Amplify’s core capabilities like authentication, data, and file storage and explore new capabilities, such as Amplify Geo and extensibility features for easier app customization with AWS services and better integration with existing deployment pipelines. Also learn how customers have been successful using Amplify to innovate in their businesses.

AWS Amplify is a set of tools and services that makes it quick and easy for front-end web and mobile developers to build full-stack applications on AWS.
Amplify DataStore provides a programming model for leveraging shared and distributed data without writing additional code for offline and online scenarios, which makes working with distributed, cross-user data just as simple as working with local-only data.
AWS AppSync is a managed GraphQL API service
Amazon DynamoDB is a serverless key-value and document database that’s highly scalable
Amazon S3 allows you to store static assets
DevOps revolution
While DevOps has not changed much, the industry has fundamentally transformed over the last decade. Monolithic architectures have evolved into microservices. Containers and serverless have become the default. Applications are distributed on cloud infrastructure across the globe. The technical environment and tooling ecosystem has changed radically from the original conditions in which DevOps was created. So, what’s next? In this session, learn about the next phase of DevOps: a distributed model that emphasizes swift development, observable systems, accountable engineers, and resilient applications.
Innovation Day
Innovation Day is a virtual event that brings together organizations and thought leaders from around the world to share how cloud technology has helped them capture new business opportunities, grow revenue, and solve the big problems facing us today, and in the future. Featured topics include building the first human basecamp on the moon, the next generation F1 car, manufacturing in space, the Climate Pledge from Amazon, and building the city of the future at the foot of Mount Fuji.
Latest AWS Products and Services announced at re:invent 2021
Graviton3: AWS today announced the newest generation of its Arm-based Graviton processors: the Graviton3. The company promises that the new chip will be 25 percent faster than the last-generation chips, with 2x faster floating-point performance and a 3x speedup for machine learning workloads. AWS also promises that the new chips will use 60 percent less power.
Trn1: New EC2 instances for training models for various applications
AWS Mainframe Modernization: Cut mainframe migration time by 2/3
AWS Private 5G: Deploy and manage your own private 5G network (Set up and scale a private mobile network in days)
Transactions for Governed Tables in Lake Formation: Automatically manages conflicts and errors
Serverless and on-demand analytics for Redshift, EMR, MSK, and Kinesis
Amazon Sagemaker Canvas: Create ML predictions without any ML experience or writing any code
AWS IoT TwinMaker: A real-time service that makes it easy to create and use digital twins of real-world systems.
Amazon DevOps Guru for RDS: Automatically detect, diagnose, and resolve hard-to-find database issues.
Amazon DynamoDB Standard-Infrequent Access table class: Reduce costs by up to 60% while maintaining the same performance, durability, scaling, and availability as Standard.
AWS Database Migration Service Fleet Advisor: Accelerate database migration with automated inventory and migration planning. This service makes it easier and faster to get your data to the cloud and match it with the correct database service. “DMS Fleet Advisor automatically builds an inventory of your on-prem database and analytics services by streaming data from on premises to Amazon S3. From there, we take it over. We analyze [the data] to match it with the appropriate AWS data store and then provide customized migration plans.”
Amazon Sagemaker Ground Truth Plus: Deliver high-quality training datasets fast, and reduce data labeling cost.
Amazon SageMaker Training Compiler: Accelerate model training by 50%
Amazon SageMaker Inference Recommender: Reduce time to deploy from weeks to hours
Amazon SageMaker Serverless Inference: Lower cost of ownership with pay-per-use pricing
Amazon Kendra Experience Builder: Deploy Intelligent search applications powered by Amazon Kendra with a few clicks.
Amazon Lex Automated Chatbot Designer: Drastically Simplifies bot design with advanced natural language understanding
Amazon SageMaker Studio Lab: A no cost, no setup access to powerful machine learning technology
AWS Cloud WAN: Build, manage and monitor global wide area networks
AWS Amplify Studio: Visually build complete, feature-rich apps in hours instead of weeks, with full control over the application code.
AWS Carbon Footprint Tool: Don’t forget to turn off the lights.
AWS Well-Architected Sustainability Pillar: Learn, measure, and improve your workloads using environmental best practices in cloud computing
AWS re:Post: Get Answers from AWS experts. A Reimagined Q&A Experience for the AWS Community
How do you build something completely new?
FROM AWS:REINVENT 2020:
Automate anything with AWS Systems Manager
You can automate any task that involves interaction with AWS and on-premises resources, including in multi-account and multi-Region environments, with AWS Systems Manager. In this session, learn more about three new Systems Manager launches at re:Invent—Change Manager, Fleet Manager, and Application Manager. In addition, learn how Systems Manager Automation can be used across multiple Regions and accounts, integrate with other AWS services, and extend to on-premises. This session takes a deep dive into how to author a custom runbook using an automation document, and how to execute automation anywhere.
Deliver cloud operations at scale with AWS Managed Services
Learn how you can quickly build scaled AWS operations tooling to meet some of the most complex and compliant operations system requirements.
Turbocharging query execution on Amazon EMR
Learn about the performance improvements made in Amazon EMR for Apache Spark and Presto, giving Amazon EMR one of the fastest runtimes for analytics workloads in the cloud. This session dives deep into how AWS generates smart query plans in the absence of accurate table statistics. It also covers adaptive query execution—a technique to dynamically collect statistics during query execution—and how AWS uses dynamic partition pruning to generate query predicates for speeding up table joins. You also learn about execution improvements such as data prefetching and pruning of nested data types.
Detect machine learning (ML) model drift in production
Explore how state-of-the-art algorithms built into Amazon SageMaker are used to detect declines in machine learning (ML) model quality. One of the big factors that can affect the accuracy of models is the difference in the data used to generate predictions and what was used for training. For example, changing economic conditions could drive new interest rates affecting home purchasing predictions. Amazon SageMaker Model Monitor automatically detects drift in deployed models and provides detailed alerts that help you identify the source of the problem so you can be more confident in your ML applications.
Amazon Lightsail: The easiest way to get started on AWS
Amazon Lightsail is AWS’s simple, virtual private server. In this session, learn more about Lightsail and its newest launches. Lightsail is designed for simple web apps, websites, and dev environments. This session reviews core product features, such as preconfigured blueprints, managed databases, load balancers, networking, and snapshots, and includes a demo of the most recent launches. Attend this session to learn more about how you can get up and running on AWS in the easiest way possible.
Deep dive into AWS Lambda security: Function isolation
This session dives into the security model behind AWS Lambda functions, looking at how you can isolate workloads, build multiple layers of protection, and leverage fine-grained authorization. You learn about the implementation, the open-source Firecracker technology that provides one of the most important layers, and what this means for how you build on Lambda. You also see how AWS Lambda securely runs your functions packaged and deployed as container images. Finally, you learn about SaaS, customization, and safe patterns for running your own customers’ code in your Lambda functions.
Unauthorized users and financially motivated third parties also have access to advanced cloud capabilities. This causes concerns and creates challenges for customers responsible for the security of their cloud assets. Join us as Roy Feintuch, chief technologist of cloud products, and Maya Horowitz, director of threat intelligence and research, face off in an epic battle of defense against unauthorized cloud-native attacks. In this session, Roy uses security analytics, threat hunting, and cloud intelligence solutions to dissect and analyze some sneaky cloud breaches so you can strengthen your cloud defense. This presentation is brought to you by Check Point Software, an AWS Partner.
Best practices for security governance in serverless applications
AWS provides services and features that your organization can leverage to improve the security of a serverless application. However, as organizations grow and developers deploy more serverless applications, how do you know if all of the applications are in compliance with your organization’s security policies? This session walks you through serverless security, and you learn about protections and guardrails that you can build to avoid misconfigurations and catch potential security risks.

How Amazon.com automates cash identification & matching with AWS AI/ML
The Amazon cash application service matches incoming customer payments with accounts and open invoices, while an email ingestion service (EIS) processes more than 1 million semi-structured and unstructured remittance emails monthly. In this session, learn how this EIS classifies the emails, extracts invoice data from them, and then identifies the right invoices to close on Amazon financial platforms. Dive deep into how these services automated 89.5% of cash applications using AWS AI and ML services, and hear how they are expected to eliminate the manual effort of 1,000 cash application analysts over the next 10 years.
Understanding AWS Lambda streaming events
Dive into the details of using Amazon Kinesis Data Streams and Amazon DynamoDB Streams as event sources for AWS Lambda. This session walks you through how AWS Lambda scales along with these two event sources. It also covers best practices and challenges, including how to tune streaming sources for optimum performance and how to effectively monitor them.
Building real-time applications using Apache Flink
Build real-time applications using Apache Flink with Apache Kafka and Amazon Kinesis Data Streams. Apache Flink is a framework and engine for building streaming applications for use cases such as real-time analytics and complex event processing. This session covers best practices for building low-latency applications with Apache Flink when reading data from either Amazon MSK or Amazon Kinesis Data Streams. It also covers best practices for running low-latency Apache Flink applications using Amazon Kinesis Data Analytics and discusses AWS’s open-source contributions to this use case.

App modernization on AWS with Apache Kafka and Confluent Cloud
Learn how you can accelerate application modernization and benefit from the open-source Apache Kafka ecosystem by connecting your legacy, on-premises systems to the cloud. In this session, hear real customer stories about timely insights gained from event-driven applications built on an event streaming platform from Confluent Cloud running on AWS, which stores and processes historical data and real-time data streams. Confluent makes Apache Kafka enterprise-ready using infinite Kafka storage with Amazon S3 and multiple private networking options including AWS PrivateLink, along with self-managed encryption keys for storage volume encryption with AWS Key Management Service (AWS KMS).
BI at hyperscale: Quickly build and scale dashboards with Amazon QuickSight
Data-driven business intelligence (BI) decision making is more important than ever in this age of remote work. An increasing number of organizations are investing in data transformation initiatives, including migrating data to the cloud, modernizing data warehouses, and building data lakes. But what about the last mile—connecting the dots for end users with dashboards and visualizations? Come to this session to learn how Amazon QuickSight allows you to connect to your AWS data and quickly build rich and interactive dashboards with self-serve and advanced analytics capabilities that can scale from tens to hundreds of thousands of users, without managing any infrastructure and only paying for what you use.
Is there an Updated SAA-C03 Practice Exam?
As of this writing, the official SAA-C03 practice exam is not yet available. It will probably take about three more months before AWS releases the official SAA-C03 practice exam for the new AWS Certified Solutions Architect Associate. In the meantime, you can try the new SAA-C03 sample exam to get a better idea of what the topic coverage will be and how the scenarios will be presented.
This SAA-C03 sample exam PDF gives you a hint of what the real SAA-C03 exam will look like in your upcoming test. The sample questions also contain explanations and reference links that you can study.
Top-paying Cloud certifications:
- Google Certified Professional Cloud Architect — $175,761/year
- AWS Certified Solutions Architect – Associate — $149,446/year
- Azure/Microsoft Cloud Solution Architect — $141,748/year
- Google Cloud Associate Engineer — $145,769/year
- AWS Certified Cloud Practitioner — $131,465/year
- Microsoft Certified: Azure Fundamentals — $126,653/year
- Microsoft Certified: Azure Administrator Associate — $125,993/year
AWS Certified Solution Architect Associate Exam Prep Quiz App
Download AWS Solution Architect Associate Exam Prep Pro App (No Ads, Full version with answers) for:
Android – iOS – Windows 10 – Amazon Android
How to Load Balance EC2 Instances in an Auto Scaling Group?
In this AWS tutorial, we are going to discuss how to make the best use of AWS services to build a highly scalable and fault-tolerant configuration of EC2 instances. Using load balancers and Auto Scaling groups aligns with a number of AWS best practices, including performance efficiency, reliability, and high availability.
Before we dive into this hands-on tutorial on how exactly we can build this solution, let’s have a brief recap on what an Auto Scaling group is, and what a Load balancer is.
Auto Scaling group (ASG)
An Auto Scaling group (ASG) is a logical grouping of instances that can scale out and scale in depending on pre-configured settings. By setting scaling policies on your ASG, you control how many EC2 instances are launched and terminated based on your application's load. Scaling can be manual, dynamic, scheduled, or predictive.
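For example, you could attach a dynamic (target tracking) scaling policy to a group from the AWS CLI. This is a minimal sketch, assuming a group named ExampleASG (the name we use later in this tutorial) and an illustrative 50% average CPU target:

# Keep the group's average CPU utilization at roughly 50%
# (ExampleASG and the target value are illustrative assumptions)
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name ExampleASG \
  --policy-name cpu50-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": { "PredefinedMetricType": "ASGAverageCPUUtilization" },
    "TargetValue": 50.0
  }'

With a policy like this in place, the ASG launches instances when average CPU rises above the target and terminates them when it falls below.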
Elastic Load Balancer (ELB)
Elastic Load Balancing (ELB) is the name for a family of AWS services designed to distribute traffic across multiple EC2 instances in order to provide enhanced scalability, availability, security, and more. The particular type of load balancer we will be using today is an Application Load Balancer (ALB). The ALB is a Layer 7 load balancer designed to distribute HTTP/HTTPS traffic across multiple nodes, with added features such as TLS termination, sticky sessions, and complex routing configurations.
Getting Started
First of all, we open our AWS management console and head to the EC2 management console.
We scroll down on the left-hand side and select ‘Launch Templates’. A Launch Template is a configuration template which defines the settings for EC2 instances launched by the ASG.
Under Launch Templates, we will select “Create launch template”.
We specify the name ‘MyTestTemplate’ and use the same text in the description.
Under ‘Auto Scaling guidance’, tick the box that says ‘Provide guidance to help me set up a template that I can use with EC2 Auto Scaling’ and scroll down to the launch template contents.
When it comes to choosing our AMI (Amazon Machine Image) we can choose the Amazon Linux 2 under ‘Quick Start’.
The Amazon Linux 2 AMI is free tier eligible, and easy to use for our demonstration purposes.
Next, we select the ‘t2.micro’ under instance types, as this is also free tier eligible.
Under Network Settings, we create a new security group called ExampleSG in our default VPC, allowing inbound HTTP access from anywhere.
We can then add the IAM role we created earlier. Under Advanced Details, select your IAM instance profile.
Then we need to include some user data that will install a simple web server and web page when each EC2 instance launches.
Under ‘Advanced details’, in ‘User data’, paste the following script into the box.
#!/bin/bash
# Install, start, and enable the Apache web server
yum update -y
yum install -y httpd.x86_64
systemctl start httpd.service
systemctl enable httpd.service
# Publish a page that shows which instance served the request
echo "Hello World from $(hostname -f)" > /var/www/html/index.html
Then simply click ‘Create Launch Template’ and we are done!
We are now able to build an Auto Scaling Group from our launch template.
On the same console page, select ‘Auto Scaling Groups’, and Create Auto Scaling Group.
We will call our Auto Scaling Group ‘ExampleASG’, and select the Launch Template we just created, then select next.
On the next page, keep the default VPC and select any default AZ and Subnet from the list and click next.
Under ‘Configure Advanced Options’, select ‘Attach to a new load balancer’.
You will notice the settings below will change and we will now build our load balancer directly on the same page.
Select the Application Load Balancer, and leave the default Load Balancer name.
Choose an ‘Internet-facing’ load balancer, select another AZ, and leave all of the other defaults the same.
Under ‘Listeners and routing’, choose ‘Create a target group’ and select the target group that was just created; it will be called something like ‘ExampleASG-1’. Click next.
Now we get to Group Size. This is where we specify the desired, minimum and maximum capacity of our Auto Scaling Group.
Set the capacities as follows: a desired capacity of 2 (the value the test below relies on), with minimum and maximum capacities that bracket it, for example a minimum of 1 and a maximum of 4.
Click ‘skip to review’, and click ‘Create Auto Scaling Group’.
You will now see the Auto Scaling Group building, and the capacity is updating.
After a short while, navigate to the EC2 Dashboard, and you will see that two EC2 instances have been launched!
To make sure our Auto Scaling group is working as it should, select any instance and terminate it. After one instance has been terminated, you should see another instance launch (pending, then running), bringing capacity back to 2 instances, as per our desired capacity.
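If you prefer the command line, you can watch the replacement happen with the AWS CLI. A minimal sketch, assuming the ExampleASG group created above and credentials configured locally:

# List the group's instances and their lifecycle states;
# run it again after terminating an instance to see the replacement appear
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names ExampleASG \
  --query 'AutoScalingGroups[0].Instances[].{Id:InstanceId,State:LifecycleState}'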
If you also head over to the Load Balancer console, you will find that our Application Load Balancer has been created.
If you select the load balancer and scroll down, you will find the DNS name of your ALB; it will look something like ‘ExampleASG-1-1435567571.us-east-1.elb.amazonaws.com’.
If you enter the DNS name into your browser's address bar, you should see the web page we configured:
The page displays a ‘Hello World’ message that includes the hostname of the EC2 instance serving the page from behind the load balancer (the EC2 private hostname embeds the instance's IP address).
If you refresh the page a few times, you should see the hostname change. This is because the load balancer is routing you to the other EC2 instance, validating that our simple web page is being served from behind our ALB.
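You can run the same check from a terminal. A minimal sketch, substituting your own ALB DNS name for the example one above:

# Fetch the page several times; the hostname in the response should alternate
for i in 1 2 3 4 5 6; do
  curl -s http://ExampleASG-1-1435567571.us-east-1.elb.amazonaws.com/
done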
The final step is to make sure you delete all of the resources you configured! Start by deleting the Auto Scaling group, and be sure to delete your load balancer as well; this will ensure you don't incur any ongoing charges.
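For reference, the console steps above can also be scripted. This is only a sketch under assumptions: the AMI ID, subnet IDs, and target group ARN are placeholders you would substitute, and userdata.sh is the web server script from earlier saved to a file:

# Create the launch template (user data must be base64-encoded; -w0 is GNU base64)
aws ec2 create-launch-template \
  --launch-template-name MyTestTemplate \
  --launch-template-data "{\"ImageId\":\"ami-xxxxxxxx\",\"InstanceType\":\"t2.micro\",\"UserData\":\"$(base64 -w0 userdata.sh)\"}"

# Create the Auto Scaling group across two subnets, registered with the ALB's target group
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name ExampleASG \
  --launch-template LaunchTemplateName=MyTestTemplate \
  --min-size 1 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222" \
  --target-group-arns "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/ExampleASG-1/0123456789abcdef"

# Clean up when finished to avoid charges
aws autoscaling delete-auto-scaling-group --auto-scaling-group-name ExampleASG --force-delete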
Architectural Diagram
Below, you’ll find the architectural diagram of what we have built.
Learn how to Master AWS Cloud
Ultimate Training Packages – Our popular training bundles (on-demand video course + practice exams + ebook) will maximize your chances of passing your AWS certification the first time.
Membership – For unlimited access to our cloud training catalog, enroll in our monthly or annual membership program.
Challenge Labs – Build hands-on cloud skills in a secure sandbox environment. Learn, build, test and fail forward without risking unexpected cloud bills.
This post originally appeared on: https://digitalcloud.training/load-balancing-ec2-instances-in-an-autoscaling-group/
Download AWS Solution Architect Associate Exam Prep Quiz App for:
All Platforms (PWA) – Android – iOS – Windows 10 – Amazon Android

AWS Cloud Certifications Breaking News – Testimonials – AWS Top Stories
- Using remote write with AWS Prometheus by /u/ema_eltuti (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 3:24 pm
Good morning, I am new to the Amazon Managed Service for Prometheus and have created my first workspace. I have installed Prometheus on an EC2 instance, which will function as a scraper to collect information from any instance in the region. I couldn't find any documentation on how to do this, so I'm asking for your help: I need help setting up an EC2 instance with Prometheus to capture information from all instances in the region and then send it to the Prometheus workspace in AWS. Thanks!
- Lambda has cache? by /u/OkAcanthocephala1450 (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 3:05 pm
Hi everyone! I'm working with Lambda. The function opens a file, modifies it, closes it, and uploads it to S3. During some tests, the Lambda uploads the file to S3, and if I run the test again with different data to process, it saves the same file it uploaded last time! To fix it I created randomnumber = random.randint(10000,99999), which I set as the file name, so whenever it runs it creates a new file that has nothing to do with the old one. But when I click test, change the data, and test again within a few seconds, it gives the same random number! Can someone explain: does Lambda save the state of the last run?
- Build AWS Config rules using AWS CloudFormation Guard by /u/shadowsyntax (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 2:29 pm
- Using an EC2 Load Balancer to make some requests go to Squarespace based off of host instead of to the main application without using a redirect by /u/hangingonthetelephon (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 1:20 pm
Hi -- I'm working on a project which underwent a significant reorganization. The application is primarily composed of a frontend create-react-app client, which is its own ECS service, and a backend Flask+Celery ECS service. Both of the above are in an ECS EC2 cluster, and a load balancer is used to route /api requests to the backend and all other requests to the frontend, as well as to enable SSL. GoDaddy is used for DNS and is configured to point at the load balancer. Previously, that was the entire application, and it lived at ourdomain.com. However, the decision was made to move the main application to app.ourdomain.com and to create a Squarespace site as the main info/documentation site so that everyone on the team could easily edit the living docs/PR material. So now we have ourdomain.com pointing at Squarespace (through GoDaddy DNS) and app.ourdomain.com pointing at our load balancer. This works totally fine, but we got the request for backward compatibility so that ourdomain.com/api/* is still supported (whereas currently only app.ourdomain.com/api/* will work). To me, it seems like the natural solution is to have GoDaddy point ourdomain.com back at the load balancer, like it originally did, and then have the load balancer route ourdomain.com to Squarespace if api is not in the path, using the same rules as before (api to the backend service, everything else to the frontend service). My question is: is it possible to do this without using redirects? We want the Squarespace site to show up at ourdomain.com, but if ourdomain.com is pointing at the load balancer, I don't know how to set up the load balancer to render the Squarespace site without redirecting to a separate URL. I have a feeling what I need to do is set up some sort of AWS DNS entry (Route 53? New to AWS, not sure exactly) that points at the Squarespace site, and then have the load balancer point at the AWS entity that aliases the Squarespace site. Is there another approach which you think might be better?
- Looking to get into AWS by /u/theboogersalesman (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 1:08 pm
Is knowing Python enough to get started with AWS? I'm not a strong programmer, but I would like to start looking into the different fields of AWS.
- I don't understand my RDS Average Active Sessions chart by /u/Jidey43 (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 10:20 am
This morning we had an outage on our HTTP application, which is connected to a PostgreSQL database hosted on RDS. During the outage the database load spiked, but I can't figure out what the "relation" spike means in this context, and I can't find the meaning behind the "relation" keyword in the documentation. Thanks.
- How do you personally version serverless projects at your company? by /u/shellwhale (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 10:05 am
I'm working on a project called foobar, requested by a client at the company, that is made up of multiple Lambdas, Step Functions, and such. The client considers it a single product, but behind the scenes I can see that some parts of the software are doing their own thing while some functions are reused across every part (authentication is an example). What we're currently doing is that each Lambda function and each Step Function is stored in its own repo with its own version number. Internally the team (3 devs) says that each function is its own microservice, but when we deploy the project we actually treat it as a whole and deploy it from a single Terraform repository where a Jenkins job clones ~100 functions. The "whole thing", aka foobar, actually gets its own version number (on paper) because that's what the client wants: a single product. I feel this causes trouble in ways that I don't quite understand yet, such as when the client asks for feature X to launch before feature Y and it's not always possible to deploy. I never quite understood what microservices were really about, but is a product-based approach coming from the client really compatible with this?
- Building a private, serverless web application by /u/ocularsinister2 (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 9:51 am
I have a new project in mind for a tool that should only be accessible to users in our offices or connected to our VPN. I'd like to write the tool using serverless technologies: CloudFront plus either AppSync or API Gateway. I know I can make a private API Gateway to protect the backend, but is there a way to also protect the frontend assets? Ideally, what I would like is for CloudFront to resolve to a private IP on my network; is that possible? What alternative services could I use to serve the frontend assets (an ALB, perhaps)?
- Hybrid S3 Server / Proxy Server? by /u/MissionContext6434 (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 9:06 am
Hi, is there a way to deploy a hybrid S3 server with SSO? Did anyone manage it? Basically, our users download things from S3 directly, and I want to set up a local server on the LAN that caches all the data, where the login is the same as the login to S3, with SSO. It needs to be a local proxy server, a hybrid S3 server. Did anyone deploy something like that here? Will QNAP support this?
- Lambda keeps consuming more and more memory until it kills itself by /u/Moelby (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 6:24 am
Hello, I'm having a problem with Lambda. Our Lambdas generate images on demand. This is done with konva.js and node-canvas (node-canvas is in a layer). Whenever our Lambda is under sustained load (calling the endpoint in a loop with await; the problem occurs no matter the concurrency), the memory usage keeps rising for each invocation until it's eventually at 100% and the Lambda runtime is killed. It's like the runtime never garbage collects anything. We have tried increasing the memory all the way to 5 GB, but the issue still occurs (although we can call it more times before it runs out). Our setup consists of an API Gateway v2 HTTP endpoint in front of the Lambda. The Lambda is placed in our VPC, in a private subnet with a NAT gateway. Everything is deployed with CDK. The function roughly does this: parse the URL; if the image already exists in S3, return that; else get the necessary data from our DB (MySQL Aurora; the connection is a variable outside the handler), download the necessary fonts from S3, generate the image (which contains another image specified in the URL, which we download), upload the image to S3, and return it as Buffer.toString('base64'). Based on https://github.com/lovell/sharp/issues/1710#issuecomment-494110353 (we don't use sharp), we tried, as mentioned above, to increase the memory (from 1 GB -> 2 -> 3 -> 5 GB). But it still shows the same memory increase until it dies, and then it starts all over.
- ECS Anywhere cluster running on a bunch of 2007 Intel MacBooks (link to it in the comments) by /u/ruptwelve (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 5:43 am
- Has anyone successfully deployed a React application to Elastic Beanstalk? by /u/assetIncomeForever (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 5:29 am
Hello. I've created an application to interface with Workday from the Workday developer site. It works great locally, but I'd like to deploy it to Beanstalk. So far I've been unsuccessful. Has anyone deployed a React app that uses Node/Express? Thanks in advance!
- How to manage GuardDuty costs? by /u/bnchandrapal (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 5:19 am
Hi, I have enabled GuardDuty on all AWS accounts under our AWS Org. When I enabled it, "S3 Protection" was also enabled by default. Months passed and I had a few alerts but fortunately no incidents. Recently I saw that the price of GuardDuty has gone up to ~$6.5k USD per month and keeps increasing; it went from $2.5k to $6.5k in the past 6 months. The majority of the cost is from one particular AWS account (which uses an S3 bucket as data lake storage), and GuardDuty usage showed the average daily cost of S3 protection in that account as $200 USD. I see that S3 protection does provide some highly useful alerts, like Exfiltration:S3/AnomalousBehavior, Exfiltration:S3/MaliciousIPCaller, etc. These types of alerts depend on S3 data events and require ML models to separate noise from actual events. Find them all here: https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-s3.html (Policy:* S3 alerts are raised even if S3 protection is not enabled.) A few questions: Is there any cheaper alternative to GuardDuty S3 protection? Have you ever found true incidents based on S3 protection alerts (say, anomalous behavior, exfiltration, etc.)? Do you have GuardDuty enabled on accounts which have huge S3 buckets with frequent access? If yes, why, and what would you say if someone asks about its ROI? Do you have any other recommendations for this situation? PS: I have tried enabling S3 access logs on those huge buckets and failed badly; S3 access logs with Lambda analysis cost a lot of money as well.
- Creating Domain Admins in AWS-managed AD by /u/syeedhasan_ (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 4:48 am
I've created a managed directory service, attached an EC2 instance to the domain, and the connection works perfectly. What I can't understand is why I can't add another user to the Domain Admins group. I can create a user just fine, but I can't change its groups; the same goes for creating service accounts. I've tried Googling a lot but can't find anything. Any pointers on how to do this?
- Trouble deploying a Mac M1 instance in Virginia by /u/Sonicthunder (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 3:44 am
AWS newb here. I can't seem to successfully deploy a Mac M1 instance in the Virginia Region. They seem to be available there, but I can't get one running. Any tips?
- AWS Automator - Turn off dev/test envs at night by a schedule without any config by /u/Vast_Kaleidoscope893 (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 1:55 am
Hi! I was thinking of creating a simple service where people can define a schedule (for example, 'every day at midnight'), provide specific AWS creds, and it takes care of shutting down and turning on the resources on that schedule (EC2, RDS, etc.). I know this is achievable in many ways with Lambdas or otherwise, but I was thinking it would be convenient to have a simple, fast, slick UI which solves this without any code/configuration. This could also be extended later into general AWS automation for one-time or recurring operations (e.g., automatically remove EFS provisioned throughput after the weekend). WDYT?
- Is it possible to limit access to S3 based on the client's IP, but on a file-by-file basis? by /u/maxifederer (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 1:52 am
I know you can do it for folders, but can you do it for every file during creation? That is, can you create a file on S3 and set the permission during file creation, or can you only do it through the management console?
- Re-Add Failed Message to AWS SQS Queue? by /u/bikramjeets (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 12:58 am
I have an SQS queue that receives messages from a producer Lambda and is read by a consumer Lambda. The messages are basically notifications that have to be delivered to my company's end users (SMSes, emails, in-app notifications, etc.). Sometimes the processing of the messages in the consumer Lambda fails for various reasons, such as a downstream microservice being temporarily unavailable. I want to re-add these failed messages to the SQS queue so they can be retried later. Is there any built-in mechanism to do this, or a better way than just adding a new message object to the queue with the data copied over?
- Configuring Connect to work with Lambda and Dynamo by /u/LacZingen (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 12:26 am
Building a contact flow that takes user input, and I want to pass that to a Lambda function to query DynamoDB, then return the results to Connect to be read out to the caller. I have the boilerplate done but am struggling to connect the last few pieces, specifically storing the user input and passing it to the Lambda, then storing the result of the Lambda and giving it to the caller.
- Securely retrieving secrets with AWS Lambda by /u/shadowsyntax (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 12:25 am
- AWS Amplify validation rules? Only create one Profile object via GraphQL API by /u/daredeviloper (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 9, 2022 at 12:19 am
Apologies if this has been asked; I could not find anything. I'd like to allow a user to set up a profile. Using AWS Amplify + the GraphQL API, I see that I can create a @model Profile and specify that only the owner can create it. But the API seems to let the owner create infinite Profile objects. How do I prevent this and only allow one? Where does the custom logic go?
- Amazon S3 adds a new policy condition key to require or restrict server-side encryption with customer-provided keys (SSE-C) by /u/jsonpile (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 8, 2022 at 10:34 pm
AWS introduces further granularity when restricting encryption types for S3!
- aws_breaking_changes: List of changes announced for AWS that may break existing code by /u/mooreds (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 8, 2022 at 6:07 pm
- Amazon EKS IAM roles and policies with Terraform by /u/Waleed-Engineer (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 8, 2022 at 11:23 am
- How to retrieve images from S3 the right way by /u/abstract_code (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on August 8, 2022 at 9:57 am
I'm wondering what the best practices are in this type of case. I have a basic decoupled frontend and backend architecture, and I use Docker for both development and production, nothing too complex: Vue as the frontend, Laravel as the backend, and S3 for storing images with no public access and READ/PUT permissions linked to a dedicated IAM account. I have a form in my frontend where I send an image to the backend, and then in the backend, using the Laravel AWS SDK, I upload the file to S3 with my IAM account access keys. After uploading the file to S3, I save the object URL and path in my database so I can display them later on. Here comes my concern: as public access on the bucket is restricted, when I try to display an image from S3 using the object URL, it says access denied; obviously S3 can't tell who is trying to download these images and blocks the access. So the options I see are: download the images in my backend using my IAM user access keys and pass them to the frontend; use the IAM user access keys in the frontend to download the images directly from S3 using the URL or path; or make the bucket publicly accessible from anywhere, which would solve my problem easily, and since the images are not private or restricted, this might be the route to take. But I would like to skip that one, as in future projects I may need "private" images that can't be publicly accessible from the browser through the URL. This is my concern for the development environment, where all servers are inside local Docker containers; as for the production environment, the frontend web server and backend server are inside ECS on separate clusters. Could I somehow use S3 gateway endpoints and IAM roles to upload/download files without needing an IAM user and its access keys? I guess this depends on the way Laravel or Vue retrieves those files from S3; maybe specifying the S3 endpoint is enough if I have everything set up correctly in AWS. Sorry if these questions feel a bit silly; it's my first architecture and I want to follow best practices from the beginning so I get a good foundation. Thanks in advance!
- Looking for recommendations on what to pursue... by /u/UnitedWolverine5501 (AWS Certifications) on August 4, 2022 at 12:26 pm
So, I'm a network engineer who has begun breaking into cloud. I have my CCNP in R&S, I'm working on my CCNP for automation now, and I will probably concentrate on getting one of the specialty CCNP certs every year to keep my CCNP in play. I have a Linux certification and I've gotten my AWS SAA. I see where networking is headed (at least for the good jobs, lol) and I want to really break into more cloud-based networking, maybe including a load of scripting for automation; I want to build on my enterprise networking experience/knowledge, be of value to an employer, and have fun at what I do, because lately with my networking gigs it just hasn't been fun. My question is: what AWS certs should I pursue? I could see getting very interested in code, as I'm already having fun learning automation; studying for my SAA was actually fun and exciting for me, almost like when I first got into networking, where the thrill has since burned down to a simmer of excitement (I have ADHD, I lose interest in things after a while, lol). I'm assuming Advanced Networking would be an obvious one, but then what? DevOps Pro, SA Pro (which I've heard is a beast)? Any of the other AWS specialties?
- Focusing on particular subjects based on weaknesses and trends by /u/posty72 (AWS Certifications) on August 4, 2022 at 10:30 am
I found good insight here on what to study for the exam, so this is me paying it forward. YMMV. The approach I took to studying was to start with a few practice exams on Udemy to get comfortable with the question structure. As I went through these, I recorded the subjects I needed to improve on and went through the AWS docs to do so. About a week out from the exam, I started watching this sub, took note of the subjects that people said were coming up a lot, bought Adrian Cantrill's Solutions Architect course, and focused solely on them. This turned out to be a pretty great strategy! There were lots of questions on RDS/Aurora, slightly fewer on storage (S3, EFS, etc.), then a smattering of EC2, serverless, networking, and SQS/SNS/Kinesis. I would also recommend covering HA/FT and cost optimisation.
- Seeking Recommendations: Next Certification by /u/Eightstream (AWS Certifications) on August 4, 2022 at 6:22 am
Hi all, I am trying to work out which certification I should target next. I am a data scientist by background; I have my Cloud Practitioner and my Data Analytics Specialty. I have been asked to manage the rollout of a new AWS-based analytics environment for my team (previously we have worked in a bit of a mishmash of on-prem servers). I have a good amount of experience with AWS as an analytics user, but not a lot of infrastructure experience. I would like to target a certification that will help me build the knowledge I am going to need for the job. I was thinking of targeting either the Database Specialty or one of the Solutions Architect certs, but I'm not sure which is going to be more useful. Any advice would be gratefully appreciated.
- Can I pass SAA-C02 with just the practice exams? by /u/sabrina_ng (AWS Certifications) on August 4, 2022 at 6:10 am
I am in the middle (~20% completed) of e-learning with Neal's course. I'm running out of time if I want to take the SAA-C02 exam, which is expiring on 29 August. My test date is 15 August, or I can reschedule it to 22 August. Can I put aside the course materials and just do the practice papers on Tutorials Dojo? Will that be enough to pass the certification? I plan to catch up on the e-learning (at least without the stress of the exam) after I'm done with SAA-C02.
- Just bought SAA-C02 practice exams from Tutorials Dojo. Are Timed Mode and Review Mode exams the same? by /u/diabolical-sun (AWS Certifications) on August 4, 2022 at 1:16 am
As the title says, I recently bought the practice exams from Tutorials Dojo. I've been using review mode since I'm not ready to take the exams and I'm still learning the material, but I wanted to leave 2 or 3 sets that I can do with fresh eyes in timed mode when I'm ready. Timed mode is kind of pointless when you've seen the questions before, so saving a few for later can better simulate the exam process. My assumption was that the question sets are the same, but I'm not certain and wanted to plan accordingly.
- Labs for AWS Security Specialty Exam by /u/trpoole (AWS Certifications) on August 3, 2022 at 11:45 pm
For those of you who have taken the AWS Security Specialty certification exam, I was wondering how important labs are to passing it. I didn't need labs to pass the AWS Solutions Architect exam and was wondering if the same is true for the security exam. I understand that labs help to reinforce concepts, but are they critical for passing the exam?
- AWS Cloud Practitioner Challenge discount not obtained! by /u/anicetito (AWS Certifications) on August 3, 2022 at 10:44 pm
Hi there folks. I have some experience with a few AWS services. A few days ago I found this link about a Practitioner Challenge where Amazon gives you a 50% discount on the Cloud Practitioner exam: https://pages.awscloud.com/GLOBAL-ln-GC-TrainCert-Cloud-Practitioner-Challenge-2022-reg.html I registered and received an email saying I need to schedule it through PSI before Dec 14 in order to get the 50% discount. However, when I get to the payment part, the exam is listed at $100 (the original price) and there's no discount applied. Does somebody know if I need to do something else to get the discount, or if there's some sort of generic discount code for the challenge? Thanks in advance.
- AWS - PearsonVUE rescheduling policy - Online exam by /u/SoftStingray (AWS Certifications) on August 3, 2022 at 9:08 pm
I've never taken an AWS exam so I don't know if this is different, but with CompTIA and Cisco you can literally reschedule 1 minute before your exam without any issues. My university sent me an email saying that you can only reschedule up to 24 hours in advance when taking an AWS exam online with PearsonVue. Anyone have any experience with this? Thanks.
- Test Centre vs Online by /u/HiddenDigital (AWS Certifications) on August 3, 2022 at 8:41 pm
Hi - just wondered what everyone's experience of doing the online tests is. I plan on doing SAA-C02 in a couple of weeks, and there are better timings and options for online (from home). I did AWS CCP & Azure Fundamentals at a test centre as I wanted a different environment from the house, but as above, I have more times available if I do it from home.
- SAA-C03 Exam Version by /u/hasitha1989 (AWS Certifications) on August 3, 2022 at 4:42 pm
I don't have much time to take the SAA-C02 version of the exam. Will it be difficult to pass the SAA-C03 version?
- Passed SAA-C02 by /u/Collapsed_Society (AWS Certifications) on August 3, 2022 at 3:10 pm
TL;DR: Passed with an 804. Journey summarized below. I passed the SAA-C02 this past Monday (August 1st). Like most folks, I want to share what led to my success. First, a little background about me: I've been working in IT for 33 years and have held many different positions, from junior admin to manager of large teams maintaining and operating enterprise information systems. Currently, I lead a small team operating and maintaining cloud infrastructure, platform services, and applications. Fewer headaches and more keyboard time with a small team; I do get my hands on the keyboard, but not as much as I'd like. As an old guy, and to lead effectively, I feel it is necessary to understand the tech and associated challenges my team contends with on a daily basis. Now the good stuff. I gave myself six weeks to learn the material. I used Neal Davis's course on Udemy and added his six practice tests to my routine. Additionally, I purchased the AWS Certified Solutions Architect Study Guide: Associate SAA-C02 Exam (Piper and Clinton) as a secondary study source; the study guide provided flash cards and additional tests on the Sybex website. Occasionally, I would find articles and videos on various websites and YouTube to view something from a different angle. Lastly, I would tear through AWS documentation for any areas where I was struggling to understand the concepts. CAVEAT: SAA-C02 expires on 29 August; SAA-C03 takes its place. Please keep this in mind as you source your study materials. Neal Davis's practice tests on Udemy used similar phrasing to the questions on the certification exam. Furthermore, each question on the practice tests provided a rationale for both correct and incorrect answers, and the test engine categorizes your results, allowing you to filter on content domains, correct vs. incorrect, all questions, etc. as you conduct your review. As a result, when I took the test, I felt comfortable with the phraseology of the questions. That's it, folks. Developer is up next for me, then DevOps Engineer starts in October. Good luck on your AWS journey.
- AWS Solutions Architect Associate difficulty by /u/runescapefisher (AWS Certifications) on August 3, 2022 at 2:21 pm
Hi everyone, in terms of difficulty, how bad is this exam? I already have a Certified Kubernetes Developer cert and a Sec+, with a CS undergrad degree. I'm just wondering if this cert is pure memorization or problem solving? I'm still studying for it, but the material is so boring.
- How many 50% discount vouchers do you get after passing an exam? by /u/butrimodre (AWS Certifications) on August 3, 2022 at 12:59 pm
My friend showed me that they got two vouchers in their account, and they only took one exam, a week ago. I always thought it was 1 voucher per exam.
- PASSED the AWS Certified SysOps Administrator Associate Exam SOA-C02 by /u/Ok_Connection3504 (AWS Certifications) on August 3, 2022 at 10:03 am
- Is it worth getting certification? by /u/Notalabel_4566 (AWS Certifications) on August 3, 2022 at 7:39 am
I've been a web developer for 2 years now but have no AWS experience. I'm constantly thinking ahead about career growth and how I can be the most prepared for whatever the next step may be. I'm wondering if it would be advantageous for me to get the AWS CCP/SAA certification? Would it help me in future job searches?
- If you take and pass the CCP, does it give you a voucher for one of the associate exams? If so, how much is the discount? by /u/Zodiax (AWS Certifications) on August 3, 2022 at 12:18 am
- Recommendations for a similar teaching method to Professor Messer by /u/Dependent-Ad5908 (AWS Certifications) on August 3, 2022 at 12:03 am
I really like the way Professor Messer teaches his CompTIA classes, and I am looking for a similar teaching method for my AWS learning. Messer sets up videos in bullet points and discusses the subject matter; after I watch his videos, I purchase his compact study notes, then create flash cards from my notes and his notes. Do any of you know of an instructor with the same teaching method and compact study notes for AWS? Thank you.
- Realistically... AWS certifications by /u/GuyRedditer (AWS Certifications) on August 2, 2022 at 10:22 pm
As someone with no experience in AWS or cloud computing who wants to learn and work in the field: I know there are free resources online, and I'm using those now. But I want to know if it's actually worth it. Does getting a job get easier with an AWS cert? Has anyone gotten a job in this field using AWS certs? If so, how did you do it? Also, should I learn any programming languages? Thanks.
- Solutions Architect Associate - are you building cool things? by /u/strahan47 (AWS Certifications) on August 2, 2022 at 9:41 pm
Hi there. Currently studying for the Cloud Practitioner exam and looking forward to getting it out of the way. I feel like there's a lot of memorization and not much hands-on experience with this one. So, a question for Solutions Architect Associates: is the learning more hands-on, and are you able to build some cool things now that you have it? Or would that be more for the Developer Associate? (Edit: I know Python and SQL, just wondering if the SAA will help me build things.)
- Passed SAA-C02 by /u/millilitre14 (AWS Certifications) on August 2, 2022 at 8:23 pm
I have been working in the AWS cloud since March and had wanted to get the SAA certification for a while; I finally got around to it a month ago. I went through Stephane Maarek's Udemy course + practice tests, and Tutorials Dojo practice tests. I got the results yesterday: I passed with a score of 823. I was not sure whether I had failed or passed throughout the exam! Some of the questions were pretty straightforward, especially those very similar to Tutorials Dojo practice questions. I got some questions on Polly, data lakes, and QuickSight which I just guessed! Lots of VPC-related questions, hybrid architecture, Aurora, and Kinesis. I am just relieved that it's all over! Planning to do Data Analytics next.
- Passed CLF-C01 by /u/DavidWiese (AWS Certifications) on August 2, 2022 at 7:51 pm
Just took the exam at a Pearson Vue center and got a PASS notification immediately after finishing. It says I will get my final score via email in the next few days, however. I used Stephane Maarek's Udemy course and felt like it covered absolutely everything. I started studying a few weeks ago, and this is what I did: went through the entire course at 1.25x speed; went through the course again, but just the overviews, summaries, and quizzes, at 1.75x speed, skipping all of the demos/hands-on; and in the last few days went through just the summaries and quizzes at 2.0x speed and made flash cards for any of the services I kept forgetting, and for the 6 Pillars (I had at least 3 questions on these, so I'm glad I did). I also purchased Stephane's 6 practice exams and found them helpful; after going through the above, I passed each one above 80%. The 6th one felt the hardest, but I did as well on that as on all the others. The entire exam took me 23 minutes for all 65 questions. I felt really confident throughout the whole thing; a handful of questions I had to narrow down and make a best guess, but otherwise I felt like I knew the answer almost immediately. Edit: my score was 920.
- Passed Advanced Networking by /u/Kind-Court-4030 (AWS Certifications) on August 2, 2022 at 5:17 pm
Passed, but barely: 761. I used Cantrill's videos, the official book, the AWS videos, and some YouTube stuff. Cantrill was the best; nothing else is remotely close in terms of quality. For practice tests, I used Tutorials Dojo and Whizlabs. I thought TD was the best for explanations, and Whizlabs was a lot harder, which was its own kind of good for this exam. I have a lot of experience in (mostly Cisco) networking, and I do not feel like that really saved me in this exam all that much.
- Passed SAA-C02 by /u/NoLifeguard9438 (AWS Certifications) on August 2, 2022 at 2:04 pm
The main reason why I decided to go for this cert is that I had to learn AWS anyway, as we want to employ infrastructure-as-code solutions. I thought a cert could be a nice add-on while getting into this domain, and cert materials could drive my learning process. So initially I planned to go with Stephane Maarek's materials (which I'd already purchased), but then you guys started writing posts left and right about failing the exam using his materials. So I panicked a bit and bought A. Cantrill's SAA course, plus TD's practice exams. I had passed CCP using Maarek's materials a month earlier. Regarding the test itself, it felt more difficult than the TD exams (when doing TD exams I got the impression there were more straightforward questions, taking you a few moments to answer). Regarding the questions themselves, a lot of them were related to RDS/Aurora and Storage Gateway, a bit less on S3 and VPC-related things like NAT or hybrid cloud connectivity, and not much related to EC2. The unscored questions stood out quite clearly, so it was easy to notice where not to waste too much time. I didn't get pass/fail feedback. It took me around 100 hours to go through Cantrill's course alone. I went with a different approach from what Adrian recommends: he suggests going through it with 1-2h lessons over about 2 months, whereas I was sometimes doing 10 hours per day to speed through it. Time was a factor for me; besides, I believe if I had done it 'the right way' I would have started forgetting the material before reaching the finish line. The scope of his course is questionable IMHO: too much detail on things you'll eventually forget if you don't use them on a daily basis. I think you get the vibe already that I prefer Maarek's way, especially as I was also watching Maarek's materials on subjects that I felt were missing or not explained in a way I could clearly understand. The stuff I learned for the CCP was really helpful; while watching Cantrill's course I had lots of repetition and it was easier to digest Adrian's content (by the way, I recommend watching it at 1.25x speed minimum). Regarding the TD exams, I was averaging 75-81% on the first run; the worst result I got was on Maarek's practice exam, 73%. Adrian also has 2 exam courses, but they are not as well prepared as what you get from TD or Maarek. I highly recommend TD's materials; they're an eye-opener on what to expect during the actual exam and they reveal your weak points. To sum up, if you want to get the cert and that's about it, go with Maarek. Also, if you graduated in IT, you probably don't need to know what the OSI model is or how to convert between numeral systems. I recommend Cantrill's course if you have A LOT of time or perhaps you're on a new career path in IT. Again, this is just me; lots of people seem to benefit a lot from Cantrill's course. For me it was so much to digest that I was starting to have doubts about why I was doing this in the first place, and honestly that was the last AWS cert I wasted my time on. Overall I spent tons of time (across a bit more than 1.5 months) on the course, going through cheat sheets, doing practice exams, and reviewing them. At this point I'm just happy it's done and I can move on with my life.
- What to do next: learn Python (as I have no programming skills) or go for the AWS Certified Developer Associate certification, as I recently passed SAA? by /u/97breezy (AWS Certifications) on August 2, 2022 at 2:00 pm
I cleared my AWS Solutions Architect certification recently, but I'm confused about what to do next: should I learn Python (at least some basics so that I can put it on my resume), or should I go for the AWS Developer Associate certification? I have heard it overlaps with SAA, and since I passed recently, my knowledge is pretty fresh.
- PASSED! AWS Certified SysOps Administrator Associate SOA-C02 Exam Topics by /u/xyberneto (AWS Certifications) on August 1, 2022 at 11:46 pm
Spent the first half of the year getting comfortable and settled in with my new company, and after a few weeks of study, practice tests, and video courses, I finally passed the AWS Certified SysOps Administrator Associate SOA-C02 exam! The 3 exam labs that I encountered are challenging, and you should really do lots of hands-on work for this test in order to pass. In terms of lab mechanics, the exam lab is pretty similar to this TD lab on YouTube: one lab has multiple individual tasks that you need to answer and accomplish. I first used Adrian Cantrill's SysOps video course and then proceeded with TD's video course and practice tests. I'm also lurking in this sub quite often for exam feedback, and I recommend reading this one for reference: https://www.reddit.com/r/AWSCertifications/comments/tii7uy/passed_sysops_soac02_but_had_horrifying_exam_labs/ Sharing my SOA-C02 exam study guide for those who are about to take the test. I cannot stress enough the importance of the official SOA-C02 Exam Guide; this document is literally "The Guide" that you should read before starting your exam preparations.
SOA-C02 Exam Domains:
- Domain 1: Monitoring, Logging, and Remediation 20%
- Domain 2: Reliability and Business Continuity 16%
- Domain 3: Deployment, Provisioning, and Automation 18%
- Domain 4: Security and Compliance 16%
- Domain 5: Networking and Content Delivery 18%
- Domain 6: Cost and Performance Optimization 12%
Domain 1 has the highest weighting. Monitoring and logging are tasks done in Amazon CloudWatch, so focus on all modules of CloudWatch for the test, including (but not limited to) CloudWatch metrics, CloudWatch Logs, CloudWatch dashboards, etc. Remediation is usually related to AWS Config and troubleshooting, so focus on those too.
SOA-C02 Exam Topics:
- Analytics: Amazon Elasticsearch Service (Amazon ES)
- Application Integration: Amazon EventBridge (Amazon CloudWatch Events), Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS)
- AWS Cost Management: AWS Cost and Usage Report, AWS Cost Explorer, Savings Plans
- Compute: AWS Application Auto Scaling, Amazon EC2, Amazon EC2 Auto Scaling, Amazon EC2 Image Builder, AWS Lambda
- Database: Amazon Aurora, Amazon ElastiCache, Amazon RDS
- Management, Monitoring, and Governance: AWS CloudFormation, AWS CloudTrail, Amazon CloudWatch, AWS Command Line Interface (AWS CLI), AWS Compute Optimizer, AWS Config, AWS Control Tower, AWS License Manager, AWS Management Console, AWS OpsWorks, AWS Organizations, AWS Personal Health Dashboard, AWS Secrets Manager, AWS Service Catalog, AWS Systems Manager, AWS Systems Manager Parameter Store, AWS tools and SDKs, AWS Trusted Advisor
- Migration and Transfer: AWS DataSync, AWS Transfer Family
- Networking and Content Delivery: AWS Client VPN, Amazon CloudFront, Elastic Load Balancing, AWS Firewall Manager, AWS Global Accelerator, Amazon Route 53, Amazon Route 53 Resolver, AWS Transit Gateway, Amazon VPC, Amazon VPC Traffic Mirroring
- Security, Identity, and Compliance: AWS Certificate Manager (ACM), Amazon Detective, AWS Directory Service, Amazon GuardDuty, AWS IAM Access Analyzer, AWS Identity and Access Management (IAM), Amazon Inspector, AWS Key Management Service (AWS KMS), AWS License Manager, AWS Secrets Manager, AWS Security Hub, AWS Shield, AWS WAF
- Storage: Amazon Elastic Block Store (Amazon EBS), Amazon Elastic File System (Amazon EFS), Amazon FSx, Amazon S3, Amazon S3 Glacier, AWS Backup, AWS Storage Gateway
AWS Pro and Specialty level exams up next!
There are many trends within the current cloud computing industry that shape the conversations taking place throughout the market. One of these key areas of discussion is ‘Serverless’.
Serverless application deployment is a way of provisioning infrastructure in a managed way, without having to worry about building or maintaining servers – you launch the service and it works. Scaling, high availability, and automated processes are handled by the managed AWS serverless services. AWS Step Functions gives us a useful way to coordinate the components of distributed applications and microservices using visual workflows.
What is AWS Step Functions?
AWS Step Functions lets developers build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services.
Using Step Functions workflows, developers can focus on higher-value business logic instead of worrying about failures, retries, parallelization, and service integrations. In other words, AWS Step Functions is a serverless workflow orchestration service that can make developers’ lives much easier.
Components and Integrations
AWS Step Functions consist of a few components, the first being a State Machine.
What is a state machine?
The state machine model uses defined states and transitions to complete the tasks at hand. It is an abstract machine (system) that can be in only one state at a time, but it can switch between states. As a result, it doesn’t allow infinite loops, which removes an entire class of often-costly errors.
With AWS Step Functions, you can define workflows as state machines, which simplify complex code into easy-to-understand statements and diagrams. Building applications and confirming that they work as expected becomes much faster and easier.
State
In a state machine, a state is referred to by its name, which can be any string but must be unique within the state machine. State instances exist until their execution is complete.
An individual component of your state machine can be in any of the following 8 types of states:
- Task state – Do some work in your state machine. From a task state, AWS Step Functions can call Lambda functions directly
- Choice state – Make a choice between different branches of execution
- Fail state – Stops execution and marks it as failure
- Succeed state – Stops execution and marks it as a success
- Pass state – Simply pass its input to its output or inject some fixed data
- Wait state – Provide a delay for a certain amount of time or until a specified time/date
- Parallel state – Begin parallel branches of execution
- Map state – Adds a for-each loop condition
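To make the state types concrete, here is a minimal sketch of a state machine definition registered with boto3. It is illustrative only: the state machine name, Lambda function ARN, and IAM role ARN are hypothetical placeholders, and the definition exercises the Task, Wait, Choice, Fail, and Succeed state types.

```python
import json

import boto3  # assumes AWS credentials and a default region are configured

# Minimal Amazon States Language definition exercising several state types.
# All names and ARNs below are hypothetical placeholders.
definition = {
    "Comment": "Order-processing sketch",
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {  # Task state: do some work via a Lambda function
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessOrder",
            "Next": "WaitForSettlement",
        },
        "WaitForSettlement": {  # Wait state: fixed delay before continuing
            "Type": "Wait",
            "Seconds": 30,
            "Next": "OrderOK",
        },
        "OrderOK": {  # Choice state: branch on the previous task's output
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.status", "StringEquals": "FAILED", "Next": "OrderFailed"}
            ],
            "Default": "Done",
        },
        "OrderFailed": {"Type": "Fail", "Error": "OrderError"},  # Fail state
        "Done": {"Type": "Succeed"},                             # Succeed state
    },
}

sfn = boto3.client("stepfunctions")
response = sfn.create_state_machine(
    name="OrderProcessingSketch",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",  # hypothetical
)
print(response["stateMachineArn"])
```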
Limits
There are some limits that you need to be aware of when using AWS Step Functions. For example, a Standard workflow execution can run for at most one year and keep at most 25,000 events in its execution history, and the maximum input or output size for a task, state, or execution is 256 KB; see the AWS Step Functions service quotas documentation for the full list.
Use Cases and Examples
If you need to build workflows across multiple Amazon services, then AWS Step Functions is a great tool for you. Serverless microservices can be orchestrated with Step Functions, data pipelines can be built, and security incidents can be handled. Step Functions can be used both synchronously and asynchronously.
Instead of manually orchestrating multiple long-running ETL jobs or maintaining a separate application, you can use Step Functions to ensure that these jobs execute in order and complete successfully.
Third, Step Functions is a great way to automate recurring tasks, such as patching, infrastructure selection, and data synchronization. It will scale automatically, respond to timeouts, and retry failed tasks.
With Step Functions, you can create responsive serverless applications and microservices with multiple AWS Lambda functions without writing code for workflow logic, parallel processes, error handling, or timeouts.
Additionally, you can orchestrate services and data that run on Amazon EC2 instances, containers, or on-premises servers.
Pricing
Step Functions counts a state transition each time a step of your workflow is executed. You are charged on the total number of state transitions across all your state machines, including retries.
The AWS Free Tier includes 4,000 free state transitions per month. Beyond that, state transitions are charged at a flat rate of $0.000025 per transition.
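As a worked example: a workflow that runs 10,000 times in a month with 10 steps per execution generates 100,000 state transitions. After the 4,000 free transitions, the remaining 96,000 are billed at $0.000025 each, i.e. 96,000 × $0.000025 = $2.40 for the month.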
Summary
In summary, Step Functions is a powerful tool that you can use to improve application development and the productivity of your developers. By migrating your workflow logic into the cloud you will benefit from lower costs and rapid deployment. Because this is a serverless service, you can remove undifferentiated heavy lifting from the application development process.
Interview Questions
Q: How does AWS Step Functions create a state machine?
A: A state machine is a collection of states that allows you to perform tasks (in the form of Lambda functions or other services) in sequence, passing the output of one task to the next. You can add branching logic based on the output of a task to determine the next state.
Q: How can we share data in AWS Step Functions without passing it between the steps?
A: You can make use of InputPath and ResultPath. For example, in a ValidationWaiting step you can set these properties in the state machine definition, as sketched below.
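Here is a sketch, in Python dict form, of what such a state might look like. The state name comes from the question; the service integration, JSONPath values, and topic ARN are hypothetical.

```python
# Hypothetical "ValidationWaiting" Task state illustrating InputPath/ResultPath.
# InputPath selects the fragment of the state's input that is sent onward;
# ResultPath controls where the task's result is merged back into the original
# input, so data not sent to the external service remains available later.
validation_waiting = {
    "Type": "Task",
    "Resource": "arn:aws:states:::sns:publish",   # example service integration
    "InputPath": "$.validationRequest",           # send only this subtree
    "ResultPath": "$.validationResult",           # merge the result under this key
    "Parameters": {
        "TopicArn": "arn:aws:sns:us-east-1:123456789012:validation-topic",  # hypothetical
        "Message.$": "$",
    },
    "Next": "CheckValidation",  # hypothetical next state
}
```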
This way you send the external service only the data it actually needs, and you won’t lose access to any data that was previously in the input.
Q: How can I diagnose an error or a failure within AWS Step Functions?
A: The following are some possible failure events that may occur; see the Retry/Catch sketch after this list for one way to handle several of them:
- State Machine Definition Issues.
- Task Failures due to exceptions thrown in a Lambda Function.
- Transient or Networking Issues.
- A task has surpassed its timeout threshold.
- Privileges are not set appropriately for a task to execute.
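As a sketch of how several of these failure modes can be handled declaratively, a Task state can carry Retry and Catch fields. The built-in error names (States.Timeout, States.ALL, Lambda.ServiceException) are real; the function ARN and state names are hypothetical.

```python
# Task state with retry/catch handling. Retry covers transient/networking
# issues and timeouts with exponential backoff; Catch routes anything left
# over to a failure-handling state instead of failing the whole execution.
resilient_task = {
    "Type": "Task",
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:DoWork",  # hypothetical
    "TimeoutSeconds": 60,  # exceeding this raises States.Timeout
    "Retry": [
        {
            "ErrorEquals": ["States.Timeout", "Lambda.ServiceException"],
            "IntervalSeconds": 2,
            "MaxAttempts": 3,
            "BackoffRate": 2.0,
        }
    ],
    "Catch": [
        {
            "ErrorEquals": ["States.ALL"],
            "ResultPath": "$.error",   # keep the original input, attach the error
            "Next": "NotifyFailure",   # hypothetical cleanup state
        }
    ],
    "Next": "Done",
}
```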
Source: This AWS Step Function post originally appeared on: https://digitalcloud.training/
AWS Secrets Manager vs SSM Parameter Store
If you want to be an AWS cloud professional, you need to understand the differences between the myriad services AWS offers. You also need an in-depth understanding of how to use the security services to ensure that your account infrastructure is highly secure and safe to use. This is job zero at AWS, and nothing is taken more seriously than security. AWS makes it easy to implement security best practices and provides you with many tools to do so.
AWS Secrets Manager and SSM Parameter Store sound like very similar services on the surface. However, when you dig deeper and compare AWS Secrets Manager with SSM Parameter Store, you will find some significant differences that help you understand exactly when to use each tool.
AWS Secrets Manager
AWS Secrets Manager is designed to provide encryption for confidential information (like database credentials and API keys) that needs to be stored securely. Encryption is automatically enabled when creating a secret entry, and there are a number of additional features we are going to explore in this article.
Through AWS Secrets Manager, you can manage a wide range of secrets: database credentials, API keys, and other self-defined secrets are all eligible for this service.
If you are responsible for storing and managing secrets within your team, and for ensuring that your company follows regulatory requirements, AWS Secrets Manager makes this possible by securely storing all secrets in one place. Secrets Manager also offers a large amount of added functionality.
SSM Parameter store
SSM Parameter Store is slightly different. The key differences become evident when you compare how AWS Secrets Manager and SSM Parameter Store are used.
SSM Parameter Store focuses on a slightly wider set of requirements. Based on your compliance requirements, it can store your secrets either encrypted or unencrypted, keeping them out of your code base.
By storing environment configuration data and other parameters, it simplifies and optimizes the application deployment process. AWS Secrets Manager, by contrast, adds secret rotation, cross-account access, and tighter integration with other AWS services.
Based on this explanation, you may think that they both sound similar. Let’s break down the similarities and differences between these services.
Similarities
Managed Key/Value Store Services
Both services allow you to store values under a name or key. This is extremely useful because an application deployment can reference different parameters or secrets depending on the deployment environment, allowing customizable and highly integrable deployments of your applications.
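To illustrate the similarity, here is a minimal boto3 sketch that reads a value from each service by name; the parameter and secret names are hypothetical.

```python
import boto3

# Read a SecureString parameter from SSM Parameter Store.
ssm = boto3.client("ssm")
parameter = ssm.get_parameter(
    Name="/myapp/prod/db-password",  # hypothetical parameter name
    WithDecryption=True,             # decrypt via KMS on retrieval
)
print(parameter["Parameter"]["Value"])

# Read a secret from Secrets Manager.
secrets = boto3.client("secretsmanager")
secret = secrets.get_secret_value(SecretId="myapp/prod/db-credentials")  # hypothetical
print(secret["SecretString"])
```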
Both Referenceable in CloudFormation
You can use the powerful Infrastructure as Code (IaC) tool AWS CloudFormation to build your applications programmatically. Both products can be referenced from CloudFormation templates (for example, via dynamic references), allowing a seamless developer experience without painful manual processes.
Similar Encryption Options
They are both inherently very secure services – and you do not have to choose one over another based on the encryption offered by either service.
Through another AWS security service, KMS (the Key Management Service), values in both services can be encrypted, and IAM policies can be written so that only specific IAM users and roles have permission to decrypt a value. This restricts access to anyone who doesn’t need it and abides by the principle of least privilege, helping you meet compliance standards.
Versioning
Versioning is the ability to save multiple, iteratively developed versions of a value, so you can quickly restore a lost version or keep multiple copies of the same item.
Both services support versioning of secret values. This allows you to view previous versions of your parameters and, optionally, promote a former version to be the current version, which can be useful as your application changes. One nuance: while SSM Parameter Store only allows one version of a parameter to be active at any given time, Secrets Manager allows multiple versions to exist at the same time while you are rotating a secret, using staging labels.
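As a sketch of version retrieval in both services (names hypothetical): Parameter Store selects a version by appending `:<version>` to the parameter name, while Secrets Manager selects one via staging labels such as AWSPREVIOUS.

```python
import boto3

# Fetch version 3 of a parameter by appending ":3" to its name.
ssm = boto3.client("ssm")
old_param = ssm.get_parameter(Name="/myapp/prod/config:3", WithDecryption=True)

# Fetch the version of a secret that was current before the last change.
secrets = boto3.client("secretsmanager")
previous = secrets.get_secret_value(
    SecretId="myapp/prod/db-credentials",  # hypothetical
    VersionStage="AWSPREVIOUS",
)
```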
Given that there are lots of similarities between the two services, it is now time to view and compare the differences, along with some use cases of either service.
Differences
Cost
The costs differ across the services; SSM Parameter Store tends to cost less than Secrets Manager. Standard parameters are free: you won’t be charged for the first 10,000 parameters you store, although advanced parameters do incur a charge. AWS Secrets Manager bills a fixed fee per secret per month plus a fee per 10,000 API calls.
This may factor into how you use each service and how you define your cloud spending strategy, so this is valuable information.
Password generation
A useful feature of AWS Secrets Manager is the ability to generate random data during the creation phase, allowing the secure and auditable creation of strong, unique passwords that can be referenced in the same CloudFormation stack. This allows our applications to be fully built using IaC, with all the benefits that entails.
AWS Systems Manager Parameter Store, on the other hand, doesn’t work this way and can’t generate random data: we need to create values manually using the console or AWS CLI, and this can’t happen during the creation phase.
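In CloudFormation this is done with the GenerateSecretString property of a secret; the boto3 equivalent is sketched below, with a hypothetical secret name and character exclusions.

```python
import boto3

secrets = boto3.client("secretsmanager")

# Ask Secrets Manager to generate a strong random password.
random_password = secrets.get_random_password(
    PasswordLength=32,
    ExcludeCharacters='"@/\\',  # characters some databases reject
)["RandomPassword"]

# Store it as a new secret.
secrets.create_secret(
    Name="myapp/prod/generated-password",  # hypothetical
    SecretString=random_password,
)
```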
Rotation of Secrets
A powerful feature of AWS Secrets Manager is the ability to automatically rotate credentials on a pre-defined schedule that you set. Secrets Manager integrates this feature natively with many AWS services; automated rotation is simply not possible with AWS Systems Manager Parameter Store. There, you would have to refresh and update values yourself, which means a lot more manual setup to achieve the same functionality that Secrets Manager supports natively.
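A minimal sketch of enabling rotation with boto3, assuming a rotation Lambda function already exists that implements the Secrets Manager rotation steps; the secret name, function ARN, and schedule are hypothetical.

```python
import boto3

secrets = boto3.client("secretsmanager")
secrets.rotate_secret(
    SecretId="myapp/prod/db-credentials",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:RotateDbSecret",
    RotationRules={"AutomaticallyAfterDays": 30},  # rotate every 30 days
)
```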
Cross-Account Access
Firstly, there is currently no way to attach a resource-based IAM policy to an AWS Systems Manager Parameter Store parameter (Standard type). This means that cross-account access is not possible for Parameter Store; if you need this functionality you will have to configure an extensive workaround, or use AWS Secrets Manager.
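Here is a sketch of granting a second, hypothetical account read access to a secret via a resource-based policy (the caller in that account also needs permission to use the KMS key that encrypts the secret).

```python
import json

import boto3

# Resource-based policy allowing account 210987654321 (hypothetical) to read
# this secret. In a secret's resource policy, "Resource": "*" means this secret.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::210987654321:root"},
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "*",
        }
    ],
}

secrets = boto3.client("secretsmanager")
secrets.put_resource_policy(
    SecretId="myapp/prod/db-credentials",  # hypothetical
    ResourcePolicy=json.dumps(policy),
)
```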
Size of Secrets
Each service has a maximum size for a secret or parameter.
Secrets Manager can store secrets of up to 10 KB in size.
Standard parameters can hold up to 4,096 characters (4 KB) per entry, and advanced parameters can store up to 8 KB per entry.
Multi-Region Deployment
As with many other features of AWS Secrets Manager, AWS SSM Parameter Store does not offer the same functionality here. You can’t easily replicate parameters across multiple Regions, and you would need an extensive workaround to make this work; Secrets Manager, by contrast, can replicate a secret to multiple Regions for added resilience.
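A sketch of replicating an existing secret to a second Region with boto3; the secret name and target Region are hypothetical, and replicas are kept in sync by the service.

```python
import boto3

secrets = boto3.client("secretsmanager")
secrets.replicate_secret_to_regions(
    SecretId="myapp/prod/db-credentials",
    AddReplicaRegions=[{"Region": "us-west-2"}],
)
```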
In terms of use cases, you may want to use AWS Secrets Manager to store your encrypted secrets with easy rotation. If you require a feature-rich solution for managing your secrets and staying compliant with your regulatory requirements, consider choosing AWS Secrets Manager.
On the other hand, you may want to choose SSM Parameter Store as a cheaper option for storing encrypted or unencrypted values. Parameter Store provides more limited functionality, but it enables your application deployments by storing parameters in a cheap and secure way.
Source: This post originally appeared on https://digitalcloud.training/