What are the Top 100 AWS Solutions Architect Associate Certification Exam Questions and Answers Dump SAA-C03?
AWS Certified Solutions Architects are responsible for designing, deploying, and managing AWS cloud applications. The AWS Certified Solutions Architect – Associate exam validates an examinee’s ability to design and deploy secure and robust applications on AWS technologies. AWS Solutions Architect Associate training provides an overview of key AWS services, security, architecture, pricing, and support.
The AWS Certified Solutions Architect – Associate (SAA-C03) examination is the natural stepping stone toward the AWS Certified Solutions Architect – Professional certification; AWS no longer requires an Associate certification as a prerequisite for its Professional-level exams. Successful completion of this examination can lead to a salary raise or promotion for those in cloud roles. Below is the Top 100 AWS Solutions Architect Associate exam prep facts and summaries questions and answers dump.
With average increases in salary of over 25% for certified individuals, you’re going to be in a much better position to secure your dream job or promotion if you earn your AWS Certified Solutions Architect Associate certification. You’ll also develop strong hands-on skills by doing the guided hands-on lab exercises in our course which will set you up for successfully performing in a solutions architect role.
We recommend that you allocate at least 60 minutes of study time per day and you will then be able to complete the certification within 5 weeks (including taking the actual exam). Study times can vary based on your experience with AWS and how much time you have each day, with some students passing their exams much faster and others taking a little longer. Get our eBook here.
The AWS Solutions Architect Associate exam is an associate-level exam that requires a solid understanding of the AWS platform and a broad range of AWS services. The AWS Certified Solutions Architect Associate exam questions are scenario-based questions and can be challenging. Despite this, the AWS Solutions Architect Associate is often earned by beginners to cloud computing.
The AWS Certified Solutions Architect – Associate (SAA-C03) exam is intended for individuals who perform in a solutions architect role. The exam validates a candidate’s ability to use AWS technologies to design solutions based on the AWS Well-Architected Framework.
The SAA-C03 exam consists of 65 questions in multiple-choice and multiple-response format. You can take the exam in a testing center or as an online proctored exam from your home or office. You have 130 minutes to complete your exam, and the passing score is 720 on a scaled range of 100 to 1000 points. If English is not your first language, you can request an accommodation when booking your exam that will qualify you for an additional 30-minute exam extension.
The exam also validates a candidate’s ability to complete the following tasks: • Design solutions that incorporate AWS services to meet current business requirements and future projected needs • Design architectures that are secure, resilient, high-performing, and cost-optimized • Review existing solutions and determine improvements
Unscored content The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.
Target candidate description The target candidate should have at least 1 year of hands-on experience designing cloud solutions that use AWS services
All AWS certification exam results are reported as a scaled score from 100 to 1000. Your score shows how you performed on the examination as a whole and whether or not you passed. The passing score for the AWS Certified Solutions Architect – Associate is 720.
Yes, you can now take all AWS Certification exams with online proctoring using Pearson Vue or PSI. Here’s a detailed guide on how to book your AWS exam.
There are no prerequisites for taking AWS exams. You do not need any programming knowledge or experience working with AWS. Everything you need to know is included in our courses. We do recommend that you have a basic understanding of fundamental computing concepts such as compute, storage, networking, and databases.
AWS Certified Solutions Architects are IT professionals who design cloud solutions with AWS services to meet given technical requirements. An AWS Solutions Architect Associate is expected to design and implement distributed systems on AWS that are high-performing, scalable, secure and cost optimized.
Domain 1: Design Secure Architectures This exam domain is focused on securing your architectures on AWS and comprises 30% of the exam. Task statements include:
Task Statement 1: Design secure access to AWS resources. Knowledge of: • Access controls and management across multiple accounts • AWS federated access and identity services (for example, AWS Identity and Access Management [IAM], AWS Single Sign-On [AWS SSO]) • AWS global infrastructure (for example, Availability Zones, AWS Regions) • AWS security best practices (for example, the principle of least privilege) • The AWS shared responsibility model
Skills in: • Applying AWS security best practices to IAM users and root users (for example, multi-factor authentication [MFA]) • Designing a flexible authorization model that includes IAM users, groups, roles, and policies • Designing a role-based access control strategy (for example, AWS Security Token Service [AWS STS], role switching, cross-account access) • Designing a security strategy for multiple AWS accounts (for example, AWS Control Tower, service control policies [SCPs]) • Determining the appropriate use of resource policies for AWS services • Determining when to federate a directory service with IAM roles
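A flexible authorization model ultimately comes down to well-scoped IAM policy documents. Below is a minimal sketch of a least-privilege identity policy built in Python; the bucket name, prefix, and statement ID are illustrative placeholders, not values from the exam guide.

```python
import json

# Least-privilege sketch: the attached identity may only read objects under
# one S3 prefix. Bucket name and prefix are hypothetical examples.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadOnlyReports",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/reports/*",
        }
    ]
}

print(json.dumps(policy, indent=2))
```

Granting `s3:GetObject` on a single prefix, rather than `s3:*` on all resources, is the principle of least privilege the task statement refers to.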
Task Statement 2: Design secure workloads and applications.
Knowledge of: • Application configuration and credentials security • AWS service endpoints • Control ports, protocols, and network traffic on AWS • Secure application access • Security services with appropriate use cases (for example, Amazon Cognito, Amazon GuardDuty, Amazon Macie) • Threat vectors external to AWS (for example, DDoS, SQL injection)
Skills in: • Designing VPC architectures with security components (for example, security groups, route tables, network ACLs, NAT gateways) • Determining network segmentation strategies (for example, using public subnets and private subnets) • Integrating AWS services to secure applications (for example, AWS Shield, AWS WAF, AWS SSO, AWS Secrets Manager) • Securing external network connections to and from the AWS Cloud (for example, VPN, AWS Direct Connect)
Task Statement 3: Determine appropriate data security controls.
Knowledge of: • Data access and governance • Data recovery • Data retention and classification • Encryption and appropriate key management
Skills in: • Aligning AWS technologies to meet compliance requirements • Encrypting data at rest (for example, AWS Key Management Service [AWS KMS]) • Encrypting data in transit (for example, AWS Certificate Manager [ACM] using TLS) • Implementing access policies for encryption keys • Implementing data backups and replications • Implementing policies for data access, lifecycle, and protection • Rotating encryption keys and renewing certificates
Domain 2: Design Resilient Architectures This exam domain is focused on designing resilient architectures on AWS and comprises 26% of the exam. Task statements include:
Task Statement 1: Design scalable and loosely coupled architectures. Knowledge of: • API creation and management (for example, Amazon API Gateway, REST API) • AWS managed services with appropriate use cases (for example, AWS Transfer Family, Amazon Simple Queue Service [Amazon SQS], Secrets Manager) • Caching strategies • Design principles for microservices (for example, stateless workloads compared with stateful workloads) • Event-driven architectures • Horizontal scaling and vertical scaling • How to appropriately use edge accelerators (for example, content delivery network [CDN]) • How to migrate applications into containers • Load balancing concepts (for example, Application Load Balancer) • Multi-tier architectures • Queuing and messaging concepts (for example, publish/subscribe) • Serverless technologies and patterns (for example, AWS Fargate, AWS Lambda) • Storage types with associated characteristics (for example, object, file, block) • The orchestration of containers (for example, Amazon Elastic Container Service [Amazon ECS], Amazon Elastic Kubernetes Service [Amazon EKS]) • When to use read replicas • Workflow orchestration (for example, AWS Step Functions)
Skills in: • Designing event-driven, microservice, and/or multi-tier architectures based on requirements • Determining scaling strategies for components used in an architecture design • Determining the AWS services required to achieve loose coupling based on requirements • Determining when to use containers • Determining when to use serverless technologies and patterns • Recommending appropriate compute, storage, networking, and database technologies based on requirements • Using purpose-built AWS services for workloads
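Loose coupling via queuing is worth internalizing concretely. The toy sketch below uses Python's standard-library queue to show the idea Amazon SQS provides as a managed service: producers and consumers share only the queue, so either side can scale independently.

```python
import queue
import threading

# Toy illustration of loose coupling: producers and consumers never talk to
# each other directly, only to the queue (the role SQS plays on AWS).
orders = queue.Queue()
processed = []

def producer(n):
    for i in range(n):
        orders.put(f"order-{i}")

def consumer():
    while True:
        try:
            order = orders.get(timeout=0.5)
        except queue.Empty:
            return  # queue drained; worker exits
        processed.append(order)
        orders.task_done()

producer(10)
# Three independent workers drain the queue; adding workers is "scaling out"
workers = [threading.Thread(target=consumer) for _ in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(processed))  # 10
```

Because the producer never blocks on a specific consumer, a traffic spike simply deepens the queue until more workers are added, which is exactly the buffering behavior the exam expects you to recognize in SQS-based designs.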
Task Statement 2: Design highly available and/or fault-tolerant architectures. Knowledge of: • AWS global infrastructure (for example, Availability Zones, AWS Regions, Amazon Route 53) • AWS managed services with appropriate use cases (for example, Amazon Comprehend, Amazon Polly) • Basic networking concepts (for example, route tables) • Disaster recovery (DR) strategies (for example, backup and restore, pilot light, warm standby, active-active failover, recovery point objective [RPO], recovery time objective [RTO]) • Distributed design patterns • Failover strategies • Immutable infrastructure • Load balancing concepts (for example, Application Load Balancer) • Proxy concepts (for example, Amazon RDS Proxy) • Service quotas and throttling (for example, how to configure the service quotas for a workload in a standby environment) • Storage options and characteristics (for example, durability, replication) • Workload visibility (for example, AWS X-Ray)
Skills in: • Determining automation strategies to ensure infrastructure integrity • Determining the AWS services required to provide a highly available and/or fault-tolerant architecture across AWS Regions or Availability Zones • Identifying metrics based on business requirements to deliver a highly available solution • Implementing designs to mitigate single points of failure • Implementing strategies to ensure the durability and availability of data (for example, backups) • Selecting an appropriate DR strategy to meet business requirements • Using AWS services that improve the reliability of legacy applications and applications not built for the cloud (for example, when application changes are not possible) • Using purpose-built AWS services for workloads
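The DR strategies above trade cost for recovery speed. The sketch below captures the two numbers that drive that choice; the RTO figures are illustrative assumptions for comparison only, not AWS-published values.

```python
# Worst-case RPO equals the interval between backups: data written just
# after a backup completes can be lost for the full interval.
def worst_case_rpo_minutes(backup_interval_minutes: int) -> int:
    return backup_interval_minutes

# Illustrative (assumed) recovery times per DR strategy, cheapest to most
# expensive; real numbers depend entirely on the workload.
ILLUSTRATIVE_RTO_MINUTES = {
    "backup_and_restore": 480,  # rebuild everything from backups
    "pilot_light": 60,          # core pieces pre-provisioned, scale up on failover
    "warm_standby": 15,         # scaled-down copy always running
    "active_active": 1,         # live traffic simply shifts Regions
}

print(worst_case_rpo_minutes(60))  # hourly backups -> up to 60 minutes of loss
```

The exam pattern: pick the cheapest strategy whose RTO/RPO still satisfies the stated business requirement.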
Domain 3: Design High-Performing Architectures This exam domain is focused on designing high-performing architectures on AWS and comprises 24% of the exam. Task statements include:
Task Statement 1: Determine high-performing and/or scalable storage solutions. Knowledge of: • Hybrid storage solutions to meet business requirements • Storage services with appropriate use cases (for example, Amazon S3, Amazon Elastic File System [Amazon EFS], Amazon Elastic Block Store [Amazon EBS]) • Storage types with associated characteristics (for example, object, file, block)
Skills in: • Determining storage services and configurations that meet performance demands • Determining storage services that can scale to accommodate future needs
Task Statement 2: Design high-performing and elastic compute solutions. Knowledge of: • AWS compute services with appropriate use cases (for example, AWS Batch, Amazon EMR, Fargate) • Distributed computing concepts supported by AWS global infrastructure and edge services • Queuing and messaging concepts (for example, publish/subscribe) • Scalability capabilities with appropriate use cases (for example, Amazon EC2 Auto Scaling, AWS Auto Scaling) • Serverless technologies and patterns (for example, Lambda, Fargate) • The orchestration of containers (for example, Amazon ECS, Amazon EKS)
Skills in: • Decoupling workloads so that components can scale independently • Identifying metrics and conditions to perform scaling actions • Selecting the appropriate compute options and features (for example, EC2 instance types) to meet business requirements • Selecting the appropriate resource type and size (for example, the amount of Lambda memory) to meet business requirements
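"Identifying metrics and conditions to perform scaling actions" maps directly onto the proportional rule behind target tracking scaling policies. Here is a sketch of that calculation; the CPU numbers are illustrative.

```python
import math

# Target tracking sketch: scale capacity so the tracked metric moves back
# toward its target value.
def desired_capacity(current_capacity: int, current_metric: float,
                     target_metric: float) -> int:
    return max(1, math.ceil(current_capacity * current_metric / target_metric))

# 4 instances averaging 90% CPU against a 50% target -> scale out to 8
print(desired_capacity(4, 90.0, 50.0))  # 8
```

Rounding up (and never below one instance) biases the group toward availability rather than cost, which is the behavior you want from a scale-out policy.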
Task Statement 3: Determine high-performing database solutions. Knowledge of: • AWS global infrastructure (for example, Availability Zones, AWS Regions) • Caching strategies and services (for example, Amazon ElastiCache) • Data access patterns (for example, read-intensive compared with write-intensive) • Database capacity planning (for example, capacity units, instance types, Provisioned IOPS) • Database connections and proxies • Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations) • Database replication (for example, read replicas) • Database types and services (for example, serverless, relational compared with non-relational, in-memory)
Skills in: • Configuring read replicas to meet business requirements • Designing database architectures • Determining an appropriate database engine (for example, MySQL compared with PostgreSQL) • Determining an appropriate database type (for example, Amazon Aurora, Amazon DynamoDB) • Integrating caching to meet business requirements
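"Integrating caching" usually means the cache-aside pattern that ElastiCache supports for read-heavy workloads. This minimal sketch stands a dict in for both the cache and the database.

```python
# Cache-aside sketch: check the cache first, fall back to the database on a
# miss, then populate the cache so repeat reads are cheap.
database = {"user:1": "alice", "user:2": "bob"}  # stand-in for RDS/Aurora
cache = {}                                       # stand-in for ElastiCache
stats = {"hits": 0, "misses": 0}

def get(key):
    if key in cache:
        stats["hits"] += 1
        return cache[key]
    stats["misses"] += 1
    value = database[key]  # the expensive read in a real system
    cache[key] = value
    return value

get("user:1"); get("user:1"); get("user:2")
print(stats)  # {'hits': 1, 'misses': 2}
```

The first read of each key misses and hits the database; repeats are served from the cache, which is how caching offloads read traffic from the primary database.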
Task Statement 4: Determine high-performing and/or scalable network architectures. Knowledge of: • Edge networking services with appropriate use cases (for example, Amazon CloudFront, AWS Global Accelerator) • How to design network architecture (for example, subnet tiers, routing, IP addressing) • Load balancing concepts (for example, Application Load Balancer) • Network connection options (for example, AWS VPN, Direct Connect, AWS PrivateLink)
Skills in: • Creating a network topology for various architectures (for example, global, hybrid, multi-tier) • Determining network configurations that can scale to accommodate future needs • Determining the appropriate placement of resources to meet business requirements • Selecting the appropriate load balancing strategy
Task Statement 5: Determine high-performing data ingestion and transformation solutions. Knowledge of: • Data analytics and visualization services with appropriate use cases (for example, Amazon Athena, AWS Lake Formation, Amazon QuickSight) • Data ingestion patterns (for example, frequency) • Data transfer services with appropriate use cases (for example, AWS DataSync, AWS Storage Gateway) • Data transformation services with appropriate use cases (for example, AWS Glue) • Secure access to ingestion access points • Sizes and speeds needed to meet business requirements • Streaming data services with appropriate use cases (for example, Amazon Kinesis)
Skills in: • Building and securing data lakes • Designing data streaming architectures • Designing data transfer solutions • Implementing visualization strategies • Selecting appropriate compute options for data processing (for example, Amazon EMR) • Selecting appropriate configurations for ingestion • Transforming data between formats (for example, .csv to .parquet)
Domain 4: Design Cost-Optimized Architectures This exam domain is focused on optimizing solutions for cost-effectiveness on AWS and comprises 20% of the exam. Task statements include:
Task Statement 1: Design cost-optimized storage solutions. Knowledge of: • Access options (for example, an S3 bucket with Requester Pays object storage) • AWS cost management service features (for example, cost allocation tags, multi-account billing) • AWS cost management tools with appropriate use cases (for example, AWS Cost Explorer, AWS Budgets, AWS Cost and Usage Report) • AWS storage services with appropriate use cases (for example, Amazon FSx, Amazon EFS, Amazon S3, Amazon EBS) • Backup strategies • Block storage options (for example, hard disk drive [HDD] volume types, solid state drive [SSD] volume types) • Data lifecycles • Hybrid storage options (for example, DataSync, Transfer Family, Storage Gateway) • Storage access patterns • Storage tiering (for example, cold tiering for object storage) • Storage types with associated characteristics (for example, object, file, block)
Skills in: • Designing appropriate storage strategies (for example, batch uploads to Amazon S3 compared with individual uploads) • Determining the correct storage size for a workload • Determining the lowest cost method of transferring data for a workload to AWS storage • Determining when storage auto scaling is required • Managing S3 object lifecycles • Selecting the appropriate backup and/or archival solution • Selecting the appropriate service for data migration to storage services • Selecting the appropriate storage tier • Selecting the correct data lifecycle for storage • Selecting the most cost-effective storage service for a workload
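"Managing S3 object lifecycles" and "selecting the appropriate storage tier" come together in a lifecycle configuration. The sketch below builds the JSON shape S3 accepts for lifecycle rules; the prefix and day counts are illustrative choices.

```python
import json

# Lifecycle sketch: age objects into colder tiers, then expire them.
# Prefix and day counts are illustrative, not prescriptive.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

print(json.dumps(lifecycle, indent=2))
```

Note the ordering constraint: each transition must happen later than the one before it, and expiration later than the final transition.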
Task Statement 2: Design cost-optimized compute solutions. Knowledge of: • AWS cost management service features (for example, cost allocation tags, multi-account billing) • AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report) • AWS global infrastructure (for example, Availability Zones, AWS Regions) • AWS purchasing options (for example, Spot Instances, Reserved Instances, Savings Plans) • Distributed compute strategies (for example, edge processing) • Hybrid compute options (for example, AWS Outposts, AWS Snowball Edge) • Instance types, families, and sizes (for example, memory optimized, compute optimized, virtualization) • Optimization of compute utilization (for example, containers, serverless computing, microservices) • Scaling strategies (for example, auto scaling, hibernation)
Skills in: • Determining an appropriate load balancing strategy (for example, Application Load Balancer [Layer 7] compared with Network Load Balancer [Layer 4] compared with Gateway Load Balancer) • Determining appropriate scaling methods and strategies for elastic workloads (for example, horizontal compared with vertical, EC2 hibernation) • Determining cost-effective AWS compute services with appropriate use cases (for example, Lambda, Amazon EC2, Fargate) • Determining the required availability for different classes of workloads (for example, production workloads, non-production workloads) • Selecting the appropriate instance family for a workload • Selecting the appropriate instance size for a workload
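The purchasing-option decisions above reduce to a utilization break-even. The prices below are illustrative assumptions, not current AWS rates; the point is the shape of the comparison.

```python
# Purchasing-option sketch with assumed prices: commitment-based options
# (Reserved Instances, Savings Plans) discount the hourly rate but bill for
# every hour, so they only win for steady workloads.
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float, utilization: float) -> float:
    """utilization = fraction of the month the instance actually runs."""
    return hourly_rate * HOURS_PER_MONTH * utilization

on_demand = 0.10  # $/hour, illustrative
reserved = 0.06   # $/hour effective, illustrative; billed regardless of use

steady = monthly_cost(on_demand, 1.0) > monthly_cost(reserved, 1.0)
spiky = monthly_cost(on_demand, 0.2) < reserved * HOURS_PER_MONTH
print(steady, spiky)  # True True
```

A 24/7 workload favors the commitment; a workload running 20% of the time is cheaper on demand even at the higher hourly rate.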
Task Statement 3: Design cost-optimized database solutions. Knowledge of: • AWS cost management service features (for example, cost allocation tags, multi-account billing) • AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report) • Caching strategies • Data retention policies • Database capacity planning (for example, capacity units) • Database connections and proxies • Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations) • Database replication (for example, read replicas) • Database types and services (for example, relational compared with non-relational, Aurora, DynamoDB)
Skills in: • Designing appropriate backup and retention policies (for example, snapshot frequency) • Determining an appropriate database engine (for example, MySQL compared with PostgreSQL) • Determining cost-effective AWS database services with appropriate use cases (for example, DynamoDB compared with Amazon RDS, serverless) • Determining cost-effective AWS database types (for example, time series format, columnar format) • Migrating database schemas and data to different locations and/or different database engines
Task Statement 4: Design cost-optimized network architectures. Knowledge of: • AWS cost management service features (for example, cost allocation tags, multi-account billing) • AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report) • Load balancing concepts (for example, Application Load Balancer) • NAT gateways (for example, NAT instance costs compared with NAT gateway costs) • Network connectivity (for example, private lines, dedicated lines, VPNs) • Network routing, topology, and peering (for example, AWS Transit Gateway, VPC peering) • Network services with appropriate use cases (for example, DNS)
Skills in: • Configuring appropriate NAT gateway types for a network (for example, a single shared NAT gateway compared with NAT gateways for each Availability Zone) • Configuring appropriate network connections (for example, Direct Connect compared with VPN compared with internet) • Configuring appropriate network routes to minimize network transfer costs (for example, Region to Region, Availability Zone to Availability Zone, private to public, Global Accelerator, VPC endpoints) • Determining strategic needs for content delivery networks (CDNs) and edge caching • Reviewing existing workloads for network optimizations • Selecting an appropriate throttling strategy • Selecting the appropriate bandwidth allocation for a network device (for example, a single VPN compared with multiple VPNs, Direct Connect speed)
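The shared-NAT-versus-per-AZ-NAT trade-off can be sketched numerically. All rates below are assumed placeholders, not current AWS pricing: a single shared gateway saves hourly charges but adds cross-AZ data transfer for traffic originating in the other Availability Zones.

```python
# NAT topology cost sketch. All rates are illustrative assumptions.
NAT_HOURLY = 0.045  # $/hour per gateway (assumed)
NAT_DATA = 0.045    # $/GB processed by the gateway (assumed)
CROSS_AZ = 0.01     # $/GB cross-AZ transfer, each direction (assumed)
HOURS = 730

def shared_nat(azs: int, gb_per_az: float) -> float:
    data = azs * gb_per_az
    # traffic from the other AZs crosses into the NAT's AZ and back
    cross_az = (azs - 1) * gb_per_az * CROSS_AZ * 2
    return NAT_HOURLY * HOURS + data * NAT_DATA + cross_az

def per_az_nat(azs: int, gb_per_az: float) -> float:
    return azs * NAT_HOURLY * HOURS + azs * gb_per_az * NAT_DATA

print(shared_nat(3, 100) < per_az_nat(3, 100))  # True at this traffic level
```

At low traffic the shared gateway wins on hourly charges; as traffic grows, cross-AZ charges erode the saving, and per-AZ gateways also remove a cross-AZ dependency, which matters for resilience as well as cost.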
Which key tools, technologies, and concepts might be covered on the exam? The following is a non-exhaustive list of the tools and technologies that could appear on the exam. This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam. The general tools and technologies in this list appear in no particular order. AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance: • Compute • Cost management • Database • Disaster recovery • High performance • Management and governance • Microservices and component decoupling • Migration and data transfer • Networking, connectivity, and content delivery • Resiliency • Security • Serverless and event-driven design principles • Storage
AWS Services and Features There are lots of new services and feature updates in scope for the new AWS Certified Solutions Architect Associate certification! Here’s a list of some of the new services that will be in scope for the new version of the exam:
Analytics: • Amazon Athena • AWS Data Exchange • AWS Data Pipeline • Amazon EMR • AWS Glue • Amazon Kinesis • AWS Lake Formation • Amazon Managed Streaming for Apache Kafka (Amazon MSK) • Amazon OpenSearch Service (Amazon Elasticsearch Service) • Amazon QuickSight • Amazon Redshift
Management and Governance: • AWS Auto Scaling • AWS CloudFormation • AWS CloudTrail • Amazon CloudWatch • AWS Command Line Interface (AWS CLI) • AWS Compute Optimizer • AWS Config • AWS Control Tower • AWS License Manager • Amazon Managed Grafana • Amazon Managed Service for Prometheus • AWS Management Console • AWS Organizations • AWS Personal Health Dashboard • AWS Proton • AWS Service Catalog • AWS Systems Manager • AWS Trusted Advisor • AWS Well-Architected Tool
Media Services: • Amazon Elastic Transcoder • Amazon Kinesis Video Streams
Migration and Transfer: • AWS Application Discovery Service • AWS Application Migration Service (CloudEndure Migration) • AWS Database Migration Service (AWS DMS) • AWS DataSync • AWS Migration Hub • AWS Server Migration Service (AWS SMS) • AWS Snow Family • AWS Transfer Family
Out-of-scope AWS services and features The following is a non-exhaustive list of AWS services and features that are not covered on the exam. These services and features do not represent every AWS offering that is excluded from the exam content.
AWS solutions architect associate exam prep facts and summaries questions and answers dump – Solution Architecture Definition 1:
Solution architecture is the practice of defining and describing the architecture of a system delivered in the context of a specific solution; as such, it may encompass a description of an entire system or only its specific parts. Definition of a solution architecture is typically led by a solution architect.
AWS solutions architect associate exam prep facts and summaries questions and answers dump – Solution Architecture Definition 2:
The AWS Certified Solutions Architect – Associate examination is intended for individuals who perform a solutions architect role and have one or more years of hands-on experience designing available, cost-efficient, fault-tolerant, and scalable distributed systems on AWS.
If you are running an application in a production environment and must add a new EBS volume with data from a snapshot, what could you do to avoid degraded performance during the volume’s first use? Initialize the data by reading each storage block on the volume. Volumes created from an EBS snapshot must be initialized. Initializing occurs the first time a storage block on the volume is read, and performance can be degraded by up to 50% while this happens. You can avoid this impact in production environments by pre-warming the volume, reading all of its blocks before putting it into service.
If you are running a legacy application that has hard-coded static IP addresses and it is running on an EC2 instance, what is the best failover solution that allows you to keep the same IP address on a new instance? Elastic IP addresses (EIPs) are designed to be attached, detached, and moved from one EC2 instance to another. They are a great solution for keeping a static IP address and moving it to a new instance if the current instance fails, reducing or eliminating any downtime users may experience.
Which feature of Intel processors help to encrypt data without significant impact on performance? AES-NI
You can mount to EFS from which two of the following?
On-prem servers running Linux
EC2 instances running Linux
EFS is not compatible with Windows operating systems.
When a file is encrypted and the stored data is not in transit, it’s known as encryption at rest. What is an example of encryption at rest? Data stored on an encrypted EBS volume, or objects stored in Amazon S3 with server-side encryption (for example, SSE-KMS).
When would vertical scaling be necessary? When an application is built entirely into one source code, otherwise known as a monolithic application.
Fault-Tolerance allows for continuous operation throughout a failure, which can lead to a low Recovery Time Objective. RPO vs RTO
High-Availability means automating tasks so that an instance will quickly recover, which can lead to a low Recovery Time Objective. RPO vs. RTO
Frequent backups reduce the time between the last backup and recovery point, otherwise known as the Recovery Point Objective. RPO vs. RTO
Which represents the difference between Fault-Tolerance and High-Availability? High-Availability means the system will quickly recover from a failure event, and Fault-Tolerance means the system will maintain operations during a failure.
From a security perspective, what is a principal? A principal is an entity that can act on a system: it can be an authenticated user or an anonymous user.
What are the two types of session data saving for an application session state? Stateless and stateful.
It is the customer’s responsibility to patch the operating system on an EC2 instance.
In designing an environment, what four main points should a Solutions Architect keep in mind? Cost efficiency, security, application session state, and undifferentiated heavy lifting: these four main points should frame the design of an environment.
In the context of disaster recovery, what does RPO stand for? RPO is the abbreviation for Recovery Point Objective.
What are the benefits of horizontal scaling?
Vertical scaling can be costly while horizontal scaling is cheaper.
Horizontal scaling suffers from none of the size limitations of vertical scaling.
Having horizontal scaling means you can easily route traffic to another instance of a server.
Top AWS solutions architect associate exam prep facts and summaries questions and answers dump – Quizzes
A company is developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? (Select TWO.)
A. CloudWatch
B. DynamoDB
C. Elastic Load Balancing
D. ElastiCache
E. Storage Gateway
Answer: B and D ( Get the SAA Exam Prep for More: iOS – Android – Windows )
Q1: A Solutions Architect is designing a critical business application with a relational database that runs on an EC2 instance. It requires a single EBS volume that can support up to 16,000 IOPS. Which Amazon EBS volume type can meet the performance requirements of this application?
A. EBS Provisioned IOPS SSD
B. EBS Throughput Optimized HDD
C. EBS General Purpose SSD
D. EBS Cold HDD
Answer: A. EBS Provisioned IOPS SSD provides sustained performance for mission-critical, low-latency workloads. EBS General Purpose SSD can provide bursts of performance up to 3,000 IOPS and has a baseline of 3 IOPS per GiB, reaching its 16,000 IOPS maximum only at volume sizes of 5,334 GiB and larger, so it cannot guarantee sustained IOPS at this level. The two HDD options are lower-cost, high-throughput volumes.
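The gp2 baseline rule is easy to memorize as a formula. Here it is as a sketch (3 IOPS per GiB, floor of 100, ceiling of 16,000; smaller volumes can burst to 3,000 IOPS via I/O credits):

```python
# gp2 baseline IOPS sketch: 3 IOPS per GiB, clamped to [100, 16000].
def gp2_baseline_iops(size_gib: int) -> int:
    return min(max(100, 3 * size_gib), 16_000)

print(gp2_baseline_iops(100))    # 300
print(gp2_baseline_iops(5_334))  # 16000 -- the size where gp2 tops out
```

This is why the exam answer is Provisioned IOPS SSD: only io-family volumes let you dial in a sustained IOPS figure independent of volume size.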
Q2: An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk. Which solution will resolve the security concern?
A. Access the data through an Internet Gateway.
B. Access the data through a VPN connection.
C. Access the data through a NAT Gateway.
D. Access the data through a VPC endpoint for Amazon S3.
Answer: D. VPC endpoints for Amazon S3 provide secure connections to S3 buckets that do not require an internet gateway or NAT devices. NAT gateways and internet gateways still route traffic over the Internet to the public endpoint for Amazon S3. A VPN connection alone does not provide private connectivity to Amazon S3.
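You can go one step further and make the bucket refuse any request that does not arrive through the endpoint, using the aws:sourceVpce condition key. The sketch below builds such a bucket policy; the endpoint ID and bucket name are placeholders.

```python
import json

# Bucket policy sketch: deny all S3 actions unless the request comes
# through a specific VPC endpoint. IDs and names are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:sourceVpce": "vpce-1111111111111111"}
            },
        }
    ]
}

print(json.dumps(policy, indent=2))
```

An explicit Deny with a negated condition is the standard way to pin a bucket to one network path, since Deny always overrides any Allow.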
Q3: An organization is building an Amazon Redshift cluster in their shared services VPC. The cluster will host sensitive data. How can the organization control which networks can access the cluster?
A. Run the cluster in a different VPC and connect through VPC peering.
B. Create a database user inside the Amazon Redshift cluster only for users on the network.
C. Define a cluster security group for the cluster that allows access from the allowed networks.
D. Only allow access to networks that connect with the shared services network via VPN.
Answer: C. A security group can grant access to traffic from the allowed networks via the CIDR range for each network. VPC peering and VPN are connectivity services and cannot control traffic for security. Amazon Redshift user accounts address authentication and authorization at the user level and have no control over network traffic.
Q4: A web application allows customers to upload orders to an S3 bucket. The resulting Amazon S3 events trigger a Lambda function that inserts a message into an SQS queue. A single EC2 instance reads messages from the queue, processes them, and stores them in a DynamoDB table partitioned by unique order ID. Next month, traffic is expected to increase by a factor of 10, and a Solutions Architect is reviewing the architecture for possible scaling problems. Which component is MOST likely to need re-architecting to be able to scale to accommodate the new traffic?
A. Lambda function
B. SQS queue
C. EC2 instance
D. DynamoDB table
Answer: C. A single EC2 instance will not scale and is a single point of failure in the architecture. A much better solution would be to have EC2 instances in an Auto Scaling group across two Availability Zones read messages from the queue. The other options are all managed services that can be configured to scale or that scale automatically.
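Once the consumers are in an Auto Scaling group, AWS's recommended way to size the group is the backlog-per-instance calculation: derive desired capacity from queue depth and the number of messages one instance can work through in the scaling interval. A sketch, with illustrative numbers:

```python
import math

# Backlog-per-instance sketch for scaling SQS consumers.
def consumers_needed(queue_depth: int, msgs_per_instance: int) -> int:
    """msgs_per_instance = messages one instance processes per scaling interval."""
    return max(1, math.ceil(queue_depth / msgs_per_instance))

# 10x traffic: 5,000 queued messages, each instance handles 500 per interval
print(consumers_needed(5_000, 500))  # 10
```

This scales on the actual backlog rather than CPU, so the group tracks demand even when message processing is I/O-bound.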
Q5: An application requires a highly available relational database with an initial storage capacity of 8 TB. The database will grow by 8 GB every day. To support expected traffic, at least eight read replicas will be required to handle database reads. Which option will meet these requirements?
A. DynamoDB
B. Amazon S3
C. Amazon Aurora
D. Amazon Redshift
Answer: C. Amazon Aurora is a relational database that will automatically scale to accommodate data growth and supports up to 15 low-latency read replicas, more than the eight required. Amazon Redshift does not support read replicas and will not automatically scale. DynamoDB is a NoSQL service, not a relational database. Amazon S3 is object storage, not a relational database.
C. Divide your file system into multiple smaller file systems.
D. Provision higher IOPs for your EFS.
Answer: B. Amazon EFS now allows you to instantly provision the throughput required for your applications independent of the amount of data stored in your file system. This allows you to optimize throughput for your application’s performance needs.
Q7: If you are designing an application that requires fast (10–25 Gbps), low-latency connections between EC2 instances, which EC2 feature should you use?
A. Snapshots
B. Instance store volumes
C. Placement groups
D. IOPS provisioned instances.
Answer: C. Placement groups are a clustering of EC2 instances in one Availability Zone with fast (up to 25 Gbps) connections between them. This feature is used for applications that need extremely low-latency connections between instances.
Q8: A Solution Architect is designing an online shopping application running in a VPC on EC2 instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application tier must read and write data to a customer managed database cluster. There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet.
Which VPC design meets these requirements?
A. Public subnets for both the application tier and the database cluster
B. Public subnets for the application tier, and private subnets for the database cluster
C. Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster
D. Public subnets for the application tier, and private subnets for the database cluster and NAT Gateway
Answer: C. The online application must be in public subnets to allow access from clients’ browsers. The database cluster must be in private subnets to meet the requirement that there be no access from the Internet. A NAT Gateway is required to give the database cluster the ability to download patches from the Internet. NAT Gateways must be deployed in public subnets.
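The routing behind answer C can be sketched as a toy model; the subnet names, gateway labels, and CIDR range below are illustrative, not real AWS identifiers:

```python
# Toy model of the routing in answer C. Public subnets send Internet-bound
# traffic to an Internet Gateway ("igw"); the private database subnets send
# it to a NAT Gateway ("nat-gw") that itself sits in a public subnet.
ROUTE_TABLES = {
    "public-app": {"10.0.0.0/16": "local", "0.0.0.0/0": "igw"},
    "private-db": {"10.0.0.0/16": "local", "0.0.0.0/0": "nat-gw"},
}

def egress_next_hop(subnet):
    """Next hop for Internet-bound traffic leaving `subnet`."""
    return ROUTE_TABLES[subnet]["0.0.0.0/0"]

# Database patch downloads go out via the NAT Gateway; because a NAT Gateway
# only allows return traffic for connections initiated from inside, the
# cluster stays unreachable from the Internet.
print(egress_next_hop("private-db"))  # nat-gw
print(egress_next_hop("public-app"))  # igw
```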
Q9: What command should you run on a running instance if you want to view its user data (that is used at launch)?
A. curl http://254.169.254.169/latest/user-data
B. curl http://localhost/latest/meta-data/bootstrap
C. curl http://localhost/latest/user-data
D. curl http://169.254.169.254/latest/user-data
Answer: D. To retrieve user data from within a running instance, use the following URI: http://169.254.169.254/latest/user-data
Q10: A company is developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? (Select TWO.)
A. CloudWatch
B. DynamoDB
C. Elastic Load Balancing
D. ElastiCache
E. Storage Gateway
Answer: B. and D. Both DynamoDB and ElastiCache provide high-performance storage of key-value pairs. CloudWatch and ELB are not storage services. Storage Gateway is a storage service, but it is a hybrid storage service that enables on-premises applications to use cloud storage.
A stateful web service keeps track of the “state” of a client’s connection and data over several requests. For example, the client might log in, select a user’s account data, update their address, attach a photo, and change the status flag, then disconnect.
In a stateless web service, the server doesn’t keep any information from one request to the next. The client needs to do its work in a series of simple transactions, and the client has to keep track of what happens between requests. So in the above example, the client performs each operation separately: connect and update the address, disconnect; connect and attach the photo, disconnect; connect and change the status flag, disconnect.
A stateless web service is much simpler to implement and can handle a greater volume of clients.
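The pattern from Q10 can be sketched in a few lines. A plain in-memory dict stands in for DynamoDB or ElastiCache; the class and method names are this sketch's own, not an AWS API, and in production you would back the store with one of those services:

```python
# Externalized session state for stateless web servers: any server in the
# fleet can handle any request because the client only holds a session ID
# (e.g. in a cookie), and all state lives in a shared key-value store.
import time
import uuid

class SessionStore:
    """Key-value session store keyed by an opaque session ID."""

    def __init__(self, ttl_seconds=3600):
        self._items = {}  # session_id -> (expires_at, state)
        self._ttl = ttl_seconds

    def create(self, state):
        session_id = str(uuid.uuid4())
        self._items[session_id] = (time.time() + self._ttl, dict(state))
        return session_id

    def get(self, session_id):
        entry = self._items.get(session_id)
        if entry is None or entry[0] < time.time():
            return None  # missing or expired, like a DynamoDB TTL expiry
        return dict(entry[1])

    def update(self, session_id, **changes):
        state = self.get(session_id)
        if state is None:
            raise KeyError("session expired or unknown")
        state.update(changes)
        self._items[session_id] = (time.time() + self._ttl, state)

store = SessionStore()
sid = store.create({"user": "alice", "cart": []})
store.update(sid, cart=["item-1"])  # a later request, possibly another server
```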
Q11: From a security perspective, what is a principal?
A. An identity
B. An anonymous user
C. An authenticated user
D. A resource
Answer: B. and C.
An anonymous user falls under the definition of a principal. A principal can be an anonymous user acting on a system. An authenticated user falls under the definition of a principal. A principal can be an authenticated user acting on a system.
Q12: What are the characteristics of a tiered application?
A. All three application layers are on the same instance
B. The presentation tier is on a separate instance from the logic layer
C. None of the tiers can be cloned
D. The logic layer is on a separate instance from the data layer
E. Additional machines can be added to help the application by implementing horizontal scaling
F. Incapable of horizontal scaling
Answer: B. D. and E.
In a tiered application, the presentation layer is separate from the logic layer; the logic layer is separate from the data layer. Since parts of the application are isolated, they can scale horizontally.
Q17: You lead a team to develop a new online game application in AWS EC2. The application will have a large number of users globally. For a great user experience, this application requires very low network latency and jitter. If the network speed is not fast enough, you will lose customers. Which tool would you choose to improve the application performance? (Select TWO.)
A. AWS VPN
B. AWS Global Accelerator
C. Direct Connect
D. API Gateway
E. CloudFront
Answer: B. and E.
Notes: This online game application has global users and needs low latency. Both CloudFront and Global Accelerator can speed up the distribution of content over the AWS global network. AWS Global Accelerator works at the network layer and is able to direct traffic to optimal endpoints. CloudFront delivers content through edge locations, and users are routed to the edge location with the lowest latency.
Q18: A company has a media processing application deployed in a local data center. Its file storage is built on a Microsoft Windows file server. The application and file server need to be migrated to AWS. You want to quickly set up the file server in AWS and the application code should continue working to access the file systems. Which method should you choose to create the file server?
A. Create a Windows File Server from Amazon WorkSpaces.
B. Configure a high performance Windows File System in Amazon EFS.
C. Create a Windows File Server in Amazon FSx.
D. Configure a secure enterprise storage through Amazon WorkDocs.
Answer: C
Notes: In this question, a Windows file server is required in AWS and the application should continue to work unchanged. Amazon FSx for Windows File Server is the correct answer as it is backed by a fully native Windows file system.
Q19: You are developing an application using AWS SDK to get objects from AWS S3. The objects have big sizes and sometimes there are failures when getting objects especially when the network connectivity is poor. You want to get a specific range of bytes in a single GET request and retrieve the whole object in parts. Which method can achieve this?
A. Enable multipart upload in the AWS SDK.
B. Use the “Range” HTTP header in a GET request to download the specified range bytes of an object.
C. Reduce the retry requests and enlarge the retry timeouts through AWS SDK when fetching S3 objects.
D. Retrieve the whole S3 object through a single GET operation.
Answer: B.
Notes: Through the “Range” header in the HTTP GET request, a specified portion of an object can be downloaded instead of the whole object. With byte-range fetches you can also establish concurrent connections to Amazon S3 to fetch different parts from within the same object, and retry only a failed part when network connectivity is poor.
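As a rough sketch of how a client might plan byte-range fetches, the helper below splits an object into Range header values; the helper itself is illustrative, but the `bytes=start-end` format (with inclusive ends) is the standard HTTP range format that the S3 Range header accepts:

```python
# Split an S3 object into byte ranges for parallel or resumable GETs.
def byte_ranges(object_size, part_size):
    """Return Range header values covering an object of `object_size` bytes."""
    ranges = []
    for start in range(0, object_size, part_size):
        end = min(start + part_size, object_size) - 1  # inclusive end offset
        ranges.append(f"bytes={start}-{end}")
    return ranges

# Each range can then be fetched independently, e.g. with the SDK (assumed):
#   s3.get_object(Bucket=bucket, Key=key, Range="bytes=0-8388607")
print(byte_ranges(20, 8))  # ['bytes=0-7', 'bytes=8-15', 'bytes=16-19']
```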
Q20: You have an application hosted in an Auto Scaling group and an application load balancer distributes traffic to the ASG. You want to add a scaling policy that keeps the average aggregate CPU utilization of the Auto Scaling group to be 60 percent. The capacity of the Auto Scaling group should increase or decrease based on this target value. Which scaling policy does it belong to?
A. Target tracking scaling policy.
B. Step scaling policy.
C. Simple scaling policy.
D. Scheduled scaling policy.
Answer: A
Notes: A target tracking scaling policy can be applied against the ASGAverageCPUUtilization metric. In an Auto Scaling group, you add a target tracking scaling policy with a target value, and the group scales out or in to keep the metric at that target.
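A simplified model of what a target tracking policy computes: capacity is scaled roughly proportionally so the tracked metric lands back on the target (60 percent average CPU in the question). The real service also applies alarms, cooldowns, and instance warmup, so treat this as an approximation:

```python
# Proportional capacity model behind target tracking scaling.
import math

def desired_capacity(current_capacity, current_metric, target_metric):
    """Capacity needed to bring the average metric back to the target."""
    return math.ceil(current_capacity * current_metric / target_metric)

# 4 instances running at 90% average CPU, targeting 60%:
print(desired_capacity(4, 90.0, 60.0))  # 6 instances (scale out)
# Load drops to 30% average CPU across those 6 instances:
print(desired_capacity(6, 30.0, 60.0))  # 3 instances (scale in)
```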
Q21: You need to launch a number of EC2 instances to run Cassandra. There are large distributed and replicated workloads in Cassandra and you plan to launch instances using EC2 placement groups. The traffic should be distributed evenly across several partitions and each partition should contain multiple instances. Which strategy would you use when launching the placement groups?
A. Cluster placement strategy
B. Spread placement strategy.
C. Partition placement strategy.
D. Network placement strategy.
Answer: C.
Notes: Placement groups offer the placement strategies Cluster, Partition, and Spread. With the Partition placement strategy, instances in one partition do not share the underlying hardware with other partitions. This strategy is suitable for large distributed and replicated workloads such as Cassandra. For details, see the placement groups documentation.
Q22: To improve the network performance, you launch a C5 EC2 Amazon Linux instance and enable enhanced networking by modifying the instance attribute with “aws ec2 modify-instance-attribute –instance-id instance_id –ena-support”. Which mechanism does the EC2 instance use to enhance the networking capabilities?
A. Intel 82599 Virtual Function (VF) interface.
B. Elastic Fabric Adapter (EFA).
C. Elastic Network Adapter (ENA).
D. Elastic Network Interface (ENI).
Answer: C
Notes: Enhanced networking has two mechanisms: the Elastic Network Adapter (ENA) and the Intel 82599 Virtual Function (VF) interface. For ENA, you enable it with the –ena-support attribute, as shown in the question.
Q23: You work for an online retailer where any downtime at all can cause a significant loss of revenue. You have architected your application to be deployed on an Auto Scaling Group of EC2 instances behind a load balancer. You have configured and deployed these resources using a CloudFormation template. The Auto Scaling Group is configured with default settings and a simple CPU utilization scaling policy. You have also set up multiple Availability Zones for high availability. The load balancer does health checks against an HTML file generated by a script. When you begin performing load testing on your application, you notice in CloudWatch that the load balancer is not sending traffic to one of your EC2 instances. What could be the problem?
A. The EC2 instance has failed the load balancer health check.
B. The instance has not been registered with CloudWatch.
C. The EC2 instance has failed EC2 status checks.
D. You are load testing at a moderate traffic level and not all instances are needed.
Answer: A.
Notes: The load balancer routes incoming requests only to healthy instances. The EC2 instance may have passed its status checks and be considered healthy by the Auto Scaling Group, but the ELB will not send it traffic until the ELB health check has been met. The ELB health check has a default of 30 seconds between checks, and a default of 3 checks before making a decision; therefore the instance could be running but unused for at least 90 seconds. In CloudWatch, where the issue was noticed, this appears as a healthy EC2 instance receiving no traffic, which is exactly what was observed.
References: ELB HealthCheck
Q24: Your company is using a hybrid configuration because there are some legacy applications which are not easily converted and migrated to AWS. And with this configuration comes a typical scenario where the legacy apps must maintain the same private IP address and MAC address. You are attempting to convert the application to the cloud and have configured an EC2 instance to house the application. What you are currently testing is removing the ENI from the legacy instance and attaching it to the EC2 instance. You want to attempt a cold attach. What does this mean?
A. Attach ENI when it’s stopped.
B. Attach ENI before the public IP address is assigned.
C. Attach ENI to an instance when it’s running.
D. Attach ENI when the instance is being launched.
Answer: D.
Notes: Best practices for configuring network interfaces:
You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach).
You can detach secondary network interfaces when the instance is running or stopped, but you can’t detach the primary network interface.
You can move a network interface from one instance to another if the instances are in the same Availability Zone and VPC but in different subnets.
When launching an instance using the CLI, API, or an SDK, you can specify the primary network interface and additional network interfaces.
Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance.
A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly. Instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves.
Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance.
If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing. If possible, use a secondary private IPv4 address on the primary network interface instead. For more information, see Assigning a secondary private IPv4 address.
Q25: Your company has recently converted to a hybrid cloud environment and will slowly be migrating to a fully AWS cloud environment. The AWS side needs some preparation for disaster recovery: a disaster recovery plan needs to be drawn up, and disaster recovery drills need to be performed for compliance reasons. The company wants to establish Recovery Time and Recovery Point Objectives, and the RTO and RPO can be pretty relaxed. The main point is to have a plan in place, with as much cost savings as possible. Which AWS disaster recovery pattern will best meet these requirements?
A. Warm Standby
B. Backup and restore
C. Multi Site
D. Pilot Light
Answer: B
Notes: Backup and Restore: This is the least expensive option and cost is the overriding factor.
Q26: An international travel company has an application which provides travel information and alerts to users all over the world. The application is hosted on groups of EC2 instances in Auto Scaling Groups in multiple AWS Regions. There are also load balancers routing traffic to these instances. In two countries, Ireland and Australia, there are compliance rules in place that dictate users connect to the application in eu-west-1 and ap-southeast-1. Which service can you use to meet this requirement?
A. Use Route 53 weighted routing.
B. Use Route 53 geolocation routing.
C. Configure CloudFront and the users will be routed to the nearest edge location.
D. Configure the load balancers to route users to the proper region.
Answer: B.
Notes: Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from. For example, you might want all queries from Europe to be routed to an ELB in the Frankfurt region. When you use geolocation routing, you can localize your content and present some or all of your website in the language of your users. You can also use geolocation routing to restrict distribution of content to only the locations in which you have distribution rights. Another possible use is for balancing load across endpoints in a predictable, easy-to-manage way, so that each user location is consistently routed to the same endpoint.
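A toy model of the geolocation decision follows; the record names and hostnames are made up for illustration, and the lookup-with-default structure mirrors how Route 53 falls back to a default record when no geolocation record matches:

```python
# Toy model of Route 53 geolocation routing: the DNS answer is chosen by
# the country a query originates from, with a default record as a fallback.
GEO_RECORDS = {
    "IE": "alb.eu-west-1.example.com",       # Ireland -> eu-west-1
    "AU": "alb.ap-southeast-1.example.com",  # Australia -> ap-southeast-1
}
DEFAULT_RECORD = "alb.us-east-1.example.com"

def resolve(country_code):
    """Return the endpoint a resolver in `country_code` receives."""
    return GEO_RECORDS.get(country_code, DEFAULT_RECORD)

print(resolve("IE"))  # alb.eu-west-1.example.com
print(resolve("DE"))  # alb.us-east-1.example.com (default record)
```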
Q26: You have taken over management of several instances in the company AWS environment. You want to quickly review scripts used to bootstrap the instances at runtime. A URL command can be used to do this. What can you append to the URL http://169.254.169.254/latest/ to retrieve this data?
A. user-data/
B. instance-demographic-data/
C. meta-data/
D. instance-data/
Answer: A
Notes: When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives.
Q27: A software company has created an application to capture service requests from users and also enhancement requests. The application is deployed on an Auto Scaling group of EC2 instances fronted by an Application Load Balancer. The Auto Scaling group has scaled to maximum capacity, but there are still requests being lost. The cost of these instances is becoming an issue. What step can the company take to ensure requests aren’t lost?
A. Use larger instances in the Auto Scaling group.
B. Use spot instances to save money.
C. Use an SQS queue with the Auto Scaling group to capture all requests.
D. Use a Network Load Balancer instead for faster throughput.
Answer: C.
Notes: There are some scenarios where you might think about scaling in response to activity in an Amazon SQS queue. For example, suppose that you have a web app that lets users upload images and use them online. In this scenario, each image requires resizing and encoding before it can be published. The app runs on EC2 instances in an Auto Scaling group, and it’s configured to handle your typical upload rates. Unhealthy instances are terminated and replaced to maintain current instance levels at all times. The app places the raw bitmap data of the images in an SQS queue for processing. It processes the images and then publishes the processed images where they can be viewed by users. The architecture for this scenario works well if the number of image uploads doesn’t vary over time. But if the number of uploads changes over time, you might consider using dynamic scaling to scale the capacity of your Auto Scaling group.
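AWS's documented pattern for SQS-based scaling tracks a "backlog per instance" metric (visible messages divided by running instances) against an acceptable backlog derived from per-message processing time and your latency target. A minimal sketch of that arithmetic, with assumed numbers:

```python
# Backlog-per-instance sizing for scaling an Auto Scaling group on SQS depth.
import math

def required_instances(queue_depth, seconds_per_message, target_latency_s):
    """Instances needed to drain `queue_depth` messages within the target."""
    # How many messages one instance can absorb and still meet the target:
    acceptable_backlog_per_instance = target_latency_s / seconds_per_message
    return math.ceil(queue_depth / acceptable_backlog_per_instance)

# 1,000 visible messages, 0.1 s processing each, 10 s latency target:
print(required_instances(1000, 0.1, 10))  # 10 instances
```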
Q28: A company has an auto scaling group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. The company has a very aggressive Recovery Time Objective (RTO) in case of disaster. How long will a failover typically complete?
A. Under 10 minutes
B. Within an hour
C. Almost instantly
D. one to two minutes
Answer: D
Notes: What happens during Multi-AZ failover and how long does it take? Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention. When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary. We encourage you to follow best practices and implement database connection retry at the application layer. Failovers, as defined by the interval between the detection of the failure on the primary and the resumption of transactions on the standby, typically complete within one to two minutes. Failover time can also be affected by whether large uncommitted transactions must be recovered; the use of adequately large instance types is recommended with Multi-AZ for best results. AWS also recommends the use of Provisioned IOPS with Multi-AZ instances for fast, predictable, and consistent throughput performance.
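The connection-retry advice in the notes above can be sketched as follows; `connect` is a placeholder for your real database driver's connect call, and the attempt count and backoff values are arbitrary choices for illustration:

```python
# Retry-with-backoff at the application layer so the app rides out the
# one-to-two-minute CNAME flip during an RDS Multi-AZ failover.
import time

def connect_with_retry(connect, attempts=5, base_delay=0.5, sleep=time.sleep):
    """Retry `connect` with exponential backoff; re-raise after `attempts` failures."""
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 0.5 s, 1 s, 2 s, ...

# Simulate a failover window: the first two connection attempts fail.
outcomes = iter([ConnectionError(), ConnectionError(), "connected"])

def fake_connect():
    value = next(outcomes)
    if isinstance(value, Exception):
        raise value
    return value

result = connect_with_retry(fake_connect, sleep=lambda s: None)
print(result)  # connected
```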
Q29: You have two EC2 instances running in the same VPC, but in different subnets. You are removing the secondary ENI from an EC2 instance and attaching it to another EC2 instance. You want this to be fast and with limited disruption. So you want to attach the ENI to the EC2 instance when it’s running. What is this called?
Notes: This is a hot attach. You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach). The full list of best practices for configuring network interfaces is given in the notes to Q24 above.
Q30: You suspect that one of the AWS services your company is using has gone down. How can you check on the status of this service?
A. AWS Trusted Advisor
B. Amazon Inspector
C. AWS Personal Health Dashboard
D. AWS Organizations
Answer: C
Notes: AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view of the performance and availability of the AWS services underlying your AWS resources. The dashboard displays relevant and timely information to help you manage events in progress, and provides proactive notification to help you plan for scheduled activities. With Personal Health Dashboard, alerts are triggered by changes in the health of AWS resources, giving you event visibility and guidance to help quickly diagnose and resolve issues.
Q31: You have configured an Auto Scaling Group of EC2 instances fronted by an Application Load Balancer and backed by an RDS database. You want to begin monitoring the EC2 instances using CloudWatch metrics. Which metric is not readily available out of the box?
Notes: Memory utilization is not available as an out of the box metric in CloudWatch. You can, however, collect memory metrics when you configure a custom metric for CloudWatch.
Types of custom metrics that you can set up include: memory utilization, disk swap utilization, disk space utilization, page file utilization, and log collection.
Q32: Several instances you are creating have a specific data requirement. The requirement states that the data on the root device needs to persist independently from the lifetime of the instance. After considering AWS storage options, which is the simplest way to meet these requirements?
A. Store your root device data on Amazon EBS.
B. Store the data on the local instance store.
C. Create a cron job to migrate the data to S3.
D. Send the data to S3 using S3 lifecycle rules.
Answer: A
Notes: By using Amazon EBS, data on the root device will persist independently from the lifetime of the instance. This enables you to stop and restart the instance at a subsequent time, which is similar to shutting down your laptop and restarting it when you need it again.
Q33: A company has an Auto Scaling Group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. What will happen to preserve high availability if the primary database fails?
A. A Lambda function kicks off a CloudFormation template to deploy a backup database.
B. The CNAME is switched from the primary db instance to the secondary.
C. Route 53 points the CNAME to the secondary database instance.
D. The Elastic IP address for the primary database is moved to the secondary database.
Answer: B.
Notes: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention. When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary.
Q34: After several issues with your application and unplanned downtime, your recommendation to migrate your application to AWS is approved. You have set up high availability on the front end with a load balancer and an Auto Scaling Group. What step can you take with your database to configure high-availability and ensure minimal downtime (under five minutes)?
A. Create a read replica.
B. Enable Multi-AZ failover on the database.
C. Take frequent snapshots of your database.
D. Create your database using CloudFormation and save the template for reuse.
Answer: B
Notes: In the event of a planned or unplanned outage of your DB instance, Amazon RDS automatically switches to a standby replica in another Availability Zone if you have enabled Multi-AZ. The time it takes for the failover to complete depends on the database activity and other conditions at the time the primary DB instance became unavailable. Failover times are typically 60–120 seconds; however, large transactions or a lengthy recovery process can increase failover time. When the failover is complete, it can take additional time for the RDS console to reflect the new Availability Zone. Large transactions could make it hard to get back up within five minutes, but this is clearly the best of the available choices for meeting the requirement. Move through exam questions quickly, but always evaluate all the answers for the best possible solution.
Q35: A new startup is considering the advantages of using DynamoDB versus a traditional relational database in AWS RDS. The NoSQL nature of DynamoDB presents a small learning curve to the team members who all have experience with traditional databases. The company will have multiple databases, and the decision will be made on a case-by-case basis. Which of the following use cases would favour DynamoDB? Select two.
Notes: DynamoDB is a NoSQL database that supports key-value and document data structures. A key-value store is a database service that provides support for storing, querying, and updating collections of objects that are identified using a key and values that contain the actual content being stored. Meanwhile, a document data store provides support for storing, querying, and updating items in a document format such as JSON, XML, and HTML. DynamoDB’s fast and predictable performance characteristics make it a great match for handling session data. Plus, since it’s a fully-managed NoSQL database service, you avoid all the work of maintaining and operating a separate session store.
Storing metadata for Amazon S3 objects is also a good fit because Amazon DynamoDB stores structured data indexed by primary key and allows low-latency read and write access to items ranging from 1 byte up to 400 KB. Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB. To optimize your costs across AWS services, large objects or infrequently accessed data sets should be stored in Amazon S3, while smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in Amazon DynamoDB.
Q36: You have been tasked with designing a strategy for backing up EBS volumes attached to an instance-store-backed EC2 instance. You have been asked for an executive summary of your design, and the summary should include an answer to the question, “What can an EBS volume do while a snapshot of the volume is in progress?”
A. The volume can be used normally while the snapshot is in progress.
B. The volume can only accommodate writes while a snapshot is in progress.
C. The volume can not be used while a snapshot is in progress.
D. The volume can only accommodate reads while a snapshot is in progress.
Answer: A
Notes: You can create a point-in-time snapshot of an EBS volume and use it as a baseline for new volumes or for data backup. If you make periodic snapshots of a volume, the snapshots are incremental; the new snapshot saves only the blocks that have changed since your last snapshot. Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume.
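A toy model of the incremental behavior described above: each snapshot after the first stores only the blocks that changed since the previous snapshot. The block IDs and version numbers are invented for illustration; real EBS tracks changed blocks internally:

```python
# Toy model of incremental EBS snapshots.
def snapshot_chain(block_versions_over_time):
    """Given successive {block_id: version} maps of a volume,
    return the set of blocks each snapshot must store."""
    stored = []
    previous = {}
    for volume_state in block_versions_over_time:
        changed = {b for b, v in volume_state.items() if previous.get(b) != v}
        stored.append(changed)
        previous = dict(volume_state)
    return stored

# A 4-block volume: the first snapshot is full, then only block "b2" changes.
states = [
    {"b0": 1, "b1": 1, "b2": 1, "b3": 1},
    {"b0": 1, "b1": 1, "b2": 2, "b3": 1},
]
chain = snapshot_chain(states)
print(len(chain[0]), chain[1])  # 4 {'b2'}
```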
Q37: You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling Groups that you need to create. One requirement is that you need to reuse some software licenses and therefore need to use dedicated hosts on EC2 instances in your Auto Scaling Groups. What step must you take to meet this requirement?
A. Create your launch configuration, but manually change the instances to Dedicated Hosts in the EC2 console.
B. Use a launch template with your Auto Scaling Group.
C. Create the Dedicated Host EC2 instances, then add them to an existing Auto Scaling Group.
D. Make sure your launch configurations are using Dedicated Hosts.
Answer: B
Notes: In addition to the features of Amazon EC2 Auto Scaling that you can configure by using launch templates, launch templates provide more advanced Amazon EC2 configuration options. For example, you must use launch templates to use Amazon EC2 Dedicated Hosts. Dedicated Hosts are physical servers with EC2 instance capacity that are dedicated to your use. While Amazon EC2 Dedicated Instances also run on dedicated hardware, the advantage of using Dedicated Hosts over Dedicated Instances is that you can bring eligible software licenses from external vendors and use them on EC2 instances. If you currently use launch configurations, you can specify a launch template when you update an Auto Scaling group that was created using a launch configuration. To create a launch template to use with an Auto Scaling Group, create the template from scratch, create a new version of an existing template, or copy the parameters from a launch configuration, running instance, or other template.
Q38: Your organization uses AWS CodeDeploy for deployments. Now you are starting a project on the AWS Lambda platform. For your deployments, you’ve been given a requirement of performing blue-green deployments. When you perform deployments, you want to split traffic, sending a small percentage of the traffic to the new version of your application. Which deployment configuration will allow this splitting of traffic?
Notes: With canary, traffic is shifted in two increments. You can choose from predefined canary options that specify the percentage of traffic shifted to your updated Lambda function version in the first increment and the interval, in minutes, before the remaining traffic is shifted in the second increment.
Q39: A financial institution has an application that produces huge amounts of actuary data, which is ultimately expected to be in the terabyte range. There is a need to run complex analytic queries against terabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Which storage service will best meet this requirement?
A. RDS
B. DynamoDB
C. Redshift
D. ElastiCache
Answer: C
Notes: Amazon Redshift is a fast, fully-managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It enables you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds. With Redshift, you can start small for just $0.25 per hour with no commitments and scale-out to petabytes of data for $1,000 per terabyte per year, less than a tenth of the cost of traditional on-premises solutions. Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to run SQL queries directly against exabytes of unstructured data in Amazon S3 data lakes. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Amazon Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, Sequence, Text, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data retrieved, so queries against Amazon S3 run fast, regardless of data set size.
Q40: A company has an application for sharing static content, such as photos. The popularity of the application has grown, and the company is now sharing content worldwide. This worldwide service has caused some issues with latency. What AWS services can be used to host a static website, serve content to globally dispersed users, and address latency issues, while keeping cost under control? Choose two.
Notes: Amazon S3 is an object storage built to store and retrieve any amount of data from anywhere on the Internet. It’s a simple storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs. AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2, or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.
Q41: You have just been hired by a large organization which uses many different AWS services in their environment. Some of the services which handle data include: RDS, Redshift, ElastiCache, DynamoDB, S3, and Glacier. You have been instructed to configure a web application using stateless web servers. Which services can you use to handle session state data? Choose two.
Q42: After an IT Steering Committee meeting you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies based on the requirements you are given. Your primary requirement is the necessity for a private, dedicated connection, which bypasses the Internet and can provide throughput of 10 Gbps. Which option will you select?
A. AWS Direct Connect
B. VPC Peering
C. AWS VPN
D. AWS Direct Gateway
Answer: A
Notes: AWS Direct Connect can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections. It uses industry-standard 802.1q VLANs to connect to Amazon VPC using private IP addresses. You can choose from an ecosystem of WAN service providers for integrating your AWS Direct Connect endpoint in an AWS Direct Connect location with your remote networks. AWS Direct Connect lets you establish 1 Gbps or 10 Gbps dedicated network connections (or multiple connections) between AWS networks and one of the AWS Direct Connect locations. You can also work with your provider to create sub-1G connection or use link aggregation group (LAG) to aggregate multiple 1 gigabit or 10 gigabit connections at a single AWS Direct Connect endpoint, allowing you to treat them as a single, managed connection. A Direct Connect gateway is a globally available resource to enable connections to multiple Amazon VPCs across different regions or AWS accounts.
Q43: An application is hosted on an EC2 instance in a VPC. The instance is in a subnet in the VPC, and the instance has a public IP address. There is also an internet gateway and a security group with the proper ingress configured. But your testers are unable to access the instance from the Internet. What could be the problem?
A. Make sure the instance has a private IP address.
B. Add a route to the route table, from the subnet containing the instance, to the Internet Gateway.
C. A NAT gateway needs to be configured.
D. A Virtual private gateway needs to be configured.
The question doesn’t state if the subnet containing the instance is public or private. An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet. To enable access to or from the internet for instances in a subnet in a VPC, you must do the following:
Attach an internet gateway to your VPC.
Add a route to your subnet’s route table that directs internet-bound traffic to the internet gateway. If a subnet is associated with a route table that has a route to an internet gateway, it’s known as a public subnet. If a subnet is associated with a route table that does not have a route to an internet gateway, it’s known as a private subnet.
Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance.
In your subnet route table, you can specify a route for the internet gateway to all destinations not explicitly known to the route table (0.0.0.0/0 for IPv4 or ::/0 for IPv6). Alternatively, you can scope the route to a narrower range of IP addresses. For example, the public IPv4 addresses of your company’s public endpoints outside of AWS, or the elastic IP addresses of other Amazon EC2 instances outside your VPC. To enable communication over the Internet for IPv4, your instance must have a public IPv4 address or an Elastic IP address that’s associated with a private IPv4 address on your instance. Your instance is only aware of the private (internal) IP address space defined within the VPC and subnet. The internet gateway logically provides the one-to-one NAT on behalf of your instance so that when traffic leaves your VPC subnet and goes to the Internet, the reply address field is set to the public IPv4 address or elastic IP address of your instance and not its private IP address. Conversely, traffic that’s destined for the public IPv4 address or elastic IP address of your instance has its destination address translated into the instance’s private IPv4 address before the traffic is delivered to the VPC. To enable communication over the Internet for IPv6, your VPC and subnet must have an associated IPv6 CIDR block, and your instance must be assigned an IPv6 address from the range of the subnet. IPv6 addresses are globally unique, and therefore public by default.
Q44: A data company has implemented a subscription service for storing video files. There are two levels of subscription: personal and professional use. The personal users can upload a total of 5 GB of data, and professional users can upload as much as 5 TB of data. The application can upload files of size up to 1 TB to an S3 Bucket. What is the best way to upload files of this size?
A. Multipart upload
B. Single-part Upload
C. AWS Snowball
D. AWS SnowMobile
Answers: A
Notes: The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object (see Operations on Objects). Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket. You can list all of your in-progress multipart uploads or get a list of the parts that you have uploaded for a specific multipart upload. Each of these operations is explained in this section.
Q45: You have multiple EC2 instances housing applications in a VPC in a single Availability Zone. The applications need to communicate at extremely high throughputs to avoid latency for end users. The average throughput needs to be 6 Gbps. What’s the best measure you can do to ensure this throughput?
Notes: Amazon Web Services’ (AWS) solution to reducing latency between instances involves the use of placement groups. As the name implies, a placement group is just that — a group. AWS instances that exist within a common availability zone can be grouped into a placement group. Group members are able to communicate with one another in a way that provides low latency and high throughput. A cluster placement group is a logical grouping of instances within a single Availability Zone. A cluster placement group can span peered VPCs in the same Region. Instances in the same cluster placement group enjoy a higher per-flow throughput limit of up to 10 Gbps for TCP/IP traffic and are placed in the same high-bisection bandwidth segment of the network.
Q46: A team member has been tasked to configure four EC2 instances for four separate applications. These are not high-traffic apps, so there is no need for an Auto Scaling Group. The instances are all in the same public subnet and each instance has an EIP address, and all of the instances have the same Security Group. But none of the instances can send or receive internet traffic. You verify that all the instances have a public IP address. You also verify that an internet gateway has been configured. What is the most likely issue?
A. There is no route in the route table to the internet gateway (or it has been deleted).
B. Each instance needs its own security group.
C. The route table is corrupt.
D. You are using the default nacl.
Answers: A
Notes: The question details all of the configuration needed for internet access, except for a route to the IGW in the route table. This is definitely a key step in any checklist for internet connectivity. It is quite possible to have a subnet with the ‘Public’ attribute set but no route to the Internet in the assigned Route table. (test it yourself). This may have been a setup error, or someone may have thoughtlessly altered the shared Route table for a special case instead of creating a new Route table for the special case.
Q47: You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. The application to be deployed on these instances is a life insurance application which requires path-based and host-based routing. Which type of load balancer will you need to use?
A. Any type of load balancer will meet these requirements.
B. Classic Load Balancer
C. Network Load Balancer
D. Application Load Balancer
Answers: D
Notes: Only the Application Load Balancer can support path-based and host-based routing. Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:
Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
Support for host-based routing. You can configure rules for your listener that forward requests based on the host field in the HTTP header. This enables you to route requests to multiple domains using a single load balancer.
Support for routing based on fields in the request, such as standard and custom HTTP headers and methods, query parameters, and source IP addresses.
Support for routing requests to multiple applications on a single EC2 instance. You can register each instance or IP address with the same target group using multiple ports.
Support for redirecting requests from one URL to another.
Support for returning a custom HTTP response.
Support for registering targets by IP address, including targets outside the VPC for the load balancer.
Support for registering Lambda functions as targets.
Support for the load balancer to authenticate users of your applications through their corporate or social identities before routing requests.
Support for containerized applications. Amazon Elastic Container Service (Amazon ECS) can select an unused port when scheduling a task and register the task with a target group using this port. This enables you to make efficient use of your clusters.
Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level. Attaching a target group to an Auto Scaling group enables you to scale each service dynamically based on demand.
Access logs contain additional information and are stored in compressed format.
Q48: You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. You were considering using an Application Load Balancer, but some of the requirements you have been given seem to point to a Classic Load Balancer. Which requirement would be better served by an Application Load Balancer?
A. Support for EC2-Classic
B. Path-based routing
C. Support for sticky sessions using application-generated cookies
D. Support for TCP and SSL listeners
Answers: B
Notes:
Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:
Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
Q49: You have been tasked to review your company disaster recovery plan due to some new requirements. The driving factor is that the Recovery Time Objective has become very aggressive. Because of this, it has been decided to configure Multi-AZ deployments for the RDS MySQL databases. Unrelated to DR, it has been determined that some read traffic needs to be offloaded from the master database. What step can be taken to meet this requirement?
A. Convert to Aurora to allow the standby to serve read traffic.
B. Redirect some of the read traffic to the standby database.
C. Add DAX to the solution to alleviate excess read traffic.
D. Add read replicas to offload some read traffic.
Notes: Amazon RDS Read Replicas for MySQL and MariaDB now support Multi-AZ deployments. Combining Read Replicas with Multi-AZ enables you to build a resilient disaster recovery strategy and simplify your database engine upgrade process. Amazon RDS Read Replicas enable you to create one or more read-only copies of your database instance within the same AWS Region or in a different AWS Region. Updates made to the source database are then asynchronously copied to your Read Replicas. In addition to providing scalability for read-heavy workloads, Read Replicas can be promoted to become a standalone database instance when needed.
Q50: A gaming company is designing several new games which focus heavily on player-game interaction. The player makes a certain move and the game has to react very quickly to change the environment based on that move and to present the next decision for the player in real-time. A tool is needed to continuously collect data about player-game interactions and feed the data into the gaming platform in real-time. Which AWS service can best meet this need?
A. AWS Lambda
B. Kinesis Data Streams
C. Kinesis Data Analytics
D. AWS IoT
Answers: B
Notes: Kinesis Data Streams can be used to continuously collect data about player-game interactions and feed the data into your gaming platform. With Kinesis Data Streams, you can design a game that provides engaging and dynamic experiences based on players’ actions and behaviors.
Q51: You are designing an architecture for a financial company which provides a day trading application to customers. After viewing the traffic patterns for the existing application you notice that traffic is fairly steady throughout the day, with the exception of large spikes at the opening of the market in the morning and at closing around 3 pm. Your architecture will include an Auto Scaling Group of EC2 instances. How can you configure the Auto Scaling Group to ensure that system performance meets the increased demands at opening and closing of the market?
A. Configure a Dynamic Scaling Policy to scale based on CPU Utilization.
B. Use a load balancer to ensure that the load is distributed evenly during high-traffic periods.
C. Configure your Auto Scaling Group to have a desired size which will be able to meet the demands of the high-traffic periods.
D. Use a predictive scaling policy on the Auto Scaling Group to meet opening and closing spikes.
Notes: Use a predictive scaling policy on the Auto Scaling Group to meet opening and closing spikes: Using data collected from your actual EC2 usage and further informed by billions of data points drawn from our own observations, we use well-trained Machine Learning models to predict your expected traffic (and EC2 usage) including daily and weekly patterns. The model needs at least one day’s of historical data to start making predictions; it is re-evaluated every 24 hours to create a forecast for the next 48 hours. What we can gather from the question is that the spikes at the beginning and end of day can potentially affect performance. Sure, we can use dynamic scaling, but remember, scaling up takes a little bit of time. We have the information to be proactive, use predictive scaling, and be ready for these spikes at opening and closing.
Q52: A software gaming company has produced an online racing game which uses CloudFront for fast delivery to worldwide users. The game also uses DynamoDB for storing in-game and historical user data. The DynamoDB table has a preconfigured read and write capacity. Users have been reporting slow down issues, and an analysis has revealed that the DynamoDB table has begun throttling during peak traffic times. Which step can you take to improve game performance?
A. Add a load balancer in front of the web servers.
B. Add ElastiCache to cache frequently accessed data in memory.
C. Add an SQS Queue to queue requests which could be lost.
D. Make sure DynamoDB Auto Scaling is turned on.
Answers: D
Notes: Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic, without throttling. When the workload decreases, Application Auto Scaling decreases the throughput so that you don’t pay for unused provisioned capacity. Note that if you use the AWS Management Console to create a table or a global secondary index, DynamoDB auto scaling is enabled by default. You can modify your auto scaling settings at any time.
Q53: You have configured an Auto Scaling Group of EC2 instances. You have begun testing the scaling of the Auto Scaling Group using a stress tool to force the CPU utilization metric being used to force scale out actions. The stress tool is also being manipulated by removing stress to force a scale in. But you notice that these actions are only taking place in five-minute intervals. What is happening?
A. Auto Scaling Groups can only scale in intervals of five minutes or greater.
B. The Auto Scaling Group is following the default cooldown procedure.
C. A load balancer is managing the load and limiting the effectiveness of stressing the servers.
D. The stress tool is configured to run for five minutes.
Notes: The cooldown period helps you prevent your Auto Scaling group from launching or terminating additional instances before the effects of previous activities are visible. You can configure the length of time based on your instance startup time or other application needs. When you use simple scaling, after the Auto Scaling group scales using a simple scaling policy, it waits for a cooldown period to complete before any further scaling activities due to simple scaling policies can start. An adequate cooldown period helps to prevent the initiation of an additional scaling activity based on stale metrics. By default, all simple scaling policies use the default cooldown period associated with your Auto Scaling Group, but you can configure a different cooldown period for certain policies, as described in the following sections. Note that Amazon EC2 Auto Scaling honors cooldown periods when using simple scaling policies, but not when using other scaling policies or scheduled scaling. A default cooldown period automatically applies to any scaling activities for simple scaling policies, and you can optionally request to have it apply to your manual scaling activities. When you use the AWS Management Console to update an Auto Scaling Group, or when you use the AWS CLI or an AWS SDK to create or update an Auto Scaling Group, you can set the optional default cooldown parameter. If a value for the default cooldown period is not provided, its default value is 300 seconds.
Q54: A team of architects is designing a new AWS environment for a company which wants to migrate to the Cloud. The architects are considering the use of EC2 instances with instance store volumes. The architects realize that the data on the instance store volumes are ephemeral. Which action will not cause the data to be deleted on an instance store volume?
A. Reboot
B. The underlying disk drive fails.
C. Hardware disk failure.
D. Instance is stopped
Answers: A
Notes: Some Amazon Elastic Compute Cloud (Amazon EC2) instance types come with a form of directly attached, block-device storage known as the instance store. The instance store is ideal for temporary storage, because the data stored in instance store volumes is not persistent through instance stops, terminations, or hardware failures.
Q55: You work for an advertising company that has a real-time bidding application. You are also using CloudFront on the front end to accommodate a worldwide user base. Your users begin complaining about response times and pauses in real-time bidding. Which service can be used to reduce DynamoDB response times by an order of magnitude (milliseconds to microseconds)?
A. DAX
B. DynamoDB Auto Scaling
C. Elasticache
D. CloudFront Edge Caches
Answers: A
Notes: Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second. While DynamoDB offers consistent single-digit millisecond latency, DynamoDB with DAX takes performance to the next level with response times in microseconds for millions of requests per second for read-heavy workloads. With DAX, your applications remain fast and responsive, even when a popular event or news story drives unprecedented request volumes your way. No tuning required.
Q56: A travel company has deployed a website which serves travel updates to users all over the world. The traffic this database serves is very read heavy and can have some latency issues at certain times of the year. What can you do to alleviate these latency issues?
Notes: Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server as well as Amazon Aurora.
Q57: A large financial institution is gradually moving their infrastructure and applications to AWS. The company has data needs that will utilize all of RDS, DynamoDB, Redshift, and ElastiCache. Which description best describes Amazon Redshift?
A. Key-value and document database that delivers single-digit millisecond performance at any scale.
B. Cloud-based relational database.
C. Can be used to significantly improve latency and throughput for many read-heavy application workloads.
D. Near real-time complex querying on massive data sets.
Answers: D
Notes: Amazon Redshift is a fast, fully-managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds. With Redshift, you can start small for just $0.25 per hour with no commitments and scale out to petabytes of data for $1,000 per terabyte per year, less than a tenth the cost of traditional on-premises solutions. Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to run SQL queries directly against exabytes of unstructured data in Amazon S3 data lakes. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Amazon Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, Sequence, Text, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data retrieved, so queries against Amazon S3 run fast, regardless of data set size.
Q58: You are designing an architecture which will house an Auto Scaling Group of EC2 instances. The application hosted on the instances is expected to be an extremely popular social networking site. Forecasts for traffic to this site expect very high traffic and you will need a load balancer to handle tens of millions of requests per second while maintaining high throughput at ultra low latency. You need to select the type of load balancer to front your Auto Scaling Group to meet this high traffic requirement. Which load balancer will you select?
A. You will need an Application Load Balancer to meet this requirement.
B. All the AWS load balancers meet the requirement and perform the same.
C. You will select a Network Load Balancer to meet this requirement.
D. You will need a Classic Load Balancer to meet this requirement.
Answers: C
Notes: Network Load Balancer Overview: A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration. When you enable an Availability Zone for the load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. By default, each load balancer node distributes traffic across the registered targets in its Availability Zone only. If you enable cross-zone load balancing, each load balancer node distributes traffic across the registered targets in all enabled Availability Zones. It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part. The Network Load Balancer is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:
Static IP Addresses – Each Network Load Balancer provides a single IP address for each Availability Zone in its purview. If you have targets in us-west-2a and other targets in us-west-2c, NLB will create and manage two IP addresses (one per AZ); connections to that IP address will spread traffic across the instances in all the VPC subnets in the AZ. You can also specify an existing Elastic IP for each AZ for even greater control. With full control over your IP addresses, a Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.
Zonality – The IP-per-AZ feature reduces latency with improved performance, improves availability through isolation and fault tolerance, and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single AZ while still providing automatic failover should those targets become unavailable.
Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.
Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.
Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.
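The notes above describe layer-4 (connection-level) load balancing. As a rough illustration of the concept, a flow's 5-tuple can be hashed so every packet of one TCP connection lands on the same target. This is a teaching sketch only, not NLB's actual internal algorithm; the target IDs and tuple layout are made-up assumptions.

```python
import hashlib

# Toy layer-4 target selection: hash the flow 5-tuple and pick a target.
# Every packet of the same connection maps to the same backend.
def pick_target(flow, targets):
    digest = hashlib.sha256("|".join(map(str, flow)).encode()).digest()
    return targets[int.from_bytes(digest[:4], "big") % len(targets)]

targets = ["i-0aaa", "i-0bbb", "i-0ccc"]                  # hypothetical instance IDs
flow = ("tcp", "198.51.100.7", 53211, "10.0.1.25", 443)   # proto, src IP, src port, dst IP, dst port

first = pick_target(flow, targets)
assert pick_target(flow, targets) == first  # same flow -> same target every time
```

Because selection happens at the connection level with no request parsing, this style of balancing can sustain very high packet rates at low latency, which is why NLB suits the scenario in the question.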
Q59: An organization of about 100 employees has performed the initial setup of users in IAM. All users except administrators have the same basic privileges. But now it has been determined that 50 employees will have extra restrictions on EC2. They will be unable to launch new instances or alter the state of existing instances. What will be the quickest way to implement these restrictions?
A. Create an IAM Role for the restrictions. Attach it to the EC2 instances.
B. Create the appropriate policy. Place the restricted users in the new policy.
C. Create the appropriate policy. With only 50 users, attach the policy to each user.
D. Create the appropriate policy. Create a new group for the restricted users. Place the restricted users in the new group and attach the policy to the group.
Answers: D
Notes: You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, ACLs, and session policies. IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, if a policy allows the GetUser action, then a user with that policy can get user information from the AWS Management Console, the AWS CLI, or the AWS API. When you create an IAM user, you can choose to allow console or programmatic access. If console access is allowed, the IAM user can sign in to the console using a user name and password. Or if programmatic access is allowed, the user can use access keys to work with the CLI or API.
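As a sketch, the restriction described in the question could be expressed as an identity-based policy document and attached to the new group. The Sid and the exact list of EC2 actions below are illustrative assumptions; the action names themselves are real EC2 API actions.

```python
import json

# Hypothetical identity-based policy denying the restricted users the ability
# to launch new instances or alter the state of existing instances.
restrict_ec2_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInstanceStateChanges",   # illustrative Sid
            "Effect": "Deny",
            "Action": [
                "ec2:RunInstances",
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:RebootInstances",
                "ec2:TerminateInstances",
            ],
            "Resource": "*",
        }
    ],
}

# Policies are stored in AWS as JSON documents.
policy_document = json.dumps(restrict_ec2_policy, indent=2)
print(policy_document)
```

Attaching this one policy to a group containing the 50 restricted users is a single operation, which is why the group-based option is the quickest.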
Q60: You are managing S3 buckets in your organization. This management of S3 extends to Amazon Glacier. For auditing purposes you would like to be informed if an object is restored to S3 from Glacier. What is the most efficient way you can do this?
A. Create a CloudWatch event for uploads to S3
B. Create an SNS notification for any upload to S3.
C. Configure S3 notifications for restore operations from Glacier.
D. Create a Lambda function which is triggered by restoration of object from Glacier to S3.
Answers: C
Notes: The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. An S3 notification can be set up to notify you when objects are restored from Glacier to S3.
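A sketch of the notification configuration that answer C describes, publishing Glacier restore events to an SNS topic. The event names are real S3 event types; the topic ARN, account ID, and Id are placeholder assumptions.

```python
# Bucket notification configuration for Glacier restore events.
notification_configuration = {
    "TopicConfigurations": [
        {
            "Id": "NotifyOnGlacierRestore",  # illustrative Id
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:restore-alerts",  # placeholder
            "Events": [
                "s3:ObjectRestore:Post",       # restore request initiated
                "s3:ObjectRestore:Completed",  # restored copy is available
            ],
        }
    ]
}

events = notification_configuration["TopicConfigurations"][0]["Events"]
print(events)
```

With boto3, a dict shaped like this could be passed to `put_bucket_notification_configuration` on the bucket in question.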
Q61: Your company has gotten back results from an audit. One of the mandates from the audit is that your application, which is hosted on EC2, must encrypt the data before writing this data to storage. Which service could you use to meet this requirement?
A. AWS Cloud HSM
B. Security Token Service
C. EBS encryption
D. AWS KMS
Answers: D
Notes: You can configure your application to use the KMS API to encrypt all data before saving it to disk.
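The encrypt-before-write pattern can be sketched as follows. In production the data key would come from AWS KMS (e.g. GenerateDataKey) and a vetted cipher such as AES-GCM would be used; the hash-based keystream below is a teaching stand-in only, NOT secure cryptography.

```python
import hashlib
import secrets

def keystream(key, nonce, length):
    # Toy keystream from repeated hashing -- NOT a real cipher.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(plaintext, data_key):
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(data_key, nonce, len(plaintext))))
    return nonce, ct

def decrypt(nonce, ciphertext, data_key):
    return bytes(a ^ b for a, b in zip(ciphertext, keystream(data_key, nonce, len(ciphertext))))

data_key = secrets.token_bytes(32)  # in real code this would come from KMS
nonce, ciphertext = encrypt(b"customer record", data_key)
assert ciphertext != b"customer record"                     # data written to disk is not plaintext
assert decrypt(nonce, ciphertext, data_key) == b"customer record"
```

The key point for the exam scenario: the application calls the encryption service for a key and encrypts data itself, before any write to storage.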
Q62: Recent worldwide events have dictated that you perform your duties as a Solutions Architect from home. You need to be able to manage several EC2 instances while working from home and have been testing the ability to ssh into these instances. One instance in particular has been a problem and you cannot ssh into this instance. What should you check first to troubleshoot this issue?
A. Make sure that the security group for the instance has ingress on port 80 from your home IP address.
B. Make sure that your VPC has a connected Virtual Private Gateway.
C. Make sure that the security group for the instance has ingress on port 22 from your home IP address.
D. Make sure that the Security Group for the instance has ingress on port 443 from your home IP address.
Answers: C
Notes: The rules of a security group control the inbound traffic that’s allowed to reach the instances that are associated with the security group. The rules also control the outbound traffic that’s allowed to leave them. The following are the characteristics of security group rules:
By default, security groups allow all outbound traffic.
Security group rules are always permissive; you can’t create rules that deny access.
Security groups are stateful. If you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. For VPC security groups, this also means that responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules. For more information, see Connection tracking.
You can add and remove rules at any time. Your changes are automatically applied to the instances that are associated with the security group. The effect of some rule changes can depend on how the traffic is tracked. For more information, see Connection tracking. When you associate multiple security groups with an instance, the rules from each security group are effectively aggregated to create one set of rules. Amazon EC2 uses this set of rules to determine whether to allow access. You can assign multiple security groups to an instance. Therefore, an instance can have hundreds of rules that apply. This might cause problems when you access the instance. We recommend that you condense your rules as much as possible.
Q62: A consultant is hired by a small company to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within the VPC. The initial instances will be placed in a public subnet. The consultant begins to create security groups. What is true of the default security group?
A. You can delete this group, however, you can’t change the group’s rules.
B. You can delete this group or you can change the group’s rules.
C. You can’t delete this group, nor can you change the group’s rules.
D. You can’t delete this group, however, you can change the group’s rules.
Answers: D
Notes: Your VPC includes a default security group. You can’t delete this group, however, you can change the group’s rules. The procedure is the same as modifying any other security group. For more information, see Adding, removing, and updating rules.
Q63: You are evaluating the security setting within the main company VPC. There are several NACLs and security groups to evaluate and possibly edit. What is true regarding NACLs and security groups?
A. Network ACLs and security groups are both stateful.
B. Network ACLs and security groups are both stateless.
C. Network ACLs are stateless, and security groups are stateful.
D. Network ACLs are stateful, and security groups are stateless.
Answers: C
Notes: Network ACLs are stateless, which means that responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
The following are the basic characteristics of security groups for your VPC:
There are quotas on the number of security groups that you can create per VPC, the number of rules that you can add to each security group, and the number of security groups that you can associate with a network interface. For more information, see Amazon VPC quotas.
You can specify allow rules, but not deny rules.
You can specify separate rules for inbound and outbound traffic.
When you create a security group, it has no inbound rules. Therefore, no inbound traffic originating from another host to your instance is allowed until you add inbound rules to the security group.
By default, a security group includes an outbound rule that allows all outbound traffic. You can remove the rule and add outbound rules that allow specific outbound traffic only. If your security group has no outbound rules, no outbound traffic originating from your instance is allowed.
Security groups are stateful. If you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. Responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.
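The stateful/stateless distinction above can be modeled in a few lines. A security group tracks connections, so the response to an allowed inbound request flows out with no outbound rule; a NACL checks every packet, responses included, against its rules. The flow and rule shapes here are illustrative assumptions, not AWS data structures.

```python
# Toy model of stateful security groups vs. stateless NACLs.
def sg_allows_response(tracked_connections, response):
    # Stateful: allowed if the reverse flow was already tracked.
    return (response["dst"], response["src"], response["port"]) in tracked_connections

def nacl_allows(rules, packet):
    # Stateless: the packet must match an explicit allow rule.
    return any(r["port"] == packet["port"] and r["action"] == "allow" for r in rules)

tracked = {("client", "server", 443)}  # inbound client->server:443 was allowed earlier
response = {"src": "server", "dst": "client", "port": 443}

print(sg_allows_response(tracked, response))  # True: no outbound rule required
print(nacl_allows([], response))              # False: needs an explicit outbound allow rule
```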
Q64: Your company needs to deploy an application in the company AWS account. The application will reside on EC2 instances in an Auto Scaling Group fronted by an Application Load Balancer. The company has been using Elastic Beanstalk to deploy the application due to limited AWS experience within the organization. The application now needs upgrades and a small team of subcontractors have been hired to perform these upgrades. What can be used to provide the subcontractors with short-lived access tokens that act as temporary security credentials to the company AWS account?
A. IAM Roles
B. AWS STS
C. IAM user accounts
D. AWS SSO
Answers: B
Notes: AWS Security Token Service (AWS STS) is the service that you can use to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use, with the following differences: Temporary security credentials are short-term, as the name implies. They can be configured to last for anywhere from a few minutes to several hours. After the credentials expire, AWS no longer recognizes them or allows any kind of access from API requests made with them. Temporary security credentials are not stored with the user but are generated dynamically and provided to the user when requested. When (or even before) the temporary security credentials expire, the user can request new credentials, as long as the user requesting them still has permissions to do so.
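A minimal model of the credential lifecycle described above. The field names mirror the shape of an STS response (e.g. from assume_role), but every value here is a made-up placeholder.

```python
from datetime import datetime, timedelta, timezone

def make_credentials(duration_seconds):
    # Models temporary credentials with a fixed expiration time.
    now = datetime.now(timezone.utc)
    return {
        "AccessKeyId": "ASIAEXAMPLEKEYID",        # placeholder
        "SecretAccessKey": "example-secret",      # placeholder
        "SessionToken": "example-session-token",  # placeholder
        "Expiration": now + timedelta(seconds=duration_seconds),
    }

def is_expired(credentials):
    # After Expiration, AWS rejects API requests signed with these keys.
    return datetime.now(timezone.utc) >= credentials["Expiration"]

creds = make_credentials(3600)  # one-hour session
print(is_expired(creds))        # False right after issue
```

The subcontractors would receive credentials like these on demand and simply request fresh ones when they expire, with nothing long-lived to store or revoke.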
Q65: The company you work for has reshuffled teams a bit and you’ve been moved from the AWS IAM team to the AWS Network team. One of your first assignments is to review the subnets in the main VPCs. What are two key concepts regarding subnets?
A. A subnet spans all the Availability Zones in a Region.
B. Private subnets can only hold databases.
C. Each subnet maps to a single Availability Zone.
D. Every subnet you create is associated with the main route table for the VPC.
E. Each subnet is associated with one security group.
Answers: C and D
Notes: A VPC spans all the Availability Zones in the Region. After creating a VPC, you can add one or more subnets in each Availability Zone. When you create a subnet, you specify the CIDR block for the subnet, which is a subset of the VPC CIDR block, and we assign a unique ID to each subnet. Each subnet must reside entirely within one Availability Zone and cannot span zones. By default, every subnet you create is implicitly associated with the main route table for the VPC. Availability Zones are distinct locations that are engineered to be isolated from failures in other Availability Zones. By launching instances in separate Availability Zones, you can protect your applications from the failure of a single location. You can optionally add subnets in a Local Zone, which is an AWS infrastructure deployment that places compute, storage, database, and other select services closer to your end users. A Local Zone enables your end users to run applications that require single-digit millisecond latencies. For information about the Regions that support Local Zones, see Available Regions in the Amazon EC2 User Guide for Linux Instances.
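The CIDR constraint in the notes (each subnet CIDR is a subset of the VPC CIDR, and subnets must not overlap) can be checked with the standard library. The CIDR values below are example assumptions.

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")   # example VPC CIDR
subnets = [
    ipaddress.ip_network("10.0.1.0/24"),    # e.g. subnet in AZ a
    ipaddress.ip_network("10.0.2.0/24"),    # e.g. subnet in AZ b
]

# Each subnet CIDR must fall entirely within the VPC CIDR.
for s in subnets:
    assert s.subnet_of(vpc), f"{s} is outside the VPC CIDR"

# Subnets within a VPC must not overlap each other.
for a in subnets:
    for b in subnets:
        assert a is b or not a.overlaps(b), f"{a} overlaps {b}"

print("subnet plan is valid")
```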
Q66: Amazon Web Services offers 4 different levels of support. Which of the following are valid support levels? Choose 3
A. Enterprise
B. Developer
C. Corporate
D. Business
E. Free Tier
Answer: A, B, D Notes: The correct answers are Enterprise, Business, and Developer. References: https://docs.aws.amazon.com/
Q67: You are reviewing Change Control requests, and you note that there is a change designed to reduce wasted CPU cycles by increasing the value of your Amazon SQS “VisibilityTimeout” attribute. What does this mean?
A. While processing a message, a consumer instance can amend the message visibility counter by a fixed amount.
B. When a consumer instance retrieves a message, that message will be hidden from other consumer instances for a fixed period.
C. When the consumer instance polls for new work the SQS service will allow it to wait a certain time for a message to be available before closing the connection.
D. While processing a message, a consumer instance can reset the message visibility by restarting the preset timeout counter.
E. When the consumer instance polls for new work, the consumer instance will wait a certain time until it has a full workload before closing the connection.
F. When a new message is added to the SQS queue, it will be hidden from consumer instances for a fixed period.
Answer: B Notes: Poor timing of SQS processes can significantly impact the cost effectiveness of the solution. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. References: https://docs.aws.amazon.com/sqs
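The visibility timeout mechanic can be illustrated with a toy in-memory queue: a received message is hidden from other consumers until the timeout elapses or the message is deleted. This is a simplified sketch of the behavior, not SQS itself.

```python
import time

class ToyQueue:
    """In-memory queue illustrating the SQS visibility timeout."""

    def __init__(self, visibility_timeout=30):   # SQS default is 30 seconds
        self.visibility_timeout = visibility_timeout
        self.messages = {}                       # id -> (body, invisible_until)

    def send(self, msg_id, body):
        self.messages[msg_id] = (body, 0.0)

    def receive(self, now=None):
        now = time.monotonic() if now is None else now
        for msg_id, (body, invisible_until) in self.messages.items():
            if now >= invisible_until:
                # Hide the message from other consumers for the timeout period.
                self.messages[msg_id] = (body, now + self.visibility_timeout)
                return msg_id, body
        return None

    def delete(self, msg_id):
        # A consumer deletes the message after successfully processing it.
        self.messages.pop(msg_id, None)

q = ToyQueue(visibility_timeout=30)
q.send("m1", "resize-image")
print(q.receive(now=0.0))    # ('m1', 'resize-image')
print(q.receive(now=10.0))   # None: m1 is still invisible to other consumers
print(q.receive(now=31.0))   # ('m1', 'resize-image') again: the timeout expired
```

Raising the timeout reduces wasted CPU cycles because other consumers stop re-receiving a message that is still being processed.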
Q68: You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP. However, when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect?
A. After a few minutes.
B. Immediately.
C. Straight away, but to the new instances only.
D. Straight away to the new instances, but old instances must be stopped and restarted before the new rules apply.
Answer: B Notes: Changes to security group rules take effect immediately and apply to all instances associated with the security group.
Q69: Amazon SQS keeps track of all tasks and events in an application.
A. True
B. False
Answer: B Notes: Amazon SWF (not Amazon SQS) keeps track of all tasks and events in an application. Amazon SQS requires you to implement your own application-level tracking, especially if your application uses multiple queues. See the Amazon SWF FAQs. References: https://docs.aws.amazon.com/sqs
Q70: Your Security Manager has hired a security contractor to audit your network and firewall configurations. The consultant doesn’t have access to an AWS account. You need to provide the required access for the auditing tasks, and answer a question about login details for the official AWS firewall appliance. Which of the following might you do? Choose 2
A. Create an IAM User with a policy that can Read Security Group and NACL settings.
B. Explain that AWS implements network security differently and that there is no such thing as an official AWS firewall appliance. Security Groups and NACLs are used instead.
C. Create an IAM Role with a policy that can Read Security Group and NACL settings.
D. Explain that AWS is a cloud service and that AWS manages the Network appliances.
E. Create an IAM Role with a policy that can Read Security Group and Route settings.
Answer: A and B Notes: Create an IAM user for the auditor and explain that the firewall functionality is implemented as stateful Security Groups, and stateless subnet NACLs. AWS has removed the Firewall appliance from the hub of the network and implemented the firewall functionality as stateful Security Groups, and stateless subnet NACLs. This is not a new concept in networking, but rarely implemented at this scale. References: https://docs.aws.amazon.com/iam
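A sketch of what the auditor's read-only policy could look like. The two Action names are real EC2 describe calls; the Sid is an illustrative assumption.

```python
import json

# Hypothetical read-only policy for the auditor's IAM user: can read
# Security Group and NACL settings, nothing else.
auditor_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadSecurityGroupsAndNacls",  # illustrative Sid
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeNetworkAcls",
            ],
            "Resource": "*",
        }
    ],
}
print(json.dumps(auditor_policy, indent=2))
```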
Q71: How many internet gateways can I attach to my custom VPC?
A. 5
B. 3
C. 2
D. 1
Answer: D Notes: You can attach only one internet gateway to a VPC at a time. References: https://docs.aws.amazon.com/vpc
Q72: How long can a message be retained in an SQS Queue?
Answer: 14 days. Notes: The message retention period is configurable from 60 seconds to 14 days; the default is 4 days. After the retention period is reached, messages are automatically deleted. References: https://docs.aws.amazon.com/sqs
Q73: Although your application customarily runs at 30% usage, you have identified a recurring usage spike (>90%) between 8pm and midnight daily. What is the most cost-effective way to scale your application to meet this increased need?
A. Manually deploy Reactive Event-based Scaling each night at 7:45.
B. Deploy additional EC2 instances to meet the demand.
C. Use scheduled scaling to boost your capacity at a fixed interval.
D. Increase the size of the Resource Group to meet demand.
Answer: C Notes: Scheduled scaling allows you to set your own scaling schedule. For example, let’s say that every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling actions based on the predictable traffic patterns of your web application. Scaling actions are performed automatically as a function of time and date. Reference: Scheduled scaling for Amazon EC2 Auto Scaling.
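The nightly 8pm-to-midnight spike could be handled by a pair of scheduled actions. The dicts below use the field names accepted by boto3's `put_scheduled_update_group_action()`, but the group name, capacity numbers, and exact times are assumptions; Recurrence is a cron expression, evaluated in UTC unless a TimeZone is supplied.

```python
# Scale out shortly before the recurring spike, scale back in after it.
scale_out = {
    "AutoScalingGroupName": "web-asg",          # assumed group name
    "ScheduledActionName": "nightly-scale-out",
    "Recurrence": "45 19 * * *",                # 19:45 daily, just before the spike
    "MinSize": 12,
    "DesiredCapacity": 12,
}
scale_in = {
    "AutoScalingGroupName": "web-asg",
    "ScheduledActionName": "nightly-scale-in",
    "Recurrence": "15 0 * * *",                 # 00:15 daily, once traffic subsides
    "MinSize": 2,
    "DesiredCapacity": 4,
}
print(scale_out["Recurrence"], scale_in["Recurrence"])
```

Because capacity is only raised for the four-hour window, this is cheaper than running the extra instances around the clock.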
Q74: To save money, you quickly stored some data in one of the attached volumes of an EC2 instance and stopped it for the weekend. When you returned on Monday and restarted your instance, you discovered that your data was gone. Why might that be?
A. The EBS volume was not large enough to store your data.
B. The instance failed to connect to the root volume on Monday.
C. The elastic block-level storage service failed over the weekend.
D. The volume was ephemeral, block-level storage. Data on an instance store volume is lost if an instance is stopped.
Answer: D Notes: The EC2 instance had an instance store volume attached to it. Instance store volumes are ephemeral, meaning that data in attached instance store volumes is lost if the instance stops. Reference: Instance store lifetime
Q75: Select all the true statements on S3 URL styles: Choose 2
A. Virtual hosted-style URLs will eventually be deprecated in favor of Path-Style URLs for S3 bucket access.
B. Virtual-host-style URLs (such as: https://bucket-name.s3.Region.amazonaws.com/key name) are supported by AWS.
C. Path-Style URLs (such as https://s3.Region.amazonaws.com/bucket-name/key name) are supported by AWS.
D. DNS compliant names are NOT recommended for the URLs to access S3.
Answer: B and C Notes: Virtual-host-style URLs and Path-Style URLs (soon to be retired) are supported by AWS. DNS compliant names are recommended for the URLs to access S3. References: https://docs.aws.amazon.com/s3
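The two supported URL styles from the answer, built explicitly. The bucket, Region, and key values are examples.

```python
def virtual_hosted_url(bucket, region, key):
    # Virtual-hosted-style: the bucket name is part of the hostname.
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

def path_style_url(bucket, region, key):
    # Path-style: the bucket name is the first path segment.
    return f"https://s3.{region}.amazonaws.com/{bucket}/{key}"

print(virtual_hosted_url("my-bucket", "us-west-2", "photos/cat.jpg"))
# https://my-bucket.s3.us-west-2.amazonaws.com/photos/cat.jpg
print(path_style_url("my-bucket", "us-west-2", "photos/cat.jpg"))
# https://s3.us-west-2.amazonaws.com/my-bucket/photos/cat.jpg
```

Virtual-hosted-style addressing is why DNS-compliant bucket names are recommended: the bucket name must be valid as part of a hostname.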
Q76: With EBS, I can ____. Choose 2
A. Create an encrypted snapshot from an unencrypted snapshot by creating an encrypted copy of the unencrypted snapshot.
B. Create an unencrypted volume from an encrypted snapshot.
C. Create an encrypted volume from a snapshot of another encrypted volume.
D. Encrypt an existing volume.
Answer: A and C Notes: Although there is no direct way to encrypt an existing unencrypted volume or snapshot, you can encrypt them by creating either a volume or a snapshot. Reference: Encrypting unencrypted resources. You can create an encrypted volume from a snapshot of another encrypted volume. References: https://docs.aws.amazon.com/ebs
Q77: You have been engaged by a company to design and lead a migration to an AWS environment. The team is concerned about the capabilities of the new environment, especially when it comes to high availability and cost-effectiveness. The design calls for about 20 instances (c3.2xlarge) pulling jobs/messages from SQS. Network traffic per instance is estimated to be around 500 Mbps at the beginning and end of each job. Which configuration should you plan on deploying?
A. Use a 2nd Network Interface to separate the SQS traffic from the storage traffic.
B. Choose a different instance type that better matched the traffic demand.
C. Spread the Instances over multiple AZs to minimize the traffic concentration and maximize fault-tolerance.
D. Deploy as a Cluster Placement Group as the aggregated burst traffic could be around 10 Gbps.
Answer: C Notes: With a multi-AZ configuration, an additional reliability point is scored as the entire Availability Zone itself is ruled out as a single point of failure. This ensures high availability. Wherever possible, use simple solutions such as spreading the load out rather than expensive high-tech solutions. References: Regions and Availability Zones
Q78: You are a solutions architect working for a cosmetics company. Your company has a busy Magento online store that consists of a two-tier architecture. The web servers are on EC2 instances deployed across multiple AZs, and the database is on a Multi-AZ RDS MySQL database instance. Your store is having a Black Friday sale in five days, and having reviewed the performance for the last sale you expect the site to start running very slowly during the peak load. You investigate and you determine that the database was struggling to keep up with the number of reads that the store was generating. Which solution would you implement to improve the application read performance the most?
A. Deploy an Amazon ElastiCache cluster with nodes running in each AZ.
B. Upgrade your RDS MySQL instance to use provisioned IOPS.
C. Add an RDS Read Replica in each AZ.
D. Upgrade the RDS MySQL instance to a larger type.
Answer: C Notes: RDS Read Replicas can substantially increase the read performance of your database, and multiple read replicas can be created to increase performance further. This option also requires the fewest code modifications and is generally possible to implement in the timeframe specified. References: Working with Amazon RDS read replicas
Q79: Which native AWS service will act as a file system mounted on an S3 bucket?
A. Amazon Elastic Block Store
B. File Gateway
C. Amazon S3
D. Amazon Elastic File System
Answer: B Notes: A file gateway supports a file interface into Amazon Simple Storage Service (Amazon S3) and combines a service and a virtual software appliance. By using this combination, you can store and retrieve objects in Amazon S3 using industry-standard file protocols such as Network File System (NFS) and Server Message Block (SMB). The software appliance, or gateway, is deployed into your on-premises environment as a virtual machine (VM) running on VMware ESXi, Microsoft Hyper-V, or Linux Kernel-based Virtual Machine (KVM) hypervisor. The gateway provides access to objects in S3 as files or file share mount points. You can manage your S3 data using lifecycle policies, cross-region replication, and versioning. You can think of a file gateway as a file system mount on S3. Reference: What is AWS Storage Gateway? .
Q80: You have been evaluating the NACLs in your company. Most of the NACLs are configured the same:
100 All Traffic Allow
200 All Traffic Deny
* All Traffic Deny
If a request comes in, how will it be evaluated?
A. The default will deny traffic.
B. The request will be allowed.
C. The highest numbered rule will be used, a deny.
D. All rules will be evaluated and the end result will be Deny.
Answer: B
Notes: Rules are evaluated starting with the lowest numbered rule. As soon as a rule matches traffic, it’s applied immediately regardless of any higher-numbered rule that may contradict it. The following are the basic things that you need to know about network ACLs:
Your VPC automatically comes with a modifiable default network ACL. By default, it allows all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic.
You can create a custom network ACL and associate it with a subnet. By default, each custom network ACL denies all inbound and outbound traffic until you add rules.
Each subnet in your VPC must be associated with a network ACL. If you don’t explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL.
You can associate a network ACL with multiple subnets. However, a subnet can be associated with only one network ACL at a time. When you associate a network ACL with a subnet, the previous association is removed.
A network ACL contains a numbered list of rules. We evaluate the rules in order, starting with the lowest-numbered rule, to determine whether traffic is allowed in or out of any subnet associated with the network ACL. The highest number that you can use for a rule is 32766. We recommend that you start by creating rules in increments (for example, increments of 10 or 100) so that you can insert new rules where you need to later on.
A network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic.
Network ACLs are stateless, which means that responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
Q81: You have been given an assignment to configure Network ACLs in your VPC. Before configuring the NACLs, you need to understand how the NACLs are evaluated. How are NACL rules evaluated?
A. NACL rules are evaluated by rule number from lowest to highest and executed immediately when a matching rule is found.
B. NACL rules are evaluated by rule number from highest to lowest, and executed immediately when a matching rule is found.
C. All NACL rules that you configure are evaluated before traffic is passed through.
D. NACL rules are evaluated by rule number from highest to lowest, and all are evaluated before traffic is passed through.
Answer: A
Notes: NACL rules are evaluated by rule number from lowest to highest and executed immediately when a matching rule is found.
You can add or remove rules from the default network ACL, or create additional network ACLs for your VPC. When you add or remove rules from a network ACL, the changes are automatically applied to the subnets that it’s associated with. A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC. The following are the parts of a network ACL rule:
Rule number. Rules are evaluated starting with the lowest-numbered rule. As soon as a rule matches traffic, it’s applied regardless of any higher-numbered rule that might contradict it.
Type. The type of traffic, for example, SSH. You can also specify all traffic or a custom range.
Protocol. You can specify any protocol that has a standard protocol number. For more information, see Protocol Numbers. If you specify ICMP as the protocol, you can specify any or all of the ICMP types and codes.
Port range. The listening port or port range for the traffic. For example, 80 for HTTP traffic.
Source. [Inbound rules only] The source of the traffic (CIDR range).
Destination. [Outbound rules only] The destination for the traffic (CIDR range).
Allow/Deny. Whether to allow or deny the specified traffic.
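The evaluation order described above can be captured in a few lines: rules are checked from the lowest number upward, the first match is applied immediately, and the implicit '*' rule denies anything left unmatched. The (number, port, action) rule shape is an illustrative simplification.

```python
# Minimal evaluator for NACL rule-ordering semantics.
def evaluate_nacl(rules, packet_port):
    for number, port, action in sorted(rules):
        if port == "all" or port == packet_port:
            return action      # first matching rule wins, evaluation stops
    return "deny"              # implicit final '*' deny rule

# The Q80 configuration: 100 All Traffic Allow, 200 All Traffic Deny.
rules = [
    (100, "all", "allow"),
    (200, "all", "deny"),
]
print(evaluate_nacl(rules, 443))  # allow: rule 100 matches before rule 200 is reached
print(evaluate_nacl([], 443))     # deny: only the implicit '*' rule applies
```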
Q82: Your company has gone through an audit with a focus on data storage. You are currently storing historical data in Amazon Glacier. One of the results of the audit is that a portion of the infrequently-accessed historical data must be able to be accessed immediately upon request. Where can you store this data to meet this requirement?
A. S3 Standard
B. Leave infrequently-accessed data in Glacier.
C. S3 Standard-IA
D. Store the data in EBS
Answer: C
Notes: S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per-GB storage price and a per-GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files. S3 Storage Classes can be configured at the object level and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.
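A sketch of the Lifecycle configuration mentioned in the last sentence of the notes, transitioning objects under a prefix to Standard-IA. The rule ID and prefix are assumptions; 30 days is the minimum age S3 allows for a Standard-IA transition.

```python
# S3 Lifecycle configuration: move "historical/" objects to Standard-IA
# after 30 days, with no application changes required.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "historical-to-standard-ia",   # illustrative rule ID
            "Filter": {"Prefix": "historical/"}, # assumed prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
            ],
        }
    ]
}
print(lifecycle_configuration["Rules"][0]["Transitions"][0]["StorageClass"])
```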
Q84: After an IT Steering Committee meeting, you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies, such as VPN and Direct Connect, and based on the requirements you have decided to configure a VPN connection. What features and advantages can a VPN connection provide?
Answer: A VPN provides a connection between an on-premises network and a VPC, using a secure and private connection with IPsec (Site-to-Site VPN) or TLS (Client VPN).
A VPC/VPN Connection utilizes IPSec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN Connections can be configured in minutes and are a good solution if you have an immediate need, have low-to-modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity.
AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources or your on-premises network. With AWS Client VPN, you configure an endpoint to which your users can connect to establish a secure TLS VPN session. This enables clients to access resources in AWS or on-premises from any location using an OpenVPN-based VPN client.
Q86: Your company has decided to go to a hybrid cloud environment. Part of this effort will be to move a large data warehouse to the cloud. The warehouse is 50TB, and will take over a month to migrate given the current bandwidth available. What is the best option available to perform this migration considering both cost and performance aspects?
Answer: Use AWS Snowball Edge. Notes: The AWS Snowball Edge is a type of Snowball device with on-board storage and compute power for select AWS capabilities. Snowball Edge can undertake local processing and edge-computing workloads in addition to transferring data between your local environment and the AWS Cloud.
Each Snowball Edge device can transport data at speeds faster than the internet. This transport is done by shipping the data in the appliances through a regional carrier. The appliances are rugged shipping containers, complete with E Ink shipping labels. The AWS Snowball Edge device differs from the standard Snowball because it can bring the power of the AWS Cloud to your on-premises location, with local storage and compute functionality.
Snowball Edge devices have three options for device configurations: storage optimized, compute optimized, and with GPU. When this guide refers to Snowball Edge devices, it’s referring to all options of the device. Whenever specific information applies to only one or more optional configurations of devices, like how the Snowball Edge with GPU has an on-board GPU, it will be called out. For more information, see Snowball Edge Device Options.
Q87: You have been assigned to review the security of your company's AWS cloud environment. Your final deliverable will be a report detailing potential security issues. One of the first things you need to describe is the company's responsibilities under the shared responsibility model. Which measure is the customer's responsibility?
EC2 instance OS Patching
Notes: Security and compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer's operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The customer assumes responsibility for, and management of, the guest operating system (including updates and security patches), other associated application software, and the configuration of the AWS-provided security group firewall. Customers should carefully consider the services they choose, as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations. The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment of customer solutions. This differentiation of responsibility is commonly referred to as security "of" the cloud versus security "in" the cloud.
Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.
Q88: You work for a busy real estate company, and you need to protect your data stored on S3 from accidental deletion. Which of the following actions might you take to achieve this? Choose 2
A. Create a bucket policy that prohibits anyone from deleting things from the bucket. B. Enable S3 – Infrequent Access Storage (S3 – IA). C. Enable versioning on the bucket. If a file is accidentally deleted, delete the delete marker. D. Configure MFA-protected API access. E. Use pre-signed URLs so that users will not be able to accidentally delete data.
Answer: C and D. Notes: The best answers are to enable versioning on the bucket and to protect the objects by configuring MFA-protected API access. Reference: https://docs.aws.amazon.com/s3
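The delete-marker behavior that makes option C work can be sketched with a toy in-memory model (illustrative only; real buckets are managed through the S3 API, and the object names here are made up):

```python
# Toy model of S3 versioning semantics: a simple DELETE adds a delete
# marker on top of the version stack instead of removing data, and
# deleting that marker restores the object.
class VersionedBucket:
    def __init__(self):
        self.versions = {}  # key -> list of (version_id, body); body None = delete marker
        self.counter = 0

    def put(self, key, body):
        self.counter += 1
        self.versions.setdefault(key, []).append((f"v{self.counter}", body))

    def delete(self, key):
        # A plain DELETE does not remove any data; it adds a delete marker.
        self.counter += 1
        marker_id = f"v{self.counter}"
        self.versions.setdefault(key, []).append((marker_id, None))
        return marker_id

    def delete_version(self, key, version_id):
        # Permanently removes one version; removing the delete marker "undeletes".
        self.versions[key] = [v for v in self.versions[key] if v[0] != version_id]

    def get(self, key):
        stack = self.versions.get(key, [])
        if not stack or stack[-1][1] is None:
            raise KeyError(key)  # latest version is a delete marker -> 404
        return stack[-1][1]

bucket = VersionedBucket()
bucket.put("listing.pdf", b"house photos")
marker = bucket.delete("listing.pdf")         # accidental delete
bucket.delete_version("listing.pdf", marker)  # remove the marker to restore
print(bucket.get("listing.pdf"))
```

The key point the model captures is that versioned deletes are reversible until a specific version ID is deleted.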
Q89: AWS intends to shut down your spot instance; which of these scenarios is possible? Choose 3
A. AWS sends a notification of termination and you receive it 120 seconds before the intended forced shutdown.
B. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, and you delay it by sending a ‘Delay300’ instruction before the forced shutdown takes effect.
C. AWS sends a notification of termination and you receive it 120 seconds before the intended forced shutdown, but AWS does not action the shutdown.
D. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, but you block the shutdown because you used ‘Termination Protection’ when you initialized the instance.
E. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, but the defined duration period (also known as Spot blocks) hasn’t ended yet.
F. AWS sends a notification of termination, but you do not receive it within the 120 seconds and the instance is shutdown.
Answer: A, E, and F. Notes: When Amazon EC2 is going to interrupt your Spot Instance, it emits an event two minutes prior to the actual interruption (except for hibernation, where the interruption notice is still issued but hibernation begins immediately rather than two minutes later).
In rare situations, Spot blocks may be interrupted due to Amazon EC2 capacity needs. In these cases, AWS provides a two-minute warning before the instance is terminated, and customers are not charged for the terminated instances even if they have used them.
It is possible that your Spot Instance is terminated before the warning can be made available. Reference: https://docs.aws.amazon.com/ec2
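The two-minute warning surfaces through the instance metadata document at spot/instance-action; a minimal sketch of parsing such a notice (the timestamp value here is a made-up example) might look like:

```python
import json
from datetime import datetime, timezone

# Hypothetical payload shaped like the EC2 instance-metadata
# "spot/instance-action" document.
notice = json.loads('{"action": "terminate", "time": "2024-01-15T12:02:00Z"}')

def seconds_until_interruption(notice, now):
    # The "time" field is UTC in ISO-8601 with a trailing Z.
    deadline = datetime.strptime(
        notice["time"], "%Y-%m-%dT%H:%M:%SZ"
    ).replace(tzinfo=timezone.utc)
    return (deadline - now).total_seconds()

now = datetime(2024, 1, 15, 12, 0, 0, tzinfo=timezone.utc)
print(seconds_until_interruption(notice, now))  # the two-minute warning window
```

A real workload would poll the metadata endpoint and use the remaining seconds to checkpoint and drain work.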
Q90: What does the “EAR” in a policy document stand for?
A. Effects, APIs, Roles B. Effect, Action, Resource C. Ewoks, Always, Romanticize D. Every, Action, Reasonable
Answer: B. Notes: The elements included in a policy document that make up the “EAR” are effect, action, and resource. Reference: Policies and Permissions in IAM
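As a concrete illustration, a minimal identity-based policy showing the Effect, Action, and Resource elements might look like this (the bucket ARN is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```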
Q91: _____ provides real-time streaming of data.
A. Kinesis Data Analytics B. Kinesis Data Firehose C. Kinesis Data Streams D. SQS
Answer: C. Notes: Kinesis Data Streams provides real-time streaming of data. Reference: Amazon Kinesis Data Streams
Q92: You can use _ to build a schema for your data, and _ to query the data that’s stored in S3.
A. Glue, Athena B. EC2, SQS C. EC2, Glue D. Athena, Lambda
Answer: A. Notes: AWS Glue builds a schema for your data (for example, with Glue crawlers populating the Glue Data Catalog), and Amazon Athena queries the data stored in S3 using standard SQL. Reference: AWS Glue and Amazon Athena
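As a sketch of the pattern (the table name, columns, and the s3://example-logs bucket are all hypothetical), the schema can be defined as an external table in the Glue Data Catalog and then queried in place with Athena:

```sql
-- Define a schema over raw S3 data (DDL runnable from the Athena console;
-- it registers the table in the Glue Data Catalog).
CREATE EXTERNAL TABLE access_logs (
  request_time string,
  status_code  int,
  bytes_sent   bigint
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION 's3://example-logs/raw/';

-- Query the data where it sits in S3 with Athena.
SELECT status_code, COUNT(*) AS hits
FROM access_logs
GROUP BY status_code;
```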
Q93: What type of work does EMR perform?
A. Data processing information (DPI) jobs. B. Big data (BD) jobs. C. Extract, transform, and load (ETL) jobs. D. Huge amounts of data (HAD) jobs
Answer: C. Notes: EMR excels at extract, transform, and load (ETL) jobs. Reference: Amazon EMR – https://aws.amazon.com/emr/
Q94: _____ allows you to transform data using SQL as it’s being passed through Kinesis.
A. RDS B. Kinesis Data Analytics C. Redshift D. DynamoDB
Answer: B. Notes: Kinesis Data Analytics allows you to transform data using SQL as it passes through a Kinesis stream. Reference: Amazon Kinesis Data Analytics
Q95 [SAA-C03]: A company runs a public-facing three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances for the application tier running in private subnets need to download software patches from the internet. However, the EC2 instances cannot be directly accessible from the internet. Which actions should be taken to allow the EC2 instances to download the needed patches? (Select TWO.)
A. Configure a NAT gateway in a public subnet. B. Define a custom route table with a route to the NAT gateway for internet traffic and associate it with the private subnets for the application tier. C. Assign Elastic IP addresses to the EC2 instances. D. Define a custom route table with a route to the internet gateway for internet traffic and associate it with the private subnets for the application tier. E. Configure a NAT instance in a private subnet.
Answer: A. B. Notes: A NAT gateway forwards traffic from the EC2 instances in the private subnet to the internet or other AWS services, and then sends the response back to the instances. After a NAT gateway is created, the route tables for private subnets must be updated to point internet traffic to the NAT gateway.
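The routing logic the answer describes — VPC-internal traffic stays on the local route while everything else goes to the NAT gateway — can be sketched as a longest-prefix match over a toy route table (the CIDRs and NAT gateway ID are made up):

```python
import ipaddress

# Private-subnet route table as in the answer: the VPC CIDR stays on
# the implicit "local" route; the default route targets the NAT gateway.
route_table = {
    "10.0.0.0/16": "local",        # VPC CIDR
    "0.0.0.0/0":   "nat-0abc123",  # everything else -> NAT gateway
}

def next_hop(dest_ip, table):
    dest = ipaddress.ip_address(dest_ip)
    # Collect every route whose CIDR contains the destination...
    matches = [ipaddress.ip_network(cidr) for cidr in table
               if dest in ipaddress.ip_network(cidr)]
    # ...and pick the most specific one (longest prefix wins).
    best = max(matches, key=lambda n: n.prefixlen)
    return table[str(best)]

print(next_hop("10.0.4.20", route_table))    # stays inside the VPC
print(next_hop("151.101.1.1", route_table))  # patch download goes via NAT
```

This is why option B (associating the NAT route with the private subnets' route table) is required in addition to creating the gateway itself.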
Q96 [SAA-C03]: A solutions architect wants to design a solution to save costs for Amazon EC2 instances that do not need to run during a 2-week company shutdown. The applications running on the EC2 instances store data in instance memory that must be present when the instances resume operation. Which approach should the solutions architect recommend to shut down and resume the EC2 instances?
A. Modify the application to store the data on instance store volumes. Reattach the volumes while restarting them. B. Snapshot the EC2 instances before stopping them. Restore the snapshot after restarting the instances. C. Run the applications on EC2 instances enabled for hibernation. Hibernate the instances before the 2- week company shutdown. D. Note the Availability Zone for each EC2 instance before stopping it. Restart the instances in the same Availability Zones after the 2-week company shutdown.
Answer: C. Notes: Hibernating an EC2 instance saves the contents of instance memory to the Amazon Elastic Block Store (Amazon EBS) root volume. When the instance restarts, the instance memory contents are reloaded.
Q97 [SAA-C03]: A company plans to run a monitoring application on an Amazon EC2 instance in a VPC. Connections are made to the EC2 instance using the instance’s private IPv4 address. A solutions architect needs to design a solution that will allow traffic to be quickly directed to a standby EC2 instance if the application fails and becomes unreachable. Which approach will meet these requirements?
A) Deploy an Application Load Balancer configured with a listener for the private IP address and register the primary EC2 instance with the load balancer. Upon failure, de-register the instance and register the standby EC2 instance. B) Configure a custom DHCP option set. Configure DHCP to assign the same private IP address to the standby EC2 instance when the primary EC2 instance fails. C) Attach a secondary elastic network interface to the EC2 instance configured with the private IP address. Move the network interface to the standby EC2 instance if the primary EC2 instance becomes unreachable. D) Associate an Elastic IP address with the network interface of the primary EC2 instance. Disassociate the Elastic IP from the primary instance upon failure and associate it with a standby EC2 instance.
Answer: C. Notes: A secondary elastic network interface can be added to an EC2 instance. While primary network interfaces cannot be detached from an instance, secondary network interfaces can be detached and attached to a different EC2 instance.
Q98 [SAA-C03]: An analytics company is planning to offer a web analytics service to its users. The service will require that the users’ webpages include a JavaScript script that makes authenticated GET requests to the company’s Amazon S3 bucket. What must a solutions architect do to ensure that the script will successfully execute?
A. Enable cross-origin resource sharing (CORS) on the S3 bucket. B. Enable S3 Versioning on the S3 bucket. C. Provide the users with a signed URL for the script. D. Configure an S3 bucket policy to allow public execute privileges.
Answer: A. Notes: Web browsers will block running a script that originates from a server with a domain name that is different from the webpage. Amazon S3 can be configured with CORS to send HTTP headers that allow the script to run.
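A minimal CORS configuration for the bucket might look like the following (the allowed origin is a placeholder; in practice users' domains would be listed, or a wildcard used):

```json
[
  {
    "AllowedOrigins": ["https://customer-site.example"],
    "AllowedMethods": ["GET"],
    "AllowedHeaders": ["Authorization"],
    "MaxAgeSeconds": 3000
  }
]
```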
Q99 [SAA-C03]: A company’s security team requires that all data stored in the cloud be encrypted at rest at all times using encryption keys stored on premises. Which encryption options meet these requirements? (Select TWO.)
A. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). B. Use server-side encryption with AWS KMS managed encryption keys (SSE-KMS). C. Use server-side encryption with customer-provided encryption keys (SSE-C). D. Use client-side encryption to provide at-rest encryption. E. Use an AWS Lambda function invoked by Amazon S3 events to encrypt the data using the customer’s keys.
Answer: C. D. Notes: Server-side encryption with customer-provided keys (SSE-C) enables Amazon S3 to encrypt objects on the server side using an encryption key provided in the PUT request. The same key must be provided in the GET requests for Amazon S3 to decrypt the object. Customers also have the option to encrypt data on the client side before uploading it to Amazon S3, and then they can decrypt the data after downloading it. AWS software development kits (SDKs) provide an S3 encryption client that streamlines the process.
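With SSE-C, every request must carry the base64-encoded 256-bit key plus an MD5 digest of the key. A sketch of building those parameters (the parameter names match boto3's put_object/get_object keyword arguments; the key here is randomly generated as a stand-in for one stored on premises):

```python
import base64
import hashlib
import os

# Stand-in for a customer-managed key held on premises.
key = os.urandom(32)  # 256-bit key required for SSE-C (AES256)

# S3 expects the key and its MD5 digest, both base64-encoded, on every
# PUT and on every subsequent GET of the object.
sse_c_params = {
    "SSECustomerAlgorithm": "AES256",
    "SSECustomerKey": base64.b64encode(key).decode(),
    "SSECustomerKeyMD5": base64.b64encode(hashlib.md5(key).digest()).decode(),
}
print(sse_c_params["SSECustomerAlgorithm"])
```

Because S3 discards the key after each request, losing the on-premises key means losing access to the object — which is exactly what satisfies the "keys stored on premises" requirement.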
Q100 [SAA-C03]: A company uses Amazon EC2 Reserved Instances to run its data processing workload. The nightly job typically takes 7 hours to run and must finish within a 10-hour time window. The company anticipates temporary increases in demand at the end of each month that will cause the job to run over the time limit with the capacity of the current resources. Once started, the processing job cannot be interrupted before completion. The company wants to implement a solution that would provide increased resource capacity as cost-effectively as possible. What should a solutions architect do to accomplish this?
A) Deploy On-Demand Instances during periods of high demand. B) Create a second EC2 reservation for additional instances. C) Deploy Spot Instances during periods of high demand. D) Increase the EC2 instance size in the EC2 reservation to support the increased workload.
Answer: A. Notes: While Spot Instances would be the least costly option, they are not suitable for jobs that cannot be interrupted or must complete within a certain time period. On-Demand Instances would be billed for the number of seconds they are running.
Q101 [SAA-C03]: A company runs an online voting system for a weekly live television program. During broadcasts, users submit hundreds of thousands of votes within minutes to a front-end fleet of Amazon EC2 instances that run in an Auto Scaling group. The EC2 instances write the votes to an Amazon RDS database. However, the database is unable to keep up with the requests that come from the EC2 instances. A solutions architect must design a solution that processes the votes in the most efficient manner and without downtime. Which solution meets these requirements?
A. Migrate the front-end application to AWS Lambda. Use Amazon API Gateway to route user requests to the Lambda functions. B. Scale the database horizontally by converting it to a Multi-AZ deployment. Configure the front-end application to write to both the primary and secondary DB instances. C. Configure the front-end application to send votes to an Amazon Simple Queue Service (Amazon SQS) queue. Provision worker instances to read the SQS queue and write the vote information to the database. D. Use Amazon EventBridge (Amazon CloudWatch Events) to create a scheduled event to re-provision the database with larger, memory optimized instances during voting periods. When voting ends, re-provision the database to use smaller instances.
Answer: C. Notes: Decouple the ingestion of votes from the database so the voting system can continue processing votes without waiting for the database writes. Add dedicated workers that read from the SQS queue so votes are entered into the database at a controllable rate. The votes will be added to the database as fast as the database can process them, but no votes will be lost.
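The decoupling can be illustrated with an in-memory stand-in (queue.Queue plays the role of SQS, and the list plays the role of the RDS write path):

```python
import queue
import threading

votes = queue.Queue()  # stand-in for the SQS queue
database = []          # stand-in for the RDS table

def worker():
    # Dedicated worker drains the queue and writes at its own pace.
    while True:
        vote = votes.get()
        if vote is None:  # sentinel -> shut down
            break
        database.append(vote)  # stands in for the database write
        votes.task_done()

t = threading.Thread(target=worker)
t.start()

# Burst of votes during the broadcast: enqueueing returns instantly and
# never blocks on the database, so no vote is dropped under load.
for i in range(100_000):
    votes.put(f"vote-{i}")

votes.put(None)
t.join()
print(len(database))
```

The front end's latency is now bounded by the enqueue, not the database write, which is the core of why option C absorbs the burst without downtime.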
Q102 [SAA-C03]: A company has a two-tier application architecture that runs in public and private subnets. Amazon EC2 instances running the web application are in the public subnet and an EC2 instance for the database runs on the private subnet. The web application instances and the database are running in a single Availability Zone (AZ). Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO.)
A. Create new public and private subnets in the same AZ. B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs for the web application instances. C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer. D. Create new public and private subnets in a new AZ. Create a database using an EC2 instance in the public subnet in the new AZ. Migrate the old database contents to the new database. E. Create new public and private subnets in the same VPC, each in a new AZ. Create an Amazon RDS Multi-AZ DB instance in the private subnets. Migrate the old database contents to the new DB instance.
Answer: B. E. Notes: Create new subnets in a new Availability Zone (AZ) to provide a redundant network. Create an Auto Scaling group with instances in two AZs behind the load balancer to ensure high availability of the web application and redistribution of web traffic between the two public AZs. Create an RDS DB instance in the two private subnets to make the database tier highly available too.
Q103 [SAA-C03]: A website runs a custom web application that receives a burst of traffic each day at noon. The users upload new pictures and content daily, but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the application consistently takes 1 minute to initiate upon boot up before responding to user requests. How should a solutions architect redesign the architecture to better respond to changing traffic?
A. Configure a Network Load Balancer with a slow start configuration. B. Configure Amazon ElastiCache for Redis to offload direct requests from the EC2 instances. C. Configure an Auto Scaling step scaling policy with an EC2 instance warmup condition. D. Configure Amazon CloudFront to use an Application Load Balancer as the origin.
Answer: C. Notes: The current configuration puts new EC2 instances into service before they are able to respond to transactions. This could also cause the instances to overscale. With a step scaling policy, you can specify the number of seconds that it takes for a newly launched instance to warm up. Until its specified warm-up time has expired, an EC2 instance is not counted toward the aggregated metrics of the Auto Scaling group. While scaling out, the Auto Scaling logic does not consider EC2 instances that are warming up as part of the current capacity of the Auto Scaling group. Therefore, multiple alarm breaches that fall in the range of the same step adjustment result in a single scaling activity. This ensures that you do not add more instances than you need.
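A step scaling policy with a warmup matching the application's 1-minute boot time might look like the following sketch (values are illustrative; this is the shape accepted by `aws autoscaling put-scaling-policy --cli-input-json`, minus the policy and Auto Scaling group names):

```json
{
  "PolicyType": "StepScaling",
  "AdjustmentType": "ChangeInCapacity",
  "EstimatedInstanceWarmup": 60,
  "MetricAggregationType": "Average",
  "StepAdjustments": [
    { "MetricIntervalLowerBound": 0,  "MetricIntervalUpperBound": 20, "ScalingAdjustment": 1 },
    { "MetricIntervalLowerBound": 20, "ScalingAdjustment": 3 }
  ]
}
```

The `EstimatedInstanceWarmup` of 60 seconds keeps freshly launched instances out of the group's aggregated metrics until they can actually serve requests, preventing the overscaling described above.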
Q104 [SAA-C03]: An application running on AWS uses an Amazon Aurora Multi-AZ DB cluster deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database. What should the solutions architect do to separate the read requests from the write requests?
A. Enable read-through caching on the Aurora database. B. Update the application to read from the Multi-AZ standby instance. C. Create an Aurora replica and modify the application to use the appropriate endpoints. D. Create a second Aurora database and link it to the primary database as a read replica.
Answer: C. Notes: Aurora Replicas provide a way to offload read traffic. Aurora Replicas share the same underlying storage as the main database, so lag time is generally very low. Aurora Replicas have their own endpoints, so the application will need to be configured to direct read traffic to the new endpoints. Reference: Aurora Replicas
Question 106: A company plans to migrate its on-premises workload to AWS. The current architecture is composed of a Microsoft SharePoint server that uses a Windows shared file storage. The Solutions Architect needs to use a cloud storage solution that is highly available and can be integrated with Active Directory for access control and authentication. Which of the following options can satisfy the given requirement?
A. Create a file system using Amazon EFS and join it to an Active Directory domain. B. Create a file system using Amazon FSx for Windows File Server and join it to an Active Directory domain in AWS. C. Create a Network File System (NFS) file share using AWS Storage Gateway. D. Launch an Amazon EC2 Windows Server to mount a new S3 bucket as a file volume.
Answer: B. Notes: Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. Amazon FSx is accessible from Windows, Linux, and macOS compute instances and devices. Thousands of compute instances and devices can access a file system concurrently. Reference: Amazon FSx for Windows File Server Category: Design Resilient Architectures
Question 108: A Forex trading platform, which frequently processes and stores global financial data every minute, is hosted in your on-premises data center and uses an Oracle database. Due to a recent cooling problem in their data center, the company urgently needs to migrate their infrastructure to AWS to improve the performance of their applications. As the Solutions Architect, you are responsible for ensuring that the database is properly migrated and remains available in case of a database server failure in the future. Which of the following is the most suitable solution to meet the requirement?
A. Create an Oracle database in RDS with Multi-AZ deployments. B. Launch an Oracle database instance in RDS with Recovery Manager (RMAN) enabled. C. Launch an Oracle Real Application Clusters (RAC) in RDS. D. Convert the database schema using the AWS Schema Conversion Tool and AWS Database Migration Service. Migrate the Oracle database to a non-cluster Amazon Aurora with a single instance.
Answer: A. Notes: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Reference: RDS Multi AZ Category: Design Resilient Architectures
Question 109: A data analytics company, which uses machine learning to collect and analyze consumer data, is using Redshift cluster as their data warehouse. You are instructed to implement a disaster recovery plan for their systems to ensure business continuity even in the event of an AWS region outage. Which of the following is the best approach to meet this requirement?
A. Do nothing because Amazon Redshift is a highly available, fully-managed data warehouse which can withstand an outage of an entire AWS region. B. Enable Cross-Region Snapshots Copy in your Amazon Redshift Cluster. C. Create a scheduled job that will automatically take the snapshot of your Redshift Cluster and store it to an S3 bucket. Restore the snapshot in case of an AWS region outage. D. Use Automated snapshots of your Redshift Cluster.
Answer: B. Notes: You can configure Amazon Redshift to copy snapshots for a cluster to another region. To configure cross-region snapshot copy, you need to enable this copy feature for each cluster and configure where to copy snapshots and how long to keep copied automated snapshots in the destination region. When cross-region copy is enabled for a cluster, all new manual and automatic snapshots are copied to the specified region. Reference: Redshift Snapshots
Category: Design Resilient Architectures
Question 109: A start-up company has an EC2 instance that is hosting a web application. The volume of users is expected to grow in the coming months and hence, you need to add more elasticity and scalability in your AWS architecture to cope with the demand. Which of the following options can satisfy the above requirement for the given scenario? (Select TWO.)
A. Set up two EC2 instances and then put them behind an Elastic Load balancer (ELB). B. Set up two EC2 instances deployed using Launch Templates and integrated with AWS Glue. C. Set up an S3 Cache in front of the EC2 instance. D. Set up two EC2 instances and use Route 53 to route traffic based on a Weighted Routing Policy. E. Set up an AWS WAF behind your EC2 Instance.
Answer: A. D. Notes: Using an Elastic Load Balancer is an ideal solution for adding elasticity to your application. Alternatively, you can create a policy in Route 53, such as a Weighted routing policy, to evenly distribute traffic to two or more EC2 instances. Hence, setting up two EC2 instances behind an Elastic Load Balancer (ELB), and setting up two EC2 instances with Route 53 routing traffic based on a Weighted Routing Policy, are the correct answers. Reference: Elastic Load Balancing Category: Design Resilient Architectures
Question 110: A company plans to deploy a Docker-based batch application in AWS. The application will be used to process both mission-critical data as well as non-essential batch jobs. Which of the following is the most cost-effective option to use in implementing this architecture?
A. Use ECS as the container management service then set up Reserved EC2 Instances for processing both mission-critical and non-essential batch jobs. B. Use ECS as the container management service then set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively. C. Use ECS as the container management service then set up On-Demand EC2 Instances for processing both mission-critical and non-essential batch jobs. D. Use ECS as the container management service then set up Spot EC2 Instances for processing both mission-critical and non-essential batch jobs.
Answer: B. Notes: Amazon ECS lets you run batch workloads with managed or custom schedulers on Amazon EC2 On-Demand Instances, Reserved Instances, or Spot Instances. You can launch a combination of EC2 instances to set up a cost-effective architecture depending on your workload: Reserved EC2 Instances to process the mission-critical data, and Spot EC2 Instances for the non-essential batch jobs.

There are two charge models for Amazon Elastic Container Service (ECS): the Fargate launch type and the EC2 launch type. With Fargate, you pay for the amount of vCPU and memory resources that your containerized application requests, while for the EC2 launch type there is no additional charge; you pay for the AWS resources (e.g., EC2 instances or EBS volumes) you create to store and run your application. You pay only for what you use, as you use it; there are no minimum fees and no upfront commitments.

In this scenario, the most cost-effective solution is to use ECS as the container management service with a combination of Reserved and Spot EC2 Instances for the mission-critical and non-essential batch jobs respectively. You can also use Scheduled Reserved Instances (Scheduled Instances), which let you purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. This ensures uninterrupted compute capacity to process your mission-critical batch jobs. Reference: Amazon ECS
Category: Design Resilient Architectures
Question 111: A company has recently adopted a hybrid cloud architecture and is planning to migrate a database hosted on-premises to AWS. The database currently has over 50 TB of consumer data, handles highly transactional (OLTP) workloads, and is expected to grow. The Solutions Architect should ensure that the database is ACID-compliant and can handle complex queries of the application. Which type of database service should the Architect use?
A. Amazon DynamoDB B. Amazon RDS C. Amazon Redshift D. Amazon Aurora
Answer: D. Notes: Amazon Aurora (Aurora) is a fully managed relational database engine that’s compatible with MySQL and PostgreSQL. You already know how MySQL and PostgreSQL combine the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. The code, tools, and applications you use today with your existing MySQL and PostgreSQL databases can be used with Aurora. With some workloads, Aurora can deliver up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. Aurora includes a high-performance storage subsystem. Its MySQL- and PostgreSQL-compatible database engines are customized to take advantage of that fast distributed storage. The underlying storage grows automatically as needed, up to 64 tebibytes (TiB). Aurora also automates and standardizes database clustering and replication, which are typically among the most challenging aspects of database configuration and administration. Reference: Aurora Category: Design Resilient Architectures
Question 112: An online stocks trading application that stores financial data in an S3 bucket has a lifecycle policy that moves older data to Glacier every month. There is a strict compliance requirement where a surprise audit can happen at anytime and you should be able to retrieve the required data in under 15 minutes under all circumstances. Your manager instructed you to ensure that retrieval capacity is available when you need it and should handle up to 150 MB/s of retrieval throughput. Which of the following should you do to meet the above requirement? (Select TWO.)
A. Retrieve the data using Amazon Glacier Select. B. Use Bulk Retrieval to access the financial data. C. Purchase provisioned retrieval capacity. D. Use Expedited Retrieval to access the financial data. E. Specify a range, or portion, of the financial data archive to retrieve.
Answer: C. D. Notes: Expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required. For all but the largest archives (250 MB+), data accessed using Expedited retrievals is typically made available within 1–5 minutes. Provisioned capacity ensures that retrieval capacity for Expedited retrievals is available when you need it. To make an Expedited, Standard, or Bulk retrieval, set the Tier parameter in the Initiate Job (POST jobs) REST API request to the option you want, or the equivalent in the AWS CLI or AWS SDKs. If you have purchased provisioned capacity, all Expedited retrievals are automatically served through it. Each unit of capacity allows at least three Expedited retrievals to be performed every five minutes and provides up to 150 MB/s of retrieval throughput. You should purchase provisioned retrieval capacity if your workload requires highly reliable and predictable access to a subset of your data in minutes. Without provisioned capacity, Expedited retrievals are accepted except in rare situations of unusually high demand; if you require access to Expedited retrievals under all circumstances, you must purchase provisioned retrieval capacity. Reference: Amazon S3 Glacier Category: Design Resilient Architectures
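The retrieval tier is chosen per restore request; a sketch of the S3 restore-object request body selecting the Expedited tier might look like this (the Days value is illustrative):

```json
{
  "Days": 1,
  "GlacierJobParameters": { "Tier": "Expedited" }
}
```

With provisioned capacity purchased, requests of this shape are served through the reserved throughput automatically.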
Question 113: An organization stores and manages financial records of various companies in its on-premises data center, which is almost out of space. The management decided to move all of their existing records to a cloud storage service. All future financial records will also be stored in the cloud. For additional security, all records must be prevented from being deleted or overwritten. Which of the following should you do to meet the above requirement? A. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon S3 and enable object lock. B. Use AWS DataSync to move the data. Store all of your data in Amazon EFS and enable object lock. C. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon EBS and enable object lock. D. Use AWS DataSync to move the data. Store all of your data in Amazon S3 and enable object lock.
Answer: D. Notes: AWS DataSync allows you to copy large datasets with millions of files, without having to build custom solutions with open source tools, or license and manage expensive commercial network acceleration software. You can use DataSync to migrate active data to AWS, transfer data to the cloud for analysis and processing, archive data to free up on-premises storage capacity, or replicate data to AWS for business continuity. AWS DataSync enables you to migrate your on-premises data to Amazon S3, Amazon EFS, and Amazon FSx for Windows File Server. You can configure DataSync to make an initial copy of your entire dataset, and schedule subsequent incremental transfers of changing data to Amazon S3. Enabling S3 Object Lock prevents your existing and future records from being deleted or overwritten. AWS DataSync is primarily used to migrate existing data to Amazon S3. On the other hand, AWS Storage Gateway is more suitable if you still want to retain access to the migrated data and for ongoing updates from your on-premises file-based applications. Reference: AWS DataSync FAQs (https://aws.amazon.com/datasync/faqs/). Category: Design Secure Applications and Architectures
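As a sketch of the Object Lock part of this answer, the request structures below show a bucket created with Object Lock enabled (it must be enabled at creation time) plus a default COMPLIANCE retention rule so records cannot be deleted or overwritten. Field names follow the S3 CreateBucket and PutObjectLockConfiguration APIs; the bucket name and retention period are hypothetical.

```python
# Sketch: S3 request parameters for an Object Lock bucket with a default
# COMPLIANCE retention rule. Bucket name and retention period are examples.
def build_object_lock_setup(bucket, retention_years=7):
    create_bucket = {
        "Bucket": bucket,
        "ObjectLockEnabledForBucket": True,  # Object Lock must be set at creation
    }
    lock_config = {
        "Bucket": bucket,
        "ObjectLockConfiguration": {
            "ObjectLockEnabled": "Enabled",
            "Rule": {
                # COMPLIANCE mode: no user, including root, can delete or
                # overwrite a locked object version during the retention period.
                "DefaultRetention": {"Mode": "COMPLIANCE", "Years": retention_years}
            },
        },
    }
    return create_bucket, lock_config
```

With boto3 these would feed `s3.create_bucket(**create_bucket)` followed by `s3.put_object_lock_configuration(**lock_config)`.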
Question 114: A solutions architect is designing a solution to run a containerized web application by using Amazon Elastic Container Service (Amazon ECS). The solutions architect wants to minimize cost by running multiple copies of a task on each container instance. The number of task copies must scale as the load increases and decreases. Which routing solution distributes the load to the multiple tasks?
A. Configure an Application Load Balancer to distribute the requests by using path-based routing. B. Configure an Application Load Balancer to distribute the requests by using dynamic host port mapping. C. Configure an Amazon Route 53 alias record set to distribute the requests with a failover routing policy. D. Configure an Amazon Route 53 alias record set to distribute the requests with a weighted routing policy.
Answer: B. Notes: With dynamic host port mapping, multiple tasks from the same service are allowed on each container instance, and the Application Load Balancer distributes requests across them. You can use weighted routing policies to route traffic to instances in proportions that you specify, but you cannot use weighted routing policies to manage multiple tasks on a single container instance. Reference: Choosing a routing policy. Category: Design Cost-Optimized Architectures
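A short sketch of the setting that enables this: in an ECS task definition on the EC2 launch type, setting `hostPort` to 0 tells ECS to assign an ephemeral host port per task, so multiple copies of the task fit on one instance and the ALB target group routes to each task's port. The image name is hypothetical.

```python
# Sketch: ECS container definition fragment enabling dynamic host port mapping.
# hostPort = 0 means "pick an ephemeral port", allowing many task copies per
# container instance behind one ALB target group. Image name is hypothetical.
container_definition = {
    "name": "web",
    "image": "example/web-app:latest",  # hypothetical image
    "memory": 256,
    "portMappings": [
        {"containerPort": 80, "hostPort": 0, "protocol": "tcp"}  # 0 = dynamic
    ],
}
```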
Question 115: A Solutions Architect needs to deploy a mobile application that can collect votes for a popular singing competition. Millions of users from around the world will submit votes using their mobile phones. These votes must be collected and stored in a highly scalable and highly available data store which will be queried for real-time ranking. Which of the following combination of services should the architect use to meet this requirement? A. Amazon Redshift and AWS Mobile Hub B. Amazon DynamoDB and AWS AppSync C. Amazon Relational Database Service (RDS) and Amazon MQ D. Amazon Aurora and Amazon Cognito
Answer: B. Notes: The scenario calls for a highly scalable and highly available data store that supports real-time queries, which points to Amazon DynamoDB. DynamoDB is a durable, scalable, and highly available data store that can be used for real-time tabulation. You can also use AWS AppSync with DynamoDB to make it easy to build collaborative apps that keep shared data updated in real time: you specify the data for your app with simple code statements, and AWS AppSync manages everything needed to keep the app data updated. This allows your app to access data in Amazon DynamoDB, trigger AWS Lambda functions, or run Amazon Elasticsearch queries and combine data from these services to provide exactly the data you need for your app.
Question 116: The usage of a company’s image-processing application is increasing suddenly with no set pattern. The application’s processing time grows linearly with the size of the image. The processing can take up to 20 minutes for large image files. The architecture consists of a web tier, an Amazon Simple Queue Service (Amazon SQS) standard queue, and message consumers that process the images on Amazon EC2 instances. When a high volume of requests occurs, the message backlog in Amazon SQS increases. Users are reporting the delays in processing. A solutions architect must improve the performance of the application in compliance with cloud best practices. Which solution will meet these requirements?
A. Purchase enough Dedicated Instances to meet the peak demand. Deploy the instances for the consumers. B. Convert the existing SQS standard queue to an SQS FIFO queue. Increase the visibility timeout. C. Configure a scalable AWS Lambda function as the consumer of the SQS messages. D. Create a message consumer that is an Auto Scaling group of instances. Configure the Auto Scaling group to scale based upon the ApproximateNumberOfMessages Amazon CloudWatch metric.
Answer: D. Notes: An Auto Scaling group of consumers that scales on the queue backlog (the ApproximateNumberOfMessagesVisible CloudWatch metric) lets processing capacity track demand, which is the cloud best practice for this pattern. FIFO queues solve problems that occur when messages are processed out of order, but they do not improve performance during sudden volume increases; additionally, you cannot convert an SQS standard queue to a FIFO queue after you create it. A Lambda consumer is not suitable here because processing can take up to 20 minutes, which exceeds Lambda's 15-minute maximum execution time. Reference: FIFO Queues
Question 117: An application is hosted on an EC2 instance with multiple EBS Volumes attached and uses Amazon Neptune as its database. To improve data security, you encrypted all of the EBS volumes attached to the instance to protect the confidential data stored in the volumes. Which of the following statements are true about encrypted Amazon Elastic Block Store volumes? (Select TWO.)
A. All data moving between the volume and the instance are encrypted. B. Snapshots are automatically encrypted. C. The volumes created from the encrypted snapshot are not encrypted. D. Snapshots are not automatically encrypted. E. Only the data in the volume is encrypted and not all the data moving between the volume and the instance. Answer: A. B. Notes: Amazon Elastic Block Store (Amazon EBS) provides block level storage volumes for use with EC2 instances. EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone. EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance. Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. You can encrypt both the boot and data volumes of an EC2 instance. Reference: Amazon EBS encryption
Question 118: A reporting application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. For complex reports, the application can take up to 15 minutes to respond to a request. A solutions architect is concerned that users will receive HTTP 5xx errors if a report request is in process during a scale-in event. What should the solutions architect do to ensure that user requests will be completed before instances are terminated?
A. Enable sticky sessions (session affinity) for the target group of the instances. B. Increase the instance size in the Application Load Balancer target group. C. Increase the cooldown period for the Auto Scaling group to a greater amount of time than the time required for the longest running responses. D. Increase the deregistration delay timeout for the target group of the instances to greater than 900 seconds.
Answer: D. Notes: By default, Elastic Load Balancing waits 300 seconds before completing the deregistration process, which allows in-flight requests to the target to complete. Because complex reports can take up to 15 minutes (900 seconds), increasing the deregistration delay for the target group to more than 900 seconds ensures that user requests finish before an instance selected for scale-in is removed. Reference: Deregistration Delay.
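The change itself is a single target group attribute. The sketch below builds the update in the shape of the ELBv2 ModifyTargetGroupAttributes API, with a guard encoding the "greater than 900 seconds" requirement; the ARN is a placeholder.

```python
# Sketch: raising the target group deregistration delay above the longest
# report time (900 s). Shape follows the ELBv2 ModifyTargetGroupAttributes
# API; the target group ARN is a placeholder.
def build_deregistration_delay_update(target_group_arn, timeout_seconds):
    if timeout_seconds <= 900:
        raise ValueError("timeout must exceed the longest request (900 s)")
    return {
        "TargetGroupArn": target_group_arn,
        "Attributes": [
            {"Key": "deregistration_delay.timeout_seconds",
             "Value": str(timeout_seconds)}  # the API takes the value as a string
        ],
    }
```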
Question 119: A company used Amazon EC2 Spot Instances for a demonstration that is now complete. A solutions architect must remove the Spot Instances to stop them from incurring cost. What should the solutions architect do to meet this requirement?
A. Cancel the Spot request only. B. Terminate the Spot Instances only. C. Cancel the Spot request. Terminate the Spot Instances. D. Terminate the Spot Instances. Cancel the Spot request.
Answer: C. Notes: Both steps are required. Canceling the Spot request alone does not terminate instances that are already running, and terminating the instances without canceling a persistent Spot request would cause replacement instances to launch. Cancel the Spot request first, then terminate the Spot Instances. Reference: Spot Instances
Question 120: Which components are required to build a site-to-site VPN connection on AWS? (Select TWO.) A. An Internet Gateway B. A NAT gateway C. A customer Gateway D. A Virtual Private Gateway E. Amazon API Gateway
Answer: C. D. Notes: A virtual private gateway is attached to the VPC side of an AWS Site-to-Site VPN connection, allowing encrypted network traffic from an on-premises data center into your VPC through an IPsec tunnel. A customer gateway is also required for the VPN connection to be established: it is the AWS resource that represents the customer gateway device set up and configured in the customer’s data center. Reference: What is AWS Site-to-Site VPN?
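To tie the two required components together, here is a sketch of the resources involved, shaped after the EC2 API calls (CreateCustomerGateway, CreateVpnGateway, CreateVpnConnection). The public IP (from the documentation example range) and the ASN are hypothetical.

```python
# Sketch: the two required Site-to-Site VPN components and the connection
# that joins them. Shapes follow the EC2 API; IP and ASN are hypothetical.
customer_gateway = {
    "Type": "ipsec.1",
    "PublicIp": "203.0.113.10",  # public IP of the on-premises gateway device
    "BgpAsn": 65000,             # hypothetical customer-side ASN
}

virtual_private_gateway = {"Type": "ipsec.1"}  # attached to the VPC afterwards

def build_vpn_connection(cgw_id, vgw_id):
    # Joins the customer gateway and virtual private gateway into a
    # Site-to-Site VPN connection.
    return {"Type": "ipsec.1", "CustomerGatewayId": cgw_id, "VpnGatewayId": vgw_id}
```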
Question 121: A company runs its website on Amazon EC2 instances behind an Application Load Balancer that is configured as the origin for an Amazon CloudFront distribution. The company wants to protect against cross-site scripting and SQL injection attacks. Which approach should a solutions architect recommend to meet these requirements?
A. Enable AWS Shield Advanced. List the CloudFront distribution as a protected resource. B. Define an AWS Shield Advanced policy in AWS Firewall Manager to block cross-site scripting and SQL injection attacks. C. Set up AWS WAF on the CloudFront distribution. Use conditions and rules that block cross-site scripting and SQL injection attacks. D. Deploy AWS Firewall Manager on the EC2 instances. Create conditions and rules that block cross-site scripting and SQL injection attacks.
Answer: C. Notes: AWS WAF can detect the presence of SQL code that is likely to be malicious (known as SQL injection). AWS WAF also can detect the presence of a script that is likely to be malicious (known as cross-site scripting). Reference: AWS WAF.
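One common way to get these protections is to attach AWS managed rule groups to a web ACL on the CloudFront distribution. The sketch below follows the WAFv2 CreateWebACL structure; the ACL name and metric names are hypothetical, and the rule group names (AWSManagedRulesSQLiRuleSet for SQL injection, AWSManagedRulesCommonRuleSet for XSS and other common exploits) are the AWS managed groups.

```python
# Sketch: a WAFv2 web ACL attaching AWS managed SQLi and common (XSS) rule
# groups to a CloudFront distribution. ACL/metric names are hypothetical.
def managed_rule(name, priority):
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {
            "ManagedRuleGroupStatement": {"VendorName": "AWS", "Name": name}
        },
        "OverrideAction": {"None": {}},  # use the group's own block actions
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }

web_acl = {
    "Name": "site-protection",  # hypothetical
    "Scope": "CLOUDFRONT",      # web ACLs for CloudFront use this scope
    "DefaultAction": {"Allow": {}},
    "Rules": [
        managed_rule("AWSManagedRulesSQLiRuleSet", 1),    # SQL injection
        managed_rule("AWSManagedRulesCommonRuleSet", 2),  # includes XSS rules
    ],
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "site-protection",
    },
}
```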
Question 122: A media company is designing a new solution for graphic rendering. The application requires up to 400 GB of storage for temporary data that is discarded after the frames are rendered. The application requires approximately 40,000 random IOPS to perform the rendering. What is the MOST cost-effective storage option for this rendering application? A. A storage optimized Amazon EC2 instance with instance store storage B. A storage optimized Amazon EC2 instance with a Provisioned IOPS SSD (io1 or io2) Amazon Elastic Block Store (Amazon EBS) volume C. A burstable Amazon EC2 instance with a Throughput Optimized HDD (st1) Amazon Elastic Block Store (Amazon EBS) volume D. A burstable Amazon EC2 instance with Amazon S3 storage over a VPC endpoint
Answer: A. Notes: SSD-backed storage optimized (i2) instances provide more than 365,000 random IOPS, comfortably above the 40,000 IOPS required. The instance store has no additional cost beyond the regular hourly cost of the instance, and because instance store volumes are ephemeral, they are a good fit for temporary data that is discarded after the frames are rendered. Reference: Amazon EC2 pricing.
Question 123: A company is deploying a new application that will consist of an application layer and an online transaction processing (OLTP) relational database. The application must be available at all times. However, the application will have periods of inactivity. The company wants to pay the minimum for compute costs during these idle periods. Which solution meets these requirements MOST cost-effectively? A. Run the application in containers with Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Use Amazon Aurora Serverless for the database. B. Run the application on Amazon EC2 instances by using a burstable instance type. Use Amazon Redshift for the database. C. Deploy the application and a MySQL database to Amazon EC2 instances by using AWS CloudFormation. Delete the stack at the beginning of the idle periods. D. Deploy the application on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. Use Amazon RDS for MySQL for the database.
Answer: A. Notes: When Amazon ECS uses Fargate for compute, you pay only for the vCPU and memory that your running tasks request, with no EC2 instances to manage, and the service can scale tasks down as load drops. Amazon Aurora Serverless automatically scales database capacity with demand and incurs no compute costs during idle periods. Reference: AWS Fargate Pricing.
Question 124: Which options best describe characteristics of events in event-driven design? (Select THREE.)
A. Events are usually processed asynchronously
B. Events usually expect an immediate reply
C. Events are used to share information about a change in state
D. Events are observable
E. Events direct the actions of targets
Answer: A. C. D. Notes: Events are used to share information about a change in state. Events are observable and usually processed asynchronously. Events do not direct the actions of targets, and events do not expect a reply. Events can be used to trigger synchronous communications, and in that case an event source like API Gateway might wait for a response. Reference: Event-Driven Design on AWS
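A small sketch of what such an event looks like in practice, in the shape of an Amazon EventBridge PutEvents entry: the producer describes a state change and publishes it, without waiting for a reply or knowing which targets will react. The source and detail-type names are hypothetical.

```python
# Sketch: an event describing a state change, shaped like an EventBridge
# PutEvents entry. Fire-and-forget: no reply expected, targets unknown to
# the producer. Source/detail-type names are hypothetical.
import json

def order_shipped_event(order_id):
    return {
        "Source": "myapp.orders",      # hypothetical event source
        "DetailType": "OrderShipped",  # names the state change, not an action
        "Detail": json.dumps({"orderId": order_id, "status": "SHIPPED"}),
    }

# With boto3: events.put_events(Entries=[order_shipped_event("o-1001")])
```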
Question 125: Which of these scenarios would lead you to choose AWS AppSync and GraphQL APIs over API Gateway and REST APIs? (Select THREE.)
A. You need a strongly typed schema for developers.
B. You need a server-controlled response.
C. You need multiple authentication options to the same API.
D. You need to integrate with existing clients.
E. You need client-specific responses that require data from many backend resources.
Answer: A. C. E. Notes: With GraphQL, you define the schema and data types in advance; if it’s not in the schema, you can’t query for it. Developers can download the schema and generate source code from it to work with the API. Consider GraphQL for applications where you need a client-specific response that pulls data from many backend sources. When you need a server-controlled response, choose REST. AWS AppSync allows you to use multiple authentication options on the same API, whereas API Gateway allows you to associate only one authentication option per resource. When you need to integrate with existing clients, REST is much more mature and has broader tooling; most clients are written for REST. Reference: GraphQL vs. REST
Question 126: Which options are TRUE statements about serverless security? (Select THREE.)
A. Logging and metrics are especially critical because you can’t go back to the server to see what happened when something fails.
B. Because you aren’t responsible for the operating system and the network itself, you don’t need to worry about mitigating external attacks.
C. The distributed perimeter means your code needs to defend each of the potential paths that might be used to reach your functions.
D. You can use Lambda’s fine-grained controls to scope its reach with a much smaller set of permissions as opposed to traditional approaches.
E. You may use the same tooling as with your server-based applications, but the best practices you follow will be different.
Answer: A. C. and D.
Notes: In Lambda’s ephemeral environment, logging and metrics are more critical because once the code runs, you can no longer go back to the server to find out what has happened. The security perimeter you are defending has to consider the different services that might trigger a function, and your code needs to defend each of those potential paths. You can use Lambda’s fine-grained controls to scope its reach with a much smaller set of permissions as opposed to traditional approaches where you may give broad permissions for your application on its servers. Scope your functions to limit permission sharing between any unrelated components. Security best practices don’t change with serverless, but the tooling you’ll use will change. For example, techniques such as installing agents on your host may not be relevant any more. While you aren’t responsible for the operating system or the network itself, you do need to protect your network boundaries and mitigate external attacks.
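As a sketch of the fine-grained permissions point, here is a least-privilege IAM policy for a Lambda function that only needs to read and write one DynamoDB table: two actions, one resource, instead of the broad server-style permissions a traditional deployment might use. The account ID, Region, and table name are hypothetical.

```python
# Sketch: a least-privilege IAM policy scoped to a single Lambda function's
# needs -- two DynamoDB actions on one table. Account/Region/table are
# hypothetical examples.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Votes",
        }
    ],
}
```

Scoping each function's execution role like this limits the blast radius if any one function is compromised, and avoids permission sharing between unrelated components.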
Question 127: Which options are examples of steps you take to protect your serverless application from attacks? (Select FOUR.)
A. Update your operating system with the latest patches.
B. Configure geoblocking on Amazon CloudFront in front of regional API endpoints.
C. Disable origin access identity on Amazon S3.
D. Disable CORS on your APIs.
E. Use resource policies to limit access to your APIs to users from a specified account.
F. Filter out specific traffic patterns with AWS WAF.
G. Parameterize queries so that your Lambda function expects a single input.
Answer: B. E. F. G
Notes: You aren’t responsible for the operating system or network configuration where your functions run, and AWS is ensuring the security of the data within those managed services. You are responsible for protecting data entering your application and limiting access to your AWS resources. You still need to protect data that originates client-side or that travels to or from endpoints outside AWS.
When integrating CloudFront with regional API endpoints, CloudFront also supports geoblocking, which you can use to prevent requests from being served from particular geographic locations.
Use origin access identity with Amazon S3 to allow bucket access only through CloudFront.
CORS is a browser security feature that restricts cross-origin HTTP requests that are initiated from scripts running in the browser. It is enforced by the browser. If your APIs will receive cross-origin requests, you should enable CORS support in API Gateway.
IAM resource policies can be used to limit access to your APIs. For example, you can restrict access to users from a specified AWS account or deny traffic from a specified source IP address or CIDR block.
AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits. AWS WAF lets you create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define.
Lambda functions are triggered by events. These events submit an event parameter to the Lambda function and could be exploited for SQL injection. You can prevent this type of attack by parameterizing queries so that your Lambda function expects a single input.
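The parameterization point can be shown with a runnable example. Using Python's built-in sqlite3 here purely for illustration (a Lambda function would use its actual database driver the same way): the placeholder binds the input as data, so an injection payload cannot alter the SQL statement.

```python
# Runnable illustration of a parameterized query. sqlite3 stands in for
# whatever driver a Lambda function would use; the binding pattern is the same.
import sqlite3

def find_user(conn, username):
    # The driver binds `username` as data, so input such as
    # "x' OR '1'='1" cannot change the structure of the SQL statement.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
```

A classic injection string simply matches no rows, while a legitimate lookup works normally.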
Question 128:Which options reflect best practices for automating your deployment pipeline with serverless applications? (Select TWO.)
A. Select one deployment framework and use it for all of your deployments for consistency.
B. Use different AWS accounts for each environment in your deployment pipeline.
C. Use AWS SAM to configure safe deployments and include pre- and post-traffic tests.
D. Create a specific AWS SAM template to match each environment to keep them distinct.
Answer: B. and C.
Notes: You may use multiple deployment frameworks for an application so that you can use the framework that best suits the type of deployment. For example, you might use the AWS SAM framework to define your application stack and deployment preferences and then use AWS CDK to provision any infrastructure-related resources, such as the CI/CD pipeline.
It is a best practice to use different AWS accounts for each environment. This approach limits the blast radius of issues that occur and makes it less complex to differentiate which resources are associated with each environment. Because of the way costs are calculated with serverless, spinning up additional environments doesn’t add much to your cost.
AWS SAM lets you configure safe deployment preferences so that you can run code before the deployment, and after the deployment and rollback if there is a problem. You can also specify a method for shifting traffic to the new version a little bit at a time.
It is a best practice to use one AWS SAM template across environments and use options to parameterize values that are different per environment. This helps ensure that the environment is built with exactly the same stack.
Question 129: Your application needs to connect to an Amazon RDS instance on the backend. What is the best recommendation to the developer whose function must read from and write to the Amazon RDS instance?
A. Use reserved concurrency to limit the number of concurrent functions that would try to write to the database
B. Use the database proxy feature to provide connection pooling for the functions
C. Initialize the number of connections you want outside of the handler
D. Use the database TTL setting to clean up connections
Answer: B. Notes: Amazon RDS Proxy pools and shares database connections, so bursts of concurrent Lambda invocations do not exhaust the database's connection limit. This makes the database proxy feature the best recommendation for a function that must read from and write to the Amazon RDS instance.
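A sketch of the Lambda-side pattern that goes with the proxy: open one connection per execution environment, against the proxy endpoint, outside the handler, and reuse it across invocations while the proxy pools connections to the database. The endpoint and the `connect()` helper are placeholders for a real driver call (e.g. `pymysql.connect`).

```python
# Sketch: connection reuse against an RDS Proxy endpoint. connect() and the
# endpoint below are placeholders standing in for a real database driver call.
import os

def connect(endpoint):
    # Placeholder for e.g. pymysql.connect(host=endpoint, ...)
    return {"endpoint": endpoint, "open": True}

# Runs once per cold start, not on every invocation.
PROXY_ENDPOINT = os.environ.get(
    "DB_PROXY_ENDPOINT",
    "myapp-proxy.proxy-abc123.us-east-1.rds.amazonaws.com",  # hypothetical
)
connection = connect(PROXY_ENDPOINT)

def handler(event, context):
    # Every invocation reuses the already-open, proxy-pooled connection.
    return {"connected_to": connection["endpoint"]}
```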
Question 130: A company runs a cron job on an Amazon EC2 instance on a predefined schedule. The cron job calls a bash script that encrypts a 2 KB file. A security engineer creates an AWS Key Management Service (AWS KMS) CMK with a key policy.
The key policy and the EC2 instance role have the necessary configuration for this job.
Which process should the bash script use to encrypt the file?
A) Use the aws kms encrypt command to encrypt the file by using the existing CMK.
B) Use the aws kms create-grant command to generate a grant for the existing CMK.
C) Use the aws kms encrypt command to generate a data key. Use the plaintext data key to encrypt the file.
D) Use the aws kms generate-data-key command to generate a data key. Use the encrypted data key to encrypt the file.
Answer: D
Notes: AWS KMS can directly encrypt payloads of up to 4 KB, so option A would technically work for a 2 KB file, but it is not the recommended practice. The create-grant command manages permissions on a CMK; it does not encrypt anything. The kms encrypt command does not generate data keys. Only generate-data-key returns a data key in both plaintext and encrypted form: you encrypt the file with the plaintext data key, then store the encrypted data key alongside the encrypted file (for example, in its metadata) so it can be decrypted later with kms decrypt.
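The envelope encryption flow behind answer D can be sketched locally. This is illustrative only: `os.urandom` stands in for the Plaintext key that `aws kms generate-data-key` returns, the CiphertextBlob here is a literal placeholder (in reality KMS returns the data key encrypted under the CMK), and the XOR "cipher" is a toy where a real implementation would use AES with the plaintext key.

```python
# Illustrative-only sketch of the envelope encryption flow (answer D).
import os
from itertools import cycle

def toy_cipher(data, key):
    # XOR with a repeated key: NOT real cryptography, flow illustration only.
    # A real implementation would use AES with the plaintext data key.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def envelope_encrypt(file_bytes):
    data_key_plaintext = os.urandom(32)       # GenerateDataKey: Plaintext field
    data_key_encrypted = b"<CiphertextBlob>"  # GenerateDataKey: CiphertextBlob
    ciphertext = toy_cipher(file_bytes, data_key_plaintext)
    # Persist the ciphertext and the *encrypted* data key together, then
    # discard the plaintext key; kms decrypt recovers it later.
    return ciphertext, data_key_encrypted, data_key_plaintext
```

Symmetric XOR means applying `toy_cipher` again with the same key recovers the file, mirroring how the decrypted data key would be used.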
Question 131: A Security engineer must develop an AWS Identity and Access Management (IAM) strategy for a company’s organization in AWS Organizations. The company needs to give developers autonomy to develop and test their applications on AWS, but the company also needs to implement security guardrails to help protect itself. The company creates and distributes applications with different levels of data classification and types. The solution must maximize scalability.
Which combination of steps should the security engineer take to meet these requirements? (Choose three.)
A) Create an SCP to restrict access to highly privileged or unauthorized actions to specific IAM principals. Assign the SCP to the appropriate AWS accounts.
B) Create an IAM permissions boundary to allow access to specific actions and IAM principals. Assign the IAM permissions boundary to all IAM principals within the organization.
C) Create a delegated IAM role that has capabilities to create other IAM roles. Use the delegated IAM role to provision IAM principals by following the principle of least privilege.
D) Create OUs based on data classification and type. Add the AWS accounts to the appropriate OU. Provide developers access to the AWS accounts based on business need.
E) Create IAM groups based on data classification and type. Add only the required developers’ IAM role to the IAM groups within each AWS account.
F) Create IAM policies based on data classification and type. Add the minimum required IAM policies to the developers’ IAM role within each AWS account.
Answer: A, B, and C.
Notes: An SCP provides an organization-wide guardrail against highly privileged or unauthorized actions. An IAM permissions boundary caps what any principal in the organization can do, and a delegated role that can create other IAM roles lets teams provision least-privilege access themselves at scale. Per-account IAM groups and hand-maintained per-account policies (options E and F) do not scale across an organization.
Question 132: A company is ready to deploy a public web application. The company will use AWS and will host the application on an Amazon EC2 instance. The company must use SSL/TLS encryption. The company is already using AWS Certificate Manager (ACM) and will export a certificate for use with the deployment.
How can a security engineer deploy the application to meet these requirements?
A) Put the EC2 instance behind an Application Load Balancer (ALB). In the EC2 console, associate the certificate with the ALB by choosing HTTPS and 443.
B) Put the EC2 instance behind a Network Load Balancer. Associate the certificate with the EC2 instance.
C) Put the EC2 instance behind a Network Load Balancer (NLB). In the EC2 console, associate the certificate with the NLB by choosing HTTPS and 443.
D) Put the EC2 instance behind an Application Load Balancer. Associate the certificate with the EC2 instance.
Answer: A
Notes: You can’t directly install Amazon-issued certificates on Amazon Elastic Compute Cloud (EC2) instances. Instead, use the certificate with a load balancer, and then register the EC2 instance behind the load balancer.
What are the 6 pillars of the AWS Well-Architected Framework?
AWS Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. Based on six pillars — operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability — AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures, and implement designs that can scale over time.
1. Operational Excellence
The operational excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. You can find prescriptive guidance on implementation in the Operational Excellence Pillar whitepaper.
2. Security The security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. You can find prescriptive guidance on implementation in the Security Pillar whitepaper.
3. Reliability The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. You can find prescriptive guidance on implementation in the Reliability Pillar whitepaper.
4. Performance Efficiency The performance efficiency pillar includes the ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve. You can find prescriptive guidance on implementation in the Performance Efficiency Pillar whitepaper.
5. Cost Optimization The cost optimization pillar includes the ability to avoid or eliminate unneeded cost or suboptimal resources. You can find prescriptive guidance on implementation in the Cost Optimization Pillar whitepaper.
6. Sustainability
The ability to increase efficiency across all components of a workload by maximizing the benefits from the provisioned resources.
There are six best practice areas for sustainability in the cloud:
Region Selection – AWS Global Infrastructure
User Behavior Patterns – Auto Scaling, Elastic Load Balancing
Software and Architecture Patterns – AWS Design Principles
The AWS Well-Architected Framework provides architectural best practices across the six pillars for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. The framework provides a set of questions that allows you to review an existing or proposed architecture. It also provides a set of AWS best practices for each pillar. Using the Framework in your architecture helps you produce stable and efficient systems, which allows you to focus on functional requirements.
Other AWS Facts and Summaries and Questions/Answers Dump
The reality, of course, today is that if you come up with a great idea you don’t get to go quickly to a successful product. There’s a lot of undifferentiated heavy lifting that stands between your idea and that success. The kinds of things that I’m talking about when I say undifferentiated heavy lifting are things like these: figuring out which servers to buy, how many of them to buy, and on what timeline to buy them.
Eventually you end up with heterogeneous hardware and you have to match that. You have to think about backup scenarios if you lose your data center or lose connectivity to a data center. Eventually you have to move facilities. There’s negotiations to be done. It’s a very complex set of activities that really is a big driver of ultimate success.
But they are undifferentiated from, it’s not the heart of, your idea. We call this muck. And it gets worse because what really happens is you don’t have to do this one time. You have to drive this loop. After you get your first version of your idea out into the marketplace, you’ve done all that undifferentiated heavy lifting, you find out that you have to cycle back. Change your idea. The winners are the ones that can cycle this loop the fastest.
On every cycle of this loop you have this undifferentiated heavy lifting, or muck, that you have to contend with. I believe that for most companies, and it’s certainly true at Amazon, that 70% of your time, energy, and dollars go into the undifferentiated heavy lifting and only 30% of your energy, time, and dollars gets to go into the core kernel of your idea.
I think what people are excited about is that they’re going to get a chance they see a future where they may be able to invert those two. Where they may be able to spend 70% of their time, energy and dollars on the differentiated part of what they’re doing.
So my exam was yesterday and I got the results in 24 hours. I think that’s how they review all SAA exams now; they’re not showing the results right away anymore.
I scored 858. I was practicing with Stephane’s Udemy lectures and Bonso’s exam tests. My test results were as follows: Test 1: 63%, 93%; Test 2: 67%, 87%; Test 3: 81%; Test 4: 72%; Test 5: 75%; Test 6: 81%; Stephane’s test: 80%.
I was reading all question explanations (even the ones I got correct)
The actual exam was pretty much similar to these. The topics I got were:
A lot of S3 (make sure you know all of it from head to toes)
VPC peering
DataSync and Database Migration Service in the same questions. Make sure you know the difference
One EKS question
2-3 KMS questions
Security group question
A lot of RDS Multi-AZ
SQS + SNS fan out pattern
ECS microservice architecture question
Route 53
NAT gateway
And that’s all I can remember)
I took extra 30 minutes, because English is not my native language and I had plenty of time to think and then review flagged questions.
Hey guys, just giving my update so all of you guys working towards your certs can stay motivated as these success stories drove me to reach this goal.
Background: 12 years of military IT experience, never worked with the cloud. I've done 7 deployments (that is a lot in 12 years), at which point I came home from the last one burnt out, with a family that barely knew me. I knew I needed a change, but had no clue where to start or what I wanted to do. I wasn't really interested in IT, but I knew it'd pay the bills. After seeing videos about people in IT working from home (which, after 8+ years of being gone from home, really appealed to me), I stumbled across a video about a Solutions Architect's daily routine working from home, which got me interested in AWS.
AWS Solutions Architect SAA Certification Preparation time: It took me 68 days straight of hard work to pass this exam with confidence. No rest days, more than 120 pages of hand-written notes and hundreds and hundreds of flash cards.
In the beginning, I hopped on Stephane Maarek's course for the CCP exam just to see if it was for me. I did the course in about a week and then, after doing some research on here, got the CCP practice exams from tutorialsdojo.com. Two weeks after starting the Udemy course, I passed the exam. By that point, I'd already done lots of research on the different career paths, the best way to study, etc.
Cantrill (10/10) – That same day, I hopped onto Cantrill's course for the SAA and got to work. Somebody had mentioned that by doing his courses you'd be over-prepared for the exam. While I think a combination of material is really important for passing the certification with confidence, I can say without a doubt that Cantrill's courses got me 85-90% of the way there. His forum is also amazing, and it directly led to me talking with somebody who works at AWS about landing a job, which makes the money I spent on all of his courses A STEAL. As I continue my journey (up next is SA Pro), I will be using all of his courses.
Neal Davis (8/10) – After completing Cantrill's course, I found myself needing a resource to reinforce all the material I'd just learned. AWS is an expansive platform, and the many intricacies of the different services can be tricky. For this portion, I relied on Neal Davis's Training Notes series. These training notes are a very condensed version of the information you'll need to pass the exam, and with the proper context they are very useful for finding the things you may have missed in your initial learning. I will be using his other Training Notes for my other exams as well.
TutorialsDojo(10/10) – These tests filled in the gaps and allowed me to spot my weaknesses and shore them up. I actually think my real exam was harder than these, but because I’d spent so much time on the material I got wrong, I was able to pass the exam with a safe score.
As I said, I was surprised at how difficult the exam was. A lot of my questions were related to DBs, and a lot of them gave no context as to whether the data being loaded was SQL or NoSQL, which made the choice selection a little frustrating. A lot of the questions have 2 VERY SIMILAR answers, and often the wording of the answers could be easy to misinterpret (such as, when you are creating a Read Replica, do you attach it to the primary application DB that is slowing down because of read issues, or to the service that is causing the primary DB to slow down). For context, I was scoring 95-100% on the TD exams prior to taking the test and managed an 823 on the exam, so I don't know if I got unlucky with a hard test or if I'm not as prepared as I thought I was (i.e. over-thinking questions).
Anyways, up next is going back over the practical parts of the course as I gear up for the SA Pro exam. I will be taking my time with this one, and re-learning the Linux CLI in preparation for finding a new job.
PS if anybody on here is hiring, I’m looking! I’m the hardest worker I know and my goal is to make your company as streamlined and profitable as possible. 🙂
Testimonial: How did you prepare for AWS Certified Solutions Architect – Associate Level certification?
Best way to prepare for aws solution architect associate certification
Practical knowledge is about 30% of it; the rest comes from Jayendra's blog and the practice dumps.
Buying Udemy courses alone doesn't make you pass. I can say with confidence that without the dumps and without Jayendra's blog, it is not easy to clear the certification.
Read FAQs of S3, IAM, EC2, VPC, SQS, Autoscaling, Elastic Load Balancer, EBS, RDS, Lambda, API Gateway, ECS.
Read the Security Whitepaper and Shared Responsibility model.
Also important: expect basic questions on the topics most recently introduced to the exam, such as Amazon Kinesis.
Whitepapers contain the important information about each service that Amazon publishes on its website. If you are preparing for the AWS certifications, it is very important to read some of the most recommended whitepapers before sitting the exam.
Data security questions can be among the most challenging, and it's worth noting that you need a good understanding of the security processes described in the whitepaper titled "Overview of Security Processes".
Of these, the most important whitepapers are Overview of Security Processes and Storage Options in the Cloud. Read more here…
Stephane Maarek's Udemy course, and his 6 practice exams
Adrian Cantrill's online course (about 60% done)
TutorialDojo’s exams
(My company has a Udemy Business account, so I was able to use Stephane's course/exams)
I scheduled my exam at the end of March and started with Adrian's course. But I was dumb in thinking that I could get through his course within 3 weeks… I stopped around 12% of the way through, went to the textbook, and finished reading the all-in-one exam guide over a weekend. Then I started going through Stephane's course. While working through it, I pushed the exam back to the end of April, because I knew I wouldn't be ready by the time it came around.
Five days before the exam, I finished Stephane's course and then did the final exam in the course. I failed miserably (around 50%). So I did one of Stephane's practice exams and did worse (42%). I thought maybe his exams were just slightly more difficult, so I bought Jon Bonso's exams and got 60% on the first one. Based on all the questions in those exams, I realized I was definitely lacking some fundamentals. I went back to Adrian's course and things were definitely sticking more; I think it has to do with his explanations plus the more practical material. Unfortunately, I could not finish his course before the exam (because I was cramming), and by exam day I had only done four of Bonso's six exams, barely passing one of them.
Please, don't do what I did. I was desperate to get this thing over with. I wanted to move on and work on other things for my job search, but if you're not in this situation, please don't do this. I can't for the love of god tell you about OAI and CloudFront and why that's different from an S3 URL. The only thing I can remember is all the practical stuff I did in Adrian's course. I'll never forget how to create a VPC, because he makes you go through it manually. I'm not against Stephane's course; each course is different in its own way (see the tips below).
So here's what I recommend doing before sitting the AWS exam:
Don't schedule your exam beforehand. Go through your study materials and make sure you get at least 80% on all of Jon Bonso's exams (I'd recommend 90% or higher).
If you like to learn things practically, I recommend Adrian's course. If you like to learn things conceptually, go with Stephane Maarek's course. I found Stephane's course more detailed when going through different architectures, but I can't say that definitively because I didn't finish Adrian's course.
Jon Bonso's exams were about the same difficulty as the actual exam, but slightly more tricky. For example, many questions give you two different situations, and you really have to figure out what is being asked, because the situations might seem to contradict each other while the actual question asks for one specific thing. However, a few questions were definitely obvious if you knew the service.
I'm upset that even though I passed the exam, I'm still lacking some practical skills, so I'm going to go through Adrian's Developer course, but without cramming this time. If you actually learn the materials and practice them, they are definitely useful in the real world. I hope this helps you pass and actually learn the material.
P.S. I vehemently disagree with Adrian on one thing in his course: doggogram.io is definitely better than catagram.io, although his cats are pretty cool.
I sat the exam at a PearsonVUE test centre and scored 816.
The exam had lots of questions around S3, RDS and storage. To be honest it was a bit of a blur but they are the ones I remember.
I was a bit worried before sitting the exam, as I had only hit 76% in the official AWS practice exam the night before, but it turned out alright in the end!
I have around 8 years of experience in IT but AWS was relatively new to me around 5 weeks ago.
Training Material Used
Firstly, I ran through u/stephanemaarek's course, which I found to cover pretty much everything required!
I then used the u/Tutorials_Dojo practice exams. I took one before starting Stephane's course to see where I was at with no training. I got 46%, but I suppose a few of those were lucky guesses!
I then finished the course and took another test, hitting around 65%. TD was great as they gave explanations for the answers, which I used to go back over my weak areas in the course.
I then couldn't seem to get higher than the low 70s on the practice exams, so I went through u/neal-davis's course, which was also great as it had an "Exam Cram" video at the end of each topic.
I also set up flashcards on Brainscape, which helped me remember the AWS services and their functions.
All in all it was a great learning experience and I look forward to putting my skills into action!
S3 use cases, storage tiers, and CloudFront were pretty prominent too
Only got one “figure out what’s wrong with this IAM policy” question
A handful of dynamodb questions and a handful for picking use cases between different database types or caching layers.
Other typical tips: when you're unclear on which answer to pick, or if two seem very similar, work on eliminating answers first. Reasoning "it can't be X because of Y" can help a lot.
Testimonial: Passed the AWS Solutions Architect Associate exam! I prepared mostly from freely available resources as my basics were strong. I bought Jon Bonso's tests on Udemy and they turned out to be super important for those particular types of questions (i.e. the questions that feel subjective but aren't), for understanding the line of questioning, and for the most suitable answers to some common scenarios.
I created a Notion notebook to note down common scenarios, exceptions, what supports what, integrations, etc. I used that notebook and the cheat sheets on the Tutorials Dojo website for revision on the final day.
I found the exam a little tougher than Jon Bonso's, but his practice tests on Udemy were crucial. I wouldn't have passed without them.
A piece of advice for upcoming test aspirants: get your basics right, especially networking. Understand properly how different services interact in a VPC. Focus on the last line of the question; it usually hints at what exactly is needed, whether that's cost optimization, performance efficiency, or high availability. Little to no operational effort means serverless. Understand all serverless services thoroughly.
I have almost no experience with AWS, except for completing the Certified Cloud Practitioner earlier this year. My work is pushing all IT employees to complete some cloud training and certifications, which is why I chose to do this.
How I Studied: My company pays for acloudguru subscriptions for its employees, so I used that for the bulk of my learning. I took notes on 3×5 notecards on the key terms and concepts for review.
Once I scored passing grades on the ACG practice tests, I took the Jon Bonso tests on Udemy, which are much more difficult and fairly close to the difficulty of the actual exam. I scored 45%-74% on every Bonso practice test, and spent 1-2 hours after each test reviewing what I missed, supplementing my note cards, and taking time to understand my weak spots. I only took these tests once each, but in between each practice test, I would review all my note cards until I had the content largely memorized.
The Test: This was one of the most difficult certification tests I've ever done. The exam was remote proctored with PearsonVUE (I used PSI for the CCP and didn't like it as much). I felt like I was failing half the time. I marked about 25% of the questions for review, and I used up the entire allotted time. The questions are mostly about understanding which services interact with which other services, or which services are incompatible with the scenario. It was important for me to read through each response and eliminate the ones that don't make sense. A lot of the responses mentioned AWS services that sound good but don't actually work together (e.g. if it doesn't make sense to have service X querying database Y, that probably isn't the right answer). I can't point to one domain that needs to be studied more than any other. You need to know all of the content for the exam.
Final Thoughts: The ACG practice tests are not a good metric for success for the actual SAA exam, and I would not have passed without Bonso’s tests showing me my weak spots. PearsonVUE is better than PSI. Make sure to study everything thoroughly and review excessively. You don’t necessarily need 5 different study sources and years of experience to be able to pass (although both of those definitely help) and good luck to anyone that took the time to read!
AWS Certified Solutions Architect Associate So glad to pass my first AWS certification after 6 weeks of preparation.
My Preparation:
After a series of trials and errors in picking the appropriate learning content, I eventually went with the community's advice and took the course presented by the amazing u/stephanemaarek, in addition to the practice exams by Jon Bonso. At this point, I can't say anything that hasn't been said already about how helpful they are. It's a great combination of learning material, and I appreciate the instructor's work and the community's help in this sub.
Review:
Throughout the course I noted down the important points and used the course slides as a reference in the first review iteration. Before resorting to Udemy's practice exams, I purchased a practice exam from another website, which I regret (not to defame the other vendor; I would simply recommend Udemy). Udemy's practice exams were incredible, in that they made me aware of the points I hadn't understood clearly. After each exam, I would go through both the incorrect answers and the questions I had marked for review, write down the topic for review, and read the explanation thoroughly. The explanations point to the relevant AWS documentation, which is a recommended read, especially if you don't feel confident with the service. What I want to note is that I didn't get satisfying marks on the first go at the practice exams (I averaged ~70%). Throughout the 6 practice exams, I aggregated a long list of topics to review and went back to the course slides, the practice-exam explanations, and the AWS documentation for the respective services. On the second go I averaged 85%. The second attempt at the exams was important as a confidence boost, as I made sure I understood the services more clearly.
The take away:
Don’t feel disappointed if you get bad results at your practice-exams. Make sure to review the topics and give it another shot.
The AWS documentation is your friend! It is very clear and concise. My only regret is not having referenced the documentation enough after learning new services.
The exam:
I scheduled the exam using PSI. I was very confident going into the exam, but going through such an exam environment for the first time made me feel under pressure. Partly this was because I didn't feel comfortable being monitored (I was afraid of being eliminated if I moved or covered my mouth), but mostly because there was a lot at stake on my side, and I had to pass on the first go. The questions were harder than expected, but I tried to analyze them more and eliminate the invalid answers. I was very nervous and kept reviewing flagged questions up to the last minute. Luckily, I pulled through.
The take away:
The proctors are friendly; just make sure you feel comfortable in the exam place, and use the practice exams to prepare for the actual exam's environment. That includes sitting in a straight posture and not talking, whispering, or looking away.
Make sure to organize the time dedicated to each question well, and don't let yourself get distracted by being monitored like I did.
Don't skip questions you are not sure of. Select the most probable answer, then flag the question. This will make the very stressful last-minute review easier.
You have been engaged by a company to design and lead a migration to an AWS environment. The team is concerned about the capabilities of the new environment, especially when it comes to high availability and cost-effectiveness. The design calls for about 20 instances (c3.2xlarge) pulling jobs/messages from SQS. Network traffic per instance is estimated to be around 500 Mbps at the beginning and end of each job. Which configuration should you plan on deploying?
Spread the Instances over multiple AZs to minimize the traffic concentration and maximize fault-tolerance. With a multi-AZ configuration, an additional reliability point is scored as the entire Availability Zone itself is ruled out as a single point of failure. This ensures high availability. Wherever possible, use simple solutions such as spreading the load out rather than expensive high tech solutions
To save money, you quickly stored some data in one of the attached volumes of an EC2 instance and stopped it for the weekend. When you returned on Monday and restarted your instance, you discovered that your data was gone. Why might that be?
The volume was ephemeral, block-level storage. Data on an instance store volume is lost if an instance is stopped.
The most likely answer is that the EC2 instance had an instance store volume attached to it. Instance store volumes are ephemeral, meaning that data in attached instance store volumes is lost if the instance stops.
Your company likes the idea of storing files on AWS. However, low-latency service of the last few days of files is important to customer service. Which Storage Gateway configuration would you use to achieve both of these ends?
A file gateway simplifies file storage in Amazon S3, integrates to existing applications through industry-standard file system protocols, and provides a cost-effective alternative to on-premises storage. It also provides low-latency access to data through transparent local caching.
Cached volumes allow you to store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data.
You’ve been commissioned to develop a high-availability application with a stateless web tier. Identify the most cost-effective means of reaching this end.
Use an Elastic Load Balancer, a multi-AZ deployment of an Auto-Scaling group of EC2 Spot instances (primary) running in tandem with an Auto-Scaling group of EC2 On-Demand instances (secondary), and DynamoDB.
With proper scripting and scaling policies, running EC2 On-Demand instances behind the Spot instances will deliver the most cost-effective solution, because On-Demand instances will only spin up if the Spot instances are not available. DynamoDB lends itself to supporting stateless web/app installations better than RDS.
You are building a NAT Instance in an m3.medium using the AWS Linux2 distro with amazon-linux-extras installed. Which of the following do you need to set?
Ensure that "Source/Destination Checks" is disabled on the NAT instance. With a NAT instance, the most common oversight is forgetting to disable Source/Destination Checks. Note: this is a legacy topic, and while it may appear on the AWS exam, it will only do so infrequently.
You are reviewing Change Control requests and you note that there is a proposed change designed to reduce errors due to SQS Eventual Consistency by updating the “DelaySeconds” attribute. What does this mean?
When a new message is added to the SQS queue, it will be hidden from consumer instances for a fixed period.
Delay queues let you postpone the delivery of new messages to a queue for a number of seconds, for example, when your consumer application needs additional time to process messages. If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes. To set delay seconds on individual messages, rather than on an entire queue, use message timers to allow Amazon SQS to use the message timer’s DelaySeconds value instead of the delay queue’s DelaySeconds value. Reference: Amazon SQS delay queues.
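The queue-default versus per-message delay semantics can be sketched with a toy in-memory model (illustrative only; the class and method names here are invented and this is not the AWS SDK):

```python
# Toy in-memory model of an SQS delay queue (illustrative only,
# not the AWS SDK). A message sent at logical time t with delay d
# becomes visible to consumers at t + d.
class DelayQueue:
    def __init__(self, delay_seconds=0):
        self.delay_seconds = delay_seconds  # queue default (0-900 s in SQS)
        self.messages = []                  # list of (visible_at, body)

    def send(self, body, now, delay_seconds=None):
        # A per-message timer overrides the queue-level default.
        d = self.delay_seconds if delay_seconds is None else delay_seconds
        self.messages.append((now + d, body))

    def receive(self, now):
        # Only messages whose delay has elapsed are visible.
        return [body for visible_at, body in self.messages if visible_at <= now]

q = DelayQueue(delay_seconds=60)
q.send("default-delay", now=0)                    # visible at t=60
q.send("timer-override", now=0, delay_seconds=5)  # visible at t=5
```

A receive at t=10 sees only the message whose 5-second timer has elapsed; the queue-default message stays invisible until t=60.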
Amazon SQS keeps track of all tasks and events in an application: True or False?
False. Amazon SWF (not Amazon SQS) keeps track of all tasks and events in an application. Amazon SQS requires you to implement your own application-level tracking, especially if your application uses multiple queues. Amazon SWF FAQs.
You work for a company, and you need to protect your data stored on S3 from accidental deletion. Which actions might you take to achieve this?
Enable versioning on the bucket and protect the objects by configuring MFA-protected API access.
Your Security Manager has hired a security contractor to audit your network and firewall configurations. The consultant doesn’t have access to an AWS account. You need to provide the required access for the auditing tasks, and answer a question about login details for the official AWS firewall appliance. Which actions might you do?
AWS has removed the Firewall appliance from the hub of the network and implemented the firewall functionality as stateful Security Groups, and stateless subnet NACLs. This is not a new concept in networking, but rarely implemented at this scale.
Create an IAM user for the auditor and explain that the firewall functionality is implemented as stateful Security Groups, and stateless subnet NACLs
Amazon ElastiCache can fulfill a number of roles. Which operations can be implemented using ElastiCache for Redis?
Amazon ElastiCache offers a fully managed Memcached and Redis service. Although the name only suggests caching functionality, the Redis service in particular can offer a number of operations such as Pub/Sub, Sorted Sets and an In-Memory Data Store. However, Amazon ElastiCache for Redis doesn’t support multithreaded architectures.
You have been asked to deploy an application on a small number of EC2 instances. The application must be placed across multiple Availability Zones and should also minimize the chance of underlying hardware failure. Which actions would provide this solution?
Deploy the EC2 servers in a Spread Placement Group.
Spread Placement Groups are recommended for applications that have a small number of critical instances which need to be kept separate from each other. Launching instances in a Spread Placement Group reduces the risk of simultaneous failures that might occur when instances share the same underlying hardware. Spread Placement Groups provide access to distinct hardware, and are therefore suitable for mixing instance types or launching instances over time. In this case, deploying the EC2 instances in a Spread Placement Group is the only correct option.
You manage a NodeJS messaging application that lives on a cluster of EC2 instances. Your website occasionally experiences brief, strong, and entirely unpredictable spikes in traffic that overwhelm your EC2 instances’ resources and freeze the application. As a result, you’re losing recently submitted messages from end-users. You use Auto Scaling to deploy additional resources to handle the load during spikes, but the new instances don’t spin-up fast enough to prevent the existing application servers from freezing. Can you provide the most cost-effective solution in preventing the loss of recently submitted messages?
Use Amazon SQS to decouple the application components and keep the messages in queue until the extra Auto-Scaling instances are available.
Neither increasing the size of your EC2 instances nor maintaining additional EC2 instances is cost-effective, and pre-warming an ELB signifies that these spikes in traffic are predictable. The cost-effective solution to the unpredictable spike in traffic is to use SQS to decouple the application components.
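The decoupling idea can be sketched with Python's standard-library queue as a stand-in for SQS (the function names are invented for illustration): the web tier enqueues during the spike, and workers drain the buffer once capacity is available, so no message is lost.

```python
from queue import Queue

# Stand-in for SQS (illustrative): a buffer that absorbs a traffic
# spike so no message is lost while extra worker capacity spins up.
buffer = Queue()

def handle_spike(messages):
    # Web tier: enqueue immediately instead of processing inline.
    for m in messages:
        buffer.put(m)

def drain(batch_size):
    # Worker tier (e.g. newly launched Auto Scaling instances):
    # pull messages once capacity is available.
    out = []
    while not buffer.empty() and len(out) < batch_size:
        out.append(buffer.get())
    return out

handle_spike(["msg1", "msg2", "msg3"])
```

Even if the workers lag behind the spike, the messages wait in the queue instead of being dropped by an overwhelmed instance.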
True statements on S3 URL styles
Virtual-host-style URLs (such as: https://bucket-name.s3.Region.amazonaws.com/key name) are supported by AWS.
Path-Style URLs (such as https://s3.Region.amazonaws.com/bucket-name/key name) are supported by AWS.
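The two URL styles can be expressed as simple builders (a sketch; the bucket, region, and key values are placeholders):

```python
# Builders for the two S3 URL styles; bucket, region, and key are
# placeholders. AWS favors virtual-hosted-style URLs going forward.
def virtual_hosted_url(bucket, region, key):
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

def path_style_url(bucket, region, key):
    return f"https://s3.{region}.amazonaws.com/{bucket}/{key}"
```

For example, `virtual_hosted_url("my-bucket", "us-east-1", "photos/cat.jpg")` yields `https://my-bucket.s3.us-east-1.amazonaws.com/photos/cat.jpg`.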
You run an automobile reselling company that has a popular online store on AWS. The application sits behind an Auto Scaling group and requires new instances of the Auto Scaling group to identify their public and private IP addresses. How can you achieve this?
Using a Curl or Get Command to get the latest meta-data from http://169.254.169.254/latest/meta-data/
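For illustration, the metadata endpoint's fixed link-local address and the relevant paths look like this (the helper function is hypothetical; actually fetching these URLs only works from inside an EC2 instance):

```python
# The instance metadata service listens on a fixed link-local
# address. This helper (hypothetical) only builds the URLs;
# fetching them works solely from inside an EC2 instance.
METADATA_BASE = "http://169.254.169.254/latest/meta-data/"

def metadata_url(path):
    return METADATA_BASE + path

public_ip_url = metadata_url("public-ipv4")   # public IP address
private_ip_url = metadata_url("local-ipv4")   # private IP address
```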
What data formats are used to create CloudFormation templates?
JSON and YAML
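As a sketch, here is a minimal CloudFormation template in JSON, parsed with the standard library (the resource name is a placeholder; the same structure can equally be written in YAML):

```python
import json

# Minimal CloudFormation template in JSON (the resource name is a
# placeholder); the same structure can equally be written in YAML.
template = json.loads("""
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "MyBucket": {
      "Type": "AWS::S3::Bucket"
    }
  }
}
""")
```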
You have launched a NAT instance into a public subnet, and you have configured all relevant security groups, network ACLs, and routing policies to allow this NAT to function. However, EC2 instances in the private subnet still cannot communicate out to the internet. What troubleshooting steps should you take to resolve this issue?
Disable the Source/Destination Check on your NAT instance.
A NAT instance sends and retrieves traffic on behalf of instances in a private subnet. As a result, source/destination checks on the NAT instance must be disabled to allow the sending and receiving traffic for the private instances. Route 53 resolves DNS names, so it would not help here. Traffic that is originating from your NAT instance will not pass through an ELB. Instead, it is sent directly from the public IP address of the NAT Instance out to the Internet.
You need a storage service that delivers the lowest-latency access to data for a database running on a single EC2 instance. Which of the following AWS storage services is suitable for this use case?
Amazon EBS is a block level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.
What are DynamoDB use cases?
Use cases include storing JSON data, BLOB data and storing web session data.
You are reviewing Change Control requests, and you note that there is a change designed to reduce costs by updating the Amazon SQS “WaitTimeSeconds” attribute. What does this mean?
When the consumer instance polls for new work, the SQS service will allow it to wait a certain time for one or more messages to be available before closing the connection.
Poor timing of SQS processes can significantly impact the cost effectiveness of the solution.
Long polling helps reduce the cost of using Amazon SQS by eliminating the number of empty responses (when there are no messages available for a ReceiveMessage request) and false empty responses (when messages are available but aren’t included in a response).
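Rough, made-up numbers illustrate the saving: with short polling an idle consumer still issues one billed ReceiveMessage per poll cycle, while long polling holds each connection open for up to WaitTimeSeconds:

```python
# Made-up numbers showing why long polling is cheaper for an idle
# queue: every poll cycle is one billed ReceiveMessage request.
def requests_per_hour(poll_interval_s, wait_time_s):
    # Long polling holds the connection open for up to wait_time_s,
    # so each empty cycle takes at least that long.
    cycle = max(poll_interval_s, wait_time_s) or 1
    return 3600 // cycle

short_polling = requests_per_hour(poll_interval_s=1, wait_time_s=0)   # 3600 requests/hour
long_polling = requests_per_hour(poll_interval_s=1, wait_time_s=20)   # 180 requests/hour
```

With a 20-second WaitTimeSeconds, the idle consumer makes 20x fewer billed requests in this toy model.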
You have been asked to decouple an application by utilizing SQS. The application dictates that messages on the queue CAN be delivered more than once, but must be delivered in the order they have arrived while reducing the number of empty responses. Which option is most suitable?
Configure a FIFO SQS queue and enable long polling.
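The per-message-group ordering that a FIFO queue guarantees can be sketched with an in-memory model (illustrative only; `send`/`receive` are invented names, not the SQS API):

```python
from collections import defaultdict, deque

# Toy model of FIFO-queue ordering per message group (illustrative;
# not the SQS API). Within one MessageGroupId, arrival order is
# delivery order.
groups = defaultdict(deque)

def send(group_id, body):
    groups[group_id].append(body)

def receive(group_id):
    return groups[group_id].popleft() if groups[group_id] else None

send("orders", "order-1")
send("orders", "order-2")
send("billing", "invoice-1")
```

Messages in the "orders" group come back in the exact order they were sent, independently of the "billing" group.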
You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP. However, when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect?
Immediately.
You need to restrict access to an S3 bucket. Which methods can you use to do so?
There are two ways of securing S3: using Access Control Lists (Permissions) or using bucket policies.
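As a sketch, a minimal bucket policy (the bucket name is a placeholder) that denies object deletion for all principals looks like this:

```python
import json

# A minimal S3 bucket policy (the bucket name is a placeholder)
# that denies object deletion for all principals.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
""")
```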
You are reviewing Change Control requests, and you note that there is a change designed to reduce wasted CPU cycles by increasing the value of your Amazon SQS “VisibilityTimeout” attribute. What does this mean?
When a consumer instance retrieves a message, that message will be hidden from other consumer instances for a fixed period.
Poor timing of SQS processes can significantly impact the cost effectiveness of the solution. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.
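The visibility-timeout behaviour can be sketched with a toy in-memory model using a logical clock (illustrative only; not the AWS SDK):

```python
# Toy model of the SQS visibility timeout with a logical clock
# (illustrative only, not the AWS SDK). A received message stays
# hidden from other consumers until the timeout elapses or the
# message is deleted.
class VisibilityQueue:
    def __init__(self, visibility_timeout=30):
        self.visibility_timeout = visibility_timeout
        self.messages = {}  # body -> logical time it becomes visible again

    def send(self, body):
        self.messages[body] = 0  # visible immediately

    def receive(self, now):
        for body, visible_at in self.messages.items():
            if visible_at <= now:
                # Hide the message from other consumers.
                self.messages[body] = now + self.visibility_timeout
                return body
        return None

    def delete(self, body):
        self.messages.pop(body, None)

q = VisibilityQueue(visibility_timeout=30)
q.send("job-1")
```

If the consumer fails to delete the message before the timeout elapses, the message becomes visible again and is redelivered; deleting it on success prevents reprocessing.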
With EBS, I can ____.
Create an encrypted volume from a snapshot of another encrypted volume.
Create an encrypted snapshot from an unencrypted snapshot by creating an encrypted copy of the unencrypted snapshot.
You can create an encrypted volume from a snapshot of another encrypted volume.
Although there is no direct way to encrypt an existing unencrypted volume or snapshot, you can encrypt the data by creating an encrypted copy, either as a new volume or as a new snapshot. Reference: Encrypting unencrypted resources.
Following advice from your consultant, you have configured your VPC to use dedicated hosting tenancy. Your VPC has an Amazon EC2 Auto Scaling designed to launch or terminate Amazon EC2 instances on a regular basis, in order to meet workload demands. A subsequent change to your application has rendered the performance gains from dedicated tenancy superfluous, and you would now like to recoup some of these greater costs. How do you revert your instance tenancy attribute of a VPC to default for new launched EC2 instances?
Modify the instance tenancy attribute of your VPC from dedicated to default using the AWS CLI, an AWS SDK, or the Amazon EC2 API.
You can change the instance tenancy attribute of a VPC from dedicated to default. Modifying the instance tenancy of the VPC does not affect the tenancy of any existing instances in the VPC. The next time you launch an instance in the VPC, it has a tenancy of default, unless you specify otherwise during launch. You can modify the instance tenancy attribute of a VPC using the AWS CLI, an AWS SDK, or the Amazon EC2 API only. Reference: Change the tenancy of a VPC.
Amazon DynamoDB is a fast, fully managed NoSQL database service. DynamoDB makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic.
DynamoDB is used to create tables that store and retrieve any amount of data.
DynamoDB uses SSDs to store data.
Provides automatic and synchronous data replication across multiple Availability Zones.
Maximum item size is 400 KB.
Supports cross-region replication.
DynamoDB Core Concepts:
The fundamental concepts around DynamoDB are:
Tables – collections of data.
Items – the individual entries in a table.
Attributes – the properties associated with each item.
Primary Keys.
Secondary Indexes.
DynamoDB streams.
Secondary Indexes:
A secondary index is a data structure that contains a subset of attributes from a table, along with an alternate key that supports Query operations.
Every secondary index is associated with exactly one table, from which it obtains its data. This is called the base table of the index.
When you create an index, you define an alternate key for it (a partition key and, optionally, a sort key). DynamoDB copies the projected attributes into the index, including the primary key attributes from the base table.
After that, you can run Query or Scan operations against the index in the same way you would against a table.
Every secondary index is automatically maintained by DynamoDB.
DynamoDB Indexes: DynamoDB supports two indexes:
Local Secondary Index (LSI) – the index has the same partition key as the base table but a different sort key.
Global Secondary Index (GSI) – the index has a partition key and sort key that can be different from those of the base table.
When creating more than one table with secondary indexes, you must do so sequentially: create the tables one after another. When you create the first table, wait for it to become active.
Once that table is active, create the next table and wait for it to become active, and so on. If you try to create multiple such tables concurrently, DynamoDB returns a LimitExceededException.
You must specify the following for every secondary index:
Type – specify whether you are creating a Global Secondary Index or a Local Secondary Index.
Name – specify a name for the index. The naming rules are the same as those for the table it belongs to. You can use the same index name across different base tables.
Key – the key schema for the index: every key attribute in the index must be a top-level attribute of type string, number, or binary. Other data types, including documents and sets, are not allowed. Other requirements depend on the type of index you choose.
For a GSI – the partition key can be any scalar attribute of the base table.
The sort key is optional, and it too can be any scalar attribute of the base table.
For an LSI – the partition key must be the same as the base table's partition key.
The sort key must be a non-key attribute of the table.
Additional attributes – attributes to project into the index, in addition to the table's key attributes (which are projected automatically). Projected attributes can be of any data type, including scalars, documents, and sets.
Throughput – the throughput settings for the index, if required:
GSI – specify read and write capacity unit settings. These provisioned throughput settings are independent of the base table's settings.
LSI – you do not need to specify read and write capacity unit settings. Any read and write operations on a local secondary index are drawn from the provisioned throughput settings of the base table.
You can create up to 5 global and 5 local secondary indexes per table. When you delete a table, all of its indexes are also deleted.
You can use the Query or Scan operation to fetch data from a table. A Query returns results sorted by the sort key, in ascending or descending order.
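The LSI/GSI key rules above can be summarized in a small validator. This is a sketch of the rules as stated, using hypothetical attribute names; real validation is performed by DynamoDB at table-creation time:

```python
def validate_index(index_type, base_partition_key, base_sort_key,
                   index_partition_key, index_sort_key):
    """Check an index definition against the LSI/GSI key rules described above.
    Conceptual sketch only; DynamoDB enforces these rules itself."""
    if index_type == "LSI":
        # LSI: must share the base table's partition key, and the sort key
        # must be a non-key attribute of the table.
        if index_partition_key != base_partition_key:
            return False, "LSI partition key must match the base table's"
        if index_sort_key is None or index_sort_key in (base_partition_key, base_sort_key):
            return False, "LSI sort key must be a non-key table attribute"
        return True, "ok"
    if index_type == "GSI":
        # GSI: any scalar attribute may serve as partition key; sort key optional.
        if index_partition_key is None:
            return False, "GSI requires a partition key"
        return True, "ok"
    return False, "unknown index type"

# Hypothetical base table keyed on (user_id, order_date):
print(validate_index("LSI", "user_id", "order_date", "user_id", "order_total"))
print(validate_index("LSI", "user_id", "order_date", "region", "order_total"))
print(validate_index("GSI", "user_id", "order_date", "region", None))
```

The second call fails because an LSI cannot change the partition key; to query by a different partition key you need a GSI, as the third call shows.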
Network Load Balancer Overview: A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration. When you enable an Availability Zone for the load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. By default, each load balancer node distributes traffic across the registered targets in its Availability Zone only. If you enable cross-zone load balancing, each load balancer node distributes traffic across the registered targets in all enabled Availability Zones. It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part. The Network Load Balancer is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:
Static IP Addresses – Each Network Load Balancer provides a single IP address for each Availability Zone in its purview. If you have targets in us-west-2a and other targets in us-west-2c, NLB will create and manage two IP addresses (one per AZ); connections to that IP address will spread traffic across the instances in all the VPC subnets in the AZ. You can also specify an existing Elastic IP for each AZ for even greater control. With full control over your IP addresses, a Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.
Zonality – The IP-per-AZ feature reduces latency with improved performance, improves availability through isolation and fault tolerance, and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single AZ while still providing automatic failover should those targets become unavailable.
Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.
Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.
Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.
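The zonal vs. cross-zone behavior described above can be illustrated with a small simulation. The AZ and target names are hypothetical, and round-robin is an assumption for illustration; this is not an AWS API:

```python
from itertools import cycle

def distribute(requests_per_node, targets_by_az, cross_zone):
    """Simulate how each load balancer node spreads requests over targets.
    With cross_zone=False, a node only uses targets in its own AZ;
    with cross_zone=True, it uses targets in all enabled AZs."""
    counts = {t: 0 for targets in targets_by_az.values() for t in targets}
    all_targets = [t for targets in targets_by_az.values() for t in targets]
    for az, local_targets in targets_by_az.items():
        pool = all_targets if cross_zone else local_targets
        rr = cycle(pool)  # simple round-robin per load balancer node
        for _ in range(requests_per_node):
            counts[next(rr)] += 1
    return counts

targets = {"us-west-2a": ["t1"], "us-west-2c": ["t2", "t3"]}
print(distribute(6, targets, cross_zone=False))  # t1 carries all of node A's traffic
print(distribute(6, targets, cross_zone=True))   # traffic evens out across t1, t2, t3
```

Note how, without cross-zone load balancing, the lone target in us-west-2a receives twice the per-target load of the targets in us-west-2c; cross-zone distribution evens this out.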
There are two types of VPC endpoints: (1) interface endpoints and (2) gateway endpoints. Interface endpoints enable connectivity to services over AWS PrivateLink.
AWS uses key pairs to encrypt and decrypt login information.
A sender uses a public key to encrypt data, which the receiver then decrypts using the corresponding private key. These two keys, public and private, are known as a key pair.
You need a key pair to be able to connect to your instances. The way this works on Linux and Windows instances is different.
First, when you launch a new instance, you assign a key pair to it. Then, when you log in to it, you use the private key.
The difference between Linux and Windows instances is that Linux instances do not have a password already set and you must use the key pair to log in to Linux instances. On the other hand, on Windows instances, you need the key pair to decrypt the administrator password. Using the decrypted password, you can use RDP and then connect to your Windows instance.
Amazon EC2 stores only the public key, and you can either generate it inside Amazon EC2 or you can import it. Since the private key is not stored by Amazon, it’s advisable to store it in a secure place as anyone who has this private key can log in on your behalf.
AWS PrivateLink provides private connectivity between VPCs and services hosted on AWS or on-premises, securely on the Amazon network. By providing a private endpoint to access your services, AWS PrivateLink ensures your traffic is not exposed to the public internet.
There are two types of security groups, based on where you launch your instance. When you launch your instance on EC2-Classic, you have to specify an EC2-Classic security group. When you launch an instance in a VPC, you have to specify an EC2-VPC security group. Now that we have a clear understanding of what we are comparing, let's see their main differences:
I think this is historical in nature. S3 and DynamoDB were the first services to support VPC endpoints. The release of those VPC endpoint features pre-dates two important services that subsequently enabled interface endpoints: Network Load Balancer and AWS PrivateLink.
Take advantage of execution context reuse to improve the performance of your function. Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations processed by the same instance of your function can reuse these resources, which saves execution time. To avoid potential data leaks across invocations, don't use the execution context to store user data, events, or other information with security implications. If your function relies on a mutable state that can't be stored in memory within the handler, consider creating a separate function or separate versions of a function for each user.
Use AWS Lambda Environment Variables to pass operational parameters to your function. For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable.
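Both practices can be sketched in a Lambda-style handler. The `make_client` helper and the `UPLOAD_BUCKET` variable name are placeholders; in a real function the module-level object would typically be a boto3 client, and the environment variable would be set in the function's configuration:

```python
import os

# Lambda sets environment variables before the module is imported;
# setdefault here just makes the sketch self-contained.
os.environ.setdefault("UPLOAD_BUCKET", "my-app-uploads")

INIT_COUNT = 0

def make_client():
    """Stand-in for an expensive client/connection setup (e.g. a boto3 client)."""
    global INIT_COUNT
    INIT_COUNT += 1
    return object()

# Module scope: initialized once per execution environment, then reused
# across invocations handled by that same environment.
CLIENT = make_client()
BUCKET = os.environ["UPLOAD_BUCKET"]  # operational parameter, not hard-coded

def handler(event, context):
    # Reuses CLIENT and BUCKET instead of recreating them on every call.
    return {"bucket": BUCKET, "client_id": id(CLIENT)}

first = handler({}, None)
second = handler({}, None)
# The client was built once; both invocations saw the same instance.
```

Because `CLIENT` lives at module scope, warm invocations skip the setup cost entirely; only a cold start pays for it.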
You can use VPC Flow Logs. The steps would be the following:
Enable VPC Flow Logs for the VPC your EC2 instance lives in. You can do this from the VPC console
Having VPC Flow Logs enabled will create a CloudWatch Logs log group
Find the Elastic Network Interface assigned to your EC2 instance. Also, get the private IP of your EC2 instance. You can do this from the EC2 console.
Find the CloudWatch Logs log stream for that ENI.
Search the log stream for records where your Windows instance's IP is the destination IP, and make sure the port is the one you're looking for. You'll see records that tell you whether someone has been connecting to your EC2 instance: for example, bytes transferred, action=ACCEPT, log-status=OK. You will also see the source IP that connected to your instance.
I recommend using CloudWatch Logs Metric Filters, so you don’t have to do all this manually. Metric Filters will find the patterns I described in your CloudWatch Logs entries and will publish a CloudWatch metric. Then you can trigger an alarm that notifies you when someone logs in to your instance.
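The manual record search in the steps above can be sketched as a small parser over default-format (version 2) flow log records. The sample records below are fabricated (documentation-style IPs and a fake account ID):

```python
# Default VPC Flow Log (version 2) fields, in order.
FIELDS = ("version account-id interface-id srcaddr dstaddr srcport dstport "
          "protocol packets bytes start end action log-status").split()

def parse_record(line):
    """Split one space-delimited flow log record into a field dict."""
    return dict(zip(FIELDS, line.split()))

def accepted_connections(records, dst_ip, dst_port):
    """Return (srcaddr, bytes) for accepted flows hitting dst_ip:dst_port."""
    hits = []
    for line in records:
        r = parse_record(line)
        if (r["dstaddr"] == dst_ip and r["dstport"] == str(dst_port)
                and r["action"] == "ACCEPT" and r["log-status"] == "OK"):
            hits.append((r["srcaddr"], int(r["bytes"])))
    return hits

sample = [
    "2 123456789012 eni-0a1b2c3d 203.0.113.7 10.0.1.25 49152 3389 6 20 4249 1418530010 1418530070 ACCEPT OK",
    "2 123456789012 eni-0a1b2c3d 198.51.100.9 10.0.1.25 49153 3389 6 5 320 1418530010 1418530070 REJECT OK",
]
print(accepted_connections(sample, "10.0.1.25", 3389))
```

Port 3389 here is the RDP port, matching the Windows-instance scenario; a CloudWatch Logs metric filter automates exactly this kind of pattern match.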
Here are more details from the AWS Official Blog and the AWS documentation for VPC Flow Logs records:
Also, there are third-party tools that simplify all these steps for you and give you very nice visibility and alerts into what's happening in your AWS network resources. I've tried Observable Networks and it's great.
Typically outbound traffic is not blocked by NAT on any port, so you would not need to explicitly allow those, since they should already be allowed. Your firewall generally would have a rule to allow return traffic that was initiated outbound from inside your office.
Packet sniffing by other tenants. It is not possible for a virtual instance running in promiscuous mode to receive or “sniff” traffic that is intended for a different virtual instance. While you can place your interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer located on the same physical host cannot listen to each other’s traffic. Attacks such as ARP cache poisoning do not work within Amazon EC2 and Amazon VPC. While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another’s data, as a standard practice you should encrypt sensitive traffic.
But as you can see, they still recommend that you should maintain encryption inside your network. We have taken the approach of terminating SSL at the external interface of the ELB, but then initiating SSL from the ELB to our back-end servers, and even further, to our (RDS) databases. It’s probably belt-and-suspenders, but in my industry it’s needed. Heck, we have some interfaces that require HTTPS and a VPN.
What’s the use case for S3 Pre-signed URL for uploading objects?
I get the use case of allowing access to private/premium content in S3 using a pre-signed URL that can be used to view or download a file until the expiration time set. But what's a real-life scenario in which a web app would need to generate a URL giving users temporary credentials to upload an object? Can't the same be done by using the SDK and exposing a REST API at the backend?
I'm asking this since I want to build a POC for this functionality in Java, but I'm struggling to find a real-world use case for it.
Pre-signed URLs are used to provide short-term access to a private object in your S3 bucket. They work by appending an AWS access key, expiration time, and SigV4 signature as query parameters to the S3 object URL. There are two common use cases when you may want to use them:
Simple, occasional sharing of private files.
Frequent, programmatic access to view or upload a file in an application.
Imagine you may want to share a confidential presentation with a business partner, or you want to allow a friend to download a video file you’re storing in your S3 bucket. In both situations, you could generate a URL, and share it to allow the recipient short-term access.
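The signing mechanism itself (access key, expiry, and signature appended as query parameters) can be illustrated with a toy HMAC scheme. This is a simplified stand-in for SigV4, not the real algorithm; the secret key, bucket name, and parameter names are made up:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"demo-secret-key"  # stand-in for an AWS secret access key

def presign(url, expires_in, now=None):
    """Append an expiry timestamp and HMAC signature as query parameters."""
    now = int(time.time()) if now is None else now
    expires = now + expires_in
    payload = f"{url}?expires={expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{url}?{urlencode({'expires': expires, 'signature': sig})}"

def verify(signed_url, now=None):
    """Recompute the signature and check the expiry; both must hold."""
    now = int(time.time()) if now is None else now
    url, query = signed_url.split("?", 1)
    params = dict(p.split("=", 1) for p in query.split("&"))
    expires = int(params["expires"])
    payload = f"{url}?expires={expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["signature"]) and now < expires

link = presign("https://example-bucket.s3.amazonaws.com/report.pdf", 3600, now=1000)
print(verify(link, now=2000))  # within the hour: True
print(verify(link, now=5000))  # past expiry: False
```

Because the URL is part of the signed payload, tampering with the object key invalidates the signature, which is the same property that makes real pre-signed URLs safe to hand to untrusted clients for uploads.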
There are a couple of different approaches for generating these URLs in an ad-hoc, one-off fashion, including:
First time going there; I'd like to know in advance the dos and don'ts from people with previous experience.
Pre-plan as much as you can, but don't sweat it in the moment if it doesn't work out. The experience and networking are as valuable as, if not more valuable than, the sessions.
Deliberately know where your exits are. Most of Vegas is designed to keep you inside — when you’re burned out from the crowds and knowledge deluge is not the time to be trying to figure out how the hell you get out of wherever you are.
Study maps of how the properties interconnect before you go. You can get a lot of places without ever going outside. Be able to make a deliberate decision of what route to take. Same thing for the outdoor escalators and pedestrian bridges — they’re not necessarily intuitive, but if you know where they go, they’re a life saver running between events.
Drink more water and eat less food than you think you need to. Your mind and body will thank you.
Be prepared for all of the other Vegasisms if you ever plan on leaving the con boundaries (like to walk down the street to another venue) — you will likely be propositioned by mostly naked showgirls, see overt advertisement for or even be directly propositioned by prostitutes and their business associates, witness some pretty awful homelessness, and be “accidentally bumped into” pretty regularly by amateur pickpockets.
Switching gears between “work/AWS” and “surviving Vegas” multiple times a day can be seriously mentally taxing. I haven’t found any way to prevent that, just know it’s going to happen.
Take a burner laptop and not your production access work machine. You don’t want to accidentally crater your production environment because you gave the wrong cred as part of a lab.
There are helpful staffers everywhere around the con — don’t be afraid to leverage them — they tend to be much better informed than the ushers/directors/crowd wranglers at other cons.
Plan on getting Covid or at very least Con Crud. If you’re not used to being around a million sick people in the desert, it’s going to take its toll on your body one way or another.
Don’t set morning alarms. If your body needs to sleep in, that was more important than whatever morning session you wanted to catch. Watch the recording later on your own time and enjoy your mental clarity for the rest of the day.
Wander the expo floor when you’re bored to get a big picture of the ecosystem, but don’t expect anything too deep. The partner booths are all fun and games and don’t necessarily align with reality. Hang out at the “Ask AWS” booths — people ask some fun interesting questions and AWS TAMs/SAs and the other folks staffing the booth tend not to suck.
Listen to The Killers / Brandon Flowers when walking around outside — he grew up in Las Vegas and a lot of his music has subtle (and not so subtle) hints on how to survive and thrive there.
I’m sure there’s more, but that’s what I can think of off the top of my head.
This is more Vegas-advice than pure Re:Invent advice, but if you’re going to be in the city for more than 3 days try to either:
Find a way off/out of the strip for an afternoon. A hike out at Red Rocks is a great option.
Get a pass to the spa at your hotel so that you can escape the casino/event/hotel room trap. It’s amazing how shitty you feel without realizing it until you do a quick workout and steam/sauna/ice bath routine.
I’ve also seen a whole variety of issues that people run into during hands-on workshops where for one reason or another their corporate laptop/email/security won’t let them sign up and log into a new AWS account. Make sure you don’t have any restrictions there, as that’ll be a big hassle. The workshops have been some of the best and most memorable sessions for me.
More tips:
Sign up for all the parties! Try to get your sessions booked too, it’s a pain to be on waitlists. Don’t do one session at Venetian followed by a session at MGM. You’ll never make it in time. Try to group your sessions by location/day.
We catalog all the parties, keep a list of the latest (and older) guides, the Expo floor plan, drawings, etc. On Twitter as well @reInventParties
Hidden gem if you’re into that sort of thing, the Pinball Museum is a great place to hang for a bit with some friends.
Bring sunscreen, a water bottle you like, really comfortable shoes, and lip balm.
Get at least one cert if you don’t already have one. The Cert lounge is a wonderful place to chill and the swag there is top tier.
Check the partner parties, they have good food and good swag.
Register with an alt email address (something like yourname+reinvent@domain.com) so you can set an email rule for all the spam.
If your workplace has an SA, coordinate with them for schedules and info. They will also curate calendars for you and get you insider info if you want them to.
Prioritize workshops and chalk talks. Partner talks are long advertisements, take them with a grain of salt.
Even if you are an introvert, network. There are folks there with valuable insights and skills. You are one of those.
Don’t underestimate the distance between venues. Getting from MGM to Venetian can take forever.
Bring very comfortable walking shoes and be prepared to spend a LOT of time on your feet and walking 25-30,000 steps a day. All of the other comments and ideas are awesome. The most important thing to remember, especially for your very first year, is to have fun. Don’t just sit in breakouts all day and then go back to your hotel. Go to the after dark events. Don’t get too hung up on if you don’t make it to all the breakout sessions you want to go to. Let your first year be a learning curve on how to experience and enjoy re:Invent. It is the most epic week in Vegas you will ever experience. Maybe we will bump into each other. Love meeting new people.
Join Peter DeSantis, Senior Vice President, Utility Computing and Apps, to learn how AWS has optimized its cloud infrastructure to run some of the world’s most demanding workloads and give your business a competitive edge.
Join Dr. Werner Vogels, CTO, Amazon.com, as he goes behind the scenes to show how Amazon is solving today’s hardest technology problems. Based on his experience working with some of the largest and most successful applications in the world, Dr. Vogels shares his insights on building truly resilient architectures and what that means for the future of software development.
Applied artificial intelligence (AI) solutions, such as contact center intelligence (CCI), intelligent document processing (IDP), and media intelligence (MI), have had a significant market and business impact for customers, partners, and AWS. This session details how partners can collaborate with AWS to differentiate their products and solutions with AI and machine learning (ML). It also shares partner and customer success stories and discusses opportunities to help customers who are looking for turnkey solutions.
An implication of applying the microservices architectural style is that a lot of communication between components is done over the network. In order to achieve the full capabilities of microservices, this communication needs to happen in a loosely coupled manner. In this session, explore some fundamental application integration patterns based on messaging and connect them to real-world use cases in a microservices scenario. Also, learn some of the benefits that asynchronous messaging can have over REST APIs for communication between microservices.
Avoiding unexpected user behavior and maintaining reliable performance is crucial. This session is for application developers who want to learn how to maintain application availability and performance to improve the end user experience. Also, discover the latest on Amazon CloudWatch.
Amazon is transforming customer experiences through the practical application of AI and machine learning (ML) at scale. This session is for senior business and technology decision-makers who want to understand Amazon.com’s approach to launching and scaling ML-enabled innovations in its core business operations and toward new customer opportunities. See specific examples from various Amazon businesses to learn how Amazon applies AI/ML to shape its customer experience while improving efficiency, increasing speed, and lowering cost. Also hear the lessons the Amazon teams have learned from the cultural, process, and technical aspects of building and scaling ML capabilities across the organization.
Data has become a strategic asset. Customers of all sizes are moving data to the cloud to gain operational efficiencies and fuel innovation. This session details how partners can create repeatable and scalable solutions to help their customers derive value from their data, win new customers, and grow their business. It also discusses how to drive partner-led data migrations using AWS services, tools, resources, and programs, such as the AWS Migration Acceleration Program (MAP). Also, this session shares customer success stories from partners who have used MAP and other resources to help customers migrate to AWS and improve business outcomes.
User-facing web and mobile applications are the primary touchpoint between organizations and their customers. To meet the ever-rising bar for customer experience, developers must deliver high-quality apps with both foundational and differentiating features. AWS Amplify helps front-end web and mobile developers build faster front to back. In this session, review Amplify’s core capabilities like authentication, data, and file storage and explore new capabilities, such as Amplify Geo and extensibility features for easier app customization with AWS services and better integration with existing deployment pipelines. Also learn how customers have been successful using Amplify to innovate in their businesses.
AWS Amplify is a set of tools and services that makes it quick and easy for front-end web and mobile developers to build full-stack applications on AWS.
Amplify DataStore provides a programming model for leveraging shared and distributed data without writing additional code for offline and online scenarios, which makes working with distributed, cross-user data just as simple as working with local-only data
AWS AppSync is a managed GraphQL API service
Amazon DynamoDB is a serverless key-value and document database that’s highly scalable
While DevOps has not changed much, the industry has fundamentally transformed over the last decade. Monolithic architectures have evolved into microservices. Containers and serverless have become the default. Applications are distributed on cloud infrastructure across the globe. The technical environment and tooling ecosystem has changed radically from the original conditions in which DevOps was created. So, what’s next? In this session, learn about the next phase of DevOps: a distributed model that emphasizes swift development, observable systems, accountable engineers, and resilient applications.
Innovation Day
Innovation Day is a virtual event that brings together organizations and thought leaders from around the world to share how cloud technology has helped them capture new business opportunities, grow revenue, and solve the big problems facing us today, and in the future. Featured topics include building the first human basecamp on the moon, the next generation F1 car, manufacturing in space, the Climate Pledge from Amazon, and building the city of the future at the foot of Mount Fuji.
Latest AWS Products and Services announced at re:invent 2021
Graviton 3: AWS today announced the newest generation of its Arm-based Graviton processors: the Graviton3. The company promises that the new chip will be 25 percent faster than the last-generation chips, with 2x faster floating-point performance and a 3x speedup for machine-learning workloads. AWS also promises that the new chips will use 60 percent less power.
Trn1: New EC2 instances for training deep learning models for various applications
AWS Mainframe Modernization: Cut mainframe migration time by 2/3
AWS Private 5G: Deploy and manage your own private 5G network (Set up and scale a private mobile network in days)
Transaction for Governed tables in Lake Formation: Automatically manages conflicts and error
Serverless and on-demand analytics for Redshift, EMR, MSK, and Kinesis
Amazon Sagemaker Canvas: Create ML predictions without any ML experience or writing any code
AWS IoT TwinMaker: A real-time service that makes it easy to create and use digital twins of real-world systems.
Amazon DevOps Guru for RDS: Automatically detect, diagnose, and resolve hard-to-find database issues.
Amazon DynamoDB Standard-Infrequent Access table class: Reduce costs by up to 60%. Maintain the same performance, durability, scaling, and availability as Standard.
AWS Database Migration Service Fleet Advisor: Accelerate database migration with automated inventory and migration planning. This service makes it easier and faster to get your data to the cloud and match it with the correct database service. “DMS Fleet Advisor automatically builds an inventory of your on-prem database and analytics services by streaming data from on prem to Amazon S3. From there, we take it over. We analyze [the data] to match it with the appropriate AWS data store and then provide customized migration plans.”
Amazon Sagemaker Ground Truth Plus: Deliver high-quality training datasets fast, and reduce data labeling cost.
Amazon SageMaker Training Compiler: Accelerate model training by 50%
Amazon SageMaker Inference Recommender: Reduce time to deploy from weeks to hours
Amazon SageMaker Serverless Inference: Lower cost of ownership with pay-per-use pricing
Amazon Kendra Experience Builder: Deploy Intelligent search applications powered by Amazon Kendra with a few clicks.
Amazon Lex Automated Chatbot Designer: Drastically Simplifies bot design with advanced natural language understanding
Amazon SageMaker Studio Lab: A no cost, no setup access to powerful machine learning technology
AWS Cloud WAN: Build, manage and monitor global wide area networks
AWS Amplify Studio: Visually build complete, feature-rich apps in hours instead of weeks, with full control over the application code.
AWS Carbon Footprint Tool: Don’t forget to turn off the lights.
AWS Well-Architected Sustainability Pillar: Learn, measure, and improve your workloads using environmental best practices in cloud computing
AWS re:Post: Get Answers from AWS experts. A Reimagined Q&A Experience for the AWS Community
You can automate any task that involves interaction with AWS and on-premises resources, including in multi-account and multi-Region environments, with AWS Systems Manager. In this session, learn more about three new Systems Manager launches at re:Invent—Change Manager, Fleet Manager, and Application Manager. In addition, learn how Systems Manager Automation can be used across multiple Regions and accounts, integrate with other AWS services, and extend to on-premises. This session takes a deep dive into how to author a custom runbook using an automation document, and how to execute automation anywhere.
Learn about the performance improvements made in Amazon EMR for Apache Spark and Presto, giving Amazon EMR one of the fastest runtimes for analytics workloads in the cloud. This session dives deep into how AWS generates smart query plans in the absence of accurate table statistics. It also covers adaptive query execution—a technique to dynamically collect statistics during query execution—and how AWS uses dynamic partition pruning to generate query predicates for speeding up table joins. You also learn about execution improvements such as data prefetching and pruning of nested data types.
Explore how state-of-the-art algorithms built into Amazon SageMaker are used to detect declines in machine learning (ML) model quality. One of the big factors that can affect the accuracy of models is the difference in the data used to generate predictions and what was used for training. For example, changing economic conditions could drive new interest rates affecting home purchasing predictions. Amazon SageMaker Model Monitor automatically detects drift in deployed models and provides detailed alerts that help you identify the source of the problem so you can be more confident in your ML applications.
Amazon Lightsail is AWS’s simple, virtual private server. In this session, learn more about Lightsail and its newest launches. Lightsail is designed for simple web apps, websites, and dev environments. This session reviews core product features, such as preconfigured blueprints, managed databases, load balancers, networking, and snapshots, and includes a demo of the most recent launches. Attend this session to learn more about how you can get up and running on AWS in the easiest way possible.
This session dives into the security model behind AWS Lambda functions, looking at how you can isolate workloads, build multiple layers of protection, and leverage fine-grained authorization. You learn about the implementation, the open-source Firecracker technology that provides one of the most important layers, and what this means for how you build on Lambda. You also see how AWS Lambda securely runs your functions packaged and deployed as container images. Finally, you learn about SaaS, customization, and safe patterns for running your own customers’ code in your Lambda functions.
Unauthorized users and financially motivated third parties also have access to advanced cloud capabilities. This causes concerns and creates challenges for customers responsible for the security of their cloud assets. Join us as Roy Feintuch, chief technologist of cloud products, and Maya Horowitz, director of threat intelligence and research, face off in an epic battle of defense against unauthorized cloud-native attacks. In this session, Roy uses security analytics, threat hunting, and cloud intelligence solutions to dissect and analyze some sneaky cloud breaches so you can strengthen your cloud defense. This presentation is brought to you by Check Point Software, an AWS Partner.
AWS provides services and features that your organization can leverage to improve the security of a serverless application. However, as organizations grow and developers deploy more serverless applications, how do you know if all of the applications are in compliance with your organization’s security policies? This session walks you through serverless security, and you learn about protections and guardrails that you can build to avoid misconfigurations and catch potential security risks.
The Amazon Cash application service matches incoming customer payments with accounts and open invoices, while an email ingestion service (EIS) processes more than 1 million semi-structured and unstructured remittance emails monthly. In this session, learn how this EIS classifies the emails, extracts invoice data from the emails, and then identifies the right invoices to close on Amazon financial platforms. Dive deep on how these services automated 89.5% of cash applications using AWS AI & ML services. Hear about how these services will eliminate the manual effort of 1000 cash application analysts in the next 10 years.
Dive into the details of using Amazon Kinesis Data Streams and Amazon DynamoDB Streams as event sources for AWS Lambda. This session walks you through how AWS Lambda scales along with these two event sources. It also covers best practices and challenges, including how to tune streaming sources for optimum performance and how to effectively monitor them.
Build real-time applications using Apache Flink with Apache Kafka and Amazon Kinesis Data Streams. Apache Flink is a framework and engine for building streaming applications for use cases such as real-time analytics and complex event processing. This session covers best practices for building low-latency applications with Apache Flink when reading data from either Amazon MSK or Amazon Kinesis Data Streams. It also covers best practices for running low-latency Apache Flink applications using Amazon Kinesis Data Analytics and discusses AWS’s open-source contributions to this use case.
Learn how you can accelerate application modernization and benefit from the open-source Apache Kafka ecosystem by connecting your legacy, on-premises systems to the cloud. In this session, hear real customer stories about timely insights gained from event-driven applications built on an event streaming platform from Confluent Cloud running on AWS, which stores and processes historical data and real-time data streams. Confluent makes Apache Kafka enterprise-ready using infinite Kafka storage with Amazon S3 and multiple private networking options including AWS PrivateLink, along with self-managed encryption keys for storage volume encryption with AWS Key Management Service (AWS KMS).
Data-driven business intelligence (BI) decision making is more important than ever in this age of remote work. An increasing number of organizations are investing in data transformation initiatives, including migrating data to the cloud, modernizing data warehouses, and building data lakes. But what about the last mile—connecting the dots for end users with dashboards and visualizations? Come to this session to learn how Amazon QuickSight allows you to connect to your AWS data and quickly build rich and interactive dashboards with self-serve and advanced analytics capabilities that can scale from tens to hundreds of thousands of users, without managing any infrastructure and only paying for what you use.
Is there an Updated SAA-C03 Practice Exam?
Yes, as of August 2022. The SAA-C03 sample exam PDF can give you a hint of what the real SAA-C03 exam will look like in your upcoming test. The sample questions also contain explanations and reference links that you can study.
In this AWS tutorial, we are going to discuss how we can make the best use of AWS services to build a highly scalable and fault-tolerant configuration of EC2 instances. The use of Load Balancers and Auto Scaling Groups falls under a number of best practices in AWS, including Performance Efficiency, Reliability and High Availability.
Before we dive into this hands-on tutorial on how exactly we can build this solution, let’s have a brief recap on what an Auto Scaling group is, and what a Load balancer is.
Auto Scaling group (ASG)
An Auto Scaling group (ASG) is a logical grouping of instances which can scale out and scale in (adding or removing instances) depending on pre-configured settings. By setting the scaling policies of your ASG, you can choose how many EC2 instances are launched and terminated based on your application’s load. You can do this using manual, dynamic, scheduled or predictive scaling.
Elastic Load Balancer (ELB)
An Elastic Load Balancer (ELB) is a name describing a number of services within AWS designed to distribute traffic across multiple EC2 instances in order to provide enhanced scalability, availability, security and more. The particular type of Load Balancer we will be using today is an Application Load Balancer (ALB). The ALB is a Layer 7 Load Balancer designed to distribute HTTP/HTTPS traffic across multiple nodes – with added features such as TLS termination, Sticky Sessions and Complex routing configurations.
Getting Started
First of all, we open our AWS management console and head to the EC2 management console.
We scroll down on the left-hand side and select ‘Launch Templates’. A Launch Template is a configuration template which defines the settings for EC2 instances launched by the ASG.
Under Launch Templates, we will select “Create launch template”.
We specify the name ‘MyTestTemplate’ and use the same text in the description.
Under the ‘Auto Scaling guidance’ box, tick the box which says ‘Provide guidance to help me set up a template that I can use with EC2 Auto Scaling’ and scroll down to launch template contents.
When it comes to choosing our AMI (Amazon Machine Image) we can choose the Amazon Linux 2 under ‘Quick Start’.
The Amazon Linux 2 AMI is free tier eligible, and easy to use for our demonstration purposes.
Next, we select the ‘t2.micro’ under instance types, as this is also free tier eligible.
Under Network Settings, we create a new Security Group called ExampleSG in our default VPC, allowing HTTP access to everyone. It should look like this.
We can then add our IAM Role we created earlier. Under Advanced Details, select your IAM instance profile.
Then we need to include some user data which will load a simple web server and web page onto our Launch Template when the EC2 instance launches.
Under ‘advanced details’, and in ‘User data’ paste the following code in the box.
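The exact script isn’t shown here, but a minimal user data sketch for Amazon Linux 2 would look something like the following – it installs Apache and writes a simple page showing the instance’s private IP, matching the ‘Hello World’ page described later in this tutorial. Treat it as an illustrative assumption, not the original script.

```bash
#!/bin/bash
# Illustrative user data for Amazon Linux 2 (assumption, not the original script).
# Installs Apache, starts it on boot, and serves a page containing the
# instance's private IP fetched from the instance metadata service.
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
PRIVATE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
echo "<h1>Hello World from $PRIVATE_IP</h1>" > /var/www/html/index.html
```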
Then simply click ‘Create Launch Template’ and we are done!
We are now able to build an Auto Scaling Group from our launch template.
On the same console page, select ‘Auto Scaling Groups’, and Create Auto Scaling Group.
We will call our Auto Scaling Group ‘ExampleASG’, and select the Launch Template we just created, then select next.
On the next page, keep the default VPC and select any default AZ and Subnet from the list and click next.
Under ‘Configure Advanced Options’ select ‘Attach to a new load balancer’.
You will notice the settings below will change and we will now build our load balancer directly on the same page.
Select the Application Load Balancer, and leave the default Load Balancer name.
Choose an ‘Internet Facing’ Load balancer, select another AZ and leave all of the other defaults the same. It should look something like the following.
Under ‘Listeners and routing’, select ‘Create a target group’ and select the target group which was just created. It will be called something like ‘ExampleASG-1’. Click next.
Now we get to Group Size. This is where we specify the desired, minimum and maximum capacity of our Auto Scaling Group.
Set the capacities as follows:
Click ‘skip to review’, and click ‘Create Auto Scaling Group’.
You will now see the Auto Scaling Group building, and the capacity is updating.
After a short while, navigate to the EC2 Dashboard, and you will see that two EC2 instances have been launched!
To make sure our Auto Scaling group is working as it should – select any instance, and terminate the instance. After one instance has been terminated you should see another instance pending and go into a running state – bringing capacity back to 2 instances (as per our desired capacity).
If we also head over to the Load Balancer console, you will find our Application Load Balancer has been created.
If you select the load balancer, and scroll down, you will find the DNS name of your ALB – it will look something like ‘ExampleASG-1-1435567571.us-east-1.elb.amazonaws.com’.
If you enter the DNS name into your browser’s address bar, you should see the following page:
The page displays a ‘Hello World’ message including the IP address of the EC2 instance which is serving up the webpage behind the load balancer.
If you refresh the page a few times, you should see that the IP address listed will change. This is because the load balancer is routing you to the other EC2 instance, validating that our simple webpage is being served from behind our ALB.
The final step is to delete all of the resources you configured. Start by deleting the Auto Scaling Group, and make sure you delete your load balancer as well; this will ensure you don’t incur any charges.
Architectural Diagram
Below, you’ll find the architectural diagram of what we have built.
Learn how to Master AWS Cloud
Ultimate Training Packages – Our popular training bundles (on-demand video course + practice exams + ebook) will maximize your chances of passing your AWS certification the first time.
Membership – For unlimited access to our cloud training catalog, enroll in our monthly or annual membership program.
Challenge Labs – Build hands-on cloud skills in a secure sandbox environment. Learn, build, test and fail forward without risking unexpected cloud bills.
This post originally appeared on: https://digitalcloud.training/load-balancing-ec2-instances-in-an-autoscaling-group/
There are significant protections provided to you natively when you are building your networking stack on AWS. This wide range of services and features can become difficult to manage, and becoming knowledgeable about what tools to use in which area can be challenging.
The two main security components which can be confused within VPC networking are the Security Group and the Network Access Control List (NACL). When you compare a Security Group vs NACL, you will find that although they are fairly similar in general, there is a distinct difference in the use cases for each of these security features.
In this blog post, we are going to explain the main differences between Security Group vs NACL and talk about the use cases and some best practices.
First of all, what do they have in common?
The main thing that is shared in common between a Security group vs a NACL is that they are both a firewall. So, what is a firewall?
Firewalls in computing monitor and control incoming and outgoing network traffic based on predetermined security rules. Firewalls provide a barrier between trusted and untrusted networks. The network layer which we are talking about in this instance is an Amazon Virtual Private Cloud – aka a VPC.
In the AWS cloud, a VPC is a logically isolated virtual network, designed to provide a degree of isolation between different organizations and between different teams within an account.
First, let’s talk about the particulars of a Security Group.
Security Group Key Features
Where do they live?
Security groups are tied to an instance’s network interface. This can be an EC2 instance, an ECS cluster or an RDS database instance – the security group acts as a firewall for the resources it is attached to. You have to purposely assign a security group to an instance if you don’t want it to use the default security group.
The default security group allows all outbound traffic, and allows inbound traffic only from other resources associated with that same default security group. Any instance launched without an explicit security group assignment gets the default group applied.
Stateful or Stateless
Security groups are stateful in nature. As a result, any changes applicable to an incoming rule will also be automatically applied to the outgoing rule in the same way. For example, allowing an incoming port 80 will automatically open the outgoing port 80 – without you having to explicitly direct traffic in the opposite direction.
Allow or Deny Rules
The only rule set that can be used in security groups is the Allow rule set. Thus, you cannot blacklist a certain IP address from establishing a connection with any instances within your security group – this would have to be achieved using a different technology, such as a network ACL.
Limits
An instance can have multiple security groups. By default, AWS lets you apply up to five security groups to a virtual network interface, but it is possible to use up to 16 if you submit a limit increase request.
Additionally, you can have 60 inbound and 60 outbound rules per security group (for a total of 120 rules). IPv4 rules are enforced separately from IPv6 rules; a security group, for example, may have 60 IPv4 rules and 60 IPv6 rules.
Network Access Control Lists (NACLS)
Now let’s compare the Security Group vs NACLs using the same criteria.
Where do they live?
Network ACLs operate at the subnet level, so any instance in a subnet with an associated NACL will automatically follow the rules of that NACL.
Stateful or Stateless
Network ACLs are stateless. Consequently, any changes made to an incoming rule will not be reflected in an outgoing rule. For example, if you allow an incoming port 80, you would also need to apply the rule for outgoing traffic.
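The stateful/stateless distinction can be sketched with a toy model (purely illustrative Python, not an AWS API): a stateful firewall like a security group automatically allows the reply to an allowed inbound connection, while a stateless one like a NACL checks every packet against explicit rules in both directions.

```python
# Toy model of stateful vs stateless firewall behavior (illustration only).

def stateful_allows_reply(inbound_rules: set, port: int) -> bool:
    # Security-group style: if the inbound connection was allowed,
    # the outgoing reply is allowed automatically.
    return port in inbound_rules

def stateless_allows_reply(inbound_rules: set, outbound_rules: set, port: int) -> bool:
    # NACL style: reply traffic needs its own explicit outbound rule.
    return port in inbound_rules and port in outbound_rules

inbound = {80}       # inbound HTTP allowed
outbound = set()     # no outbound rule configured

print(stateful_allows_reply(inbound, 80))             # True
print(stateless_allows_reply(inbound, outbound, 80))  # False
```

This is why, with a NACL, allowing incoming port 80 is not enough on its own: the return traffic must also be permitted explicitly.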
Allow or Deny Rules
Unlike a Security Group, NACLs support both allow and deny rules. With deny rules, you can explicitly prevent a certain IP address from establishing a connection; e.g. to block a specific known malicious IP address from reaching an EC2 instance.
Limits
A subnet can have only one NACL. However, you can associate one network ACL with one or more subnets within a VPC. By default, you can have up to 200 unique NACLs within a VPC; however, this is a soft limit that is adjustable.
Secondly, you can have 20 inbound and 20 outbound rules per NACL (for a total of 40 rules). IPv4 rules are enforced separately from IPv6 rules. A NACL, for example, may have 20 IPv4 rules and 20 IPv6 rules.
We hope that you now more keenly understand the difference between NACLs and security groups.
A multi-account strategy in AWS can provide you with a secure and isolated platform from which to launch your resources. Whilst smaller organizations may only require a few AWS accounts, large corporations with many business units often require many accounts. These accounts may be organized hierarchically.
Building this account topology manually on the cloud requires a high degree of knowledge, and is rather error prone. If you want to set up a multi-account environment in AWS within a few clicks, you can use a service called AWS Control Tower.
AWS Control Tower allows your team to quickly provision and to set up and govern a secure, multi-account AWS environment, known as a landing zone. Built on the back of AWS Organizations, it automatically implements many accounts under the appropriate organizational units, with hardened service control policies attached. Provisioning new accounts happens in the click of a button, automating security configuration, and ensuring you extend governance into new accounts, without any manual intervention.
There are a number of key features which constitute AWS Control Tower, and in this article, we will explore each section and break down how it makes governing multiple accounts a lot easier.
The Landing Zone
A Landing Zone refers to the multi-account structure itself, which is configured to provide you with a compliant and secure set of accounts upon which to start building. A Landing Zone can include extended features like federated account access via SSO and centralized logging via AWS CloudTrail and AWS Config.
The Landing Zone’s accounts follow guardrails set by you to ensure you are compliant with your own security requirements. Guardrails are rules written in plain English, leveraging AWS CloudFormation in the background to establish a hardened account baseline.
Guardrails can fit into one of a number of categories:
Guardrails provide immediate protection from any number of scenarios, without the need to be able to read or write complex security policies – a big upside compared to manual provisioning of permissions.
Account Factory
Account Factory is a component of Control Tower which allows you to automate the secure provisioning of new accounts that conform to defined security principles. Several pre-approved configurations are included as part of the launch of your new accounts, including networking information and Region selection. You also get seamless integration with AWS Service Catalog to allow your internal customers to configure and build new accounts. Third-party Infrastructure as Code tooling like Terraform (Account Factory for Terraform) can also be used, giving your cloud teams the benefits of a multi-account setup whilst using tools they are familiar with.
Architecture of Control Tower
Lets now dive into how Control Tower looks, with an architectural overview.
As you can see, there are a number of OUs (Organizational Units) in which accounts are placed. These are provisioned for you using AWS Organizations.
Security OU – The Security OU contains two accounts, the Log Archive Account and the Audit Account. The Log Archive Account serves as a central store for all CloudTrail and AWS Config logs across the Landing Zone, securely stored within an S3 Bucket.
Sandbox OU – The Sandbox OU is setup to host testing accounts (Sandbox Accounts) which are safely isolated from any production workloads.
Production OU – This OU is for hosting all of your production accounts, containing production workloads.
Non-Production OU – This OU can serve as a pre-production environment, in which further testing and development can take place.
Suspended OU – This is a secure OU to which you can move any deleted, reused or compromised accounts. Permissions in this OU are extremely locked down, ensuring it is a safe location.
Shared Services OU – The Shared Services OU contains accounts in which services shared across multiple other accounts are hosted. This consists of three accounts:
The Shared Services account (where the resources are directly shared)
The Security Services Account (hosting services like Amazon Inspector, Amazon Macie, AWS Secrets Manager as well as any firewall solutions.)
The Networking Account – This contains VPC Endpoints and components and things like DNS Endpoints.
Any organization can benefit from using AWS Control Tower. Whether you’re a multinational corporation with years of AWS Experience, or a burgeoning start-up with little experience in the cloud, Landing Zone can provide your customers with confidence that they are provisioning their architecture efficiently and securely.
I am a Linux Systems Engineer with over 15 years of experience as Linux and Unix Systems Administrator and on-prem infrastructure engineer. My AWS knowledge is limited to basic EC2, IAM and S3. Initially I was studying for the AWS Certified Cloud Practitioner certification but I just couldn't keep from falling asleep. Too much information about too many products I know nothing about. I just don't learn like that. I am self taught, I learn by doing things. I may not necessarily know every detail of the tech I am working with, but I can get stuff done. Provided what I just said, would it be easier for me to go for the AWS Architect Associate or SysOps Administrator certification? Thanks submitted by /u/nousewindows
If you are managing lots of accounts and Amazon Virtual Private Cloud (Amazon VPC) resources, sharing and then associating many DNS resources to each VPC can present a significant burden. You often hit limits around sharing and association, and you may have gone as far as building your own orchestration layers to propagate DNS configuration.
Passed "AWS Security" today with a little above the minimum - 780. That was a really hard exam; I did not expect it to be so hard after SA Pro. I used A Cantril, TutorialDojo and the AWS Twitch Security podcast. And thanks, paper_recluse, for his list - EKS/ECS logging no one mentioned anywhere; in addition to his list, pay attention to the Encryption SDK and RAM (policy), and all sorts of policy evaluation (SCP especially). And yes, take your time, more than 1 month is needed for this exam to pass without hesitation. submitted by /u/Wide-Answer-2789
I'm almost certain I'm overlooking something obvious, but here goes: Currently, I'm in the process of automating my Elastic Beanstalk deployments, and I would like to populate my application environment from values currently stored in Secrets Manager. Following the example here, I created a new config file under .ebextensions, which utilizes dynamic references for the value stored in Secrets Manager:
option_settings:
  aws:elasticbeanstalk:application:environment:
    ALGOLIA_ADMIN_API_KEY: '{{resolve:secretsmanager:__SECRET_ARN__:SecretString:ALGOLIA_ADMIN_API_KEY}}'
(__SECRET_ARN__ is a placeholder replaced at build time.) Unfortunately, after the new application version deploys, the environment variables shown in the "Configuration" section contain the literal text of the directive, and not the resolved value: https://imgur.com/a/nxvI9r6 I'm not sure what I'm doing wrong here. I've confirmed the ARN, and have updated the Elastic Beanstalk service role to include read access to Secrets Manager, but those don't seem to make any difference. Any help is much appreciated. submitted by /u/flowstate
There are many trends within the current cloud computing industry that have a sway on the conversations which take place throughout the market. One of these key areas of discussion is ‘Serverless’.
Serverless application deployment is a way of provisioning infrastructure in a managed way, without having to worry about building and maintaining servers – you launch the service and it works. Scaling, high availability, and automated processes are looked after by the managed AWS serverless services. AWS Step Functions provides us a useful way to coordinate the components of distributed applications and microservices using visual workflows.
What is AWS Step Functions?
AWS Step Functions let developers build distributed applications, automate IT and business processes, and build data and machine learning pipelines by using AWS services.
Using Step Functions workflows, developers can focus on higher-value business logic instead of worrying about failures, retries, parallelization, and service integrations. In other words, AWS Step Functions is a serverless workflow orchestration service which can make developers’ lives much easier.
Components and Integrations
AWS Step Functions consist of a few components, the first being a State Machine.
What is a state machine?
The State Machine model uses given states and transitions to complete the tasks at hand. It is an abstract machine (system) that can be in only one state at a time, switching between states as it runs. As a result, it doesn’t allow infinite loops, which removes an often costly source of errors entirely.
With AWS Step Functions, you can define workflows as state machines, which simplify complex code into easy-to-understand statements and diagrams. The process of building applications and confirming they work as expected is actually much faster and easier.
State
In a state machine, a state is referred to by its name, which can be any string, but must be unique within the state machine. State instances exist until their execution is complete.
An individual component of your state machine can be in any of the following 8 types of states:
Task state – Do some work in your state machine. From a task state, AWS Step Functions can call Lambda functions directly
Choice state – Make a choice between different branches of execution
Fail state – Stops execution and marks it as failure
Succeed state – Stops execution and marks it as a success
Pass state – Simply pass its input to its output or inject some fixed data
Wait state – Provide a delay for a certain amount of time or until a specified time/date
Parallel state – Begin parallel branches of execution
Map state – Adds a for-each loop condition
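The state types above are combined into an Amazon States Language (ASL) definition. As a sketch, here is a minimal, illustrative definition built as a Python dictionary (the Lambda ARN and state names are placeholders, not from the original article):

```python
import json

# Illustrative ASL definition combining Task, Choice, Wait and Succeed states.
# The Lambda ARN below is a placeholder.
definition = {
    "Comment": "Illustrative order-processing workflow",
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessOrder",
            "Next": "CheckOrder",
        },
        "CheckOrder": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.status", "StringEquals": "OK", "Next": "Done"}
            ],
            "Default": "WaitAndRetry",
        },
        "WaitAndRetry": {"Type": "Wait", "Seconds": 30, "Next": "ProcessOrder"},
        "Done": {"Type": "Succeed"},
    },
}

print(json.dumps(definition, indent=2))
```

The JSON produced here is what you would paste into (or generate for) the Step Functions console or an IaC template when creating the state machine.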
Limits
There are some limits which you need to be aware of when you are using AWS Step Functions; for example, Standard Workflows have a maximum execution duration of one year, and the maximum input or output size for a state is 256 KB. Check the AWS Step Functions quotas documentation for the current values.
Use Cases and Examples
If you need to build workflows across multiple AWS services, then AWS Step Functions is a great tool for you. Serverless microservices can be orchestrated with Step Functions, data pipelines can be built, and security incidents can be handled. It is possible to use Step Functions both synchronously and asynchronously.
Instead of manually orchestrating long-running, multiple ETL jobs or maintaining a separate application, Step Functions can ensure that these jobs are executed in order and complete successfully.
Step Functions is also a great way to automate recurring tasks, such as applying patches, provisioning infrastructure, and synchronizing data; it will scale automatically, respond to timeouts, and retry failed tasks.
With Step Functions, you can create responsive serverless applications and microservices with multiple AWS Lambda functions without writing code for workflow logic, parallel processes, error handling, or timeouts.
Additionally, services and data can be orchestrated that run on Amazon EC2 instances, containers, or on-premises servers.
Pricing
Each time you perform a step in your workflow, Step Functions counts a state transition. State transitions, including retries, are charged across all state machines.
There is a Free Tier for AWS Step Functions of 4000 State Transitions per month.
With AWS Step Functions, you pay for the number of state transitions you use per month. A state transition is counted each time a step of your workflow is executed, and you are charged for the total across all of your state machines, including retries. Beyond the free tier, state transitions cost a flat rate of $0.000025 each.
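Using the free tier and flat rate quoted above, a rough monthly cost estimate can be sketched as:

```python
FREE_TIER = 4_000   # free state transitions per month
RATE = 0.000025     # USD per state transition beyond the free tier

def monthly_step_functions_cost(transitions: int) -> float:
    """Rough Standard Workflows estimate from the figures quoted above."""
    billable = max(0, transitions - FREE_TIER)
    return billable * RATE

# 1,000,000 transitions in a month: (1,000,000 - 4,000) * $0.000025
print(f"${monthly_step_functions_cost(1_000_000):.2f}")  # → $24.90
```

Note this ignores any charges from downstream services (Lambda invocations, etc.) that the workflow triggers.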
Summary
In summary, Step Functions are a powerful tool which you can use to improve the application development and productivity of your developers. By migrating your logic workflows into the cloud you will benefit from lower cost, rapid deployment. As this is a serverless service, you will be able to remove any undifferentiated heavy lifting from the application development process.
Interview Questions
Q: How does AWS Step Function create a State Machine?
A: A state machine is a collection of states which allows you to perform tasks in the form of lambda functions, or another service, in sequence, passing the output of one task to another. You can add branching logic based on the output of a task to determine the next state.
Q: How can we share data in AWS Step Functions without passing it between the steps?
A: You can make use of InputPath and ResultPath. In the ValidationWaiting step you can set the following properties (in State Machine definition)
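As an illustration (the exact properties aren’t reproduced in the original, so this is a hedged sketch with placeholder names), the ValidationWaiting task state might be defined like this:

```python
# Hypothetical "ValidationWaiting" task state showing InputPath and ResultPath.
# InputPath selects which part of the state's input is passed to the task;
# ResultPath controls where the task's output is merged back into the input,
# so the rest of the original input remains available to later states.
validation_waiting = {
    "Type": "Task",
    "Resource": "arn:aws:states:::lambda:invoke",  # placeholder integration
    "InputPath": "$.order",        # send only the 'order' portion to the task
    "ResultPath": "$.validation",  # store the task result under 'validation'
    "Next": "NextStep",            # placeholder next state
}
print(validation_waiting["InputPath"], validation_waiting["ResultPath"])
```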
This way you can send to external service only data that is actually needed by it and you won’t lose access to any data that was previously in the input.
Q: How can I diagnose an error or a failure within AWS Step Functions?
A: The following are some possible failure events that may occur
State Machine Definition Issues.
Task Failures due to exceptions thrown in a Lambda Function.
Transient or Networking Issues.
A task has surpassed its timeout threshold.
Privileges are not set appropriately for a task to execute.
If you want to be an AWS cloud professional, you need to understand the differences between the myriad of services AWS offers. You also need an in-depth understanding of how to use the security services to ensure that your account infrastructure is highly secure and safe to use. This is job zero at AWS, and there is nothing that is taken more seriously than security. AWS makes it really easy to implement security best practices and provides you with many tools to do so.
AWS Secrets Manager and SSM Parameter Store sound like very similar services on the surface. However, when you dig deeper – comparing AWS Secrets Manager vs SSM Parameter Store – you will find some significant differences which help you understand exactly when to use each tool.
AWS Secrets Manager
AWS Secrets Manager is designed to provide encryption for confidential information (like database credentials and API keys) that needs to be guarded safely in a secure way. Encryption is automatically enabled when creating a secret entry and there are a number of additional features we are going to explore in this article.
Through using AWS Secrets Manager, you can manage a wide range of secrets: Database credentials, API keys, and other self defined secrets are all eligible for this service.
If you are responsible for storing and managing secrets within your team, as well as ensuring that your company follows regulatory requirements – this is possible through AWS Secrets Manager which securely and safely stores all secrets within one place. Secrets Manager also has a large degree of added functionality.
SSM Parameter store
SSM Parameter store is slightly different. The key differences become evident when you compare how AWS Secrets Manager vs SSM Parameter Store are used.
The SSM Parameter Store focuses on a slightly wider set of requirements. Depending on your compliance requirements, SSM Parameter Store can be used to store secrets either encrypted or unencrypted, referenced from your code base.
By storing environmental configuration data and other parameters, it simplifies and optimizes the application deployment process. AWS Secrets Manager, by contrast, adds key rotation, cross-account access, and tighter integration with other AWS services.
Based on this explanation you may think that they both sound similar. Let’s break down the similarities and differences between these roles.
Similarities
Managed Key/Value Store Services
Both services allow you to store values using a name and key. This is an extremely useful aspect of both of the services as the deployment of the application can reference different parameters or different secrets based on the deployment environment, allowing customizable and highly integratable deployments of your applications.
Both Referenceable in CloudFormation
You can use the powerful Infrastructure as Code (IaC) tool AWS CloudFormation to build your applications programmatically. The effortless deployment of either product using CloudFormation allows a seamless developer experience, without using painful manual processes.
While SSM Parameter Store only allows one version of a parameter to be active at any given time, Secrets Manager allows multiple versions to exist at the same time when you are rotating a secret using staging labels.
Similar Encryption Options
They are both inherently very secure services – and you do not have to choose one over another based on the encryption offered by either service.
Through another AWS security service, KMS (the Key Management Service), you can define IAM policies so that only specific IAM users and roles have permission to decrypt a value. This restricts access to anyone who doesn’t need it and abides by the principle of least privilege, helping you meet compliance standards.
Versioning
Versioning is the ability to save multiple, iteratively developed versions of a value, letting you quickly restore lost versions and maintain several copies of the same item.
Both services support versioning of secret values. This allows you to view multiple previous versions of your parameters. You can also optionally promote a former version to be the current version, which can be useful as your application changes.
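As a sketch of how version selection works in Parameter Store, boto3 lets you append a `name:version` selector to the parameter name; omitting the suffix returns the latest version. The parameter name below is an assumed example.

```python
# Sketch: fetch a specific parameter version from SSM Parameter Store.
# A "name:version" suffix on Name selects an older version; without it,
# the latest version is returned.

def get_param_version(ssm, name, version=None):
    selector = f"{name}:{version}" if version is not None else name
    resp = ssm.get_parameter(Name=selector, WithDecryption=True)
    return resp["Parameter"]["Value"]
```

In real code `ssm` would be `boto3.client("ssm")`; `get_parameter_history` is the companion call for listing all stored versions.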
Given that there are lots of similarities between the two services, it is now time to view and compare the differences, along with some use cases of either service.
Differences
Cost
The costs differ across the services: SSM Parameter Store tends to cost less than Secrets Manager. Standard parameters are free, and you won’t be charged for the first 10,000 parameters you store; Advanced Parameters, however, carry a charge. AWS Secrets Manager bills a fixed fee per secret per month, plus a fee per 10,000 API calls.
This may factor into how you use each service and how you define your cloud spending strategy, so this is valuable information.
Password generation
A useful feature of AWS Secrets Manager lets us generate random data during the creation phase, allowing the secure, auditable creation of strong, unique passwords that can then be referenced in the same CloudFormation stack. This allows our applications to be fully built using IaC, with all the benefits that entails.
AWS Systems Manager Parameter Store, on the other hand, doesn’t work this way and doesn’t let us generate random data; we have to do it manually using the console or the AWS CLI, and it can’t happen during the creation phase.
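A sketch of the Secrets Manager password-generation API: the length and excluded characters below are arbitrary choices, and the client would be `boto3.client("secretsmanager")` in real use.

```python
# Sketch: ask Secrets Manager to generate a random password server-side,
# instead of inventing one manually.

def make_password(sm, length=20):
    resp = sm.get_random_password(
        PasswordLength=length,
        ExcludeCharacters="\"'\\",  # avoid characters that break quoting
    )
    return resp["RandomPassword"]
```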
Rotation of Secrets
A powerful feature of AWS Secrets Manager is the ability to automatically rotate credentials on a pre-defined schedule that you set. AWS Secrets Manager integrates this feature natively with many AWS services, and automated rotation is simply not possible with AWS Systems Manager Parameter Store. You would have to refresh and update the data yourself, which means a lot more manual setup to achieve the functionality that Secrets Manager supports natively.
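A sketch of turning on automatic rotation for an existing secret: the secret ID, Lambda ARN, and 30-day schedule are placeholders, and `sm` would be `boto3.client("secretsmanager")` in real use. Rotation itself is performed by a Lambda function you own, which Secrets Manager invokes on the schedule.

```python
# Sketch: enable scheduled rotation on an existing secret. Secrets Manager
# will invoke the given Lambda function on the schedule below.

def enable_rotation(sm, secret_id, rotation_lambda_arn, days=30):
    resp = sm.rotate_secret(
        SecretId=secret_id,
        RotationLambdaARN=rotation_lambda_arn,
        RotationRules={"AutomaticallyAfterDays": days},
    )
    return resp["ARN"]
```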
Cross-Account Access
Firstly, there is currently no way to attach a resource-based IAM policy to an AWS Systems Manager Parameter Store parameter (Standard type). This means that cross-account access is not possible with Parameter Store; if you need this functionality, you will have to configure an extensive workaround or use AWS Secrets Manager.
Size of Secrets
Each service enforces a maximum size per secret or parameter.
Secrets Manager can store secrets of up to 64 KB in size.
Standard Parameters can hold up to 4,096 characters (4 KB) per entry, while Advanced Parameters can store entries of up to 8 KB.
Multi-Region Deployment
As with many other features of AWS Secrets Manager, AWS SSM Parameter Store does not offer the same functionality here. You can’t easily replicate your parameters across multiple Regions and would need an extensive workaround to do so, whereas Secrets Manager can replicate secrets to multiple Regions natively.
In terms of use cases, you may want to use AWS Secrets Manager to store your encrypted secrets with easy rotation. If you require a feature rich solution for managing your secrets to stay compliant with your regulatory and compliance requirements, consider choosing AWS Secrets Manager.
On the other hand, you may want to choose SSM Parameter Store as a cheaper option to store your encrypted or unencrypted secrets. Parameter Store will provide some limited functionality to enable your application deployments by storing your parameters in a safe, cheap and secure way.
When you are building applications in the AWS cloud, you have to go to painstaking lengths to make your applications durable, resilient and highly available.
Whilst AWS can help you with this for the most part, it is nearly impossible to see a situation in which you will not need some kind of Disaster Recovery plan.
An organization’s Business Continuity and Disaster Recovery (BCDR) program is a set of approaches and processes that can be used to recover from a disaster and resume its regular business operations after the disaster has ended. An example of a disaster would be a natural calamity, an outage or disruption caused by a power outage, an employee mistake, a hardware failure, or a cyberattack.
With the implementation of a BCDR plan, businesses can operate as close to normal as possible after an unexpected interruption, and with the least possible loss of data.
In this blog post, we will explore three notable disaster recovery strategies, each with different merits and drawbacks. Before we can appreciate these methods, however, we need to break down some key Disaster Recovery terminology. We will examine all of these strategies using AWS infrastructure as a lens.
What is Disaster Recovery?
The following definition provides an excellent summary of disaster recovery – an extremely broad term:
“Disaster recovery involves a set of policies, tools, and procedures to enable the recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster.”
This definition emphasizes the necessity of recovering systems, tools, etc. after a disaster. Disaster Recovery depends on many factors, including:
• Financial plan
• Competence in technology
• Use of tools
• The Cloud Provider used
It is essential to understand some key terminology, including RPO and RTO, in order to evaluate disaster recovery efficacy:
How do RPOs and RTOs differ?
RPO (Recovery Point Objective)
The Recovery Point Objective (RPO) is the maximum acceptable amount of data loss after an unplanned data-loss incident, expressed as an amount of time. Because this is a maximum, achieving a low RPO requires frequent backups or continuous data replication.
RTO (Recovery Time Objective)
The Recovery Time Objective (RTO) is the maximum tolerable length of time that a computer, system, network or application can be down after a failure or disaster occurs. This is measured in minutes or hours, and achieving as low an RTO as possible depends on how quickly you can get your application back online.
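A small worked example makes the two measures concrete. The timestamps below are invented for illustration: a nightly snapshot at 03:00, a failure mid-afternoon, and service restored at 16:00.

```python
from datetime import datetime

def achieved_rpo(last_backup, failure_time):
    """Data-loss window: everything written after the last backup is lost."""
    return failure_time - last_backup

def achieved_rto(failure_time, restored_time):
    """Downtime window: how long the system was unavailable."""
    return restored_time - failure_time

# Illustrative timeline (assumed, not from the article).
backup = datetime(2023, 1, 1, 3, 0)     # nightly 03:00 snapshot
failure = datetime(2023, 1, 1, 14, 30)  # outage begins
restored = datetime(2023, 1, 1, 16, 0)  # service back online

print(achieved_rpo(backup, failure))    # 11:30:00 of data at risk
print(achieved_rto(failure, restored))  # 1:30:00 of downtime
```

Note how the RPO here is bounded by the backup interval: taking snapshots more often is the only way to shrink it.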
Disaster Recovery Methods
Now that we understand these key concepts, we can break down three popular disaster recovery methods, namely Backup and Restore, Pilot Light, and Multi-Site Active/Active.
Backup and Restore
Backup and restore can mitigate data loss or corruption, and replicating data to other data centers can further mitigate the effects of a disaster. Beyond restoring the data, this strategy also involves redeploying the infrastructure, configuration, and application code in the recovery data center.
Backup and restore has a higher recovery time objective (RTO) and recovery point objective (RPO) than the other strategies, resulting in longer downtime and greater data loss between the disaster event and the point of recovery. Even so, backup and restore may still be the most cost-effective and easiest strategy for your workload; not every workload requires an RTO and RPO of minutes or less.
RPO is dependent on how frequently you take snapshots, and RTO is dependent on how long it takes to restore snapshots.
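The building block of this strategy is a scheduled snapshot. As a sketch, a timestamped EBS snapshot might look like the following; the volume ID is a placeholder and `ec2` would be `boto3.client("ec2")` in real use.

```python
from datetime import datetime, timezone

# Sketch: take a timestamped EBS snapshot. Running this on a schedule
# (e.g. nightly) bounds your RPO to the snapshot interval.

def take_snapshot(ec2, volume_id):
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M")
    resp = ec2.create_snapshot(
        VolumeId=volume_id,
        Description=f"backup-{stamp}",
    )
    return resp["SnapshotId"]
```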
Pilot Light
As far as affordability and reliability are concerned, Pilot Light strikes a balance between the two. The key difference from Backup and Restore is that with Pilot Light, the core of your application is always running somewhere, whether in another Region or in another account and Region.
With Backup and Restore, for example, you might sync all of your data into an S3 bucket so that you can retrieve it in case of a disaster. With Pilot Light, by contrast, the data is synchronized to an always-on, always-available database replica.
Also, other core services, such as an EC2 instance with all of the necessary software already installed, will be available and ready to use at the touch of a button. An Auto Scaling policy would be in place for each of these EC2 instances so that they scale out quickly enough to meet your production needs. This strategy lowers the chance of extended downtime and relies on a small core of your architecture running at all times.
Multi-Site Active/Active
Having an exactly mirrored application across multiple AWS regions or data centers is the most resilient cloud disaster recovery strategy.
In the multi-site active/active strategy, you will be able to achieve the lowest RTO (recovery time objective) and RPO (recovery point objective). However, it is important to take into account the potential cost and complexity of operating active stacks in multiple locations.
A multi-AZ workload stack runs in every Region to ensure high availability. Data is replicated live between the data stores in each Region, and backed up as well. These data backups remain crucial to protect against disasters that lead to the loss or corruption of data.
Because of its cost and complexity, only the most demanding applications should use this DR method, though it delivers the lowest RTOs and RPOs of any DR technique.
Conclusion
It is impossible to build a Disaster Recovery plan that fits all circumstances; no “one size fits all” approach exists. Budget ahead of time and ensure that you don’t spend more than you can afford. It may seem like a lot of money is being spent on ‘what ifs?’, but if your applications cannot go down, you now have the tools to ensure they stay up.
AWS offers many services, so many that it can often get pretty confusing for beginners and experts alike. This is especially true when it comes to the many storage options AWS provides its users. Knowing the benefits and use cases of AWS storage services will help you design the best solution. In this article, we’ll be looking at S3 vs EBS vs EFS.
So, what are these services and what do they do? Let’s start with S3.
Amazon S3 Benefits
The Amazon Simple Storage Service (Amazon S3) is AWS’s object storage solution. If you’ve ever used a service like Google Drive or Dropbox, you’ll know generally what S3 can do. At first glance, S3 is simply a place to store files, photos, videos, and other documents. However, after digging deeper, you’ll uncover the many functionalities of S3, making it much more than the average object storage service.
Some of these functionalities include scalable solutions, which essentially means that if your project gets bigger or smaller than originally expected, S3 can grow or shrink to easily meet your needs in a cost-effective manner. S3 also helps you to easily manage data, giving you the ability to control who accesses your content. With S3 you have data protection against all kinds of threats. It also replicates your data for increased durability and lets you choose between different storage classes to save you money.
S3 is incredibly powerful, so powerful, in fact, that even tech-giant Netflix uses S3 for its services. If you like Netflix, you have AWS S3 to thank for its convenience and efficiency! In fact, many of the websites you access on a daily basis either run off of S3 or use content stored in S3. Let’s look at a couple of use cases to get a better idea of how S3 is used in the real world.
Amazon S3 Use Cases
Have you ever accidentally deleted something important? S3 has backup and restore capabilities to make sure a user doesn’t lose data through versioning and deletion protection. Versioning means that AWS will save a new version of a file every time it’s updated and deletion protection makes sure a user has the right permissions before deleting a file.
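Enabling versioning is a one-call operation. As a sketch (the bucket name is a placeholder, and `s3` would be `boto3.client("s3")` in real use):

```python
# Sketch: turn on S3 versioning so every overwrite or delete keeps the
# previous object version, protecting against accidental loss.

def enable_versioning(s3, bucket):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )
```

Once enabled, deleting an object only adds a delete marker; the older versions remain retrievable until you remove them explicitly.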
What would a company do during an unexpected power outage or if their on-premises data center suddenly crashed? S3 data is protected in an Amazon managed data center, the same data centers Amazon uses to host their world-famous shopping website. By using S3, users get a second storage option without having to directly pay the rent and utilities of a physical site.
Some businesses need to store financial, medical, or other data mandated by industry standards. AWS allows users to archive this type of data with S3 Glacier, one of the many S3 storage classes to choose from. S3 Glacier is a cost-effective solution for archiving and one of the best in the market today.
Amazon EBS Benefits
Amazon Elastic Block Store (Amazon EBS) is an umbrella term for AWS’s block storage services. EBS differs from S3 in that it provides storage volumes directly attached to EC2 (Elastic Compute Cloud) instances. EBS allows you to store files directly on an EC2 instance, letting the instance access them quickly and cheaply. So when you hear or read about EBS, think “EC2 storage.”
You can customize your EBS volumes with the configuration best suited for the workload. For example, if you have a workload that requires greater throughput, then you could choose a Throughput Optimized HDD EBS volume. If you don’t have any specific needs for your workload then you could choose an EBS General Purpose SSD. If you need a high-performance volume then an EBS Provisioned IOPS SSD volume would do the trick. If you don’t understand yet, that’s okay! There’s a lot to learn about these volume types and we’ll cover that all in our video courses.
Just remember that EBS works with EC2 in a similar way to how your hard drive works with your computer. An EBS lets you save files locally to an EC2 instance. This storage capacity allows your EC2 to do some pretty powerful stuff that would otherwise be impossible. Let’s look at a couple of examples.
Amazon EBS Use Cases
Many companies look for cheaper ways to run their databases. Amazon EBS provides both Relational and NoSQL Databases with scalable solutions that have low-latency performance. Slack, the messaging app, uses EBS to increase database performance to better serve customers around the world.
Another use case of EBS involves backing up your instances. Because EBS is an AWS native solution, the backups you create in EBS can easily be uploaded to S3 for convenient and cost-effective storage. This way you’ll always be able to recover to a certain point-in-time if needed.
Amazon EFS Benefits
Elastic File System (EFS) is Amazon’s way of allowing businesses to share file data across multiple EC2 or on-premises instances simultaneously. EFS is an elastic, serverless service. It automatically grows and shrinks with your business’s file storage needs, without you having to provision or manage anything.
Some advantages include being able to divide up your content between frequently accessed or infrequently accessed storage classes, helping you save some serious cash. EFS is an AWS native solution, so it also works with containers and functions like Amazon Elastic Container Service (ECS) and AWS Lambda.
Imagine an international company has a hundred EC2 instances with each hosting a web application (a website like this one). Hundreds of thousands of people are accessing these servers on a regular basis — therefore producing HUGE amounts of data. EFS is the AWS tool that would allow you to connect the data gathered from hundreds, even thousands of instances so you can perform data analytics and gather key business insights.
Amazon EFS Use Cases
Amazon Elastic File System (EFS) provides an easy-to-use, high-performing, and consistent file system needed for machine learning and big data workloads. Tons of data scientists use EFS to create the perfect environment for their heavy workloads.
EFS provides an effective means of managing content and web applications. EFS mimics many of the file structures web developers often use, making it easy to learn and implement in web applications like websites or other online content.
When companies like Discover and Ancestry switched from legacy storage systems to Amazon EFS they saved huge amounts of money due to decreased costs in management and time.
AWS Storage Summed Up
S3 is for object storage. Think photos, videos, files, and simple web pages.
EBS is for EC2 block storage. Think of a computer’s hard drive.
EFS is a file system for many EC2 instances. Think multiple EC2 instances and lots of data.
I hope that clears up AWS storage options. Of course, we can only cover so much in an article but check out our AWS courses for video lectures and hands-on labs to really learn how these services work.
Serverless computing has been on the rise the last few years, and whilst there is still a large number of customers who are not cloud-ready, there is a larger contingent of users who want to realize the benefits of serverless computing to maximize productivity and to enable newer and more powerful ways of building applications.
Serverless in cloud computing
Serverless is a cloud computing execution model in which the cloud provider allocates machine resources on demand and manages the servers on behalf of their customers. Cloud service providers still use servers to execute code for developers, which makes the term “serverless” a misnomer. There is always a server running in the background somewhere, and the cloud provider (AWS in this case) will run the infrastructure for you and leave you with the room to build your applications.
AWS Lambda
Within the AWS world, the principal Serverless service is AWS Lambda. Using AWS Lambda, you can run code for virtually any type of application or backend service without provisioning or managing servers. AWS Lambda functions can be triggered from many services, and you only pay for what you use.
So how does Lambda work? With Lambda, your code runs on highly available compute infrastructure, and Lambda performs all administration of the compute resources for you. This includes server and operating system maintenance, capacity provisioning and automatic scaling, code and security patch deployment, and code monitoring and logging. All you need to do is supply the code.
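The “just supply the code” part is literal: a Lambda function is a single handler. A minimal, purely illustrative Python handler looks like this:

```python
# Minimal Lambda handler: Lambda calls handler(event, context) for each
# invocation, and whatever you return becomes the invocation result.

def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

print(handler({"name": "AWS"}, None))  # {'statusCode': 200, 'body': 'Hello, AWS!'}
```

In a deployed function, `event` carries whatever the triggering service sends (an S3 notification, an API Gateway request, a scheduled event, and so on).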
AWS Lambda is a powerful service that has in recent years elevated AWS to be the leader in not only serverless architecture development, but within the cloud industry in general.
For those of you who don’t know – Lambda is a serverless, event-driven compute service that lets you run code without provisioning or managing servers, for virtually any type of application or backend service. Its serverless nature and wide appeal across use cases make AWS Lambda a useful tool for short-running compute operations in the cloud.
What makes Lambda better than other options?
Lambda handles operational and administrative tasks on your behalf, such as provisioning capacity, monitoring fleet health, deploying and running your code, and monitoring and logging it. Lambda’s key features and selling points are as follows:
• Highly scalable
• Completely event-driven
• Supports multiple languages and frameworks
• Pay-as-you-go pricing
The use cases for AWS Lambda are varied and cannot be sufficiently explored in one blog post. However, we have put together the top ten use cases in which Lambda shines the best.
1: Processing uploaded S3 objects
Once your files land in S3 buckets, you can immediately start processing them by Lambda using S3 object event notifications. Using AWS Lambda for thumbnail generation is a great example for this use case, as the solution is cost-effective and you won’t have to worry about scaling up since Lambda can handle any load you place on it.
The alternative to a serverless function handling this request is an EC2 instance spinning up every time a photo needs converting to a thumbnail, or an EC2 instance left running 24/7 just in case a thumbnail needs converting. This use case calls for a low-latency, highly responsive, event-driven architecture that allows your application to perform effectively at scale.
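A sketch of the first step of such a thumbnail function: pulling bucket/key pairs out of the S3 event notification. The event shape follows the S3 notification format; the handler body itself is illustrative.

```python
from urllib.parse import unquote_plus

# S3 event notifications URL-encode object keys, so decode before use.
def objects_from_s3_event(event):
    pairs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])
        pairs.append((bucket, key))
    return pairs

def handler(event, context):
    for bucket, key in objects_from_s3_event(event):
        # ...download the object, resize it, upload the thumbnail...
        print(f"would thumbnail s3://{bucket}/{key}")
```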
2: Document editing and conversion in a hurry
When objects are uploaded to Amazon S3 you can leverage AWS Lambda to perform changes to the material to help with any business goal you may have. This can also include editing document types and adding watermarks to important corporate documents.
For example, you could leverage a RESTful API, using Amazon S3 Object Lambda to convert documents to PDF and apply a watermark based on the requesting user. You could also convert a file from doc to PDF automatically upon being uploaded to a particular S3 Bucket. The use cases within this field are also unlimited.
3: Cleaning up the backend
Any consumer-oriented website needs to have a fast response time as one of its top priorities. Slow response times or even a visible delay can cause traffic to be lost.
It is likely that your consumers will simply switch to another site if your site is too busy dealing with background tasks to display the next page or search results in a timely manner. While some sources of delay, such as slow ISPs, are beyond your control, there are things you can do to improve your response time, and offloading background work is one of them.
How does AWS Lambda come into play when it comes to cloud computing?
Backend tasks should never delay frontend requests. If you need to parse user input before storing it in a database, or run other input-processing tasks that are not necessary for rendering the next page, you can hand the data off to an AWS Lambda function. Lambda can then clean up the data and send it to your database or application.
4: Creating and operating serverless websites
Maintaining a dedicated server, even a virtual one, is outdated. Provisioning instances, updating the OS, and similar chores take a lot of time and distract you from your core functions.
You don’t need to manage a single server or operating system when you use AWS Lambda and other AWS services to build a powerful website. For a basic version of this architecture you could use AWS API Gateway, DynamoDB, Amazon S3 and Amazon Cognito User Pools to achieve a simple, low effort and highly scalable website to solve any of your business use cases.
5: Real-time processing of bulk data
It is not unusual for an application, or even a website, to handle a certain amount of real-time data at any given time. Depending on how it is produced, this data can come from communication devices, peripherals interacting with the physical world, or user input devices. Generally, it arrives in short bursts, or even a few bytes at a time, in formats that are easy to parse.
Nevertheless, there are times when your application might need to handle large amounts of streaming input data, so moving it to temporary storage for later processing may not be the best option.
It is often necessary to identify specific values in a stream of data collected from a remote device, such as a telemetry device. You can handle these real-time tasks without hindering the operation of your main application by sending the stream to a Lambda application on AWS that pulls and processes the required information quickly.
6: Rendering pages in real-time
The Lambda service can play a significant role if you are using predictive page rendering in order to prepare webpages for display on your website.
For example, if you want to retrieve documents and multimedia files for the next requested page, you can use a Lambda-based application to retrieve them and perform the initial stages of rendering them for display, so they are ready when the next page is requested.
7: Automated backups
When you are operating an enterprise application in the cloud, certain manual tasks like backing up your database or other storage mediums can fall to the side. By taking the undifferentiated heavy lifting out of your operations you can focus on what delivers value.
Using Lambda scheduled events is a great way of performing housekeeping within your account. By using the boto3 Python libraries and AWS Lambda, you can create backups, check for idle resources, generate reports, and perform other common tasks quickly.
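A sketch of such a housekeeping job: selecting EBS snapshots older than a retention window. The 30-day window is an arbitrary choice, and the actual describe/delete calls are left as comments since they need a real AWS account.

```python
from datetime import datetime, timedelta, timezone

def stale_snapshot_ids(snapshots, retention_days=30, now=None):
    """Return the IDs of snapshots older than the retention window.

    `snapshots` is a list of dicts shaped like describe_snapshots()
    entries: {"SnapshotId": ..., "StartTime": datetime}.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]

# In a real scheduled Lambda you would fetch and delete:
#   snaps = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
#   for sid in stale_snapshot_ids(snaps):
#       ec2.delete_snapshot(SnapshotId=sid)
```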
8: Email Campaigns using AWS Lambda & SES
You can build out simple email campaigns to send mass emails to potential customers to improve your business outcomes.
Almost any organization that engages in marketing relies on mass mailing as part of its services. Traditional solutions often require hardware expenditures, license costs, and technical expertise.
You can build an in-house serverless email platform using AWS Lambda and Simple Email Service SES quite easily which can scale in line with your application.
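The sending side of such a platform is a single SES call per message. As a sketch (the addresses are placeholders; in real use `ses` would be `boto3.client("ses")` and the sender address or domain must be verified in SES first):

```python
# Sketch: send one plain-text email through Amazon SES.

def send_campaign_email(ses, sender, recipient, subject, body_text):
    resp = ses.send_email(
        Source=sender,
        Destination={"ToAddresses": [recipient]},
        Message={
            "Subject": {"Data": subject},
            "Body": {"Text": {"Data": body_text}},
        },
    )
    return resp["MessageId"]
```

A Lambda function looping over a subscriber list and calling this per recipient is the core of the serverless campaign described above.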
9: Real-time log analysis
You could easily build a Lambda function to check log files from CloudTrail or CloudWatch.
Amazon CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, and optimize resource utilization. AWS CloudTrail can be used to track all API calls made within your account.
It is possible to search the logs for specific events or log entries as they occur in the logs and be notified of them via SNS when they occur. You can also very easily implement custom notification hooks to Slack, Zendesk, or other systems by calling their API endpoint within Lambda.
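CloudWatch Logs delivers subscription events to Lambda as base64-encoded, gzip-compressed JSON. A sketch of the decode step (the SNS publish on a match is left as a comment):

```python
import base64
import gzip
import json

def decode_log_payload(event):
    """Unpack the awslogs envelope into the original JSON document."""
    raw = base64.b64decode(event["awslogs"]["data"])
    return json.loads(gzip.decompress(raw))

# In a real function you would then scan decoded["logEvents"] for the
# pattern of interest and call sns.publish(...) when it appears.
```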
10: AWS Lambda Use Case for Building Serverless Chatbot
Building and running chatbots is not only time-consuming but also expensive. Developers must provision, run, and scale the infrastructure that executes the chatbot code. With AWS Lambda, however, you can run a scalable chatbot architecture quite easily, without provisioning all of the hardware you would otherwise need outside the cloud.
Basics of Amazon Detective (Included in AWS SAA-C03 Exam)
Detective is integrated with Amazon GuardDuty, AWS Security Hub, and partner security products, from which you can easily navigate into Detective. You don’t have to organize any data or develop, configure, or tune queries and algorithms. There are no upfront costs; customers pay only for the events analyzed, with no additional software to deploy or other feeds to subscribe to.
Testimonial: Passed SAA-C03!
Hi, just got the word, I passed the cert!
I mainly used Maarek’s videos for the initial learning, did Tutorials Dojo for the practice tests, and used Cantrill’s to touch up on areas where I lacked knowledge.
My next cert is prob gonna be SysOps. This time I plan to just use Cantrill’s videos, because I feel they helped me the most.
Today I got the notification that I am officially an AWS Certified Solutions Architect and I’m so happy!
I was nervous because I had been studying for the C02 version, but at the last minute I registered for the C03 thinking it was somehow “better” because it was more up to date (?). I didn’t know how different it would be and with the announcement that Stephane was yet to release an updated version for this exam made me even more anxious. But it turned out well!
I used Stephane’s Udemy course and the practice exams from Tutorials Dojo to help me study. I think the practice exams were the most useful as they helped me understand better how the questions would be presented.
Looking back now, I don’t think there was a major difference between C02 and C03, so if you are worried that you haven’t studied specifically for C03, I wouldn’t worry too much.
My experience with Practice exam –
I found Stephane’s practice exams to be more challenging, and they really helped me fill in the gaps. The options were very similar to each other, so guessing was not an option in Stephane’s exams.
With TD, the questions were worded correctly but the options were terrible. Even if you don’t know the answer, you can guess it. Some options were like (Which one of these is a planet? # Sun # Earth # Cow # AWS), that easy to guess, and that’s why I got 85% on the second test and still had to review every question, because I was scoring without really knowing the answers.
Things of note:
Use the keyboard shortcuts (e.g. Alt-N for next question). Over 65 questions, this will save at least 1-2 minutes.
Attempt every question on the first read; even if you flag it to come back to, make a go of it there and then. That way, if you time out, you’ve put in your first/gut-feel answer. More often than not, during review you won’t change it anyway.
Don’t get disheartened. There are 15 non-scoring questions so conceivably one could get 15 plus 12-14 more wrong and still hit 720+ and pass!
Look for the keywords and the obviously wrong answers. Most of the time it will be a choice of 2 answers, with maybe a keyword to nail home the right answer. I found a lot of keywords/points that made me think ‘yep – has to be that’.
Read the entire question and all of the answers, even if sure on the right answer, just in case…
Discover what works best for you in terms of learning. Some people are more suited to books, some are hands on/projects, some are audio/video etc. Finding your way helps make learning something new a lot easier.
If at home, test your machine the week before and then again the day before, and don’t reboot after that. Remove as much stress from the event as possible.
I’m preparing for SAA-C03, and when questions ask me to choose the correct routing policy I always struggle between Latency, Geolocation and Geoproximity.
Especially with these kinds of scenarios:
Latency
I’ve users in the US and in Europe; those in Europe have perf issues, so you set up your application in Europe as well, and you pick which routing policy?
Obviously ;-P I selected Geolocation, because they are in Europe and I want them to use the EU instances!!! It should improve the latency as well 🙁 , or at least that’s logical to me, whereas with a Latency-based policy I cannot be sure that they will use my servers in Europe.
2. Geolocation and Geoproximity
I don’t have a specific case to show, but my understanding is that when I need to change the bias, I pick Geoproximity-based routing. The problem for me is understanding when a simple Geolocation policy is not enough (any tips?). Is it that Geolocation is used mainly to restrict content and for internationalization? For country/compliance-based restrictions, I understand it is better to use CloudFront, so routing is not even an option in such cases…
Comments:
#1: Geolocation isn’t about performance; that’s a secondary effect, not its primary function.
Latency based routing is there for a reason, to ensure the lowest latency .. and latency (generally) is a good indicator of performance.. especially for any applications which are latency sensitive.
Geo-location is more about delivering content from a localized server .. it might be about data location, language, local laws.
These are taken from my lessons on it: geolocation doesn’t return the ‘closest’ record. If you have a record tagged UK and one tagged France and you are in Germany, it won’t return either of those… it would match Germany, then Europe, then the default, etc.
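That resolution order (country, then continent, then default) can be sketched as a toy resolver; this is just a model of the matching behavior, not the Route 53 implementation:

```python
def resolve_geolocation(records, country=None, continent=None):
    """Toy model of Route 53 geolocation matching: most specific wins.

    Each record is a dict like {"country": "GB", "value": "uk-server"},
    {"continent": "EU", "value": ...}, or {"default": True, "value": ...}.
    """
    for rec in records:
        if country and rec.get("country") == country:
            return rec["value"]
    for rec in records:
        if continent and rec.get("continent") == continent:
            return rec["value"]
    for rec in records:
        if rec.get("default"):
            return rec["value"]
    return None  # no record applies and no default was configured

records = [
    {"country": "GB", "value": "uk-server"},
    {"country": "FR", "value": "fr-server"},
    {"default": True, "value": "fallback"},
]
# A viewer in Germany matches neither UK nor FR, so the default answers.
print(resolve_geolocation(records, country="DE", continent="EU"))  # fallback
```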
The different routing types are pretty easy to understand once you think about them in the right way.
AWS WAF and AWS Shield help protect your AWS resources from web exploits and DDoS attacks.
AWS WAF is a web application firewall service that helps protect your web apps from common exploits that could affect app availability, compromise security, or consume excessive resources.
AWS Shield provides expanded DDoS attack protection for your AWS resources. Get 24/7 support from our DDoS response team and detailed visibility into DDoS events.
We’ll now go into more detail on each service.
AWS Web Application Firewall (WAF)
AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.
AWS WAF helps protect web applications from attacks by allowing you to configure rules that allow, block, or monitor (count) web requests based on conditions that you define.
These conditions include IP addresses, HTTP headers, HTTP body, URI strings, SQL injection and cross-site scripting.
AWS WAF can allow or block web requests based on strings that appear in the requests, using string match conditions.
For example, AWS WAF can match values in the following request parts:
Header – A specified request header, for example, the User-Agent or Referer header.
HTTP method – The HTTP method, which indicates the type of operation that the request is asking the origin to perform. CloudFront supports the following methods: DELETE, GET, HEAD, OPTIONS, PATCH, POST, and PUT.
Query string – The part of a URL that appears after a ? character, if any.
URI – The URI path of the request, which identifies the resource, for example, /images/daily-ad.jpg.
Body – The part of a request that contains any additional data that you want to send to your web server as the HTTP request body, such as data from a form.
Single query parameter (value only) – Any parameter that you have defined as part of the query string.
All query parameters (values only) – As above, but inspects all parameters within the query string.
New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns.
When AWS services receive requests for web sites, the requests are forwarded to AWS WAF for inspection against defined rules.
Once a request meets a condition defined in the rules, AWS WAF instructs the underlying service to either block or allow the request based on the action you define.
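As an illustration, here is a minimal sketch of what a single AWS WAFv2 rule might look like in the JSON you supply when creating a web ACL; the rule name, priority, and matched path are hypothetical:

```python
# A sketch of an AWS WAFv2 rule (part of a CreateWebACL request body)
# that blocks any request whose URI path contains "/admin".
# Rule name, priority, and path are hypothetical.

block_admin_rule = {
    "Name": "BlockAdminPath",
    "Priority": 1,
    "Statement": {
        "ByteMatchStatement": {
            "SearchString": "/admin",            # string match condition
            "FieldToMatch": {"UriPath": {}},     # inspect the URI path
            "TextTransformations": [
                {"Priority": 0, "Type": "LOWERCASE"}
            ],
            "PositionalConstraint": "CONTAINS",
        }
    },
    "Action": {"Block": {}},                      # could also be Allow or Count
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockAdminPath",
    },
}
```

Swapping `{"Block": {}}` for `{"Count": {}}` turns the same rule into a monitoring-only rule, which is a common way to test new rules before enforcing them.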
With AWS WAF you pay only for what you use.
AWS WAF pricing is based on how many rules you deploy and how many web requests your web application receives.
There are no upfront commitments.
AWS WAF is tightly integrated with the Amazon CloudFront and Application Load Balancer (ALB) services.
When you use AWS WAF on Amazon CloudFront, rules run in all AWS Edge Locations, located around the world close to end users.
This means security doesn’t come at the expense of performance.
Blocked requests are stopped before they reach your web servers.
When you use AWS WAF on an Application Load Balancer, your rules run in the Region and can be used to protect internet-facing as well as internal load balancers.
Web Traffic Filtering
AWS WAF lets you create rules to filter web traffic based on conditions that include IP addresses, HTTP headers and body, or custom URIs.
This gives you an additional layer of protection from web attacks that attempt to exploit vulnerabilities in custom or third-party web applications.
In addition, AWS WAF makes it easy to create rules that block common web exploits like SQL injection and cross-site scripting.
AWS WAF allows you to create a centralized set of rules that you can deploy across multiple websites.
This means that in an environment with many websites and web applications you can create a single set of rules that you can reuse across applications rather than recreating that rule on every application you want to protect.
Full-featured API
AWS WAF can be completely administered via APIs.
This provides organizations with the ability to create and maintain rules automatically and incorporate them into the development and design process.
For example, a developer who has detailed knowledge of the web application could create a security rule as part of the deployment process.
This capability to incorporate security into your development process avoids the need for complex handoffs between application and security teams to make sure rules are kept up to date.
AWS WAF can also be deployed and provisioned automatically with AWS CloudFormation sample templates that allow you to describe all security rules you would like to deploy for your web applications delivered by Amazon CloudFront.
AWS WAF is integrated with Amazon CloudFront, which supports custom origins outside of AWS – this means you can protect web sites not hosted in AWS.
Support for IPv6 allows AWS WAF to inspect HTTP/S requests coming from both IPv6 and IPv4 addresses.
Real-time visibility
AWS WAF provides real-time metrics and captures raw requests that include details about IP addresses, geo locations, URIs, User-Agent and Referers.
AWS WAF is fully integrated with Amazon CloudWatch, making it easy to setup custom alarms when thresholds are exceeded, or attacks occur.
This information provides valuable intelligence that can be used to create new rules to better protect applications.
AWS Shield
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS.
AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection.
There are two tiers of AWS Shield – Standard and Advanced.
AWS Shield Standard
All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge.
AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target web sites or applications.
When using AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks.
AWS Shield Advanced
Provides higher levels of protection against attacks targeting applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator and Amazon Route 53 resources.
In addition to the network and transport layer protections that come with Standard, AWS Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF, a web application firewall.
AWS Shield Advanced also gives you 24×7 access to the AWS DDoS Response Team (DRT) and protection against DDoS related spikes in your Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator and Amazon Route 53 charges.
AWS Shield Advanced is available globally on all Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 edge locations.
Origin servers can be Amazon S3, Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), or a custom server outside of AWS.
AWS Shield Advanced includes DDoS cost protection, a safeguard from scaling charges because of a DDoS attack that causes usage spikes on protected Amazon EC2, Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, or Amazon Route 53.
If any of the AWS Shield Advanced protected resources scale up in response to a DDoS attack, you can request credits via the regular AWS Support channel.
AWS Simple Workflow vs AWS Step Function vs Apache Airflow
There are a number of different services and products on the market which support building logic and processes within your application flow. While these services have largely similar pricing, there are different use cases for each service.
AWS Simple Workflow Service (SWF), AWS Step Functions, and Apache Airflow all seem very similar, and at times it may be difficult to distinguish them. This article highlights the similarities and differences, benefits, drawbacks, and use cases of these services, which are seeing growing demand.
What is AWS Simple Workflow Service?
The AWS Simple Workflow Service (SWF) allows you to coordinate work between distributed applications.
A task is an invocation of a logical step in an Amazon SWF application. Amazon SWF interacts with workers which are programs that retrieve, process, and return tasks.
As part of the coordination of tasks, execution dependencies, scheduling, and concurrency are managed accordingly.
What are AWS Step Functions?
AWS Step Functions enables you to coordinate distributed applications and microservices through visual workflows.
Your workflow is visualized as a state machine that describes the steps, their relationships, and their inputs and outputs. A state machine contains a number of states, each representing an individual step in the workflow diagram.
The states in your workflow can perform work, make choices, pass parameters, initiate parallel execution, manage timeouts, or terminate your workflow.
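For example, a small state machine definition in Amazon States Language (the JSON you upload to Step Functions) might look like the following sketch; the state names and Lambda ARN are made up:

```python
import json

# A minimal Amazon States Language (ASL) definition sketch: a Choice
# state routes to either a manual-approval Task or an automatic Pass
# state. State names and the Lambda ARN are hypothetical.

state_machine = {
    "Comment": "Approve-or-reject workflow sketch",
    "StartAt": "CheckAmount",
    "States": {
        "CheckAmount": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.amount", "NumericGreaterThan": 1000,
                 "Next": "ManualApproval"}
            ],
            "Default": "AutoApprove",
        },
        "ManualApproval": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:approve",
            "End": True,
        },
        "AutoApprove": {
            "Type": "Pass",
            "Result": {"approved": True},
            "End": True,
        },
    },
}

definition = json.dumps(state_machine)  # this JSON string is what you upload
```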
What is Apache Airflow?
Firstly, Apache Airflow is a third party tool – and is not an AWS Service. Apache Airflow is an open-source workflow management platform for data engineering pipelines.
This powerful and widely-used open-source workflow management system (WMS) allows programmatic creation, scheduling, orchestration, and monitoring of data pipelines and workflows.
Using Airflow, you can author workflows as Directed Acyclic Graphs (DAGs) of tasks, and Apache Airflow can integrate with many AWS and non-AWS services such as: Amazon Glacier, Amazon CloudWatch Logs and Google Cloud Secret Manager.
Here’s an overview of some use cases of each service.
Choose AWS Simple Workflow Service if you are building:
Order management systems
Multi-stage message processing systems
Billing management systems
Video encoding systems
Image conversion systems
Choose AWS Step Functions if you want to include:
Microservice Orchestration
Security and IT Automation
Data Processing and ETL Orchestration
Media processing
Choose Apache Airflow if you are building:
ETL pipelines that extract data from multiple sources, and run Spark jobs or other data transformations
Machine learning model training
Automated generation of reports
Backups and other DevOps tasks
Conclusion
Each of the services discussed has unique use cases and deployment considerations. It is always necessary to fully determine your solution requirements before you make a decision as to which service best fits your needs.
There are many things that AWS actively tries to help you with, and cost optimization is one of them. Cost optimization, simply defined, comes down to helping you reduce your cloud spend in specific areas without impacting the efficacy of your architecture and how it functions. Cost optimization is one of the pillars of the Well-Architected Framework, and we can use it to help us move towards a more streamlined and cost-efficient workload.
AWS Well-Architected Framework enables cloud architects to build fast, reliable, and secure infrastructures for a wide variety of workloads and applications. It is built around six pillars:
Operational excellence
Security
Reliability
Performance efficiency
Cost optimization
Sustainability
The Well-Architected Framework provides customers and partners with a consistent approach for evaluating architectures and implementing scalable designs on AWS. It is applicable for use whether you are a burgeoning start-up or an enterprise corporation using the AWS Cloud.
In this article however, we are going to focus on exactly what is cost optimization, explore some key principles of how it is defined and demonstrate some use cases as to how it could help you when architecting your own AWS Solutions.
What is Cost Optimization?
Besides being one of the pillars of the Well-Architected Framework, Cost Optimization is a broad yet simple term, and is defined by AWS as follows:
“The Cost Optimization pillar includes the ability to run systems to deliver business value at the lowest price point.”
It provides a comprehensive overview of the general design principles, best practices, and questions related to cost optimization. Once understood, it can have a massive impact on how you are launching your various applications on AWS.
As well as a definition of Cost Optimization, there are some key design principles which we'll explore in order to make sure we are on the right track with enhancing our workloads:
Implement Cloud Financial Management
Cloud financial management is essential for achieving financial success and maximizing the value of your cloud investment. As your organization moves into this new era of technology and usage management, you need to devote resources and time to developing capability in this new area. As with security or operational excellence, becoming a cost-efficient organization requires building capability through knowledge building, programs, resources, and processes.
Adopt a consumption model
If you want to save money on computing resources, it is important to pay only for what you require, and to increase or decrease usage based on the needs of the business, without relying on elaborate forecasting.
Measure overall efficiency
It is important to measure the business output of a workload as well as the costs associated with delivering it. You can use this measure to understand the gains you make from increasing output and reducing costs. Efficiency isn't only about the financials, either: measuring it can also keep any one server from becoming under- or over-utilized, which helps from a performance standpoint as well.
Stop spending money on unnecessary activities
When it comes to data center operations, AWS handles everything from racking and stacking to powering the servers. By utilizing managed services, you can also remove the operational burden of managing operating systems and applications. The advantage of this approach is that you can focus on your customers and your business projects rather than on your IT infrastructure.
Analyze and attribute expenditure
There is no doubt that the cloud allows for easy identification of system usage and cost, which in turn allows for transparent attribution of IT costs to individual workload owners. Achieving this helps workload owners measure their return on investment (ROI), reduce their costs, and optimize their resources.
Now that we fully understand what we mean when we say ‘Cost Optimization on AWS’, we are going to show some ways that we can use cost optimization principles in order to improve the overall financial performance of our workloads on Amazon S3, and Amazon EC2:
Cost optimization on S3
Amazon S3 is an object storage service which provides eleven nines (99.999999999%) of durability and near-infinite, low-cost storage. There are a number of ways to further optimize your costs and ensure you are adhering to the Cost Optimization pillar of the Well-Architected Framework.
S3 Intelligent Tiering
Amazon S3 Intelligent-Tiering is a storage class intended to optimize storage costs by automatically moving data to the most cost-effective access tier as usage patterns change over time. For a small monthly monitoring fee, Intelligent-Tiering tracks access patterns and moves objects that have not been accessed to lower-cost access tiers, while the frequent access tiers retain low latency and high throughput. It can also be configured to automatically archive data that can be accessed asynchronously.
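One common way to adopt Intelligent-Tiering is through a lifecycle rule. A sketch of the configuration you might pass to S3's PutBucketLifecycleConfiguration API follows; the prefix and timing are hypothetical:

```python
# A sketch of an S3 lifecycle configuration (the payload for
# PutBucketLifecycleConfiguration) that moves objects under "logs/"
# into the Intelligent-Tiering storage class after 30 days.
# Prefix and timing are hypothetical.

lifecycle_config = {
    "Rules": [
        {
            "ID": "logs-to-intelligent-tiering",
            "Filter": {"Prefix": "logs/"},     # only objects under this prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}
            ],
        }
    ]
}
```

Alternatively, objects can be written directly to the Intelligent-Tiering class at upload time, which avoids waiting for the transition.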
S3 Storage Class Analysis
Amazon S3 Storage Class Analysis analyzes storage access patterns to help you decide when to transition the right data to the right storage class. This relatively new Amazon S3 analytics feature monitors your data access patterns and tells you when data should be moved to a lower-cost storage class based on the frequency with which it is accessed.
Cost optimization on EC2
Amazon EC2 provides virtual machines in the cloud that can be scaled up or down dynamically as your application grows. There are a number of ways you can optimize your spend on EC2 depending on your use case, whilst still delivering excellent performance.
Savings Plans
In exchange for a commitment to a specific instance family within an AWS Region (for example, C7 in us-west-2), EC2 Instance Savings Plans offer savings of up to 72 percent off On-Demand prices.
EC2 Instance Savings Plans allow you to switch between instance sizes within the family (for example, from c5.xlarge to c5.2xlarge) or operating systems (such as from Windows to Linux), or change from Dedicated to Default tenancy, while continuing to receive the discounted rate.
If you are using large amounts of particular EC2 instances, buying a Savings Plan allows you to flexibly save money on your compute spend.
Right-sizing EC2 Instances
Right-sizing is about matching instance types and sizes to your workload performance and capacity needs at the lowest possible cost. Furthermore, it involves analyzing deployed instances and identifying opportunities to eliminate or downsize them without compromising capacity or other requirements.
The Amazon EC2 service offers a variety of instance types tailored to fit the needs of different users. There are a number of instance types that offer different combinations of resources such as CPU, memory, storage, and networking, so that you can choose the right resource mix for your application.
You can use Trusted Advisor to get recommendations on which particular EC2 instances are running at low utilization. This takes a lot of undifferentiated heavy lifting out of your hands, as AWS tells you the exact instances you need to resize.
Using Spot Capacity where possible
Spot capacity is spare capacity that AWS has within its data centers, which is offered to you at a large discount (up to 90%). The downside is that when AWS needs the capacity back, you are given a two-minute warning, after which your instances are terminated.
Applications requiring constant availability are not well suited to Spot Instances. Spot Instances are recommended for stateless, fault-tolerant, and flexible applications. They can be used for big data, containerized workloads, continuous integration and delivery (CI/CD), stateless web servers, high performance computing (HPC), and rendering workloads, as well as anything else that tolerates interruption and needs to run at low cost.
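A quick back-of-the-envelope comparison shows why Spot is attractive for interruptible work; the prices below are illustrative, not real quotes:

```python
# Back-of-the-envelope comparison of On-Demand vs Spot cost for a
# fault-tolerant batch job. Prices and discount are illustrative.

on_demand_per_hour = 0.10          # hypothetical On-Demand price per hour
spot_discount = 0.70               # Spot often runs 60-90% below On-Demand
spot_per_hour = on_demand_per_hour * (1 - spot_discount)

hours = 1000                       # total instance-hours for the job
on_demand_cost = on_demand_per_hour * hours
spot_cost = spot_per_hour * hours

print(f"On-Demand: ${on_demand_cost:.2f}, Spot: ${spot_cost:.2f}")
# Even after budgeting extra hours to absorb interruptions, Spot
# typically wins for workloads that can checkpoint and resume.
```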
There are many considerations when it comes to optimizing cost on AWS, and the Cost Optimization pillar provides us all of the resources we need to be fully enabled in our AWS journey.
Source: This article originally appeared on: https://digitalcloud.training/what-does-aws-mean-by-cost-optimization/
AWS Amplify is a set of tools and services that enables mobile and front-end web developers to build secure, scalable full stack applications powered by AWS. Amplify includes an open-source framework with use-case-centric libraries and a powerful toolchain to create and add cloud-based features to your application, and a web-hosting service to deploy static web applications.
AWS SAM
AWS SAM is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. With just a few lines per resource, you can define the application you want and model it using YAML. During deployment, AWS SAM transforms and expands the AWS SAM syntax into AWS CloudFormation syntax, enabling you to build serverless applications faster.
Amazon Cognito
Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0.
Vue Javascript Framework
Vue JavaScript framework is a progressive framework for building user interfaces. Unlike other monolithic frameworks, Vue is designed to be incrementally adoptable. The core library focuses on the view layer only and is easy to pick up and integrate with other libraries or existing projects. Vue is also perfectly capable of powering sophisticated single-page applications when used in combination with modern tooling and supporting libraries.
AWS Cloud9
AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal. AWS Cloud9 makes it easy to write, run, and debug serverless applications. It pre-configures the development environment with all the SDKs, libraries, and plugins needed for serverless development.
Swagger API
Swagger API is an open-source software framework backed by a large ecosystem of tools that help developers design, build, document, and consume RESTful web services. Swagger also allows you to understand and test your backend API specifically.
Amazon DynamoDB
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-Region, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second.
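As a sketch, a DynamoDB item is just a dictionary of attributes keyed by the table's partition (and optionally sort) key; the table design below is hypothetical, and note that in Python DynamoDB represents numbers as `Decimal`:

```python
from decimal import Decimal

# A sketch of a DynamoDB item and the shape you would pass to PutItem
# via a boto3 Table resource. Table and attribute names are
# hypothetical; DynamoDB stores numbers as Decimal, not float.

order_item = {
    "pk": "CUSTOMER#42",          # partition key
    "sk": "ORDER#2024-01-15",     # sort key
    "status": "PENDING",
    "total": Decimal("19.99"),
}

# With boto3 this would be sent as:
#   table = boto3.resource("dynamodb").Table("orders")
#   table.put_item(Item=order_item)
```

The composite pk/sk design shown is what lets a single table answer queries like "all orders for customer 42" with one Query call.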
Amazon EventBridge
Amazon EventBridge makes it easy to build event-driven applications because it takes care of event ingestion, delivery, security, authorization, and error handling for you. To achieve the promises of serverless technologies with event-driven architecture, such as being able to individually scale, operate, and evolve each service, the communication between the services must happen in a loosely coupled and reliable environment. Event-driven architecture is a fundamental approach for integrating independent systems or building up a set of loosely coupled systems that can operate, scale, and evolve independently and flexibly.
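To make this concrete, here is a sketch of a custom event you might publish with EventBridge's PutEvents API, together with an event pattern a rule could use to match it; the source and detail-type strings are made up:

```python
import json

# A sketch of an EventBridge custom event (one entry in a PutEvents
# request) and an event pattern a rule could use to match it.
# Source and detail-type values are hypothetical.

event_entry = {
    "Source": "com.example.orders",
    "DetailType": "OrderPlaced",
    "Detail": json.dumps({"orderId": "1234", "total": 19.99}),
    "EventBusName": "default",
}

# A rule carrying this pattern would match the event above and could
# route it to a Lambda function, an SQS queue, a Step Functions state
# machine, and so on.
event_pattern = {
    "source": ["com.example.orders"],
    "detail-type": ["OrderPlaced"],
}
```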
Amazon DynamoDB Streams
Amazon DynamoDB Streams is an ordered flow of information about changes to items in a DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.
AWS Step Functions
AWS Step Functions is a serverless function orchestrator that makes it easy to sequence Lambda functions and multiple AWS services into business-critical applications. Through its visual interface, you can create and run a series of checkpointed and event-driven workflows that maintain the application state. The output of one step acts as input to the next, and each step in your application runs in order, as defined by your business logic. Orchestrating a series of individual serverless applications, managing retries, and debugging failures can be challenging, and as your distributed applications become more complex, the complexity of managing them also grows. Step Functions automatically manages sequencing, error handling, retry logic, and state, removing a significant operational burden from your team.
When your processing requires a series of steps, use Step Functions to build a state machine to orchestrate the workflow. This lets you keep your Lambda functions focused on business logic.
Returning to the baker in our analogy, when an order to make a pie comes in, the order is actually a series of related but distinct steps. Some steps have to be done first or in sequence, and some can be done in parallel. Some take longer than others. Someone with expertise in each step performs that step. To make things go smoothly and let the experts stick to their expertise, you need a way to manage the flow of steps and keep whoever needs to know informed of the status.
Amazon Simple Notification Service (Amazon SNS) is a fully managed messaging service for both system-to-system and app-to-person (A2P) communication. The service enables you to communicate between systems through publish/subscribe (pub/sub) patterns that enable messaging between decoupled microservice applications or to communicate directly to users via SMS, mobile push, and email. The system-to-system pub/sub functionality provides topics for high-throughput, push-based, many-to-many messaging. Using Amazon SNS topics, your publisher systems can fan out messages to a large number of subscriber systems or customer endpoints including Amazon Simple Queue Service (Amazon SQS) queues, Lambda functions, and HTTP/S, for parallel processing. The A2P messaging functionality enables you to send messages to users at scale using either a pub/sub pattern or direct-publish messages using a single API.
Observability extends traditional monitoring with approaches that address the kinds of questions you want to answer about your applications. Business metrics are sometimes an afterthought, only coming into play when someone in the business asks the question, and you have to figure out how to get the answers from the data you have. If you build in these needs when you’re building the application, you’ll have much more visibility into what’s happening within your application.
Logs, metrics, and distributed tracing are often known as the three pillars of observability. These are powerful tools that, if well understood, can unlock the ability to build better systems.
Logs provide valuable insights into how you measure your application health. Event logs are especially helpful in uncovering growing and unpredictable behaviors that components of a distributed system exhibit. Logs come in three forms: plaintext, structured, and binary.
Metrics are a numeric representation of data measured over intervals of time about the performance of your systems. You can configure and receive automatic alerts when certain metrics are met.
Tracing can provide visibility into both the path that a request traverses and the structure of a request. An event-driven or microservices architecture consists of many different distributed parts that must be monitored. Imagine a complex system consisting of multiple microservices, and an error occurs in one of the services in the call chain. Even if every microservice is logging properly and logs are consolidated in a central system, it can be difficult to find all relevant log messages.
AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application and shows a map of your application’s underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.
Amazon CloudWatch Logs Insights is a fully managed service that is designed to work at cloud scale with no setup or maintenance required. The service analyzes massive logs in seconds and gives you fast, interactive queries and visualizations. CloudWatch Logs Insights can handle any log format and autodiscovers fields from JSON logs.
Amazon CloudWatch ServiceLens is a feature that enables you to visualize and analyze the health, performance, and availability of your applications in a single place. CloudWatch ServiceLens ties together CloudWatch metrics and logs, as well as traces from X-Ray, to give you a complete view of your applications and their dependencies. This enables you to quickly pinpoint performance bottlenecks, isolate root causes of application issues, and determine impacted users.
Characteristics of modern applications that challenge traditional approaches
Short-lived resources
More devices, services, and data
Faster release velocity
Importance of user experience
AWS services that address the three pillars of observability
CloudWatch Logs and Logs Insights
X-Ray
CloudWatch metrics
ServiceLens
Ways you can use CloudWatch metrics
Use default operational metrics.
Alarm based on thresholds, and trigger automated actions.
Correlate logs and metrics.
Create custom metrics.
Use embedded metrics format to create metrics from log files.
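As an example of the embedded metric format (EMF), the sketch below builds the JSON structure that, when written to a log stream, makes CloudWatch extract a metric automatically; the namespace and dimension values are hypothetical:

```python
import json

# A sketch of a CloudWatch Embedded Metric Format (EMF) log line.
# Printing this JSON from (say) a Lambda function makes CloudWatch
# extract "OrdersPlaced" as a metric without a PutMetricData call.
# Namespace and dimension values are hypothetical.

emf_record = {
    "_aws": {
        "Timestamp": 1700000000000,   # epoch milliseconds
        "CloudWatchMetrics": [
            {
                "Namespace": "MyApp",
                "Dimensions": [["Service"]],
                "Metrics": [{"Name": "OrdersPlaced", "Unit": "Count"}],
            }
        ],
    },
    "Service": "checkout",            # dimension value
    "OrdersPlaced": 1,                # metric value
}

log_line = json.dumps(emf_record)     # write this line to stdout/logs
```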
User pools can act as an identity provider with profile management and issue a JWT that can be used for authorization. You can also configure sign in to a web or mobile application through Amazon Cognito with user pools.
Federated identities (identity pools) cannot be used as an identity provider (IdP), but you can use them to create unique identities for users and federate those users with identity providers. With federated identities, you can obtain temporary AWS credentials (an access key ID, secret access key, and session token) that can be used to access AWS resources.
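A JWT from a user pool is three base64url segments separated by dots. The sketch below shows how the (unverified) claims can be read out of the middle segment; the token is fabricated, and real code must verify the signature against the user pool's JWKS before trusting any claim:

```python
import base64
import json

# Sketch of JWT structure only: header.payload.signature, each
# base64url-encoded. The token below is fabricated; production code
# must verify the signature before trusting any claim.

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header = b64url(json.dumps({"alg": "RS256", "kid": "example"}).encode())
payload = b64url(json.dumps({"sub": "user-123",
                             "cognito:groups": ["admins"]}).encode())
token = f"{header}.{payload}.fake-signature"

def unverified_claims(jwt: str) -> dict:
    """Decode the middle segment of a JWT (no signature check!)."""
    seg = jwt.split(".")[1]
    seg += "=" * (-len(seg) % 4)     # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(seg))

claims = unverified_claims(token)
```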
Using Amazon EventBridge and Amazon SNS to Decouple Components:
In an asynchronous design, the client sends a request and may get an acknowledgement that the event was received, but it doesn’t get a response that includes the results of the request.
Amazon EventBridge and Amazon SNS are both messaging services that trigger Lambda asynchronously and allow you to fan out for parallel processing.
The advantage of asynchronous processing is that you reduce the dependencies on downstream activities, and this reduction improves responsiveness back to the client. This means you don’t have to put logic into your code to deal with long wait times or to handle errors that might occur downstream. Once your client has successfully handed off the request, you can move on.
If you go into your design thinking in terms of asynchronous connections, you can create a much more flexible and resilient application.
In asynchronous communications, it's important that the client has a way to know when the downstream task is done so that it can complete the appropriate next steps. There are three common patterns the client might use to get the status of the asynchronous transaction.
Three types of event sources for EventBridge
AWS services
Custom applications
Software as a service (SaaS) applications
Two ways Amazon SNS supports event-driven design
You can filter events with subscriber-specific filtering policies.
You can build wide fan-out patterns.
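The subscriber-specific filtering can be sketched as a naive matcher that mimics how SNS compares a subscription's filter policy against a message's attributes; real policies support more operators (numeric ranges, prefixes, anything-but), and the attribute names here are made up:

```python
# A sketch of an SNS subscription filter policy and a naive matcher
# showing how SNS decides whether a message reaches a subscriber.
# Attribute names are hypothetical; real policies support more
# operators than exact string membership.

filter_policy = {
    "event_type": ["order_placed", "order_cancelled"],
    "region": ["eu"],
}

def matches(policy: dict, attributes: dict) -> bool:
    """Every policy key must appear in the message with an allowed value."""
    return all(attributes.get(key) in allowed
               for key, allowed in policy.items())

msg_attrs = {"event_type": "order_placed", "region": "eu", "extra": "x"}
assert matches(filter_policy, msg_attrs)                   # delivered
assert not matches(filter_policy, {"event_type": "order_placed"})  # dropped
```

Extra attributes on the message (like `extra` above) are ignored; only the keys named in the policy are checked.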
Three ways the schema registry helps developers
Stores event structures that can be searched
Generates code bindings
Automatically discovers and adds schemas to the registry with schema discovery
What are the four types of serverless resources supported in AWS SAM templates?
Lambda functions
API Gateway APIs
DynamoDB tables
Step Functions state machines
Declarative Programming vs Imperative Programming
With declarative programming, you say what you want (abstract). Compared to imperative programming, developers need less knowledge of specific language syntax and toolchains. But you do give up flexibility in terms of things like full execution control, looping constructs, and advanced techniques (such as OO inheritance, threading, and automated testing). AWS CloudFormation templates use declarative programming.
With imperative programming, you say how to do it (procedural). Compared to declarative programing, developers need more knowledge to write code in the specific language syntax, using different API libraries and conventions and exception handling mechanisms. But this does give the developer greater flexibility within the language-specific editors and tooling. AWS CDK is an option for using imperative programming.
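A toy illustration of the two styles, outside of any AWS tooling: both snippets produce the same result, but the imperative version spells out each step while the declarative version only describes the desired outcome.

```python
# Same result, two styles. This is a language-level analogy for the
# CloudFormation (declarative) vs. CDK (imperative) distinction above.

numbers = [1, 2, 3, 4, 5]

# Imperative: say *how* — an explicit loop with full control over each step.
squares_imperative = []
for n in numbers:
    squares_imperative.append(n * n)

# Declarative: say *what* — a comprehension describing the desired result.
squares_declarative = [n * n for n in numbers]

assert squares_imperative == squares_declarative == [1, 4, 9, 16, 25]
```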
Three things Lambda does for you when polling a queue
Polls the queue and invokes your function with an event that contains a batch of messages
Deletes a successfully processed batch of records off the queue
Scales pollers and Lambda concurrency up and down based on queue depth and error rates
Three things Lambda does for you when polling a stream
Polls the stream and invokes your function with an event that contains a batch of messages
Maintains a pointer of last record processed on the stream
Invokes one Lambda function instance per shard or the number you choose for concurrent batches per shard
Queues or stream?
Data is available to multiple consumers: Stream
Rate of messages is continuous and high volume: Stream
Consumer must maintain a pointer: Stream
Rates of messages vary: Queues
Value from acting on the stream messages: Stream
Messages are deleted after they are successfully processed: Queue
Act on individual messages: Queues
There is only one consumer of the messages: Queues
Messages are deleted after they are successfully processed: With a queue, you delete the message from the queue after it has been processed so that it does not get picked up for processing again. When you use Lambda as an event source, Lambda deletes successfully processed batches from the queue.
Consumer must maintain a pointer: With streams, Lambda and other consumers must maintain a pointer to track which records need to be processed next.
Data is available to multiple consumers: Multiple consumers can poll a stream and process the messages. Messages remain on the stream until they expire.
There is only one consumer of the messages: This implies a queue. You set up a single target for the queue, for example a Lambda function. Messages are deleted from the queue as they are processed.
Rates of messages may vary: This is generally a characteristic that leads to selecting a queue. Records are processed as they arrive, but you do not anticipate a steady stream.
Rate of messages is continuous and high volume: This is the typical scenario for a stream. Messages come in at a steady pace and there is a high volume of messages to process.
Act on individual messages: Use a queue when you need to act on individual messages, for example processing an order.
Value from acting on the stream of messages: This points to a stream. You don’t really get value from processing one message on a stream, but instead, find value in aggregating results.
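The decision table above can be encoded as a small lookup helper. The characteristic strings below are my own shorthand labels for the rows above, not an official AWS taxonomy.

```python
# Hedged sketch of the queue-vs-stream decision table from this section.
# The characteristic labels are illustrative shorthand.

STREAM_SIGNALS = {
    "multiple consumers",
    "continuous high volume",
    "consumer maintains pointer",
    "value from aggregating messages",
}
QUEUE_SIGNALS = {
    "variable message rate",
    "delete after processing",
    "act on individual messages",
    "single consumer",
}

def suggest(characteristic):
    """Suggest 'queue' or 'stream' based on a workload characteristic."""
    if characteristic in STREAM_SIGNALS:
        return "stream"
    if characteristic in QUEUE_SIGNALS:
        return "queue"
    return "unknown"

print(suggest("multiple consumers"))          # stream
print(suggest("act on individual messages"))  # queue
```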
What are the Best practices for writing Lambda functions?
Take advantage of execution environment reuse, and check that background processes have completed before the handler returns.
Manage database connection pooling with a database proxy.
Persist state data externally.
Minimize deployment package size and complexity of dependencies.
Mount Amazon Elastic File System (Amazon EFS) for large or shared assets.
Use provisioned concurrency to avoid cold starts.
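The first best practice, execution environment reuse, can be sketched as follows. `ExpensiveClient` is a stand-in for a real SDK client or database connection; the point is that module-level setup runs once per execution environment and warm invocations reuse it.

```python
# Sketch of execution-environment reuse: expensive setup runs at module load
# (the init phase) and warm invocations reuse it. "ExpensiveClient" is a
# hypothetical stand-in for a real SDK client or database connection.

INIT_COUNT = 0

class ExpensiveClient:
    def __init__(self):
        global INIT_COUNT
        INIT_COUNT += 1  # pretend this is slow connection setup

    def query(self, key):
        return f"value-for-{key}"

client = ExpensiveClient()  # created once per execution environment

def handler(event, context=None):
    # Warm invocations reuse the module-level client instead of reconnecting.
    return client.query(event["key"])

handler({"key": "a"})
handler({"key": "b"})
print(INIT_COUNT)  # 1 — setup ran once despite two invocations
```

Keeping connections and clients outside the handler is what makes practices like database proxying and provisioned concurrency pay off.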
What are Three configurations that impact performance for Lambda functions?
Memory
Timeout
Concurrency (reserved and provisioned)
Lambda error handling
Two general types of errors from Lambda Functions:
Invocation errors
Function errors
Characteristics of error handling for synchronous event sources
The event source must handle any errors that occur.
There are no built-in retries.
Developers should use the backoff and retry functionality in the AWS CLI and AWS SDK to respond to errors.
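The backoff-and-retry behavior the SDKs give you can be illustrated with a generic exponential backoff with full jitter. This is a sketch of the pattern only; in real code, prefer the AWS SDK’s built-in retry configuration rather than rolling your own.

```python
import random

# Generic exponential backoff with full jitter: each retry waits a random
# time in [0, min(cap, base * 2^attempt)]. Parameter values are illustrative.

def backoff_delays(retries, base=0.1, cap=5.0, seed=42):
    """Yield one sleep duration (in seconds) per retry attempt."""
    rng = random.Random(seed)  # seeded here only so the sketch is repeatable
    for attempt in range(retries):
        yield rng.uniform(0, min(cap, base * (2 ** attempt)))

delays = list(backoff_delays(5))
assert len(delays) == 5
assert all(0 <= d <= 5.0 for d in delays)
```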
Characteristics of error handling for asynchronous event sources
The client or invoking service is responsible for errors that prevent Lambda from invoking the function, such as permission issues or invalid JSON.
If Lambda successfully invokes the function but your function throws an exception or doesn’t complete, Lambda will retry running the function up to two more times. You can set this retry value from 0-2 in the function configuration.
You can send Lambda invocation failures that continue to fail to an OnFailure destination or a dead-letter queue.
Characteristics of error handling for streams as an event source
By default, Lambda will keep trying a failed batch until it succeeds or the record expires off of the stream, effectively blocking the shard.
Configure the event source to split a failed batch or use checkpointing and use maximum retries to isolate failing records and send them to an OnFailure destination.
To preserve order, update your function to return a failed sequence identifier and use checkpointing to retry only records that have not been processed.
When you save updates to your Lambda function, $LATEST is updated by default. You edit the $LATEST code and publish new versions as needed.
When you choose the option to “publish” a version of the function, Lambda creates a copy of the unpublished $LATEST version and gives it a sequential number. That version is immutable.
The qualified ARN for that version puts the version number as a qualifier.
You can create an alias and associate to any version of your function, including $LATEST. This allows you to reference your function using the alias as a qualifier in your ARN.
Lambda also lets you use alias routing to have an alias point to two versions, sending a percentage of traffic to each version.
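A deterministic sketch of what weighted alias routing does: a fraction in [0, 1) associated with a request decides which version serves it. The version names and the 10 percent canary weight are illustrative, and this is a model of the behavior, not the Lambda implementation.

```python
# Sketch of weighted alias routing: a value u in [0, 1) picks the version.
# Version names and weights are hypothetical.

def route(weights, u):
    """weights: list of (version, weight) pairs summing to 1.0; u in [0, 1)."""
    cumulative = 0.0
    for version, weight in weights:
        cumulative += weight
        if u < cumulative:
            return version
    return weights[-1][0]

canary = [("v2", 0.10), ("v1", 0.90)]  # send 10% of traffic to the new version
print(route(canary, 0.05))  # v2
print(route(canary, 0.50))  # v1
```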
With synchronous event sources, the client is responsible for all error handling. The AWS CLI and AWS SDK include backoff and retries by default, so take advantage of those to respond to errors.
With asynchronous event sources, Lambda will retry failed functions up to two times based on the selection you make on the Lambda function configuration for asynchronous events. With asynchronous event sources, you also have the option to set an on-failure destination and a dead-letter queue.
With streaming event sources, you can configure error handling to split batches on error. This allows the stream to isolate the failing record and send it to an on-failure destination. This is important for preventing bottlenecks because Lambda will not move the pointer on the stream until a batch is successful.
With Amazon SQS as an event source, Lambda will increase concurrency to keep up with the pace of requests, but it will also decrease concurrency if there are errors being returned. With Amazon SQS as an event source, you can configure a dead-letter queue on the queue itself.
In the world of Cloud Computing, Security is always job zero. This means that we design everything with Security in mind – at every single layer of our application! While you may have heard about AWS Security Groups – have you ever stopped to think about what a security group is, and what it actually does?
If for example, you are launching a Web Server to launch a brand new website hosted on AWS, you will have to prevent and allow certain protocols to initiate communication with your Web Server in order for users to interact with your website. On the other hand, if you give everyone access to your server using any protocol you may be leaving sensitive information easily reachable from anyone else on the internet, ruining your security posture.
The balance of allowing this kind of access is done using a specific technology in AWS, and today we are going to explore how Security Groups work, and what problems they help you solve.
What is a Security Group?
Security groups control traffic reaching and leaving the resources they are associated with according to the security group rules set by each group. After you associate a security group with an EC2 instance, it controls the instance’s inbound and outbound traffic.
Although VPCs come with a default security group when you create them, additional security groups can be created for any VPC within your account.
Security groups can only be associated with resources in the VPC for which they were created, and do not apply to resources in different VPCs.
Each security group has rules for controlling traffic based on protocols and ports. There are separate rules for inbound and outbound traffic.
Let’s have a look at what a security group looks like.
As stated earlier, Security Groups control inbound and outbound traffic in relation to resources placed in these security groups. Below are some example rules that you would see routinely when interacting with security groups for a Web Server.
Inbound: for a typical web server, allow HTTP (TCP port 80) and HTTPS (TCP port 443) from 0.0.0.0/0 so users can reach the site, and restrict administrative access such as SSH (TCP port 22) to a known IP address.
Outbound: allow the traffic your server needs to initiate (often all outbound traffic). Security groups are stateful, so responses to allowed inbound requests are permitted automatically.
Security Groups can also be used for the Relational Database Service, and for Amazon Elasticache to control traffic in a similar way.
Security Group Quotas
There is a limit on the number of Security Groups you can have within a Region, and a limit on the number of inbound and outbound rules you can have per security group.
For the number of Security Groups within a Region, you can have 2500 Security Groups per Region by default. This quota applies to individual AWS account VPCs and shared VPCs, and is adjustable through launching a support ticket with AWS Support.
Regarding the number of inbound and outbound rules per Security Group, you can have 60 inbound and 60 outbound rules per security group (120 rules in total). This quota is enforced separately for IPv4 and IPv6: for example, a security group can have 60 inbound IPv4 rules and an additional 60 inbound IPv6 rules.
The rules quota is adjustable for both inbound and outbound rules. However, the rules-per-security-group quota multiplied by the security-groups-per-network-interface quota cannot exceed 1,000.
Best Practices with Security Groups
When we are inevitably using Security Groups as part of our infrastructure, we can use some best practices to ensure that we are aligning ourselves with the highest security standards possible.
Ensure your Security Groups do not have a large range of ports open
When large port ranges are open, instances are vulnerable to unwanted attacks, and it becomes much harder to trace vulnerabilities. A web server, for example, may only need ports 80 and 443 open, and no more.
Create new security groups and restrict traffic appropriately
If you use the default AWS security group for your active resources, you unnecessarily expose your instances and applications and weaken your security posture. Create purpose-specific security groups and restrict their traffic appropriately instead.
Where possible, restrict access to required IP address(es) and by port, even internally within your organization
If you allow access from anywhere (0.0.0.0/0 or ::/0) to your resources, you are asking for trouble. Where possible, restrict access to your resources to an individual IP address or range of addresses. This keeps bad actors away from your instances and strengthens your security posture.
Chain Security Groups together
When chaining Security Groups, the inbound and outbound rules are set up so that traffic can only flow from the top tier to the bottom tier and back up again. The security groups act as firewalls, so a security breach in one tier does not automatically give the compromised client subnet-wide access to all resources.
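The best practices above lend themselves to a simple automated audit. Below is a hedged sketch that flags world-open rules on non-web ports and overly large port ranges; the rule dicts loosely mirror the shape of EC2 DescribeSecurityGroups output, but the field names and thresholds here are illustrative.

```python
# Hedged sketch of auditing security group rules against the practices above.
# Field names ("from_port", "to_port", "cidr") and the max_range threshold
# are illustrative, not the exact EC2 API shape.

def audit_rule(rule, max_range=10):
    findings = []
    # World-open access is only expected on standard web ports.
    if rule["cidr"] in ("0.0.0.0/0", "::/0") and rule["to_port"] not in (80, 443):
        findings.append("open to the world on a non-web port")
    # Large port ranges make attacks easier and tracing harder.
    if rule["to_port"] - rule["from_port"] > max_range:
        findings.append("large port range open")
    return findings

rules = [
    {"from_port": 80, "to_port": 80, "cidr": "0.0.0.0/0"},       # typical web rule
    {"from_port": 0, "to_port": 65535, "cidr": "0.0.0.0/0"},     # both problems
    {"from_port": 22, "to_port": 22, "cidr": "203.0.113.10/32"}, # restricted SSH
]

for r in rules:
    print(r["from_port"], r["to_port"], audit_rule(r))
```

In practice you would feed this kind of check the real rules from the EC2 API, or rely on managed tooling such as AWS Config rules.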
The AWS Front-End Web and Mobile services support development workflows for native iOS/Android, React Native, and JavaScript developers. You can develop apps and deliver, test, and monitor them using managed AWS services.
AWS AppSync Features
AWS AppSync is a fully managed service that makes it easy to develop GraphQL APIs.
Securely connects to data sources like Amazon DynamoDB, AWS Lambda, and more.
Add caches to improve performance, subscriptions to support real-time updates, and client-side data stores that keep offline clients in sync.
AWS AppSync automatically scales your GraphQL API execution engine up and down to meet API request volumes.
GraphQL
AWS AppSync uses GraphQL, a data language that enables client apps to fetch, change and subscribe to data from servers.
In a GraphQL query, the client specifies how the data is to be structured when it is returned by the server.
This makes it possible for the client to query only for the data it needs, in the format that it needs it in.
GraphQL also includes a feature called “introspection” which lets new developers on a project discover the data available without requiring knowledge of the backend.
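The field-selection idea can be sketched in a few lines: the client names exactly the fields it wants, and the server returns only those. The field names and record below are hypothetical; a real AppSync API resolves the selected fields against your configured data sources.

```python
# Toy model of GraphQL field selection: return only the fields the client
# asked for. Field and record names are hypothetical examples.

query_fields = ["id", "title"]  # the client asks only for id and title

full_record = {"id": "42", "title": "Hello", "body": "...", "author": "jane"}

def project(record, fields):
    """Return only the requested fields, mimicking GraphQL field selection."""
    return {f: record[f] for f in fields if f in record}

print(project(full_record, query_fields))  # {'id': '42', 'title': 'Hello'}
```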
Real-time data access and updates
AWS AppSync lets you specify which portions of your data should be available in a real-time manner using GraphQL Subscriptions.
GraphQL Subscriptions are simple statements in the application code that tell the service what data should be updated in real-time.
Offline data synchronization
The Amplify DataStore provides a queryable on-device DataStore for web, mobile and IoT developers.
When combined with AWS AppSync the DataStore can leverage advanced versioning, conflict detection and resolution in the cloud.
This allows automatic merging of data from different clients as well as providing data consistency and integrity.
Data querying, filtering, and search in apps
AWS AppSync gives client applications the ability to specify data requirements with GraphQL so that only the needed data is fetched, allowing for both server and client filtering.
AWS AppSync supports AWS Lambda, Amazon DynamoDB, and Amazon Elasticsearch.
GraphQL operations can be simple lookups, complex queries & mappings, full text searches, fuzzy/keyword searches, or geo lookups.
Server-Side Caching
AWS AppSync’s server-side data caching capabilities reduce the need to directly access data sources.
Data is delivered at low latency using high speed in-memory managed caches.
AppSync is fully managed and eliminates the operational overhead of managing cache clusters.
Provides the flexibility to selectively cache data fields and operations defined in the GraphQL schema with customizable expiration.
Security and Access Control
AWS AppSync allows several levels of data access and authorization depending on the needs of an application.
Simple access can be protected by a key.
AWS IAM roles can be used for more restrictive access control.
AWS AppSync also integrates with:
Amazon Cognito User Pools for email and password functionality
Social providers (Facebook, Google+, and Login with Amazon).
Enterprise federation with SAML.
Customers can use the Group functionality for logical organization of users and roles as well as OAuth features for application access.
Custom Domain Names
AWS AppSync enables customers to use custom domain names with their AWS AppSync API to access their GraphQL endpoint and real-time endpoint.
Used with AWS Certificate Manager (ACM) certificates.
A custom domain name can be associated with any available AppSync API in your account.
When AppSync receives a request on the custom domain endpoint, it routes it to the associated API for handling.
Cloud security best practices are serverless best practices. These include applying the principle of least privilege, securing data in transit and at rest, writing code that is security-aware, and monitoring and auditing actively.
Apply a defense in depth approach to your serverless application security.
Thinking serverless at scale means knowing the quotas of the services you are using and focusing on scaling trade-offs and optimizations among those services to find the balance that makes the most sense for your workload.
As your solutions evolve and your usage patterns become clearer, you should continue to find ways to optimize performance and costs and make the trade-offs that best support the workload you need rather than trying to scale infinitely on all components. Don’t expect to get it perfect on the first deployment. Build in the kind of monitoring and observability that will help you understand what’s happening, and be prepared to tweak things that make sense for the access patterns that happen in production.
Lambda Power Tuning helps you understand the optimal memory to allocate to functions.
You can specify whether you want to optimize on cost, performance, or a balance of the two.
Under the hood, a Step Functions state machine invokes the function you’ve specified at different memory settings from 128 MB to 3 GB and captures both duration and cost values.
Let’s take a look at Lambda Power Tuning in action with a function I’ve written.
The function I have determines the hash value of a lot of numbers. Computationally, it’s expensive. I’d like to know whether I should be allocating 1 GB, 1.5 GB, or 3 GB of RAM to it.
I can specify the memory values to test in the file deploy.sh. In my example, I’m only using 1 GB, 1.5 GB, and 3 GB. The state machine takes the following parameters (you define these in sample-execution-input.json):
Lambda ARN
Number of invocations for each memory configuration
Static payload to pass to the Lambda function for each invocation
Parallel invocation: Whether all invocations should be in parallel or not. Depending on the value, you may experience throttling.
Strategy: Can be cost, speed, or balanced. Default is cost.
If you specify Cost, it will report the cheapest option regardless of performance. Speed will suggest fastest regardless of cost. Balanced will choose a compromise according to balancedWeight. balancedWeight is a number between 0 and 1. Zero is speed strategy. One is cost strategy.
Let’s take a look at the inputs I’ve specified and find out how much memory we should allocate.
In this configuration, I’m specifying that I want this function to execute as quickly as possible.
Results.power shows that 3 GB provides the best performance.
Let’s update my configuration to use the default strategy of cost and run again.
Results.power shows that 1 GB is the best option for price.
Use this tool to help you evaluate how to configure your Lambda functions.
How API Gateway responds to a burst of requests
API Gateway uses the token bucket algorithm to fulfill requests at a steady pace.
Each request consumes a token from the bucket, and tokens are replenished at the steady-state rate. If requests arrive faster than tokens are replenished and the burst capacity (the bucket size) is exhausted, a 429 Too Many Requests error is returned.
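A minimal token bucket sketch makes the behavior concrete: tokens refill at the steady rate up to the burst capacity, each request consumes one token, and a request with no token available is rejected (API Gateway’s 429). This is a model of the algorithm, not API Gateway’s implementation; the rate and burst values are illustrative.

```python
# Minimal token bucket: tokens refill at `rate` per second up to `burst`;
# each allowed request consumes one token. An empty bucket means rejection
# (the 429 Too Many Requests case described above).

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate      # tokens added per second (steady-state rate)
        self.burst = burst    # bucket capacity (maximum burst)
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        # Refill based on elapsed time, capped at the burst capacity.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # would surface as a 429 Too Many Requests

bucket = TokenBucket(rate=10, burst=5)
results = [bucket.allow(now=0.0) for _ in range(6)]
print(results)  # first 5 pass, the 6th is rejected
```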
Lambda concurrency considerations for scaling
Burst quota: Regional limit that prevents concurrency from increasing too quickly (cannot be changed)
Regional account quota: Soft limit on total number of concurrent invocations within an account by Region
Reserved concurrency: Optional limit per function
Provisioned concurrency: Optional subset of reserved concurrency that is always warm
How Lambda reacts to a burst of requests
Lambda immediately increases the number of concurrent invocations by the Regional burst concurrency quota.
After this immediate increase, Lambda will add up to 500 concurrent invocations per minute until it has enough to run all the requests concurrently or until it reaches the function or account limit.
Synchronous and asynchronous event source scaling
Lambda will increase concurrency to keep up with demand up to an account quota or the reserved concurrency set on a function.
Concurrency = requests * duration.
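The rule of thumb above is simple arithmetic; the numbers below are illustrative.

```python
# Concurrency ≈ request rate × average duration.

requests_per_second = 100
avg_duration_seconds = 0.5  # 500 ms per invocation

concurrency = requests_per_second * avg_duration_seconds
print(concurrency)  # 50.0 concurrent executions needed at steady state
```

This is why shortening function duration (for example, via memory tuning) directly reduces the concurrency your account needs.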
Amazon SQS queue event source scaling
Lambda increases the number of pollers on the queue as queue depth increases but decreases it if error rates are increasing.
You can increase the batch size to process more messages at once, but you need to avoid making it so large that the batch can’t complete before the function times out.
Kinesis Data Streams event source scaling
Lambda invokes one concurrent invocation per shard by default.
You can increase the batch size and increase the concurrent batches per shard to process batches of messages faster.
Configure error handling with a maximum retry count and an onFailure destination to prevent blocked shards.
Configure enhanced fan-out to give higher throughput to many consumers.
Automation is especially important with serverless applications. Lots of distributed services that can be independently deployed mean more, smaller deployment pipelines that each build and test a service or set of services. With an automated pipeline, you can incorporate better detection of anomalies and more testing, halt your pipeline at a certain step, and automatically roll back a change if a deployment were to fail or if an alarm threshold is triggered.
Your pipeline may be a mix and match of AWS or third-party components that suit your needs, but the concepts apply generally to whatever tools your organization uses for each of these steps in the deployment tool chain. This module will reference the AWS tools that you can use in each step in your CI/CD pipeline.
CI/CD best practices
Configure testing using safe deployments in AWS SAM:
Declare an AutoPublishAlias
Set safe deployment type
Set a list of up to 10 alarms that will trigger a rollback
Configure a Lambda function to run pre- and post-deployment tests
Use traffic shifting with pre- and post-deployment hooks
PreTraffic: When the application is deployed, the PreTraffic Lambda function runs to determine if things should continue. If that function completes successfully (i.e., returns a 200 status code), the deployment continues. If the function does not complete successfully, the deployment rolls back.
PostTraffic: If the traffic successfully completes the traffic shifting progression to 100 percent of traffic to the new alias, the PostTraffic Lambda function runs. If it returns a 200 status code, the deployment is complete. If the PostTraffic function is not successful, the deployment is rolled back.
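A PreTraffic hook follows the CodeDeploy lifecycle hook pattern: run your checks, then report Succeeded or Failed so the deployment continues or rolls back. The sketch below assumes that pattern; `run_smoke_tests` is a hypothetical placeholder for your own test logic, and the CodeDeploy client is injected so the example runs without AWS credentials (a real function would use a boto3 codedeploy client).

```python
# Hedged sketch of a PreTraffic hook Lambda. The CodeDeploy client is
# injected for testability; run_smoke_tests is a hypothetical placeholder.

def run_smoke_tests():
    return True  # stand-in: invoke the new version and verify its responses

def pre_traffic_handler(event, context=None, codedeploy=None):
    status = "Succeeded" if run_smoke_tests() else "Failed"
    # Report the result back to CodeDeploy so the deployment can proceed
    # or roll back.
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=event["DeploymentId"],
        lifecycleEventHookExecutionId=event["LifecycleEventHookExecutionId"],
        status=status,
    )
    return status

class _FakeCodeDeploy:  # stand-in so the sketch runs without AWS access
    def __init__(self):
        self.reported = None
    def put_lifecycle_event_hook_execution_status(self, **kwargs):
        self.reported = kwargs

fake = _FakeCodeDeploy()
print(pre_traffic_handler(
    {"DeploymentId": "d-123", "LifecycleEventHookExecutionId": "hook-1"},
    codedeploy=fake,
))  # Succeeded
```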
Use separate account per environment
It’s a best practice with serverless to use separate accounts for each stage or environment in your deployment. Each developer has an account, and the staging and production environments are each in their own accounts.
This approach limits the blast radius of issues that occur (for example, unexpectedly high concurrency) and allows you to secure each account with IAM credentials more effectively with less complexity in your IAM policies within a given account. It also makes it less complex to differentiate which resources are associated with each environment.
Because of the way costs are calculated with serverless, spinning up additional environments doesn’t add much to your cost. Other than where you are provisioning concurrency or database capacity, the cost of running tests in three environments is not different than running them in one environment because it’s mostly about the total number of transactions that occur, not about having three sets of infrastructure.
Use one AWS SAM template with parameters across environments
As noted earlier, AWS SAM supports CloudFormation syntax so that your AWS SAM template can be the same for each deployment environment with dynamic data for the environment provided when the stack is created or updated. This helps you ensure that you have parity between all testing environments and aren’t surprised by configurations or resources that are different or missing from one environment to the next.
AWS SAM lets you build out multiple environments using the same template, even across accounts:
Use parameters and mappings when possible to build dynamic templates based on user inputs and pseudo parameters, such as AWS::Region
Use the Globals section to simplify templates
Use Export (in the Outputs section) and Fn::ImportValue to share resource information across stacks
Manage secrets across environments with Parameter Store:
AWS Systems Manager Parameter Store supports encrypted values and is account specific, accessible through AWS SAM templates at deployment, and accessible from code at runtime.
Testing throughout the pipeline
Another best practice is to test throughout the pipeline. Assuming these steps in a pipeline – build, deploy to test environment, deploy to staging environment, and deploy to production – decide which types of tests belong at each pipeline step, and run them before allowing the next step in deployment to continue.
You are reviewing the team’s plan for managing the application’s deployment. Which suggestions would you agree with? (Select TWO.)
A. Use IAM to control development and production access within one AWS account to separate development code from production code
B. Use AWS SAM CLI for local development testing
C. Use CloudFormation to write all of the infrastructure as code for deploying the application
D. Use Amplify to deploy the user interface and AWS SAM to deploy the serverless backend
Answer: B and D. Notes: Use the AWS SAM CLI for local development testing, and use Amplify to deploy the user interface with AWS SAM for the serverless backend.
Scaling considerations for serverless applications
True
Using HTTP APIs and first-class service integrations can reduce end-to-end latency because it lets you connect the API call directly to a service API rather than requiring a Lambda function between API Gateway and the other AWS service.
Provisioned concurrency may be less expensive than on-demand in some cases. If your provisioned concurrency is used more than 60 percent during a given time period, then it will probably be less expensive to use provisioned concurrency or a combination of on-demand and provisioned concurrency.
With Amazon SQS as an event source, Lambda will manage concurrency. Lambda will increase concurrency when the queue depth is increasing, and decrease concurrency when errors are being returned.
You can set a batch window to increase the time before Lambda polls a stream or queue. This lets you reduce costs by avoiding regularly invoking the function with a small number of records if you have a relatively low volume of incoming records.
False
Setting reserved concurrency on a version: You cannot set reserved concurrency per function version. You set reserved concurrency on the function and can set provisioned concurrency on an alias. It’s important to keep the total provisioned concurrency for active aliases to less than the reserved concurrency for the function.
Setting the number of shards on a DynamoDB table: You do not directly control the number of shards the table uses. You can directly add shards to a Kinesis Data Stream. With a DynamoDB table, the way you provision read/write capacity and your scaling decisions drive the number of shards. DynamoDB will automatically adjust the number of shards needed based on the way you’ve configured the table and the volume of data.
Concurrency in synchronous invocations: Lambda will use concurrency equal to the request rate multiplied by function duration. As one function invocation ends, Lambda can reuse its environment rather than spinning up a new one, so function duration plays an important factor in concurrency for synchronous and asynchronous invocations.
The impact of higher function memory: A higher memory configuration does have a higher price per millisecond, but because duration is also a factor of cost, your function may finish faster at higher memory configurations and that might mean an overall lower cost.
A shorter duration may reduce the concurrency Lambda needs, but depending on the nature of the function, higher memory may not have a measurable impact on duration. You can use tools like Lambda Power Tuning (https://github.com/alexcasalboni/aws-lambda-power-tuning) to find the best balance for your functions.
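The memory/duration trade-off above comes down to GB-seconds. The sketch below uses an approximate published per-GB-second Lambda price (check current pricing for your Region and architecture); the memory and duration figures are illustrative.

```python
# Higher memory costs more per millisecond, but a shorter duration can still
# lower total cost. 0.0000166667 is an approximate per-GB-second Lambda
# price (assumption — check current pricing); other numbers are illustrative.

PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(memory_gb, duration_seconds):
    return memory_gb * duration_seconds * PRICE_PER_GB_SECOND

low_mem = invocation_cost(memory_gb=1.0, duration_seconds=2.0)   # slower
high_mem = invocation_cost(memory_gb=2.0, duration_seconds=0.8)  # faster

print(f"{low_mem:.8f} vs {high_mem:.8f}")
print(high_mem < low_mem)  # True — the higher-memory configuration wins here
```

Whether the higher memory setting actually shortens duration enough depends on the workload, which is exactly what Lambda Power Tuning measures.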
There is no stopping Amazon Web Services (AWS) from innovating, improving, and ensuring the customer gets the best experience possible. Providing a seamless user experience is a constant commitment for AWS, and its ongoing innovation allows customers’ applications to improve in turn – creating a better end-user experience.
AWS makes managing networking in the cloud one of the easiest parts of the cloud service experience. When managing your infrastructure on premises, you would have had to devote a significant amount of time to understanding how your networking stack works. It is important to note that AWS does not have a magic bullet that will make all issues go away, but they are constantly providing new exciting features that will enhance your ability to scale in the cloud, and the key to this is elasticity.
Elasticity is defined as “The ability to acquire resources as you need them and release resources when you no longer need them” – this is one of the biggest selling points of the cloud. The three networking features which we are going to talk about today are all elastic in nature, namely the Elastic Network Interface (ENI), the Elastic Fabric Adapter (EFA), and the Elastic Network Adapter (ENA). Let’s compare and contrast these AWS features to allow us to get a greater understanding into how AWS can help our managed networking requirements.
AWS ENI (Elastic Network Interface)
You may be wondering what an ENI is in AWS. The AWS ENI (Elastic Network Interface) is a virtual network card that can be attached to any Amazon Elastic Compute Cloud (EC2) instance. The purpose of these devices is to enable network connectivity for your instances. If you have more than one of these devices connected to your instance, it will be able to communicate on two different subnets, offering a whole host of advantages.
For example, using multiple ENIs per instance allows you to decouple the ENI from the EC2 instance, in turn allowing you far more flexibility to design an elastic network which can adapt to failure and change.
As stated, you can connect several ENIs to the same EC2 instance and attach your single EC2 instance to many different subnets. You could for example have one ENI connected to a public-facing subnet, and another ENI connected to another internal private subnet.
You could also, for example, attach an ENI to a running EC2 instance, or keep the ENI alive after the EC2 instance it was attached to is deleted.
Finally, ENIs can also be used as a crude form of high availability: attach an ENI to an EC2 instance; if that instance dies, launch another and attach the same ENI to the new instance. Traffic flow is only affected for a short period of time.
AWS EFA (Elastic Fabric Adapter)
In Amazon EC2 instances, Elastic Fabric Adapters (EFAs) are network devices that accelerate high-performance computing (HPC) and machine learning.
EFAs are Elastic Network Adapters (ENAs) with additional OS-bypass capabilities.
AWS Elastic Fabric Adapter (EFA) is a specialized network interface for Amazon EC2 instances that lets customers run applications requiring high levels of inter-instance communication, such as HPC applications, on AWS at scale.
Due to EFA’s support for libfabric APIs, applications using a supported MPI library can be easily migrated to AWS without having to make any changes to their existing code.
For this reason, AWS EFA is often used in conjunction with Cluster placement groups – which allow physical hosts to be placed much closer together within an AZ to decrease latency even more. Some use cases for EFA are in weather modelling, semiconductor design, streaming a live sporting event, oil and gas simulations, genomics, finance, and engineering, amongst others.
AWS ENA (Elastic Network Adapter)
Finally, let’s discuss the AWS ENA (Elastic Network Adapter).
The Elastic Network Adapter (ENA) is designed to provide Enhanced Networking to your EC2 instances.
With ENA, you can expect high throughput and packet per second (PPS) performance, as well as consistently low latencies on Amazon EC2 instances. Using ENA, you can utilize up to 20 Gbps of network bandwidth on certain EC2 instance types – massively improving your networking throughput compared to other EC2 instances or on-premises machines. ENA-based Enhanced Networking is supported on X1 and most other current-generation instance types.
Key Differences
There are a number of differences between these three networking options.
Elastic Network Interface (ENI) is a logical networking component that represents a virtual networking card
Elastic Network Adapter (ENA) is an enhanced networking interface that provides high-end performance on supported EC2 instance types (older instance types use the Intel 82599 Virtual Function (VF) interface for enhanced networking instead)
Elastic Fabric Adapter (EFA) is a network device which you can attach to your EC2 instance to accelerate High Performance Computing (HPC)
Elastic Network Adapter (ENA) is only available on the X1 instance type, Elastic Network Interfaces (ENI) are ubiquitous across all EC2 instances and Elastic Fabric Adapters are available for only certain instance types.
In order to support VPC networking, an ENA ENI provides traditional IP networking features.
EFA ENIs provide all the functionality of ENA ENIs plus hardware support to allow applications to communicate directly with the EFA ENI without involving the instance kernel (OS-bypass communication).
Since the EFA ENI has advanced capabilities, it can only be attached to stopped instances or at launch.
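As a sketch of what requesting an EFA at launch looks like with boto3 (the AMI, subnet, and security-group IDs below are hypothetical placeholders), the launch parameters can be assembled like this:

```python
def efa_launch_params(ami_id, instance_type, subnet_id, sg_id):
    """Build EC2 run_instances kwargs that request an EFA at launch.

    All IDs are placeholders; instance_type must be an EFA-supported
    type (e.g. c5n.18xlarge).
    """
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "NetworkInterfaces": [{
            "DeviceIndex": 0,
            # Requested at launch because an EFA cannot be hot-attached
            # to a running instance:
            "InterfaceType": "efa",
            "SubnetId": subnet_id,
            "Groups": [sg_id],
        }],
    }

params = efa_launch_params("ami-0abc", "c5n.18xlarge", "subnet-0abc", "sg-0abc")
# With boto3 installed and credentials configured, this would be passed to:
#   boto3.client("ec2").run_instances(**params)
```

This only builds the request payload; actually launching the instance requires an AWS account and an EFA-supported instance type in a supported region.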
Limitations
EFA has the following limitations:
p4d.24xlarge and dl1.24xlarge instances support up to four EFAs. All other supported instance types support only one EFA per instance.
EFA OS-bypass traffic cannot be sent from one subnet to another and cannot be routed; IP traffic sent over the EFA can cross subnets and is routed normally.
An EFA must belong to a security group that allows all inbound and outbound traffic to and from the security group itself.
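That self-referencing security-group rule can be expressed as a boto3-style permission payload. A minimal sketch (the group ID is hypothetical), assuming the rule is applied in both directions:

```python
def self_referencing_rule(group_id):
    # Allow all protocols and ports to and from members of the same
    # security group, which EFA OS-bypass traffic requires.
    return {
        "IpProtocol": "-1",  # -1 means all protocols
        "UserIdGroupPairs": [{"GroupId": group_id}],
    }

rule = self_referencing_rule("sg-0123example")  # placeholder group ID
# With boto3 configured, it would be applied as both ingress and egress:
#   ec2.authorize_security_group_ingress(GroupId="sg-0123example", IpPermissions=[rule])
#   ec2.authorize_security_group_egress(GroupId="sg-0123example", IpPermissions=[rule])
```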
ENA has the following limitations:
ENA is not available on older-generation instance types
ENI has the following limitations:
Due to virtualization, you lack the visibility of a physical network card
Only a few instance types support up to four network cards; the majority support only one
Pricing
You are not charged per ENI with EC2; you are only limited by how many ENIs your instance type supports. There is, however, a charge for additional public IPv4 addresses on the same instance.
EFA is available as an optional EC2 networking feature that you can enable on any supported EC2 instance at no additional cost.
ENA pricing is absorbed into the cost of running a supported instance
Thanks to all the people who posted their testing experiences here. It gave me a lot of perspective from the exam point of view and on how to prepare for the new version.
Stephane Maarek’s Udemy course and his practice tests on Udemy were the key to my success on this test. I did not use any other resource for my preparation.
I am a consultant and have been working on AWS for the last 5+ years, though without much hands-on work. My initial cert expired last year, so I wanted to renew.
Overall, the C03 version was very similar to the C02/C01 versions. I did not get a single question about AI/ML services, and the questions were mostly related to the more fundamental services like VPC, SQS, Lambda, CloudWatch, EventBridge, and storage (S3, Glacier, lifecycle policies). Source: r/awscertification
Passed SAP-C01 AWS Certified Solutions Architect Professional
Resources used were:
Adrian (for the labs),
Jon (For the Test Bank),
and Stephane for a quick overview played on double speed.
Total time spent studying was about a month. I don’t do much hands on as a security compliance guy, but do work with AWS based applications everyday. It helps to know things to a very low level.
Passed SAA C03 in 38 Days
So I am sharing how I passed my SAA-C03 certification in less than 40 days without any prior experience in AWS (my org asked me to do it).
Neal Davis Practice Tests: https://www.udemy.com/course-dashboard-redirect/?course_id=1878624 – I highly recommend these, since Neal’s tests give you fewer hints in the questions, and after doing them you will have a solid understanding of how the actual exam questions will look.
After doing the tests, just make sure you know why each wrong answer is wrong.
I scheduled my exam for 26th September and took the test at a Pearson center. The exam was extremely lengthy; I used all my time just answering the questions and did not have time to look back at my flagged questions (in fact, while I was clicking the End Review button, time was up and the test ended itself). My results came 50 hours after completing the test, and those 50 hours were the most difficult of the whole journey.
Today I received my result: I scored 914 and got the badge and certification.
So how do you know you are ready? Once you start consistently getting 80+ on 2-3 practice tests, just book your exam.
Passed SAP-C01!
Just found out I passed the Solutions Architect Pro exam. It was a tough one, took me almost the full 3 hours to answer and review every question. At the end of the exam, I felt that it could have gone either way. Had to wait about 20 painful hours to get my final result (857/1000). I’m honestly amazed, I felt so unprepared. What made it worse is that I suddenly felt ill on the night of the exam. Only got about three hours sleep, realized it was too late to reschedule and had to drag myself to the test center. Was very tempted to bail and pay the $300 to resit, very glad I didn’t!
No formal cloud background, but have worked in IT/software for about 10 years as a software engineer. Some of my roles included network setup/switch configuration/Linux and Windows server admin, which definitely comes in useful (but isn’t required). I got my first cert in January (CCP), and have since got the other three associate certs (SAA, DVA, SOA).
People are not joking when they say this is an endurance test. You need to try and stay focused for the full three hours. It took me about two hours to answer every question, and a further hour to review my answers.
In terms of prep, I used a combination of Stephane Maarek (Udemy) and Adrian Cantrill (learn.cantrill.io). I found both courses worked well together (Adrian Cantrill for the theory/practical, and Stephane Maarek for the review/revision). I used Tutorials Dojo for practice exams and review (tutorialsdojo.com). The exam questions are very close to the real thing, and the question summaries/explanations are extremely well written. My advice is to sit the practice exam, and then carefully review each question (regardless of whether you got it right or wrong) and read/understand the explanations as to why each answer is right/wrong. It takes time, but it will really prepare you for the real thing.
I’m particularly impressed with the Advanced Demos on the Adrian Cantrill course, some of those really helped out with having the knowledge to answer the exam questions. I particularly liked the Organizations, Active Directory, Hybrid DNS, Hybrid SSM, VPN and WordPress demos.
In terms of the exam, lots of questions on IAM (cross-account roles), Organizations (billing/SCP/RAM), Database performance issues, migrations, Transit Gateway, DX/VPN, containerisation (ECS/EKS), disaster recovery. Some of the scenario questions are quite tricky, all four answers appear valid but there will be subtle differences between them. So you have to work out what is different between each answer.
A tip I will leave you: a lot of the migration questions will get you to pick between using snow devices or uploading via the internet/DX. A quick way to work out whether uploading is feasible is to multiply the line speed in Mbps by 10,000 – this gives you the approximate number of megabytes that can be transferred in a day. E.g. a line speed of 50 Mbps will let you transfer about 500 GB in a day (assuming nothing else is using that link). So if you had to transfer 100 TB, then you will need to use snow devices (unless you were happy waiting 200 days).
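That back-of-the-envelope rule can be checked with a few lines of Python, comparing the exact figure with the ×10,000 mental shortcut:

```python
def mb_per_day(mbps):
    # Mbps -> megabytes/day: divide by 8 (bits -> bytes), then multiply
    # by 86,400 seconds in a day. Exact factor is 10,800 per Mbps; the
    # mental shortcut of x10,000 is slightly conservative.
    return mbps / 8 * 86_400

exact = mb_per_day(50)      # 540,000 MB, i.e. ~540 GB/day
shortcut = 50 * 10_000      # 500,000 MB, i.e. ~500 GB/day

# 100 TB (100,000,000 MB) over a dedicated 50 Mbps link:
days = 100_000_000 / mb_per_day(50)  # ~185 days, so snow devices win
```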
Passed SAA-C03 – Feedback
Just passed the SAA-C03 exam (864) and wanted to provide some feedback since that was helpful for me when I was browsing here before the exam.
I come from an IT background and have strong knowledge of the VPC portion, so that section was a breeze for me in the preparation process (I had never used AWS before this, so everything else was new, but the concepts were somewhat familiar given my background). I started my preparation about a month ago and used the Maarek class on Udemy. Once I finished the class and reviewed my notes, I moved to Maarek’s 6 practice exams (on Udemy). I wasn’t doing extremely well on the PEs (I passed 4 of the 6 exams with grades in the 70s); I reviewed the exam questions after each exam and moved on to the next. I also purchased Tutorials Dojo’s 6-exam set but only ended up taking one of the 6 (which I passed).
Overall the practice exams ended up being a lot harder than the real exam which had mostly the regular/base topics: a LOT of S3 stuff and storage in general, a decent amount of migration questions, only a couple questions on VPCs and no ML/AI stuff.
Sharing the study guide that I followed when I prepared for the AWS Certified Solutions Architect Associate SAA-C03 exam. I passed this test and thought of sharing a real exam experience in taking this challenging test.
First off, my background: I have 8 years of development experience and have been doing AWS for several projects, both personally and at work. I studied for a total of 2 months, focused on the official Exam Guide, and carefully studied the Task Statements and related AWS services.
SAA-C03 Exam Prep
For my exam prep, I bought Adrian Cantrill’s video course and the TutorialsDojo (TD) video course and practice exams. Adrian’s course is just right and highly educational, but like others have said, the content is long and covers more than just the exam. I did all of the hands-on labs too and played around with some machine learning services in my AWS account.
The TD video course is short and a good overall summary of the topics you’ve just learned. One TD lesson covers multiple topics, so the content is highly concise. After I completed Adrian’s video course, I used TD’s video course as a refresher, did a couple of their hands-on labs, then headed on to their practice exams.
For the TD practice exams, I took the exams chronologically and didn’t jump back and forth until I completed all the tests. I first tried all of the 7 timed-mode tests, reviewing every wrong answer on every attempt, then the 6 review-mode tests and the section/topic-based tests. I took the final-test mode roughly 3 times, and this is by far one of the most helpful features of the website IMO. The final-test mode generates a unique set from the whole TD question bank, so every attempt was challenging for me. I also noticed that the course progress doesn’t move if you fail a specific test, so I would retake the tests that I failed.
The Actual SAA-C03 Exam
The actual AWS exam is almost the same as the ones in the TD tests, in that:
All of the questions are scenario-based
There are two (or more) valid solutions in the question, e.g.:
Need SSL: options are ACM and a self-signed certificate
Need to store DB credentials: options are SSM Parameter Store and Secrets Manager
The scenarios are long-winded and ask for:
MOST Operationally efficient solution
MOST cost-effective
LEAST amount of overhead
Overall, I enjoyed the exam and felt fully prepared while taking the test, thanks to Adrian and TD, but that doesn’t mean the whole darn thing is easy. You really need to put in some elbow grease and keep your headlights on when preparing for this exam. Good luck to all, and I hope my study guide helps out anyone who is struggling.
Another Passed SAA-C03?
Just another thread about passing the exam? I passed SAA-C03 yesterday and would like to share my experience of how I earned the certification.
Background:
– graduate with a networking background
– working experience in on-premise infrastructure automation, mainly using Ansible, Python, Zabbix, etc.
– cloud experience: a short period, like 3-6 months, with practice
– provisioned cloud applications using Terraform in Azure and AWS
Cantrill’s course has depth and a lot of practical knowledge (like email aliases, etc.) – check it out to learn more.
The TutorialsDojo practice exams helped me filter the answers and guided me to the correct ones. If I was wrong on a specific topic, I rewatched Cantrill’s video. However, there are some topics not covered by Cantrill, but the guidelines/reviews in the practice exams provide pretty much all the detail. I did all the other modes before the timed-based ones; after that I averaged 850 on the timed-based exams and scored 63/65 on the final practice exam. However, the real examination is harder than the practice exams, in my opinion.
Udemy course and practice exams: I went through some of them, but I think those practice exams are quite hard compared to TutorialsDojo’s.
Labs: just get your hands dirty; they will make your knowledge sink deep into your brain. My advice is not to just do copy-and-paste labs but to really read the description of each parameter in the AWS portal.
Advice:
you need to know some general exam topics, like:
– S3 private access
– EC2 availability
– Kinesis products, including Firehose, Data Streams, etc.
– IAM
My next target will be AWS SAP and CKA. I’m still searching for suitable material for AWS SAP, but I plan mainly on using the acloudguru sandbox and a homelab to learn the subject, practicing with acantrill’s labs on GitHub.
Good luck anyone!
Passed SAA
I wanted to give my personal experience. I have a background in IT, but I had never worked in AWS until 5 weeks ago. I got my Cloud Practitioner in a week and SAA after another 4 weeks of studying (2-4 hours a day). I used Cantrill’s course and Tutorials Dojo practice exams. I highly, highly recommend this combo. I don’t think I would have passed without the practice exams, as they are quite difficult. In my opinion, they are much more difficult than the actual exam. They really hit the mark on what kind of content you will see. I got a 777, and that’s with getting 70-80%’s on the practice exams. I probably could have done better, but I had a really rough night of sleep and I came down with a cold. I was really on the struggle bus halfway through the test.
I only had a couple of questions on ML/AI, so make sure you know the differences between them all. Lots of S3 and EC2. You really need to know these inside and out.
My company is offering stipends for each certification, so I’m going straight to Developer next.
Recently passed SAA-C03
Just passed my SAA-C03 yesterday with 961 points. My first time doing AWS certification. I used Cantrill’s course. Went through the course materials twice, and took around 6 months to study, but that’s mostly due to my busy schedule. I found his materials very detailed and probably go beyond what you’d need for the actual exam.
I also used Stephane’s practice exams on Udemy. I’d say it’s instrumental in my passing doing these to get used to the type of questions in the actual exams and review missing knowledge. Would not have passed otherwise.
Just a heads-up, there are a few things popped up that I did not see in the course materials or practice exams:
* Lake Formation: question about pooling data from RDS and S3, as well as controlling access.
* S3 Requester Pays: question about minimizing S3 data cost when sharing with a partner.
* Pinpoint journeys: question about customers replying to an SMS sent out and then storing their feedback.
Not sure if they are graded or Amazon testing out new parts.
I’ve spent the last 2 months of my life focusing on this exam and now it’s over! I wanted to write down some thoughts that I hope are informative to others. I’m also happy to answer any other questions.
APPROACH
I used Stephane’s courses to pass CCP, SAA, DVA… however I heard such great things about Adrian’s course that I purchased it and started there.
The detail and clarity that Adrian employs is amazing, and I was blown away by the informative diagrams that he includes with his lessons. His UDP joke made me lol. The course took a month to get through with many daily hours, and I made over 100 pages of study notes in a Google document. After finishing his course, I went through Stephane’s for redundancy.
As many have mentioned here, Stephane does a great job of summarizing concepts, and for me, I really value the slides that he provides with his courses. It helps to memorize and solidify concepts for the actual exam.
After I went through the courses, I bought TutorialsDojo practice exams and started practicing. As everyone says, these are almost a must-use resource before an AWS exam. I recognized three questions on the real exam, and the thought exercise of taking the mocks came in handy during the real exam.
Total preparation: 10 weeks
DIFFICULTY
I heard on this Subreddit that if this exam is a 10, then the associate-level exams are a 3. I was a bit skeptical, but I found the exam a bit harder than the practice exam questions. I just found a few obscure things referred to during the real exam, and some concepts combined in single questions. The Pro-level exams are *at least* 2 times as hard, in my opinion. You need to have Stephane’s slides (or the exam “power-ups” that Adrian points out)/the bolded parts down cold and really understand the fundamentals.
WHILE STUDYING
As my studying progressed, I found myself on this sub almost every day reading others’ experiences and questions. Very few people in my circle truly understand the dedication and hard work that is required to pass any AWS exam, so observing and occasionally interacting here with like-minded people was great. We’re all in this together!
POST-EXAM
I waited anxiously for my exam result. When I took the associate exams, I got a binary PASS/FAIL immediately… I got my Credly email 17 hours after finishing the exam, and when I heard from AWS, my score was higher than expected, which feels great.
WHAT’S NEXT
I’m a developer and have to admit I’ve caught the AWS bug. I want to pursue more… I heard Adrian mention in another thread that some of his students take the Security specialty exam right after SAP, and I think I will do the same after some practice exams. Or DevOps Pro… Then I’m taking a break 🙂
I had a lot on S3, CloudFront and DBs, and a lot on Lambda and containers. Lots of “which is the most cost-effective solution” questions.
I think I did OK, but my online proctoring experience kinda jacked with my mind a little bit (specifics in a separate thread); at one point I even got yelled at for thinking out loud to myself, which kinda sucked as that’s one way I talk myself through situations :-/
For two weeks I used MANY practice exams on YouTube, Tutorials Dojo, and A Cloud Guru – and a shout out to Cloud Guru Amit (YouTube), who has a keyword method that worked well for me – and just read up on various whitepapers on stuff I wasn’t clear on/got wrong.
On to AWS Security Specialty and CompTIA Sec+ for me.
Shoutout to Adrian – his course was great at preparing me for all the knowledge needed for the exam (with the exception of a question on Polly and Textract, which none of the resources – Adrian, Stephane for test review, and the Dojo practice exams – covered).
I got a 78 and went in person to a testing site close by to avoid potential hiccups with online testing. I studied over the course of 4 months but did the bulk of the course in 2 months.
I want to reiterate a common theme in these posts that should not be overlooked, in case you are deep in your journey and plan on taking the test in the near future – 4 weeks out or 75% through the videos: BUY THE TUTORIALSDOJO PRACTICE EXAMS AND TAKE THEM, EVEN BEFORE YOU ARE DONE WITH THE COURSE.
I thought it would be smarter to finish the course and then do the tests to get a higher score, BUT you will inevitably strengthen your skills and knowledge through: 1) doing the tests to get used to the format; and 2) REVIEW REVIEW REVIEW – the questions fall into 4 categories, and afterwards you will see all the questions and why each answer is the right answer or almost the right answer. Knowing your weaknesses is crucial for intentional, intelligent, and efficient reviewing.
I took screenshots of all the questions I got wrong or wasn’t completely sure of why I got them right.
Passed SAA-C03 today!
Got a lot of questions based on CloudFront, S3, Secrets Manager, KMS, databases, containers (ECS), and an ML question based on Amazon Transcribe.
Passed SAA-C03 = Lots of networking + security questions!
Just passed the AWS Certified Solutions Architect Associate exam SAA-C03 and thank God I allocated some time improving my core networking knowledge. In my point of view, the exam is filled with networking and security questions, so make sure that you really focus on these two domains.
If you don’t know that the port number for MySQL is 3306 and the one for MS SQL is 1433, then you might get overwhelmed by the content of the SAA-C03 exam. Knowing how big or how small a particular VPC (or network) would be based on a given CIDR notation would help too. Integrating SSL/HTTPS into services like ALB, CloudFront, etc. also comes up in the exam.
On the top of my head, these are the related networking stuff I encountered. Most of the things in this list are somewhat mentioned in the official exam guide:
Ports (e.g. 3306 = MySQL, 1433 = Microsoft SQL)
Regional API Gateway
DNS Resolution between On-Premises networks and AWS
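For the CIDR-sizing point mentioned above, Python’s standard ipaddress module makes the arithmetic quick to practice (remember AWS reserves five addresses in every subnet):

```python
import ipaddress

AWS_RESERVED = 5  # network, VPC router, DNS, future use, broadcast

def usable_ips(cidr):
    """Usable addresses in an AWS subnet of the given CIDR block."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED

usable_ips("10.0.0.0/24")  # 256 - 5 = 251
usable_ips("10.0.0.0/28")  # 16 - 5 = 11 (the smallest subnet AWS allows)
```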
As far as I know, AWS shuffles the content of their exam so you probably could get these topics too. Some feature questions could range from basic to advanced, so make sure you know each feature of all the AWS services mentioned in the exam guide. Here’s what i could remember:
Amazon MQ with active/standby
S3 Features (Requester Pays, Object Lock etc)
Data Lakes
Amazon Rekognition
Amazon Comprehend
For my exam prep, I started my study with Jon Bonso/TD’s SAA video course then moved to Adrian Cantrill’s course. Both are very solid resources and each instructor has a different style of teaching. Jon’s course is more like a minimalist, modern YouTube style teaching. He starts with an overview first before going to the nitty-gritty tech details of things, with fancy montage of videos to drive the fundamental concept in AWS. I recommend his stuff as a crash course to learn the majority of SAA-related content. There’s also a bunch of playcloud/hands-on labs included in his course which I find very helpful too.
Adrian’s course is much longer and includes the necessary networking/tech fundamentals. Like what other people are saying in this sub, the quality of his stuff is superb and very well delivered. If you are not in a rush and really want to learn the ropes of being a solutions architect, then his course is definitely a must-have. He also has good videos on YouTube and mini-projects on GitHub that you can check out.
About half-way through Adrian’s course, I started doing mock exams from TutorialsDojo (TD) and AWS Skill Builder just to reinforce my knowledge. I take a practice test first, then review my correct and incorrect answers. If I notice that I make a lot of mistakes on a particular service, I go back to Adrian’s course to make those concepts stick better.
I also recommend trying out the demo/sample/preview lessons before you buy any SAA course. From there, you can decide which teaching style would work best for you.
Thank you to all the helpful guys and gals in this community who shared tips!
Passed 3 AWS Exams – long post
About me:
My overall objective was to pivot more towards cloud security role from a traditional cybersecurity role. I am a security professional and have 10+ years of experience with certifications like CCIE Security, CISSP, OSCP and others. Mostly I have worked in consulting environments doing deployment and pre-sales work.
My cloud Journey:
I started studying for AWS certifications in January 2022 and did SA Associate in March, SA Professional in August and Security Specialty in September. I used a mix of Adrian’s, Stephane’s, and Neal’s videos. I used TutorialsDojo for practice tests.
Preparation Material:
For videos, Adrian’s stood out for the level of effort this guy has put in. Had this been 6-8 years back, this kind of on-site bootcamp for one candidate would sell for a minimum of 5,000 USD. I watched them at 1.25x speed, but it was difficult to come back to Adrian’s content due to its length if I wanted to recall/revise something. That’s why I had Stephane’s and Neal’s stuff in my pocket; they usually go on sale for 12-13 USD, so there’s no harm in having them. Neal did a better job than Stephane for SA Pro, as his slides were much more visually appealing, but I felt Stephane covered more concepts. Topics like VPC and Transit Gateway can be better understood if the visuals are better. I never made any notes; I purchased TutorialsDojo’s notes, but I don’t think they were of much use. You can always find notes made by other people on GitHub, and I felt they were more helpful. You can also download the video slides from Udemy, and I did cut a few slides from there and paste them into my Google doc for revision. For the practice tests, I felt TD’s wording was complex compared to the real exam, but it does give a very good idea of the difficulty of the exam. The real exam had crisper content.
About Exam:
The exams themselves were interesting because they helped me learn modern datacenter architecture. Concepts and technologies like Lambda, Step Functions, AWS Organizations and SCPs were very interesting, and I feel way more confident now compared to what I was 1 year back. Because I target security roles, I want to point out that not everything needed for these roles is covered in the AWS certifications. I had gone through the CSA 4.0 guide back in December 2021 before starting my AWS journey, and I think that helped me visualize many scenarios. Concepts like shadow IT, legal hold, vendor lock-in, SOC 2/3 reports, and portability and interoperability problems in cloud environments were very new to me. I wish AWS would include this stuff in the security exam. These concepts lean more toward compliance and governance, but they are important to know if you are going to interview for cloud security architect roles. I also feel DevSecOps concepts should feature more in the Security specialty exam.
A bit of criticism here. The exam is very much product-specific, and many people coming from deployment/research backgrounds will even call it a marketing exam. In fact, one L7 Principal Security SA from AWS told me that he considers this a marketing exam. On this forum there are often discussions on how difficult the AWS SA Pro is, but I disagree with that. These exams were nowhere near the difficulty level of CCIE, CISSP or OSCP, which I did in the past. The difficulty of the exam is high because of the long length of its questions, the reading fatigue it can cause, and the lack of Visio diagrams. All of these things are not relevant to the real world if you are working as a Solution Architect/Security Architect. Especially for SA Pro, almost all questions go like this example scenario: “A customer plans to migrate to the AWS cloud, where the applications are to reside on EC2 with auto-scaling enabled in a private subnet; those EC2 instances are behind an ALB which is in a public subnet. A replica of this should be created in an EU region and Route 53 should be doing geolocation routing.” In the real world, these kinds of designs are always communicated using Visio diagrams, i.e. a “current state architecture diagram” and a “future state architecture diagram”. In almost every question I had to read this and draw on the provided sheet, which created extra work and reading fatigue. I bet non-English speakers who are experienced architects will find it irritating even though they are given 30 minutes extra. If AWS changed these long sentences into diagrams, that would make things easier and more aligned to the real world; I’m not sure they would want to do it, though, because then the difficulty goes down. Also, because SMEs are often paid per question they make, they don’t want to put more effort into creating diagrams. That’s the problem when you outsource question creation to 3rd-party SMEs: the payment is based on the number of questions made, and I don’t think companies even pay for this.
Often this is voluntary work against which the company grants some sort of free recertification or exam voucher.
There seems to be quite a buzz around the Advanced Networking exam, which is considered the most difficult. While I haven’t looked into the exam, I would say that if it doesn’t have diagrams in each question then the exam is not aligned to the real world. Networking challenges should never be communicated without diagrams. Again, the difficulty is high because it causes reading fatigue, which doesn’t happen in the life of a security architect.
Tips to be a successful consultant:
If you want to become a cloud security architect, I would still highly recommend AWS SA Pro; the Security specialty not so much, because beyond some extra KMS and a few other bits, the Security specialty was not the eye-opener for me that SA Pro was. Even the AWS job description for the L6 Security Architect (ProServ) role says that the candidate must be able to complete AWS SA Pro within 3 months of hiring, which means it is more relevant than the Security specialty even for security roles. But these are all products, and you need knowledge beyond that for security roles. The driving force of security has mostly been compliance: you should be really good at things like PCI DSS, ISO 27001 and the Cloud Controls Matrix, because at the end of the day you need to map these controls to the product, so understanding the product is not even 50% of the job. Terraform/Pulumi if you were to communicate your ideas/PoCs as IaC. Some Python/boto3 SDK will help you in creating use cases (needed for ProServ roles but not for SA roles). If you are looking to do threat modeling of cloud-native applications, you again need AWS knowledge plus securing the SDLC process, SAST/DAST, and then MITRE ATT&CK/Cloud Controls Matrix, etc.
Similarly, if you want to be in networking roles, don’t think AWS Advanced Networking will help you be a good consultant. It’s a very complex topic, and I would recommend looking beyond it by following courses by Ivan Pepelnjak, who is himself a networking veteran: https://www.ipspace.net/Courses. This kind of stuff will help you be a much more confident consultant.
I am starting my python journey now which will help me automate use cases. Feel free to ping me if you have any questions.
So I finally got my score: 886, which is definitely more than I expected. I have been working on AWS for about a year, but my company is slowly moving there, so I don’t have a ton of hands-on experience yet.
I got a lot of helpful information from so many people on this subreddit. It is now my turn to share my experience.
Study plan
Started with Cantrill’s SAA-C02 course and later switched to his SAA-C03 course. He does a great job of explaining everything in detail. He really covers every topic in great detail, and the demos are well structured and detailed. Worth every penny. It does take a long time to finish his course, so plan accordingly.
Tutorials Dojo study guide & cheat sheets – I liked this 300-odd-page PDF where all the crucial topics are summarized. Bonso does a great job of comparing similar services and highlighting things that may get you confused during the exam. I took notes within the PDF and used the highlighter tool a lot. It helped me revise a couple of days before the exam.
Tutorials Dojo practice tests – These tests are the BEST. The questions are similar to what they ask in the exam. The explanation under every question is very helpful. Read through these for every question that you got wrong, and even for the questions that you got right but weren’t 100% sure about.
Official exam guide – I used this at the end to check if I have an understanding of knowledge and skill items. The consolidated list of services is really helpful. I took notes against each service and especially focused on services that look similar.
Labs – While Cantrill’s labs are great, if you are following along with him then you may be going too fast and missing a few things. If you are new to a particular service, then you should absolutely go back and go through every screen at your own pace. I did spend time doing labs, but not nearly as much as I had hoped.
Exam experience
First few questions were easy. A lot of short questions, which definitely helped with my nerves.
Questions then started getting longer, and the answers were confusing too. I flagged about 20-odd questions for review but could only review half of them before the timer ran out.
Remember that 15 questions are not scored. No point spending a lot of time on a question that may not even count against your final score. Use the flag for review feature and come back to a question later if time permits.
Watch out for exactly what they are asking for. You as an architect might want to solve the problem in another way than what the question is asking you to do.
Lots of the comments here about networking / VPC questions being prevalent are true. Also so many damn Aurora questions, it was like a presales chat.
The questions are actually quite detailed, as some have already mentioned, so pay close attention to the minute details. Some questions you definitely have to flag for re-review.
It is by far harder than the Developer Associate exam, not least because it has a broader scope. The DVA-C02 exam was like doing a speedrun, but this felt like finishing off Sigrun in GoW. Ya gotta take your time.
I took the TJ practice exams. It somewhat helped, but having intimate knowledge of VPC and DB concepts would help more.
Passed AWS SAP-C01
Just passed the SAP this past weekend, and it was for sure a challenge. I had some familiarity with AWS already, having Cloud Practitioner and having passed the SAA back in 2019. I originally wanted to pass the professional version to keep my certs active, so I decided to cram to pass this before they made changes in November. Overall, I was able to pass on my first attempt after studying heavily for about 6 weeks, averaging about 4 hours a day.
I used the following for studying:
A Cloud Guru video course and labs (this was OK in my opinion, but it didn’t go into as much detail as I think it should have)
Stephane Maarek’s video course was really awesome and hit on everything I really needed on the test. I also took his practice tests a bunch of times.
Tutorials Dojo practice tests were worth every penny, and the review mode on there was perfect to practice and go over material rapidly.
Overall, I would focus first on going through the full video course with Stephane and then tackling some practice tests. I would then revisit his videos often on subjects I needed to review. On the day of the test, I took it remotely, which I honestly think added a little more stress, with the proctor all over me on any movement. I ended up passing with a score of 811. Not the best score, but I honestly thought I did worse overall, as the test was challenging and time flew by.
Took Ultimate AWS Certified Solutions Architect Associate SAA-C03 course by Stephane Maarek on Udemy. Sat through all lectures and labs. I think Maarek’s course provides a good overview of all necessary services including hands on labs which prepare you for real world tasks.
Finished all Practice Exams by Tutorial Dojo. Did half of the tests first in review mode and the rest in timed mode.
For last minute summary preparation, I used Tutorials Dojo Study Guide eBook. It was around $4 and summarizes all services. Good ebook to go through before your exam. It is around 280 pages. I only went through summary of services that I was struggling with.
Exam Day and Details:
I opted for an in-person exam with Pearson since I live close to their testing centers and I had heard about people running into issues with online exams. If you have a testing center nearby, I highly recommend you go there. Unlike online exams, you are free to use the bathroom and use blank sheets of paper. I just felt there was more freedom during the in-person exam.
The exam questions were harder than TD. They were more detailed and usually had a combination of multiple services as the correct answer. Read the questions very carefully and flag them for review if you aren’t sure.
Around 5-10 questions were exactly the same from TD which was very helpful.
There were a lot of questions related to S3, EBS, EFS, RDS and DynamoDB. So focus on those.
I saw ~5 questions with AWS services which I had never heard of before. I believe those were part of the 15 ungraded questions, so if you see services you haven’t heard of, I wouldn’t worry much about them.
It took me around 1.5 hours to finish the exam including the review. I finished my exam around 4PM and got results next morning around 5AM. I only got email from credly. However, I was able to download my exam report from https://www.aws.training/Certification immediately.
Tips:
Try to get at least 80% on a few TD tests before you take your exam.
Take half of the TD practice exams in review mode and go through the answers in detail (even the right ones).
Opt in for in person exam if possible.
If you see AWS services you haven’t seen before, don’t panic. They are likely part of the 15 ungraded questions.
Read questions very carefully.
Relax. It’s just a certification exam, and you can retake it in 14 days if you fail. But if you follow all of the above, there is very little chance that you will fail.
I passed the SAA-C03 AWS Certified Solutions Architect Associate exam this week, all thanks to this helpful Reddit sub! Thank you to everyone who shares tips and inspiration on a regular basis. Sharing my exam experience here:
Topics I encountered in the exam:
Lots of S3 features (ex: Object Lock, S3 Access Points)
Lots of advanced cloud designs. I remember the following:
AWS cloud only with 1 VPC
AWS cloud only with 3 VPCs connected using a Transit Gateway
AWS cloud only with 3 VPCs with a shared VPC which contains shared resources that the other 2 VPCs can use.
AWS cloud + on-prem via VPN
AWS cloud + on-prem via Direct Connect
AWS cloud + on-prem with SD WAN connection
Lots of networking ( multicasting via Transit Gateway, Container Networking, Route 53 resolvers )
If you’re not a newbie anymore, I recommend skipping the basic lessons included in Adrian Cantrill’s course and focusing on the SAA-C03-specific material.
Do labs, labs, labs! Adrian has a collection of labs on his GitHub. TD has hands-on labs too, with a real AWS Console. I found the TD labs helpful for testing my actual knowledge of certain topics.
Take the TD practice exams at least twice and aim to get 90% on all tests.
Review the suitable use cases for each AWS service. The TD and Adrian’s video courses usually cover the use cases for every AWS service. Familiarize yourself with them and make notes.
Make sure that whenever you watch the videos, you create your own notes that you can review later on.
source: r/AWSCertifications
Passed SAA-C03
Hi guys,
I’ve successfully passed the SAA-C03 exam on Saturday with a score of 832. I felt like the exam was pretty difficult and was wondering if I would pass… Maybe I got a harder test set.
What I did to prepare:
Tom Carpenter’s course on LinkedIn Learning for SAA-C02. I started preparing for the exam last year but had a break in between. Meanwhile, AWS released the new version, so this course is not that relevant anymore. They will probably update it for SAA-C03 in the future.
Tutorials Dojo practice tests and materials: Now these were great! I did a couple of their practice tests in review mode and a couple in timed mode. Overall (unpopular opinion), I felt like the exam was harder than the practice tests, but the practice tests and explanations prepared me pretty well for it.
Whizlabs SAA-C03 course: They have some practice tests which were fine, but they also have Labs which are great if you want to explore the AWS services in a guided environment.
Skillcertpro practice tests: The first 5 were fine, but the others were horrible. Stay away from them! They are full of typos and also incorrect answers (S3 was ‘eventually consistent’ in one of the questions)
Are too many companies dependent on ONE cloud company? Amazon’s AWS outage impacted thousands of companies and the products they offer to consumers including doorbells, security cameras, refrigerators, 911 services, and productivity software.
AWS Glue is a pay-as-you-go service from Amazon that helps you with your ETL (extract, transform and load) needs. It automates time-consuming steps of data preparation for analytics. It extracts the data from different data sources, transforms it, and then saves it in the data warehouse. Today, we will explore AWS Glue in detail. Let’s start with the components of AWS Glue.
AWS Glue Components
Below, you’ll find some of the core components of AWS Glue.
Data Catalog
The Data Catalog is the persistent metadata store in AWS Glue. Each AWS account has one Data Catalog per AWS Region. It contains the metadata related to all your data sources, table definitions, and job definitions used to manage the ETL process in AWS Glue.
Crawler
A crawler connects to your data sources and data targets. It scans the data, infers its schema, and creates metadata tables in your AWS Glue Data Catalog.
Classifier
The classifier determines the schema of a data store. AWS Glue has built-in classifiers for common data formats like CSV, JSON, and XML, and it also provides default classifiers for common RDBMS systems.
Data store
A data store is used to store the actual data in a persistent data storage system like S3 or a relational database management system.
Database
In AWS Glue terminology, a database is a collection of associated Data Catalog table definitions organized into a logical group.
AWS Glue Architecture
How AWS Glue Works
You will identify the data sources which you will use.
You will define a crawler to point to each data source and populate the AWS Glue data catalog with the metadata table definitions. This metadata will be used when data is transformed during the ETL process.
Now your data catalog has been categorized, and the data is available for instant searching, querying, and ETL processing.
You will provide a script through the console or API so that the data can be transformed. AWS Glue can also generate a script for this purpose.
You will run the job or schedule the job to run based on a particular trigger. A trigger can be based on a particular schedule or occurring of an event.
When a job is executed, the script extracts the data from the data source(s), transforms it, and loads the transformed data into the data target. The script is run in the Apache Spark environment in AWS Glue.
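The crawler-then-job workflow above can be sketched with boto3. This is a minimal sketch; the crawler name, role ARN, database, and S3 path below are hypothetical placeholders, and the actual AWS API calls are shown only in comments since they require real credentials.

```python
def crawler_request(name, role_arn, database, s3_path):
    """Build the request body for glue.create_crawler.

    All argument values here are hypothetical placeholders.
    """
    return {
        "Name": name,
        "Role": role_arn,                              # IAM role Glue assumes to read the source
        "DatabaseName": database,                      # Data Catalog database for the metadata tables
        "Targets": {"S3Targets": [{"Path": s3_path}]},
    }

# With real AWS credentials you would then run (not executed here):
#   import boto3
#   glue = boto3.client("glue")
#   glue.create_crawler(**crawler_request("sales-crawler",
#                                         "arn:aws:iam::123456789012:role/GlueRole",
#                                         "sales_db", "s3://my-bucket/raw/"))
#   glue.start_crawler(Name="sales-crawler")       # populates the Data Catalog
#   glue.start_job_run(JobName="sales-etl-job")    # runs the authored or generated ETL script
```

The param-builder style keeps the request shape visible and easy to test without touching AWS.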
When To Use AWS Glue
Below are some of the top use cases for AWS Glue.
Build a data warehouse
If you want to build a data warehouse that will collect data from different sources, cleanse it, validate it, and transform it, then AWS Glue is an ideal fit. You can transform and move the AWS cloud data into your data store too.
Use AWS S3 as data lake
You can turn your S3 data into a data lake by cataloguing it in AWS Glue. The transformed data will then be available to Amazon Redshift and Amazon Athena for querying. Both Redshift and Athena can query your S3 data directly using the AWS Glue Data Catalog.
Create event-driven ETL pipeline
AWS Glue is a perfect fit if you want to launch an ETL job as soon as fresh data is available in S3. You can use AWS Lambda along with AWS Glue to orchestrate the ETL process.
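The Lambda-plus-Glue orchestration just described can be sketched as a small handler. This is a hedged sketch: the job name and argument key are hypothetical, and the actual start_job_run call is shown only in a comment since it needs real credentials.

```python
GLUE_JOB_NAME = "daily-etl-job"  # hypothetical Glue job name

def lambda_handler(event, context=None):
    """S3-triggered Lambda that would start a Glue job for each new object.

    Parses the S3 event notification and returns the job arguments;
    the glue.start_job_run call is shown in the comment below.
    """
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    job_args = {"--input_path": f"s3://{bucket}/{key}"}
    # With real credentials you would run (not executed here):
    #   import boto3
    #   boto3.client("glue").start_job_run(JobName=GLUE_JOB_NAME, Arguments=job_args)
    return job_args

# A sample S3 event in the shape Lambda receives:
sample_event = {"Records": [{"s3": {"bucket": {"name": "my-bucket"},
                                    "object": {"key": "raw/2024/data.csv"}}}]}
print(lambda_handler(sample_event))  # {'--input_path': 's3://my-bucket/raw/2024/data.csv'}
```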
Features of AWS Glue
Below are some of the top features of AWS Glue.
Automatic schema recognition
Crawler is a very powerful component of AWS Glue that automatically recognizes the schema of your data. Users do not need to design the schema of each data source manually. Crawlers automatically identify the schema and parse the data.
Automatic ETL code generation
AWS Glue is capable of creating the ETL code automatically. You just need to specify the source of the data and its target data store; AWS Glue will automatically generate the relevant code in Scala or Python for the entire ETL pipeline.
Job scheduler
ETL jobs are very flexible in AWS Glue. You can execute jobs on demand, and you can also have them triggered on a schedule or by an event. Multiple jobs can be executed in parallel, and you can even specify job dependencies.
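A scheduled trigger like the one described can be created programmatically. This is a sketch with hypothetical names; the create_trigger call itself is shown only in a comment.

```python
def scheduled_trigger_request(name, job_name, cron):
    """Build the request for glue.create_trigger (names are hypothetical).

    Glue schedule expressions use the form
    cron(Minutes Hours Day-of-month Month Day-of-week Year).
    """
    return {
        "Name": name,
        "Type": "SCHEDULED",
        "Schedule": f"cron({cron})",
        "Actions": [{"JobName": job_name}],   # the job(s) this trigger starts
        "StartOnCreation": True,              # activate the trigger immediately
    }

# e.g. run every day at 02:00 UTC (call shown, not executed):
#   import boto3
#   boto3.client("glue").create_trigger(**scheduled_trigger_request(
#       "nightly", "daily-etl-job", "0 2 * * ? *"))
```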
Developer endpoints
Developers can take advantage of developer endpoints to debug AWS Glue as well as develop custom crawlers, writers, and data transformers, which can later be imported into custom libraries.
Integrated data catalog
The Data Catalog is the most powerful component of AWS Glue. It is the central metadata store for all the diverse data sources in your pipeline, and you only have to maintain one Data Catalog per Region.
Benefits of Using AWS Glue
Strong integrations
AWS Glue has strong integrations with other AWS services. It provides native support for AWS RDS and Aurora databases. It also supports AWS Redshift, S3, and all common database engines and databases running in your EC2 instances. AWS Glue even supports NoSQL data sources like DynamoDB.
Built-in orchestration
You do not need to set up or maintain ETL pipeline infrastructure. AWS Glue will automatically handle the low-level complexities for you. The crawlers automate the process of schema identification and parsing, freeing you from the burden of manually evaluating and parsing different complex data sources. AWS Glue also creates the ETL pipeline code automatically. It has built-in features for logging, monitoring, alerting, and restarting failure scenarios as well.
Serverless
AWS Glue is serverless, which means you do not need to worry about maintaining the underlying infrastructure. AWS Glue has built-in scaling capabilities, so it can automatically handle extra load. It automatically handles the setup, configuration, and scaling of underlying resources.
Cost-effective
You only pay for what you use. You will only be charged for the time when your jobs are running. This is especially beneficial if your workload is unpredictable and you are not sure about the infrastructure to provision for your ETL jobs.
Drawbacks of Using AWS Glue
Here are some of the drawbacks of using AWS Glue.
Reliance on Apache Spark
Because AWS Glue jobs run on Apache Spark, your team must have Spark expertise in order to customize the generated ETL jobs. AWS Glue also generates its code in Python or Scala, so your engineers must know these programming languages too.
Complexity of some use cases
Apache Spark is not very efficient for use cases like advertising, gaming, and fraud detection, because these jobs need high-cardinality joins, and Spark is not very good at high-cardinality joins. You can handle these scenarios by implementing additional components, although that will make your ETL pipeline more complex.
Similarly, if you need to combine stream and batch jobs, that will be complex to handle in AWS Glue, because AWS Glue requires batch and stream processes to be separate. As a result, you need to maintain extra code to make sure that both of these processes run together.
AWS Glue Pricing
For ETL jobs, you are charged only for the time the job is running, at an hourly rate based on the number of DPUs (Data Processing Units) needed to run your job. One DPU provides approximately 4 vCPUs with 16 GB of memory. You also pay for the storage of metadata in the AWS Glue Data Catalog: the first million objects and the first million accesses are free. Crawlers and development endpoints are also charged at an hourly rate that depends on the number of DPUs.
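The DPU-hour pricing above can be turned into a quick estimate. The rate below is an assumed example only (actual rates vary by Region and Glue version), and billing granularity also differs by version.

```python
def glue_job_cost(dpus, runtime_minutes, rate_per_dpu_hour=0.44):
    """Estimate the cost of one Glue job run.

    rate_per_dpu_hour is an assumed example rate; check current
    regional pricing. Partial hours are prorated.
    """
    hours = runtime_minutes / 60
    return dpus * hours * rate_per_dpu_hour

# A job using 10 DPUs for 30 minutes at the assumed rate:
print(round(glue_job_cost(10, 30), 2))  # 2.2
```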
Frequently Asked Questions
How is AWS Glue different from AWS Lake Formation?
Lake Formation’s main area is governance and data-management functionality, whereas AWS Glue is strong in ETL and data processing. They complement each other: Lake Formation is primarily a permission-management layer that uses the AWS Glue Data Catalog under the hood.
Can AWS Glue write to DynamoDB?
Yes, AWS Glue can write to DynamoDB. However, the option of writing is not available in the console. You will need to customize the script to achieve that.
Can AWS Glue write to RDS?
Yes, AWS Glue can write to any RDS engine. When using the ETL job wizard, you can select the “JDBC” target option and then create a connection to any JDBC-compatible database, including RDS.
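The same JDBC connection can also be defined programmatically instead of through the wizard. This is a sketch with hypothetical values; in practice the password would come from AWS Secrets Manager, and the create_connection call is shown only in a comment.

```python
def jdbc_connection_request(name, jdbc_url, username, password):
    """Build the request for glue.create_connection targeting an RDS database.

    All values are hypothetical placeholders.
    """
    return {
        "ConnectionInput": {
            "Name": name,
            "ConnectionType": "JDBC",
            "ConnectionProperties": {
                "JDBC_CONNECTION_URL": jdbc_url,
                "USERNAME": username,
                "PASSWORD": password,   # prefer a Secrets Manager reference in real use
            },
        }
    }

# e.g. (call shown, not executed):
#   import boto3
#   boto3.client("glue").create_connection(**jdbc_connection_request(
#       "rds-mysql", "jdbc:mysql://mydb.abc123.us-east-1.rds.amazonaws.com:3306/sales",
#       "etl_user", "<from-secrets-manager>"))
```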
Is AWS Glue in real-time?
AWS Glue can process data from Amazon Kinesis Data Streams in near-real-time using micro-batches. For a large data set, there might be some delay. It can process petabytes of data both in batches and as streams.
Does AWS Glue Auto Scale?
AWS Glue provides autoscaling starting from version 3.0. It automatically adds or removes workers based on the workload.
Where is AWS Glue Data Catalog Stored?
The AWS Glue Data Catalog is a drop-in replacement for the Hive metastore, so the metadata is most probably stored in a MySQL-compatible database. However, this is not confirmed, because AWS has published no official information about it.
How Fast is AWS Glue?
AWS Glue 3.0 has improved a lot in terms of speed: it is up to 2.4 times faster than version 2. This is because it uses vectorized readers and micro-parallel SIMD CPU instructions for faster data parsing, tokenization, and indexing.
Is AWS Glue Expensive?
No, AWS Glue is not expensive. Because it is built on a serverless architecture, you are charged only when it is actually used, and there is no permanent infrastructure cost.
Is AWS Glue a Database?
No. AWS Glue is a fully managed cloud service from Amazon through which you can prepare data for analysis through an automated ETL process.
Is AWS Glue difficult to learn?
AWS Glue is not really difficult to learn, because it provides a GUI-based interface through which you can easily author, run, and monitor ETL jobs.
What is The Difference Between AWS Glue and EMR?
AWS Glue and EMR are both AWS solutions for ETL processing. EMR is a slightly faster and cheaper platform, especially if you already have the required infrastructure available. However, if you want a serverless solution where you expect your workload to be inconsistent, then AWS Glue is the better option.
The AWS Certified Cloud Practitioner Exam (CLF-C02) is an introduction to AWS services, and the intention is to examine the candidate’s ability to define what the AWS Cloud is and its global infrastructure. It provides an overview of AWS core services, security aspects, pricing, and support services. The main objective is to provide an overall understanding of the Amazon Web Services Cloud platform. The course helps you get a conceptual understanding of AWS and the basics of cloud computing, including the services, use cases, and benefits. [Get AWS CCP Practice Exam PDF Dumps here]
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
AWS Cloud Practitioner practice exam questions, answers, and references
Q1:For auditing purposes, your company now wants to monitor all API activity for all regions in your AWS environment. What can you use to fulfill this new requirement?
A. For each region, enable CloudTrail and send all logs to a bucket in each region.
B. Enable CloudTrail for all regions.
C. Ensure one CloudTrail is enabled for all regions.
D. Use AWS Config to enable the trail for all regions.
Answer: C. Ensure one CloudTrail is enabled for all regions. Turn on CloudTrail for all regions in your environment and CloudTrail will deliver log files from all regions to one S3 bucket. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.
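A multi-region trail like the one described can be sketched with boto3. The trail and bucket names below are hypothetical, and the create_trail call is shown only in a comment.

```python
def multi_region_trail_request(trail_name, bucket):
    """Build the request for cloudtrail.create_trail with all regions enabled.

    Trail and bucket names here are hypothetical placeholders.
    """
    return {
        "Name": trail_name,
        "S3BucketName": bucket,        # one bucket receives log files from every region
        "IsMultiRegionTrail": True,    # the key setting for this question
    }

# (call shown, not executed):
#   import boto3
#   boto3.client("cloudtrail").create_trail(**multi_region_trail_request(
#       "org-audit-trail", "my-cloudtrail-logs"))
```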
Use a VPC Endpoint to access S3. A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
AWS PrivateLink simplifies the security of data shared with cloud-based applications by eliminating the exposure of data to the public Internet.
It is AWS’s responsibility to secure edge locations and decommission the data. AWS responsibility, “Security of the Cloud”: AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.
Q4:You have EC2 instances running at 90% utilization and you expect this to continue for at least a year. What type of EC2 instance would you choose to ensure your cost stay at a minimum?
Reserved Instances are the best choice for instances with continuous usage and offer a reduced cost because you purchase the instance for the entire year. Amazon EC2 Reserved Instances (RI) provide a significant discount (up to 75%) compared to On-Demand pricing and provide a capacity reservation when used in a specific Availability Zone.
The AWS Simple Monthly Calculator helps customers and prospects estimate their monthly AWS bill more efficiently. Using this tool, they can add, modify and remove services from their ‘bill’ and it will recalculate their estimated monthly charges automatically.
A. Sign up for the free alert under filing preferences in the AWS Management Console.
B. Set a schedule to regularly review the Billing and Cost Management dashboard each month.
C. Create an email alert in AWS Budgets
D. In CloudWatch, create an alarm that triggers each time the limit is exceeded.
Answer:
Answer: C. AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Reservation alerts are supported for Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache, and Amazon Elasticsearch reservations.
Q7: An Edge Location is a specialized AWS data center that works with which service?
A. Lambda
B. CloudWatch
C. CloudFront
D. Route 53
Answer:
Answer: C. Lambda@Edge lets you run Lambda functions to customize the content that CloudFront delivers, executing the functions in AWS locations closer to the viewer. Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you’re serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.
CloudFront speeds up the distribution of your content by routing each user request through the AWS backbone network to the edge location that can best serve your content. Typically, this is a CloudFront edge server that provides the fastest delivery to the viewer. Using the AWS network dramatically reduces the number of networks that your users’ requests must pass through, which improves performance. Users get lower latency—the time it takes to load the first byte of the file—and higher data transfer rates.
You also get increased reliability and availability because copies of your files (also known as objects) are now held (or cached) in multiple edge locations around the world.
Answer: A. Route 53 is a Domain Name System service by AWS. When a disaster does occur, it is easy to switch to secondary sites using the Route 53 service. Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.
Answer: D. The AWS documentation shows the spectrum of disaster recovery methods; the further along the spectrum you go, the less downtime your users experience.
Q11:Your company is planning to host resources in the AWS Cloud. They want to use services which can be used to decouple resources hosted on the cloud. Which of the following services can help fulfil this requirement?
A. AWS EBS Volumes
B. AWS EBS Snapshots
C. AWS Glacier
D. AWS SQS
Answer:
D. AWS SQS: Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.
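The SQS decoupling pattern can be sketched briefly. The message schema below is hypothetical (SQS treats the body as an opaque string of up to 256 KB), and the send/receive calls are shown only in comments.

```python
import json

def build_order_message(order_id, items):
    """Serialize an application event into an SQS message body (JSON).

    The order_id/items schema here is a hypothetical example.
    """
    return json.dumps({"order_id": order_id, "items": items})

# A producer sends and a consumer receives like this (calls shown, not executed):
#   import boto3
#   sqs = boto3.client("sqs")
#   sqs.send_message(QueueUrl=queue_url, MessageBody=build_order_message(42, ["book"]))
#   resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)  # long polling

body = build_order_message(42, ["book", "pen"])
print(json.loads(body)["order_id"])  # 42
```

Because producer and consumer only share the queue and the message schema, either side can fail or scale independently, which is exactly the decoupling the answer describes.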
A. 99.999999999% durability and 99.99% availability. The S3 Standard storage class has a rating of 99.999999999% durability (referred to as 11 nines) and 99.99% availability.
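The 11-nines figure can be made concrete with quick arithmetic; it matches AWS’s illustration that if you store 10,000,000 objects, you can on average expect to lose a single object once every 10,000 years.

```python
DURABILITY = 0.99999999999          # 11 nines (S3 Standard design target)

def expected_annual_loss(num_objects, durability=DURABILITY):
    """Expected number of objects lost per year at the given durability."""
    return num_objects * (1 - durability)

# 10 million objects -> about 0.0001 losses/year, i.e. one object every ~10,000 years
loss = expected_annual_loss(10_000_000)
print(f"{loss:.6f} objects/year, one loss every ~{1 / loss:,.0f} years")
```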
A. Redshift is a database offering that is fully-managed and used for data warehousing and analytics, including compatibility with existing business intelligence tools.
B. and C. AWS Organizations lets you:
- Centrally manage policies across multiple AWS accounts
- Automate AWS account creation and management
- Control access to AWS services
- Consolidate billing across multiple AWS accounts
Q17: There is a requirement to host a set of servers in the Cloud for a short period of 3 months. Which of the following types of instances should be chosen to be cost-effective?
A. Spot Instances
B. On-Demand
C. No Upfront costs Reserved
D. Partial Upfront costs Reserved
Answer:
B. Since the requirement is just for 3 months, the most cost-effective option is to use On-Demand Instances.
You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Log.
Q22:A company is deploying a new two-tier web application in AWS. The company wants to store their most frequently used data so that the response time for the application is improved. Which AWS service provides the solution for the company’s requirements?
A. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone
Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.
Q23: You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 Instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost-effective way. Which of the following will meet your requirements?
When you think of cost-effectiveness, you have to choose either Spot or Reserved Instances. For a regular processing job, the best option is Spot Instances, and since your application is designed to recover gracefully from Amazon EC2 instance failures, even if you lose the Spot Instance there is no issue, because your application can recover.
A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.
Q25: A company is deploying a two-tier, highly available web application to AWS. Which service provides durable storage for static content while utilizing lower overall CPU resources for the web tier?
A. Amazon EBS volume
B. Amazon S3
C. Amazon EC2 instance store
D. Amazon RDS instance
Answer:
B. Amazon S3 is the default storage service that should be considered for companies. It provides durable storage for all static content.
Q26:When working on the costing for on-demand EC2 instances , which are the following are attributes which determine the costing of the EC2 Instance. Choose 3 answers from the options given below
Q27: You have a mission-critical application which must be globally available at all times. If this is the case, which of the below deployment mechanisms would you employ?
Always build components which are loosely coupled. This is so that even if one component does fail, the entire system does not fail. Also if you build with the assumption that everything will fail, then you will ensure that the right measures are taken to build a highly available and fault tolerant system.
Q29: You have 2 accounts under consolidated billing, one for Dev and the other for QA. The master account has purchased 3 Reserved Instances. The Dev department is currently using 2 Reserved Instances. The QA team is planning on using 3 instances of the same instance type. What is the pricing tier of the instances that can be used by the QA team?
Since all accounts are part of consolidated billing, the Reserved Instance pricing can be shared by all of them. Since 2 are already used by the Dev team, one more can be used by the QA team at the reserved rate; the rest of the instances will be On-Demand Instances.
Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.
Q32: You are exploring what services AWS offers. You have a large number of data sets that need to be processed. Which of the following services can help fulfil this requirement?
A. EMR
B. S3
C. Glacier
D. Storage Gateway
Answer:
A. Amazon EMR helps you analyze and process vast amounts of data by distributing the computational work across a cluster of virtual servers running in the AWS Cloud. The cluster is managed using an open-source framework called Hadoop. Amazon EMR lets you focus on crunching or analyzing your data without having to worry about time-consuming setup, management, and tuning of Hadoop clusters or the compute capacity they rely on.
Amazon Inspector enables you to analyze the behaviour of your AWS resources and helps you to identify potential security issues. Using Amazon Inspector, you can define a collection of AWS resources that you want to include in an assessment target. You can then create an assessment template and launch a security assessment run of this target.
Q34: Your company is planning to offload some of its batch processing workloads to AWS. These jobs can be interrupted and resumed at any time. Which of the following instance types would be the most cost-effective to use for this purpose?
A. On-Demand
B. Spot
C. Full Upfront Reserved
D. Partial Upfront Reserved
Answer:
B. Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks
Note that the AWS Console cannot be used to upload data onto Glacier. The console can only be used to create a Glacier vault which can be used to upload the data.
Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, secure, and can be as little as one-fifth the cost of high-speed Internet.
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open source databases.
You can reduce the load on your source DB Instance by routing read queries from your applications to the read replica. Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to failure of any single hardware component.
Q42: Your company is planning to host a large e-commerce application on the AWS Cloud. One of their major concerns is Internet attacks such as DDoS attacks.
Which of the following services can help mitigate this concern? Choose 2 answers from the options given below:
One of the first techniques to mitigate DDoS attacks is to minimize the surface area that can be attacked, thereby limiting the options for attackers and allowing you to build protections in a single place. We want to ensure that we do not expose our application or resources to ports, protocols, or applications from which they do not expect any communication, thus minimizing the possible points of attack and letting us concentrate our mitigation efforts. In some cases, you can do this by placing your computation resources behind Content Distribution Networks (CDNs) and Load Balancers, and by restricting direct Internet traffic to certain parts of your infrastructure, such as your database servers. In other cases, you can use firewalls or Access Control Lists (ACLs) to control what traffic reaches your applications.
You can use the consolidated billing feature in AWS Organizations to consolidate payment for multiple AWS accounts or multiple AISPL accounts. With consolidated billing, you can see a combined view of AWS charges incurred by all of your accounts. You also can get a cost report for each member account that is associated with your master account. Consolidated billing is offered at no additional charge.
If you want a self-managed database, that means you want complete control over the database engine and the underlying infrastructure. In such a case, you need to host the database on an EC2 Instance.
If the database is going to be used for a minimum of one year, then it is better to get Reserved Instances. You can save on costs, and if you use the partial upfront option, you can get a better discount.
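The arithmetic behind this can be sketched with some back-of-the-envelope numbers. The hourly rates and upfront fee below are assumed for illustration only; actual AWS pricing varies by region, instance type, and term.

```python
# Hypothetical 1-year pricing to show why a year-long workload favors
# a partial-upfront Reserved Instance over On-Demand (rates are assumed,
# not real AWS prices).
HOURS_PER_YEAR = 24 * 365

on_demand_rate = 0.10          # $/hour, assumed
partial_upfront_fee = 250.0    # one-time payment, assumed
partial_upfront_rate = 0.035   # $/hour for the remainder, assumed

on_demand_cost = on_demand_rate * HOURS_PER_YEAR
reserved_cost = partial_upfront_fee + partial_upfront_rate * HOURS_PER_YEAR

savings = 1 - reserved_cost / on_demand_cost
print(f"On-Demand: ${on_demand_cost:,.2f}")
print(f"Reserved:  ${reserved_cost:,.2f}")
print(f"Savings:   {savings:.0%}")
```

With these assumed rates the reservation comes out roughly a third cheaper over the year, which is why steady, long-lived workloads favor Reserved Instances.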
The AWS Console cannot be used to upload data onto Glacier. The console can only be used to create a Glacier vault which can be used to upload the data.
Security groups acts as a virtual firewall for your instance to control inbound and outbound traffic. Network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets.
Q52: You plan to deploy an application on AWS. This application needs to be PCI Compliant. Which of the steps below are needed to ensure compliance? Choose 2 answers from the list:
A. Choose AWS services which are PCI Compliant
B. Ensure the right steps are taken during application development for PCI Compliance
C. Ensure the AWS Services are made PCI Compliant
D. Do an audit after the deployment of the application for PCI Compliance.
Answer:
A. and B. Under the shared responsibility model, AWS keeps its in-scope services PCI compliant; your responsibility is to choose those compliant services and to follow PCI-compliant practices during application development.
Q57: Which of the following is a factor when calculating Total Cost of Ownership (TCO) for the AWS Cloud?
A. The number of servers migrated to AWS
B. The number of users migrated to AWS
C. The number of passwords migrated to AWS
D. The number of keys migrated to AWS
Answer:
A. Running servers will incur costs. The number of running servers is one factor of server costs, a key component of the AWS Total Cost of Ownership (TCO). Reference: AWS cost calculator
Q58:Which AWS Services can be used to store files? Choose 2 answers from the options given below:
A. Amazon CloudWatch
B. Amazon Simple Storage Service (Amazon S3)
C. Amazon Elastic Block Store (Amazon EBS)
D. AWS Config
E. Amazon Athena
B. and C. Amazon S3 is object storage built to store and retrieve any amount of data from anywhere. Amazon Elastic Block Store provides persistent block storage for Amazon EC2.
C: AWS is defined as a cloud services provider. They provide hundreds of services, of which compute and storage are included (but not limited to). Reference: AWS
Q60: Which AWS service can be used as a global content delivery network (CDN) service?
A. Amazon SES
B. Amazon CloudTrail
C. Amazon CloudFront
D. Amazon S3
Answer:
C: Amazon CloudFront is a web service that gives businesses and web application developers an easy and cost-effective way to distribute content with low latency and high data transfer speeds. Like other AWS services, Amazon CloudFront is a self-service, pay-per-use offering, requiring no long-term commitments or minimum fees. With CloudFront, your files are delivered to end-users using a global network of edge locations. Reference: AWS CloudFront
Q61: What best describes the concept of fault tolerance?
Choose the correct answer:
A. The ability for a system to withstand a certain amount of failure and still remain functional.
B. The ability for a system to grow in size, capacity, and/or scope.
C. The ability for a system to be accessible when you attempt to access it.
D. The ability for a system to grow and shrink based on demand.
Answer:
A: Fault tolerance describes the concept of a system (in our case a web application) to have failure in some of its components and still remain accessible (highly available). Fault tolerant web applications will have at least two web servers (in case one fails).
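The "at least two web servers" point can be sketched in a few lines. The server names, health flags, and route_request helper below are invented for illustration; a real deployment would rely on a health-checked load balancer rather than hand-rolled routing.

```python
# Minimal sketch of fault tolerance: route a request to the first healthy
# server in a redundant pair. Names and health states are made up.
servers = [
    {"name": "web-1", "healthy": False},  # simulated failure
    {"name": "web-2", "healthy": True},
]

def route_request(servers):
    """Return the first healthy server, mimicking a health-checked load balancer."""
    for server in servers:
        if server["healthy"]:
            return server["name"]
    raise RuntimeError("no healthy servers remain")

print(route_request(servers))  # web-2
```

Because web-2 can absorb the traffic while web-1 is down, the application stays functional despite a component failure, which is the definition the answer gives.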
Q62: The firm you work for is considering migrating to AWS. They are concerned about cost and the initial investment needed. Which of the following features of AWS pricing helps lower the initial investment amount needed?
Choose 2 answers from the options given below:
A. The ability to choose the lowest cost vendor.
B. The ability to pay as you go
C. No upfront costs
D. Discounts for upfront payments
Answer:
B and C: The best features of moving to the AWS Cloud is: No upfront cost and The ability to pay as you go where the customer only pays for the resources needed. Reference: AWS pricing
Q64: Your company has started using AWS. Your IT Security team is concerned with the security of hosting resources in the Cloud. Which AWS service provides security optimization recommendations that could help the IT Security team secure resources using AWS?
Answer: AWS Trusted Advisor. An online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment, Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices. Reference: AWS Trusted Advisor
Q65: What is the relationship between AWS global infrastructure and the concept of high availability?
Choose the correct answer:
A. AWS is centrally located in one location and is subject to widespread outages if something happens at that one location.
B. AWS regions and Availability Zones allow for redundant architecture to be placed in isolated parts of the world.
C. Each AWS region handles different AWS services, and you must use all regions to fully use AWS.
Answer: B. As an AWS user, you can create your application's infrastructure and duplicate it. By placing duplicate infrastructure in multiple regions, high availability is created because if one region fails you have a backup (in another region) to use.
Q66: You are hosting a number of EC2 Instances on AWS. You are looking to monitor CPU Utilization on the Instance. Which service would you use to collect and track performance metrics for AWS services?
Answer: Amazon CloudWatch. Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Reference: AWS CloudWatch
Q67: Which of the following support plans give access to all the checks in the Trusted Advisor service?
Answer: The Business and Enterprise support plans give access to the full set of Trusted Advisor checks.
Q68: Which of the following in AWS maps to a separate geographic location?
A. AWS Region
B. AWS Data Centers
C. AWS Availability Zone
Answer:
A: Amazon cloud computing resources are hosted in multiple locations world-wide. These locations are composed of AWS Regions and Availability Zones. Each AWS Region is a separate geographic area. Reference: AWS Regions and Availability Zones
Q69: What best describes the concept of scalability?
Choose the correct answer:
A. The ability for a system to grow and shrink based on demand.
B. The ability for a system to grow in size, capacity, and/or scope.
C. The ability for a system to be accessible when you attempt to access it.
D. The ability for a system to withstand a certain amount of failure and still remain functional.
Answer:
B: Scalability refers to the concept of a system being able to easily (and cost-effectively) scale up. For web applications, this means the ability to easily add server capacity when demand requires.
Q70: If you wanted to monitor all events in your AWS account, which of the below services would you use?
A. AWS CloudWatch
B. AWS CloudWatch logs
C. AWS Config
D. AWS CloudTrail
Answer:
D: AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. Reference: Cloudtrail
Q71: What are the four primary benefits of using the cloud/AWS?
Choose the correct answer:
A. Fault tolerance, scalability, elasticity, and high availability.
B. Elasticity, scalability, easy access, limited storage.
C. Fault tolerance, scalability, sometimes available, unlimited storage
D. Unlimited storage, limited compute capacity, fault tolerance, and high availability.
Answer:
A: Fault tolerance, scalability, elasticity, and high availability are the four primary benefits of AWS/the cloud.
Q72: What best describes a simplified definition of the “cloud”?
Choose the correct answer:
A. All the computers in your local home network.
B. Your internet service provider
C. A computer located somewhere else that you are utilizing in some capacity.
D. An on-premise data center that your company owns.
Answer:
C: The simplest definition of the cloud is a computer that is located somewhere else that you are utilizing in some capacity. AWS is a cloud services provider, as they provide access to computers they own (located at AWS data centers) that you use for various purposes.
Q73: Your development team is planning to host a development environment on the cloud. This consists of EC2 and RDS instances. This environment will probably only be required for 2 months.
Which types of instances would you use for this purpose?
A. On-Demand
B. Spot
C. Reserved
D. Dedicated
Answer:
A: The most cost-effective option would be to use On-Demand Instances. The AWS documentation gives the following additional information on On-Demand EC2 Instances: with On-Demand Instances you only pay for the EC2 instances you use. The use of On-Demand Instances frees you from the costs and complexities of planning, purchasing, and maintaining hardware, and transforms what are commonly large fixed costs into much smaller variable costs. Reference: AWS EC2 pricing on-demand
Q74: Which of the following can be used to secure EC2 Instances?
Answer: Security groups. A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don’t specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC. Reference: VPC Security Groups
Q75: What is the purpose of a DNS server?
Choose the correct answer:
A. To act as an internet search engine.
B. To protect you from hacking attacks.
C. To convert common language domain names to IP addresses.
Answer: C. Domain Name System (DNS) servers act as a “third party” that provides the service of converting common language domain names to IP addresses (which are required for a web browser to properly make a request for web content).
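At its core, that conversion is a name-to-address lookup, which can be illustrated with a toy table. The records below are fabricated for the sketch; a real resolver queries DNS over the network (for example via socket.gethostbyname in Python).

```python
# Toy illustration of what a DNS server does: map a human-readable
# domain name to an IP address. Records are fabricated example data.
dns_records = {
    "example.com":     "93.184.216.34",
    "www.example.org": "93.184.216.34",
}

def resolve(domain):
    ip = dns_records.get(domain)
    if ip is None:
        raise LookupError(f"NXDOMAIN: {domain}")
    return ip

print(resolve("example.com"))  # 93.184.216.34
```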
High availability refers to the concept that something will be accessible when you try to access it. An object or web application is “highly available” when it is accessible a vast majority of the time.
RDS is a SQL database service (that offers several database engine options), and DynamoDB is a NoSQL database option that only offers one NoSQL engine.
Q78: What are two open source in-memory engines supported by ElastiCache?
Answer: Redis and Memcached.
Q85: If you want to have SMS or email notifications sent to various members of your department with status updates on resources in your AWS account, what service should you choose?
Choose the correct answer:
A. SNS
B. GetSMS
C. RDS
D. STS
Answer:
A: Simple Notification Service (SNS) is what publishes messages to SMS and/or email endpoints.
Amazon WorkSpaces is a managed, secure Desktop-as-a-Service (DaaS) solution. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe.
Q87: Your company has recently migrated large amounts of data to the AWS cloud in S3 buckets. But it is necessary to discover and protect the sensitive data in these buckets. Which AWS service can do that?
Notes: Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.
Q88: Your Finance Department has instructed you to save costs wherever possible when using the AWS Cloud. You notice that using Reserved EC2 Instances on a 1-year contract will save money. Which payment method will save the most money?
A: Deferred
B: Partial Upfront
C: All Upfront
D: No Upfront
Answer: C
Notes: With the All Upfront option, you pay for the entire Reserved Instance term with one upfront payment. This option provides you with the largest discount compared to On Demand Instance pricing.
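A quick comparison with illustrative numbers shows why the ordering comes out this way. The upfront fees and hourly rates below are assumed values for the sketch, not real AWS pricing.

```python
# Hypothetical 1-year Reserved Instance pricing to show why All Upfront
# gives the biggest discount (all numbers are illustrative).
HOURS = 24 * 365

options = {
    "No Upfront":      {"upfront": 0.0,   "hourly": 0.062},
    "Partial Upfront": {"upfront": 260.0, "hourly": 0.030},
    "All Upfront":     {"upfront": 500.0, "hourly": 0.0},
}

totals = {
    name: p["upfront"] + p["hourly"] * HOURS
    for name, p in options.items()
}
cheapest = min(totals, key=totals.get)
for name, total in sorted(totals.items(), key=lambda kv: kv[1]):
    print(f"{name:16s} ${total:,.2f}")
print("Cheapest:", cheapest)
```

With these assumed rates, All Upfront is the lowest total, Partial Upfront is next, and No Upfront costs the most, matching the answer.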
Q89: A fantasy sports company needs to run an application for the length of a football season (5 months). They will run the application on an EC2 instance and there can be no interruption. Which purchasing option best suits this use case?
Answer: On-Demand Instances.
Notes: Five months is not a long enough term to make Reserved Instances the better option, and the application cannot be interrupted, which rules out Spot Instances. Dedicated Instances mainly provide the option to bring along existing software licenses, and the scenario does not indicate a need to do this.
Q90: Your company is considering migrating its data center to the cloud. What are the advantages of the AWS cloud over an on-premises data center?
A. Replace upfront operational expenses with low variable operational expenses.
B. Maintain physical access to the new data center, but share responsibility with AWS.
C. Replace low variable costs with upfront capital expenses.
D. Replace upfront capital expenses with low variable costs.
Answer: D. The AWS Cloud lets you trade upfront capital expense for low variable cost, so you pay only for the resources you consume.
Q91: You are leading a pilot program to try the AWS Cloud for one of your applications. You have been instructed to provide an estimate of your AWS bill. Which service will allow you to do this by manually entering your planned resources by service?
Notes: With the AWS Pricing Calculator, you can input the services you will use, and the configuration of those services, and get an estimate of the costs these services will accrue. AWS Pricing Calculator lets you explore AWS services, and create an estimate for the cost of your use cases on AWS.
Q92: Which AWS service would enable you to view the spending distribution in one of your AWS accounts?
Notes: AWS Cost Explorer is a free tool that you can use to view your costs and usage. You can view data up to the last 13 months, forecast how much you are likely to spend for the next three months, and get recommendations for what Reserved Instances to purchase. You can use AWS Cost Explorer to see patterns in how much you spend on AWS resources over time, identify areas that need further inquiry, and see trends that you can use to understand your costs. You can also specify time ranges for the data, and view time data by day or by month.
Q93: You are managing the company’s AWS account. The current support plan is Basic, but you would like to begin using Infrastructure Event Management. What support plan (that already includes Infrastructure Event Management without an additional fee) should you upgrade to?
A. Upgrade to Enterprise plan.
B. Do nothing. It is included in the Basic plan.
C. Upgrade to Developer plan.
D. Upgrade to the Business plan. No other steps are necessary.
Answer: A.
Notes: AWS Infrastructure Event Management is a structured program available to Enterprise Support customers (and Business Support customers for an additional fee) that helps you plan for large-scale events, such as product or application launches, infrastructure migrations, and marketing events.
With Infrastructure Event Management, you get strategic planning assistance before your event, as well as real-time support during these moments that matter most for your business.
Q94:You have decided to use the AWS Cost and Usage Report to track your EC2 Reserved Instance costs. To where can these reports be published?
A. Trusted Advisor
B. An S3 Bucket that you own.
C. CloudWatch
D. An AWS owned S3 Bucket.
Answer: B
Notes: The AWS Cost and Usage Reports (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use Cost and Usage Reports to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour or day, by product or product resource, or by tags that you define yourself. AWS updates the report in your bucket once a day in comma-separated value (CSV) format. You can view the reports using spreadsheet software such as Microsoft Excel or Apache OpenOffice Calc, or access them from an application using the Amazon S3 API.
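Since the report lands in your bucket as CSV, standard tooling can summarize it once downloaded. The two columns and the rows below are a fabricated mini-report for illustration; the real CUR schema has many more columns.

```python
# Sketch: summarize per-service spend from a CSV cost report.
# The column names and rows are fabricated sample data, not the real
# AWS CUR schema.
import csv
import io
from collections import defaultdict

sample_report = """\
lineItem/ProductCode,lineItem/UnblendedCost
AmazonEC2,1.25
AmazonS3,0.40
AmazonEC2,2.75
"""

costs = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample_report)):
    costs[row["lineItem/ProductCode"]] += float(row["lineItem/UnblendedCost"])

for product, total in sorted(costs.items()):
    print(f"{product}: ${total:.2f}")
```

In practice you would download the report object from your S3 bucket first (or point Amazon Athena at it) rather than embedding the CSV as a string.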
Q95: What can we do in AWS to receive the benefits of volume pricing for your multiple AWS accounts?
A. Use consolidated billing in AWS Organizations.
B. Purchase services in bulk from AWS Marketplace.
Answer: A.
Notes: You can use the consolidated billing feature in AWS Organizations to consolidate billing and payment for multiple AWS accounts or multiple Amazon Internet Services Pvt. Ltd (AISPL) accounts. You can combine the usage across all accounts in the organization to share the volume pricing discounts, Reserved Instance discounts, and Savings Plans. This can result in a lower charge for your project, department, or company than with individual standalone accounts.
Q96: A gaming company is using the AWS Developer Tool Suite to develop, build, and deploy their applications. Which AWS service can be used to trace user requests from end-to-end through the application?
Notes: AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.
Q97: A company needs to use a load balancer that can serve traffic at the TCP and UDP layers. Additionally, it needs to handle millions of requests per second at very low latencies. Which load balancer should they use?
Notes: Network Load Balancer is best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Transport Layer Security (TLS) traffic where extreme performance is required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) and is capable of handling millions of requests per second while maintaining ultra-low latencies.
Q98: Your company is migrating its services to the AWS cloud. The DevOps team has heard about infrastructure as code, and wants to investigate this concept. Which AWS service would they investigate?
Notes: AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS.
Q99: You have a MySQL database that you want to migrate to the cloud, and you need it to be significantly faster there. You are looking for a speed increase of up to 5 times the current performance. Which AWS offering could you use?
Notes: Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database built for the cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases.
Q100: A developer is trying to programmatically retrieve information from an EC2 instance, such as public keys, IP address, and instance ID. From where can this information be retrieved?
Notes: This type of data is stored in instance metadata. Instance user data does not retrieve the information mentioned, but can be used to help configure a new instance.
Q101: Why is AWS more economical than traditional data centers for applications with varying compute workloads?
A) Amazon EC2 costs are billed on a monthly basis. B) Users retain full administrative access to their Amazon EC2 instances. C) Amazon EC2 instances can be launched on demand when needed. D) Users can permanently run enough instances to handle peak workloads.
Answer: C Notes: The ability to launch instances on demand when needed allows users to launch and terminate instances in response to a varying workload. This is a more economical practice than purchasing enough on-premises servers to handle the peak load. Reference: Advantage of cloud computing
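The economics in this answer can be sketched with a simplified daily demand curve. The hourly rate and demand numbers below are assumed for illustration.

```python
# Illustrative arithmetic: for a varying workload, launching instances
# on demand costs less than permanently running peak capacity.
RATE = 0.10  # $/instance-hour, assumed

# Instances needed during each hour of a simplified day (peak of 10).
hourly_demand = [2] * 8 + [10] * 8 + [4] * 8

peak_capacity_cost = max(hourly_demand) * len(hourly_demand) * RATE
on_demand_cost = sum(hourly_demand) * RATE

print(f"Always-on peak capacity: ${peak_capacity_cost:.2f}/day")
print(f"On-demand scaling:       ${on_demand_cost:.2f}/day")
```

Provisioning for the peak around the clock (the on-premises model) pays for idle capacity during the 16 off-peak hours; launching on demand pays only for the hours actually used.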
Q102: Which AWS service would simplify the migration of a database to AWS?
A) AWS Storage Gateway B) AWS Database Migration Service (AWS DMS) C) Amazon EC2 D) Amazon AppStream 2.0
Answer: B Notes: AWS DMS helps users migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. AWS DMS can migrate data to and from most widely used commercial and open-source databases. Reference: AWS DMS
Q103: Which AWS offering enables users to find, buy, and immediately start using software solutions in their AWS environment?
A) AWS Config B) AWS OpsWorks C) AWS SDK D) AWS Marketplace
Answer: D Notes: AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that makes it easy to find, test, buy, and deploy software that runs on AWS. Reference: AWS Marketplace
Q104: Which AWS networking service enables a company to create a virtual network within AWS?
A) AWS Config B) Amazon Route 53 C) AWS Direct Connect D) Amazon Virtual Private Cloud (Amazon VPC)
Answer: D Notes: Amazon VPC lets users provision a logically isolated section of the AWS Cloud where users can launch AWS resources in a virtual network that they define. Reference: VPC https://aws.amazon.com/vpc/
Q105: Which component of the AWS global infrastructure does Amazon CloudFront use to ensure low-latency delivery?
A) AWS Regions B) Edge locations C) Availability Zones D) Virtual Private Cloud (VPC)
Answer: B Notes: – To deliver content to users with lower latency, Amazon CloudFront uses a global network of points of presence (edge locations and regional edge caches) worldwide. Reference: Cloudfront – https://aws.amazon.com/cloudfront/
Q106: How would a system administrator add an additional layer of login security to a user’s AWS Management Console?
A) Use Amazon Cloud Directory B) Audit AWS Identity and Access Management (IAM) roles C) Enable multi-factor authentication D) Enable AWS CloudTrail
Answer: C Notes: – Multi-factor authentication (MFA) is a simple best practice that adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS Management Console, they will be prompted for their username and password (the first factor—what they know), as well as for an authentication code from their MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for AWS account settings and resources. Reference: MFA – https://aws.amazon.com/iam/features/mfa/
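The authentication code from a virtual MFA device is typically a time-based one-time password (TOTP, RFC 6238). As a sketch of how that second factor is derived, here is a minimal TOTP implementation; this is for illustration only, since real setups enroll an MFA device or app rather than hand-rolling the algorithm.

```python
# Sketch of the TOTP algorithm (RFC 6238) behind virtual MFA codes:
# HMAC-SHA-1 over a 30-second time counter, dynamically truncated.
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    counter = unix_time // step
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at T=59 yields 94287082.
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because the code depends on both a shared secret (what you have) and the current time window, a stolen password alone is not enough to sign in.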
Q107: Which service can identify the user that made the API call when an Amazon EC2 instance is terminated?
A) AWS Trusted Advisor B) AWS CloudTrail C) AWS X-Ray D) AWS Identity and Access Management (AWS IAM)
Answer: B Notes: – AWS CloudTrail helps users enable governance, compliance, and operational and risk auditing of their AWS accounts. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs and APIs. Reference: AWS CloudTrail https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html
Q108: Which service would be used to send alerts based on Amazon CloudWatch alarms?
A) Amazon Simple Notification Service (Amazon SNS) B) AWS CloudTrail C) AWS Trusted Advisor D) Amazon Route 53
Answer: A Notes: Amazon SNS and Amazon CloudWatch are integrated so users can collect, view, and analyze metrics for every active SNS. Once users have configured CloudWatch for Amazon SNS, they can gain better insight into the performance of their Amazon SNS topics, push notifications, and SMS deliveries. Reference: CloudWatch for Amazon SNS https://docs.aws.amazon.com/sns/latest/dg/sns-monitoring-using-cloudwatch.html
Q109: Where can a user find information about prohibited actions on the AWS infrastructure?
A) AWS Trusted Advisor B) AWS Identity and Access Management (IAM) C) AWS Billing Console D) AWS Acceptable Use Policy
Answer: D Notes: – The AWS Acceptable Use Policy provides information regarding prohibited actions on the AWS infrastructure. Reference: AWS Acceptable Use Policy – https://aws.amazon.com/aup/
Q110: Which of the following is an AWS responsibility under the AWS shared responsibility model?
A) Configuring third-party applications B) Maintaining physical hardware C) Securing application access and data D) Managing guest operating systems
Answer: B Notes: – Maintaining physical hardware is an AWS responsibility under the AWS shared responsibility model. Reference: AWS shared responsibility model https://aws.amazon.com/compliance/shared-responsibility-model/
Q111: Which recommendations are included in the AWS Trusted Advisor checks? (Select TWO.)
A) Amazon S3 bucket permissions
B) AWS service outages
C) Multi-factor authentication (MFA) use on the AWS account root user
D) Available software patches for Amazon EC2 instances
Answer: A and C
Notes: Trusted Advisor checks for S3 bucket permissions in Amazon S3 with open access permissions. Bucket permissions that grant list access to everyone can result in higher than expected charges if objects in the bucket are listed by unintended users at a high frequency. Bucket permissions that grant upload and delete access to all users create potential security vulnerabilities by allowing anyone to add, modify, or remove items in a bucket. This Trusted Advisor check examines explicit bucket permissions and associated bucket policies that might override the bucket permissions.
Trusted Advisor does not provide notifications for service outages. You can use the AWS Personal Health Dashboard to learn about AWS Health events that can affect your AWS services or account.
Trusted Advisor checks the root account and warns if MFA is not enabled.
Trusted Advisor does not report available software patches for EC2 instances; patching the guest operating system is the customer's responsibility under the shared responsibility model.
What is the difference between Amazon EC2 Savings Plans and Spot Instances?
Amazon EC2 Savings Plans are ideal for workloads that involve a consistent amount of compute usage over a 1-year or 3-year term. With Amazon EC2 Savings Plans, you can reduce your compute costs by up to 72% over On-Demand costs.
Spot Instances are ideal for workloads with flexible start and end times, or that can withstand interruptions. With Spot Instances, you can reduce your compute costs by up to 90% over On-Demand costs. Unlike Amazon EC2 Savings Plans, Spot Instances do not require contracts or a commitment to a consistent amount of compute usage.
Amazon EBS vs Amazon EFS
An Amazon EBS volume stores data in a single Availability Zone. To attach an Amazon EC2 instance to an EBS volume, both the Amazon EC2 instance and the EBS volume must reside within the same Availability Zone.
Amazon EFS is a regional service. It stores data in and across multiple Availability Zones. The duplicate storage enables you to access data concurrently from all the Availability Zones in the Region where a file system is located. Additionally, on-premises servers can access Amazon EFS using AWS Direct Connect.
Which cloud deployment model allows you to connect public cloud resources to on-premises infrastructure?
Applications made available through hybrid deployments connect cloud resources to on-premises infrastructure and applications. For example, you might have an application that runs in the cloud but accesses data stored in your on-premises data center.
Which benefit of cloud computing helps you innovate and build faster?
Agility: The cloud gives you quick access to resources and services that help you build and deploy your applications faster.
Which developer tool allows you to write code within your web browser?
Cloud9 is an integrated development environment (IDE) that allows you to write code within your web browser.
Which method of accessing an EC2 instance requires both a private key and a public key?
SSH allows you to access an EC2 instance from your local laptop using a key pair, which consists of a private key and a public key.
Which service allows you to track the name of the user making changes in your AWS account?
CloudTrail tracks user activity and API calls in your account, which includes identity information (the user’s name, source IP address, etc.) about the API caller.
Which analytics service allows you to query data in Amazon S3 using Structured Query Language (SQL)?
Athena is a query service that makes it easy to analyze data in Amazon S3 using SQL.
Which machine learning service helps you build, train, and deploy models quickly?
SageMaker helps you build, train, and deploy machine learning models quickly.
Which EC2 storage mechanism is recommended when running a database on an EC2 instance?
EBS is a storage device you can attach to your instances and is a recommended storage option when you run databases on an instance.
Which storage service is a scalable file system that only works with Linux-based workloads?
EFS is an elastic file system for Linux-based workloads.
Which AWS service provides a secure and resizable compute platform with choice of processor, storage, networking, operating system, and purchase model?
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. Amazon EC2 offers the broadest and deepest compute platform with choice of processor, storage, networking, operating system, and purchase model. For more information, see Amazon EC2.
Which services allow you to build hybrid environments by connecting on-premises infrastructure to AWS?
Site-to-site VPN allows you to establish a secure connection between your on-premises equipment and the VPCs in your AWS account.
Direct Connect allows you to establish a dedicated network connection between your on-premises network and AWS.
What service could you recommend to a developer to automate the software release process?
CodePipeline is a developer tool that allows you to continuously automate the software release process.
Which service allows you to practice infrastructure as code by provisioning your AWS resources via scripted templates?
CloudFormation allows you to provision your AWS resources via scripted templates.
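As a minimal, hypothetical illustration of such a scripted template (the resource name is invented; adjust properties to your needs), a CloudFormation template that provisions a single versioned S3 bucket might look like:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Provision a single S3 bucket via infrastructure as code.
Resources:
  ExampleBucket:            # logical ID chosen for this example
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
```

Deploying the template (for example via the console or `aws cloudformation deploy`) creates, updates, or deletes the declared resources as a single stack.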
Which machine learning service allows you to add image analysis to your applications?
Rekognition is a service that makes it easy to add image analysis to your applications.
Which services allow you to run containerized applications without having to manage servers or clusters?
Fargate removes the need for you to interact with servers or clusters as it provisions, configures, and scales clusters of virtual machines to run containers for you.
ECS lets you run your containerized Docker applications on both Amazon EC2 and AWS Fargate.
EKS lets you run your containerized Kubernetes applications on both Amazon EC2 and AWS Fargate.
Amazon S3 offers multiple storage classes. Which storage class is best for archiving data when you want the cheapest cost and don’t mind long retrieval times?
S3 Glacier Deep Archive offers the lowest cost and is used to archive data. You can retrieve objects within 12 hours.
In the shared responsibility model, what is the customer responsible for?
You are responsible for patching the guest OS, including updates and security patches.
You are responsible for firewall configuration and securing your application.
A company needs phone, email, and chat access 24 hours a day, 7 days a week. The response time must be less than 1 hour if a production system has a service interruption. Which AWS Support plan meets these requirements at the LOWEST cost?
The Business Support plan provides phone, email, and chat access 24 hours a day, 7 days a week. The Business Support plan has a response time of less than 1 hour if a production system has a service interruption.
Which of the following is an advantage of consolidated billing on AWS?
Consolidated billing is a feature of AWS Organizations. You can combine the usage across all accounts in your organization to share volume pricing discounts, Reserved Instance discounts, and Savings Plans. This solution can result in a lower charge compared to the use of individual standalone accounts.
A company requires physical isolation of its Amazon EC2 instances from the instances of other customers. Which instance purchasing option meets this requirement?
With Dedicated Hosts, a physical server is dedicated for your use. Dedicated Hosts provide visibility and the option to control how you place your instances on an isolated, physical server. For more information about Dedicated Hosts, see Amazon EC2 Dedicated Hosts.
A company is hosting a static website from a single Amazon S3 bucket. Which AWS service will achieve lower latency and high transfer speeds?
CloudFront is a web service that speeds up the distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. Content is cached in edge locations. Content that is repeatedly accessed can be served from the edge locations instead of the source S3 bucket. For more information about CloudFront, see Accelerate static website content delivery.
Which AWS service provides a simple and scalable shared file storage solution for use with Linux-based Amazon EC2 instances and on-premises servers?
Amazon EFS provides an elastic file system that lets you share file data without the need to provision and manage storage. It can be used with AWS Cloud services and on-premises resources, and is built to scale on demand to petabytes without disrupting applications. With Amazon EFS, you can grow and shrink your file systems automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
Which service allows you to generate encryption keys managed by AWS?
KMS allows you to generate and manage encryption keys. The keys generated by KMS are managed by AWS.
Which service can integrate with a Lambda function to automatically take remediation steps when it uncovers suspicious network activity when monitoring logs in your AWS account?
GuardDuty can perform automated remediation actions by leveraging Amazon CloudWatch Events and AWS Lambda. GuardDuty continuously monitors for threats and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes multiple AWS data sources, such as AWS CloudTrail event logs, Amazon VPC Flow Logs, and DNS logs.
Which service allows you to create access keys for someone needing to access AWS via the command line interface (CLI)?
IAM allows you to create users and generate access keys for users needing to access AWS via the CLI.
Which service allows you to record software configuration changes within your Amazon EC2 instances over time?
Config helps with recording compliance and configuration changes over time for your AWS resources.
Which service assists with compliance and auditing by offering a downloadable report that provides the status of passwords and MFA devices in your account?
IAM provides a downloadable credential report that lists all users in your account and the status of their various credentials, including passwords, access keys, and MFA devices.
Which service allows you to locate credit card numbers stored in Amazon S3?
Macie is a data privacy service that helps you uncover and protect your sensitive data, such as personally identifiable information (PII) like credit card numbers, passport numbers, social security numbers, and more.
How do you manage permissions for multiple users at once using AWS Identity and Access Management (IAM)?
An IAM group is a collection of IAM users. When you assign an IAM policy to a group, all users in the group are granted permissions specified by the policy.
Which service protects your web application from cross-site scripting attacks?
WAF helps protect your web applications from common web attacks, like SQL injection or cross-site scripting.
Which AWS Trusted Advisor real-time guidance recommendations are available for AWS Basic Support and AWS Developer Support customers?
Basic and Developer Support customers get 50 service limit checks.
Basic and Developer Support customers get security checks for “Specific Ports Unrestricted” on Security Groups.
Basic and Developer Support customers get security checks on S3 Bucket Permissions.
Which service allows you to simplify billing by using a single payment method for all your accounts?
Organizations offers consolidated billing that provides 1 bill for all your AWS accounts. This also gives you access to volume discounts.
Which AWS service usage will always be free even after the 12-month free tier plan has expired?
One million Lambda requests are always free each month.
What is the easiest way for a customer on the AWS Basic Support plan to increase service limits?
The Basic Support plan allows 24/7 access to Customer Service via email and the ability to open service limit increase support cases.
Which types of issues are covered by AWS Support?
“How to” questions about AWS service and features
Problems detected by health checks
Which features of AWS reduce your total cost of ownership (TCO)?
Sharing servers with other customers (multi-tenancy) allows you to save money.
Elastic computing allows you to trade capital expense for variable expense.
You pay only for the computing resources you use with no long-term commitments.
Which service allows you to select and deploy operating system and software patches automatically across large groups of Amazon EC2 instances?
Systems Manager allows you to automate operational tasks across your AWS resources.
Which service provides the easiest way to set up and govern a secure, multi-account AWS environment?
Control Tower allows you to centrally govern and enforce the best use of AWS services across your accounts.
Which cost management tool gives you the ability to be alerted when the actual or forecasted cost and usage exceed your desired threshold?
Budgets allow you to improve planning and cost control with flexible budgeting and forecasting. You can choose to be alerted when your budget threshold is exceeded.
Which tool allows you to compare your estimated service costs per Region?
The Pricing Calculator allows you to get an estimate for the cost of AWS services. Comparing service costs per Region is a common use case.
Who can assist with accelerating the migration of legacy contact center infrastructure to AWS?
Professional Services is a global team of experts that can help you realize your desired business outcomes with AWS.
The AWS Partner Network (APN) is a global community of partners that helps companies build successful solutions with AWS.
Which cost management tool allows you to view costs from the past 12 months, current detailed costs, and forecasts costs for up to 3 months?
Cost Explorer allows you to visualize, understand, and manage your AWS costs and usage over time.
Which service reduces the operational overhead of your IT organization?
Managed Services implements best practices to maintain your infrastructure and helps reduce your operational overhead and risk.
I assume it is your subscription where the VPCs are located, otherwise you can’t really discover the information you are looking for. On the EC2 server you could use AWS CLI or PowerShell-based scripts that query the IP information. Based on the IP you can find out what instance uses the network interface, what security groups are tied to it, and in which VPC the instance is hosted. Read more here…
When using AWS Lambda inside your VPC, your Lambda function will be allocated private IP addresses, and only private IP addresses, from your specified subnets. This means that you must ensure that your specified subnets have enough free address space for your Lambda function to scale up to. Each simultaneous invocation needs its own IP. Read more here…
When a Lambda “is in a VPC”, it really means that its attached Elastic Network Interface is in the customer’s VPC and not the hidden VPC that AWS manages for Lambda.
The ENI is not related to the AWS Lambda management system that does the invocation (the data plane mentioned here). The AWS Step Function system can go ahead and invoke the Lambda through the API, and the network request for that can pass through the underlying VPC and host infrastructure.
Those Lambdas in turn can invoke other Lambda directly through the API, or more commonly by decoupling them, such as through Amazon SQS used as a trigger. Read more ….
How do I invoke an AWS Lambda function programmatically?
Invokes a Lambda function. You can invoke a function synchronously (and wait for the response), or asynchronously. To invoke a function asynchronously, set InvocationType to Event.
For synchronous invocation, details about the function response, including errors, are included in the response body and headers. For either invocation type, you can find more information in the execution log and trace.
When an error occurs, your function may be invoked multiple times. Retry behavior varies by error type, client, event source, and invocation type. For example, if you invoke a function asynchronously and it returns an error, Lambda executes the function up to two more times. For more information, see Retry Behavior.
For asynchronous invocation, Lambda adds events to a queue before sending them to your function. If your function does not have enough capacity to keep up with the queue, events may be lost. Occasionally, your function may receive the same event multiple times, even if no error occurs. To retain events that were not processed, configure your function with a dead-letter queue.
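The asynchronous retry-then-dead-letter behavior described above can be sketched locally. This is an illustrative simulation, not the AWS SDK: the names `invoke_async`, `handler`, and `dead_letter_queue` are invented for the example; real Lambda manages the queue and retries internally.

```python
# Sketch of async invocation: on error, the event is retried up to two
# more times, then retained in a dead-letter queue if one is configured.
MAX_ATTEMPTS = 3  # initial invocation + up to two retries

def invoke_async(handler, event, dead_letter_queue):
    """Deliver an event to `handler`, retrying on failure."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            return handler(event)
        except Exception:
            if attempt == MAX_ATTEMPTS:
                # All attempts exhausted: retain the unprocessed event.
                dead_letter_queue.append(event)
                return None

# Example: a handler that always fails ends up in the DLQ.
dlq = []
def flaky_handler(event):
    raise RuntimeError("simulated function error")

invoke_async(flaky_handler, {"id": 1}, dlq)
print(len(dlq))  # 1
```

A handler that succeeds on any attempt simply returns its result and never touches the dead-letter queue.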
The status code in the API response doesn’t reflect function errors. Error codes are reserved for errors that prevent your function from executing, such as permissions errors, limit errors, or issues with your function’s code and configuration. For example, Lambda returns TooManyRequestsException if executing the function would cause you to exceed a concurrency limit at either the account level (ConcurrentInvocationLimitExceeded) or function level (ReservedFunctionConcurrentInvocationLimitExceeded).
For functions with a long timeout, your client might be disconnected during synchronous invocation while it waits for a response. Configure your HTTP client, SDK, firewall, proxy, or operating system to allow for long connections with timeout or keep-alive settings.
The subnet mask determines how many bits of the network address are relevant (and thus, indirectly, the size of the network block in terms of how many host addresses are available):
192.0.2.0 with subnet mask 255.255.255.0 means that 192.0.2 is the significant portion of the network number, and that there are 8 bits left for host addresses (i.e. 192.0.2.0 thru 192.0.2.255).
192.0.2.0 with subnet mask 255.255.255.128 means that the first three octets plus the most significant bit of the last octet are the significant portion of the network number, and that there are 7 bits left for host addresses (i.e. 192.0.2.0 thru 192.0.2.127).
When in doubt, envision the network number and subnet mask in base 2 (i.e. binary) and it will become much clearer. Read more here…
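The same arithmetic can be checked with Python's standard-library `ipaddress` module, using the addresses from the examples above:

```python
import ipaddress

net24 = ipaddress.ip_network("192.0.2.0/24")  # mask 255.255.255.0
net25 = ipaddress.ip_network("192.0.2.0/25")  # mask 255.255.255.128

print(net24.netmask, net24.num_addresses)  # 255.255.255.0 256
print(net25.netmask, net25.num_addresses)  # 255.255.255.128 128
print(net25[0], net25[-1])                 # 192.0.2.0 192.0.2.127
```

The `num_addresses` values confirm the 8-bit (256 addresses) versus 7-bit (128 addresses) host portions described above.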
Separate out the roles needed to do each job. (Assuming this is a corporate environment)
Have a role for EC2, another for Networking, another for IAM.
Not everyone should be admin. Not everyone should be able to add/remove IGWs and NAT gateways, alter security groups and NACLs, or set up peering connections.
Also, another thing… lock down full internet access. Limit to what is needed and that’s it. Read more here….
How can we setup AWS public-private subnet in VPC without NAT server?
Within a single VPC, the subnets’ route tables need to point to each other. This will already work without additional routes because VPC sets up the local target to point to the VPC subnet.
Security groups are not used here since they are attached to instances, and not networks.
The NAT EC2 instance (server), or AWS-provided NAT gateway is necessary only if the private subnet internal addresses need to make outbound connections. The NAT will translate the private subnet internal addresses to the public subnet internal addresses, and the AWS VPC Internet Gateway will translate these to external IP addresses, which can then go out to the Internet. Read more here ….
What are the applications (or workloads) that cannot be migrated on to cloud (AWS or Azure or GCP)?
A good example of workloads that currently are not in public clouds are the mobile and fixed core telecom networks of tier 1 service providers. This is despite the fact that these core networks are increasingly software based and have largely been decoupled from the hardware. There are a number of reasons for this. Public cloud providers such as Azure and AWS do not offer the guaranteed availability required by telecom networks, which need 99.999% availability (typically referred to as telecom grade).
The regulatory environment also frequently restricts hosting of subscriber data outside of the operator’s data centers or in another country, and key network functions such as lawful interception cannot contractually be hosted off-prem. Read more here….
How many CIDRs can we add to my own created VPC?
You can add up to 5 IPv4 CIDR blocks, or 1 IPv6 block per VPC. You can further segment the network by utilizing up to 200 subnets per VPC. Amazon VPC Limits. Read more …
Why can’t a subnet’s CIDR be changed once it has been assigned?
Sure it can, but you’ll need to coordinate with the neighbors. You can merge two /25’s into a single /24 quite effortlessly if you control the entire range it covers. In practice you’ll see many tiny allocations in public IPv4 space, like /29’s and even smaller. Those are all assigned to different people. If you want to do a big shuffle there, you have a lot of coordinating to do.. or accept the fallout from the breakage you cause. Read more…
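Merging two adjacent /25s into one /24, as described above, can be demonstrated with the standard-library `ipaddress` module:

```python
import ipaddress

# Two adjacent /25 blocks covering the whole 192.0.2.0/24 range.
low = ipaddress.ip_network("192.0.2.0/25")
high = ipaddress.ip_network("192.0.2.128/25")

# collapse_addresses merges contiguous networks into the largest
# possible blocks.
merged = list(ipaddress.collapse_addresses([low, high]))
print(merged)  # [IPv4Network('192.0.2.0/24')]
```

Non-adjacent blocks, by contrast, would be returned unmerged, which mirrors the coordination problem described above: you can only merge ranges you fully control.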
Can one VPC talk to another VPC?
Yes, but a Virtual Private Cloud is usually built for the express purpose of being isolated from unwanted external traffic. I can think of several good reasons to encourage that sort of communication, so the idea is not without merit. Read more..
Good knowledge about the AWS services, and how to leverage them to solve simple to complex problems.
As your question is related to the deployment Pod, you will probably be asked about deployment methods (A/B testing like blue-green deployment) as well as pipelining strategies. You might be asked during this interview to reason about a simple task and to code it (like parsing a log file). Also review the TCP/IP stack in-depth as well as the tools to troubleshoot it for the networking round. You will eventually have some Linux questions, the range of questions can vary from common CLI tools to Linux internals like signals / syscalls / file descriptors and so on.
Last but not least, the Leadership Principles: I can only suggest that you prepare a story for each of them. You will quickly find which LPs they are looking for and will be able to give the right signal to your interviewer.
Finally, remember that there’s a debrief after the (usually 5) stages of your on-site interview, and more senior and convincing interviewers tend to defend their vote, so don’t screw up with them.
Be natural, focus on the question details and ask for confirmation, be cool but not too much. At the end of the day, remember that your job will be to understand customer issues and provide a solution, so treat your interviewers as if they were customers and they will see a successful CSE in you, be reassured and give you the job.
Expect questions on CloudFormation, Terraform, AWS EC2/RDS, and related stacks.
It also depends on the support team you are being hired for. Networking or compute teams (Ec2) have different interview patterns vs database or big data support.
In any case, basics of OS, networking are critical to the interview. If you have a phone screen, we will be looking for basic/semi advance skills of these and your speciality. For example if you mention Oracle in your resume and you are interviewing for the database team, expect a flurry of those questions.
The other important aspect is the Amazon Leadership Principles. Half of your interview is based on LPs. If you do not have scenarios that demonstrate the LPs, you cannot expect to work here even if your technical skills are above average (having extraordinary skills is a different thing).
The overall interview itself will have 1 phone screen if you are interviewing in the US and 1–2 if outside the US. The onsite loop will be 4 rounds, 2 of which are technical (again divided into OS and networking plus the specific speciality of the team you are interviewing for) and 2 of which are leadership principles, where we test your soft skills and management skills, as they are very important in this job. You need to have a strong viewpoint, disagree if it seems valid to do so, show empathy, and be a team player while showing the ability to pull off things individually as well. These skills will be critical for cracking LP interviews.
You will NOT be asked to code or write queries as it’s not part of the job, so you can concentrate on the theoretical part of the subject and also your resume. We will grill you on topics mentioned on your resume to start with.
“A monolith is something built from a single piece of material, historically from rock. The term monolith is normally used for an object made from a single large piece of material.” – Non-Technical Definition. “A monolithic application has a single code base with multiple modules.”
A large monolithic code-base (often spaghetti code) puts an immense cognitive load on the developer. As a result, development velocity is poor. Granular scaling (i.e., scaling part of the application) is not possible. Polyglot programming or polyglot databases are challenging.
Drawbacks of Monolithic Architecture
This simple approach has limitations in size and complexity. The application becomes too large and complex to fully understand and to change quickly and correctly. The size of the application can slow down start-up time, and you must redeploy the entire application on each update.
Sticky sessions, also known as session affinity, allow you to route a site user to the particular web server that is managing that individual user’s session. The session’s validity can be determined by a number of methods, including client-side cookies or configurable duration parameters that can be set at the load balancer that routes requests to the web servers.
Some advantages of utilizing sticky sessions are that it’s cost effective, because you store sessions on the same web servers that run your applications, and that retrieval of those sessions is generally fast because it eliminates network latency. A drawback of storing sessions on an individual node is that in the event of a failure, you are likely to lose the sessions that were resident on the failed node. In addition, if the number of your web servers changes, for example in a scale-up scenario, traffic may be spread unequally across the web servers, as active sessions may exist on particular servers. If not mitigated properly, this can hinder the scalability of your applications. Read more here …
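The idea behind session affinity can be sketched as follows. This is a toy model for illustration only (the `StickyBalancer` class is invented, not how ELB is implemented): the balancer remembers which backend first served a session cookie and keeps routing that cookie to the same server.

```python
import random

class StickyBalancer:
    def __init__(self, servers):
        self.servers = servers
        self.affinity = {}  # session cookie -> server

    def route(self, session_cookie):
        if session_cookie not in self.affinity:
            # New session: pick any server, then stick to it.
            self.affinity[session_cookie] = random.choice(self.servers)
        return self.affinity[session_cookie]

lb = StickyBalancer(["web-1", "web-2", "web-3"])
first = lb.route("session-abc")
# Every later request with the same cookie lands on the same server.
assert all(lb.route("session-abc") == first for _ in range(10))
```

The drawbacks described above are visible in the sketch: if `first`'s server fails, its `affinity` entries (the sessions) are lost, and long-lived entries keep traffic pinned to particular servers even after scaling.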
After you terminate an instance, it remains visible in the console for a short while, and then the entry is automatically deleted. You cannot delete the terminated instance entry yourself. After an instance is terminated, resources such as tags and volumes are gradually disassociated from the instance and therefore may no longer be visible on the terminated instance after a short while.
When an instance terminates, the data on any instance store volumes associated with that instance is deleted.
By default, Amazon EBS root device volumes are automatically deleted when the instance terminates. However, by default, any additional EBS volumes that you attach at launch, or any EBS volumes that you attach to an existing instance persist even after the instance terminates. This behavior is controlled by the volume’s DeleteOnTermination attribute, which you can modify
When you first launch an instance with gp2 volumes attached, you get an initial burst credit allowing for up to 30 minutes at 3,000 IOPS.
After the first 30 minutes, your volume will accrue credits as follows (taken directly from AWS documentation):
Within the General Purpose (SSD) implementation is a Token Bucket model that works as follows:
Each token represents an “I/O credit” that pays for one read or one write.
A bucket is associated with each General Purpose (SSD) volume, and can hold up to 5.4 million tokens.
Tokens accumulate at a rate of 3 per configured GB per second, up to the capacity of the bucket.
Tokens can be spent at up to 3000 per second per volume.
The baseline performance of the volume is equal to the rate at which tokens are accumulated — 3 IOPS per GB.
In addition to this, gp2 volumes provide a baseline performance of 3 IOPS per GB, up to 1 TB (3,000 IOPS). Volumes larger than 1 TB no longer work on the credit system, as they already provide a baseline of 3,000 IOPS. gp2 volumes have a cap of 10,000 IOPS regardless of the volume size (so the IOPS max out for volumes larger than roughly 3.3 TB).
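The token-bucket model above can be turned into a small simulation. This is a sketch using the figures quoted in the text (5.4 million token capacity, 3 tokens per GB per second accrual, 3,000 tokens per second maximum spend); the function name and structure are illustrative:

```python
BUCKET_CAP = 5_400_000       # maximum tokens the bucket can hold
BASELINE_PER_GB = 3          # tokens accrued per configured GB per second
MAX_SPEND_PER_SEC = 3_000    # maximum tokens spendable per second

def simulate(volume_gb, demand_iops, seconds, credits=BUCKET_CAP):
    """Return the IOPS served each second for a constant demand."""
    served = []
    for _ in range(seconds):
        credits = min(BUCKET_CAP, credits + BASELINE_PER_GB * volume_gb)
        spend = min(demand_iops, MAX_SPEND_PER_SEC, credits)
        credits -= spend
        served.append(spend)
    return served

# A 100 GB volume bursts at 3,000 IOPS while credits last,
# then falls back to its 300 IOPS baseline once they are exhausted.
print(simulate(100, demand_iops=3000, seconds=5))  # [3000, 3000, 3000, 3000, 3000]
```

With a full bucket, a 100 GB volume spends 3,000 tokens per second while accruing only 300, so the 5.4 million credits sustain the burst for roughly half an hour, matching the "30 minutes" figure above.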
Elastic IP addresses are free when they are assigned to a running instance, so feel free to use one! Elastic IPs get disassociated when you stop an instance, so you will get charged in the meantime. The benefit is that you get to keep that IP allocated to your account, instead of losing it like any other. Once you start the instance, you just re-associate the address and you have your old IP again.
Here are the charges associated with the use of Elastic IP addresses:
* $0.00 per Elastic IP address while in use
* $0.01 per non-attached Elastic IP address per complete hour
* $0.00 per Elastic IP address remap – first 100 remaps / month
* $0.10 per Elastic IP address remap – each additional remap / month over 100
If you require any additional information about pricing please reference the link below
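As a quick worked example, the charges listed above can be turned into a monthly estimate (prices as quoted in the text; the function is illustrative, so check current AWS pricing before relying on it):

```python
UNATTACHED_PER_HOUR = 0.01   # per non-attached EIP per complete hour
REMAP_FREE_PER_MONTH = 100   # first 100 remaps per month are free
REMAP_OVERAGE = 0.10         # per remap beyond the free allowance

def monthly_eip_cost(unattached_hours, remaps):
    idle = unattached_hours * UNATTACHED_PER_HOUR
    extra_remaps = max(0, remaps - REMAP_FREE_PER_MONTH)
    return idle + extra_remaps * REMAP_OVERAGE

# An EIP left unattached for a full 30-day month, with 120 remaps:
print(round(monthly_eip_cost(30 * 24, 120), 2))  # 9.2
```

An attached, rarely remapped address costs nothing, which is why the advice above is simply to keep Elastic IPs associated with running instances.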
The short answer to reducing your AWS EC2 costs – turn off your instances when you don’t need them.
Your AWS bill is just like any other utility bill, you get charged for however much you used that month. Don’t make the mistake of leaving your instances on 24/7 if you’re only using them during certain days and times (ex. Monday – Friday, 9 to 5).
To automatically start and stop your instances, AWS offers an “EC2 scheduler” solution. A better option would be a cloud cost management tool that not only stops and starts your instances automatically, but also tracks your usage and makes sizing recommendations to optimize your cloud costs and maximize your time and savings.
You could potentially save money using Reserved Instances. But in non-production environments such as dev, test, QA, and training, Reserved Instances are not your best bet. Why? These environments are less predictable; you may not know how many instances you need and when you will need them, so it’s better not to waste spend on these usage charges. Instead, schedule such instances (preferably using ParkMyCloud). Scheduling instances to be up only 12 hours per day on weekdays will save you 65% – better than all but the most restrictive 3-year RIs!
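The roughly 65% figure above follows directly from the hours involved, as this small check shows:

```python
# An instance up only 12 hours per weekday, versus running 24/7.
hours_per_week_on = 12 * 5        # 60 hours
hours_per_week_total = 24 * 7     # 168 hours

savings = 1 - hours_per_week_on / hours_per_week_total
print(f"{savings:.0%}")  # 64%
```

The exact value is about 64.3%, which rounds to the ~65% quoted above.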
Well, AWS is a web services provider that offers a set of services related to compute, storage, database, networking, and more to help businesses scale and grow.
All your concerns are related to AWS EC2 instance, so let me start with an instance
Instance:
An EC2 instance is similar to a server where you can host your websites or applications to make them available globally
It is highly scalable and works on the pay-as-you-go model
You can increase or decrease the capacity of these instances as per the requirement
AMI:
AMI provides the information required to launch the EC2 instance
AMI includes the pre-configured templates of the operating system that runs on the AWS
Users can launch multiple instances with the same configuration from a single AMI
Snapshot:
Snapshots are incremental backups of Amazon EBS volumes
Data in EBS is backed up to S3 by taking point-in-time snapshots
When a snapshot is deleted, only the data unique to that snapshot is removed
They are definitely all chalk and cheese to one another.
A VPN (Virtual Private Network) is essentially an encrypted “channel” connecting two networks, or a machine to a network, generally over the public internet.
A VPS (Virtual Private Server) is a rented virtual machine running on someone else’s hardware. AWS EC2 can be thought of as a VPS, but the term is usually used to describe low-cost products offered by lots of other hosting companies.
A VPC (Virtual Private Cloud) is a virtual network in AWS (Amazon Web Services). It can be divided into private and public subnets, have custom routing rules, have internal connections to other VPCs, etc. EC2 instances and other resources are placed in VPCs similarly to how physical data centers have operated for a very long time.
Elastic IP address is basically the static IP (IPv4) address that you can allocate to your resources.
Now, in the case that you allocate the IP to a resource (and the resource is running), you are not charged anything. On the other hand, if you create an Elastic IP but do not allocate it to a resource (or the resource is not running), then you are charged a small amount (around $0.005 per hour, if I remember correctly)
Additional info about these:
You are limited to 5 Elastic IP addresses per region. If you require more than that, you can contact AWS support with a request for additional addresses. You need to have a good reason in order to be approved because IPv4 addresses are becoming a scarce resource.
In general, you should be good without Elastic IPs for most of the use-cases (as every EC2 instance has its own public IP, and you can use load balancers, as well as map most of the resources via Route 53).
One of the use-cases that I’ve seen where my client is using Elastic IP is to make it easier for him to access specific EC2 instance via RDP, as well as do deployment through Visual Studio, as he targets the Elastic IP, and thus does not have to watch for any changes in public IP (in case of stopping or rebooting).
At this time, AWS Transit Gateway does not support inter region attachments. The transit gateway and the attached VPCs must be in the same region. VPC peering supports inter region peering.
An EC2 instance is a server instance, whilst a WorkSpace is a Windows desktop instance.
Both Windows Server and Windows workstation editions have desktops. Windows Server Core does not (and AWS doesn’t appear to offer an AMI for Windows Server Core).
It is possible to SSH into a Windows instance – this is done on port 22, although it is not enabled by default. You would not see a desktop when using SSH.
If you are seeing a desktop, you are “RDPing” into the Windows instance. This is done with the RDP protocol on port 3389.
Two different protocols and two different ports.
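The two access paths above can be summarized in a small lookup table; this snippet is purely illustrative:

```python
# SSH (port 22) gives a shell; RDP (port 3389) gives the Windows desktop.
REMOTE_ACCESS = {
    "ssh": {"port": 22, "gives": "shell"},
    "rdp": {"port": 3389, "gives": "desktop"},
}

def port_for(protocol: str) -> int:
    """Look up the default port for a remote-access protocol."""
    return REMOTE_ACCESS[protocol.lower()]["port"]

print(port_for("RDP"))  # 3389
print(port_for("ssh"))  # 22
```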
WorkSpaces doesn’t allow terminal or SSH services by default; you need to use the WorkSpaces client. You can still enable RDP and/or SSH, but this is not recommended.
WorkSpaces is a managed desktop service. AWS takes care of pre-built AMIs, software licenses, joining to a domain, scaling, etc.
What is Amazon EC2? Scalable, pay-as-you-go compute capacity in the cloud. Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.
What is Amazon WorkSpaces? Easily provision cloud-based desktops that allow end-users to access applications and resources. With a few clicks in the AWS Management Console, customers can provision a high-quality desktop experience for any number of users at a cost that is highly competitive with traditional desktops and half the cost of most virtual desktop infrastructure (VDI) solutions. End-users can access the documents, applications and resources they need with the device of their choice, including laptops, iPad, Kindle Fire, or Android tablets.
Elastic – Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds or even thousands of server instances simultaneously.
Completely Controlled – You have complete control of your instances. You have root access to each one, and you can interact with them as you would any machine.
Flexible – You have the choice of multiple instance types, operating systems, and software packages. Amazon EC2 allows you to select a configuration of memory, CPU, instance storage, and the boot partition size that is optimal for your choice of operating system and application.
On the other hand, Amazon WorkSpaces provides the following key features:
Support Multiple Devices – Users can access their Amazon WorkSpaces using their choice of device, such as a laptop computer (Mac OS or Windows), iPad, Kindle Fire, or Android tablet.
Keep Your Data Secure and Available – Amazon WorkSpaces provides each user with access to persistent storage in the AWS cloud. When users access their desktops using Amazon WorkSpaces, you control whether your corporate data is stored on multiple client devices, helping you keep your data secure.
Choose the Hardware and Software You Need – Amazon WorkSpaces offers a choice of bundles providing different amounts of CPU, memory, and storage so you can match your Amazon WorkSpaces to your requirements. Amazon WorkSpaces offers preinstalled applications (including Microsoft Office) or you can bring your own licensed software.
Amazon EBS vs Amazon EFS
An Amazon EBS volume stores data in a single Availability Zone. To attach an Amazon EC2 instance to an EBS volume, both the Amazon EC2 instance and the EBS volume must reside within the same Availability Zone.
Amazon EFS is a regional service. It stores data in and across multiple Availability Zones. The duplicate storage enables you to access data concurrently from all the Availability Zones in the Region where a file system is located. Additionally, on-premises servers can access Amazon EFS using AWS Direct Connect.
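The Availability Zone scoping described above is the main decision point between the two services. A tiny illustrative rule-of-thumb helper:

```python
# Rule of thumb based on the scoping above:
# EBS volumes live in a single AZ and attach to an EC2 instance in that same AZ;
# EFS is regional and can be mounted concurrently from all AZs in the Region.
def pick_storage(needs_multi_az_concurrent_access: bool) -> str:
    """Very simplified chooser; real designs weigh throughput, latency, cost, etc."""
    return "EFS" if needs_multi_az_concurrent_access else "EBS"

print(pick_storage(True))   # EFS
print(pick_storage(False))  # EBS
```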
EC2
Provides secure, resizable compute capacity in the cloud. It makes web-scale cloud computing easier for developers. EC2
EC2 Spot
Run fault-tolerant workloads for up to 90% off. EC2Spot
EC2 Autoscaling
Automatically add or remove compute capacity to meet changes in demand. EC2_AutoScaling
Lightsail
Designed to be the easiest way to launch & manage a virtual private server with AWS. An easy-to-use cloud platform that offers everything you need to build an application or website. Lightsail
Batch
Enables developers, scientists, & engineers to easily & efficiently run hundreds of thousands of batch computing jobs on AWS. Fully managed batch processing at any scale. Batch
Containers
Elastic Container Service (ECS)
Highly secure, reliable, & scalable way to run containers. ECS
Lambda
Run code without thinking about servers. Pay only for the compute time you consume. Lambda
Edge and hybrid
Outposts
Run AWS infrastructure & services on premises for a truly consistent hybrid experience. Outposts
Snow Family
Collect and process data in rugged or disconnected edge environments. SnowFamily
Wavelength
Deliver ultra-low-latency applications for 5G devices. Wavelength
VMware Cloud on AWS
Innovate faster, rapidly transition to the cloud, & work securely from any location. VMware_On_AWS
Local Zones
Run latency sensitive applications closer to end-users. LocalZones
Networking and Content Delivery
Use cases
Functionality
Service
Description
Build a cloud network
Define and provision a logically isolated network for your AWS resources
VPC
VPC lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. VPC
Connect VPCs and on-premises networks through a central hub
Transit Gateway
Transit Gateway connects VPCs & on-premises networks through a central hub. This simplifies network & puts an end to complex peering relationships. TransitGateway
Provide private connectivity between VPCs, services, and on-premises applications
PrivateLink
PrivateLink provides private connectivity between VPCs & services hosted on AWS or on-premises, securely on the Amazon network. PrivateLink
Route users to Internet applications with a managed DNS service
Route 53
Route 53 is a highly available & scalable cloud DNS web service. Route53
Scale your network design
Automatically distribute traffic across a pool of resources, such as instances, containers, IP addresses, and Lambda functions
Elastic Load Balancing
Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as EC2 instances, containers, IP addresses, & Lambda functions. ElasticLoadBalancing
Direct traffic through the AWS Global network to improve global application performance
Global Accelerator
Global Accelerator is a networking service that sends user’s traffic through AWS’s global network infrastructure, improving internet user performance by up to 60%. GlobalAccelerator
Secure your network traffic
Safeguard applications running on AWS against DDoS attacks
Shield
Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. Shield
Protect your web applications from common web exploits
WAF
WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. WAF
Centrally configure and manage firewall rules
Firewall Manager
Firewall Manager is a security management service which allows you to centrally configure & manage firewall rules across accounts & apps in an AWS Organization. FirewallManager
Build a hybrid IT network
Connect your users to AWS or on-premises resources using a Virtual Private Network
(VPN) – Client
VPN solutions establish secure connections between on-premises networks, remote offices, client devices, & the AWS global network. VPN
Create an encrypted connection between your network and your Amazon VPCs or AWS Transit Gateways
(VPN) – Site to Site
Site-to-Site VPN creates a secure connection between data center or branch office & AWS cloud resources. site_to_site
Establish a private, dedicated connection between AWS and your datacenter, office, or colocation environment
Direct Connect
Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. DirectConnect
Content delivery networks
Securely deliver data, videos, applications, and APIs to customers globally with low latency, and high transfer speeds
CloudFront
CloudFront expedites distribution of static & dynamic web content. CloudFront
Build a network for microservices architectures
Provide application-level networking for containers and microservices
App Mesh
App Mesh makes it easy to monitor & control microservices running on AWS. AppMesh
Create, maintain, and secure APIs at any scale
API Gateway
API Gateway allows the user to create, maintain, & secure their own REST and WebSocket APIs at any scale. APIGateway
Discover AWS services connected to your applications
Cloud Map
Cloud Map is a cloud resource discovery service that lets you register application resources under custom names & keeps track of their locations. CloudMap
Storage
S3
S3 is the storehouse for the internet, i.e. object storage built to store & retrieve any amount of data from anywhere. S3
AWS Backup
AWS Backup is a fully managed backup service that makes it easy to centralize & automate the backup of data across AWS services in the cloud. AWS_Backup
Amazon EBS
Amazon Elastic Block Store is a web service that provides block-level storage volumes. EBS
Amazon EFS Storage
EFS offers scalable file storage for the user’s Amazon EC2 instances. EFS
Amazon FSx
FSx supplies fully managed third-party file systems with native compatibility & feature sets for workloads. It’s available as FSx for Windows File Server (fully managed file storage built on Windows Server) & FSx for Lustre (fully managed high-performance file system integrated with S3). FSx_Windows FSx_Lustre
AWS Storage Gateway
Storage Gateway is a service which connects an on-premises software appliance with cloud-based storage. Storage_Gateway
AWS DataSync
DataSync makes it simple & fast to move large amounts of data online between on-premises storage & S3, EFS, or FSx for Windows File Server. DataSync
AWS Transfer Family
The Transfer Family provides fully managed support for file transfers directly into & out of S3. Transfer_Family
AWS Snow Family
Highly-secure, portable devices to collect & process data at the edge, and migrate data into and out of AWS. Snow_Family
Classification:
Object storage: S3
File storage services: Elastic File System, FSx for Windows Servers & FSx for Lustre
Block storage: EBS
Backup: AWS Backup
Data transfer: Storage Gateway –> 3 types: Tape, File, Volume; Transfer Family –> SFTP, FTPS, FTP
Edge computing and storage: Snow Family –> Snowcone, Snowball, Snowmobile
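The storage classification above can be restated as a lookup table, which is a handy revision aid (illustrative only, service names abbreviated as in the text):

```python
# The storage classification above, restated as a lookup table.
STORAGE_CLASSES = {
    "S3": "object storage",
    "EFS": "file storage",
    "FSx for Windows File Server": "file storage",
    "FSx for Lustre": "file storage",
    "EBS": "block storage",
    "AWS Backup": "backup",
    "Storage Gateway": "data transfer",
    "Transfer Family": "data transfer",
    "Snow Family": "edge computing and storage",
}

print(STORAGE_CLASSES["EBS"])  # block storage
print(STORAGE_CLASSES["S3"])   # object storage
```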
Databases
Database type
Use cases
Service
Description
Relational
Traditional applications, ERP, CRM, e-commerce
Aurora, RDS, Redshift
RDS is a web service that makes it easier to set up, control, and scale a relational database in the cloud. Aurora RDS Redshift
Key-value
High-traffic web apps, e-commerce systems, gaming applications
DynamoDB
DynamoDB is a fully managed NoSQL database service that offers quick and reliable performance with integrated scalability. DynamoDB
In-memory
Caching, session management, gaming leaderboards
ElastiCache
ElastiCache helps in setting up, managing, and scaling in-memory cache environments. Memcached Redis
Document
Content management, catalogs, user profiles
DocumentDB
DocumentDB (with MongoDB compatibility) is a quick, dependable, and fully-managed database service that makes it easy for you to set up, operate, and scale MongoDB-compatible databases.DocumentDB
Wide column
High scale industrial apps for equipment maintenance, fleet management, and route optimization
Keyspaces (for Apache Cassandra)
Keyspaces is a scalable, highly available, and managed Apache Cassandra–compatible database service. Keyspaces
Graph
Fraud detection, social networking, recommendation engines
Neptune
Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. Neptune
Time series
IoT applications, DevOps, industrial telemetry
Timestream
Timestream is a fast, scalable, and serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day. Timestream
Ledger
Systems of record, supply chain, registrations, banking transactions
Quantum Ledger Database (QLDB)
QLDB is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log owned by a central trusted authority. QLDB
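The database table above maps workload types to purpose-built AWS services; restated as a small lookup for revision (illustrative only):

```python
# The database table above as a lookup from workload type to AWS service(s).
DB_FOR = {
    "relational": ["Aurora", "RDS", "Redshift"],
    "key-value": ["DynamoDB"],
    "in-memory": ["ElastiCache"],
    "document": ["DocumentDB"],
    "wide column": ["Keyspaces"],
    "graph": ["Neptune"],
    "time series": ["Timestream"],
    "ledger": ["QLDB"],
}

print(DB_FOR["graph"])  # ['Neptune']
```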
Developer Tools
Service
Description
Cloud9
Cloud9 is a cloud-based IDE that enables the user to write, run, and debug code. Cloud9
CodeArtifact
CodeArtifact is a fully managed artifact repository service that makes it easy for organizations of any size to securely store, publish, & share software packages used in their software development process. CodeArtifact
CodeBuild
CodeBuild is a fully managed service that assembles source code, runs unit tests, & also generates artifacts ready to deploy. CodeBuild
CodeGuru
CodeGuru is a developer tool powered by machine learning that provides intelligent recommendations for improving code quality & identifying an application’s most expensive lines of code. CodeGuru
Cloud Development Kit
Cloud Development Kit (AWS CDK) is an open source software development framework to define cloud application resources using familiar programming languages. CDK
CodeCommit
CodeCommit is a version control service that enables the user to privately store & manage Git repositories in the AWS cloud. CodeCommit
CodeDeploy
CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as EC2, Fargate, Lambda, & on-premises servers. CodeDeploy
CodePipeline
CodePipeline is a fully managed continuous delivery service that helps automate release pipelines for fast & reliable app & infra updates. CodePipeline
CodeStar
CodeStar enables you to quickly develop, build, & deploy applications on AWS. CodeStar
CLI
AWS CLI is a unified tool to manage AWS services & control multiple services from the command line & automate them through scripts. CLI
X-Ray
X-Ray helps developers analyze & debug production, distributed applications, such as those built using a microservices architecture. X-Ray
CDK uses the familiarity & expressive power of programming languages for modeling apps. CDK
Corretto
Corretto is a no-cost, multiplatform, production-ready distribution of the OpenJDK. Corretto
Crypto Tools
Cryptography is hard to do safely & correctly. The AWS Crypto Tools libraries are designed to help everyone do cryptography right, even without special expertise. Crypto Tools
Serverless Application Model (SAM)
SAM is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, & event source mappings. SAM
Tools for developing and managing applications on AWS
Security, Identity, & Compliance
Category
Use cases
Service
Description
Identity & access management
Securely manage access to services and resources
Identity & Access Management (IAM)
IAM is a web service for securely controlling access to AWS services. IAM
Securely manage access to services and resources
Single Sign-On
SSO helps in simplifying & managing SSO access to AWS accounts & business applications. SSO
Identity management for apps
Cognito
Cognito lets you add user sign-up, sign-in, & access control to web & mobile apps quickly and easily. Cognito
Managed Microsoft Active Directory
Directory Service
AWS Managed Microsoft Active Directory (AD) enables your directory-aware workloads & AWS resources to use managed Active Directory (AD) in AWS. DirectoryService
Simple, secure service to share AWS resources
Resource Access Manager
Resource Access Manager (RAM) is a service that enables you to easily & securely share AWS resources with any AWS account or within AWS Organization. RAM
Central governance and management across AWS accounts
Organizations
Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS. Orgs
Detection
Unified security and compliance center
Security Hub
Security Hub gives a comprehensive view of security alerts & security posture across AWS accounts. SecurityHub
Managed threat detection service
GuardDuty
GuardDuty is a threat detection service that continuously monitors for malicious activity & unauthorized behavior to protect AWS accounts, workloads, & data stored in S3. GuardDuty
Analyze application security
Inspector
Inspector is a security vulnerability assessment service that improves the security & compliance of AWS resources. Inspector
Record and evaluate configurations of your AWS resources
Config
Config is a service that enables you to assess, audit, & evaluate the configurations of AWS resources. Config
Track user activity and API usage
CloudTrail
CloudTrail is a service that enables governance, compliance, operational auditing, & risk auditing of your AWS account. CloudTrail
Security management for IoT devices
IoT Device Defender
IoT Device Defender is a fully managed service that helps secure your fleet of IoT devices. IoTDD
Infrastructure protection
DDoS protection
Shield
Shield is a managed DDoS protection service that safeguards applications running on AWS. It provides always-on detection & automatic inline mitigations that minimize application downtime & latency. Shield
Filter malicious web traffic
Web Application Firewall (WAF)
WAF is a web application firewall that helps protect web apps or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. WAF
Central management of firewall rules
Firewall Manager
Firewall Manager simplifies your AWS WAF administration & maintenance tasks across multiple accounts & resources. FirewallManager
Data protection
Discover and protect your sensitive data at scale
Macie
Macie is a fully managed data (security & privacy) service that uses ML & pattern matching to discover & protect sensitive data. Macie
Key storage and management
Key Management Service (KMS)
KMS makes it easy to create & manage cryptographic keys & control their use across a wide range of AWS services & in your applications. KMS
Hardware based key storage for regulatory compliance
CloudHSM
CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate & use your own encryption keys. CloudHSM
Provision, manage, and deploy public and private SSL/TLS certificates
Certificate Manager
Certificate Manager is a service that lets you easily provision, manage, & deploy public and private SSL/TLS certs for use with AWS services & internal connected resources. ACM
Rotate, manage, and retrieve secrets
Secrets Manager
Secrets Manager helps you safely encrypt, store, & retrieve credentials for your databases & other services. SecretsManager
Incident response
Investigate potential security issues
Detective
Detective makes it easy to analyze, investigate, & quickly identify the root cause of potential security issues or suspicious activities. Detective
CloudEndure Disaster Recovery
CloudEndure Disaster Recovery provides scalable, cost-effective business continuity for physical, virtual, & cloud servers. CloudEndure
Compliance
No cost, self-service portal for on-demand access to AWS’ compliance reports
Artifact
Artifact is a web service that enables the user to download AWS security & compliance reports. Artifact
Data Lakes & Analytics
Category
Use cases
Service
Description
Analytics
Interactive analytics
Athena
Athena is an interactive query service that makes it easy to analyze data in S3 using standard SQL. Athena
Big data processing
EMR
EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Apache Spark, Hive, HBase, Flink, Hudi, & Presto. EMR
Data warehousing
Redshift
The most popular & fastest cloud data warehouse. Redshift
Real-time analytics
Kinesis
Kinesis makes it easy to collect, process, & analyze real-time, streaming data so one can get timely insights. Kinesis
Operational analytics
Elasticsearch Service
Elasticsearch Service is a fully managed service that makes it easy to deploy, secure, & run Elasticsearch cost effectively at scale. ES
Dashboards & visualizations
Quicksight
QuickSight is a fast, cloud-powered business intelligence service that makes it easy to deliver insights to everyone in the organization. QuickSight
Data movement
Real-time data movement
1) Amazon Managed Streaming for Apache Kafka (MSK) 2) Kinesis Data Streams 3) Kinesis Data Firehose 4) Kinesis Data Analytics 5) Kinesis Video Streams 6) Glue
MSK is a fully managed service that makes it easy to build & run applications that use Apache Kafka to process streaming data. MSK KDS KDF KDA KVS Glue
Data lake
Object storage
1) S3 2) Lake Formation
Lake Formation is a service that makes it easy to set up a secure data lake in days. A data lake is a centralized, curated, & secured repository that stores all data, both in its original form & prepared for analysis. S3 LakeFormation
Backup & archive
1) S3 Glacier 2) Backup
S3 Glacier & S3 Glacier Deep Archive are secure, durable, & extremely low-cost S3 cloud storage classes for data archiving & long-term backup. S3Glacier
Data catalog
1) Glue 2) Lake Formation
See above.
Third-party data
Data Exchange
Data Exchange makes it easy to find, subscribe to, & use third-party data in the cloud. DataExchange
Predictive analytics & machine learning
Frameworks & interfaces
Deep Learning AMIs
Deep Learning AMIs provide machine learning practitioners & researchers with the infrastructure & tools to accelerate deep learning in the cloud, at any scale. DeepLearningAMIs
Platform services
SageMaker
SageMaker is a fully managed service that provides every developer & data scientist with the ability to build, train, & deploy machine learning (ML) models quickly. SageMaker
Containers
Use cases
Service
Description
Store, encrypt, and manage container images
ECR
See the Compute section above.
Run containerized applications or build microservices
ECS
See the Compute section above.
Manage containers with Kubernetes
EKS
See the Compute section above.
Run containers without managing servers
Fargate
Fargate is a serverless compute engine for containers that works with both ECS & EKS. Fargate
Run containers with server-level control
EC2
See the Compute section above.
Containerize and migrate existing applications
App2Container
App2Container (A2C) is a command-line tool for modernizing .NET & Java applications into containerized applications. App2Container
Quickly launch and manage containerized applications
Copilot
Copilot is a command line interface (CLI) that enables customers to quickly launch & easily manage containerized applications on AWS. Copilot
Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance & reduces latency.
Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL & PostgreSQL-compatible editions), where the database will automatically start up, shut down, & scale capacity up or down based on your application’s needs.
RDS Proxy is a fully managed, highly available database proxy for RDS that makes applications more scalable, resilient to database failures, & more secure.
AppSync is a fully managed service that makes it easy to develop GraphQL APIs by handling the heavy lifting of securely connecting to data sources like AWS DynamoDB, Lambda.
EventBridge is a serverless event bus that makes it easy to connect applications together using data from apps, integrated SaaS apps, & AWS services.
Step Functions is a serverless function orchestrator that makes it easy to sequence Lambda functions & multiple AWS services into business-critical applications.
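Step Functions workflows are defined in the Amazon States Language, a JSON-based format. Below is a minimal single-state sketch built as a Python dict and serialized to JSON; the state name "SayHello" and the Lambda ARN are placeholders, not real resources:

```python
import json

# Minimal Amazon States Language definition as a Python dict.
# ASSUMPTIONS: state name and Lambda ARN below are illustrative placeholders.
state_machine = {
    "Comment": "Single-step workflow that invokes one Lambda function",
    "StartAt": "SayHello",
    "States": {
        "SayHello": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:hello",
            "End": True,
        }
    },
}

# The JSON string produced here is what you would pass as the state machine
# definition when creating it (e.g. via the console or an API call).
print(json.dumps(state_machine, indent=2))
```

Real workflows chain multiple states (Task, Choice, Parallel, Wait, etc.), with Step Functions handling sequencing, retries, and error handling between them.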
Control Tower
The easiest way to set up and govern a new, secure multi-account AWS environment. ControlTower
Organizations
Organizations helps you centrally govern your environment as you grow & scale workloads on AWS. Organizations
Well-Architected Tool
Well-Architected Tool helps you review the state of your workloads & compare them to the latest AWS architectural best practices. WATool
Budgets
Budgets allows you to set custom budgets to track cost & usage, from the simplest to the most complex use cases. Budgets
License Manager
License Manager makes it easier to manage software licenses from software vendors such as Microsoft, SAP, Oracle, & IBM across AWS & on-premises environments. LicenseManager
Provision
CloudFormation
CloudFormation enables the user to design & provision AWS infrastructure deployments predictably & repeatedly. CloudFormation
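As a taste of what a CloudFormation deployment looks like, here is a minimal template body built as a Python dict and emitted as JSON (CloudFormation accepts JSON or YAML). The logical ID MyBucket is a placeholder; since no BucketName is set, CloudFormation would generate one:

```python
import json

# Minimal CloudFormation template: one S3 bucket, no parameters or outputs.
# ASSUMPTION: "MyBucket" is an illustrative logical ID, not a real resource.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyBucket": {
            "Type": "AWS::S3::Bucket",
        }
    },
}

# This JSON string is the template body you would hand to CloudFormation
# when creating a stack.
print(json.dumps(template, indent=2))
```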
Service Catalog
Service Catalog allows organizations to create & manage catalogs of IT services that are approved for use on AWS. ServiceCatalog
OpsWorks
OpsWorks presents a simple and flexible way to create and maintain stacks and applications. OpsWorks
Marketplace
Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, & deploy software that runs on AWS. Marketplace
Operate
CloudWatch
CloudWatch offers a reliable, scalable, & flexible monitoring solution that is easy to get started with. CloudWatch
CloudTrail
CloudTrail is a service that enables governance, compliance, operational auditing, & risk auditing of your AWS account. CloudTrail
Read For Me launched at the 2021 AWS re:Invent Builders’ Fair in Las Vegas. It is a web application which helps the visually impaired “hear” documents. With the help of AI services such as Amazon Textract, Amazon Comprehend, Amazon Translate, and Amazon Polly, utilizing an event-driven architecture and serverless technology, users upload a picture of a document, or anything with text, and within a few seconds “hear” that document in their chosen language.
AWS read for me
2- Delivering code and architectures through AWS Proton and Git
Infrastructure operators are looking for ways to centrally define and manage the architecture of their services, while developers need to find a way to quickly and safely deploy their code. In this session, learn how to use AWS Proton to define architectural templates and make them available to development teams in a collaborative manner. Also, learn how to enable development teams to customize their templates so that they fit the needs of their services.
3- Accelerate front-end web and mobile development with AWS Amplify
User-facing web and mobile applications are the primary touchpoint between organizations and their customers. To meet the ever-rising bar for customer experience, developers must deliver high-quality apps with both foundational and differentiating features. AWS Amplify helps front-end web and mobile developers build faster front to back. In this session, review Amplify’s core capabilities like authentication, data, and file storage and explore new capabilities, such as Amplify Geo and extensibility features for easier app customization with AWS services and better integration with existing deployment pipelines. Also learn how customers have been successful using Amplify to innovate in their businesses.
3- Train ML models at scale with Amazon SageMaker, featuring Aurora
Today, AWS customers use Amazon SageMaker to train and tune millions of machine learning (ML) models with billions of parameters. In this session, learn about advanced SageMaker capabilities that can help you manage large-scale model training and tuning, such as distributed training, automatic model tuning, optimizations for deep learning algorithms, debugging, profiling, and model checkpointing, so that even the largest ML models can be trained in record time for the lowest cost. Then, hear from Aurora, a self-driving vehicle technology company, on how they use SageMaker training capabilities to train large perception models for autonomous driving using massive amounts of images, video, and 3D point cloud data.
AWS RE:INVENT 2020 – LATEST PRODUCTS AND SERVICES ANNOUNCED:
Amazon Elasticsearch Service is uniquely positioned to handle log analytics workloads. With a multitude of open-source and AWS-native service options, users can assemble effective log data ingestion pipelines and couple these with Amazon Elasticsearch Service to build a robust, cost-effective log analytics solution. This session reviews patterns and frameworks leveraged by companies such as Capital One to build an end-to-end log analytics solution using Amazon Elasticsearch Service.
Many companies in regulated industries have achieved compliance requirements using AWS Config. They also need a record of the incidents generated by AWS Config in tools such as ServiceNow for audits and remediation. In this session, learn how you can achieve compliance as code using AWS Config. Through the creation of a noncompliant Amazon EC2 machine, this demo shows how AWS Config triggers an incident into a governance, risk, and compliance system for audit recording and remediation. The session also covers best practices for how to automate the setup process with AWS CloudFormation to support many teams.
3- Cost-optimize your enterprise workloads with Amazon EBS – Compute
Recent times have underscored the need to enable agility while maintaining the lowest total cost of ownership (TCO). In this session, learn about the latest volume types that further optimize your performance and cost, while enabling you to run newer applications on AWS with high availability. Dive deep into the latest AWS volume launches and cost-optimization strategies for workloads such as databases, virtual desktop infrastructure, and low-latency interactive applications.
Location data is a vital ingredient in today’s applications, enabling use cases from asset tracking to geomarketing. Now, developers can use the new Amazon Location Service to add maps, tracking, places, geocoding, and geofences to applications, easily, securely, and affordably. Join this session to see how to get started with the service and integrate high-quality location data from geospatial data providers Esri and HERE. Learn how to move from experimentation to production quickly with location capabilities. This session can help developers who require simple location data and those building sophisticated asset tracking, customer engagement, fleet management, and delivery applications.
In this session, learn how Amazon Connect Tasks makes it easy for you to prioritize, assign, and track all the tasks that agents need to complete, including work in external applications needed to resolve customer issues (such as emails, cases, and social posts). Tasks provides a single place for agents to be assigned calls, chats, and tasks, ensuring agents are focused on the highest-priority work. Also, learn how you can use Tasks with Amazon Connect’s workflow capabilities to automate task-related actions that don’t require agent interaction. Come see how you can use Amazon Connect Tasks to increase customer satisfaction while improving agent productivity.
New agent-assist capabilities from Amazon Connect Wisdom make it easier and faster for agents to find the information they need to solve customer issues in real time. In this session, see how agents can use simple ML-powered search to find information stored across knowledge bases, wikis, and FAQs, like Salesforce and ServiceNow. Join the session to hear Traeger Pellet Grills discuss how it’s using these new features, along with Contact Lens for Amazon Connect, to deliver real-time recommendations to agents based on issues automatically detected during calls.
Grafana is a popular, open-source data visualization tool that enables you to centrally query and analyze observability data across multiple data sources. Learn how the new Amazon Managed Service for Grafana, announced with Grafana’s parent company Grafana Labs, solves common observability challenges. With the new fully managed service, you can monitor, analyze, and alarm on metrics, logs, and traces while offloading the operational management of security patching, upgrading, and resource scaling to AWS. This session also covers new Grafana capabilities such as advanced security features and native AWS service integrations to simplify configuration and onboarding of data sources.
Prometheus is a popular open-source monitoring and alerting solution optimized for container environments. Customers love Prometheus for its active open-source community and flexible query language, using it to monitor containers across AWS and on-premises environments. Amazon Managed Service for Prometheus is a fully managed Prometheus-compatible monitoring service. In this session, learn how you can use the same open-source Prometheus data model, existing instrumentation, and query language to monitor performance with improved scalability, availability, and security without having to manage the underlying infrastructure.
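To make the "flexible query language" concrete, the sketch below mimics what PromQL's `rate()` function computes over a counter: the per-second average increase across a time window. This is an illustrative pure-Python sketch with made-up sample data, not the Prometheus implementation.

```python
# Illustrative sketch (not the Prometheus implementation): PromQL's rate()
# returns the per-second average increase of a counter over a time window.
def rate(samples, window_seconds):
    """samples: list of (timestamp_seconds, counter_value), oldest first."""
    in_window = [s for s in samples if s[0] >= samples[-1][0] - window_seconds]
    if len(in_window) < 2:
        return 0.0
    (t0, v0), (t1, v1) = in_window[0], in_window[-1]
    return (v1 - v0) / (t1 - t0)

# A counter that grew from 100 to 160 requests over 60 seconds
samples = [(0, 100), (30, 130), (60, 160)]
print(rate(samples, 60))  # -> 1.0 requests/second
```

In real PromQL you would write something like `rate(http_requests_total[1m])`; the managed service runs the same query language against your existing instrumentation.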
Today, enterprises use low-power, long-range wide-area network (LoRaWAN) connectivity to transmit data over long ranges, through walls and floors of buildings, and in commercial and industrial use cases. However, this requires companies to operate their own LoRa network server (LNS). In this session, learn how you can use LoRaWAN for AWS IoT Core to avoid time-consuming and undifferentiated development work, operational overhead of managing infrastructure, or commitment to costly subscription-based pricing from third-party service providers.
10- AWS CloudShell: The fastest way to get started with AWS CLI
AWS CloudShell is a free, browser-based shell available from the AWS console that provides a simple way to interact with AWS resources through the AWS command-line interface (CLI). In this session, see an overview of both AWS CloudShell and the AWS CLI, which when used together are the fastest and easiest ways to automate tasks, write scripts, and explore new AWS services. Also, see a demo of both services and how to quickly and easily get started with each.
Industrial organizations use AWS IoT SiteWise to liberate their industrial equipment data in order to make data-driven decisions. Now with AWS IoT SiteWise Edge, you can collect, organize, process, and monitor your equipment data on premises before sending it to local or AWS Cloud destinations—all while using the same asset models, APIs, and functionality. Learn how you can extend the capabilities of AWS IoT SiteWise to the edge with AWS IoT SiteWise Edge.
AWS Fault Injection Simulator is a fully managed chaos engineering service that helps you improve application resiliency by making it easy and safe to perform controlled chaos engineering experiments on AWS. In this session, see an overview of chaos engineering and AWS Fault Injection Simulator, and then see a demo of how to use AWS Fault Injection Simulator to make applications more resilient to failure.
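The core idea of a controlled chaos experiment can be sketched without the service itself: inject failures into a dependency call and verify that your resiliency mechanism (here, retries) absorbs them. This is a hypothetical pure-Python illustration of the chaos-engineering concept, not the AWS Fault Injection Simulator API.

```python
import random

# Illustrative chaos-engineering sketch (not the AWS Fault Injection
# Simulator API): inject faults into a dependency call, then verify that
# the retry path keeps the application working.
def flaky(call, failure_rate, rng):
    def wrapped():
        if rng.random() < failure_rate:
            raise ConnectionError("injected fault")
        return call()
    return wrapped

def with_retries(call, attempts=5):
    for _ in range(attempts):
        try:
            return call()
        except ConnectionError:
            continue
    raise RuntimeError("all retries exhausted")

rng = random.Random(42)          # seeded so the experiment is repeatable
dependency = flaky(lambda: "ok", failure_rate=0.5, rng=rng)
print(with_retries(dependency))
```

The managed service applies the same principle to real AWS resources (for example, stopping instances or throttling APIs) with guardrails to stop the experiment if alarms fire.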
Organizations are breaking down data silos and building petabyte-scale data lakes on AWS to democratize access to thousands of end users. Since its launch, AWS Lake Formation has accelerated data lake adoption by making it easy to build and secure data lakes. In this session, AWS Lake Formation GM Mehul A. Shah showcases recent innovations enabling modern data lake use cases. He also introduces a new capability of AWS Lake Formation that enables fine-grained, row-level security and near-real-time analytics in data lakes.
Machine learning (ML) models may generate predictions that are not fair, whether because of biased data, a model that contains bias, or bias that emerges over time as real-world conditions change. Likewise, closed-box ML models are opaque, making it difficult to explain to internal stakeholders, auditors, external regulators, and customers alike why models make predictions both overall and for individual inferences. In this session, learn how Amazon SageMaker Clarify is providing built-in tools to detect bias across the ML workflow including during data prep, after training, and over time in your deployed model.
Amazon EMR on Amazon EKS introduces a new deployment option in Amazon EMR that allows you to run open-source big data frameworks on Amazon EKS. This session digs into the technical details of Amazon EMR on Amazon EKS, helps you understand benefits for customers using Amazon EMR or running open-source Spark on Amazon EKS, and discusses performance considerations.
Finding unexpected anomalies in metrics can be challenging. Some organizations look for data that falls outside of arbitrary ranges; if the range is too narrow, they miss important alerts, and if it is too broad, they receive too many false alerts. In this session, learn about Amazon Lookout for Metrics, a fully managed anomaly detection service that is powered by machine learning and over 20 years of anomaly detection expertise at Amazon to quickly help organizations detect anomalies and understand what caused them. This session guides you through setting up your own solution to monitor for anomalies and showcases how to deliver notifications via various integrations with the service.
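The limitation of arbitrary static ranges described above can be illustrated with a simple statistical detector that adapts to the metric's own history. This is a toy z-score sketch with made-up data, not the ML model behind Amazon Lookout for Metrics.

```python
import statistics

# Illustrative sketch (not the Amazon Lookout for Metrics model): instead of
# a fixed range, flag points that deviate strongly from the metric's own
# mean, measured in standard deviations (z-score).
def anomalies(values, threshold=2.5):
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [v for v in values if stdev and abs(v - mean) / stdev > threshold]

orders = [100, 102, 98, 101, 99, 100, 103, 97, 101, 250]  # one spike
print(anomalies(orders))  # the 250 spike is flagged; normal jitter is not
```

A fixed range wide enough to tolerate the normal jitter might still miss subtler shifts; the managed service goes further by learning seasonality and grouping related anomalies to help explain root causes.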
17- Improve application availability with ML-powered insights using Amazon DevOps Guru
As applications become increasingly distributed and complex, developers and IT operations teams need more automated practices to maintain application availability and reduce the time and effort spent detecting, debugging, and resolving operational issues manually. In this session, discover Amazon DevOps Guru, an ML-powered cloud operations service, informed by years of Amazon.com and AWS operational excellence, that provides an easy and automated way to improve an application’s operational performance and availability. See how you can transform your IT operations and reduce mean time to recovery (MTTR) with contextual insights.
Amazon Connect Voice ID provides real-time caller authentication that makes voice interactions in contact centers more secure and efficient. Voice ID uses machine learning to verify the identity of genuine customers by analyzing a caller’s unique voice characteristics. This allows contact centers to use an additional security layer that doesn’t rely on the caller answering multiple security questions, and it makes it easy to enroll and verify customers without disrupting the natural flow of the conversation. Join this session to see how fast and secure ML-based voice authentication can power your contact center.
G4ad instances feature the latest AMD Radeon Pro V520 GPUs and second-generation AMD EPYC processors. These new instances deliver the best price performance in Amazon EC2 for graphics-intensive applications such as virtual workstations, game streaming, and graphics rendering. This session dives deep into these instances, ideal use cases, and performance benchmarks, and it provides a demo.
Amazon ECS Anywhere is a new capability that enables deployment of Amazon ECS tasks on customer-managed infrastructure. This session covers the evolution of Amazon ECS over time, including new on-premises capabilities to manage your hybrid footprint using a common fully managed control plane and API. You learn some foundational technical details and important tenets that AWS is using to design these capabilities, and the session ends with a short demo of Amazon ECS Anywhere.
Amazon Aurora Serverless is an on-demand, auto scaling configuration of Amazon Aurora that automatically adjusts database capacity based on application demand. With Amazon Aurora Serverless v2, you can now scale database workloads instantly from hundreds to hundreds of thousands of transactions per second and adjust capacity in fine-grained increments to provide just the right amount of database resources. This session dives deep into Aurora Serverless v2 and shows how it can help you operate even the most demanding database workloads worry-free.
Apple delights its customers with stunning devices like iPhones, iPads, MacBooks, Apple Watches, and Apple TVs, and developers want to create applications that run on iOS, macOS, iPadOS, tvOS, watchOS, and Safari. In this session, learn how Amazon is innovating to improve the development experience for Apple applications. Come learn how AWS now enables you to develop, build, test, and sign Apple applications with the flexibility, scalability, reliability, and cost benefits of Amazon EC2.
When industrial equipment breaks down, this means costly downtime. To avoid this, you perform maintenance at regular intervals, which is inefficient and increases your maintenance costs. Predictive maintenance allows you to plan the required repair at an optimal time before a breakdown occurs. However, predictive maintenance solutions can be challenging and costly to implement given the high costs and complexity of sensors and infrastructure. You also have to deal with the challenges of interpreting sensor data and accurately detecting faults in order to send alerts. Come learn how Amazon Monitron helps you solve these challenges by offering an out-of-the-box, end-to-end, cost-effective system.
As data grows, we need innovative approaches to get insight from all the information at scale and speed. AQUA (Advanced Query Accelerator) is a new hardware-accelerated cache that uses purpose-built analytics processors to deliver up to 10 times better query performance than other cloud data warehouses by automatically boosting certain types of queries. It’s available in preview on Amazon Redshift RA3 nodes in select Regions at no extra cost and without any code changes. Attend this session to understand how AQUA works and which analytic workloads will benefit the most from AQUA.
Figuring out if a part has been manufactured correctly, or if a machine part is damaged, is vitally important. Making this determination usually requires people to inspect objects, which can be slow and error-prone. Some companies have applied automated image analysis—machine vision—to detect anomalies. While useful, these systems can be very difficult and expensive to maintain. In this session, learn how Amazon Lookout for Vision can automate visual inspection across your production lines in a few days. Get started in minutes, and perform visual inspection and identify product defects using as few as 30 images, with no machine learning (ML) expertise required.
AWS Proton is a new service that enables infrastructure operators to create and manage common container-based and serverless application stacks and automate provisioning and code deployments through a self-service interface for their developers. Learn how infrastructure teams can empower their developers to use serverless and container technologies without them first having to learn, configure, and maintain the underlying resources.
Migrating applications from SQL Server to an open-source compatible database can be time-consuming and resource-intensive. Solutions such as the AWS Database Migration Service (AWS DMS) automate data and database schema migration, but there is often more work to do to migrate application code. This session introduces Babelfish for Aurora PostgreSQL, a new translation layer for Amazon Aurora PostgreSQL that enables Amazon Aurora to understand commands from applications designed to run on Microsoft SQL Server. Learn how Babelfish for Aurora PostgreSQL works to reduce the time, risk, and effort of migrating Microsoft SQL Server-based applications to Aurora, and see some of the capabilities that make this possible.
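One concrete example of the dialect gap Babelfish bridges: T-SQL's `SELECT TOP n` has no direct PostgreSQL equivalent and corresponds to `LIMIT n`. The toy rewriter below only illustrates the kind of difference involved; Babelfish itself works at the wire-protocol and parser level so the application code does not change at all.

```python
import re

# Hypothetical illustration of ONE dialect difference Babelfish handles
# (this is NOT how Babelfish works internally): T-SQL "SELECT TOP n ..."
# maps to PostgreSQL "SELECT ... LIMIT n".
def tsql_top_to_limit(sql):
    m = re.match(r"(?i)SELECT\s+TOP\s+(\d+)\s+(.*)", sql.strip())
    if not m:
        return sql
    return f"SELECT {m.group(2)} LIMIT {m.group(1)}"

print(tsql_top_to_limit("SELECT TOP 10 name FROM customers ORDER BY name"))
# -> SELECT name FROM customers ORDER BY name LIMIT 10
```

Multiply small differences like this across stored procedures, data types, and functions, and the value of an automatic translation layer becomes clear.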
Over the past decade, we’ve witnessed a digital transformation in healthcare, with organizations capturing huge volumes of patient information. But this data is often unstructured and difficult to extract, with information trapped in clinical notes, insurance claims, recorded conversations, and more. In this session, explore how the new Amazon HealthLake service removes the heavy lifting of organizing, indexing, and structuring patient information to provide a complete view of each patient’s health record in the FHIR standard format. Come learn how to use prebuilt machine learning models to analyze and understand relationships in the data, identify trends, and make predictions, ultimately delivering better care for patients.
When business users want to ask new data questions that are not answered by existing business intelligence (BI) dashboards, they rely on BI teams to create or update data models and dashboards, which can take several weeks to complete. In this session, learn how Merlin lets users simply enter their questions in the Merlin search bar and get answers in seconds. Merlin uses natural language processing and semantic data understanding to make sense of the data. It extracts business terminology and intent from users’ questions, retrieves the corresponding data from the source, and returns the answer in the form of a number, chart, or table in Amazon QuickSight.
Until now, when developers published images publicly for anyone to find and use—whether for free or under license—they had to make copies of common images and upload them to public websites and registries that do not offer the same availability commitment as Amazon ECR. This session explores a new Amazon public registry, Amazon ECR Public, built with AWS experience operating Amazon ECR. Here, developers can share georeplicated container software worldwide for anyone to discover and download. Developers can quickly publish public container images with a single command. Learn how anyone can browse and pull container software for use in their own applications.
Industrial companies are constantly working to avoid unplanned downtime due to equipment failure and to improve operational efficiency. Over the years, they have invested in physical sensors, data connectivity, data storage, and dashboarding to monitor equipment and get real-time alerts. Current data analytics methods include single-variable thresholds and physics-based modeling approaches, which are not effective at detecting certain failure types and operating conditions. In this session, learn how Amazon Lookout for Equipment uses data from your sensors to detect abnormal equipment behavior so that you can take action before machine failures occur and avoid unplanned downtime.
In this session, learn how Contact Lens for Amazon Connect enables your contact center supervisors to understand the sentiment of customer conversations, identify call drivers, evaluate compliance with company guidelines, and analyze trends. This can help supervisors train agents, replicate successful interactions, and identify crucial company and product feedback. Your supervisors can conduct fast full-text search on all transcripts to quickly troubleshoot customer issues. With real-time capabilities, you can get alerted to issues during live customer calls and deliver proactive assistance to agents while calls are in progress, improving customer satisfaction. Join this session to see how real-time ML-powered analytics can power your contact center.
AWS Local Zones places compute, storage, database, and other select services closer to locations where no AWS Region exists today. Last year, AWS launched the first two Local Zones in Los Angeles, and organizations are using Local Zones to deliver applications requiring ultra-low-latency compute. AWS is launching Local Zones in 15 metro areas to extend access across the contiguous US. In this session, learn how you can run latency-sensitive portions of applications local to end users and resources in a specific geography, delivering single-digit millisecond latency for use cases such as media and entertainment content creation, real-time gaming, reservoir simulations, electronic design automation, and machine learning.
Your customers expect a fast, frictionless, and personalized customer service experience. In this session, learn about Amazon Connect Customer Profiles—a new unified customer profile capability to allow agents to provide more personalized service during a call. Customer Profiles automatically brings together customer information from multiple applications, such as Salesforce, Marketo, Zendesk, ServiceNow, and Amazon Connect contact history, into a unified customer profile. With Customer Profiles, agents have the information they need, when they need it, directly in their agent application, resulting in improved customer satisfaction and reduced call resolution times (by up to 15%).
Preparing training data can be tedious. Amazon SageMaker Data Wrangler provides a faster, visual way to aggregate and prepare data for machine learning. In this session, learn how to use SageMaker Data Wrangler to connect to data sources and use prebuilt visualization templates and built-in data transforms to streamline the process of cleaning, verifying, and exploring data without having to write a single line of code. See a demonstration of how SageMaker Data Wrangler can be used to perform simple tasks as well as more advanced use cases. Finally, see how you can take your data preparation workflows into production with a single click.
To provide access to critical resources when needed and also limit the potential financial impact of an application outage, a highly available application design is critical. In this session, learn how you can use Amazon CloudWatch and AWS X-Ray to increase the availability of your applications. Join this session to learn how AWS observability solutions can help you proactively detect, efficiently investigate, and quickly resolve operational issues. All of which help you manage and improve your application’s availability.
Security is critical for your Kubernetes-based applications. Join this session to learn about the security features and best practices for Amazon EKS. This session covers encryption and other configurations and policies to keep your containers safe.
Don’t miss the AWS Partner Keynote with Doug Yeum, head of Global Partner Organization; Sandy Carter, vice president, Global Public Sector Partners and Programs; and Dave McCann, vice president, AWS Migration, Marketplace, and Control Services, to learn how AWS is helping partners modernize their businesses to help their customers transform.
Join Swami Sivasubramanian for the first-ever Machine Learning Keynote, live at re:Invent. Hear how AWS is freeing builders to innovate on machine learning with the latest developments in AWS machine learning, demos of new technology, and insights from customers.
Join Peter DeSantis, senior vice president of Global Infrastructure and Customer Support, to learn how AWS has optimized its cloud infrastructure to run some of the world’s most demanding workloads and give your business a competitive edge.
Join Dr. Werner Vogels at 8:00AM (PST) as he goes behind the scenes to show how Amazon is solving today’s hardest technology problems. Based on his experience working with some of the largest and most successful applications in the world, Dr. Vogels shares his insights on building truly resilient architectures and what that means for the future of software development.
Cloud architecture has evolved over the years as the nature of adoption has changed and the level of maturity in our thinking continues to develop. In this session, Rudy Valdez, VP of Solutions Architecture and Training & Certification, walks through how cloud architecture continues to evolve and what that means for your organization.
Organizations around the world are minimizing operations and maximizing agility by developing with serverless building blocks. Join David Richardson, VP of Serverless, for a closer look at the serverless programming model, including event-driven architectures.
AWS edge computing solutions provide infrastructure and software that move data processing and analysis as close to the endpoint where data is generated as required by customers. In this session, learn about new edge computing capabilities announced at re:Invent and how customers are using purpose-built edge solutions to extend the cloud to the edge.
Topics include simplifying container deployment, migrating legacy workloads using containers, optimizing costs for containerized applications, container architectural choices, and more.
Do you need to know what’s happening with your applications that run on Amazon EKS? In this session, learn how you can combine open-source tools, such as Prometheus and Grafana, with Amazon CloudWatch using CloudWatch Container Insights. Come to this session for a demo of Prometheus metrics with Container Insights.
The hard part is done. You and your team have spent weeks poring over pull requests, building microservices and containerizing them. Congrats! But what do you do now? How do you get those services on AWS? How do you manage multiple environments? How do you automate deployments? AWS Copilot is a new command line tool that makes building, developing, and operating containerized applications on AWS a breeze. In this session, learn how AWS Copilot can help you and your team manage your services and deploy them to production, safely and delightfully.
Five years ago, if you talked about containers, the assumption was that you were running them on a Linux VM. Fast forward to today, and now that assumption is challenged—in a good way. Come to this session to explore the best data plane option to meet your needs. This session covers the advantages of different abstraction models (Amazon EC2 or AWS Fargate), the operating system (Linux or Windows), the CPU architecture (x86 or Arm), and the commercial model (Spot or On-Demand Instances).
In this session, learn how the Commonwealth Bank of Australia (CommBank) built a platform to run containerized applications in a regulated environment and then replicated it across multiple departments using Amazon EKS, AWS CDK, and GitOps. This session covers how to manage multiple multi-team Amazon EKS clusters across multiple AWS accounts while ensuring compliance and observability requirements and integrating Amazon EKS with AWS Identity and Access Management, Amazon CloudWatch, AWS Secrets Manager, Application Load Balancer, Amazon Route 53, and AWS Certificate Manager.
Amazon EKS is a fully managed service that makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Join this session to learn about how Verizon runs its core applications on Amazon EKS at scale. Verizon also discusses how it worked with AWS to overcome several post-Amazon EKS migration challenges and ensured that the platform was robust.
Containers have helped revolutionize modern application architecture. While managed container services have enabled greater agility in application development, coordinating safe deployments and maintainable infrastructure has become more important than ever. This session outlines how to integrate CI/CD best practices into deployments of your Amazon ECS and AWS Fargate services using pipelines and the latest in AWS developer tooling.
With Amazon ECS, you can run your containerized workloads securely and with ease. In this session, learn how to utilize the full spectrum of Amazon ECS security features and its tight integrations with AWS security features to help you build highly secure applications.
Do you have to budget your spend for container workloads? Do you need to be able to optimize your spend in multiple services to reduce waste? If so, this session is for you. It walks you through how you can use AWS services and configurations to improve your cost visibility. You learn how you can select the best compute options for your containers to maximize utilization and reduce duplication. This combined with various AWS purchase options helps you ensure that you’re using the best options for your services and your budget.
You have a choice of approach when it comes to provisioning compute for your containers. Some users prefer to have more direct control of their instances, while others would rather do without the operational heavy lifting. AWS Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. This session explores the benefits and considerations of running on Fargate or directly on Amazon EC2 instances. You hear about new and upcoming features and learn how Amenity Analytics benefits from the serverless operational model.
Are you confused by the many choices of containers services that you can run on AWS? This session explores all your options and the advantages of each. Whether you are just beginning to learn Docker or are an expert with Kubernetes, join this session to learn how to pick the right services that would work best for you.
Leading containers migration and modernization initiatives can be daunting, but AWS is making it easier. This session explores architectural choices and common patterns, and it provides real-world customer examples. Learn about core technologies to help you build and operate container environments at scale. Discover how abstractions can reduce the pain for infrastructure teams, operators, and developers. Finally, hear the AWS vision for how to bring it all together with improved usability for more business agility.
As the number of services grow within an application, it becomes difficult to pinpoint the exact location of errors, reroute traffic after failures, and safely deploy code changes. In this session, learn how to integrate AWS App Mesh with Amazon ECS to export monitoring data and implement consistent communications control logic across your application. This makes it easy to quickly pinpoint the exact locations of errors and automatically reroute network traffic, keeping your container applications highly available and performing well.
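The "automatically reroute network traffic" behavior rests on weighted routing between service versions. The sketch below is a hypothetical illustration of how a mesh proxy might apply route weights (for example, draining a failing target or canarying a new version); it is not the App Mesh implementation, which you configure declaratively via routes on a virtual router.

```python
# Illustrative sketch of weighted routing (not the AWS App Mesh
# implementation): map a fraction of traffic in [0, 1) onto targets in
# proportion to their configured weights.
def route(weights, fraction):
    """weights: {target: weight}; fraction in [0, 1) selects a target."""
    total = sum(weights.values())
    cumulative = 0.0
    for target, weight in sorted(weights.items()):
        cumulative += weight / total
        if fraction < cumulative:
            return target
    return target  # guard against floating-point edge cases

weights = {"v1": 90, "v2": 10}   # a 90/10 canary split
print(route(weights, 0.50))  # lands in v1's 90% share
print(route(weights, 0.95))  # lands in v2's 10% share
```

Shifting the weights to {"v1": 0, "v2": 100} completes the cutover without redeploying either service, which is the operational pattern the session describes.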
Enterprises are continually looking to develop new applications using container technologies and leveraging modern CI/CD tools to automate their software delivery lifecycles. This session highlights the types of applications and associated factors that make a candidate suitable to be containerized. It also covers best practices that can be considered as you embark on your modernization journey.
Because of its security, reliability, and scalability capabilities, Amazon Elastic Kubernetes Service (Amazon EKS) is used by organizations in their most sensitive and mission-critical applications. This session focuses on how Amazon EKS networking works with an Amazon VPC and how to expose your Kubernetes application using Elastic Load Balancing load balancers. It also looks at options for more efficient IP address utilization.
Network design is a critical component in your large-scale migration journey. This session covers some of the real-world networking challenges faced when migrating to the cloud. You learn how to overcome these challenges by diving deep into topics such as establishing private connectivity to your on-premises data center and accelerating data migrations using AWS Direct Connect/Direct Connect gateway, centralizing and simplifying your networking with AWS Transit Gateway, and extending your private DNS into the cloud. The session also includes a discussion of related best practices.
5G will be the catalyst for the next industrial revolution. In this session, come learn about key technical use cases for different industry segments that will be enabled by 5G and related technologies, and hear about the architectural patterns that will support these use cases. You also learn about AWS-enabled 5G reference architectures that incorporate AWS services.
AWS offers a breadth and depth of machine learning (ML) infrastructure you can use through either a do-it-yourself approach or a fully managed approach with Amazon SageMaker. In this session, explore how to choose the proper instance for ML inference based on latency and throughput requirements, model size and complexity, framework choice, and portability. Join this session to compare and contrast compute-optimized CPU-only instances, such as Amazon EC2 C4 and C5; high-performance GPU instances, such as Amazon EC2 G4 and P3; cost-effective variable-size GPU acceleration with Amazon Elastic Inference; and highest performance/cost with Amazon EC2 Inf1 instances powered by custom-designed AWS Inferentia chips.
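One way to frame the instance comparison is cost per million inferences, which folds hourly price and throughput into a single number. The figures below are HYPOTHETICAL placeholders, not current AWS pricing or benchmark results; check the pricing pages and benchmark your own model before deciding.

```python
# Worked example with HYPOTHETICAL prices and throughputs (not real AWS
# pricing or benchmarks): cost per million inferences folds $/hour and
# throughput into one comparable number.
def cost_per_million(price_per_hour, inferences_per_second):
    return price_per_hour / (inferences_per_second * 3600) * 1_000_000

candidates = {
    "cpu-only (c5-class)": cost_per_million(0.17, 50),
    "gpu (g4-class)":      cost_per_million(0.526, 400),
    "inferentia (inf1)":   cost_per_million(0.228, 500),
}
for name, cost in sorted(candidates.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:.3f} per million inferences")
```

Latency requirements can override this ranking: a CPU instance that is cheapest per inference may still miss a 10 ms latency budget that a GPU or Inferentia instance meets.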
When it comes to architecting your workloads on VMware Cloud on AWS, it is important to understand design patterns and best practices. Come join this session to learn how you can build well-architected cloud-based solutions for your VMware workloads. This session covers infrastructure designs with native AWS service integrations across compute, networking, storage, security, and operations. It also covers the latest announcements for VMware Cloud on AWS and how you can use these new features in your current architecture.
One of the most critical phases of executing a migration is moving traffic from your existing endpoints to your newly deployed resources in the cloud. This session discusses practices and patterns that can be leveraged to ensure a successful cutover to the cloud. The session covers preparation, tools and services, cutover techniques, rollback strategies, and engagement mechanisms to ensure a successful cutover.
AWS DeepRacer is the fastest way to get rolling with machine learning. Developers of all skill levels can get hands-on, learning how to train reinforcement learning models in a cloud-based 3D racing simulator. Attend a session to get started, and then test your skills by competing for prizes and glory in an exciting autonomous car racing experience throughout re:Invent!
AWS DeepRacer gives you an interesting and fun way to get started with reinforcement learning (RL). RL is an advanced machine learning (ML) technique that takes a very different approach to training models than other ML methods. Its super power is that it learns very complex behaviors without requiring any labeled training data, and it can make short-term decisions while optimizing for a longer-term goal. AWS DeepRacer makes it fast and easy to build models in Amazon SageMaker and train, test, and iterate quickly and easily on the track in the AWS DeepRacer 3D racing simulator.
As more organizations are looking to migrate to the cloud, Red Hat OpenShift Service offers a proven, reliable, and consistent platform across the hybrid cloud. Red Hat and AWS recently announced a fully managed joint service that can be deployed directly from the AWS Management Console and can integrate with other AWS Cloud-native services. In this session, you learn about this new service, which delivers production-ready Kubernetes that many enterprises use on premises today, enhancing your ability to shift workloads to the AWS Cloud and making it easier to adopt containers and deploy applications faster. This presentation is brought to you by Red Hat, an AWS Partner.
Event-driven architecture can help you decouple services and simplify dependencies as your applications grow. In this session, you learn how Amazon EventBridge provides new options for developers who are looking to gain the benefits of this approach.
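The decoupling comes from rules that match events against patterns rather than from point-to-point calls. The sketch below mimics the core of EventBridge-style pattern matching (a field with a list of values matches if the event's field equals any of them; nested objects match recursively); it is an illustration, not the service implementation.

```python
# Illustrative sketch of EventBridge-style event pattern matching (not the
# service implementation): a list means "equals any of these values", and
# nested dicts are matched recursively.
def matches(pattern, event):
    for key, cond in pattern.items():
        if key not in event:
            return False
        if isinstance(cond, dict):
            if not isinstance(event[key], dict) or not matches(cond, event[key]):
                return False
        elif event[key] not in cond:  # cond is a list of allowed values
            return False
    return True

pattern = {"source": ["aws.ec2"], "detail": {"state": ["stopped", "terminated"]}}
event = {"source": "aws.ec2",
         "detail": {"state": "stopped", "instance-id": "i-1234567890"}}
print(matches(pattern, event))  # -> True
```

Because producers only emit events and consumers only declare patterns, either side can change independently, which is the decoupling benefit the session covers.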
Amazon Timestream is a fast, scalable, and serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day at as little as one-tenth the cost of relational databases. In this session, dive deep on Amazon Timestream features and capabilities, including its serverless automatic scaling architecture, its storage tiering that simplifies your data lifecycle management, its purpose-built query engine that lets you access and analyze recent and historical data together, and its built-in time series analytics functions that help you identify trends and patterns in your data in near-real time.
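To make the "built-in time series analytics" concrete, the sketch below computes what a binning query (for example, Timestream's `bin(time, 1m)` grouping) produces: fixed time windows, each aggregated to one value. This is a pure-Python illustration with made-up data, not the Timestream query engine.

```python
from collections import defaultdict

# Illustrative sketch of time-series binning (what a query grouping by
# bin(time, 60s) computes), not the Timestream engine itself.
def bin_average(points, bin_seconds):
    """points: list of (timestamp_seconds, value) -> {bin_start: average}."""
    bins = defaultdict(list)
    for t, v in points:
        bins[t - t % bin_seconds].append(v)
    return {start: sum(vs) / len(vs) for start, vs in sorted(bins.items())}

cpu = [(0, 40.0), (20, 60.0), (65, 80.0), (100, 90.0)]  # hypothetical samples
print(bin_average(cpu, 60))  # -> {0: 50.0, 60: 85.0}
```

In Timestream you express this in SQL and the serverless engine runs it across recent (memory-tier) and historical (magnetic-tier) data together.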
Savings Plans is a flexible pricing model that allows you to save up to 72 percent on Amazon EC2, AWS Fargate, and AWS Lambda. Many AWS users have adopted Savings Plans since its launch in November 2019 for its simplicity, savings, ease of use, and flexibility. In this session, learn how many organizations use Savings Plans to drive more migrations and business outcomes. Hear from Comcast on their compute transformation journey to the cloud and how it started with Reserved Instances (RIs). As their cloud usage evolved, they adopted Savings Plans to drive business outcomes such as new architecture patterns.
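The arithmetic behind the headline number is straightforward: the plan replaces the On-Demand rate with a lower committed rate. The rates below are HYPOTHETICAL round numbers for illustration; actual discounts depend on the plan type, term, and payment option, with 72 percent being the advertised maximum.

```python
# Worked example with HYPOTHETICAL rates (check current AWS pricing):
# a Savings Plan applies a reduced effective hourly rate to committed usage.
on_demand_rate = 0.10        # $/hour for a hypothetical instance
savings_plan_rate = 0.028    # $/hour after a 72% maximum discount
hours_per_year = 24 * 365

discount = 1 - savings_plan_rate / on_demand_rate
annual_savings = (on_demand_rate - savings_plan_rate) * hours_per_year
print(f"discount: {discount:.0%}, annual savings: ${annual_savings:.2f}")
```

Unlike instance-scoped RIs, the same hourly commitment can flexibly cover EC2, Fargate, and Lambda usage, which is why Comcast's journey in the session moves from RIs to Savings Plans.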
The ability to deploy only configuration changes, separate from code, means you do not have to restart the applications or services that use the configuration and changes take effect immediately. In this session, learn best practices used by teams within Amazon to rapidly release features at scale. Learn about a pattern that uses AWS CodePipeline and AWS AppConfig that will allow you to roll out application configurations without taking applications out of service. This will help you ship features faster across complex environments or regions.
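The pattern described above usually has the running application poll AppConfig for the latest configuration document and reparse it, with no restart. The parsing half is pure and runnable below; the fetch via the `appconfigdata` data plane is sketched in comments, and the application, environment, and profile identifiers are hypothetical.

```python
import json

def parse_feature_flags(raw: bytes) -> dict:
    """Parse an AppConfig JSON configuration document into a dict.

    In production the bytes come from the AppConfig data plane, e.g.:
      client = boto3.client("appconfigdata")
      token = client.start_configuration_session(
          ApplicationIdentifier="my-app",          # hypothetical IDs
          EnvironmentIdentifier="prod",
          ConfigurationProfileIdentifier="flags",
      )["InitialConfigurationToken"]
      raw = client.get_latest_configuration(
          ConfigurationToken=token)["Configuration"].read()
    Because only configuration changes, no redeploy or restart is needed.
    """
    return json.loads(raw.decode("utf-8"))

flags = parse_feature_flags(b'{"new-checkout": {"enabled": true}}')
if flags["new-checkout"]["enabled"]:
    print("new checkout flow on")
```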
I watched (binged) the A Cloud Guru course in two days and did the 6 practice exams over a week. I originally was only getting 70%’s on the exams, but continued doing them on my free time (to the point where I’d have 15 minutes and knock one out on my phone lol) and started getting 90%’s. – A mix of knowledge vs memorization tbh. Just make sure you read why your answers are wrong.
I don’t really have a huge IT background, although I will note I work in a DevOps environment (1 1/2 years), so I do use AWS to host our infrastructure. However, the exam is very high level compared to what I do and the services I use. I’m fairly certain that with zero knowledge or experience, someone could pass this within two weeks. AWS is also currently promoting a “get certified” challenge and is offering 50% off.
Went through the entire CloudAcademy course. Most of the info went out the other ear. Got a 67% on their final exam. Took the ExamPro free exam, got 69%.
Was going to take it last Saturday, but I bought TutorialDojo’s exams on Udemy. Did one Friday night, got a 50% and rescheduled it a week later to today Sunday.
Took 4 total TD exams. Got a 50%, 54%, 67%, and 64%. Even up until last night I hated the TD exams with a passion; I thought they were covering way too much stuff that didn’t even pop up in the study guides I read. Their wording for some problems was also atrocious. But looking back, the bulk of my “studying” was going through their pretty well written explanations, and their links to the white papers let me know what and where to read.
Not sure what score I got yet on the exam. As someone who always hated testing, I’m pretty proud of myself. I also had to take a dump really bad starting at around question 25. Thanks to TutorialsDojo Jon Bonso for completely destroying my confidence before the exam, forcing me to up my game. It’s better to walk in way over prepared than underprepared.
I would like to thank this community for the recommendations about exam preparation. It was wayyyy easier than I expected (also way easier than TD practice exams’ scenario-based questions; a lot less wordy on the real exam). I felt so unready before the exam that I rescheduled it twice. Quick tip: if you have limited time to prepare for this exam, I would recommend scheduling the exam beforehand so that you don’t procrastinate fully.
Resources:
-Stephane’s course on Udemy (I have seen people saying to skip the hands-on videos, but I found them extremely helpful for understanding most of the concepts, so try not to skip them)
-Tutorials Dojo practice exams (I did only 3.5 practice tests out of 5 and already got 8-10 EXACTLY worded questions on my real exam)
Previous Aws knowledge:
-Very little to no experience (deployed my group’s app to the cloud via Elastic Beanstalk in college; had zero clue at the time about what I was doing, but had clear guidelines)
Preparation duration: 2 weeks (honestly, watched videos for 12 days and then went over the summary and practice tests on the last two days)
I used Stephane Maarek on Udemy. Purchased his course and the 6 Practice Exams. Also got Neal Davis’ 500 practice questions on Udemy. I took Stephane’s class over 2 days, then spent the next 2 weeks going over the tests (3–4 per day) till I was consistently getting over 80% – passed my exam with an 882.
What an adventure. I’d never really given much thought to getting a cert until one day it just dawned on me that it’s one of the few credentials that are globally accepted. So you can approach any company and basically prove you know what’s up on AWS 😀
Passed with two weeks of prep (after work and weekends)
This was just a nice structured presentation that also gives you the powerpoint slides plus cheatsheets and a nice overview of what is said in each video lecture.
Udemy – AWS Certified Cloud Practitioner Practice Exams, created by Jon Bonso, Tutorials Dojo
These are some good prep exams, they ask the questions in a way that actually make you think about the related AWS Service. With only a few “Bullshit! That was asked in a confusing way” questions that popped up.
I took CCP 2 days ago and got the pass notification right after submitting the answers. About 3 hours later I got an email from Credly for the badge. This morning I got an official email from AWS congratulating me on passing; the score was much higher than I expected. I took Stephane Maarek’s CCP course and his 6 demo exams, then Neal Davis’ 500 questions as well. Across all the demo exams, I failed one and passed the rest with scores of about 700-800. But on the real exam, I got 860. The questions on the real exam are kind of less verbose IMO, but I don’t truly agree with some people I see on this sub saying that they are easier. Just a little bit of sharing, now I’ll find something to continue ^^
Passed the exam! Spent 25 minutes answering all the questions. Another 10 to review. I might come back and update this post with my actual score.
Background
– A year of experience working with AWS (e.g., EC2, Elastic Beanstalk, Route 53, and Amplify).
– Cloud development on AWS is not my strong suit. I just Google everything, so my knowledge is very spotty. Less so now since I studied for this exam.
Study stats
– Spent three weeks studying for the exam.
– Studied an hour to two every day.
– Solved 800-1000 practice questions.
– Took 450 screenshots of practice questions and technology/service descriptions as reference notes to quickly sift through on my phone and computer for review. Screenshots were of questions that I either didn’t know, knew but was iffy on, or those I believed I’d easily forget.
– Made 15-20 pages of notes. Chill. Nothing crazy. This is on A4 paper. Free-form note taking. With big diagrams. Around 60-80 words per page.
– I was getting low-to-mid 70%s on Neal Davis’s and Stephane Maarek’s practice exams. Highest score I got was an 80%.
– I got a 67(?)% on one of Stephane Maarek’s exams. The only sub-70% I ever got on any practice test. I got slightly anxious. But given how much harder Maarek’s exams are compared to the actual exam, the anxiety was undue.
– Finishing the practice exams on time was never a problem for me. I would finish all of them comfortably within 35 minutes.
Resources used
– AWS Cloud Practitioner Essentials on the AWS Training and Certification Portal
– AWS Certified Cloud Practitioner Practice Tests (Book) by Neal Davis
– 6 Practice Exams | AWS Certified Cloud Practitioner CLF-C01 by Stephane Maarek*
– Certified Cloud Practitioner Course by Exam Pro (Paid Version)**
– One or two free practice exams found by a quick Google search
**Regarding Exam Pro: I went through about 40% of the video lectures. I went through all the videos in the first few sections but felt that watching the lectures was too slow and laborious even at 1.5-2x speed. (The creator, for the most part, reads off of the slides, adding brief comments here and there.) So, I decided to only watch the video lectures for sections I didn’t have a good grasp on. (I believe the video lectures provided in the course are just split versions of the full-length course available for free on YouTube under the freeCodeCamp channel.) The online course provides five practice exams. I did not take any of them.
*Regarding Stephane Maarek: I only took his practice exams. I did not take his study guide course.
Notes
– My study regimen (i.e., an hour to two every day for three weeks) was overkill.
– The questions on the practice exams created by Neal Davis and Stephane Maarek were significantly harder than those on the actual exam. I believe I could’ve passed without touching any of these resources.
– I retook one or two practice exams out of the 10+ I’ve taken. I don’t think there’s a need to retake the exams as long as you are diligent about studying the questions and underlying concepts you got wrong. I reviewed all the questions I missed on every practice exam the day before.
What would I do differently?
– Focus on practice tests only. No video lectures.
– Focus on the Technology domain. You can intuit your way through questions in the other domains.
I thank you all for helping me through this process! Couldn’t have done it without all of the recommendations and guidance on this page.
Background: I am a back-end developer that works 12 hours a day for corporate America, so no time to study (or do anything) but I made it work.
Could I have probably gone for SAA first? Yeah, but I wanted to prove to myself that I could do it. I studied for about a month. I used Maarek’s Udemy course at 1.5x speed and I couldn’t recommend it more. I also used his practice exams. I’ll be honest, I took 5 practice exams and somehow managed to fail every single one in the mid 60’s lol. Cleared the exam with an 800. The practice exams are WAY harder.
My 2 cents on must knows:
AWS Shared Security Model (who owns what)
Everything Billing (EC2 instance, S3, different support plans)
I had a few ML questions that caught me off guard
VPC concepts – i.e. subnets, NACL, Transit Gateway
I studied solidly for two weeks, starting with Tutorials Dojo (which was recommended somewhere on here). I turned all of their vocabulary words and end of module questions into note cards. I did the same with their final assessment and one free exam.
During my second week, I studied the cards for anywhere from one to two hours a day, and I’d randomly watch videos on common exam questions.
The last thing I did was watch a 3 hr long video this morning that walks you through setting up AWS Instances. The visual of setting things up filled in a lot of holes.
I had some PSI software problems, and ended up getting started late. I was pretty dejected towards the end of the exam, and was honestly (and pleasantly) surprised to see that I passed.
Hopefully this helps someone. Keep studying and pushing through – if you know it, you know it. Even if you have a bad start. Cheers 🍻
I am a Linux Systems Engineer with over 15 years of experience as a Linux and Unix systems administrator and on-prem infrastructure engineer. My AWS knowledge is limited to basic EC2, IAM and S3. Initially I was studying for the AWS Certified Cloud Practitioner certification but I just couldn't keep from falling asleep. Too much information about too many products I know nothing about. I just don't learn like that. I am self-taught; I learn by doing things. I may not necessarily know every detail of the tech I am working with, but I can get stuff done. Given what I just said, would it be easier for me to go for the AWS Architect Associate or SysOps Administrator certification? Thanks submitted by /u/nousewindows
Passed "AWS Security" today with a little above the minimum: 780. That was a really hard exam; I did not expect it to be so hard after SA Pro. I used Adrian Cantrill's course, Tutorials Dojo, and the AWS Twitch security podcast. Thanks to paper_recluse for his list: EKS/ECS logging, which no one mentioned anywhere else. In addition to his list, pay attention to the Encryption SDK, RAM (resource policies), and all sorts of policy evaluation (SCPs especially). And yes, take your time; more than 1 month is needed to pass this exam without hesitation. submitted by /u/Wide-Answer-2789
Today Amazon CodeCatalyst announces support for approval gates within a workflow. Workflows provide an automated procedure for building, testing, and deploying code as part of a continuous integration and continuous delivery (CI/CD) system. Approval gates pause the workflow run at the gate so that a user can validate whether it should be allowed to proceed.
The Amazon Time Sync Service now supports clock synchronization within microseconds of UTC on 87 additional Amazon Elastic Compute Cloud (Amazon EC2) instance types in supported regions, including all C7i, M7i, R7i, C7a, M7a, R7a, and M7g instances.
Hello all, I hope you can help me with this question. I have a fundamental, practical question about the DOP-C02 certification. I would like to know how many of you and your enterprises use CodeCommit, CodeDeploy, and CodePipeline vs. different tools (GitHub, Jenkins, etc.), and CloudFormation vs. other IaC tools (Terraform/Ansible). Currently, where I work we do not use any of the AWS options (CodeCommit, CodeDeploy, CodePipeline, or CloudFormation), and from what I have seen at re:Invent, many tools replace these, yet it seems that everything revolves around them. I would appreciate it if you could share your thoughts. I really appreciate any help you can provide. submitted by /u/Herrmadbeef
Hello community! I'm writing to ask for a recommendation. I'm breaking into a new career in cloud computing (Cloud Engineer) but I have no idea where to start. I checked online and YouTube, but the resources are overwhelming. I'd appreciate it if someone could recommend a roadmap that will get me ready for an entry-level cloud engineer job in a few months. Thanks submitted by /u/Saheed-Toyin
Thanks to everyone who recommended their study resources here. A little background info: I am a computer science graduate who graduated in May of 2023 but found it hard to find a job due to the market. I decided that I would get a Solutions Architect Associate certification while waiting for the market to open up.

This is how I studied: In January, I began watching Stephane Maarek's course on Udemy. I put in maybe 30 minutes every other day watching the course videos until I finally finished everything around late March. I recommend doing the hands-on activities up until you reach the section on S3. After that, I realized the hands-on videos aren't really needed for passing the exam. For the most part, I was able to get away with 2x-speeding the videos and replaying something if Stephane's voice was too fast. By the time I reached the end of the course, I decided to take the practice test. I did it with internet assistance to look things up and ended up getting a 58%. I was kinda bummed out but realized I needed to fill in my knowledge gaps, so I ended up buying Jon Bonso's practice tests on TD.

Honestly, doing the TD tests was frustrating at first because I kept getting so many questions wrong. I felt like I knew nothing. To be honest, most of the time I got questions right, it was because I simply memorized the answer from a previous run. This wasn't necessarily a bad thing, though; it allowed me to internalize the concepts by repeatedly seeing the same questions over and over. I first started by completing all four section-based exams on TD. I did not track my scores, but they were not good. I would probably get 2 out of every 5 questions right. Every time I did not know the answer or got it wrong, I would write down the concept and the answer on a notes sheet. Review mode was also very helpful. I only ended up taking the first four review-mode exams out of the seven because I was in a time crunch and needed to take the exam soon.

I ended up averaging around 70% on all of them by the time I was ready to take the exam, so I was a bit nervous that I was going to fail. The day before I took the exam, I reviewed my notes sheet and did topic-based questions on TD. Overall, the exam was easier compared to the TD practice tests. I would say it falls in the middle ground between Maarek's practice tests and TD's. Lots of questions about encryption. There was no single question where I felt overly stumped. I didn't feel like I scored horribly on the exam, but I did have around 10 "second guess" questions, on which I always stuck with my first answer choice. I felt like I scored 75% on the exam, and that's exactly what I got. Barely passed, but it is what it is; I'm certified now! Maarek's course gives a great overview of the AWS services, and the TD practice tests will show you how these services relate. If you are feeling like you aren't ready for the exam like I was, just give it a shot, you never know 😉 submitted by /u/GNFit
Recently passed, so I'm paying the anecdote tax.

Prep: A Cloud Guru (got a license through my employer). Worked through all of the videos over a week or two; didn't touch the demos or hands-on labs. Then Maarek + Singh's practice exams: used a Udemy free trial and ran through these over a week, one per night. Bonso's practice exams on Tutorials Dojo for the last week. After the ACG videos, I found the site for a couple practice questions and realized ACG didn't cover some of the topics adequately at all. Sought out Maarek's stuff since it helped me pass the SAA-C02 a while ago, and it helped fill in some of the blanks. I was going to wing the exam after that, but the wait-two-weeks retake clause made me push it back just to drill things in. Bought Bonso's practice exams on a whim, and those tied everything together.

TD exam results (date completed, PDT):
Review Mode Set 5: 80% (Apr. 19, 12:58pm)
Review Mode Set 4: 81.54% (Apr. 18, 1:19am)
Timed Mode Set 1: 78.46% (Apr. 17, 3:51am)
Review Mode Set 3: 72.31% (Apr. 17, 2:35am)
Review Mode Set 2: 63.08% (Apr. 16, 4:03am)
Review Mode Set 1: 58.46% (Apr. 16, 2:15am)

Test day: If my anxiety was a CloudWatch metric, I'd be alarming all day. The test was at 2pm on Friday, the 19th. Did that last practice exam beforehand and was feeling confident again. Question format was similar to the practice exams. Felt like every other question or answer referenced Lambda or EventBridge, which had me questioning things. Overall, a straightforward experience. I skipped 9 questions and had 90 minutes left; reviewed those and some flagged questions for another ~20 minutes before doing the big submit. Finished the test at ~3:30pm, got the results email at about 7:30pm: 782! A pass is a pass.

Tips and takeaways: Don't bother with ACG until they update the course. At a minimum, know all of the services on the exam guide at the most surface level; just knowing the services alone should free up two wrong answers on most questions. Process of elimination is OP. If you're worried about the time the official exam will take, the practice exams from the resources I mentioned were a good indicator of how fast you could expect to work through the real problems. Topics that came up the most were EC2, EBS, ASG, RDS, VPC, Route 53, CloudFront, S3, CloudFormation, CloudWatch, CloudTrail, IAM, Lambda, EventBridge, and Organizations. No Beanstalk, EKS, or ECS questions. Still no labs; that should be the case for a while. submitted by /u/midhabits
Do forgive me for this seemingly stupid question. I applied for an IT role in February and have just, fortunately, received an offer. When I submitted my application back in February, I listed an AWS cloud certification on my CV. The certification expired in early April, so I wonder if this would cause any issues in the background check? The background check company is HireRight, and my IT role is actually Microsoft Azure specific. Thank you! submitted by /u/Valuable-Ad3229
I have been preparing for the exam with very little prior knowledge of AWS (basically only a little S3 and Lambda) but with a good amount of knowledge in development. For the past month and a half I have studied every four days for 6 hours, and I have only gotten through 1/3 of the materials (using Stephane Maarek’s course), which totals around 75 hours to cover 11 hours of the course. I have the time, but at this point I started wondering if I am being too thorough when going through the materials. For example, I memorize the number of read replicas that RDS can have attached, the name of the header that should be present in the request when enabling SSE-C encryption in S3, the number of points of presence that CloudFront has, etc. Is this something that could pop up on the exam or be helpful when doing cloud development, or am I overdoing it? Also, for context, it is my first cloud certificate. submitted by /u/nikita_lafinskiy
I decided to plan for getting certified as an AWS Developer Associate. But it's been 2 weeks and my motivation is fading day by day. I am not making any progress; I started learning from the AWS website (Skill Builder). I am a working professional who uses most of the services in my actual work, and this certification will really help me prove myself in the organization. But I can only give 1.5 to 2 hrs daily to learning. I need a definite plan that works best for me, because learning from the AWS website is not helping, and I am planning to take the exam at the end of May. That means I have at least 1 month for preparation. Please do suggest some study material. This will help me, thanks in advance!! 🙂 submitted by /u/Wild-Employment-6573
So for some reason, I'm unable to grasp this topic (S3 Block Public Access). I've gone over the AWS documentation multiple times and tried to watch YouTube videos, but the presentation of this topic from a use-case perspective is severely lacking. Most of my attempts to answer these questions come out wrong, implying I'm not understanding the crux of them. I've no idea WHY this is escaping me. Could someone help me break these down in layman's terms and use-case terms (esp. for SAP-C02)?

BlockPublicAcls: Setting this option to TRUE causes the following behavior: PUT Bucket acl and PUT Object acl calls fail if the specified access control list (ACL) is public. PUT Object calls fail if the request includes a public ACL. If this setting is applied to an account, then PUT Bucket calls fail if the request includes a public ACL.

IgnorePublicAcls: Setting this option to TRUE causes Amazon S3 to ignore all public ACLs on a bucket and any objects that it contains. This setting enables you to safely block public access granted by ACLs while still allowing PUT Object calls that include a public ACL (as opposed to BlockPublicAcls, which rejects PUT Object calls that include a public ACL). Enabling this setting doesn't affect the persistence of any existing ACLs and doesn't prevent new public ACLs from being set.

BlockPublicPolicy: Setting this option to TRUE for a bucket causes Amazon S3 to reject calls to PUT Bucket policy if the specified bucket policy allows public access. Setting this option to TRUE for a bucket also causes Amazon S3 to reject calls to PUT access point policy for all of the bucket's same-account access points if the specified policy allows public access.

RestrictPublicBuckets: Setting this option to TRUE restricts access to an access point or bucket with a public policy to only AWS service principals and authorized users within the bucket owner's account and access point owner's account. This setting blocks all cross-account access to the access point or bucket (except by AWS service principals), while still allowing users within the account to manage the access point. submitted by /u/titan1978
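A rough mental model for the four settings above: the two "Block" flags reject *new* public ACLs or policies at write time, while "Ignore" and "Restrict" neutralize *existing* public ACLs or policies at access time. The sketch below encodes that summary in comments and builds the configuration dict that `put_public_access_block` takes; the bucket name is hypothetical and the boto3 call is commented so the snippet runs without credentials.

```python
def public_access_block(block_acls=True, ignore_acls=True,
                        block_policy=True, restrict_buckets=True):
    """Build the PublicAccessBlockConfiguration for put_public_access_block.

    Rough intuition for the four flags:
      BlockPublicAcls       - reject *new* public ACLs being set
      IgnorePublicAcls      - treat *existing* public ACLs as if private
      BlockPublicPolicy     - reject *new* public bucket policies
      RestrictPublicBuckets - neutralize an *existing* public policy by
                              limiting access to the bucket owner's
                              account and AWS service principals
    """
    return {
        "BlockPublicAcls": block_acls,
        "IgnorePublicAcls": ignore_acls,
        "BlockPublicPolicy": block_policy,
        "RestrictPublicBuckets": restrict_buckets,
    }

cfg = public_access_block()
# With credentials configured (bucket name is hypothetical):
#   import boto3
#   boto3.client("s3").put_public_access_block(
#       Bucket="my-bucket", PublicAccessBlockConfiguration=cfg)
```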
Dear r/AWSCertifications community, I started preparing for the AWS Solutions Architect (Associate) certification 6-7 months ago. My study plan was the following: spend 4 hours a week watching the famous Udemy course from Stephane Maarek; take notes and build my own "preparation book"; just watch, without doing any practical exercises in AWS. The content is quite easy to follow and I don't face any learning difficulties. However, I stopped studying more or less at the middle of the course and I'm having a hard time resuming the preparation. I also felt I was not remembering things learned a few weeks back. Furthermore, the course seems like a never-ending story; it asks you to study so many details that I believe are not useful to memorize. I was curious about the study plan adopted by folks who got this certification before. What was your study plan? Do you have any tips to share to make it work without a big effort? How did you make the preparation compatible with your private life and job? submitted by /u/mindtheclap
Hey guys, so 2 weeks ago today I posted on this sub about my experience after coming up just short of passing my Cloud Practitioner exam. The post got way more responses than I expected, so I figured I'd post an update. Well, this morning I tried again and passed!! Thanks to everyone who commented with helpful tips and words of encouragement. It was a pleasant surprise to see just how uplifting and supportive this community is. Still not 100% sure what's next from here, so any advice or connections in the field are welcome. Thank you all once again 🙂 TL;DR: two weeks ago I barely failed the CLF-C02 exam, but today I passed. submitted by /u/billmassi
Passed my AWS Certified Solutions Architect - Associate with a score of 796. This was my first AWS exam; I have no prior AWS experience. My background is as a Java developer. Adrian Cantrill's course helped me a lot in my journey. The course was in-depth and had lots of hands-on. I started with his free Tech Fundamentals course and then his Solutions Architect Associate course, and took some reference from AWS Skill Builder as well. For practice, I used Tutorials Dojo and Stephane Maarek's practice papers. Both of them helped me a lot in the final exam, especially Tutorials Dojo's. I scheduled my exam after scoring 80-90% in my mocks; I used to get 50-60% on my first attempts. The exam had questions on all the basic AWS services like EC2, VPC, IAM, Auto Scaling, S3, and databases (RDS, RDS Proxy, DynamoDB, Aurora). A few questions were from Kinesis and CloudFront. After the exam, I realised which topics I still need to work on. The Pearson VUE exam centre was quite good; I had no technical difficulties and everything went smoothly. All in all it was a good experience. All the best to everyone who is going to take their exam soon 🙂 submitted by /u/hobbit-in-shire
I passed the SAA-C03 with a 796, which was higher than the scores I'd gotten on Tutorials Dojo. I used Cantrill's course, btw. For some context on my background: I'm a university student who will be interning at AWS this summer as a cloud support associate. I took the CCP (820) last September after 4 months of preparation on and off. AWS apparently pays half for any certification that I do in that time, so I was planning on preparing for the DVA-C02 during evenings and weekends. I expect to complete the certification by August at the latest, if nothing goes wrong. I expect that it should not be difficult: since I'm directly working with AWS 8+ hours a day, I should be gaining a lot of hands-on experience. I still have another year of schooling left before I graduate. What certifications would be considered a good investment for knowledge or help in job seeking? I was considering the CKA, CCNA, Terraform, or the AWS Networking/Security specialty. submitted by /u/HandDazzling2014
Hey everyone, I have a month to prepare for the Professional exam. Currently, I score 70% on the SAA-C03 (Associate) practice exams from Tutorials Dojo. I've decided not to pursue the Associate exam because I don't want to spend an additional $150 on it. I believe I can be ready for the Professional level soon, as I dedicate at least two hours daily to studying and practicing the services. Additionally, I'm fully engaged with AWS in my new project. What are your recommendations? How should I structure my preparation program? I already have the Tutorials Dojo bundles but am considering adding Cantrill's courses to my schedule. Do you have any other suggestions? Cheers! submitted by /u/gobitpide
I just passed my CLF-C02 exam and it said PASS! The thing is, I had not made a Credly account, so I do not know where my badge will go. Will it be sent to the same email where I received my Pearson receipt? I am just anxious because I do not want to lose the badge! Thanks! submitted by /u/nunpatrck
I am preparing to take the AWS Solutions Architect Associate exam before May, and I was wondering whether anyone knows any methods to save some bucks on the test. I know about the free retake promo they are running, but technically I have to pay full price for it anyway. Also, have any of you taken the certification exam online through one of your university's study rooms? I feel that it would be better than taking it at home: much quieter, and away from distractions. Although there's one door that is transparent, and I don't know if that violates the proctoring policies. Any help is appreciated! 🙂 submitted by /u/sovereign_aura
Amazon Personalize is excited to announce automatic training for solutions. With automatic training, developers can set a cadence for their Personalize solutions to automatically retrain using the latest data from their dataset group. This process creates a newly trained machine learning (ML) model, also known as a solution version, and maintains the relevance of Amazon Personalize recommendations for end users.
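As a sketch of what "setting a cadence" might look like in code, the snippet below assembles a CreateSolution-style request with automatic training enabled. The auto-training field names (`performAutoTraining`, `autoTrainingConfig`, `schedulingExpression`) are my best-guess reading of this announcement, not verified parameters; treat them, and every ARN and name here, as assumptions to check against the Amazon Personalize CreateSolution API reference.

```python
def build_solution_request(name, dataset_group_arn, recipe_arn,
                           cadence="rate(7 days)"):
    """Sketch of a CreateSolution request that enables automatic retraining.

    The auto-training keys below are assumptions based on the announcement
    text; confirm the exact parameter names in the Personalize API docs.
    """
    return {
        "name": name,
        "datasetGroupArn": dataset_group_arn,
        "recipeArn": recipe_arn,
        "performAutoTraining": True,               # assumed parameter name
        "solutionConfig": {
            "autoTrainingConfig": {                # assumed parameter name
                "schedulingExpression": cadence,
            },
        },
    }

req = build_solution_request(
    "movie-recs",                                  # hypothetical names/ARNs
    "arn:aws:personalize:us-east-1:123456789012:dataset-group/demo",
    "arn:aws:personalize:::recipe/aws-user-personalization",
    cadence="rate(3 days)",
)
# boto3.client("personalize").create_solution(**req)
```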
Amazon Simple Email Service (Amazon SES) is now available in the AWS GovCloud (US-East) Region, giving government customers and their partners the benefits of Amazon SES.
Passed the exam with 868 points. My preparation was chaotic: I was studying for the SAA with Stephane's course, then the company made me take the CLF-C01 because they had already paid for that course for me, and I had to wait a long time before I could take the exam, so I had to re-study everything. I did Stephane's practice tests and found them quite difficult; then I did the Tutorials Dojo tests and found them easier. The exam itself was also quite easy, and I think the Tutorials Dojo tests are more similar to the real exam. Now I wonder what to do next: I would take the SAA too, but currently I don't work on AWS projects, and I don't know if I ever will. I think Stephane's course could help me pass the exam, but I don't know if I would be ready to apply for AWS jobs with just theoretical knowledge and no previous experience. Does Cantrill's SAA course give you enough preparation to pass job interviews? submitted by /u/lestatt71
AWS Identity and Access Management (IAM) Roles Anywhere now provides the capability to define a set of mapping rules, allowing you to specify which data is extracted from your X.509 end-entity certificates. The data that is mapped is referred to as attributes and used as session tags in the IAM policy condition in order to allow or deny permissions. These attributes can be in one of the subject, issuer, or subject alternative name (SAN) fields of the X.509 certificate.
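To illustrate how a mapped certificate attribute can drive a permission decision, here is a sketch of an IAM policy whose condition keys off a session tag. My understanding is that Roles Anywhere surfaces mapped attributes as principal tags with keys like `aws:PrincipalTag/x509Subject/CN`; verify that key format, and note the bucket ARN and CN value are hypothetical.

```python
import json

def build_tag_scoped_policy(common_name):
    """Build an IAM policy allowing s3:GetObject only for sessions whose
    source certificate had a matching Subject CN.

    The principal-tag key format below reflects my understanding of the
    default Roles Anywhere mapping; confirm it for your trust anchor.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",   # hypothetical
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalTag/x509Subject/CN": common_name,
                },
            },
        }],
    }

policy = build_tag_scoped_policy("build-server-01")
print(json.dumps(policy, indent=2))
```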
One thing people come to this sub for is anecdotes about the tests. Here is one more: I am in management with zero technical hands-on roles in the last 20 years and zero experience with AWS. I spent 9 days "studying" for the CLF-C02. Studying was listening to a very long 17-hour video series and reading the AWS documentation when I had questions. A couple days ago I took the exam and "passed". No detail email yet. I tried practice exams from 3 sources: one I cannot recall, Udemy, and Dojo. I thought Dojo's were the "best". Even still, I think some questions move beyond "difficult" into intentionally vague, especially as it relates to the Well-Architected Framework pillars, etc. As stated, I do not have my score back, but the exam felt substantially easier than the practice tests. I got 60s and 70s on the Dojo tests the first time through, and 70s to low 80s the second time through. I suspect, but do not know, that I will get mid 800s or higher on the exam. Again, the official exam questions were more straightforward; the distractor answers "seemed" less applicable than on the practice tests. Hope this helps someone. submitted by /u/unknownatthistime
Today, we are excited to announce that customers can now use Apache Livy to submit their Apache Spark jobs to Amazon EMR on EKS, in addition to using the StartJobRun API, Spark Operator, Spark Submit, and Interactive Endpoints. With this launch, customers can use a REST interface to easily submit Spark jobs or snippets of Spark code and retrieve results synchronously or asynchronously, while continuing to get all of the Amazon EMR on EKS benefits such as the EMR-optimized Spark runtime, an SSL-secured Livy endpoint, and a programmatic set-up experience.
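The Livy REST interface works with plain HTTP requests. Below is a minimal sketch of the two request bodies involved: one to create an interactive session (`POST /sessions`) and one to run a snippet of Spark code (`POST /sessions/{id}/statements`). The endpoint URL is a placeholder; the real one comes from your EMR on EKS Livy endpoint setup.

```python
import json

# Hypothetical Livy endpoint for an EMR on EKS virtual cluster.
LIVY_URL = "https://example-livy-endpoint.emr-containers.amazonaws.com"

def session_payload(name):
    """Body for POST /sessions — creates an interactive PySpark session."""
    return {"name": name, "kind": "pyspark",
            "conf": {"spark.executor.instances": "2"}}

def statement_payload(code):
    """Body for POST /sessions/{id}/statements — runs a Spark code snippet."""
    return {"code": code}

# With a reachable endpoint you would POST these bodies, e.g. with `requests`:
#   requests.post(f"{LIVY_URL}/sessions", json=session_payload("demo"))
print(json.dumps(statement_payload("spark.range(10).count()")))
```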
Amazon SageMaker now enables SageMaker Projects in the updated SageMaker Studio experience. Prior to this, SageMaker Projects were available only in SageMaker Studio Classic.
Today, AWS Elastic Disaster Recovery (AWS DRS) announces support of AWS Outposts racks. With this launch, you can now set your data replication and recovery destinations to AWS Outposts racks, in addition to using AWS Regions and Availability Zones where AWS DRS is available.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R6gd instances are available in the Europe (Zurich) Region. These instances are powered by AWS Graviton2 processors and are built on the AWS Nitro System. The Nitro System is a collection of AWS-designed hardware and software innovations that enables the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage. R6gd instances provide local SSD storage and are ideal for memory-intensive workloads such as open-source databases, in-memory caches, and real-time big data analytics that need access to high-speed, low-latency storage. These instances offer up to 25 Gbps of network bandwidth, up to 19 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS), up to 512 GiB of RAM, and up to 3.8 TB of NVMe SSD local instance storage.
AWS Glue Studio Notebooks provides interactive job authoring in AWS Glue, which helps simplify the process of developing data integration jobs. Studio Notebooks is generally available in the following 6 AWS regions starting today: Middle East (UAE), Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Israel (Tel Aviv), Europe (Spain) and Europe (Zurich).
AWS Resilience Hub is now a HIPAA eligible service, enabling healthcare and life sciences organizations to now use AWS Resilience Hub to run sensitive workloads regulated under the U.S. Health Insurance Portability and Accountability Act (HIPAA). AWS maintains a standards-based risk management program to ensure that the HIPAA-eligible services specifically support HIPAA administrative, technical, and physical safeguards.
So I’m currently working for AWS as a data center technician and I’m leaning toward pursuing cloud computing. I’ve heard that getting certifications is good for the technical aspect. However, I’ve been told that acquiring a degree in business to pair with the cloud certification would better my chances of getting higher-paying jobs. I need an answer because I do not want to waste my time. I mean, why would someone pursue a business degree if I want to mainly work on the tech/networking side? What are the benefits of that? Because I also hear of tons of people who make six figures with just certifications alone. submitted by /u/GASHI_CASH [link] [comments]
So I already work for AWS as a data center tech and I believe I want to pursue cloud computing. I want to take the Cloud Practitioner exam, but I feel like taking the CompTIA Network+ would give me the basics in networking so I’m more equipped for the Cloud Practitioner certification. Do you think this course of action is wise, or is the Network+ a waste of time? P.S. I already have my A+, so that also factors into my contemplation of taking the Network+. submitted by /u/GASHI_CASH [link] [comments]
Well, I passed. 800 isn’t good enough for me, but I will take it. I’d like to thank u/acantril for his incredible program. The Dojo practice exams were the icing on the cake as far as exam prep goes; the meat and potatoes really was Adrian’s course. The level of detail and guidance I received was better than any manager or team I’ve been a part of, and I feel confident in my abilities to move on to the next step in my career. Thank you.

This is quite a momentous occasion for me. I’ve dealt with depression and overcame it to sharpen my skills back into the lion I once was. This moment hit me as I sat next to my father’s hospital bed with him asking for his life to end due to a massive stroke and brain bleed the doctors are hesitant to fix. He’s dealt with this for the past ten years and is exhausted. I heard the email notification on my phone and couldn’t believe it. The nurse gave him fentanyl and he felt better, and I’ve never felt better. My father’s situation isn’t great, but my success has shown me and him that we mustn’t think this way.

I found out about his emergency surgery the night before my exam. I studied for a month, and all last night. I was unsure of how I did; I felt like I had passed, but there was some interesting wording in a few questions that I felt was trying to make me second-guess myself. I trusted my instinct and it worked well for me. I’d also like to thank you, the community. Your stories of success and failure helped me tremendously. Now for a much-needed break. What a day. submitted by /u/el_DAN1MAL [link] [comments]
Continuing my journey to become a Solutions Architect (also DevOps Engineer) and need a study buddy to not only stay accountable but also talk through some of the material. We could also mix things up every now and then with flash cards or party games. If you're working on the AWS CCP feel free to join as well and ask any questions. Explaining cloud concepts to others would definitely help my understanding. https://discord.gg/zyN5rxSV submitted by /u/HeroAcademiaNet [link] [comments]
Today, AWS announces the release of Neuron 2.18, introducing stable support (out of beta) for PyTorch 2.1, adding continuous batching with vLLM support, and adding support for speculative decoding with Llama-2-70B sample in Transformers NeuronX library.
Amazon WorkSpaces now offers APIs to link your AWS accounts within the same Region so that those accounts can use the same underlying dedicated infrastructure. AWS enables you to run your Bring Your Own License (BYOL) WorkSpaces on infrastructure that is dedicated to you in the AWS Cloud, and these new APIs make it easier for you to use that dedicated infrastructure efficiently.
AWS IAM Identity Center administrators now have an option to extend the session duration for Amazon CodeWhisperer separately from the session durations of other IAM Identity Center integrated applications and the AWS access portal. Users of Amazon CodeWhisperer can work in their integrated development environment (IDE) for up to 90 days without being asked by CodeWhisperer to re-authenticate.
Starting today, the next generation of the Meta Llama models, Llama 3, is now available via Amazon SageMaker JumpStart, a machine learning (ML) hub that offers pretrained models, built-in algorithms, and pre-built solutions to help you quickly get started with ML. You can deploy and use Llama 3 foundation models with a few clicks in SageMaker Studio or programmatically through the SageMaker Python SDK.
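As a sketch of the programmatic path, the helper below builds a text-generation request body in the shape JumpStart text models commonly accept (the exact field names can vary by model version), and the `deploy_llama3` function shows the SageMaker Python SDK calls. The deploy step is not executed here: it requires the `sagemaker` package, AWS credentials, and EULA acceptance, and the `model_id` shown follows the JumpStart naming pattern for the Llama 3 8B model.

```python
def llama3_payload(prompt, max_new_tokens=256):
    """Illustrative request body for a JumpStart text-generation endpoint."""
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": 0.6,
            "top_p": 0.9,
        },
    }

def deploy_llama3():
    # Not executed here: needs the `sagemaker` SDK and AWS credentials.
    from sagemaker.jumpstart.model import JumpStartModel
    model = JumpStartModel(model_id="meta-textgeneration-llama-3-8b")
    predictor = model.deploy(accept_eula=True)
    return predictor.predict(llama3_payload("What is Amazon SageMaker?"))

print(llama3_payload("Hello")["parameters"]["max_new_tokens"])
```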
AWS Storage Gateway expands availability to the AWS Canada West (Calgary) Region, enabling customers to deploy and manage hybrid cloud storage for their on-premises workloads.
Trusted Language Extensions for PostgreSQL (pg_tle) now supports a client authentication hook that lets you run additional checks on top of the existing authentication process, allowing you to enhance the security posture of your databases. A hook is an internal callback mechanism available to developers for extending PostgreSQL's core functionality. By using hooks, developers can implement their own functions or procedures for use during various database operations.
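The actual pg_tle clientauth hook is written and registered in SQL/PL/pgSQL inside the database; the snippet below is only a language-agnostic Python sketch of the callback idea, with a made-up policy (superuser may only connect from an internal 10.x network) to show how a registered hook can veto an authentication attempt by raising.

```python
# Registered hooks run during authentication and may reject the attempt,
# mirroring how a pg_tle clientauth function can raise an error to deny login.
clientauth_hooks = []

def register_hook(fn):
    clientauth_hooks.append(fn)
    return fn

@register_hook
def block_superuser_from_public_network(user, source_ip):
    # Illustrative policy only: superuser must come from the private range.
    if user == "postgres" and not source_ip.startswith("10."):
        raise PermissionError("superuser login only allowed from internal network")

def authenticate(user, source_ip):
    """Run every registered hook; any hook raising denies the connection."""
    for hook in clientauth_hooks:
        hook(user, source_ip)
    return True

print(authenticate("app_user", "203.0.113.7"))
```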
Hi all, I’ve got a route53 hosted domain but cannot for the life of me figure out how to get it routed to a github pages website. Are there any resources you guys like, or could someone give me a 1 hour consult or similar? Thanks! submitted by /u/changomacho [link] [comments]
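For anyone with the same question: the usual setup is apex A records pointing at GitHub Pages' published IP addresses plus a `www` CNAME to `<username>.github.io` (check GitHub's custom-domain docs for the current IPs). The sketch below builds a Route 53 change batch for that; the hosted zone ID and domain are placeholders.

```python
def github_pages_change_batch(domain, gh_user):
    """Change batch for route53 change_resource_record_sets: apex A records
    pointing at GitHub Pages' published IPs, plus a www CNAME."""
    pages_ips = ["185.199.108.153", "185.199.109.153",
                 "185.199.110.153", "185.199.111.153"]
    return {
        "Changes": [
            {"Action": "UPSERT",
             "ResourceRecordSet": {
                 "Name": domain, "Type": "A", "TTL": 300,
                 "ResourceRecords": [{"Value": ip} for ip in pages_ips]}},
            {"Action": "UPSERT",
             "ResourceRecordSet": {
                 "Name": f"www.{domain}", "Type": "CNAME", "TTL": 300,
                 "ResourceRecords": [{"Value": f"{gh_user}.github.io"}]}},
        ]
    }

# To apply (requires boto3, credentials, and your real hosted zone ID):
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="Z...",
#       ChangeBatch=github_pages_change_batch("example.com", "someuser"))
print(len(github_pages_change_batch("example.com", "someuser")["Changes"]))
```

Remember to also set the custom domain in the GitHub Pages repository settings so GitHub serves the right site for the hostname.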
AWS Partner Central has enhanced its user management experience with a modernized look and feel and more customization capabilities. Now, alliance leads and alliance team users can assign specific user roles with distinct permission sets and seamlessly invite new users to register via email, specifying a role assignment upon registration. These updates simplify user management through bulk actions and personalized role assignments.
Today, we are announcing the availability of the Amazon Neptune connector for Nodestream, the Parquet input file format for Nodestream, and the Nodestream Security Bill Of Material (SBOM) plug-in for CycloneDX and SPDX file formats.
AWS Partners with services solutions will now have additional guidance available through the tasks feature in AWS Partner Central to help expedite their self-service journey with AWS. With this release, partners in the early stages of their engagement with the AWS Partner Network (APN) will be provided with a series of personalized tasks guiding them toward achieving AWS Select Tier status. Achieving this tier unlocks additional benefits, including access to funding and programs.
Starting today, Partners can leverage the AWS Partner CRM Connector for Salesforce to create offers for future dated agreements and agreement-based offers, providing greater flexibility and control over offer management and integration within AWS Marketplace. Future dated agreements allow you to schedule service start dates in the future. Partners can also access their agreements and create agreement-based offers for SaaS Contract products, with or without consumption, across multiple seller accounts, enabling buyers to replace existing agreements when they accept the offer.
Today, Amazon Simple Queue Service (SQS) announces support for dead-letter queue redrive for FIFO queues in the AWS GovCloud (US) Regions. Dead-letter queue redrive is an enhanced capability to improve the dead-letter queue management experience for Amazon SQS customers. Amazon SQS already supports redriving messages from a standard dead-letter queue to a standard source queue or a standard custom destination queue. Now SQS customers can also redrive messages from a FIFO dead-letter queue to FIFO source queues or FIFO custom destination queues.
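A redrive is started with the SQS `StartMessageMoveTask` API; source and destination must be the same queue type (FIFO queue names and ARNs end in `.fifo`). The helper below is a small sketch that validates that constraint and builds the call parameters; the ARNs in the comment are hypothetical.

```python
def redrive_params(source_arn, dest_arn, rate=None):
    """Parameters for sqs.start_message_move_task; a FIFO dead-letter queue
    (ARN ending in .fifo) must redrive to a FIFO destination, and standard
    to standard."""
    if source_arn.endswith(".fifo") != dest_arn.endswith(".fifo"):
        raise ValueError("source and destination must both be FIFO or both standard")
    params = {"SourceArn": source_arn, "DestinationArn": dest_arn}
    if rate is not None:
        # Optional throttle on the move task.
        params["MaxNumberOfMessagesPerSecond"] = rate
    return params

# With boto3 and credentials, the actual call would look like:
#   boto3.client("sqs").start_message_move_task(**redrive_params(
#       "arn:aws-us-gov:sqs:us-gov-west-1:123456789012:orders-dlq.fifo",
#       "arn:aws-us-gov:sqs:us-gov-west-1:123456789012:orders.fifo"))
print(redrive_params("a.fifo", "b.fifo"))
```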
Hey all, I thought I'd share what tools and methods I used to get certified three times in a row in a short time. Here's the simple answer: watching Stéphane Maarek's Udemy courses (and skipping every single hands-on section/practice test), doing a single Tutorials Dojo practice exam for each individual cert in review mode, and having a large amount of uninterrupted free time due to a recent layoff. If you don't care about the gritty details and just want to pass, remember to use Maarek's courses and Tutorials Dojo exams and you'll do fine. They cover nearly everything and are worth every single penny. Also, if you aren't interested in reading a very long wall of text, feel free to AMA. I don't mind providing shorter answers to any questions about a specific cert that you may have!

For those that don't mind a really long answer, here is my study process and a few tips based on my personal experience:

I skipped every single hands-on section in Maarek's courses. This drastically cut down the study time, as these sections personally felt way too deep compared to what the exams ask of you. Just listening to the course and following along with the material exclusively on the slides felt like more than enough prep for me. However, the hands-on sections are great for visual learners and especially for the real world. I highly recommend watching them before interviews for cloud-based jobs that use AWS, or for creating your own personal cloud projects. I'd never suggest ignoring these sections entirely, especially if you want to work in AWS or the cloud space overall. However, for the purposes of the exam, you will never have to do any hands-on work. Like the documentation says: the exam is multiple choice. That's it.

Maarek's course is very similar to reading the AWS whitepapers. He goes over everything you will need from those documents. In my experience, if you have Maarek's slides you don't need the whitepapers (at the associate level at least).

I only took one TD practice exam. This was more of a personal choice: I hate practice tests. My other certs are CompTIA, and those practice tests are some of the worst to deal with, so my view of practice exams is quite negative. However, TD's practice exams are top-notch. They are perfect for getting you into the mindset of the exam. When people on this sub say that the TD exam questions and the real exam questions look similar, they are telling the truth, 100%. The scenarios/answers are different of course, but the important part is that they get you used to the wording and structure of the questions. The official AWS practice questions also prepare you, but there are a very limited number of those. If you have the time to do more than one TD practice test, I recommend it, even as a practice-test hater. However, in my personal experience the score didn't matter; what mattered was whether or not you understood why you got each answer right or wrong. I personally took notes on all questions, whether I got them right or wrong. I haven't passed a single practice exam a day in my life, but now I have 3 AWS and 4 CompTIA certs.

I was able to pass these exams so quickly due to being laid off. I had an abundance of time and was lucky enough to not need much money while I look for another job. I sat for and passed each exam during the week I studied for it, and I took one exam per week. To clarify, I started each Udemy course on a Monday and finished by Wednesday, took a TD exam on Thursday, and sat for the exam on Friday. Additionally, these certs have nothing to do with my job search, so I did not have a lot of pressure to pass. Not having the cert be directly tied to my future really allowed me to absorb the knowledge quickly. Like many have said before, if you are doing this cert (or any other cert) while working a full-time job, or have anything else in life that can distract you from studying, you will very likely need more time.

If you have a large block of uninterrupted free time, I highly recommend doing as much studying in that timeframe as possible, as the material is very intuitive; and if you are shooting for multiple certs, try studying for them close together, as the material overlaps a whole lot between exams. Maarek's courses even highlight which sections will overlap.

Some auxiliary tips, based on sitting for the English version of the exams: Read the official AWS exam guide and sample questions on the AWS website for the cert you are studying for. All the relevant information you need to focus on is right there; there are no tricks. I can't speak for the exam in other languages, but the English version will not try to trick you. The language is very straightforward. I feel that I should reiterate one last time: there are no tricks. Both the AWS website and Maarek's courses will tell you all you need to know for the exam. Like the official documentation says, the associate-level exams are multiple choice, not hands-on. Process of elimination works very well; that is how someone like me with no cloud experience was able to logic my way through any difficult questions. If you have basic knowledge of what the cloud is and why we would want to use it, you've already won half the battle. The rest is just doing what the question asks. Overall it is up to you to know which answer achieves the goal the question asks you to achieve. Sounds obvious, right? That's because it is. Remember: no tricks.

Quick disclaimer: we all learn differently, so I know my crude study methods might not work for everyone. But either way, I wish you luck in your future exams. Just don't psyche yourself out, and know that you got this! And a big shout out to Stéphane Maarek and Jon Bonso for making such awesome study material. That's it, hope you liked it, and I'll see you in the next reddit post. submitted by /u/SgtSchittBrix [link] [comments]
Please see the attached screenshot, where we have a graph grouped by usage type. I can understand almost all the items there, but cannot work out what "No usage type" refers to. submitted by /u/ApartmentSad3053 [link] [comments]