
IT – Engineering – Cloud – Finance

IT, Engineering, Entrepreneurship, Sports, Finances, Life, Success, Failure


Posted on June 23, 2019 (updated May 16, 2022)

Top 100 AWS Solutions Architect Associate Certification Exam Questions and Answers Dump – SAA-C02 and SAA-C03

AWS Solution Architect Associate Exam Questions and Answers Dump
 

AWS Certified Solutions Architect – Associate average salary

The AWS Certified Solutions Architect – Associate average salary is $149,446/year.

In this blog, we will help you prepare for the AWS Solutions Architect Associate certification exam with facts and summaries, plus a dump of the top exam questions and answers.

AWS SAA-C02 SAA-C03 Exam Prep

The popular AWS Certified Solutions Architect Associate exam will have its new version in August 2022.

AWS Certified Solutions Architect – Associate (SAA-C03) Exam Guide

AWS SAA-C03 Exam Guide

The AWS Certified Solutions Architect – Associate (SAA-C03) exam is intended for individuals who perform in a solutions architect role.
The exam validates a candidate’s ability to use AWS technologies to design solutions based on the AWS Well-Architected Framework.


The exam also validates a candidate’s ability to complete the following tasks:
• Design solutions that incorporate AWS services to meet current business requirements and future projected needs
• Design architectures that are secure, resilient, high-performing, and cost-optimized
• Review existing solutions and determine improvements

Unscored content
The exam includes 15 unscored questions that do not affect your score.
AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Target candidate description
The target candidate should have at least 1 year of hands-on experience designing cloud solutions that use AWS services.

Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether or not you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.

Content outline:
Domain 1: Design Secure Architectures 30%
Domain 2: Design Resilient Architectures 26%
Domain 3: Design High-Performing Architectures 24%
Domain 4: Design Cost-Optimized Architectures 20%

Domain 1: Design Secure Architectures
This exam domain is focused on securing your architectures on AWS and comprises 30% of the exam. Task statements include:

Task Statement 1: Design secure access to AWS resources.
Knowledge of:
• Access controls and management across multiple accounts
• AWS federated access and identity services (for example, AWS Identity and Access Management [IAM], AWS Single Sign-On [AWS SSO])
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• AWS security best practices (for example, the principle of least privilege)
• The AWS shared responsibility model

Skills in:
• Applying AWS security best practices to IAM users and root users (for example, multi-factor authentication [MFA])
• Designing a flexible authorization model that includes IAM users, groups, roles, and policies
• Designing a role-based access control strategy (for example, AWS Security Token Service [AWS STS], role switching, cross-account access)
• Designing a security strategy for multiple AWS accounts (for example, AWS Control Tower, service control policies [SCPs])
• Determining the appropriate use of resource policies for AWS services
• Determining when to federate a directory service with IAM roles
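As an illustration of the least-privilege idea above, here is a minimal sketch (Python, standard library only) that builds an IAM policy document granting read-only access to a single S3 bucket prefix. The bucket name and prefix are hypothetical:

```python
import json

def least_privilege_s3_policy(bucket: str, prefix: str = "app") -> str:
    """Build a least-privilege IAM policy allowing read-only access
    to one prefix of one S3 bucket (names are hypothetical)."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ReadOnlyAppPrefix",
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}/*",
            },
            {
                "Sid": "ListAppPrefix",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": f"arn:aws:s3:::{bucket}",
                # Restrict listing to the same prefix, not the whole bucket.
                "Condition": {"StringLike": {"s3:prefix": [f"{prefix}/*"]}},
            },
        ],
    }
    return json.dumps(policy, indent=2)

print(least_privilege_s3_policy("example-data-bucket"))
```

Scoping both the object actions and the `s3:ListBucket` condition to one prefix is what keeps the policy least-privilege rather than bucket-wide.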

Task Statement 2: Design secure workloads and applications.

Knowledge of:
• Application configuration and credentials security
• AWS service endpoints
• Control ports, protocols, and network traffic on AWS
• Secure application access
• Security services with appropriate use cases (for example, Amazon Cognito, Amazon GuardDuty, Amazon Macie)
• Threat vectors external to AWS (for example, DDoS, SQL injection)

Skills in:
• Designing VPC architectures with security components (for example, security groups, route tables, network ACLs, NAT gateways)
• Determining network segmentation strategies (for example, using public subnets and private subnets)
• Integrating AWS services to secure applications (for example, AWS Shield, AWS WAF, AWS SSO, AWS Secrets Manager)
• Securing external network connections to and from the AWS Cloud (for example, VPN, AWS Direct Connect)
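The public/private network segmentation strategy above can be sketched with Python's standard `ipaddress` module. The VPC and subnet CIDR ranges below are hypothetical:

```python
import ipaddress

# Hypothetical VPC layout: two public and two private subnets.
VPC_CIDR = ipaddress.ip_network("10.0.0.0/16")
PUBLIC_SUBNETS = [ipaddress.ip_network("10.0.0.0/24"),
                  ipaddress.ip_network("10.0.1.0/24")]
PRIVATE_SUBNETS = [ipaddress.ip_network("10.0.128.0/24"),
                   ipaddress.ip_network("10.0.129.0/24")]

def subnet_tier(addr: str) -> str:
    """Classify an instance's private IP into its subnet tier."""
    ip = ipaddress.ip_address(addr)
    if any(ip in net for net in PUBLIC_SUBNETS):
        return "public"
    if any(ip in net for net in PRIVATE_SUBNETS):
        return "private"
    if ip in VPC_CIDR:
        return "unallocated"
    return "outside-vpc"

print(subnet_tier("10.0.128.15"))  # private
```

In a real VPC, what makes a subnet "public" is a route table entry to an internet gateway; this sketch only models the addressing side of that segmentation.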

Task Statement 3: Determine appropriate data security controls.

Knowledge of:
• Data access and governance
• Data recovery
• Data retention and classification
• Encryption and appropriate key management

Skills in:
• Aligning AWS technologies to meet compliance requirements
• Encrypting data at rest (for example, AWS Key Management Service [AWS KMS])
• Encrypting data in transit (for example, AWS Certificate Manager [ACM] using TLS)
• Implementing access policies for encryption keys
• Implementing data backups and replications
• Implementing policies for data access, lifecycle, and protection
• Rotating encryption keys and renewing certificates
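Encrypting data at rest with appropriate key management is usually done as envelope encryption, the pattern behind AWS KMS: a fresh data key encrypts the payload, and a master key encrypts (wraps) the data key. Here is a toy sketch of the pattern; the XOR "cipher" is for illustration only and is not real encryption:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher for illustration only -- NOT real encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def envelope_encrypt(plaintext: bytes, master_key: bytes):
    """KMS-style envelope encryption sketch: a fresh data key encrypts
    the payload, and the master key encrypts the data key."""
    data_key = secrets.token_bytes(32)
    ciphertext = xor_bytes(plaintext, data_key)
    wrapped_key = xor_bytes(data_key, master_key)
    return ciphertext, wrapped_key

def envelope_decrypt(ciphertext: bytes, wrapped_key: bytes,
                     master_key: bytes) -> bytes:
    data_key = xor_bytes(wrapped_key, master_key)
    return xor_bytes(ciphertext, data_key)

master = secrets.token_bytes(32)
ct, wk = envelope_encrypt(b"sensitive record", master)
print(envelope_decrypt(ct, wk, master))
```

The point of the pattern: rotating the master key only requires re-wrapping small data keys, never re-encrypting the bulk data.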



Domain 2: Design Resilient Architectures
This exam domain is focused on designing resilient architectures on AWS and comprises 26% of the exam. Task statements include:


Task Statement 1: Design scalable and loosely coupled architectures.
Knowledge of:
• API creation and management (for example, Amazon API Gateway, REST API)
• AWS managed services with appropriate use cases (for example, AWS Transfer Family, Amazon Simple Queue Service [Amazon SQS], Secrets Manager)
• Caching strategies
• Design principles for microservices (for example, stateless workloads compared with stateful workloads)
• Event-driven architectures
• Horizontal scaling and vertical scaling
• How to appropriately use edge accelerators (for example, content delivery network [CDN])
• How to migrate applications into containers
• Load balancing concepts (for example, Application Load Balancer)
• Multi-tier architectures
• Queuing and messaging concepts (for example, publish/subscribe)
• Serverless technologies and patterns (for example, AWS Fargate, AWS Lambda)
• Storage types with associated characteristics (for example, object, file, block)
• The orchestration of containers (for example, Amazon Elastic Container Service [Amazon ECS], Amazon Elastic Kubernetes Service [Amazon EKS])
• When to use read replicas
• Workflow orchestration (for example, AWS Step Functions)

Skills in:
• Designing event-driven, microservice, and/or multi-tier architectures based on requirements
• Determining scaling strategies for components used in an architecture design
• Determining the AWS services required to achieve loose coupling based on requirements
• Determining when to use containers
• Determining when to use serverless technologies and patterns
• Recommending appropriate compute, storage, networking, and database technologies based on requirements
• Using purpose-built AWS services for workloads
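Queuing and publish/subscribe messaging are the core loose-coupling tools in the list above. This minimal in-memory sketch mimics an SNS topic fanning out to SQS queues; all names are illustrative:

```python
from collections import defaultdict
from queue import Queue

class TopicBus:
    """Minimal in-memory publish/subscribe bus, sketching how an SNS
    topic fanning out to SQS queues decouples producers from consumers."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str) -> Queue:
        q = Queue()
        self._subscribers[topic].append(q)
        return q

    def publish(self, topic: str, message: str) -> None:
        # Every subscriber queue receives its own copy (fan-out).
        for q in self._subscribers[topic]:
            q.put(message)

bus = TopicBus()
orders = bus.subscribe("order-events")
billing = bus.subscribe("order-events")
bus.publish("order-events", "order-123:created")
print(orders.get(), billing.get())
```

The producer never knows how many consumers exist, which is exactly the loose coupling the exam domain asks you to design for.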

Task Statement 2: Design highly available and/or fault-tolerant architectures.
Knowledge of:
• AWS global infrastructure (for example, Availability Zones, AWS Regions, Amazon Route 53)
• AWS managed services with appropriate use cases (for example, Amazon Comprehend, Amazon Polly)
• Basic networking concepts (for example, route tables)
• Disaster recovery (DR) strategies (for example, backup and restore, pilot light, warm standby, active-active failover, recovery point objective [RPO], recovery time objective [RTO])
• Distributed design patterns
• Failover strategies
• Immutable infrastructure
• Load balancing concepts (for example, Application Load Balancer)
• Proxy concepts (for example, Amazon RDS Proxy)
• Service quotas and throttling (for example, how to configure the service quotas for a workload in a standby environment)
• Storage options and characteristics (for example, durability, replication)
• Workload visibility (for example, AWS X-Ray)

Skills in:
• Determining automation strategies to ensure infrastructure integrity
• Determining the AWS services required to provide a highly available and/or fault-tolerant architecture across AWS Regions or Availability Zones
• Identifying metrics based on business requirements to deliver a highly available solution
• Implementing designs to mitigate single points of failure
• Implementing strategies to ensure the durability and availability of data (for example, backups)
• Selecting an appropriate DR strategy to meet business requirements
• Using AWS services that improve the reliability of legacy applications and applications not built for the cloud (for example, when application changes are not possible)
• Using purpose-built AWS services for workloads
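Selecting a DR strategy to meet business requirements is a cost-versus-RTO/RPO trade-off across the four classic strategies. This sketch picks the cheapest strategy that meets given targets; the hour figures and relative costs are hypothetical placeholders, not AWS numbers:

```python
# Hypothetical RTO/RPO figures (hours) and relative costs for the four
# classic DR strategies; real numbers depend entirely on the workload.
DR_STRATEGIES = {
    "backup-and-restore": {"rto": 24.0, "rpo": 24.0, "relative_cost": 1},
    "pilot-light":        {"rto": 4.0,  "rpo": 1.0,  "relative_cost": 2},
    "warm-standby":       {"rto": 0.5,  "rpo": 0.25, "relative_cost": 3},
    "active-active":      {"rto": 0.0,  "rpo": 0.0,  "relative_cost": 4},
}

def cheapest_strategy(max_rto: float, max_rpo: float) -> str:
    """Pick the lowest-cost strategy whose RTO and RPO meet the targets."""
    candidates = [
        (v["relative_cost"], name)
        for name, v in DR_STRATEGIES.items()
        if v["rto"] <= max_rto and v["rpo"] <= max_rpo
    ]
    if not candidates:
        raise ValueError("no strategy meets the targets")
    return min(candidates)[1]

print(cheapest_strategy(max_rto=1.0, max_rpo=0.5))  # warm-standby
```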

Domain 3: Design High-Performing Architectures
This exam domain is focused on designing high-performing architectures on AWS and comprises 24% of the exam. Task statements include:

Task Statement 1: Determine high-performing and/or scalable storage solutions.

Knowledge of:
• Hybrid storage solutions to meet business requirements
• Storage services with appropriate use cases (for example, Amazon S3, Amazon Elastic File System [Amazon EFS], Amazon Elastic Block Store [Amazon EBS])
• Storage types with associated characteristics (for example, object, file, block)

Skills in:
• Determining storage services and configurations that meet performance demands
• Determining storage services that can scale to accommodate future needs

Task Statement 2: Design high-performing and elastic compute solutions.
Knowledge of:
• AWS compute services with appropriate use cases (for example, AWS Batch, Amazon EMR, Fargate)
• Distributed computing concepts supported by AWS global infrastructure and edge services
• Queuing and messaging concepts (for example, publish/subscribe)
• Scalability capabilities with appropriate use cases (for example, Amazon EC2 Auto Scaling, AWS Auto Scaling)
• Serverless technologies and patterns (for example, Lambda, Fargate)
• The orchestration of containers (for example, Amazon ECS, Amazon EKS)

Skills in:
• Decoupling workloads so that components can scale independently
• Identifying metrics and conditions to perform scaling actions
• Selecting the appropriate compute options and features (for example, EC2 instance types) to meet business requirements
• Selecting the appropriate resource type and size (for example, the amount of Lambda memory) to meet business requirements

Task Statement 3: Determine high-performing database solutions.
Knowledge of:
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• Caching strategies and services (for example, Amazon ElastiCache)
• Data access patterns (for example, read-intensive compared with write-intensive)
• Database capacity planning (for example, capacity units, instance types, Provisioned IOPS)
• Database connections and proxies
• Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations)
• Database replication (for example, read replicas)
• Database types and services (for example, serverless, relational compared with non-relational, in-memory)

Skills in:
• Configuring read replicas to meet business requirements
• Designing database architectures
• Determining an appropriate database engine (for example, MySQL compared with PostgreSQL)
• Determining an appropriate database type (for example, Amazon Aurora, Amazon DynamoDB)
• Integrating caching to meet business requirements
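Integrating caching to meet business requirements usually means a read-through cache (the ElastiCache pattern) in front of the database. A minimal sketch with a TTL; the loader function is a hypothetical stand-in for the database query:

```python
import time

class ReadThroughCache:
    """Sketch of an ElastiCache-style read-through cache with TTL
    in front of a slow database lookup."""
    def __init__(self, loader, ttl_seconds: float = 60.0):
        self._loader = loader
        self._ttl = ttl_seconds
        self._store = {}
        self.misses = 0

    def get(self, key):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self._ttl:
            return entry[0]          # cache hit: no database round trip
        self.misses += 1
        value = self._loader(key)    # cache miss: fall through to the DB
        self._store[key] = (value, now)
        return value

db = {"user:1": "alice"}                       # hypothetical database
cache = ReadThroughCache(lambda k: db[k])
print(cache.get("user:1"), cache.get("user:1"), cache.misses)
```

For read-intensive access patterns this keeps repeated reads off the database, which is the same pressure-relief read replicas provide at the engine level.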

Task Statement 4: Determine high-performing and/or scalable network architectures.
Knowledge of:
• Edge networking services with appropriate use cases (for example, Amazon CloudFront, AWS Global Accelerator)
• How to design network architecture (for example, subnet tiers, routing, IP addressing)
• Load balancing concepts (for example, Application Load Balancer)
• Network connection options (for example, AWS VPN, Direct Connect, AWS PrivateLink)

Skills in:
• Creating a network topology for various architectures (for example, global, hybrid, multi-tier)
• Determining network configurations that can scale to accommodate future needs
• Determining the appropriate placement of resources to meet business requirements
• Selecting the appropriate load balancing strategy

Task Statement 5: Determine high-performing data ingestion and transformation solutions.
Knowledge of:
• Data analytics and visualization services with appropriate use cases (for example, Amazon Athena, AWS Lake Formation, Amazon QuickSight)
• Data ingestion patterns (for example, frequency)
• Data transfer services with appropriate use cases (for example, AWS DataSync, AWS Storage Gateway)
• Data transformation services with appropriate use cases (for example, AWS Glue)
• Secure access to ingestion access points
• Sizes and speeds needed to meet business requirements
• Streaming data services with appropriate use cases (for example, Amazon Kinesis)


Skills in:
• Building and securing data lakes
• Designing data streaming architectures
• Designing data transfer solutions
• Implementing visualization strategies
• Selecting appropriate compute options for data processing (for example, Amazon EMR)
• Selecting appropriate configurations for ingestion
• Transforming data between formats (for example, .csv to .parquet)

Domain 4: Design Cost-Optimized Architectures
This exam domain is focused on optimizing solutions for cost-effectiveness on AWS and comprises 20% of the exam. Task statements include:

Task Statement 1: Design cost-optimized storage solutions.
Knowledge of:
• Access options (for example, an S3 bucket with Requester Pays object storage)
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, AWS Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• AWS storage services with appropriate use cases (for example, Amazon FSx, Amazon EFS, Amazon S3, Amazon EBS)
• Backup strategies
• Block storage options (for example, hard disk drive [HDD] volume types, solid state drive [SSD] volume types)
• Data lifecycles
• Hybrid storage options (for example, DataSync, Transfer Family, Storage Gateway)
• Storage access patterns
• Storage tiering (for example, cold tiering for object storage)
• Storage types with associated characteristics (for example, object, file, block)

Skills in:
• Designing appropriate storage strategies (for example, batch uploads to Amazon S3 compared with individual uploads)
• Determining the correct storage size for a workload
• Determining the lowest cost method of transferring data for a workload to AWS storage
• Determining when storage auto scaling is required
• Managing S3 object lifecycles
• Selecting the appropriate backup and/or archival solution
• Selecting the appropriate service for data migration to storage services
• Selecting the appropriate storage tier
• Selecting the correct data lifecycle for storage
• Selecting the most cost-effective storage service for a workload
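Selecting the appropriate storage tier is a balance between per-GB storage price and retrieval cost. The sketch below compares tiers using hypothetical prices; check the current AWS pricing pages for real figures:

```python
# Hypothetical per-GB-month storage prices and per-GB retrieval fees,
# for illustration only -- not real AWS pricing.
TIER_PRICE_PER_GB = {
    "s3-standard": 0.023,
    "s3-infrequent-access": 0.0125,
    "s3-glacier-flexible": 0.0036,
}
RETRIEVAL_FEE_PER_GB = {
    "s3-standard": 0.0,
    "s3-infrequent-access": 0.01,
    "s3-glacier-flexible": 0.03,
}

def cheapest_tier(gb: float, retrievals_per_month: int) -> str:
    """Pick the cheapest tier: colder tiers store cheaply but charge
    for retrieval -- the trade-off behind lifecycle policies."""
    costs = {
        tier: gb * price + gb * retrievals_per_month * RETRIEVAL_FEE_PER_GB[tier]
        for tier, price in TIER_PRICE_PER_GB.items()
    }
    return min(costs, key=costs.get)

print(cheapest_tier(1000, retrievals_per_month=0))  # cold data -> glacier
print(cheapest_tier(1000, retrievals_per_month=5))  # hot data -> standard
```

This is the arithmetic an S3 lifecycle policy automates: as an object's access frequency drops, the cheapest tier changes.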

Task Statement 2: Design cost-optimized compute solutions.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• AWS purchasing options (for example, Spot Instances, Reserved Instances, Savings Plans)
• Distributed compute strategies (for example, edge processing)
• Hybrid compute options (for example, AWS Outposts, AWS Snowball Edge)
• Instance types, families, and sizes (for example, memory optimized, compute optimized, virtualization)
• Optimization of compute utilization (for example, containers, serverless computing, microservices)
• Scaling strategies (for example, auto scaling, hibernation)

Skills in:
• Determining an appropriate load balancing strategy (for example, Application Load Balancer [Layer 7] compared with Network Load Balancer [Layer 4] compared with Gateway Load Balancer)
• Determining appropriate scaling methods and strategies for elastic workloads (for example, horizontal compared with vertical, EC2 hibernation)
• Determining cost-effective AWS compute services with appropriate use cases (for example, Lambda, Amazon EC2, Fargate)
• Determining the required availability for different classes of workloads (for example, production workloads, non-production workloads)
• Selecting the appropriate instance family for a workload
• Selecting the appropriate instance size for a workload

Task Statement 3: Design cost-optimized database solutions.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• Caching strategies
• Data retention policies
• Database capacity planning (for example, capacity units)
• Database connections and proxies
• Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations)
• Database replication (for example, read replicas)
• Database types and services (for example, relational compared with non-relational, Aurora, DynamoDB)

Skills in:
• Designing appropriate backup and retention policies (for example, snapshot frequency)
• Determining an appropriate database engine (for example, MySQL compared with PostgreSQL)
• Determining cost-effective AWS database services with appropriate use cases (for example, DynamoDB compared with Amazon RDS, serverless)
• Determining cost-effective AWS database types (for example, time series format, columnar format)
• Migrating database schemas and data to different locations and/or different database engines

Task Statement 4: Design cost-optimized network architectures.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• Load balancing concepts (for example, Application Load Balancer)
• NAT gateways (for example, NAT instance costs compared with NAT gateway costs)
• Network connectivity (for example, private lines, dedicated lines, VPNs)
• Network routing, topology, and peering (for example, AWS Transit Gateway, VPC peering)
• Network services with appropriate use cases (for example, DNS)

Skills in:
• Configuring appropriate NAT gateway types for a network (for example, a single shared NAT gateway compared with NAT gateways for each Availability Zone)
• Configuring appropriate network connections (for example, Direct Connect compared with VPN compared with internet)
• Configuring appropriate network routes to minimize network transfer costs (for example, Region to Region, Availability Zone to Availability Zone, private to public, Global Accelerator, VPC endpoints)
• Determining strategic needs for content delivery networks (CDNs) and edge caching
• Reviewing existing workloads for network optimizations
• Selecting an appropriate throttling strategy
• Selecting the appropriate bandwidth allocation for a network device (for example, a single VPN compared with multiple VPNs, Direct Connect speed)
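The shared-versus-per-AZ NAT gateway decision above is a concrete cost calculation: one shared gateway is cheaper to run, but traffic from the other Availability Zones pays cross-AZ transfer. A simplified sketch with hypothetical prices (real NAT gateways also charge a per-GB processing fee):

```python
# Hypothetical prices: hourly NAT gateway charge and per-GB cross-AZ
# data transfer. Substitute real regional pricing before deciding.
NAT_HOURLY = 0.045
CROSS_AZ_PER_GB = 0.01
HOURS_PER_MONTH = 730

def monthly_nat_cost(azs: int, gb_per_az: float, shared: bool) -> float:
    """Compare one shared NAT gateway against one NAT gateway per AZ."""
    if shared:
        # One gateway; the other (azs - 1) zones pay cross-AZ transfer.
        gateway_cost = NAT_HOURLY * HOURS_PER_MONTH
        cross_az = CROSS_AZ_PER_GB * gb_per_az * (azs - 1)
        return gateway_cost + cross_az
    return azs * NAT_HOURLY * HOURS_PER_MONTH

print(round(monthly_nat_cost(3, gb_per_az=1000, shared=True), 2))
print(round(monthly_nat_cost(3, gb_per_az=1000, shared=False), 2))
```

Per-AZ gateways also remove a cross-AZ single point of failure, so the cheaper option is not automatically the right one.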

Which key tools, technologies, and concepts might be covered on the exam?
The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
• Compute
• Cost management
• Database
• Disaster recovery
• High performance
• Management and governance
• Microservices and component decoupling
• Migration and data transfer
• Networking, connectivity, and content delivery
• Resiliency
• Security
• Serverless and event-driven design principles
• Storage

AWS Services and Features
Many new services and feature updates are in scope for the updated AWS Certified Solutions Architect Associate certification. Here is a list of the services covered by the new version of the exam:

Analytics:
• Amazon Athena
• AWS Data Exchange
• AWS Data Pipeline
• Amazon EMR
• AWS Glue
• Amazon Kinesis
• AWS Lake Formation
• Amazon Managed Streaming for Apache Kafka (Amazon MSK)
• Amazon OpenSearch Service (Amazon Elasticsearch Service)
• Amazon QuickSight
• Amazon Redshift

Application Integration:
• Amazon AppFlow
• AWS AppSync
• Amazon EventBridge (Amazon CloudWatch Events)
• Amazon MQ
• Amazon Simple Notification Service (Amazon SNS)
• Amazon Simple Queue Service (Amazon SQS)
• AWS Step Functions

AWS Cost Management:
• AWS Budgets
• AWS Cost and Usage Report
• AWS Cost Explorer
• Savings Plans

Compute:
• AWS Batch
• Amazon EC2
• Amazon EC2 Auto Scaling
• AWS Elastic Beanstalk
• AWS Outposts
• AWS Serverless Application Repository
• VMware Cloud on AWS
• AWS Wavelength

Containers:
• Amazon Elastic Container Registry (Amazon ECR)
• Amazon Elastic Container Service (Amazon ECS)
• Amazon ECS Anywhere
• Amazon Elastic Kubernetes Service (Amazon EKS)
• Amazon EKS Anywhere
• Amazon EKS Distro

Database:
• Amazon Aurora
• Amazon Aurora Serverless
• Amazon DocumentDB (with MongoDB compatibility)
• Amazon DynamoDB
• Amazon ElastiCache
• Amazon Keyspaces (for Apache Cassandra)
• Amazon Neptune
• Amazon Quantum Ledger Database (Amazon QLDB)
• Amazon RDS
• Amazon Redshift
• Amazon Timestream

Developer Tools:
• AWS X-Ray

Front-End Web and Mobile:
• AWS Amplify
• Amazon API Gateway
• AWS Device Farm
• Amazon Pinpoint

Machine Learning:
• Amazon Comprehend
• Amazon Forecast
• Amazon Fraud Detector
• Amazon Kendra
• Amazon Lex
• Amazon Polly
• Amazon Rekognition
• Amazon SageMaker
• Amazon Textract
• Amazon Transcribe
• Amazon Translate

Management and Governance:
• AWS Auto Scaling
• AWS CloudFormation
• AWS CloudTrail
• Amazon CloudWatch
• AWS Command Line Interface (AWS CLI)
• AWS Compute Optimizer
• AWS Config
• AWS Control Tower
• AWS License Manager
• Amazon Managed Grafana
• Amazon Managed Service for Prometheus
• AWS Management Console
• AWS Organizations
• AWS Personal Health Dashboard
• AWS Proton
• AWS Service Catalog
• AWS Systems Manager
• AWS Trusted Advisor
• AWS Well-Architected Tool

Media Services:
• Amazon Elastic Transcoder
• Amazon Kinesis Video Streams

Migration and Transfer:
• AWS Application Discovery Service
• AWS Application Migration Service (CloudEndure Migration)
• AWS Database Migration Service (AWS DMS)
• AWS DataSync
• AWS Migration Hub
• AWS Server Migration Service (AWS SMS)
• AWS Snow Family
• AWS Transfer Family

Networking and Content Delivery:
• Amazon CloudFront
• AWS Direct Connect
• Elastic Load Balancing (ELB)
• AWS Global Accelerator
• AWS PrivateLink
• Amazon Route 53
• AWS Transit Gateway
• Amazon VPC
• AWS VPN

Security, Identity, and Compliance:
• AWS Artifact
• AWS Audit Manager
• AWS Certificate Manager (ACM)
• AWS CloudHSM
• Amazon Cognito
• Amazon Detective
• AWS Directory Service
• AWS Firewall Manager
• Amazon GuardDuty
• AWS Identity and Access Management (IAM)
• Amazon Inspector
• AWS Key Management Service (AWS KMS)
• Amazon Macie
• AWS Network Firewall
• AWS Resource Access Manager (AWS RAM)
• AWS Secrets Manager
• AWS Security Hub
• AWS Shield
• AWS Single Sign-On
• AWS WAF

Serverless:
• AWS AppSync
• AWS Fargate
• AWS Lambda

Storage:
• AWS Backup
• Amazon Elastic Block Store (Amazon EBS)
• Amazon Elastic File System (Amazon EFS)
• Amazon FSx (for all types)
• Amazon S3
• Amazon S3 Glacier
• AWS Storage Gateway

 

Out-of-scope AWS services and features
The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.

Analytics:
• Amazon CloudSearch

Application Integration:
• Amazon Managed Workflows for Apache Airflow (Amazon MWAA)

AR and VR:
• Amazon Sumerian

Blockchain:
• Amazon Managed Blockchain

Compute:
• Amazon Lightsail

Database:
• Amazon RDS on VMware

Developer Tools:
• AWS Cloud9
• AWS Cloud Development Kit (AWS CDK)
• AWS CloudShell
• AWS CodeArtifact
• AWS CodeBuild
• AWS CodeCommit
• AWS CodeDeploy
• Amazon CodeGuru
• AWS CodeStar
• Amazon Corretto
• AWS Fault Injection Simulator (AWS FIS)
• AWS Tools and SDKs

Front-End Web and Mobile:
• Amazon Location Service

Game Tech:
• Amazon GameLift
• Amazon Lumberyard
Internet of Things:
• All services

Which new AWS services will be covered in the SAA-C03?
• AWS Data Exchange
• AWS Data Pipeline
• AWS Lake Formation
• Amazon Managed Streaming for Apache Kafka (Amazon MSK)
• Amazon AppFlow
• AWS Outposts
• VMware Cloud on AWS
• AWS Wavelength
• Amazon Neptune
• Amazon Quantum Ledger Database (Amazon QLDB)
• Amazon Timestream
• AWS Amplify
• Amazon Comprehend
• Amazon Forecast
• Amazon Fraud Detector
• Amazon Kendra
• AWS License Manager
• Amazon Managed Grafana
• Amazon Managed Service for Prometheus
• AWS Proton
• Amazon Elastic Transcoder
• Amazon Kinesis Video Streams
• AWS Application Discovery Service
• AWS WAF
• AWS AppSync

Solution Architecture Definition 1:

Solution architecture is the practice of defining and describing the architecture of a system delivered in the context of a specific solution; as such, it may encompass a description of an entire system or only specific parts of it. The definition of a solution architecture is typically led by a solution architect.

Solution Architecture  Definition 2:

The AWS Certified Solutions Architect – Associate examination is intended for individuals who perform a solutions architect role and have one or more years of hands-on experience designing available, cost-efficient, fault-tolerant, and scalable distributed systems on AWS.

AWS Solution Architect Associate Exam Facts and Summaries (SAA-C02 & SAA-C03)

 

  1. Take an AWS Training Class
  2. Study AWS Whitepapers and FAQs: AWS Well-Architected webpage (various whitepapers linked)
  3. If you are running an application in a production environment and must add a new EBS volume with data from a snapshot, what could you do to avoid degraded performance during the volume’s first use?
    Initialize the data by reading each storage block on the volume.
    Volumes created from an EBS snapshot must be initialized. Initialization occurs the first time a storage block on the volume is read, and performance can be degraded by up to 50% during that first pass. In production environments, you can avoid this impact by pre-warming the volume: reading all of its blocks before putting it into service.
  4. If you are running a legacy application that has hard-coded static IP addresses and it is running on an EC2 instance; what is the best failover solution that allows you to keep the same IP address on a new instance?
    Elastic IP addresses (EIPs) are designed to be attached, detached, and moved from one EC2 instance to another. They are a great solution for keeping a static IP address and moving it to a new instance if the current instance fails. This will reduce or eliminate any downtime users may experience.
  5. Which feature of Intel processors help to encrypt data without significant impact on performance?
    AES-NI
  6. You can mount to EFS from which two of the following?
    • On-prem servers running Linux
    • EC2 instances running Linux

    EFS is not compatible with Windows operating systems.

  7. When a file is encrypted and the stored data is not in transit, it’s known as encryption at rest. What is an example of encryption at rest? Server-side encryption of data stored in Amazon S3 or on an EBS volume (for example, with AWS KMS keys).

  8. When would vertical scaling be necessary? When an application is built entirely into one source code, otherwise known as a monolithic application.

  9. Fault-Tolerance allows for continuous operation throughout a failure, which can lead to a low Recovery Time Objective.  RPO vs RTO

  10. High-Availability means automating tasks so that an instance will quickly recover, which can lead to a low Recovery Time Objective.  RPO vs. RTO
  11. Frequent backups reduce the time between the last backup and recovery point, otherwise known as the Recovery Point Objective.  RPO vs. RTO
  12. Which represents the difference between Fault-Tolerance and High-Availability? High-Availability means the system will quickly recover from a failure event, and Fault-Tolerance means the system will maintain operations during a failure.
  13. From a security perspective, what is a principal? A principal is an entity that acts on a system; both anonymous users and authenticated users fall under the definition of a principal.
  14. What are two types of session data saving for an Application Session State? Stateless and Stateful

23. It is the customer’s responsibility to patch the operating system on an EC2 instance.

24. In designing an environment, what four main points should a Solutions Architect keep in mind? Cost-efficiency, security, application session state, and undifferentiated heavy lifting: these four points should frame the design of any environment.

25. In the context of disaster recovery, what does RPO stand for? RPO is the abbreviation for Recovery Point Objective.

26. What are the benefits of horizontal scaling?

• Vertical scaling can be costly, while horizontal scaling is cheaper.
• Horizontal scaling suffers from none of the size limitations of vertical scaling.
• With horizontal scaling, you can easily route traffic to another instance of a server.
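The "easily route traffic to another instance" benefit is what a load balancer's round-robin algorithm provides: each request simply goes to the next instance in rotation. A toy sketch with hypothetical instance addresses:

```python
from itertools import cycle

# Three hypothetical instance addresses behind a load balancer.
instances = ["10.0.1.10", "10.0.2.10", "10.0.3.10"]
rotation = cycle(instances)

def route_request() -> str:
    """Round-robin routing: each request goes to the next instance,
    which is what makes horizontally scaled fleets easy to exploit."""
    return next(rotation)

print([route_request() for _ in range(4)])
```

Adding capacity is then just appending another address to the rotation; no single instance has to grow.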

Reference: AWS Solution Architect Associate Exam Prep

Top 100 AWS Solution Architect Associate Exam Prep Questions and Answers Dump – SAA-C02 and SAA-C03


A company is developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? (Select TWO.)

  • A. CloudWatch
  • B. DynamoDB
  • C. Elastic Load Balancing
  • D. ElastiCache
  • E. Storage Gateway

Answer: B and D. Both DynamoDB and ElastiCache provide low-latency key-value storage that is commonly used to hold session state for stateless web servers.

Reference: AWS Session management



Q1: A Solutions Architect is designing a critical business application with a relational database that runs on an EC2 instance. It requires a single EBS volume that can support up to 16,000 IOPS.
Which Amazon EBS volume type can meet the performance requirements of this application?

  • A. EBS Provisioned IOPS SSD
  • B. EBS Throughput Optimized HDD
  • C. EBS General Purpose SSD
  • D. EBS Cold HDD
Answer: A
EBS Provisioned IOPS SSD provides sustained performance for mission-critical, low-latency workloads. EBS General Purpose SSD can provide bursts of performance up to 3,000 IOPS and has a maximum baseline performance of 10,000 IOPS for volume sizes greater than 3.3 TB. The two HDD options are lower-cost, high-throughput volumes.

Reference: Amazon EBS Performance Tips
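The baseline rule described above can be sketched numerically. The sketch below is illustrative only: it uses the published 3 IOPS-per-GiB gp2 rule with the 10,000 IOPS baseline cap quoted in this explanation (the cap has changed across gp2 generations), which shows why a sustained 16,000 IOPS requirement needs Provisioned IOPS SSD:

```python
def gp2_baseline_iops(size_gib: int, cap: int = 10_000) -> int:
    """Approximate gp2 baseline IOPS: 3 IOPS per GiB, with a
    floor of 100 IOPS and the generation's baseline cap."""
    return min(max(100, 3 * size_gib), cap)

# A small volume gets the 3 IOPS/GiB baseline; a large one hits the cap,
# so a 16,000 IOPS requirement exceeds what gp2 can sustain here.
print(gp2_baseline_iops(100))
print(gp2_baseline_iops(5000))
```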


Q2: An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk.
Which solution will resolve the security concern?

  • A. Access the data through an Internet Gateway.
  • B. Access the data through a VPN connection.
  • C. Access the data through a NAT Gateway.
  • D. Access the data through a VPC endpoint for Amazon S3.

Answer: D
VPC endpoints for Amazon S3 provide secure connections to S3 buckets that do not require a gateway or NAT instances. NAT Gateways and Internet Gateways still route traffic over the Internet to the public endpoint for Amazon S3. There is no way to connect to Amazon S3 via VPN.

Reference: S3 VPC Endpoints
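To make the endpoint option concrete, here is a hypothetical gateway endpoint policy restricting the endpoint to a single bucket. The bucket name is invented for illustration; in practice you would attach a policy like this when creating the VPC endpoint:

```python
import json

# Hypothetical bucket ARN, for illustration only.
BUCKET_ARN = "arn:aws:s3:::example-orders-bucket"

# A restrictive gateway endpoint policy: only object reads/writes
# against the one bucket are allowed through the endpoint.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": [BUCKET_ARN + "/*"],
    }],
}

print(json.dumps(endpoint_policy, indent=2))
```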


Q3: An organization is building an Amazon Redshift cluster in their shared services VPC. The cluster will host sensitive data.
How can the organization control which networks can access the cluster?

  • A. Run the cluster in a different VPC and connect through VPC peering.
  • B. Create a database user inside the Amazon Redshift cluster only for users on the network.
  • C. Define a cluster security group for the cluster that allows access from the allowed networks.
  • D. Only allow access to networks that connect with the shared services network via VPN.

Answer: C
A security group can grant access to traffic from the allowed networks via the CIDR range for each network. VPC peering and VPN are connectivity services and cannot control traffic for security. Amazon Redshift user accounts address authentication and authorization at the user level and have no control over network traffic.

Reference: AWS Security best practice


AWS SAA-C02 SAA-C03 Exam Prep

 

Q4: A web application allows customers to upload orders to an S3 bucket. The resulting Amazon S3 events trigger a Lambda function that inserts a message into an SQS queue. A single EC2 instance reads messages from the queue, processes them, and stores them in a DynamoDB table partitioned by unique order ID. Next month, traffic is expected to increase by a factor of 10, and a Solutions Architect is reviewing the architecture for possible scaling problems.
Which component is MOST likely to need re-architecting to be able to scale to accommodate the new traffic?

  • A. Lambda function
  • B. SQS queue
  • C. EC2 instance
  • D. DynamoDB table

Answer: C
A single EC2 instance will not scale and is a single point of failure in the architecture. A much better solution would be to have EC2 instances in an Auto Scaling group across two Availability Zones read messages from the queue. The other responses are all managed services that can be configured to scale or will scale automatically.

Reference: Eliminating Single Points of Failures on AWS Cloud

  • A single NAT instance in the network
  • Running all workloads in a single Availability Zone (compute/storage)
  • A single DNS server and other DNS issues in the network
  • Core services not set up to auto scale
  • An AWS load balancer that is not cross-zone
  • An AWS RDS instance within a single Availability Zone (database)
  • Manual scaling
  • How to Remove Single Points of Failure by Using a High-Availability Partition Group in Your AWS CloudHSM Environment
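The re-architected consumer side can be pictured with a toy simulation: several workers draining one shared queue, the way an Auto Scaling group of instances would drain an SQS queue. This is plain Python, not AWS APIs; the names are illustrative:

```python
import queue
import threading

# A shared work queue standing in for SQS.
orders = queue.Queue()
for order_id in range(100):
    orders.put(order_id)

processed = []
lock = threading.Lock()

def worker():
    """Drain messages until the queue is empty, like one EC2
    consumer in an Auto Scaling group."""
    while True:
        try:
            order = orders.get_nowait()
        except queue.Empty:
            return
        with lock:
            processed.append(order)

# Four workers instead of one single point of failure.
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(processed))
```

Adding workers (scaling out) raises throughput without any single consumer becoming a bottleneck, which is exactly the fix for option C.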


Q5: An application requires a highly available relational database with an initial storage capacity of 8 TB. The database will grow by 8 GB every day. To support expected traffic, at least eight read replicas will be required to handle database reads.
Which option will meet these requirements?

  • A. DynamoDB
  • B. Amazon S3
  • C. Amazon Aurora
  • D. Amazon Redshift

Answer: C
Amazon Aurora is a relational database that will automatically scale to accommodate data growth. Amazon Redshift does not support read replicas and will not automatically scale. DynamoDB is a NoSQL service, not a relational database. Amazon S3 is object storage, not a relational database.

Reference: Replication with Amazon Aurora


Q6: How can you improve the performance of EFS?

  • A. Use an instance-store backed EC2 instance.
  • B. Provision more throughput than is required.
  • C. Divide your files system into multiple smaller file systems.
  • D. Provision higher IOPs for your EFS.

Answer: B
Amazon EFS now allows you to instantly provision the throughput required for your applications independent of the amount of data stored in your file system. This allows you to optimize throughput for your application’s performance needs.

Reference: Amazon EFS Performance


Q7: If you are designing an application that requires fast (10–25 Gbps), low-latency connections between EC2 instances, what EC2 feature should you use?

  • A. Snapshots
  • B. Instance store volumes
  • C. Placement groups
  • D. IOPS provisioned instances.

Answer: C
Placement groups are a clustering of EC2 instances in one Availability Zone with fast (up to 25 Gbps) connections between them. This feature is used for applications that need extremely low-latency connections between instances.

Reference: Placement Groups



 

Q8: A Solution Architect is designing an online shopping application running in a VPC on EC2 instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application tier must read and write data to a customer managed database cluster. There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet.

Which VPC design meets these requirements?

  • A. Public subnets for both the application tier and the database cluster
  • B. Public subnets for the application tier, and private subnets for the database cluster
  • C. Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster
  • D. Public subnets for the application tier, and private subnets for the database cluster and NAT Gateway

Answer: C.
The online application must be in public subnets to allow access from clients’ browsers. The database cluster must be in private subnets to meet the requirement that there be no access from the Internet.
A NAT Gateway is required to give the database cluster the ability to download patches from the Internet. NAT Gateways must be deployed in public subnets.

Reference: Public and Private Subnets


Q9: What command should you run on a running instance if you want to view its user data (that is used at launch)?

  • A. curl http://254.169.254.169/latest/user-data
  • B. curl http://localhost/latest/meta-data/bootstrap
  • C. curl http://localhost/latest/user-data
  • D. curl http://169.254.169.254/latest/user-data

Answer: D
Retrieve Instance User Data
To retrieve user data from within a running instance, use the following URI:
http://169.254.169.254/latest/user-data

Reference: Instance Metadata and User Data
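A small helper makes the fixed link-local metadata address explicit. The URL construction below is illustrative; the actual fetch (for example with urllib) only works from inside a running EC2 instance, so it is shown only as a comment:

```python
# The instance metadata service lives at a fixed link-local address.
IMDS_BASE = "http://169.254.169.254/latest/"

def imds_url(path: str) -> str:
    """Build the instance metadata / user data URL for a given path."""
    return IMDS_BASE + path.lstrip("/")

# On an EC2 instance you could then fetch it, e.g.:
#   urllib.request.urlopen(imds_url("user-data")).read()
# (not executed here, since it requires running on EC2).
print(imds_url("user-data"))
```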




Q10: A company is developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? (Select TWO.)

  • A. CloudWatch
  • B. DynamoDB
  • C. Elastic Load Balancing
  • D. ElastiCache
  • E. Storage Gateway

Answer: B and D
Both DynamoDB and ElastiCache provide high performance storage of key-value pairs.
CloudWatch and ELB are not storage services. Storage Gateway is a storage service, but it is a hybrid Storage service that enables on-premises applications to use cloud storage.

A stateful web service keeps track of the “state” of a client’s connection and data over several requests. So, for example, the client might log in, select a user’s account data, update their address, attach a photo, and change a status flag, then disconnect.

In a stateless web service, the server doesn’t keep any information from one request to the next. The client needs to do its work in a series of simple transactions, and the client has to keep track of what happens between requests. So in the above example, the client performs each operation separately: connect and update the address, disconnect; connect and attach the photo, disconnect; connect and change the status flag, disconnect.

A stateless web service is much simpler to implement and can handle a greater volume of clients.

Reference: Stateful & Stateless web service
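The role DynamoDB or ElastiCache plays here can be sketched as an external key-value session store with a TTL. The class below is an in-memory stand-in for the managed service, purely for illustration; any web server in the fleet could read and write it, keeping the servers themselves stateless:

```python
import time

class SessionStore:
    """Minimal external session store: key-value entries with a TTL,
    the role DynamoDB or ElastiCache plays for stateless web servers."""

    def __init__(self, ttl_seconds: float = 3600):
        self._items = {}
        self._ttl = ttl_seconds

    def put(self, session_id: str, data: dict) -> None:
        self._items[session_id] = (time.time() + self._ttl, data)

    def get(self, session_id: str):
        entry = self._items.get(session_id)
        if entry is None:
            return None
        expires_at, data = entry
        if time.time() > expires_at:   # expired, like a TTL attribute
            del self._items[session_id]
            return None
        return data

store = SessionStore()
store.put("sess-123", {"user": "alice", "cart": [42]})
print(store.get("sess-123"))
```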



Q11: From a security perspective, what is a principal?

  • A. An identity
  • B. An anonymous user 
  • C. An authenticated user
  • D. A resource

Answer: B and C

Both an anonymous user and an authenticated user fall under the definition of a principal: a principal can be an anonymous or an authenticated user acting on a system.

Reference: Access management


Q12: What are the characteristics of a tiered application?

  • A. All three application layers are on the same instance
  • B. The presentation tier is on a separate instance from the logic layer
  • C. None of the tiers can be cloned
  • D. The logic layer is on a separate instance from the data layer
  • E. Additional machines can be added to help the application by implementing horizontal scaling
  • F.  Incapable of horizontal scaling

Answer: B, D, and E

In a tiered application, the presentation layer is separate from the logic layer; the logic layer is separate from the data layer. Since parts of the application are isolated, they can scale horizontally.

Reference: Tiered Application

Q13: When using horizontal scaling, how can a server’s capacity closely match its rising demand?

A. By frequently purchasing additional instances and smaller resources

B. By purchasing more resources very far in advance

C. By purchasing more resources after demand has risen

D. It is not possible to predict demand

Answer: A

Reference: AWS Horizontal Scaling

Q14: What is the concept behind AWS’ Well-Architected Framework?

A. It’s a set of best practice areas, principles, and concepts that can help you implement effective AWS solutions.

B. It’s a set of best practice areas, principles, and concepts that can help you implement effective solutions tailored to your specific business.

C. It’s a set of best practice areas, principles, and concepts that can help you implement effective solutions from another web host.

D. It’s a set of best practice areas, principles, and concepts that can help you implement effective E-Commerce solutions.


Answer: A.

Reference: AWS Well architected Framework
 

Q15: Select the true statements regarding AWS Regions.

 

A. Availability Zones are isolated locations within regions

B. Region codes identify specific regions (example: US-EAST-2)

C. All AWS Regions contain the full set of AWS services.

D. An AWS Region is assigned based on the user’s location when creating an AWS account.

Answer: A, B, and D
Reference: AWS Regions


 

Q16: Which is not one of the five pillars of a well-architected framework?

 

A. Reliability

B. Performance Efficiency

C. Structural Simplicity

D. Security

E. Operational Excellence

Answer: C

Reference: AWS Well Architected Framework


Q17: You lead a team to develop a new online game application in AWS EC2. The application will have a large number of users globally. For a great user experience, this application requires very low network latency and jitter. If the network speed is not fast enough, you will lose customers. Which tool would you choose to improve the application performance? (Select TWO.)

A. AWS VPN

B. AWS Global Accelerator

C. Direct Connect

D. API Gateway

E. CloudFront

Answer: B and E

Notes: This online game application has global users and needs low latency. Both CloudFront and Global Accelerator can speed up the distribution of content over the AWS global network. AWS Global Accelerator works at the network layer and is able to direct traffic to optimal endpoints; CloudFront delivers content through edge locations, and users are routed to the edge location that has the lowest time delay.

Q18: A company has a media processing application deployed in a local data center.  Its file storage is built on a Microsoft Windows file server. The application and file server need to be migrated to AWS. You want to quickly set up the file server in AWS and the application code should continue working to access the file systems. Which method should you choose to create the file server?

A. Create a Windows File Server from Amazon WorkSpaces.

B. Configure a high performance Windows File System in Amazon EFS.

C. Create a Windows File Server in Amazon FSx.

D. Configure a secure enterprise storage through Amazon WorkDocs.

Answer: C

Notes: In this question, a Windows file server is required in AWS and the application should continue to work unchanged. Amazon FSx for Windows File Server is the correct answer as it is backed by a fully native Windows file system.


Q19: You are developing an application using the AWS SDK to get objects from AWS S3. The objects are large, and fetching them sometimes fails, especially when network connectivity is poor. You want to get a specific range of bytes in a single GET request and retrieve the whole object in parts. Which method can achieve this?

A. Enable multipart upload in the AWS SDK.

B. Use the “Range” HTTP header in a GET request to download the specified range bytes of an object.

C. Reduce the retry requests and enlarge the retry timeouts through AWS SDK when fetching S3 objects.

D. Retrieve the whole S3 object through a single GET operation.

Answer: B

Notes: With byte-range fetches, users can establish concurrent connections to Amazon S3 to fetch different parts from within the same object.

Through the “Range” header in the HTTP GET request, a specified portion of the object can be downloaded instead of the whole object.
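The Range headers for such a ranged download can be generated mechanically. A minimal sketch, assuming a fixed part size; each yielded value would be sent as the Range header of a separate GET, so the parts can be fetched concurrently and retried independently:

```python
def byte_ranges(object_size: int, part_size: int):
    """Yield 'bytes=start-end' Range header values that cover an
    object of object_size bytes in chunks of part_size bytes."""
    start = 0
    while start < object_size:
        end = min(start + part_size, object_size) - 1
        yield f"bytes={start}-{end}"
        start = end + 1

# A 10 MB object fetched in 4 MB parts -> three ranged GETs.
print(list(byte_ranges(10_000_000, 4_000_000)))
```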

Q20: You have an application hosted in an Auto Scaling group and an application load balancer distributes traffic to the ASG. You want to add a scaling policy that keeps the average aggregate CPU utilization of the Auto Scaling group to be 60 percent. The capacity of the Auto Scaling group should increase or decrease based on this target value. Which scaling policy does it belong to?

A. Target tracking scaling policy.

B. Step scaling policy.

C. Simple scaling policy.

D. Scheduled scaling policy.

Answer: A

Notes: A target tracking scaling policy can be applied to track the ASGAverageCPUUtilization metric. In an Auto Scaling group, you can add a target tracking scaling policy based on a target value for a metric; see the Auto Scaling documentation for the different scaling policies.
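Target tracking’s effect can be approximated with the proportional rule it is based on: scale the group so the metric moves toward the target. This is a rough sketch only; the real policy also applies cooldowns, warm-up, and scale-in protection:

```python
import math

def desired_capacity(current: int, actual_util: float, target_util: float) -> int:
    """Approximate target tracking: scale capacity proportionally so
    average utilization moves toward the target value."""
    return max(1, math.ceil(current * actual_util / target_util))

# 4 instances at 90% average CPU with a 60% target -> scale out to 6.
print(desired_capacity(4, 90, 60))
```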


 

Q21: You need to launch a number of EC2 instances to run Cassandra. There are large distributed and replicated workloads in Cassandra and you plan to launch instances using EC2 placement groups. The traffic should be distributed evenly across several partitions and each partition should contain multiple instances. Which strategy would you use when launching the placement groups?

A. Cluster placement strategy

B. Spread placement strategy.

C. Partition placement strategy.

D. Network placement strategy.

Answer: C

Notes: Placement groups support the Cluster, Partition, and Spread strategies. With the Partition placement strategy, instances in one partition do not share the underlying hardware with other partitions. This strategy is suitable for distributed and replicated workloads such as Cassandra. For details, refer to the placement groups documentation.

Q22: To improve the network performance, you launch a C5 EC2 Amazon Linux instance and enable enhanced networking by modifying the instance attribute with "aws ec2 modify-instance-attribute --instance-id instance_id --ena-support". Which mechanism does the EC2 instance use to enhance the networking capabilities?

A. Intel 82599 Virtual Function (VF) interface.

B. Elastic Fabric Adapter (EFA).

C. Elastic Network Adapter (ENA).

D. Elastic Network Interface (ENI).

Answer: C

Notes: Enhanced networking has two mechanisms: the Elastic Network Adapter (ENA) and the Intel 82599 Virtual Function (VF) interface. For ENA, users can enable it with the --ena-support attribute; see the enhanced networking documentation for details.

Q23: You work for an online retailer where any downtime at all can cause a significant loss of revenue. You have architected your application to be deployed on an Auto Scaling group of EC2 instances behind a load balancer. You have configured and deployed these resources using a CloudFormation template. The Auto Scaling group is configured with default settings and a simple CPU utilization scaling policy. You have also set up multiple Availability Zones for high availability. The load balancer does health checks against an HTML file generated by a script. When you begin performing load testing on your application, you notice in CloudWatch that the load balancer is not sending traffic to one of your EC2 instances. What could be the problem?

A. The EC2 instance has failed the load balancer health check.

B. The instance has not been registered with CloudWatch.

C. The EC2 instance has failed EC2 status checks.

D. You are load testing at a moderate traffic level and not all instances are needed.

Answer: A

Notes: The load balancer will route incoming requests only to the healthy instances. The EC2 instance may have passed its status checks and be considered healthy by the Auto Scaling group, but the ELB will not use it if the ELB health check has not been met. The ELB health check has a default of 30 seconds between checks, and a default of 3 checks before making a decision. Therefore, the instance could be available but unused for at least 90 seconds before the console would show it as failed. In CloudWatch, where the issue was noticed, it would appear to be a healthy EC2 instance but with no traffic, which is what was observed.

References: ELB HealthCheck
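The 90-second figure above follows directly from the defaults: the number of consecutive failed checks times the check interval. A one-line sketch using the values cited in the notes:

```python
def detection_seconds(interval_seconds: int = 30, unhealthy_threshold: int = 3) -> int:
    """Worst-case time before the ELB marks an instance unhealthy:
    consecutive failed checks times the check interval (defaults
    match the values cited in the notes above)."""
    return interval_seconds * unhealthy_threshold

print(detection_seconds())  # 90 seconds with the defaults
```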

Q24: Your company is using a hybrid configuration because there are some legacy applications which are not easily converted and migrated to AWS. And with this configuration comes a typical scenario where the legacy apps must maintain the same private IP address and MAC address. You are attempting to convert the application to the cloud and have configured an EC2 instance to house the application. What you are currently testing is removing the ENI from the legacy instance and attaching it to the EC2 instance. You want to attempt a cold attach. What does this mean?

A. Attach ENI when it’s stopped.

B. Attach ENI before the public IP address is assigned.

C. Attach ENI to an instance when it’s running.

D. Attach ENI when the instance is being launched.

Answer: D

Notes: Best practices for configuring network interfaces:

You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach). You can detach secondary network interfaces when the instance is running or stopped; however, you can’t detach the primary network interface.

You can move a network interface from one instance to another if the instances are in the same Availability Zone and VPC but in different subnets. When launching an instance using the CLI, API, or an SDK, you can specify the primary network interface and additional network interfaces.

Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance. A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly. Instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves.

Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance. If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing. If possible, use a secondary private IPv4 address on the primary network interface instead. For more information, see Assigning a secondary private IPv4 address.

Reference: EC2 ENI User Guide

Q25: Your company has recently converted to a hybrid cloud environment and will slowly be migrating to a fully AWS cloud environment. The AWS side is in need of some steps to prepare for disaster recovery. A disaster recovery plan needs to be drawn up, and disaster recovery drills need to be performed for compliance reasons. The company wants to establish Recovery Time and Recovery Point Objectives. The RTO and RPO can be fairly relaxed; the main point is to have a plan in place, with as much cost savings as possible. Which AWS disaster recovery pattern will best meet these requirements?

A. Warm Standby

B. Backup and restore

C. Multi Site

D. Pilot Light

Answer: B

Notes: Backup and Restore: This is the least expensive option and cost is the overriding factor.


 

Q26: An international travel company has an application which provides travel information and alerts to users all over the world. The application is hosted on groups of EC2 instances in Auto Scaling Groups in multiple AWS Regions. There are also load balancers routing traffic to these instances. In two countries, Ireland and Australia, there are compliance rules in place that dictate users connect to the application in eu-west-1 and ap-southeast-1. Which service can you use to meet this requirement?

A. Use Route 53 weighted routing.

B. Use Route 53 geolocation routing.

C. Configure CloudFront and the users will be routed to the nearest edge location.

D. Configure the load balancers to route users to the proper region.

Answer: B

Notes: Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from. For example, you might want all queries from Europe to be routed to an ELB in the Frankfurt region. When you use geolocation routing, you can localize your content and present some or all of your website in the language of your users. You can also use geolocation routing to restrict distribution of content to only the locations in which you have distribution rights. Another possible use is for balancing load across endpoints in a predictable, easy-to-manage way, so that each user location is consistently routed to the same endpoint.

Reference: Geolocation Routing Policy


 

Q26: You have taken over management of several instances in the company AWS environment. You want to quickly review scripts used to bootstrap the instances at runtime. A URL command can be used to do this. What can you append to the URL http://169.254.169.254/latest/ to retrieve this data?

A. user-data/

B. instance-demographic-data/

C. meta-data/

D. instance-data/

Answer: A

Notes: When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives.

Reference: EC2 instance user data


Q27: A software company has created an application to capture service requests from users and also enhancement requests. The application is deployed on an Auto Scaling group of EC2 instances fronted by an Application Load Balancer. The Auto Scaling group has scaled to maximum capacity, but there are still requests being lost. The cost of these instances is becoming an issue. What step can the company take to ensure requests aren’t lost?

A. Use larger instances in the Auto Scaling group.

B. Use spot instances to save money.

C. Use an SQS queue with the Auto Scaling group to capture all requests.

D. Use a Network Load Balancer instead for faster throughput.

Answer: C

Notes: There are some scenarios where you might think about scaling in response to activity in an Amazon SQS queue. For example, suppose that you have a web app that lets users upload images and use them online. In this scenario, each image requires resizing and encoding before it can be published. The app runs on EC2 instances in an Auto Scaling group, and it’s configured to handle your typical upload rates. Unhealthy instances are terminated and replaced to maintain current instance levels at all times. The app places the raw bitmap data of the images in an SQS queue for processing. It processes the images and then publishes the processed images where they can be viewed by users. The architecture for this scenario works well if the number of image uploads doesn’t vary over time. But if the number of uploads changes over time, you might consider using dynamic scaling to scale the capacity of your Auto Scaling group.

Reference: Using SQS Queue
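A common way to size the group from the queue is the backlog-per-instance pattern: divide the visible queue depth by the backlog each instance can comfortably absorb. A minimal sketch with invented numbers (the real setup would feed a custom CloudWatch metric into a target tracking policy):

```python
import math

def desired_workers(queue_depth: int, backlog_per_instance: int, minimum: int = 1) -> int:
    """Size an Auto Scaling group from SQS backlog: queued messages
    divided by the acceptable backlog per instance, never below the
    group minimum."""
    return max(minimum, math.ceil(queue_depth / backlog_per_instance))

# 1,200 queued requests, each instance can work through a backlog of 100.
print(desired_workers(1200, 100))
```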


Q28: A company has an auto scaling group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. The company has a very aggressive Recovery Time Objective (RTO) in case of disaster. How long will a failover typically complete?

 
 

A. Under 10 minutes

B. Within an hour

C. Almost instantly

D. one to two minutes

Answer:  D

Notes: What happens during Multi-AZ failover, and how long does it take?

Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention. When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary. We encourage you to follow best practices and implement database connection retry at the application layer.

Failovers, as defined by the interval between the detection of the failure on the primary and the resumption of transactions on the standby, typically complete within one to two minutes. Failover time can also be affected by whether large uncommitted transactions must be recovered; the use of adequately large instance types is recommended with Multi-AZ for best results. AWS also recommends the use of Provisioned IOPS with Multi-AZ instances for fast, predictable, and consistent throughput performance.

Reference: RDS FAQ
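The application-layer connection retry recommended above can be sketched as a generic retry wrapper. Here `connect` is any zero-argument callable; the fake connection below simulates the failover window at a tiny scale and is purely illustrative:

```python
import time

def connect_with_retry(connect, attempts: int = 5, base_delay: float = 0.0):
    """Retry a database connection with exponential backoff, the
    application-layer retry recommended for Multi-AZ failover.
    `connect` raises ConnectionError until the standby is promoted."""
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Fake connection that fails twice, then succeeds, simulating the
# one-to-two-minute failover at a tiny scale.
state = {"calls": 0}

def fake_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("primary unavailable")
    return "connected"

print(connect_with_retry(fake_connect))  # "connected" on the third try
```

In a real deployment the CNAME flip means the same hostname resolves to the new primary, so a plain reconnect is usually all the application needs.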


Q29: You have two EC2 instances running in the same VPC, but in different subnets. You are removing the secondary ENI from an EC2 instance and attaching it to another EC2 instance. You want this to be fast and with limited disruption. So you want to attach the ENI to the EC2 instance when it’s running. What is this called?

A. hot attach

B. warm attach

C. cold attach

D. synchronous attach

Answer: A

Notes: You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach). A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly; instances running Amazon Linux or Windows Server automatically recognize the attach and configure themselves. The full list of best practices for configuring network interfaces appears in the notes for Q24 above.

Reference: EC2 ENI

 

Q30: You suspect that one of the AWS services your company is using has gone down. How can you check on the status of this service?

 

A. AWS Trusted Advisor

B. Amazon Inspector

C. AWS Personal Health Dashboard

D. AWS Organizations

Answer: C

Notes: AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view of the performance and availability of the AWS services underlying your AWS resources. The dashboard displays relevant and timely information to help you manage events in progress, and provides proactive notification to help you plan for scheduled activities. With Personal Health Dashboard, alerts are triggered by changes in the health of AWS resources, giving you event visibility and guidance to help quickly diagnose and resolve issues.

Reference: AWS Personal Health Dashboard

 

Q31: You have configured an Auto Scaling Group of EC2 instances fronted by an Application Load Balancer and backed by an RDS database. You want to begin monitoring the EC2 instances using CloudWatch metrics. Which metric is not readily available out of the box?

A. CPU utilization

B. DiskReadOps

C. NetworkIn

D. Memory utilization

Answer: D

Notes: Memory utilization is not available as an out of the box metric in CloudWatch. You can, however, collect memory metrics when you configure a custom metric for CloudWatch.

Types of custom metrics that you can set up include:

  • Memory utilization
  • Disk swap utilization
  • Disk space utilization
  • Page file utilization
  • Log collection

Reference: EC2 custom metrics
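Out of the box, memory is invisible to CloudWatch because the hypervisor cannot see inside the guest OS; the usual approach is to run the CloudWatch agent on the instance and have it publish these values as custom metrics. A minimal agent configuration along these lines might look as follows (the namespace is made up, and the measurement names assume the Linux agent’s `mem`, `swap`, and `disk` plugins):

```json
{
  "metrics": {
    "namespace": "CustomEC2",
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent"],
        "metrics_collection_interval": 60
      },
      "swap": {
        "measurement": ["swap_used_percent"]
      },
      "disk": {
        "measurement": ["used_percent"],
        "resources": ["/"]
      }
    }
  }
}
```

Once the agent is running with a configuration like this, `mem_used_percent` and the other measurements appear in CloudWatch and can be graphed and alarmed on like any other metric.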

 

Q32: Several instances you are creating have a specific data requirement. The requirement states that the data on the root device needs to persist independently from the lifetime of the instance. After considering AWS storage options, which is the simplest way to meet these requirements?

A. Store your root device data on Amazon EBS.

B. Store the data on the local instance store.

C. Create a cron job to migrate the data to S3.

D. Send the data to S3 using S3 lifecycle rules.

Answer: A

Notes: By using Amazon EBS, data on the root device will persist independently from the lifetime of the instance. This enables you to stop and restart the instance at a subsequent time, which is similar to shutting down your laptop and restarting it when you need it again.

Reference: Amazon EBS

 

Q33: A company has an Auto Scaling Group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. What will happen to preserve high availability if the primary database fails?

A. A Lambda function kicks off a CloudFormation template to deploy a backup database.

B. The CNAME is switched from the primary db instance to the secondary.

C. Route 53 points the CNAME to the secondary database instance.

D. The Elastic IP address for the primary database is moved to the secondary database.

Answer: B

Notes: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.

Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention. When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary.

References: RDS Multi-AZ
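The failover mechanics can be sketched in a few lines: the application keeps resolving the same endpoint, and only the CNAME’s target changes. The endpoint and instance names below are made up for illustration:

```python
# Toy model of an RDS Multi-AZ failover: the DNS name the application
# uses never changes; RDS re-points the CNAME at the promoted standby.
endpoint = "mydb.abc123.us-east-1.rds.amazonaws.com"
dns = {endpoint: "primary-instance"}

def failover(dns_table, name, standby):
    """Flip the CNAME to the standby, which becomes the new primary."""
    dns_table[name] = standby

failover(dns, endpoint, "standby-instance")

# The application still connects to the exact same endpoint.
print(dns[endpoint])  # standby-instance
```

This is why no application-side change is needed: the connection string stays constant across the failover.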

Q34: After several issues with your application and unplanned downtime, your recommendation to migrate your application to AWS is approved. You have set up high availability on the front end with a load balancer and an Auto Scaling Group. What step can you take with your database to configure high-availability and ensure minimal downtime (under five minutes)?

A. Create a read replica.

B. Enable Multi-AZ failover on the database.

C. Take frequent snapshots of your database.

D. Create your database using CloudFormation and save the template for reuse.

Answer: B

Notes: In the event of a planned or unplanned outage of your DB instance, Amazon RDS automatically switches to a standby replica in another Availability Zone if you have enabled Multi-AZ. The time it takes for the failover to complete depends on the database activity and other conditions at the time the primary DB instance became unavailable. Failover times are typically 60–120 seconds; however, large transactions or a lengthy recovery process can increase failover time. When the failover is complete, it can take additional time for the RDS console to reflect the new Availability Zone. Note the caveat: large transactions could make it hard to get back up within five minutes, but this is clearly the best of the available choices for meeting the requirement. Move through exam questions quickly, but always evaluate all the answers for the best possible solution.

References: Enable Multi-AZ


 

Q35: A new startup is considering the advantages of using DynamoDB versus a traditional relational database in AWS RDS. The NoSQL nature of DynamoDB presents a small learning curve to the team members who all have experience with traditional databases. The company will have multiple databases, and the decision will be made on a case-by-case basis. Which of the following use cases would favour DynamoDB? Select two.

A. Strong referential integrity between tables

B. Storing BLOB data

C. Storing infrequently accessed data

D. Managing web session data

E. Storing metadata for S3 objects

Answer: D and E

Notes: DynamoDB is a NoSQL database that supports key-value and document data structures. A key-value store is a database service that provides support for storing, querying, and updating collections of objects that are identified using a key and values that contain the actual content being stored. Meanwhile, a document data store provides support for storing, querying, and updating items in a document format such as JSON, XML, and HTML. DynamoDB’s fast and predictable performance characteristics make it a great match for handling session data. Plus, since it’s a fully-managed NoSQL database service, you avoid all the work of maintaining and operating a separate session store.

Storing metadata for Amazon S3 objects is correct because the Amazon DynamoDB stores structured data indexed by primary key and allows low-latency read and write access to items ranging from 1 byte up to 400KB. Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB. In order to optimize your costs across AWS services, large objects or infrequently accessed data sets should be stored in Amazon S3, while smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in Amazon DynamoDB.

References: DynamoDB Session Manager
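As an illustration of why DynamoDB suits session data, a session item is just a small key-value record. The attribute names below are invented, and `expires_at` pairs naturally with DynamoDB’s TTL feature, which expects an epoch timestamp:

```python
import time
import uuid

def make_session_item(user_id: str, ttl_seconds: int = 3600) -> dict:
    """Build a DynamoDB-style session item. The schema here is
    illustrative, not a fixed convention: session_id would be the
    partition key, and expires_at the TTL attribute."""
    now = int(time.time())
    return {
        "session_id": str(uuid.uuid4()),   # partition key
        "user_id": user_id,
        "created_at": now,
        "expires_at": now + ttl_seconds,   # epoch time for DynamoDB TTL
    }

item = make_session_item("user-42")
assert item["expires_at"] - item["created_at"] == 3600
```

Because every read and write is keyed by `session_id`, access is single-digit-millisecond regardless of table size, which is exactly the pattern session stores need.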

Q36: You have been tasked with designing a strategy for backing up EBS volumes attached to an instance-store-backed EC2 instance. You have been asked for an executive summary on your design, and the executive summary should include an answer to the question, “What can an EBS volume do when snapshotting the volume is in progress”?

A. The volume can be used normally while the snapshot is in progress.

B. The volume can only accommodate writes while a snapshot is in progress.

C. The volume can not be used while a snapshot is in progress.

D. The volume can only accommodate reads while a snapshot is in progress.

Answer: A

Notes: You can create a point-in-time snapshot of an EBS volume and use it as a baseline for new volumes or for data backup. If you make periodic snapshots of a volume, the snapshots are incremental; the new snapshot saves only the blocks that have changed since your last snapshot. Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume.

References: EBS Creating snapshots
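The incremental behaviour described above is easy to model: each snapshot after the first stores only the blocks that differ from the previous snapshot. A toy sketch (block IDs and data are invented):

```python
def incremental_snapshot(volume_blocks: dict, last_snapshot: dict) -> dict:
    """Return only blocks that changed since the last snapshot,
    mimicking how EBS snapshots are incremental."""
    return {
        block_id: data
        for block_id, data in volume_blocks.items()
        if last_snapshot.get(block_id) != data
    }

snap1 = {0: "aaaa", 1: "bbbb", 2: "cccc"}              # first snapshot: all blocks
volume = {0: "aaaa", 1: "XXXX", 2: "cccc", 3: "dddd"}  # block 1 changed, block 3 added
snap2 = incremental_snapshot(volume, snap1)
print(snap2)  # {1: 'XXXX', 3: 'dddd'}
```

Only the changed and new blocks are transferred to S3, which is why the point-in-time snapshot is captured immediately while the upload completes asynchronously.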

Q37: You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling Groups that you need to create. One requirement is that you need to reuse some software licenses and therefore need to use dedicated hosts on EC2 instances in your Auto Scaling Groups. What step must you take to meet this requirement?

A. Create your launch configuration, but manually change the instances to Dedicated Hosts in the EC2 console.

B. Use a launch template with your Auto Scaling Group.

C. Create the Dedicated Host EC2 instances, then add them to an existing Auto Scaling Group.

D. Make sure your launch configurations are using Dedicated Hosts.

Answer: B

Notes: In addition to the features of Amazon EC2 Auto Scaling that you can configure by using launch templates, launch templates provide more advanced Amazon EC2 configuration options. For example, you must use launch templates to use Amazon EC2 Dedicated Hosts. Dedicated Hosts are physical servers with EC2 instance capacity that are dedicated to your use. While Amazon EC2 Dedicated Instances also run on dedicated hardware, the advantage of using Dedicated Hosts over Dedicated Instances is that you can bring eligible software licenses from external vendors and use them on EC2 instances. If you currently use launch configurations, you can specify a launch template when you update an Auto Scaling group that was created using a launch configuration. To create a launch template to use with an Auto Scaling Group, create the template from scratch, create a new version of an existing template, or copy the parameters from a launch configuration, running instance, or other template.

References: Ec2 Autoscaling Group Launch Templates


Q38: Your organization uses AWS CodeDeploy for deployments. Now you are starting a project on the AWS Lambda platform. For your deployments, you’ve been given a requirement of performing blue-green deployments. When you perform deployments, you want to split traffic, sending a small percentage of the traffic to the new version of your application. Which deployment configuration will allow this splitting of traffic?

A. Canary

B. All at Once

C. Linear

D. Weighted routing

Answer: A

Notes: With canary, traffic is shifted in two increments. You can choose from predefined canary options that specify the percentage of traffic shifted to your updated Lambda function version in the first increment and the interval, in minutes, before the remaining traffic is shifted in the second increment.

References: Canary Deployment


Q39: A financial institution has an application that produces huge amounts of actuary data, which is ultimately expected to be in the terabyte range. There is a need to run complex analytic queries against terabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Which storage service will best meet this requirement?

A. RDS

B. DynamoDB

C. Redshift

D. ElastiCache

Answer: C

Notes: Amazon Redshift is a fast, fully managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It enables you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds. With Redshift, you can start small for just $0.25 per hour with no commitments and scale out to petabytes of data for $1,000 per terabyte per year, less than a tenth of the cost of traditional on-premises solutions.

Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to run SQL queries directly against exabytes of unstructured data in Amazon S3 data lakes. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Amazon Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, Sequence, Text, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data retrieved, so queries against Amazon S3 run fast, regardless of data set size.

References: Amazon Redshift


Q40: A company has an application for sharing static content, such as photos. The popularity of the application has grown, and the company is now sharing content worldwide. This worldwide service has caused some issues with latency. What AWS services can be used to host a static website, serve content to globally dispersed users, and address latency issues, while keeping cost under control? Choose two.

A. EC2 placement group

B. S3

C. Cloudfront

D. AWS Global Accelerator

E. AWS CloudFormation

Answer: B and C

Notes: Amazon S3 is an object storage built to store and retrieve any amount of data from anywhere on the Internet. It’s a simple storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs. AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection.

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2, or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.

References: CloudFront – S3


Q41: You have just been hired by a large organization which uses many different AWS services in their environment. Some of the services which handle data include: RDS, Redshift, ElastiCache, DynamoDB, S3, and Glacier. You have been instructed to configure a web application using stateless web servers. Which services can you use to handle session state data? Choose two.

 

A. RDS

B. Glacier

C. Redshift

D. Elasticache

E. DynamoDB

Answer: D and E

Notes: ElastiCache and DynamoDB can both be used to store session data; both provide the low-latency key-value access that stateless web tiers need for session state.

References:


 

Q42: After an IT Steering Committee meeting you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies based on the requirements you are given. Your primary requirement is the necessity for a private, dedicated connection, which bypasses the Internet and can provide throughput of 10 Gbps. Which option will you select?

A. AWS Direct Connect

B. VPC Peering

C. AWS VPN

D. AWS Direct Gateway

Answer: A

Notes: AWS Direct Connect can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections. It uses industry-standard 802.1Q VLANs to connect to Amazon VPC using private IP addresses. You can choose from an ecosystem of WAN service providers for integrating your AWS Direct Connect endpoint in an AWS Direct Connect location with your remote networks. AWS Direct Connect lets you establish 1 Gbps or 10 Gbps dedicated network connections (or multiple connections) between AWS networks and one of the AWS Direct Connect locations. You can also work with your provider to create a sub-1-Gbps connection, or use a link aggregation group (LAG) to aggregate multiple 1 Gbps or 10 Gbps connections at a single AWS Direct Connect endpoint, allowing you to treat them as a single, managed connection. A Direct Connect gateway is a globally available resource that enables connections to multiple Amazon VPCs across different Regions or AWS accounts.

References: AWS Direct Connect

Q43: An application is hosted on an EC2 instance in a VPC. The instance is in a subnet in the VPC, and the instance has a public IP address. There is also an internet gateway and a security group with the proper ingress configured. But your testers are unable to access the instance from the Internet. What could be the problem?

A. Make sure the instance has a private IP address.

B. Add a route to the route table, from the subnet containing the instance, to the Internet Gateway.

C. A NAT gateway needs to be configured.

D. A Virtual private gateway needs to be configured.

Answer: B

Notes:

The question doesn’t state if the subnet containing the instance is public or private. An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet. To enable access to or from the internet for instances in a subnet in a VPC, you must do the following:

  • Attach an internet gateway to your VPC.
  • Add a route to your subnet’s route table that directs internet-bound traffic to the internet gateway. If a subnet is associated with a route table that has a route to an internet gateway, it’s known as a public subnet. If a subnet is associated with a route table that does not have a route to an internet gateway, it’s known as a private subnet.
  • Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
  • Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance.
  • In your subnet route table, you can specify a route for the internet gateway to all destinations not explicitly known to the route table (0.0.0.0/0 for IPv4 or ::/0 for IPv6). Alternatively, you can scope the route to a narrower range of IP addresses, for example the public IPv4 addresses of your company’s public endpoints outside of AWS, or the Elastic IP addresses of other Amazon EC2 instances outside your VPC.

To enable communication over the Internet for IPv4, your instance must have a public IPv4 address or an Elastic IP address that’s associated with a private IPv4 address on your instance. Your instance is only aware of the private (internal) IP address space defined within the VPC and subnet. The internet gateway logically provides one-to-one NAT on behalf of your instance, so that when traffic leaves your VPC subnet for the Internet, the reply address field is set to the public IPv4 address or Elastic IP address of your instance and not its private IP address. Conversely, traffic destined for the public IPv4 address or Elastic IP address of your instance has its destination address translated into the instance’s private IPv4 address before the traffic is delivered to the VPC.

To enable communication over the Internet for IPv6, your VPC and subnet must have an associated IPv6 CIDR block, and your instance must be assigned an IPv6 address from the range of the subnet. IPv6 addresses are globally unique, and therefore public by default.

References: VPC Internet Gateway – Route Table
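The “public subnet” rule above boils down to a single check that you can model directly. The route entries and IGW ID below are made up for illustration:

```python
def is_public_subnet(route_table: list) -> bool:
    """A subnet is public when its route table has a route whose target
    is an internet gateway (commonly the 0.0.0.0/0 default route)."""
    return any(route["target"].startswith("igw-") for route in route_table)

# Simplified route tables; IDs are invented.
private_rt = [{"destination": "10.0.0.0/16", "target": "local"}]
public_rt = private_rt + [{"destination": "0.0.0.0/0", "target": "igw-0abc123"}]

print(is_public_subnet(private_rt))  # False -- testers cannot reach the instance
print(is_public_subnet(public_rt))   # True  -- fixed by adding the IGW route
```

A public IP, an attached IGW, and an open security group are all necessary but not sufficient: without the route-table entry, return traffic has no path out.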


 

Q44: A data company has implemented a subscription service for storing video files. There are two levels of subscription: personal and professional use. The personal users can upload a total of 5 GB of data, and professional users can upload as much as 5 TB of data. The application can upload files of size up to 1 TB to an S3 Bucket. What is the best way to upload files of this size?

A. Multipart upload

B. Single-part Upload

C. AWS Snowball

D. AWS SnowMobile

Answer: A

Notes: The multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: you initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete-multipart-upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket. You can list all of your in-progress multipart uploads or get a list of the parts that you have uploaded for a specific multipart upload.

References: Multipart upload API
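To make the part arithmetic concrete: S3 caps a multipart upload at 10,000 parts, and each part (except the last) must be at least 5 MiB. A small sketch of picking a part size for a 1 TB object (the limits are S3’s; the helper itself is just illustrative):

```python
import math

MIB = 1024 ** 2
MAX_PARTS = 10_000   # S3 limit on parts per multipart upload
MIN_PART = 5 * MIB   # S3 minimum part size (except the last part)

def choose_part_size(object_size: int) -> int:
    """Pick a part size that stays within the 10,000-part limit
    while respecting the 5 MiB minimum part size."""
    return max(MIN_PART, math.ceil(object_size / MAX_PARTS))

one_tb = 1024 ** 4
part = choose_part_size(one_tb)
parts = math.ceil(one_tb / part)
print(part // MIB, parts)  # 104 10000 -- roughly 105 MiB parts, 10,000 parts
```

Parts can then be uploaded in parallel and retried individually, which is what makes multipart upload both faster and more resilient than a single-part PUT for objects this large.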


Q45: You have multiple EC2 instances housing applications in a VPC in a single Availability Zone. The applications need to communicate at extremely high throughputs to avoid latency for end users. The average throughput needs to be 6 Gbps. What’s the best measure you can do to ensure this throughput?

A. Put the instances in a placement group

B. Use Elastic Network Interfaces

C. Use Auto Scaling Groups

D. Increase the size of the instances

Answer: A

Notes: Amazon Web Services’ (AWS) solution to reducing latency between instances involves the use of placement groups. As the name implies, a placement group is just that — a group. AWS instances that exist within a common availability zone can be grouped into a placement group. Group members are able to communicate with one another in a way that provides low latency and high throughput. A cluster placement group is a logical grouping of instances within a single Availability Zone. A cluster placement group can span peered VPCs in the same Region. Instances in the same cluster placement group enjoy a higher per-flow throughput limit of up to 10 Gbps for TCP/IP traffic and are placed in the same high-bisection bandwidth segment of the network.

References: Placement Group

 

Q46: A team member has been tasked to configure four EC2 instances for four separate applications. These are not high-traffic apps, so there is no need for an Auto Scaling Group. The instances are all in the same public subnet and each instance has an EIP address, and all of the instances have the same Security Group. But none of the instances can send or receive internet traffic. You verify that all the instances have a public IP address. You also verify that an internet gateway has been configured. What is the most likely issue?

A. There is no route in the route table to the internet gateway (or it has been deleted).

B. Each instance needs its own security group.

C. The route table is corrupt.

D. You are using the default nacl.

Answer: A

Notes: The question details all of the configuration needed for internet access except a route to the IGW in the route table, which is a key step in any checklist for internet connectivity. It is quite possible to have a subnet with the “Public” attribute set but no route to the Internet in its assigned route table (you can verify this yourself). This may have been a setup error, or someone may have altered the shared route table for a special case instead of creating a new route table for that case.

References: Public – Private VPC


Q47: You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. The application to be deployed on these instances is a life insurance application which requires path-based and host-based routing. Which type of load balancer will you need to use?

A. Any type of load balancer will meet these requirements.

B. Classic Load Balancer

C. Network Load Balancer

D. Application Load Balancer

Answer: D

Notes: Only the Application Load Balancer can support path-based and host-based routing. Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:

  • Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
  • Support for host-based routing. You can configure rules for your listener that forward requests based on the host field in the HTTP header. This enables you to route requests to multiple domains using a single load balancer.
  • Support for routing based on fields in the request, such as standard and custom HTTP headers and methods, query parameters, and source IP addresses.
  • Support for routing requests to multiple applications on a single EC2 instance. You can register each instance or IP address with the same target group using multiple ports.
  • Support for redirecting requests from one URL to another.
  • Support for returning a custom HTTP response.
  • Support for registering targets by IP address, including targets outside the VPC for the load balancer.
  • Support for registering Lambda functions as targets.
  • Support for the load balancer to authenticate users of your applications through their corporate or social identities before routing requests.
  • Support for containerized applications. Amazon Elastic Container Service (Amazon ECS) can select an unused port when scheduling a task and register the task with a target group using this port. This enables you to make efficient use of your clusters.
  • Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level. Attaching a target group to an Auto Scaling group enables you to scale each service dynamically based on demand.
  • Access logs contain additional information and are stored in compressed format.
  • Improved load balancer performance.

References: Application Load Balancer – ELB FAQS


Q48: You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. You were considering using an Application Load Balancer, but some of the requirements you have been given seem to point to a Classic Load Balancer. Which requirement would be better served by an Application Load Balancer?

A. Support for EC2-Classic

B. Path-based routing

C. Support for sticky sessions using application-generated cookies

D. Support for TCP and SSL listeners

Answer: B

Notes:

Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:

  • Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.

References: Path-based Routing
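Path-based routing itself is simple to picture: the listener evaluates rules in priority order and forwards the request to the first matching target group. A toy matcher (the path patterns and target-group names are invented; ALB patterns support `*` and `?` wildcards, which Python’s `fnmatch` approximates) behaves like this:

```python
from fnmatch import fnmatch

# Hypothetical listener rules, in priority order.
rules = [
    ("/api/*", "api-target-group"),
    ("/img/*", "image-target-group"),
]
default_target = "web-target-group"

def route(path: str) -> str:
    """Return the target group for a request path; first match wins,
    mirroring how an ALB evaluates listener rules by priority."""
    for pattern, target in rules:
        if fnmatch(path, pattern):
            return target
    return default_target

print(route("/api/v1/quote"))  # api-target-group
print(route("/index.html"))    # web-target-group
```

Host-based routing works the same way, except the rule condition inspects the HTTP Host header instead of the URL path.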

 

Q49: You have been tasked to review your company disaster recovery plan due to some new requirements. The driving factor is that the Recovery Time Objective has become very aggressive. Because of this, it has been decided to configure Multi-AZ deployments for the RDS MySQL databases. Unrelated to DR, it has been determined that some read traffic needs to be offloaded from the master database. What step can be taken to meet this requirement?

A. Convert to Aurora to allow the standby to serve read traffic.

B. Redirect some of the read traffic to the standby database.

C. Add DAX to the solution to alleviate excess read traffic.

D. Add read replicas to offload some read traffic.

Answer: D

Notes: Amazon RDS Read Replicas for MySQL and MariaDB now support Multi-AZ deployments. Combining Read Replicas with Multi-AZ enables you to build a resilient disaster recovery strategy and simplify your database engine upgrade process. Amazon RDS Read Replicas enable you to create one or more read-only copies of your database instance within the same AWS Region or in a different AWS Region. Updates made to the source database are then asynchronously copied to your Read Replicas. In addition to providing scalability for read-heavy workloads, Read Replicas can be promoted to become a standalone database instance when needed.

References: Amazon RDS Read Replicas

 

Q50: A gaming company is designing several new games which focus heavily on player-game interaction. The player makes a certain move and the game has to react very quickly to change the environment based on that move and to present the next decision for the player in real-time. A tool is needed to continuously collect data about player-game interactions and feed the data into the gaming platform in real-time. Which AWS service can best meet this need?

A. AWS Lambda

B. Kinesis Data Streams

C. Kinesis Data Analytics

D. AWS IoT

Answer: B

Notes: Kinesis Data Streams can be used to continuously collect data about player-game interactions and feed the data into your gaming platform. With Kinesis Data Streams, you can design a game that provides engaging and dynamic experiences based on players’ actions and behaviors.

References: Kinesis Data Streams


Q51: You are designing an architecture for a financial company which provides a day trading application to customers. After viewing the traffic patterns for the existing application you notice that traffic is fairly steady throughout the day, with the exception of large spikes at the opening of the market in the morning and at closing around 3 pm. Your architecture will include an Auto Scaling Group of EC2 instances. How can you configure the Auto Scaling Group to ensure that system performance meets the increased demands at opening and closing of the market?

A. Configure a Dynamic Scaling Policy to scale based on CPU Utilization.

B. Use a load balancer to ensure that the load is distributed evenly during high-traffic periods.

C. Configure your Auto Scaling Group to have a desired size which will be able to meet the demands of the high-traffic periods.

D. Use a predictive scaling policy on the Auto Scaling Group to meet opening and closing spikes.

Answer: D

Notes: Use a predictive scaling policy on the Auto Scaling Group to meet opening and closing spikes: Using data collected from your actual EC2 usage and further informed by billions of data points drawn from our own observations, we use well-trained Machine Learning models to predict your expected traffic (and EC2 usage) including daily and weekly patterns. The model needs at least one day of historical data to start making predictions; it is re-evaluated every 24 hours to create a forecast for the next 48 hours. What we can gather from the question is that the spikes at the beginning and end of the day can potentially affect performance. Sure, we could use dynamic scaling, but remember, scaling up takes a little bit of time. We have the information to be proactive, use predictive scaling, and be ready for these spikes at opening and closing.

References: predictive scaling policy on the Auto Scaling Group
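As a minimal sketch of what such a policy looks like, the dict below follows the parameter shape that boto3's Auto Scaling `put_scaling_policy` call accepts for predictive scaling; the group name and target value are hypothetical placeholders, not values from the question.

```python
# Sketch of a predictive scaling policy request (shape of boto3
# autoscaling.put_scaling_policy parameters); names are hypothetical.
predictive_policy = {
    "AutoScalingGroupName": "day-trading-asg",   # hypothetical ASG name
    "PolicyName": "market-open-close-forecast",
    "PolicyType": "PredictiveScaling",
    "PredictiveScalingConfiguration": {
        "MetricSpecifications": [{
            "TargetValue": 60.0,                 # target average CPU %
            "PredefinedMetricPairSpecification": {
                "PredefinedMetricType": "ASGCPUUtilization"
            },
        }],
        # "ForecastOnly" would forecast without acting; here we also scale.
        "Mode": "ForecastAndScale",
    },
}
```

Because the model re-forecasts every 24 hours from at least a day of history, capacity for the opening and closing spikes is provisioned before the traffic arrives, rather than after a dynamic policy notices the CPU climbing.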

Q52: A software gaming company has produced an online racing game which uses CloudFront for fast delivery to worldwide users. The game also uses DynamoDB for storing in-game and historical user data. The DynamoDB table has a preconfigured read and write capacity. Users have been reporting slow down issues, and an analysis has revealed that the DynamoDB table has begun throttling during peak traffic times. Which step can you take to improve game performance?

A. Add a load balancer in front of the web servers.

B. Add ElastiCache to cache frequently accessed data in memory.

C. Add an SQS Queue to queue requests which could be lost.

D. Make sure DynamoDB Auto Scaling is turned on.

Answers: D

Notes: Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic, without throttling. When the workload decreases, Application Auto Scaling decreases the throughput so that you don’t pay for unused provisioned capacity. Note that if you use the AWS Management Console to create a table or a global secondary index, DynamoDB auto scaling is enabled by default. You can modify your auto scaling settings at any time.

References: DynamoDB AutoScaling
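DynamoDB auto scaling is driven by Application Auto Scaling: you register the table's capacity as a scalable target, then attach a target-tracking policy. The sketch below shows the shape of those two requests; the table name, limits, and target utilization are hypothetical.

```python
# Sketch of DynamoDB auto scaling wiring (shape of Application Auto Scaling
# register_scalable_target / put_scaling_policy parameters); values hypothetical.
scalable_target = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/GameData",                      # hypothetical table
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "MinCapacity": 100,
    "MaxCapacity": 4000,
}
scaling_policy = {
    "PolicyName": "read-capacity-tracking",
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/GameData",
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 70.0,  # keep consumed/provisioned read capacity near 70%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
}
```

With this in place the table's read capacity floats between the min and max as peak traffic comes and goes, instead of throttling at a fixed provisioned value.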


Q53: You have configured an Auto Scaling Group of EC2 instances. You have begun testing the scaling of the Auto Scaling Group, using a stress tool to drive up the CPU utilization metric and trigger scale-out actions, then removing the stress to trigger a scale-in. But you notice that these actions only take place at five-minute intervals. What is happening?

A. Auto Scaling Groups can only scale in intervals of five minutes or greater.

B. The Auto Scaling Group is following the default cooldown procedure.

C. A load balancer is managing the load and limiting the effectiveness of stressing the servers.

D. The stress tool is configured to run for five minutes.

Answer: B

Notes: The cooldown period helps you prevent your Auto Scaling group from launching or terminating additional instances before the effects of previous activities are visible. You can configure the length of time based on your instance startup time or other application needs. When you use simple scaling, after the Auto Scaling group scales using a simple scaling policy, it waits for a cooldown period to complete before any further scaling activities due to simple scaling policies can start. An adequate cooldown period helps to prevent the initiation of an additional scaling activity based on stale metrics. By default, all simple scaling policies use the default cooldown period associated with your Auto Scaling Group, but you can configure a different cooldown period for certain policies, as described in the following sections. Note that Amazon EC2 Auto Scaling honors cooldown periods when using simple scaling policies, but not when using other scaling policies or scheduled scaling. A default cooldown period automatically applies to any scaling activities for simple scaling policies, and you can optionally request to have it apply to your manual scaling activities. When you use the AWS Management Console to update an Auto Scaling Group, or when you use the AWS CLI or an AWS SDK to create or update an Auto Scaling Group, you can set the optional default cooldown parameter. If a value for the default cooldown period is not provided, its default value is 300 seconds.

References: EC2 AutoScaling cooldown
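The five-minute spacing in the question maps directly to the 300-second default cooldown. A minimal sketch of how the default and a policy-level override interact (group name is hypothetical; `DefaultCooldown` is the real parameter on an Auto Scaling Group):

```python
# Sketch of the cooldown behaviour behind Q53's five-minute intervals.
DEFAULT_COOLDOWN_SECONDS = 300  # AWS default when no value is provided

asg_request = {
    "AutoScalingGroupName": "stress-test-asg",  # hypothetical group name
    "MinSize": 2,
    "MaxSize": 10,
    "DefaultCooldown": DEFAULT_COOLDOWN_SECONDS,
}

def effective_cooldown(policy_cooldown=None, group_default=DEFAULT_COOLDOWN_SECONDS):
    """A simple scaling policy's own cooldown overrides the group default."""
    return policy_cooldown if policy_cooldown is not None else group_default
```

With no policy-level override, successive simple-scaling activities are spaced roughly 300 seconds apart, which is exactly the five-minute interval observed during the stress test.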

Q54: A team of architects is designing a new AWS environment for a company which wants to migrate to the Cloud. The architects are considering the use of EC2 instances with instance store volumes. The architects realize that the data on an instance store volume is ephemeral. Which action will not cause the data to be deleted on an instance store volume?

A. Reboot

B. The underlying disk drive fails.

C. Hardware disk failure.

D. Instance is stopped

Answers: A

Notes: Some Amazon Elastic Compute Cloud (Amazon EC2) instance types come with a form of directly attached, block-device storage known as the instance store. The instance store is ideal for temporary storage, because the data stored in instance store volumes is not persistent through instance stops, terminations, or hardware failures.

References: Instance store vs EBS – EC2 instance storage user guide


Q55: You work for an advertising company that has a real-time bidding application. You are also using CloudFront on the front end to accommodate a worldwide user base. Your users begin complaining about response times and pauses in real-time bidding. Which service can be used to reduce DynamoDB response times by an order of magnitude (milliseconds to microseconds)?

A. DAX

B. DynamoDB Auto Scaling

C. Elasticache

D. CloudFront Edge Caches

Answers: A

Notes: Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second. While DynamoDB offers consistent single-digit millisecond latency, DynamoDB with DAX takes performance to the next level with response times in microseconds for millions of requests per second for read-heavy workloads. With DAX, your applications remain fast and responsive, even when a popular event or news story drives unprecedented request volumes your way. No tuning required.

References: Amazon DynamoDB Accelerator (DAX)


Q56: A travel company has deployed a website which serves travel updates to users all over the world. The database behind the site is very read-heavy and can have latency issues at certain times of the year. What can you do to alleviate these latency issues?

A. Place CloudFront in front of the Database.

B. Add read replicas

C. Configure RDS Multi-AZ

D. Configure multi-Region RDS

Answer: B

Notes: Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server as well as Amazon Aurora.

References: Amazon RDS Read Replicas
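A minimal sketch of what adding a replica and splitting read traffic looks like: the first dict follows the shape of boto3's RDS `create_db_instance_read_replica` parameters, and the toy router shows the usual read/write split. All identifiers are hypothetical.

```python
# Sketch of creating an RDS read replica and routing reads to it.
read_replica_request = {
    "DBInstanceIdentifier": "travel-db-replica-1",       # hypothetical replica
    "SourceDBInstanceIdentifier": "travel-db-primary",   # hypothetical source
    # A replica can also live in another Region to cut latency for far readers:
    # "SourceRegion": "us-east-1",
}

def route_query(sql, replicas):
    """Toy read/write split: SELECTs go to a replica, writes to the primary."""
    if sql.lstrip().upper().startswith("SELECT") and replicas:
        return replicas[0]
    return "travel-db-primary"
```

Writes still flow only to the source instance and replicate asynchronously, so replicas relieve read latency without changing the write path.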


Q57: A large financial institution is gradually moving their infrastructure and applications to AWS. The company has data needs that will utilize all of RDS, DynamoDB, Redshift, and ElastiCache. Which description best describes Amazon Redshift?

A. Key-value and document database that delivers single-digit millisecond performance at any scale.

B. Cloud-based relational database.

C. Can be used to significantly improve latency and throughput for many read-heavy application workloads.

D. Near real-time complex querying on massive data sets.

Answers: D

Notes: Amazon Redshift is a fast, fully-managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds. With Redshift, you can start small for just $0.25 per hour with no commitments and scale out to petabytes of data for $1,000 per terabyte per year, less than a tenth the cost of traditional on-premises solutions. Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to run SQL queries directly against exabytes of unstructured data in Amazon S3 data lakes. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Amazon Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, Sequence, Text, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data retrieved, so queries against Amazon S3 run fast, regardless of data set size.

References: Redshift


Q58: You are designing an architecture which will house an Auto Scaling Group of EC2 instances. The application hosted on the instances is expected to be an extremely popular social networking site. Traffic forecasts predict very high volumes, and you will need a load balancer that can handle tens of millions of requests per second while maintaining high throughput at ultra-low latency. You need to select the type of load balancer to front your Auto Scaling Group to meet this high-traffic requirement. Which load balancer will you select?

A. You will need an Application Load Balancer to meet this requirement.

B. All the AWS load balancers meet the requirement and perform the same.

C. You will select a Network Load Balancer to meet this requirement.

D. You will need a Classic Load Balancer to meet this requirement.

Answers: C

Notes: Network Load Balancer Overview: A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration. When you enable an Availability Zone for the load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. By default, each load balancer node distributes traffic across the registered targets in its Availability Zone only. If you enable cross-zone load balancing, each load balancer node distributes traffic across the registered targets in all enabled Availability Zones. It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part. The Network Load Balancer is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:

  • Static IP Addresses – Each Network Load Balancer provides a single IP address for each Availability Zone in its purview. If you have targets in us-west-2a and other targets in us-west-2c, NLB will create and manage two IP addresses (one per AZ); connections to that IP address will spread traffic across the instances in all the VPC subnets in the AZ. You can also specify an existing Elastic IP for each AZ for even greater control. With full control over your IP addresses, a Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.
  • Zonality – The IP-per-AZ feature reduces latency with improved performance, improves availability through isolation and fault tolerance, and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single AZ while still providing automatic failover should those targets become unavailable.
  • Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.
  • Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.
  • Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.

References: Network Load Balancer
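The features above come down to one parameter at creation time: a minimal sketch of an NLB request in the shape boto3's `elbv2` `create_load_balancer` accepts, where `Type="network"` is what selects an NLB, and `SubnetMappings` pins one static (Elastic) IP per AZ. The subnet and allocation IDs are hypothetical.

```python
# Sketch of a Network Load Balancer creation request
# (shape of boto3 elbv2.create_load_balancer parameters).
nlb_request = {
    "Name": "social-site-nlb",       # hypothetical name
    "Type": "network",               # layer-4 NLB, millions of requests/sec
    "Scheme": "internet-facing",
    # One static IP per AZ; an existing Elastic IP can be pinned per subnet:
    "SubnetMappings": [
        {"SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-1111"},
        {"SubnetId": "subnet-bbbb2222", "AllocationId": "eipalloc-2222"},
    ],
}
```

An ALB request would use `Type="application"` and plain `Subnets`; the static-IP-per-AZ mapping is the NLB-specific piece that makes hard-coding addresses into DNS or firewall rules practical.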


Q59: An organization of about 100 employees has performed the initial setup of users in IAM. All users except administrators have the same basic privileges. But now it has been determined that 50 employees will have extra restrictions on EC2. They will be unable to launch new instances or alter the state of existing instances. What will be the quickest way to implement these restrictions?

A. Create an IAM Role for the restrictions. Attach it to the EC2 instances.

B. Create the appropriate policy. Place the restricted users in the new policy.

C. Create the appropriate policy. With only 50 users, attach the policy to each user.

D. Create the appropriate policy. Create a new group for the restricted users. Place the restricted users in the new group and attach the policy to the group.

Answer: D

Notes: You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, ACLs, and session policies. IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, if a policy allows the GetUser action, then a user with that policy can get user information from the AWS Management Console, the AWS CLI, or the AWS API. When you create an IAM user, you can choose to allow console or programmatic access. If console access is allowed, the IAM user can sign in to the console using a user name and password. Or if programmatic access is allowed, the user can use access keys to work with the CLI or API.

References: IAM Access Policy
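A minimal sketch of the policy document behind answer D: a single deny statement covering launching and state changes, attached once to a group that holds the 50 restricted users. The action names are real IAM actions; the `Sid` and group arrangement are illustrative.

```python
import json

# Sketch of the deny policy for Q59; attach it to a group, not to 50 users.
restrict_ec2_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyEc2LaunchAndStateChange",
        "Effect": "Deny",
        "Action": [
            "ec2:RunInstances",     # cannot launch new instances
            "ec2:StartInstances",   # cannot alter the state of existing ones
            "ec2:StopInstances",
            "ec2:TerminateInstances",
        ],
        "Resource": "*",
    }],
}
policy_document = json.dumps(restrict_ec2_policy)  # IAM stores policies as JSON
```

Since an explicit Deny always overrides an Allow during policy evaluation, the group members keep their basic privileges while losing these specific EC2 actions, and future hires are restricted just by being added to the group.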

 

Q60: You are managing S3 buckets in your organization. This management of S3 extends to Amazon Glacier. For auditing purposes you would like to be informed if an object is restored to S3 from Glacier. What is the most efficient way you can do this?

A. Create a CloudWatch event for uploads to S3

B. Create an SNS notification for any upload to S3.

C. Configure S3 notifications for restore operations from Glacier.

D. Create a Lambda function which is triggered by restoration of object from Glacier to S3.

Answers: C

Notes: The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. An S3 notification can be set up to notify you when objects are restored from Glacier to S3.

References: S3 notifications
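A minimal sketch of the configuration answer C describes, in the shape boto3's `put_bucket_notification_configuration` expects: publish to an SNS topic when a Glacier restore is initiated or completes. The bucket's topic ARN and `Id` are hypothetical; the event names are real S3 event types.

```python
# Sketch of an S3 notification configuration for Glacier restore auditing.
notification_config = {
    "TopicConfigurations": [{
        "Id": "glacier-restore-audit",                           # hypothetical
        "TopicArn": "arn:aws:sns:us-east-1:111122223333:restore-audit",
        "Events": [
            "s3:ObjectRestore:Post",       # restore has been initiated
            "s3:ObjectRestore:Completed",  # restored copy is available in S3
        ],
    }],
}
```

Scoping the configuration to restore events only is what makes this the most efficient choice: a blanket upload notification or a custom Lambda would fire far more often than the audit requires.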

 

Q61: Your company has gotten back results from an audit. One of the mandates from the audit is that your application, which is hosted on EC2, must encrypt the data before writing this data to storage. Which service could you use to meet this requirement?

A. AWS Cloud HSM

B. Security Token Service

C. EBS encryption

D. AWS KMS

Answers: D

Notes: You can configure your application to use the KMS API to encrypt all data before saving it to disk. This link details how to choose an encryption service for various use cases:

References: AWS KMS

 

Q62: Recent worldwide events have dictated that you perform your duties as a Solutions Architect from home. You need to be able to manage several EC2 instances while working from home and have been testing the ability to ssh into these instances. One instance in particular has been a problem and you cannot ssh into this instance. What should you check first to troubleshoot this issue?

A. Make sure that the security group for the instance has ingress on port 80 from your home IP address.

B. Make sure that your VPC has a connected Virtual Private Gateway.

C. Make sure that the security group for the instance has ingress on port 22 from your home IP address.

D. Make sure that the Security Group for the instance has ingress on port 443 from your home IP address.

Answer: C

Notes: The rules of a security group control the inbound traffic that’s allowed to reach the instances that are associated with the security group. The rules also control the outbound traffic that’s allowed to leave them. The following are the characteristics of security group rules:

  • By default, security groups allow all outbound traffic.
  • Security group rules are always permissive; you can’t create rules that deny access.
  • Security groups are stateful. If you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. For VPC security groups, this also means that responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules. For more information, see Connection tracking.
  • You can add and remove rules at any time. Your changes are automatically applied to the instances that are associated with the security group. The effect of some rule changes can depend on how the traffic is tracked. For more information, see Connection tracking. When you associate multiple security groups with an instance, the rules from each security group are effectively aggregated to create one set of rules. Amazon EC2 uses this set of rules to determine whether to allow access. You can assign multiple security groups to an instance. Therefore, an instance can have hundreds of rules that apply. This might cause problems when you access the instance. We recommend that you condense your rules as much as possible.

References: Security Groups Rules
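The rule the question asks you to check first looks like this as a request: a minimal sketch in the shape of boto3's EC2 `authorize_security_group_ingress` parameters, opening TCP 22 from a single home address. The group ID and the /32 CIDR are hypothetical.

```python
# Sketch of the SSH ingress rule for Q62 (shape of boto3
# ec2.authorize_security_group_ingress parameters); IDs are hypothetical.
ssh_ingress = {
    "GroupId": "sg-0123456789abcdef0",   # hypothetical security group
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 22,                  # SSH, not 80 (HTTP) or 443 (HTTPS)
        "ToPort": 22,
        "IpRanges": [{
            "CidrIp": "203.0.113.25/32",         # your home IP only
            "Description": "home office SSH",
        }],
    }],
}
```

Because security groups are stateful, the SSH response traffic is allowed back out automatically; no matching outbound rule is needed.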


Q62: A consultant is hired by a small company to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within the VPC. The initial instances will be placed in a public subnet. The consultant begins to create security groups. What is true of the default security group?

A. You can delete this group, however, you can’t change the group’s rules.

B. You can delete this group or you can change the group’s rules.

C. You can’t delete this group, nor can you change the group’s rules.

D. You can’t delete this group, however, you can change the group’s rules.

Answers: D

Notes: Your VPC includes a default security group. You can’t delete this group, however, you can change the group’s rules. The procedure is the same as modifying any other security group. For more information, see Adding, removing, and updating rules.

References: VPC Security Groups

 

Q63: You are evaluating the security setting within the main company VPC. There are several NACLs and security groups to evaluate and possibly edit. What is true regarding NACLs and security groups?

A. Network ACLs and security groups are both stateful.

B. Network ACLs and security groups are both stateless.

C. Network ACLs are stateless, and security groups are stateful.

D. Network ACLs are stateful, and security groups are stateless.

Answer: C

Notes: Network ACLs are stateless, which means that responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).

The following are the basic characteristics of security groups for your VPC:

  • There are quotas on the number of security groups that you can create per VPC, the number of rules that you can add to each security group, and the number of security groups that you can associate with a network interface. For more information, see Amazon VPC quotas.
  • You can specify allow rules, but not deny rules.
  • You can specify separate rules for inbound and outbound traffic.
  • When you create a security group, it has no inbound rules. Therefore, no inbound traffic originating from another host to your instance is allowed until you add inbound rules to the security group.
  • By default, a security group includes an outbound rule that allows all outbound traffic. You can remove the rule and add outbound rules that allow specific outbound traffic only. If your security group has no outbound rules, no outbound traffic originating from your instance is allowed.
  • Security groups are stateful. If you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. Responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.
References: VPC Security Groups – NACL

 

Q64: Your company needs to deploy an application in the company AWS account. The application will reside on EC2 instances in an Auto Scaling Group fronted by an Application Load Balancer. The company has been using Elastic Beanstalk to deploy the application due to limited AWS experience within the organization. The application now needs upgrades and a small team of subcontractors have been hired to perform these upgrades. What can be used to provide the subcontractors with short-lived access tokens that act as temporary security credentials to the company AWS account?

A. IAM Roles

B. AWS STS

C. IAM user accounts

D. AWS SSO

Answers: B

Notes: AWS Security Token Service (AWS STS) is the service that you can use to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use. You can use the AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use, with the following differences: Temporary security credentials are short-term, as the name implies. They can be configured to last for anywhere from a few minutes to several hours. After the credentials expire, AWS no longer recognizes them or allows any kind of access from API requests made with them. Temporary security credentials are not stored with the user but are generated dynamically and provided to the user when requested. When (or even before) the temporary security credentials expire, the user can request new credentials, as long as the user requesting them still has permissions to do so.

References: AWS STS
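A minimal sketch of how the subcontractors would be granted access: an `AssumeRole` request in the shape boto3's STS client accepts. The role ARN and session name are hypothetical; `DurationSeconds` is the real parameter bounding the credential lifetime (15 minutes up to the role's configured maximum).

```python
# Sketch of an STS AssumeRole request for Q64's subcontractors.
assume_role_request = {
    "RoleArn": "arn:aws:iam::111122223333:role/SubcontractorUpgrade",  # hypothetical
    "RoleSessionName": "upgrade-session",
    "DurationSeconds": 3600,  # 1 hour; credentials stop working after this
}
# The response carries AccessKeyId, SecretAccessKey, SessionToken, and an
# Expiration timestamp, instead of long-lived IAM user access keys.
```

When the hour is up the subcontractors simply assume the role again, and revoking access is as easy as removing their permission to assume it.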


Q65: The company you work for has reshuffled teams a bit and you’ve been moved from the AWS IAM team to the AWS Network team. One of your first assignments is to review the subnets in the main VPCs. What are two key concepts regarding subnets?

A. A subnet spans all the Availability Zones in a Region.

B. Private subnets can only hold databases.

C. Each subnet maps to a single Availability Zone.

D. Every subnet you create is associated with the main route table for the VPC.

E. Each subnet is associated with one security group.

Answer: C and D

Notes: A VPC spans all the Availability Zones in the region. After creating a VPC, you can add one or more subnets in each Availability Zone. When you create a subnet, you specify the CIDR block for the subnet, which is a subset of the VPC CIDR block. Each subnet must reside entirely within one Availability Zone and cannot span zones. Availability Zones are distinct locations that are engineered to be isolated from failures in other Availability Zones. By launching instances in separate Availability Zones, you can protect your applications from the failure of a single location. A VPC spans all of the Availability Zones in the Region. After creating a VPC, you can add one or more subnets in each Availability Zone. You can optionally add subnets in a Local Zone, which is an AWS infrastructure deployment that places compute, storage, database, and other select services closer to your end users. A Local Zone enables your end users to run applications that require single-digit millisecond latencies. For information about the Regions that support Local Zones, see Available Regions in the Amazon EC2 User Guide for Linux Instances. When you create a subnet, you specify the CIDR block for the subnet, which is a subset of the VPC CIDR block. We assign a unique ID to each subnet.

References: VPC Subnets
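The "subnet CIDR must be a subset of the VPC CIDR, one subnet per AZ" rule can be checked with the standard library alone. The CIDR blocks and AZ names below are illustrative, not values from the question.

```python
import ipaddress

# Each subnet sits in exactly one AZ, and its CIDR must fall inside the VPC's.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")       # illustrative VPC block
subnets = {
    "us-east-1a": ipaddress.ip_network("10.0.1.0/24"),  # one subnet, one AZ
    "us-east-1b": ipaddress.ip_network("10.0.2.0/24"),
}
all_within_vpc = all(s.subnet_of(vpc_cidr) for s in subnets.values())
```

A block like 192.168.0.0/24 would fail the `subnet_of` check, which is exactly the validation AWS performs when you try to create the subnet.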

 

Q66: Amazon Web Services offers 4 different levels of support. Which of the following are valid support levels? Choose 3

A. Enterprise

B. Developer

C. Corporate

D. Business

E. Free Tier


Answer: A B D
Notes: The correct answers are Enterprise, Business, Developer.
References: https://docs.aws.amazon.com/

 

Q67: You are reviewing Change Control requests, and you note that there is a change designed to reduce wasted CPU cycles by increasing the value of your Amazon SQS “VisibilityTimeout” attribute. What does this mean?

A. While processing a message, a consumer instance can amend the message visibility counter by a fixed amount.

B. When a consumer instance retrieves a message, that message will be hidden from other consumer instances for a fixed period.

C. When the consumer instance polls for new work the SQS service will allow it to wait a certain time for a message to be available before closing the connection.

D. While processing a message, a consumer instance can reset the message visibility by restarting the preset timeout counter.

E. When the consumer instance polls for new work, the consumer instance will wait a certain time until it has a full workload before closing the connection.

F. When a new message is added to the SQS queue, it will be hidden from consumer instances for a fixed period.


Answer: B
Notes: Poor timing of SQS processes can significantly impact the cost effectiveness of the solution. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.
References: https://docs.aws.amazon.com/sqs
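The visibility timeout behaviour can be captured in a toy simulation: once a consumer receives a message, it stays hidden from other consumers until the timeout elapses. The 30-second default is real; the timestamps are illustrative.

```python
# Toy model of the SQS visibility timeout from Q67 (times in seconds).
DEFAULT_VISIBILITY_TIMEOUT = 30  # SQS default; min 0, max 12 hours

def is_visible(received_at, now, visibility_timeout=DEFAULT_VISIBILITY_TIMEOUT):
    """A message received at `received_at` is hidden until the timeout passes."""
    if received_at is None:          # never received: always visible
        return True
    return now - received_at >= visibility_timeout
```

Raising the timeout (say to 60 s for a message received at t=100) keeps the message hidden at t=130 but visible again at t=160, which is how increasing `VisibilityTimeout` stops other consumers from wasting CPU cycles re-processing messages that are already being handled.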

 

Q68: You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP. However, when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect?

A. After a few minutes.

B. Immediately.

C. Straight away, but to the new instances only.

D. Straight away to the new instances, but old instances must be stopped and restarted before the new rules apply.


Answer: B
Notes: Changes to security group rules take effect immediately, and they apply to all instances associated with the group, new and old alike.
References: https://docs.aws.amazon.com/iam

Q69: Amazon SQS keeps track of all tasks and events in an application.

A. True

B. False


Answer: B
Notes: Amazon SWF (not Amazon SQS) keeps track of all tasks and events in an application. Amazon SQS requires you to implement your own application-level tracking, especially if your application uses multiple queues. Amazon SWF FAQs.
References: https://docs.aws.amazon.com/sqs

 

Q70: Your Security Manager has hired a security contractor to audit your network and firewall configurations. The consultant doesn’t have access to an AWS account. You need to provide the required access for the auditing tasks, and answer a question about login details for the official AWS firewall appliance. Which of the following might you do?
Choose 2

A. Create an IAM User with a policy that can Read Security Group and NACL settings.

B. Explain that AWS implements network security differently and that there is no such thing as an official AWS firewall appliance. Security Groups and NACLs are used instead.

C. Create an IAM Role with a policy that can Read Security Group and NACL settings.

D. Explain that AWS is a cloud service and that AWS manages the Network appliances.

E. Create an IAM Role with a policy that can Read Security Group and Route settings.


Answer: A and B
Notes: Create an IAM user for the auditor and explain that the firewall functionality is implemented as stateful Security Groups, and stateless subnet NACLs. AWS has removed the Firewall appliance from the hub of the network and implemented the firewall functionality as stateful Security Groups, and stateless subnet NACLs. This is not a new concept in networking, but rarely implemented at this scale.
References: https://docs.aws.amazon.com/iam

Q71: How many internet gateways can I attach to my custom VPC?

A. 5
B. 3
C. 2
D. 1


Answer: D
Notes: You can attach only one internet gateway to a VPC at a time.
References: https://docs.aws.amazon.com/vpc

Q72: How long can a message be retained in an SQS Queue?

A. 14 days

B. 1 day

C. 7 days

D. 30 days


Answer: A
Notes: Messages can be retained in queues for up to 14 days.
References: https://docs.aws.amazon.com/sqs

 

Q73: Although your application customarily runs at 30% usage, you have identified a recurring usage spike (>90%) between 8pm and midnight daily. What is the most cost-effective way to scale your application to meet this increased need?

A. Manually deploy Reactive Event-based Scaling each night at 7:45.

B. Deploy additional EC2 instances to meet the demand.

C. Use scheduled scaling to boost your capacity at a fixed interval.

D. Increase the size of the Resource Group to meet demand.

Answer: C
Notes: Scheduled scaling allows you to set your own scaling schedule. For example, let’s say that every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling actions based on the predictable traffic patterns of your web application. Scaling actions are performed automatically as a function of time and date.
Reference: Scheduled scaling for Amazon EC2 Auto Scaling. 
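As a minimal sketch of the scheduled actions for the 8 pm-to-midnight spike, the dicts below follow the shape of boto3's Auto Scaling `put_scheduled_update_group_action` parameters; the group name, capacities, and exact cron times are hypothetical (Recurrence is a cron expression evaluated in UTC).

```python
# Sketch of scheduled scaling actions for Q73's nightly spike.
scale_out = {
    "AutoScalingGroupName": "nightly-spike-asg",  # hypothetical group
    "ScheduledActionName": "boost-for-evening-spike",
    "Recurrence": "45 19 * * *",   # daily at 19:45, just before the spike
    "DesiredCapacity": 12,
}
scale_in = {
    "AutoScalingGroupName": "nightly-spike-asg",
    "ScheduledActionName": "return-to-baseline",
    "Recurrence": "5 0 * * *",     # shortly after midnight
    "DesiredCapacity": 4,
}
```

Two paired actions like these add capacity only for the four predictable hours, which is what makes scheduled scaling more cost-effective than permanently deploying extra instances.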

Q74: To save money, you quickly stored some data in one of the attached volumes of an EC2 instance and stopped it for the weekend. When you returned on Monday and restarted your instance, you discovered that your data was gone. Why might that be?

A. The EBS volume was not large enough to store your data.

B. The instance failed to connect to the root volume on Monday.

C. The elastic block-level storage service failed over the weekend.

D. The volume was ephemeral, block-level storage. Data on an instance store volume is lost if an instance is stopped.


Answer: D
Notes: the EC2 instance had an instance store volume attached to it. Instance store volumes are ephemeral, meaning that data in attached instance store volumes is lost if the instance stops.
Reference: Instance store lifetime 

Q75: Select all the true statements on S3 URL styles: Choose 2

A. Virtual hosted-style URLs will eventually be deprecated in favor of Path-Style URLs for S3 bucket access.

B. Virtual-host-style URLs (such as: https://bucket-name.s3.Region.amazonaws.com/key name) are supported by AWS.

C. Path-Style URLs (such as https://s3.Region.amazonaws.com/bucket-name/key name) are supported by AWS.

D. DNS compliant names are NOT recommended for the URLs to access S3.


Answer: B and C
Notes: Virtual-host-style URLs and Path-Style URLs (soon to be retired) are supported by AWS. DNS compliant names are recommended for the URLs to access S3.
References: https://docs.aws.amazon.com/s3
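The two supported URL styles can be sketched as simple string templates (the bucket, region, and key below are hypothetical examples):

```python
def virtual_hosted_url(bucket, region, key):
    # Virtual-hosted-style: the bucket name is part of the hostname.
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

def path_style_url(bucket, region, key):
    # Path-style: the bucket name is the first path segment.
    return f"https://s3.{region}.amazonaws.com/{bucket}/{key}"

print(virtual_hosted_url("my-bucket", "us-east-1", "photos/cat.jpg"))
# https://my-bucket.s3.us-east-1.amazonaws.com/photos/cat.jpg
print(path_style_url("my-bucket", "us-east-1", "photos/cat.jpg"))
# https://s3.us-east-1.amazonaws.com/my-bucket/photos/cat.jpg
```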

Q76: With EBS, I can ____. Choose 2

A. Create an encrypted snapshot from an unencrypted snapshot by creating an encrypted copy of the unencrypted snapshot.

B. Create an unencrypted volume from an encrypted snapshot.

C. Create an encrypted volume from a snapshot of another encrypted volume.

D. Encrypt an existing volume.


Answer: A and C
Notes: Although there is no direct way to encrypt an existing unencrypted volume or snapshot, you can encrypt it by creating an encrypted copy, either as a volume or as a snapshot. You can also create an encrypted volume from a snapshot of another encrypted volume. Reference: Encrypting unencrypted resources.
References: https://docs.aws.amazon.com/ebs

Q77: You have been engaged by a company to design and lead a migration to an AWS environment. The team is concerned about the capabilities of the new environment, especially when it comes to high availability and cost-effectiveness. The design calls for about 20 instances (c3.2xlarge) pulling jobs/messages from SQS. Network traffic per instance is estimated to be around 500 Mbps at the beginning and end of each job. Which configuration should you plan on deploying?

A. Use a 2nd Network Interface to separate the SQS traffic from the storage traffic.

B. Choose a different instance type that better matches the traffic demand.

C. Spread the instances over multiple AZs to minimize the traffic concentration and maximize fault tolerance.

D. Deploy as a Cluster Placement Group as the aggregated burst traffic could be around 10 Gbps.


Answer: C
Notes: With a multi-AZ configuration, the entire Availability Zone is ruled out as a single point of failure, which ensures high availability. Wherever possible, prefer simple solutions such as spreading the load out over expensive high-tech solutions.
References: AZ

Q78: You are a solutions architect working for a cosmetics company. Your company has a busy Magento online store that consists of a two-tier architecture. The web servers are on EC2 instances deployed across multiple AZs, and the database is on a Multi-AZ RDS MySQL database instance. Your store is having a Black Friday sale in five days, and having reviewed the performance for the last sale you expect the site to start running very slowly during the peak load. You investigate and you determine that the database was struggling to keep up with the number of reads that the store was generating. Which solution would you implement to improve the application read performance the most?

A. Deploy an Amazon ElastiCache cluster with nodes running in each AZ.

B. Upgrade your RDS MySQL instance to use provisioned IOPS.

C. Add an RDS Read Replica in each AZ.

D. Upgrade the RDS MySQL instance to a larger type.


Answer: C
Notes: RDS Read Replicas can substantially increase the read performance of your database, and multiple read replicas can be created to increase performance further. This approach also requires the fewest code changes and is generally possible to implement in the timeframe specified.
References: RDS

Q79: Which native AWS service will act as a file system mounted on an S3 bucket?

A. Amazon Elastic Block Store

B. File Gateway

C. Amazon S3

D. Amazon Elastic File System


Answer: B
Notes: A file gateway supports a file interface into Amazon Simple Storage Service (Amazon S3) and combines a service and a virtual software appliance. By using this combination, you can store and retrieve objects in Amazon S3 using industry-standard file protocols such as Network File System (NFS) and Server Message Block (SMB). The software appliance, or gateway, is deployed into your on-premises environment as a virtual machine (VM) running on VMware ESXi, Microsoft Hyper-V, or Linux Kernel-based Virtual Machine (KVM) hypervisor. The gateway provides access to objects in S3 as files or file share mount points. You can manage your S3 data using lifecycle policies, cross-region replication, and versioning. You can think of a file gateway as a file system mount on S3.
Reference: What is AWS Storage Gateway? .

 

Q80: You have been evaluating the NACLs in your company. Most of the NACLs are configured the same way:

  • Rule 100: All Traffic – Allow
  • Rule 200: All Traffic – Deny
  • Rule *: All Traffic – Deny

If a request comes in, how will it be evaluated?

A. The default will deny traffic.

B. The request will be allowed.

C. The highest numbered rule will be used, a deny.

D. All rules will be evaluated and the end result will be Deny.

Answer: B

Notes: Rules are evaluated starting with the lowest-numbered rule. As soon as a rule matches traffic, it's applied immediately, regardless of any higher-numbered rule that may contradict it. The following are the basic things you need to know about network ACLs:

  • Your VPC automatically comes with a modifiable default network ACL. By default, it allows all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic.
  • You can create a custom network ACL and associate it with a subnet. By default, each custom network ACL denies all inbound and outbound traffic until you add rules.
  • Each subnet in your VPC must be associated with a network ACL. If you don't explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL.
  • You can associate a network ACL with multiple subnets. However, a subnet can be associated with only one network ACL at a time. When you associate a network ACL with a subnet, the previous association is removed.
  • A network ACL contains a numbered list of rules. The rules are evaluated in order, starting with the lowest-numbered rule, to determine whether traffic is allowed in or out of any subnet associated with the network ACL. The highest number that you can use for a rule is 32766. Start by creating rules in increments (for example, increments of 10 or 100) so that you can insert new rules where you need to later on.
  • A network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic.
  • Network ACLs are stateless, which means that responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).

References: network ACL

Q81: You have been given an assignment to configure Network ACLs in your VPC. Before configuring the NACLs, you need to understand how the NACLs are evaluated. How are NACL rules evaluated?

A. NACL rules are evaluated by rule number from lowest to highest and executed immediately when a matching rule is found.

B. NACL rules are evaluated by rule number from highest to lowest, and executed immediately when a matching rule is found.

C. All NACL rules that you configure are evaluated before traffic is passed through.

D. NACL rules are evaluated by rule number from highest to lowest, and all are evaluated before traffic is passed through.

Answer: A

Notes: NACL rules are evaluated by rule number from lowest to highest and executed immediately when a matching rule is found.

You can add or remove rules from the default network ACL, or create additional network ACLs for your VPC. When you add or remove rules from a network ACL, the changes are automatically applied to the subnets that it’s associated with. A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC. The following are the parts of a network ACL rule:

  • Rule number. Rules are evaluated starting with the lowest-numbered rule. As soon as a rule matches traffic, it’s applied regardless of any higher-numbered rule that might contradict it.
  • Type. The type of traffic, for example, SSH. You can also specify all traffic or a custom range.
  • Protocol. You can specify any protocol that has a standard protocol number. For more information, see Protocol Numbers. If you specify ICMP as the protocol, you can specify any or all of the ICMP types and codes.
  • Port range. The listening port or port range for the traffic. For example, 80 for HTTP traffic.
  • Source. [Inbound rules only] The source of the traffic (CIDR range).
  • Destination. [Outbound rules only] The destination for the traffic (CIDR range).
  • Allow/Deny. Whether to allow or deny the specified traffic.
Reference: NACL Rules
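The first-match evaluation order described above can be sketched as a small simulator (the rules and packet here mirror Q80's hypothetical configuration; this is not an AWS API):

```python
def evaluate_nacl(rules, packet):
    """Evaluate stateless NACL rules in ascending rule-number order;
    the first rule whose matcher accepts the packet wins. If no
    numbered rule matches, the implicit '*' rule denies the traffic."""
    for number, matches, action in sorted(rules, key=lambda r: r[0]):
        if matches(packet):
            return action
    return "DENY"  # the '*' catch-all rule

all_traffic = lambda pkt: True

# Q80's configuration: 100 All Traffic Allow, 200 All Traffic Deny, '*' Deny.
rules = [(100, all_traffic, "ALLOW"), (200, all_traffic, "DENY")]
print(evaluate_nacl(rules, {"port": 443}))  # ALLOW — rule 100 matches first
```

Rule 200's Deny never fires, because rule 100 already matched and evaluation stops at the first match.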

AWS SAA-C02 SAA-C03 Exam Prep
AWS SAA-C02 SAA-C03 Exam Prep

Q82: Your company has gone through an audit with a focus on data storage. You are currently storing historical data in Amazon Glacier. One of the results of the audit is that a portion of the infrequently-accessed historical data must be able to be accessed immediately upon request. Where can you store this data to meet this requirement?

A. S3 Standard

B. Leave infrequently-accessed data in Glacier.

C. S3 Standard-IA

D. Store the data in EBS

Answer: C

Notes: S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per-GB storage price and a per-GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files. S3 Storage Classes can be configured at the object level, and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.

Reference: S3 Standard-IA

Q84: After an IT Steering Committee meeting, you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies, such as VPN and Direct Connect, and based on the requirements you have decided to configure a VPN connection. What features and advantages can a VPN connection provide?

A VPN provides a connection between an on-premises network and a VPC, using a secure and private connection with IPsec and TLS.

A VPC/VPN Connection utilizes IPSec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN Connections can be configured in minutes and are a good solution if you have an immediate need, have low-to-modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity.

AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources or your on-premises network. With AWS Client VPN, you configure an endpoint to which your users can connect to establish a secure TLS VPN session. This enables clients to access resources in AWS or on-premises from any location using an OpenVPN-based VPN client.

You can create an IPsec VPN connection between your VPC and your remote network. On the AWS side of the Site-to-Site VPN connection, a virtual private gateway or transit gateway provides two VPN endpoints (tunnels) for automatic failover. You configure your customer gateway device on the remote side of the Site-to-Site VPN connection.

 

Q86: Your company has decided to go to a hybrid cloud environment. Part of this effort will be to move a large data warehouse to the cloud. The warehouse is 50TB, and will take over a month to migrate given the current bandwidth available. What is the best option available to perform this migration considering both cost and performance aspects?

AWS Snowball Edge.

The AWS Snowball Edge is a type of Snowball device with on-board storage and compute power for select AWS capabilities. Snowball Edge can undertake local processing and edge-computing workloads in addition to transferring data between your local environment and the AWS Cloud.

Each Snowball Edge device can transport data at speeds faster than the internet. This transport is done by shipping the data in the appliances through a regional carrier. The appliances are rugged shipping containers, complete with E Ink shipping labels. The AWS Snowball Edge device differs from the standard Snowball because it can bring the power of the AWS Cloud to your on-premises location, with local storage and compute functionality.

Snowball Edge devices have three options for device configurations: storage optimized, compute optimized, and with GPU. When this guide refers to Snowball Edge devices, it’s referring to all options of the device. Whenever specific information applies to only one or more optional configurations of devices, like how the Snowball Edge with GPU has an on-board GPU, it will be called out. For more information, see Snowball Edge Device Options.  

Q87: You have been assigned the review of the security in your company AWS cloud environment. Your final deliverable will be a report detailing potential security issues. One of the first things that you need to describe is the responsibilities of the company under the shared responsibility module. Which measure is the customer’s responsibility?

EC2 instance OS Patching

Notes: Security and compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer's operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The customer assumes responsibility for, and management of, the guest operating system (including updates and security patches), other associated application software, and the configuration of the AWS-provided security group firewall. Customers should carefully consider the services they choose, as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations. The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment. This differentiation of responsibility is commonly referred to as Security "of" the Cloud versus Security "in" the Cloud.

Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.

Q88: You work for a busy real estate company, and you need to protect your data stored on S3 from accidental deletion. Which of the following actions might you take to achieve this? Choose 2

A. Create a bucket policy that prohibits anyone from deleting things from the bucket.
B. Enable S3 – Infrequent Access Storage (S3 – IA).
C. Enable versioning on the bucket. If a file is accidentally deleted, delete the delete marker.
D. Configure MFA-protected API access.
E. Use pre-signed URL’s so that users will not be able to accidentally delete data.


Answer: C and D
Notes: The best answers are to allow versioning on the bucket and to protect the objects by configuring MFA-protected API access.
Reference: https://docs.aws.amazon.com/s3
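The delete-marker behavior behind answer C can be illustrated with a toy model of a versioned bucket (this is a pure-Python simulation, not the S3 API; the key name is hypothetical):

```python
# Toy model of S3 versioning: a simple DELETE adds a delete marker
# instead of removing data, and removing the marker restores the object.
class VersionedBucket:
    def __init__(self):
        self.versions = {}  # key -> list of versions (newest last)

    def put(self, key, body):
        self.versions.setdefault(key, []).append(body)

    def delete(self, key):
        # On a versioned bucket, DELETE inserts a delete marker (None here).
        self.versions.setdefault(key, []).append(None)

    def remove_delete_marker(self, key):
        if self.versions.get(key) and self.versions[key][-1] is None:
            self.versions[key].pop()

    def get(self, key):
        versions = self.versions.get(key, [])
        return versions[-1] if versions else None  # None == "not found"

b = VersionedBucket()
b.put("listing.pdf", "v1 contents")
b.delete("listing.pdf")
print(b.get("listing.pdf"))          # None: the object appears deleted
b.remove_delete_marker("listing.pdf")
print(b.get("listing.pdf"))          # v1 contents: object restored
```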

Q89: AWS intends to shut down your spot instance; which of these scenarios is possible? Choose 3

A. AWS sends a notification of termination and you receive it 120 seconds before the intended forced shutdown.

B. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, and you delay it by sending a ‘Delay300’ instruction before the forced shutdown takes effect.

C. AWS sends a notification of termination and you receive it 120 seconds before the intended forced shutdown, but AWS does not action the shutdown.

D. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, but you block the shutdown because you used ‘Termination Protection’ when you initialized the instance.

E. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, but the defined duration period (also known as Spot blocks) hasn’t ended yet.

F. AWS sends a notification of termination, but you do not receive it within the 120 seconds and the instance is shutdown.


Answer: A E and F
Notes: When Amazon EC2 is going to interrupt your Spot Instance, it emits an event two minutes prior to the actual interruption (except for hibernation, which gets the interruption notice, but not two minutes in advance because hibernation begins immediately).

In rare situations, Spot blocks may be interrupted due to Amazon EC2 capacity needs. In these cases, AWS provides a two-minute warning before the instance is terminated, and customers are not charged for the terminated instances even if they have used them.

It is possible that your Spot Instance is terminated before the warning can be made available.
Reference: https://docs.aws.amazon.com/ec2

Q90: What does the “EAR” in a policy document stand for?

A. Effects, APIs, Roles
B. Effect, Action, Resource
C. Ewoks, Always, Romanticize
D. Every, Action, Reasonable


Answer: B.
Notes: The elements included in a policy document that make up the “EAR” are effect, action, and resource.
Reference:  Policies and Permissions in IAM 

Q91: _____ provides real-time streaming of data.

A. Kinesis Data Analytics
B. Kinesis Data Firehose
C. Kinesis Data Streams
D. SQS


Answer: C
Notes: Kinesis Data Streams offers real-time data streaming
Reference: Amazon Kinesis Data Streams – 

Q92: You can use _____ to build a schema for your data, and _____ to query the data that's stored in S3.

A. Glue, Athena
B. EC2, SQS
C. EC2, Glue
D. Athena, Lambda


Answer: A
Notes: AWS Glue can build a schema for your data, and Amazon Athena can query the data stored in S3.
Reference: AWS Glue and Amazon Athena

Q93: What type of work does EMR perform?

A. Data processing information (DPI) jobs.
B. Big data (BD) jobs.
C. Extract, transform, and load (ETL) jobs.
D. Huge amounts of data (HAD) jobs


Answer: C
Notes: EMR excels at extract, transform, and load (ETL) jobs.
Reference: Amazon EMR – https://aws.amazon.com/emr/

Q94: _____ allows you to transform data using SQL as it’s being passed through Kinesis.

A. RDS
B. Kinesis Data Analytics
C. Redshift
D. DynamoDB


Answer: B
Notes: Kinesis Data Analytics allows you to transform data using SQL.
Reference: Amazon Kinesis Data Analytics –

Q95 [SAA-C03]: A company runs a public-facing three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances for the application tier running in private subnets need to download software patches from the internet. However, the EC2 instances cannot be directly accessible from the internet. Which actions should be taken to allow the EC2 instances to download the needed patches? (Select TWO.)

A. Configure a NAT gateway in a public subnet.
B. Define a custom route table with a route to the NAT gateway for internet traffic and associate it with the private subnets for the application tier.
C. Assign Elastic IP addresses to the EC2 instances.
D. Define a custom route table with a route to the internet gateway for internet traffic and associate it with the private subnets for the application tier.
E. Configure a NAT instance in a private subnet.


Answer: A. B.
Notes: A NAT gateway forwards traffic from the EC2 instances in the private subnet to the internet or other AWS services, and then sends the response back to the instances. After a NAT gateway is created, the route tables for private subnets must be updated to point internet traffic to the NAT gateway.

Reference: NAT Gateway – 
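The route-table change in answer B can be sketched as data (all IDs and CIDR ranges below are hypothetical): local VPC traffic stays local, while the default route sends everything else to the NAT gateway in the public subnet.

```python
# Route-table shape for the private application-tier subnets.
private_route_table = {
    "Routes": [
        {"DestinationCidrBlock": "10.0.0.0/16", "Target": "local"},
        {"DestinationCidrBlock": "0.0.0.0/0", "Target": "nat-0123456789abcdef0"},
    ],
    "Associations": ["subnet-app-a", "subnet-app-b"],  # the private subnets
}

# The default route (0.0.0.0/0) is what carries the patch downloads out.
default_route = next(
    r for r in private_route_table["Routes"]
    if r["DestinationCidrBlock"] == "0.0.0.0/0"
)
print(default_route["Target"])  # nat-0123456789abcdef0
```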

Q96 [SAA-C03]: A solutions architect wants to design a solution to save costs for Amazon EC2 instances that do not need to run during a 2-week company shutdown. The applications running on the EC2 instances store data in instance memory that must be present when the instances resume operation. Which approach should the solutions architect recommend to shut down and resume the EC2 instances?

A. Modify the application to store the data on instance store volumes. Reattach the volumes while restarting them.
B. Snapshot the EC2 instances before stopping them. Restore the snapshot after restarting the instances.
C. Run the applications on EC2 instances enabled for hibernation. Hibernate the instances before the 2- week company shutdown.
D. Note the Availability Zone for each EC2 instance before stopping it. Restart the instances in the same Availability Zones after the 2-week company shutdown.


Answer: C.
Notes: Hibernating an EC2 instance saves the contents of instance memory to the Amazon Elastic Block Store (Amazon EBS) root volume. When the instance restarts, the instance memory contents are reloaded.

Reference: Hibernating – 

 

AWS SAA-C02 SAA-C03 Exam Prep
AWS SAA-C02 SAA-C03 Exam Prep

Q97 [SAA-C03]: A company plans to run a monitoring application on an Amazon EC2 instance in a VPC. Connections are made to the EC2 instance using the instance’s private IPv4 address. A solutions architect needs to design a solution that will allow traffic to be quickly directed to a standby EC2 instance if the application fails and becomes unreachable. Which approach will meet these requirements?

A) Deploy an Application Load Balancer configured with a listener for the private IP address and register the primary EC2 instance with the load balancer. Upon failure, de-register the instance and register the standby EC2 instance.
B) Configure a custom DHCP option set. Configure DHCP to assign the same private IP address to the standby EC2 instance when the primary EC2 instance fails.
C) Attach a secondary elastic network interface to the EC2 instance configured with the private IP address. Move the network interface to the standby EC2 instance if the primary EC2 instance becomes unreachable.
D) Associate an Elastic IP address with the network interface of the primary EC2 instance. Disassociate the Elastic IP from the primary instance upon failure and associate it with a standby EC2 instance.


Answer: C.
Notes: A secondary elastic network interface can be added to an EC2 instance. While primary network interfaces cannot be detached from an instance, secondary network interfaces can be detached and attached to a different EC2 instance.

Reference: A secondary elastic network interface – 

Q98 [SAA-C03]: An analytics company is planning to offer a web analytics service to its users. The service will require that the users’ webpages include a JavaScript script that makes authenticated GET requests to the company’s Amazon S3 bucket. What must a solutions architect do to ensure that the script will successfully execute?

A. Enable cross-origin resource sharing (CORS) on the S3 bucket.
B. Enable S3 Versioning on the S3 bucket.
C. Provide the users with a signed URL for the script.
D. Configure an S3 bucket policy to allow public execute privileges.


Answer: A.
Notes: Web browsers will block running a script that originates from a server with a domain name that is different from the webpage. Amazon S3 can be configured with CORS to send HTTP headers that allow the script to run.

Reference: Amazon S3 can be configured with CORS – 
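A CORS rule for this scenario might look like the sketch below, in the shape S3 expects for a bucket's CORS configuration (the allowed origin is a hypothetical customer site):

```python
# With this configuration on the bucket, the browser's origin checks
# permit the analytics script's authenticated GET requests.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": ["https://customer-site.example.com"],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["Authorization"],
            "MaxAgeSeconds": 3000,
        }
    ]
}
print(cors_configuration["CORSRules"][0]["AllowedMethods"])  # ['GET']
```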

Q99 [SAA-C03]: A company’s security team requires that all data stored in the cloud be encrypted at rest at all times using encryption keys stored on premises. Which encryption options meet these requirements? (Select TWO.)

A. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3).
B. Use server-side encryption with AWS KMS managed encryption keys (SSE-KMS).
C. Use server-side encryption with customer-provided encryption keys (SSE-C).
D. Use client-side encryption to provide at-rest encryption.
E. Use an AWS Lambda function invoked by Amazon S3 events to encrypt the data using the customer’s keys.


Answer: C. D.
Notes: Server-side encryption with customer-provided keys (SSE-C) enables Amazon S3 to encrypt objects on the server side using an encryption key provided in the PUT request. The same key must be provided in the GET requests for Amazon S3 to decrypt the object. Customers also have the option to encrypt data on the client side before uploading it to Amazon S3, and then they can decrypt the data after downloading it. AWS software development kits (SDKs) provide an S3 encryption client that streamlines the process.

Reference: Server-side encryption with customer-provided keys (SSE-C) – 
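With SSE-C, the client sends the key with each request and S3 discards it after use. A minimal sketch of the three request headers an SSE-C PUT (and the matching GET) must carry, using a locally generated key:

```python
import base64
import hashlib
import os

# 256-bit customer-managed key, kept on premises in the real scenario.
key = os.urandom(32)

sse_c_headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
    "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
        hashlib.md5(key).digest()   # integrity check on the transmitted key
    ).decode(),
}
print(sse_c_headers["x-amz-server-side-encryption-customer-algorithm"])
```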

Q100 [SAA-C03]: A company uses Amazon EC2 Reserved Instances to run its data processing workload. The nightly job typically takes 7 hours to run and must finish within a 10-hour time window. The company anticipates temporary increases in demand at the end of each month that will cause the job to run over the time limit with the capacity of the current resources. Once started, the processing job cannot be interrupted before completion. The company wants to implement a solution that would provide increased resource capacity as cost-effectively as possible. What should a solutions architect do to accomplish this?

A) Deploy On-Demand Instances during periods of high demand.
B) Create a second EC2 reservation for additional instances.
C) Deploy Spot Instances during periods of high demand.
D) Increase the EC2 instance size in the EC2 reservation to support the increased workload.


Answer: A.
Notes: While Spot Instances would be the least costly option, they are not suitable for jobs that cannot be interrupted or must complete within a certain time period. On-Demand Instances would be billed for the number of seconds they are running.

Reference: Spot Instances –  On-Demand instances

Q101 [SAA-C03]: A company runs an online voting system for a weekly live television program. During broadcasts, users submit hundreds of thousands of votes within minutes to a front-end fleet of Amazon EC2 instances that run in an Auto Scaling group. The EC2 instances write the votes to an Amazon RDS database. However, the database is unable to keep up with the requests that come from the EC2 instances. A solutions architect must design a solution that processes the votes in the most efficient manner and without downtime. Which solution meets these requirements?

A. Migrate the front-end application to AWS Lambda. Use Amazon API Gateway to route user requests to the Lambda functions.
B. Scale the database horizontally by converting it to a Multi-AZ deployment. Configure the front-end application to write to both the primary and secondary DB instances.
C. Configure the front-end application to send votes to an Amazon Simple Queue Service (Amazon SQS) queue. Provision worker instances to read the SQS queue and write the vote information to the database.
D. Use Amazon EventBridge (Amazon CloudWatch Events) to create a scheduled event to re-provision the database with larger, memory optimized instances during voting periods. When voting ends, re-provision the database to use smaller instances.


Answer: C.
Notes: Decouple the ingestion of votes from the database to allow the voting system to continue processing votes without waiting for the database writes. Add dedicated workers to read from the SQS queue to allow votes to be entered into the database at a controllable rate. The votes will be added to the database as fast as the database can process them, but no votes will be lost.

Reference: Decouple – 
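The decoupling pattern can be sketched with a local queue standing in for SQS (this is a toy single-process model, not the AWS API): the front end enqueues votes instantly, and a worker drains the queue into the database at whatever rate the database can sustain.

```python
import queue

votes = queue.Queue()   # stands in for the SQS queue
database = []           # stands in for the RDS table

def front_end_submit(vote):
    votes.put(vote)     # fast: never blocked by the database

def worker_drain():
    while not votes.empty():
        database.append(votes.get())  # write at the database's own pace

for v in ["candidate-a", "candidate-b", "candidate-a"]:
    front_end_submit(v)
worker_drain()
print(len(database))  # 3 — no votes lost
```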

Q102 [SAA-C03]: A company has a two-tier application architecture that runs in public and private subnets. Amazon EC2 instances running the web application are in the public subnet and an EC2 instance for the database runs on the private subnet. The web application instances and the database are running in a single Availability Zone (AZ). Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO.)

A. Create new public and private subnets in the same AZ.
B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs for the web application instances.
C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer.
D. Create new public and private subnets in a new AZ. Create a database using an EC2 instance in the public subnet in the new AZ. Migrate the old database contents to the new database.
E. Create new public and private subnets in the same VPC, each in a new AZ. Create an Amazon RDS Multi-AZ DB instance in the private subnets. Migrate the old database contents to the new DB instance.


Answer: B. E.
Notes: Create new subnets in a new Availability Zone (AZ) to provide a redundant network. Create an Auto Scaling group with instances in two AZs behind the load balancer to ensure high availability of the web application and redistribution of web traffic between the two public AZs. Create an RDS DB instance in the two private subnets to make the database tier highly available too.

Reference: Auto Scaling group with instances in two AZs behind the load balancer – 

Q103 [SAA-C03]: A website runs a custom web application that receives a burst of traffic each day at noon. The users upload new pictures and content daily, but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the application consistently takes 1 minute to initiate upon boot up before responding to user requests. How should a solutions architect redesign the architecture to better respond to changing traffic?

A. Configure a Network Load Balancer with a slow start configuration.
B. Configure Amazon ElastiCache for Redis to offload direct requests from the EC2 instances.
C. Configure an Auto Scaling step scaling policy with an EC2 instance warmup condition.
D. Configure Amazon CloudFront to use an Application Load Balancer as the origin.


Answer: C.
Notes: The current configuration puts new EC2 instances into service before they are able to respond to transactions. This could also cause the instances to overscale. With a step scaling policy, you can specify the number of seconds that it takes for a newly launched instance to warm up. Until its specified warm-up time has expired, an EC2 instance is not counted toward the aggregated metrics of the Auto Scaling group. While scaling out, the Auto Scaling logic does not consider EC2 instances that are warming up as part of the current capacity of the Auto Scaling group. Therefore, multiple alarm breaches that fall in the range of the same step adjustment result in a single scaling activity. This ensures that you do not add more instances than you need.

Reference: Step scaling policy
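The warm-up behavior described above can be sketched in a few lines. This is a simplified model with hypothetical numbers, not the actual Auto Scaling implementation: instances still inside their warm-up window are excluded from the group's aggregated metrics, yet they do count toward pending capacity during scale-out, so a second alarm breach in the same step launches nothing extra.

```python
WARMUP_SECONDS = 60  # matches the app's 1-minute boot time

def effective_capacity(instances, now):
    """Count only instances whose warm-up window has expired."""
    return sum(1 for launched_at in instances if now - launched_at >= WARMUP_SECONDS)

def desired_add(step_adjustment, instances, target_capacity, now):
    """Instances to launch for one alarm breach of a step adjustment.

    Warming-up instances already count toward pending capacity for
    scale-out, so repeated breaches in the same step add nothing.
    """
    pending = len(instances)  # includes instances that are still warming up
    return max(0, min(step_adjustment, target_capacity - pending))

now = 100.0
instances = [10.0, 95.0]      # one warm instance, one still warming up
assert effective_capacity(instances, now) == 1

# First breach of a "+2 instances" step toward a target of 4:
assert desired_add(2, instances, target_capacity=4, now=now) == 2
instances += [now, now]       # the two new instances begin warming up

# A second breach in the same step results in no additional launches:
assert desired_add(2, instances, target_capacity=4, now=now) == 0
```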

Q104 [SAA-C03]: An application running on AWS uses an Amazon Aurora Multi-AZ DB cluster deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database. What should the solutions architect do to separate the read requests from the write requests?

A. Enable read-through caching on the Aurora database.
B. Update the application to read from the Multi-AZ standby instance.
C. Create an Aurora replica and modify the application to use the appropriate endpoints.
D. Create a second Aurora database and link it to the primary database as a read replica.

Answer: C.
Notes: Aurora Replicas provide a way to offload read traffic. Aurora Replicas share the same underlying storage as the main database, so lag time is generally very low. Aurora Replicas have their own endpoints, so the application will need to be configured to direct read traffic to the new endpoints.
Reference: Aurora Replicas 
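As a rough illustration of "modify the application to use the appropriate endpoints," the sketch below routes read statements to a reader endpoint and everything else to the writer endpoint. The endpoint strings are placeholders, and the SQL classification is deliberately naive:

```python
# Placeholder endpoint names; a real Aurora cluster exposes a writer
# (cluster) endpoint and a separate reader endpoint.
WRITER_ENDPOINT = "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "mycluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(sql: str) -> str:
    """Send SELECT statements to the reader, everything else to the writer."""
    is_read = sql.lstrip().lower().startswith("select")
    return READER_ENDPOINT if is_read else WRITER_ENDPOINT

assert endpoint_for("SELECT * FROM orders") == READER_ENDPOINT
assert endpoint_for("UPDATE orders SET status = 'shipped'") == WRITER_ENDPOINT
```

Because Aurora Replicas share the cluster's storage volume, reads served through the reader endpoint see near-current data while the writer is relieved of read I/O.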

Question 106: A company plans to migrate its on-premises workload to AWS. The current architecture is composed of a Microsoft SharePoint server that uses a Windows shared file storage. The Solutions Architect needs to use a cloud storage solution that is highly available and can be integrated with Active Directory for access control and authentication. Which of the following options can satisfy the given requirement?

A. Create a file system using Amazon EFS and join it to an Active Directory domain.
B. Create a file system using Amazon FSx for Windows File Server and join it to an Active Directory domain in AWS.
C. Create a Network File System (NFS) file share using AWS Storage Gateway.
D. Launch an Amazon EC2 Windows Server to mount a new S3 bucket as a file volume.


Answer: B.
Notes: Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. Amazon FSx is accessible from Windows, Linux, and macOS compute instances and devices, and thousands of compute instances and devices can access a file system concurrently.
Reference: FSx

Question 108: A Forex trading platform, which frequently processes and stores global financial data every minute, is hosted in your on-premises data center and uses an Oracle database. Due to a recent cooling problem in its data center, the company urgently needs to migrate its infrastructure to AWS to improve the performance of its applications. As the Solutions Architect, you are responsible for ensuring that the database is properly migrated and remains available in case of a database server failure in the future. Which of the following is the most suitable solution to meet the requirement?

A. Create an Oracle database in RDS with Multi-AZ deployments.
B. Launch an Oracle database instance in RDS with Recovery Manager (RMAN) enabled.
C. Launch an Oracle Real Application Clusters (RAC) in RDS.
D. Convert the database schema using the AWS Schema Conversion Tool and AWS Database Migration Service. Migrate the Oracle database to a non-cluster Amazon Aurora with a single instance.


Answer: A.
Notes: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable.
Reference: RDS Multi AZ
Category: Design Resilient Architectures

Question 109: A data analytics company, which uses machine learning to collect and analyze consumer data, is using an Amazon Redshift cluster as its data warehouse. You are instructed to implement a disaster recovery plan for its systems to ensure business continuity even in the event of an AWS Region outage. Which of the following is the best approach to meet this requirement?

A. Do nothing because Amazon Redshift is a highly available, fully-managed data warehouse which can withstand an outage of an entire AWS region.
B. Enable Cross-Region Snapshots Copy in your Amazon Redshift Cluster.
C. Create a scheduled job that will automatically take the snapshot of your Redshift Cluster and store it to an S3 bucket. Restore the snapshot in case of an AWS region outage.
D. Use Automated snapshots of your Redshift Cluster.


Answer: B.
Notes: You can configure Amazon Redshift to copy snapshots for a cluster to another region. To configure cross-region snapshot copy, you need to enable this copy feature for each cluster and configure where to copy snapshots and how long to keep copied automated snapshots in the destination region. When cross-region copy is enabled for a cluster, all new manual and automatic snapshots are copied to the specified region.
Reference: Redshift Snapshots

Category: Design Resilient Architectures


Question 109: A start-up company has an EC2 instance that is hosting a web application. The volume of users is expected to grow in the coming months, and hence you need to add more elasticity and scalability to your AWS architecture to cope with the demand. Which of the following options can satisfy the above requirement for the given scenario? (Select TWO.)

A. Set up two EC2 instances and then put them behind an Elastic Load balancer (ELB).
B. Set up two EC2 instances deployed using Launch Templates and integrated with AWS Glue.
C. Set up an S3 Cache in front of the EC2 instance.
D. Set up two EC2 instances and use Route 53 to route traffic based on a Weighted Routing Policy.
E. Set up an AWS WAF behind your EC2 Instance.


Answer: A. D.
Notes: Using an Elastic Load Balancer is an ideal way to add elasticity to your application. Alternatively, you can create a Route 53 weighted routing policy to distribute the traffic evenly across two or more EC2 instances. Hence, setting up two EC2 instances behind an Elastic Load Balancer (ELB), and setting up two EC2 instances with Route 53 weighted routing, are the correct answers.
Reference: Elastic Load Balancing
Category: Design Resilient Architectures

Question 110: A company plans to deploy a Docker-based batch application in AWS. The application will be used to process both mission-critical data as well as non-essential batch jobs. Which of the following is the most cost-effective option to use in implementing this architecture?

A. Use ECS as the container management service then set up Reserved EC2 Instances for processing both mission-critical and non-essential batch jobs.
B. Use ECS as the container management service then set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively.
C. Use ECS as the container management service then set up On-Demand EC2 Instances for processing both mission-critical and non-essential batch jobs.
D. Use ECS as the container management service then set up Spot EC2 Instances for processing both mission-critical and non-essential batch jobs.


Answer: B.
Notes: Amazon ECS lets you run batch workloads with managed or custom schedulers on Amazon EC2 On-Demand Instances, Reserved Instances, or Spot Instances, so you can combine purchase options to build a cost-effective architecture for your workload: Reserved Instances for processing the mission-critical data and Spot Instances for the non-essential batch jobs.

There are two charge models for Amazon Elastic Container Service (ECS). With the Fargate launch type, you pay for the amount of vCPU and memory resources that your containerized application requests. With the EC2 launch type, there is no additional ECS charge; you pay only for the AWS resources (e.g., EC2 instances or EBS volumes) you create to store and run your application, with no minimum fees and no upfront commitments.

In this scenario, the most cost-effective solution is to use ECS with a combination of Reserved and Spot EC2 Instances for the mission-critical and non-essential batch jobs respectively. You can also use Scheduled Reserved Instances, which let you purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. This ensures uninterrupted compute capacity for the mission-critical batch jobs.
Reference: Amazon ECS

Category: Design Resilient Architectures
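A back-of-the-envelope comparison makes the cost argument concrete. All hourly rates below are assumptions for illustration only; real prices vary by instance type and Region:

```python
# Hypothetical hourly rates (assumptions, not real AWS prices):
ON_DEMAND = 0.10   # $/hour
RESERVED = 0.06    # $/hour effective
SPOT = 0.03        # $/hour average

HOURS = 730                  # roughly one month
CRITICAL_INSTANCES = 4       # mission-critical batch jobs (need guaranteed capacity)
NON_ESSENTIAL_INSTANCES = 4  # interruptible batch jobs (can tolerate Spot reclaims)

all_on_demand = (CRITICAL_INSTANCES + NON_ESSENTIAL_INSTANCES) * ON_DEMAND * HOURS
all_reserved = (CRITICAL_INSTANCES + NON_ESSENTIAL_INSTANCES) * RESERVED * HOURS
mixed = (CRITICAL_INSTANCES * RESERVED + NON_ESSENTIAL_INSTANCES * SPOT) * HOURS

# Mixing purchase options undercuts any single option for this workload:
assert mixed < all_reserved < all_on_demand
print(f"on-demand ${all_on_demand:.0f}, reserved ${all_reserved:.0f}, mixed ${mixed:.0f}")
```

The mix wins because Spot discounts apply only where interruption is acceptable, while Reserved capacity protects the mission-critical jobs.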


Question 111: A company has recently adopted a hybrid cloud architecture and is planning to migrate a database hosted on-premises to AWS. The database currently has over 50 TB of consumer data, handles highly transactional (OLTP) workloads, and is expected to grow. The Solutions Architect should ensure that the database is ACID-compliant and can handle complex queries of the application. Which type of database service should the Architect use?

A. Amazon DynamoDB
B. Amazon RDS
C. Amazon Redshift
D. Amazon Aurora


Answer: D.
Notes: Amazon Aurora (Aurora) is a fully managed relational database engine that’s compatible with MySQL and PostgreSQL. You already know how MySQL and PostgreSQL combine the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. The code, tools, and applications you use today with your existing MySQL and PostgreSQL databases can be used with Aurora. With some workloads, Aurora can deliver up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. Aurora includes a high-performance storage subsystem. Its MySQL- and PostgreSQL-compatible database engines are customized to take advantage of that fast distributed storage. The underlying storage grows automatically as needed, up to 64 tebibytes (TiB). Aurora also automates and standardizes database clustering and replication, which are typically among the most challenging aspects of database configuration and administration.
Reference: Aurora
Category: Design Resilient Architectures

Question 112: An online stocks trading application that stores financial data in an S3 bucket has a lifecycle policy that moves older data to Glacier every month. There is a strict compliance requirement where a surprise audit can happen at any time and you should be able to retrieve the required data in under 15 minutes under all circumstances. Your manager instructed you to ensure that retrieval capacity is available when you need it and should handle up to 150 MB/s of retrieval throughput. Which of the following should you do to meet the above requirement? (Select TWO.)

A. Retrieve the data using Amazon Glacier Select.
B. Use Bulk Retrieval to access the financial data.
C. Purchase provisioned retrieval capacity.
D. Use Expedited Retrieval to access the financial data.
E. Specify a range, or portion, of the financial data archive to retrieve.


Answer: C. D.
Notes: Expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required. For all but the largest archives (250 MB+), data accessed using Expedited retrievals is typically made available within 1–5 minutes.

Provisioned capacity ensures that retrieval capacity for Expedited retrievals is available when you need it. Each unit of capacity allows at least three Expedited retrievals every five minutes and provides up to 150 MB/s of retrieval throughput. If you have purchased provisioned capacity, all Expedited retrievals are automatically served through it. To make an Expedited, Standard, or Bulk retrieval, set the Tier parameter in the Initiate Job (POST jobs) REST API request to the option you want, or the equivalent in the AWS CLI or AWS SDKs.

Without provisioned capacity, Expedited retrievals are usually accepted, except in rare situations of unusually high demand. If you require access to Expedited retrievals under all circumstances, as in this surprise-audit scenario, you must purchase provisioned retrieval capacity.
Reference: Amazon Glacier
Category: Design Resilient Architectures
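Using the figures quoted above (each capacity unit supports at least three Expedited retrievals every five minutes and up to 150 MB/s of throughput), a sizing sketch is straightforward. The heavier-workload numbers in the second assertion are illustrative assumptions:

```python
import math

MBPS_PER_UNIT = 150               # retrieval throughput per capacity unit
RETRIEVALS_PER_UNIT_PER_5MIN = 3  # expedited retrievals per unit per 5 minutes

def units_needed(required_mbps, retrievals_per_5min):
    """Provisioned capacity units required to satisfy both limits."""
    by_throughput = math.ceil(required_mbps / MBPS_PER_UNIT)
    by_requests = math.ceil(retrievals_per_5min / RETRIEVALS_PER_UNIT_PER_5MIN)
    return max(by_throughput, by_requests)

# The audit scenario above: up to 150 MB/s and a handful of retrievals.
assert units_needed(150, 3) == 1
# A heavier, hypothetical workload would need more units:
assert units_needed(400, 10) == 4
```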

Question 113: An organization stores and manages financial records of various companies in its on-premises data center, which is almost out of space. The management decided to move all of their existing records to a cloud storage service. All future financial records will also be stored in the cloud. For additional security, all records must be prevented from being deleted or overwritten. Which of the following should you do to meet the above requirement?
A. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon S3 and enable object lock.
B. Use AWS DataSync to move the data. Store all of your data in Amazon EFS and enable object lock.
C. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon EBS and enable object lock.
D. Use AWS DataSync to move the data. Store all of your data in Amazon S3 and enable object lock.


Answer: D.
Notes: AWS DataSync allows you to copy large datasets with millions of files, without having to build custom solutions with open source tools, or license and manage expensive commercial network acceleration software. You can use DataSync to migrate active data to AWS, transfer data to the cloud for analysis and processing, archive data to free up on-premises storage capacity, or replicate data to AWS for business continuity. AWS DataSync enables you to migrate your on-premises data to Amazon S3, Amazon EFS, and Amazon FSx for Windows File Server. You can configure DataSync to make an initial copy of your entire dataset, and schedule subsequent incremental transfers of changing data towards Amazon S3. Enabling S3 Object Lock prevents your existing and future records from being deleted or overwritten. AWS DataSync is primarily used to migrate existing data to Amazon S3. On the other hand, AWS Storage Gateway is more suitable if you still want to retain access to the migrated data and for ongoing updates from your on-premises file-based applications.
Reference: AWS DataSync – https://aws.amazon.com/datasync/faqs/
Category: Design Secure Applications and Architectures

Question 114: A solutions architect is designing a solution to run a containerized web application by using Amazon Elastic Container Service (Amazon ECS). The solutions architect wants to minimize cost by running multiple copies of a task on each container instance. The number of task copies must scale as the load increases and decreases. Which routing solution distributes the load to the multiple tasks?

A. Configure an Application Load Balancer to distribute the requests by using path-based routing.
B. Configure an Application Load Balancer to distribute the requests by using dynamic host port mapping.
C. Configure an Amazon Route 53 alias record set to distribute the requests with a failover routing policy.
D. Configure an Amazon Route 53 alias record set to distribute the requests with a weighted routing policy.


Answer: B.
Notes: With dynamic host port mapping, multiple tasks from the same service are allowed for each container instance. You can use weighted routing policies to route traffic to instances at proportions that you specify. You cannot use weighted routing policies to manage multiple tasks on a single container.
Reference: Choosing a routing policy
Category: Design Cost-Optimized Architectures
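In an ECS task definition, dynamic host port mapping is enabled by setting `hostPort` to 0 (or omitting it) so ECS assigns a random ephemeral host port per task. The fragment below models that setting as a plain Python dict; the container name and image are hypothetical:

```python
# Sketch of a container definition fragment enabling dynamic host port
# mapping, so multiple task copies can share one container instance
# behind an ALB target group.
container_definition = {
    "name": "web",                      # hypothetical container name
    "image": "example/web-app:latest",  # hypothetical image
    "portMappings": [
        {
            "containerPort": 8080,  # port the app listens on inside the container
            "hostPort": 0,          # 0 = let ECS pick an ephemeral host port
            "protocol": "tcp",
        }
    ],
}

# With a fixed hostPort, only one task per instance could bind that port.
mapping = container_definition["portMappings"][0]
assert mapping["hostPort"] == 0
```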

Question 115: A Solutions Architect needs to deploy a mobile application that can collect votes for a popular singing competition. Millions of users from around the world will submit votes using their mobile phones. These votes must be collected and stored in a highly scalable and highly available data store which will be queried for real-time ranking. Which of the following combination of services should the architect use to meet this requirement?
A. Amazon Redshift and AWS Mobile Hub
B. Amazon DynamoDB and AWS AppSync
C. Amazon Relational Database Service (RDS) and Amazon MQ
D. Amazon Aurora and Amazon Cognito


Answer: B.
Notes: The scenario requires a highly scalable and highly available data store that can be queried for real-time ranking, and among the options Amazon DynamoDB is the best fit. DynamoDB is a durable, scalable, and highly available data store that can be used for real-time tabulation. You can also use AWS AppSync with DynamoDB to make it easy to build collaborative apps that keep shared data updated in real time: you specify the data for your app with simple code statements, and AWS AppSync manages everything needed to keep the app data updated in real time. This allows your app to access data in Amazon DynamoDB, trigger AWS Lambda functions, or run Amazon Elasticsearch queries, and combine data from these services to provide exactly the data you need for your app.

Reference: Amazon DynamoDB FAQs – https://aws.amazon.com/dynamodb/faqs/
Category: Design High-Performing Architectures


Question 116: The usage of a company’s image-processing application is increasing suddenly with no set pattern. The application’s processing time grows linearly with the size of the image. The processing can take up to 20 minutes for large image files. The architecture consists of a web tier, an Amazon Simple Queue Service (Amazon SQS) standard queue, and message consumers that process the images on Amazon EC2 instances. When a high volume of requests occurs, the message backlog in Amazon SQS increases. Users are reporting the delays in processing. A solutions architect must improve the performance of the application in compliance with cloud best practices. Which solution will meet these requirements?

A. Purchase enough Dedicated Instances to meet the peak demand. Deploy the instances for the consumers.
B. Convert the existing SQS standard queue to an SQS FIFO queue. Increase the visibility timeout.
C. Configure a scalable AWS Lambda function as the consumer of the SQS messages.
D. Create a message consumer that is an Auto Scaling group of instances. Configure the Auto Scaling group to scale based upon the ApproximateNumberOfMessages Amazon CloudWatch metric.


Answer: D.
Notes: Scaling the consumer Auto Scaling group on the ApproximateNumberOfMessages CloudWatch metric adds processing capacity as the queue backlog grows, which directly addresses the delays and follows cloud best practices for queue-driven workloads. FIFO queues solve problems that occur when messages are processed out of order; they do not improve performance during sudden volume increases. Additionally, you cannot convert an SQS queue from standard to FIFO after you create it.
Reference: FIFO Queues
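The backlog-per-instance arithmetic behind answer D can be sketched as follows. The per-instance throughput and latency target are assumptions for illustration; in practice you would derive them from measured processing times:

```python
import math

MSGS_PER_INSTANCE_PER_MIN = 3   # assumed consumer throughput (large images are slow)
ACCEPTABLE_LATENCY_MIN = 10     # assumed target for draining the backlog

def desired_instances(approximate_number_of_messages):
    """Consumers needed to drain the SQS backlog within the latency target."""
    per_instance = MSGS_PER_INSTANCE_PER_MIN * ACCEPTABLE_LATENCY_MIN
    return max(1, math.ceil(approximate_number_of_messages / per_instance))

assert desired_instances(0) == 1      # keep a minimum of one consumer
assert desired_instances(90) == 3     # backlog of 90 messages -> three consumers
assert desired_instances(500) == 17
```

Wiring this into CloudWatch means alarming on ApproximateNumberOfMessages and letting the Auto Scaling group adjust its desired capacity accordingly.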

Question 117: An application is hosted on an EC2 instance with multiple EBS Volumes attached and uses Amazon Neptune as its database. To improve data security, you encrypted all of the EBS volumes attached to the instance to protect the confidential data stored in the volumes. Which of the following statements are true about encrypted Amazon Elastic Block Store volumes? (Select TWO.)


A. All data moving between the volume and the instance are encrypted.
B. Snapshots are automatically encrypted.
C. The volumes created from the encrypted snapshot are not encrypted.
D. Snapshots are not automatically encrypted.
E. Only the data in the volume is encrypted and not all the data moving between the volume and the instance.
Answer: A. B.
Notes: Amazon Elastic Block Store (Amazon EBS) provides block level storage volumes for use with EC2 instances. EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone. EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance. Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. You can encrypt both the boot and data volumes of an EC2 instance.
Reference: EBS

Question 118: A reporting application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. For complex reports, the application can take up to 15 minutes to respond to a request. A solutions architect is concerned that users will receive HTTP 5xx errors if a report request is in process during a scale-in event. What should the solutions architect do to ensure that user requests will be completed before instances are terminated?

A. Enable sticky sessions (session affinity) for the target group of the instances.
B. Increase the instance size in the Application Load Balancer target group.
C. Increase the cooldown period for the Auto Scaling group to a greater amount of time than the time required for the longest running responses.
D. Increase the deregistration delay timeout for the target group of the instances to greater than 900 seconds.

Answer: D.
Notes: By default, Elastic Load Balancing waits 300 seconds before completing the deregistration process, which allows in-flight requests to the target to complete. Because complex reports can take up to 15 minutes (900 seconds), increase the deregistration delay for the target group beyond 900 seconds so that long-running requests finish before the instance is deregistered and terminated.
Reference: Deregistration Delay.
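The sizing rule is simple arithmetic: the deregistration delay must exceed the longest possible in-flight request. The buffer below is an assumed safety margin:

```python
LONGEST_RESPONSE_SECONDS = 15 * 60   # complex reports take up to 15 minutes
BUFFER_SECONDS = 60                  # assumed safety margin

deregistration_delay = LONGEST_RESPONSE_SECONDS + BUFFER_SECONDS

# Must exceed both the 300 s default and the 900 s maximum response time:
assert deregistration_delay > 900
```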

Question 119: A company used Amazon EC2 Spot Instances for a demonstration that is now complete. A solutions architect must remove the Spot Instances to stop them from incurring cost. What should the solutions architect do to meet this requirement?

A. Cancel the Spot request only.
B. Terminate the Spot Instances only.
C. Cancel the Spot request. Terminate the Spot Instances.
D. Terminate the Spot Instances. Cancel the Spot request.


Answer: C.
Notes: To remove the Spot Instances, the appropriate steps are to cancel the Spot request and then to terminate the Spot Instances.
Reference: Spot Instances

Question 120: Which components are required to build a site-to-site VPN connection on AWS? (Select TWO.)
A. An Internet Gateway
B. A NAT gateway
C. A customer Gateway
D. A Virtual Private Gateway
E. Amazon API Gateway


Answer: C. D.
Notes: A virtual private gateway is attached to a VPC to create a site-to-site VPN connection on AWS. You can accept private encrypted network traffic from an on-premises data center into your VPC without the need to traverse the open public internet. A customer gateway is required for the VPN connection to be established. A customer gateway device is set up and configured in the customer’s data center.
Reference: What is AWS Site-to-Site VPN?

Question 121: A company runs its website on Amazon EC2 instances behind an Application Load Balancer that is configured as the origin for an Amazon CloudFront distribution. The company wants to protect against cross-site scripting and SQL injection attacks. Which approach should a solutions architect recommend to meet these requirements?

A. Enable AWS Shield Advanced. List the CloudFront distribution as a protected resource.
B. Define an AWS Shield Advanced policy in AWS Firewall Manager to block cross-site scripting and SQL injection attacks.
C. Set up AWS WAF on the CloudFront distribution. Use conditions and rules that block cross-site scripting and SQL injection attacks.
D. Deploy AWS Firewall Manager on the EC2 instances. Create conditions and rules that block cross-site scripting and SQL injection attacks.


Answer: C.
Notes: AWS WAF can detect the presence of SQL code that is likely to be malicious (known as SQL injection). AWS WAF also can detect the presence of a script that is likely to be malicious (known as cross-site scripting).
Reference: AWS WAF.

Question 122: A media company is designing a new solution for graphic rendering. The application requires up to 400 GB of storage for temporary data that is discarded after the frames are rendered. The application requires approximately 40,000 random IOPS to perform the rendering. What is the MOST cost-effective storage option for this rendering application?
A. A storage optimized Amazon EC2 instance with instance store storage
B. A storage optimized Amazon EC2 instance with a Provisioned IOPS SSD (io1 or io2) Amazon Elastic Block Store (Amazon EBS) volume
C. A burstable Amazon EC2 instance with a Throughput Optimized HDD (st1) Amazon Elastic Block Store (Amazon EBS) volume
D. A burstable Amazon EC2 instance with Amazon S3 storage over a VPC endpoint


Answer: A.
Notes: SSD-Backed Storage Optimized (i2) instances provide more than 365,000 random IOPS. The instance store has no additional cost, compared with the regular hourly cost of the instance.
Reference: Amazon EC2 pricing.

Question 123: A company is deploying a new application that will consist of an application layer and an online transaction processing (OLTP) relational database. The application must be available at all times. However, the application will have periods of inactivity. The company wants to pay the minimum for compute costs during these idle periods. Which solution meets these requirements MOST cost-effectively?
A. Run the application in containers with Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Use Amazon Aurora Serverless for the database.
B. Run the application on Amazon EC2 instances by using a burstable instance type. Use Amazon Redshift for the database.
C. Deploy the application and a MySQL database to Amazon EC2 instances by using AWS CloudFormation. Delete the stack at the beginning of the idle periods.
D. Deploy the application on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. Use Amazon RDS for MySQL for the database.


Answer: A.
Notes: When Amazon ECS uses Fargate for compute, it incurs no costs when the application is idle. Aurora Serverless also incurs no compute costs when it is idle.
Reference: AWS Fargate Pricing.


What are the five pillars of the AWS Well-Architected Framework?

AWS Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. Based on five pillars — operational excellence, security, reliability, performance efficiency, and cost optimization — AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures, and implement designs that can scale over time.

1. Operational Excellence

The operational excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. You can find prescriptive guidance on implementation in the Operational Excellence Pillar whitepaper.

2. Security
The security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. You can find prescriptive guidance on implementation in the Security Pillar whitepaper.


3. Reliability
The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. You can find prescriptive guidance on implementation in the Reliability Pillar whitepaper.

4. Performance Efficiency
The performance efficiency pillar includes the ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve. You can find prescriptive guidance on implementation in the Performance Efficiency Pillar whitepaper.

5. Cost Optimization
The cost optimization pillar includes the ability to avoid or eliminate unneeded cost or suboptimal resources. You can find prescriptive guidance on implementation in the Cost Optimization Pillar whitepaper.

The AWS Well-Architected Framework provides architectural best practices across the five pillars for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud.
The framework provides a set of questions that allows you to review an existing or proposed architecture. It also provides a set of AWS best practices for each pillar.
Using the Framework in your architecture helps you produce stable and efficient systems, which allows you to focus on functional requirements.


Other AWS Facts and Summaries and Questions/Answers Dump

  • AWS Certified Solution Architect Associate Exam Prep App
  • AWS S3 facts and summaries and Q&A Dump
  • AWS DynamoDB facts and summaries and Questions and Answers Dump
  • AWS EC2 facts and summaries and Questions and Answers Dump
  • AWS Serverless facts and summaries and Questions and Answers Dump
  • AWS Developer and Deployment Theory facts and summaries and Questions and Answers Dump
  • AWS IAM facts and summaries and Questions and Answers Dump
  • AWS Lambda facts and summaries and Questions and Answers Dump
  • AWS SQS facts and summaries and Questions and Answers Dump
  • AWS RDS facts and summaries and Questions and Answers Dump
  • AWS ECS facts and summaries and Questions and Answers Dump
  • AWS CloudWatch facts and summaries and Questions and Answers Dump
  • AWS SES facts and summaries and Questions and Answers Dump
  • AWS EBS facts and summaries and Questions and Answers Dump
  • AWS ELB facts and summaries and Questions and Answers Dump
  • AWS Autoscaling facts and summaries and Questions and Answers Dump
  • AWS VPC facts and summaries and Questions and Answers Dump
  • AWS KMS facts and summaries and Questions and Answers Dump
  • AWS Elastic Beanstalk facts and summaries and Questions and Answers Dump
  • AWS CodeBuild facts and summaries and Questions and Answers Dump
  • AWS CodeDeploy facts and summaries and Questions and Answers Dump
  • AWS CodePipeline facts and summaries and Questions and Answers Dump
AWS SAA-C02 SAA-C03 Exam Prep

What does “undifferentiated heavy lifting” mean?

The reality, of course, today is that if you come up with a great idea you don’t get to go quickly to a successful product. There’s a lot of undifferentiated heavy lifting that stands between your idea and that success. The kinds of things that I’m talking about when I say undifferentiated heavy lifting are things like these: figuring out which servers to buy, how many of them to buy, and on what timeline to buy them.

Eventually you end up with heterogeneous hardware and you have to match that. You have to think about backup scenarios if you lose your data center or lose connectivity to a data center. Eventually you have to move facilities. There’s negotiations to be done. It’s a very complex set of activities that really is a big driver of ultimate success.

But they are undifferentiated from, it’s not the heart of, your idea. We call this muck. And it gets worse because what really happens is you don’t have to do this one time. You have to drive this loop. After you get your first version of your idea out into the marketplace, you’ve done all that undifferentiated heavy lifting, you find out that you have to cycle back. Change your idea. The winners are the ones that can cycle this loop the fastest.

On every cycle of this loop you have this undifferentiated heavy lifting, or muck, that you have to contend with. I believe that for most companies, and it’s certainly true at Amazon, that 70% of your time, energy, and dollars go into the undifferentiated heavy lifting and only 30% of your energy, time, and dollars gets to go into the core kernel of your idea.

I think what people are excited about is that they see a future where they may be able to invert those two. Where they may be able to spend 70% of their time, energy and dollars on the differentiated part of what they’re doing.

— Jeff Bezos, 2006


AWS Certified Solutions Architect Associates Questions and Answers around the web.

Testimonial: Passed SAA-C02!

 
AWS Certified Solutions Architect Associate

So my exam was yesterday and I got the results in 24 hours. I think that’s how they review all SAA exams now; they no longer show the results right away.

I scored 858. I was practicing with Stephane’s Udemy lectures and Bonso’s practice exams. My test results were as follows: Test 1: 63%, 93%; Test 2: 67%, 87%; Test 3: 81%; Test 4: 72%; Test 5: 75%; Test 6: 81%; Stephane’s test: 80%.

I was reading all question explanations (even the ones I got correct)

The actual exam was pretty much similar to these. The topics I got were:

  1. A lot of S3 (make sure you know all of it from head to toes)

  2. VPC peering

  3. DataSync and Database Migration Service in the same questions. Make sure you know the difference

  4. One EKS question

  5. 2-3 KMS questions

  6. Security group question

  7. A lot of RDS Multi-AZ

  8. SQS + SNS fan out pattern

  9. ECS microservice architecture question

  10. Route 53

  11. NAT gateway

And that’s all I can remember.

I took the extra 30 minutes because English is not my native language, and I had plenty of time to think and then review flagged questions.

Good luck with your exams guys!

Testimonial: Passed SAA-C02

 
AWS Certified Solutions Architect Associate

Hey guys, just giving my update so all of you guys working towards your certs can stay motivated as these success stories drove me to reach this goal.

Background: 12 years of military IT experience, never worked with the cloud. I’ve done 7 deployments (that is a lot in 12 years), at which point I came home from the last one burnt out, with a family that barely knew me. I knew I needed a change, but had no clue where to start or what I wanted to do. I wasn’t really interested in IT, but I knew it’d pay the bills. After seeing videos about people in IT working from home (which, after 8+ years of being gone from home, really appealed to me), I stumbled across a video about a Solutions Architect’s daily routine working from home, which got me interested in AWS.

It took me 68 days straight of hard work to pass this exam with confidence. No rest days, more than 120 pages of hand-written notes and hundreds and hundreds of flash cards.

In the beginning, I hopped on Stephane Maarek’s course for the CCP exam just to see if it was for me. I did the course in about a week and then, after doing some research on here, got the CCP practice exams from tutorialsdojo.com. Two weeks after starting the Udemy course, I passed the exam. By that point, I’d already done lots of research on the different career paths, the best way to study, etc.

Cantrill (10/10) – That same day, I hopped onto Cantrill’s course for the SAA and got to work. Somebody had mentioned that by doing his courses you’d be over-prepared for the exam. While I think a combination of material is really important for passing the certification with confidence, I can say without a doubt that Cantrill’s courses got me 85-90% of the way there. His forum is also amazing, and directly led to me talking with somebody who works at AWS about landing a job, which makes the money I spent on all of his courses A STEAL. As I continue my journey (up next is SA Pro), I will be using all of his courses.

Neal Davis (8/10) – After completing Cantrill’s course, I found myself needing a resource to reinforce all the material I’d just learned. AWS is an expansive platform and the many intricacies of the different services can be tricky. For this portion, I relied on Neal Davis’s Training Notes series. These training notes are a very condensed version of the information you’ll need to pass the exam, and with the proper context they are very useful for finding the things you may have missed in your initial learning. I will be using his other Training Notes for my other exams as well.

TutorialsDojo (10/10) – These tests filled in the gaps and allowed me to spot my weaknesses and shore them up. I actually think my real exam was harder than these, but because I’d spent so much time on the material I got wrong, I was able to pass the exam with a safe score.

As I said, I was surprised at how difficult the exam was. A lot of my questions were related to DBs, and a lot of them gave no context as to whether the data being loaded into them was SQL or NoSQL, which made the choice selection a little frustrating. A lot of the questions have 2 VERY SIMILAR answers, and oftentimes the wording of the answers could be easy to misinterpret (such as when you are creating a Read Replica, do you attach it to the primary application DB that is slowing down because of read issues, or attach it to the service that is causing the primary DB to slow down). For context, I was scoring 95-100% on the TD exams prior to taking the test and managed an 823 on the exam, so I don’t know if I got unlucky with a hard test or if I’m not as prepared as I thought I was (i.e. over-thinking questions).

Anyways, up next is going back over the practical parts of the course as I gear up for the SA Pro exam. I will be taking my time with this one, and re-learning the Linux CLI in preparation for finding a new job.

PS if anybody on here is hiring, I’m looking! I’m the hardest worker I know and my goal is to make your company as streamlined and profitable as possible. 🙂

AWS Solutions Architect Associates SAA-C02 and SAA-C03 Certification Exam Prep
 
 
#AWS #SAAC02 #SAAC03 #SolutionsArchitect #AWSSAA #SAA #AWSCertification #AWSTraining #LearnAWS #CloudArchitect #SolutionsArchitect  #Djamgatech
 
AWS SAA Exam Prep App on iOs
 
AWS SAA Exam Prep App on android
 
AWS SAA Exam Prep App on Windows 10/11
 
AWS SAA App details and features

Testimonial: How did you prepare for AWS Certified Solutions Architect – Associate Level certification?

 

Practical knowledge is about 30% of it; the rest is Jayendra’s blog and the dumps.

Buying Udemy courses alone doesn’t make you pass; I can say with confidence that without the dumps and Jayendra’s blog, it is not easy to clear the certification.

Read FAQs of S3, IAM, EC2, VPC, SQS, Autoscaling, Elastic Load Balancer, EBS, RDS, Lambda, API Gateway, ECS.

Read the Security Whitepaper and Shared Responsibility model.

Also very important: expect basic questions on the topics most recently introduced to the exam, like Amazon Kinesis, etc.

– ACloudGuru course with practice tests

– Created my own cheat sheet in Excel

– Practice questions on various websites

– A few AWS service FAQs

Exam feedback:

– Some questions tested your understanding of which service to pick for the use case.

– Many questions on VPC

– A couple of unexpected questions on AWS CloudHSM, AWS Systems Manager, and Amazon Athena

– Encryption at rest and in transit services

– Migration from on-premises to AWS

– Backing up data in a single AZ vs. regionally

I believe the time was sufficient.

Overall I feel AWS SAA was more challenging in theory than GCP Associate CE.

some resources I bookmarked:

  • Comparison of AWS Services
  • Solutions Architect – Associate | Qwiklabs
  • okeeffed/cheat-sheets
  • A curated list of AWS resources to prepare for the AWS Certifications
  • AWS Cheat Sheet

Whitepapers are detailed documents about each service, published by Amazon on its website. If you are preparing for the AWS certifications, it is very important to read some of the most recommended whitepapers before sitting the exam.

The following is a list of whitepapers that are useful for preparing for the Solutions Architect exam. You can also find the list of whitepapers in the exam blueprint.

  • Overview of Security Processes
  • Storage Options in the Cloud
  • Defining Fault Tolerant Applications in the AWS Cloud
  • Overview of Amazon Web Services
  • Compliance Whitepaper
  • Architecting for the AWS Cloud

Data security questions can be among the more challenging, and it’s worth noting that you need a good understanding of the security processes described in the whitepaper titled “Overview of Security Processes”.

In the above list, the most important whitepapers are Overview of Security Processes and Storage Options in the Cloud. Read more here…

Big thanks to /u/acantril for his amazing course – AWS Certified Solutions Architect – Associate (SAA-C02) – the best IT course I’ve ever had – and I’ve done many on various other platforms:

  • CBTNuggets

  • LinuxAcademy

  • ACloudGuru

  • Udemy

  • Linkedin

  • O’Reilly


If you’re on the fence with buying one of his courses, stop thinking and buy it, I guarantee you won’t regret it! Other materials used for study:

  • Jon Bonso Practice Exams for SAA-C02 @ Tutorialsdojo (amazing practice exams!)

  • Random YouTube videos (example)

  • Official AWS Documentation (example)

  • TechStudySlack (learning community)

Study duration approximately ~3 months with the following regimen:

  • Daily study from 30min to 2hrs

    • Usually early morning before work

    • Sometimes on the train when commuting from/to work

    • Sometimes in the evening

    • Due to being a father/husband, study wasn’t always possible

  • All learned topics reviewed weekly


Testimonial: I passed SAA-C02… But don’t do what I did to pass it

AWS Certified Solutions Architect Associate

I’ve been following this subreddit for a while and have gotten some helpful tips, so I’d like to give back with my two cents. FYI, I passed the exam with a 788.

The exam materials that I used were the following:

  • AWS Certified Solutions Architect Associate All-in-One Exam Guide (Banerjee)

  • Stephane Maarek’s Udemy course, and his 6 practice exams

  • Adrian Cantrill’s online course (about 60% done)

  • TutorialDojo’s exams

(My company has udemy business account so I was able to use Stephen’s course/exam)

I scheduled my exam at the end of March, and started with Adrian’s. But I was dumb, thinking that I could go through his course within 3 weeks… I stopped around 12% into his course, went to the textbook, and finished reading the all-in-one exam guide within a weekend. Then I started going through Stephane’s course. While working through it, I pushed the exam back to the end of April, because I knew I wouldn’t be ready by the time the exam came along.

Five days before the exam, I finished Stephane’s course, and then did the final exam on the course. I failed miserably (around 50%). So I did one of Stephane’s practice exams and did worse (42%). I thought maybe his exams were just slightly more difficult, so I went and bought Jon Bonso’s exams and got 60% on the first one. And then I realized, based on all the questions on the exams, that I was definitely lacking some fundamentals. I went back to Adrian’s course and things were definitely sticking more – I think it has to do with his explanations + more practical stuff. Unfortunately, I could not finish his course before the exam (because I was cramming), and on the day of the exam, I had only done four of Bonso’s six exams, barely passing one of them.

Please, don’t do what I did. I was desperate to get this thing over with. I wanted to move on and work on other things for my job search, but if you’re not in this situation, please don’t do this. I can’t for the love of god tell you about OAI and CloudFront and why that’s different from an S3 URL. The only thing that I can remember is all the practical stuff that I did with Adrian’s course. I’ll never forget how to create a VPC, because he makes you manually go through it. I’m not against Stephane’s course – each is different in its own way (see the tips below).

So here’s what I recommend doing before sitting the AWS exam:

  1. Don’t schedule your exam beforehand. Go through the materials that you are using, and make sure you get at least 80% on all of Jon Bonso’s exams (I’d recommend 90% or higher).

  2. If you like to learn things practically, I recommend Adrian’s course. If you like to learn things conceptually, go with Stephane Maarek’s course. I find Stephane’s course more detailed when going through different architectures, but I can’t really say for sure because I didn’t finish Adrian’s course.

  3. Jon Bonso’s exams were about the same difficulty as the actual exam, but slightly more tricky. For example, many of the questions will give you two different situations, and you really have to figure out what they are asking for, because the situations might seem to contradict each other while the actual question is asking one specific thing. However, there were a few questions that were definitely obvious if you knew the service.

I’m upset that even though I passed the exam, I’m still lacking some practical skills, so I’m going to go through Adrian’s Developer course, but without cramming this time. If you actually learn the materials and practice them, they are definitely useful in the real world. I hope this helps you pass and actually learn the stuff.

P.S I vehemently disagree with Adrian in one thing in his course. doggogram.io is definitely better than catagram.io, although his cats are pretty cool

Testimonial: I passed the SAA-C02 exam!

I sat the exam at a PearsonVUE test centre and scored 816.

The exam had lots of questions around S3, RDS and storage. To be honest it was a bit of a blur but they are the ones I remember.

I was a bit worried before sitting the exam as I only hit 76% in the official AWS practice exam the night before, but it turned out alright in the end!

I have around 8 years of experience in IT but AWS was relatively new to me around 5 weeks ago.

Training Material Used

Firstly I ran through the u/stephanemaarek course which I found to pretty much cover all that was required!

I then used the u/Tutorials_Dojo practice exams. I took one before starting Stephane’s course to see where I was at with no training. I got 46% but I suppose a few of them were lucky guesses!

I then finished the course, took another test, and hit around 65%. TD was great as they gave explanations for the answers. I then used this to go back to the course and go over my weak areas again.

I then couldn’t seem to get higher than the low 70s on the exams, so I went through u/neal-davis’s course; this was also great, as it had an “Exam Cram” video at the end of each topic.

I also set up flashcards on BrainScape which helped me remember AWS services and what their function is.

All in all it was a great learning experience and I look forward to putting my skills into action!


Testimonial: I passed SAA with a 799 and had about an hour left on the clock.

Many FSx / EFS / Lustre questions

S3 use cases, storage tiers, and CloudFront were pretty prominent too

Only got one “figure out what’s wrong with this IAM policy” question

A handful of DynamoDB questions and a handful on picking use cases between different database types or caching layers.

Other typical tips: when you’re unclear on which answer you should pick, or if they seem very similar, work on eliminating answers first. “It can’t be X because of Y” – that can help a lot.

Testimonial: Passed the AWS Solutions Architect Associate exam!
I prepared mostly from freely available resources as my basics were strong. Bought Jon Bonso’s tests on Udemy, and they turned out to be super important for preparing for that particular type of question (i.e. the questions which feel subjective, but aren’t), understanding the line of questioning, and recognizing the most suitable answers for some common scenarios.

Created a Notion notebook to note down those common scenarios, exceptions, what supports what, integrations etc. Used that notebook and cheat sheets on Tutorials Dojo website for revision on final day.

Found the exam a little tougher than Jon Bonso’s, but his practice tests on Udemy were crucial. Wouldn’t have passed it without them.

Piece of advice for upcoming test aspirants: Get your basics right, especially networking. Understand properly how different services interact in VPC. Focus more on the last line of the question. It usually gives you a hint upon what exactly is needed. Whether you need cost optimization, performance efficiency or high availability. Little to no operational effort means serverless. Understand all serverless services thoroughly.


Testimonial:  Passed Solutions Architect Associate (SAA-C02) Today!

I have almost no experience with AWS, except for completing the Certified Cloud Practitioner earlier this year. My work is pushing all IT employees to complete some cloud training and certifications, which is why I chose to do this.

How I Studied:
My company pays for acloudguru subscriptions for its employees, so I used that for the bulk of my learning. I took notes on 3×5 notecards on the key terms and concepts for review.

Once I scored passing grades on the ACG practice tests, I took the Jon Bonso tests on Udemy, which are much more difficult and fairly close to the difficulty of the actual exam. I scored 45%-74% on every Bonso practice test, and spent 1-2 hours after each test reviewing what I missed, supplementing my note cards, and taking time to understand my weak spots. I only took these tests once each, but in between each practice test, I would review all my note cards until I had the content largely memorized.

The Test:
This was one of the most difficult certification tests I’ve ever done. The exam was remote proctored with PearsonVUE (I used PSI for the CCP and didn’t like it as much). I felt like I was failing half the time. I marked about 25% of the questions for review, and I used up the entire allotted time. The questions are mostly about understanding which services interact with which other services, or which services are incompatible with the scenario. It was important for me to read through each response and eliminate the ones that don’t make sense. A lot of the responses mentioned AWS services that sound good but don’t actually work together (e.g. if it doesn’t make sense to have service X querying database Y, that probably isn’t the right answer). I can’t point to one domain that needs to be studied more than any other. You need to know all of the content for the exam.

Final Thoughts:
The ACG practice tests are not a good metric for success on the actual SAA exam, and I would not have passed without Bonso’s tests showing me my weak spots. PearsonVUE is better than PSI. Make sure to study everything thoroughly and review excessively. You don’t necessarily need 5 different study sources and years of experience to be able to pass (although both of those definitely help). Good luck to anyone who took the time to read!


Testimonial: Passed AWS CSAA today!

AWS Certified Solutions Architect Associate
So glad to pass my first AWS certification after 6 weeks of preparation.

My Preparation:

After some trial and error in picking the appropriate learning content, I eventually went with the community’s advice and took the course presented by the amazing u/stephanemaarek, in addition to the practice exams by Jon Bonso.
At this point, I can’t say anything that hasn’t been said already about how helpful they are. It’s a great combination of learning material, I appreciate the instructor’s work, and the community’s help in this sub.

Review:

Throughout the course I noted down the important points, and used the course slides as a reference in the first review iteration.
Before resorting to Udemy’s practice exams, I purchased a practice exam from another website, a purchase that I regret (not to defame the other vendor; I would simply recommend Udemy).
Udemy’s practice exams were incredible, in that they made me aware of the points I hadn’t understood clearly. After each exam, I would go both through the incorrect answers, as well as the questions I marked for review, wrote down the topic for review, and read the explanation thoroughly. The explanations point to the respective documentation in AWS, which is a recommended read, especially if you don’t feel confident with the service.
What I want to note is that I didn’t get satisfying marks on the first go at the practice exams (I got an average of ~70%).
Throughout the 6 practice exams, I aggregated a long list of topics to review, went back to the course slides and practice-exams explanations, in addition to the AWS documentation for the respective service.
On the second go I averaged 85%. The second attempt at the exams was important as a confidence boost, as I made sure I understood the services more clearly.

The take away:

Don’t feel disappointed if you get bad results at your practice-exams. Make sure to review the topics and give it another shot.

The AWS documentation is your friend! It is very clear and concise. My only regret is not having referenced the documentation enough after learning new services.

The exam:

I scheduled the exam using PSI.
I was very confident going into the exam. But going through such an exam environment for the first time made me feel under pressure. Partly, because I didn’t feel comfortable being monitored (I was afraid to get eliminated if I moved or covered my mouth), but mostly because there was a lot at stake from my side, and I had to pass it in the first go.
The questions were harder than expected, but I tried to analyze the questions more and eliminate the invalid answers.
I was very nervous and kept reviewing flagged questions up to the last minute. Luckily, I pulled through.

The take away:

The proctors are friendly; just make sure you feel comfortable in the exam place, and use the practice exams to prepare for the actual exam’s environment. That includes sitting in a straight posture, not talking/whispering, and not looking away.

Make sure to organize the time dedicated to each question well, and don’t let yourself get distracted by being monitored like I did.

Don’t skip the question that you are not sure of. Try to select the most probable answer, then flag the question. This will make the very-stressful, last-minute review easier.

You have been engaged by a company to design and lead a migration to an AWS environment. The team is concerned about the capabilities of the new environment, especially when it comes to high availability and cost-effectiveness. The design calls for about 20 instances (c3.2xlarge) pulling jobs/messages from SQS. Network traffic per instance is estimated to be around 500 Mbps at the beginning and end of each job. Which configuration should you plan on deploying?

Spread the instances over multiple AZs to minimize traffic concentration and maximize fault tolerance. With a multi-AZ configuration, an additional reliability point is scored because the entire Availability Zone itself is ruled out as a single point of failure. This ensures high availability. Wherever possible, use simple solutions, such as spreading the load out, rather than expensive high-tech solutions.

To save money, you quickly stored some data in one of the attached volumes of an EC2 instance and stopped it for the weekend. When you returned on Monday and restarted your instance, you discovered that your data was gone. Why might that be?

The volume was ephemeral, block-level storage. Data on an instance store volume is lost if an instance is stopped.

The most likely answer is that the EC2 instance had an instance store volume attached to it. Instance store volumes are ephemeral, meaning that data in attached instance store volumes is lost if the instance stops.

Reference: Instance store lifetime

Your company likes the idea of storing files on AWS. However, low-latency access to the last few days of files is important to customer service. Which Storage Gateway configuration would you use to achieve both of these ends?

A file gateway simplifies file storage in Amazon S3, integrates to existing applications through industry-standard file system protocols, and provides a cost-effective alternative to on-premises storage. It also provides low-latency access to data through transparent local caching.

Cached volumes allow you to store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data.

You’ve been commissioned to develop a high-availability application with a stateless web tier. Identify the most cost-effective means of reaching this end.

Use an Elastic Load Balancer, a multi-AZ deployment of an Auto-Scaling group of EC2 Spot instances (primary) running in tandem with an Auto-Scaling group of EC2 On-Demand instances (secondary), and DynamoDB.

With proper scripting and scaling policies, running EC2 On-Demand instances behind the Spot instances delivers the most cost-effective solution, because On-Demand instances will only spin up if the Spot instances are not available. DynamoDB lends itself to supporting stateless web/app installations better than RDS.
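The fallback behavior described above can be sketched as a toy capacity planner. This is an illustration only, assuming a hypothetical helper function; a real deployment would use Auto Scaling groups (or a mixed-instances policy), not custom code like this:

```python
def plan_capacity(desired, spot_available):
    """Fill desired capacity from Spot first (cheapest); top up
    with On-Demand only when Spot capacity is unavailable."""
    spot = min(desired, spot_available)   # primary: Spot instances
    on_demand = desired - spot            # secondary: On-Demand backfill
    return {"spot": spot, "on_demand": on_demand}

print(plan_capacity(10, 10))  # {'spot': 10, 'on_demand': 0} – all Spot
print(plan_capacity(10, 3))   # {'spot': 3, 'on_demand': 7} – On-Demand backfills
```

The design point being tested is exactly this ordering: pay On-Demand prices only for the shortfall that Spot cannot cover.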

You are building a NAT Instance in an m3.medium using the AWS Linux2 distro with amazon-linux-extras installed. Which of the following do you need to set?

Ensure that “Source/Destination Checks” is disabled on the NAT instance. With a NAT instance, the most common oversight is forgetting to disable Source/Destination Checks. Note: this is a legacy topic; while it may appear on the AWS exam, it will only do so infrequently.

You are reviewing Change Control requests and you note that there is a proposed change designed to reduce errors due to SQS Eventual Consistency by updating the “DelaySeconds” attribute. What does this mean?

When a new message is added to the SQS queue, it will be hidden from consumer instances for a fixed period.

Delay queues let you postpone the delivery of new messages to a queue for a number of seconds, for example, when your consumer application needs additional time to process messages. If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes. To set delay seconds on individual messages, rather than on an entire queue, use message timers to allow Amazon SQS to use the message timer’s DelaySeconds value instead of the delay queue’s DelaySeconds value. Reference: Amazon SQS delay queues.
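As a rough illustration of the behavior just described, a delay queue’s visibility rules can be modeled in a few lines of plain Python. This is a toy in-memory sketch, not the SQS API or boto3; the class and method names are invented for illustration:

```python
import time

class DelayQueue:
    """Toy model of SQS DelaySeconds: a message stays invisible to
    consumers until its delay has elapsed (real SQS: 0s default, 15min max)."""
    def __init__(self, delay_seconds=0):
        self.delay = delay_seconds
        self._messages = []  # list of (visible_at_timestamp, body)

    def send(self, body, delay_seconds=None):
        # A per-message timer overrides the queue-level delay, as in SQS.
        delay = self.delay if delay_seconds is None else delay_seconds
        self._messages.append((time.time() + delay, body))

    def receive(self):
        # Only messages whose delay has elapsed are visible to consumers.
        now = time.time()
        return [body for visible_at, body in self._messages if visible_at <= now]

q = DelayQueue(delay_seconds=1)
q.send("job-1")
print(q.receive())   # [] – still hidden by the delay
time.sleep(1.1)
print(q.receive())   # ['job-1'] – delay elapsed, now visible
```

Note how the per-message `delay_seconds` argument mirrors SQS message timers overriding the queue-level `DelaySeconds` value.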

Amazon SQS keeps track of all tasks and events in an application: True or False?

False. Amazon SWF (not Amazon SQS) keeps track of all tasks and events in an application. Amazon SQS requires you to implement your own application-level tracking, especially if your application uses multiple queues. Amazon SWF FAQs.

You work for a company, and you need to protect your data stored on S3 from accidental deletion. Which actions might you take to achieve this?

Allow versioning on the bucket and protect the objects by configuring MFA-protected API access.
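To see why versioning guards against accidental deletion, here is a toy in-memory model of the behavior: a DELETE on a versioned bucket adds a delete marker rather than destroying data. This is illustrative only (the class is invented); the real controls are the S3 bucket versioning configuration and MFA Delete:

```python
class VersionedBucket:
    """Toy model of S3 versioning: DELETE adds a delete marker instead of
    destroying data, so prior versions remain recoverable."""
    def __init__(self):
        self._versions = {}  # key -> list of versions (None = delete marker)

    def put(self, key, body):
        self._versions.setdefault(key, []).append(body)

    def delete(self, key):
        self._versions.setdefault(key, []).append(None)  # delete marker

    def get(self, key):
        versions = self._versions.get(key, [])
        if not versions or versions[-1] is None:
            return None          # latest is a delete marker: object looks gone
        return versions[-1]

    def restore(self, key):
        # Removing the delete marker brings the previous version back.
        versions = self._versions.get(key, [])
        if versions and versions[-1] is None:
            versions.pop()

b = VersionedBucket()
b.put("report.csv", "v1")
b.delete("report.csv")
print(b.get("report.csv"))   # None – hidden behind a delete marker
b.restore("report.csv")
print(b.get("report.csv"))   # v1 – the data was never destroyed
```

MFA-protected API access adds a second layer on top of this: the delete marker (or a permanent version deletion) can only be created with a valid MFA code.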

Your Security Manager has hired a security contractor to audit your network and firewall configurations. The consultant doesn’t have access to an AWS account. You need to provide the required access for the auditing tasks, and answer a question about login details for the official AWS firewall appliance. Which actions should you take?

AWS has removed the Firewall appliance from the hub of the network and implemented the firewall functionality as stateful Security Groups, and stateless subnet NACLs. This is not a new concept in networking, but rarely implemented at this scale.

Create an IAM user for the auditor and explain that the firewall functionality is implemented as stateful Security Groups, and stateless subnet NACLs

Amazon ElastiCache can fulfill a number of roles. Which operations can be implemented using ElastiCache for Redis?

Amazon ElastiCache offers a fully managed Memcached and Redis service. Although the name only suggests caching functionality, the Redis service in particular can offer a number of operations such as Pub/Sub, Sorted Sets and an In-Memory Data Store. However, Amazon ElastiCache for Redis doesn’t support multithreaded architectures.

You have been asked to deploy an application on a small number of EC2 instances. The application must be placed across multiple Availability Zones and should also minimize the chance of underlying hardware failure. Which actions would provide this solution?

Deploy the EC2 servers in a Spread Placement Group.

Spread Placement Groups are recommended for applications that have a small number of critical instances which need to be kept separate from each other. Launching instances in a Spread Placement Group reduces the risk of simultaneous failures that might occur when instances share the same underlying hardware. Spread Placement Groups provide access to distinct hardware, and are therefore suitable for mixing instance types or launching instances over time. In this case, deploying the EC2 instances in a Spread Placement Group is the only correct option.

You manage a NodeJS messaging application that lives on a cluster of EC2 instances. Your website occasionally experiences brief, strong, and entirely unpredictable spikes in traffic that overwhelm your EC2 instances' resources and freeze the application. As a result, you're losing recently submitted messages from end-users. You use Auto Scaling to deploy additional resources to handle the load during spikes, but the new instances don't spin up fast enough to prevent the existing application servers from freezing. What is the most cost-effective solution to prevent the loss of recently submitted messages?

Use Amazon SQS to decouple the application components and keep the messages in queue until the extra Auto-Scaling instances are available.

Neither increasing the size of your EC2 instances nor maintaining additional EC2 instances is cost-effective, and pre-warming an ELB signifies that these spikes in traffic are predictable. The cost-effective solution to the unpredictable spike in traffic is to use SQS to decouple the application components.

True statements on S3 URL styles

Virtual-host-style URLs (such as: https://bucket-name.s3.Region.amazonaws.com/key name) are supported by AWS.

Path-Style URLs (such as https://s3.Region.amazonaws.com/bucket-name/key name) are supported by AWS.
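The two styles differ only in where the bucket name appears: in the hostname or in the path. A small sketch (the bucket, region, and key values are placeholders):

```python
def virtual_hosted_url(bucket, region, key):
    # Virtual-hosted-style: the bucket name is part of the hostname.
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

def path_style_url(bucket, region, key):
    # Path-style: the bucket name is the first element of the path.
    return f"https://s3.{region}.amazonaws.com/{bucket}/{key}"

print(virtual_hosted_url("my-bucket", "us-east-1", "photos/cat.jpg"))
# → https://my-bucket.s3.us-east-1.amazonaws.com/photos/cat.jpg
print(path_style_url("my-bucket", "us-east-1", "photos/cat.jpg"))
# → https://s3.us-east-1.amazonaws.com/my-bucket/photos/cat.jpg
```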

You run an automobile reselling company that has a popular online store on AWS. The application sits behind an Auto Scaling group and requires new instances of the Auto Scaling group to identify their public and private IP addresses. How can you achieve this?

Use a curl or GET command to retrieve the instance metadata from http://169.254.169.254/latest/meta-data/
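From inside an instance, this lookup can be scripted. The sketch below builds the IMDSv1-style URL used in the answer; note that instances enforcing IMDSv2 additionally require a session token header, which this sketch omits:

```python
import urllib.request

METADATA_BASE = "http://169.254.169.254/latest/meta-data/"

def metadata_url(path):
    # Builds the instance-metadata URL for a given attribute,
    # e.g. "public-ipv4" or "local-ipv4".
    return METADATA_BASE + path

def fetch_metadata(path, timeout=2):
    # Only works from inside an EC2 instance; the link-local address
    # 169.254.169.254 is unreachable anywhere else.
    with urllib.request.urlopen(metadata_url(path), timeout=timeout) as resp:
        return resp.read().decode()

print(metadata_url("public-ipv4"))
# → http://169.254.169.254/latest/meta-data/public-ipv4
```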

What data formats are used to create CloudFormation templates?

JSON and YAML.
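A minimal template, built here as a Python dict and serialized to JSON (the resource name `MyBucket` is an arbitrary example); the same structure could equally be written as YAML:

```python
import json

# A minimal CloudFormation template that declares one S3 bucket.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyBucket": {
            "Type": "AWS::S3::Bucket",
        }
    },
}

print(json.dumps(template, indent=2))
```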

You have launched a NAT instance into a public subnet, and you have configured all relevant security groups, network ACLs, and routing policies to allow this NAT to function. However, EC2 instances in the private subnet still cannot communicate out to the internet. What troubleshooting steps should you take to resolve this issue?

Disable the Source/Destination Check on your NAT instance.

A NAT instance sends and retrieves traffic on behalf of instances in a private subnet. As a result, source/destination checks on the NAT instance must be disabled to allow the sending and receiving traffic for the private instances. Route 53 resolves DNS names, so it would not help here. Traffic that is originating from your NAT instance will not pass through an ELB. Instead, it is sent directly from the public IP address of the NAT Instance out to the Internet.

You need a storage service that delivers the lowest-latency access to data for a database running on a single EC2 instance. Which of the following AWS storage services is suitable for this use case?

Amazon EBS is a block level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.

What are DynamoDB  use cases?

Use cases include storing JSON documents, BLOB data, and web session data.

You are reviewing Change Control requests, and you note that there is a change designed to reduce costs by updating the Amazon SQS “WaitTimeSeconds” attribute. What does this mean?

When the consumer instance polls for new work, the SQS service will allow it to wait a certain time for one or more messages to be available before closing the connection.

Poor timing of SQS processes can significantly impact the cost effectiveness of the solution.

Long polling helps reduce the cost of using Amazon SQS by eliminating empty responses (when there are no messages available for a ReceiveMessage request) and false empty responses (when messages are available but aren't included in a response).

Reference: Here

You have been asked to decouple an application by utilizing SQS. The application dictates that messages on the queue CAN be delivered more than once, but must be delivered in the order they arrive, while reducing the number of empty responses. Which option is most suitable?

Configure a FIFO SQS queue and enable long polling.

You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP. However, when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect?

Immediately.

You need to restrict access to an S3 bucket. Which  methods can you use to do so?

There are two ways of restricting access to S3: Access Control Lists (Permissions) and bucket policies.

You are reviewing Change Control requests, and you note that there is a change designed to reduce wasted CPU cycles by increasing the value of your Amazon SQS “VisibilityTimeout” attribute. What does this mean?

When a consumer instance retrieves a message, that message will be hidden from other consumer instances for a fixed period.

Poor timing of SQS processes can significantly impact the cost effectiveness of the solution. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.
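The mechanics can be sketched with a toy in-memory queue (illustrative only; `MiniQueue` is an invented name, not the SQS API): a received message becomes invisible to other consumers until the timeout elapses or the message is deleted.

```python
import time

class MiniQueue:
    """Toy model of the SQS visibility timeout (illustrative, not the real API)."""
    def __init__(self, visibility_timeout=30):  # 30 s matches the SQS default
        self.visibility_timeout = visibility_timeout
        self._messages = {}  # body -> timestamp until which it is invisible

    def send(self, body):
        self._messages[body] = 0.0  # immediately visible

    def receive(self, now=None):
        now = time.time() if now is None else now
        for body, invisible_until in self._messages.items():
            if invisible_until <= now:
                # Hide the message from other consumers for the timeout window.
                self._messages[body] = now + self.visibility_timeout
                return body
        return None  # empty response: everything is in flight

    def delete(self, body):
        # A consumer deletes the message after successful processing.
        self._messages.pop(body, None)

q = MiniQueue(visibility_timeout=30)
q.send("job-1")
first = q.receive()   # returns 'job-1' and hides it for 30 seconds
second = q.receive()  # returns None: the message is in flight
```

If the consumer fails to delete the message before the timeout expires, it becomes visible again, which is why increasing VisibilityTimeout reduces wasted CPU cycles spent reprocessing in-flight messages.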

With EBS, I can ____.

Create an encrypted volume from a snapshot of another encrypted volume.

Create an encrypted snapshot from an unencrypted snapshot by creating an encrypted copy of the unencrypted snapshot.

You can create an encrypted volume from a snapshot of another encrypted volume.

Although there is no direct way to encrypt an existing unencrypted volume or snapshot, you can encrypt it by creating an encrypted copy as a new volume or snapshot. Reference: Encrypting unencrypted resources.

Following advice from your consultant, you have configured your VPC to use dedicated hosting tenancy. Your VPC has an Amazon EC2 Auto Scaling group designed to launch or terminate Amazon EC2 instances on a regular basis in order to meet workload demands. A subsequent change to your application has rendered the performance gains from dedicated tenancy superfluous, and you would now like to recoup some of these greater costs. How do you revert the instance tenancy attribute of the VPC to default for newly launched EC2 instances?

Modify the instance tenancy attribute of your VPC from dedicated to default using the AWS CLI, an AWS SDK, or the Amazon EC2 API.

You can change the instance tenancy attribute of a VPC from dedicated to default. Modifying the instance tenancy of the VPC does not affect the tenancy of any existing instances in the VPC. The next time you launch an instance in the VPC, it has a tenancy of default, unless you specify otherwise during launch. You can modify the instance tenancy attribute of a VPC using the AWS CLI, an AWS SDK, or the Amazon EC2 API only. Reference: Change the tenancy of a VPC.

How do DynamoDB indices work?

What is Amazon DynamoDB?

Amazon DynamoDB is a fast, fully managed NoSQL database service. DynamoDB makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic.

DynamoDB is used to create tables that store and retrieve any level of data.

  • DynamoDB uses SSDs to store data.
  • Provides automatic and synchronous data replication across multiple Availability Zones.
  • Maximum item size is 400 KB.
  • Supports cross-region replication.

DynamoDB Core Concepts:

  • The fundamental concepts around DynamoDB are:
    • Tables-which is a collection of data.
    • Items- They are the individual entries in the table.
    • Attributes- These are the properties associated with the entries.
  • Primary Keys.
  • Secondary Indexes.
  • DynamoDB streams.

Secondary Indexes:

  • The Secondary index is a data structure that contains a subset of attributes from the table, along with an alternate key that supports Query operations.
  • Every secondary index is related to only one table, from which it obtains its data. This is called the base table of the index.
  • When you create an index, you define an alternate key for it (a partition key and, optionally, a sort key). DynamoDB copies these attributes into the index, along with the primary key attributes from the table.
  • After that, you can Query or Scan the index in much the same way as you would the base table.

Every secondary index is automatically maintained by DynamoDB.

DynamoDB Indexes: DynamoDB supports two indexes:

  1. Local Secondary Index (LSI)- The index has the same partition key as the base table but a different sort key.
  2. Global Secondary Index (GSI)- The index has a partition key and sort key that can be different from those on the base table.

When creating more than one table with secondary indexes, you must do so sequentially: create one table, wait for it to become active, then create the next, and so on. If you try to create multiple such tables concurrently, DynamoDB returns a LimitExceededException.

You must specify the following, for every secondary index:

  • Type- Specify whether you are creating a Global Secondary Index or a Local Secondary Index.
  • Name- Specify a name for the index. The naming rules are the same as those for the table it belongs to. You can reuse an index name across different base tables.
  • Key- The key schema for the index: every key attribute must be a top-level attribute of type string, number, or binary. Other data types, including documents and sets, are not allowed. Other requirements depend on the type of index you choose.
    • For a GSI- The partition key can be any scalar attribute of the base table.

The sort key is optional and can also be any scalar attribute of the base table.

  • For LSI- The partition key must be the same as the base table’s partition key.

The sort key must be a non-key table attribute.

  • Additional Attributes: These are attributes in addition to the table’s key attributes that are projected into the index. They can be of any data type, including scalars, documents, and sets.
  • Throughput: The throughput settings for the index, if required, are:
    • GSI- Specify read and write capacity unit settings. These provisioned throughput settings are independent of the base table’s settings.
    • LSI- You do not need to specify read and write capacity unit settings. Any read and write operations on the local secondary index are drawn from the provisioned throughput settings of the base table.

You can create up to 5 Local Secondary Indexes and 5 Global Secondary Indexes per table (a default GSI quota that AWS has since raised to 20). When a table is deleted, all indexes connected with it are also deleted.

You can use the Query or Scan operation to fetch data from the table. Query can return the results in ascending or descending order of the sort key.

(Source)
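To make base tables and secondary indexes concrete, here is a toy in-memory model (illustrative only; `MiniTable` and the item attributes are invented): the table is keyed by its partition key, while the index maintains an alternate key that supports its own queries and is updated automatically on every write.

```python
class MiniTable:
    """Toy model of a DynamoDB table with one global secondary index."""
    def __init__(self, partition_key, gsi_key):
        self.partition_key = partition_key
        self.gsi_key = gsi_key
        self._items = {}  # partition key value -> item
        self._gsi = {}    # alternate key value -> list of items

    def put_item(self, item):
        self._items[item[self.partition_key]] = item
        # DynamoDB maintains the secondary index automatically on every write.
        self._gsi.setdefault(item[self.gsi_key], []).append(item)

    def query(self, pk_value):
        # Query on the base table: lookup by partition key.
        return self._items.get(pk_value)

    def query_index(self, gsi_value):
        # Query on the index: lookup by the alternate (GSI) key.
        return self._gsi.get(gsi_value, [])

t = MiniTable(partition_key="user_id", gsi_key="email")
t.put_item({"user_id": "u1", "email": "a@example.com", "plan": "pro"})
print(t.query_index("a@example.com")[0]["user_id"])  # → u1
```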

 

What is NLB in AWS?

An NLB is a Network Load Balancer.

Network Load Balancer Overview: A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule and attempts to open a TCP connection to the selected target on the port specified in the listener configuration.

When you enable an Availability Zone for the load balancer, Elastic Load Balancing creates a load balancer node in that Availability Zone. By default, each load balancer node distributes traffic across the registered targets in its Availability Zone only. If you enable cross-zone load balancing, each load balancer node distributes traffic across the registered targets in all enabled Availability Zones.

The Network Load Balancer is designed to handle tens of millions of requests per second while maintaining high throughput at ultra-low latency, with no effort on your part. It is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:

  • Static IP Addresses – Each Network Load Balancer provides a single IP address for each Availability Zone in its purview. If you have targets in us-west-2a and other targets in us-west-2c, NLB will create and manage two IP addresses (one per AZ); connections to that IP address will spread traffic across the instances in all the VPC subnets in the AZ. You can also specify an existing Elastic IP for each AZ for even greater control. With full control over your IP addresses, a Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.
  • Zonality – The IP-per-AZ feature reduces latency with improved performance, improves availability through isolation and fault tolerance, and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single AZ while still providing automatic failover should those targets become unavailable.
  • Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.
  • Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.
  • Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.

How many types of VPC endpoints are available?

There are two types of VPC endpoints: (1) interface endpoints and (2) gateway endpoints. Interface endpoints enable connectivity to services over AWS PrivateLink.

What is the purpose of key pair with Amazon AWS EC2?

Amazon EC2 uses a key pair to encrypt and decrypt login information.

A sender uses a public key to encrypt data, which its receiver then decrypts using the corresponding private key. These two keys, public and private, are known as a key pair.

You need a key pair to be able to connect to your instances. The way this works on Linux and Windows instances is different.

First, when you launch a new instance, you assign a key pair to it. Then, when you log in to it, you use the private key.

The difference between Linux and Windows instances is that Linux instances do not have a password already set and you must use the key pair to log in to Linux instances. On the other hand, on Windows instances, you need the key pair to decrypt the administrator password. Using the decrypted password, you can use RDP and then connect to your Windows instance.

Amazon EC2 stores only the public key, and you can either generate it inside Amazon EC2 or you can import it. Since the private key is not stored by Amazon, it’s advisable to store it in a secure place as anyone who has this private key can log in on your behalf.

What is VPC PrivateLink?

AWS PrivateLink provides private connectivity between VPCs and services hosted on AWS or on-premises, securely on the Amazon network. By providing a private endpoint to access your services, AWS PrivateLink ensures your traffic is not exposed to the public internet.
 
AWS SAA-C02 SAA-C03 Exam Prep

What is the difference between a VPC SG and an EC2 security group?

There are two types of Security Groups based on where you launch your instance. When you launch your instance on EC2-Classic, you have to specify an EC2-Classic Security Group. On the other hand, when you launch an instance in a VPC, you will have to specify an EC2-VPC Security Group. Now that we have a clear understanding of what we are comparing, let's see their main differences:

EC2-Classic Security Group

  • When the instance is launched, you can only choose a Security Group that resides in the same region as the instance.
  • You cannot change the Security Group after the instance has launched (you may edit the rules)
  • They are not IPv6 Capable

EC2-VPC Security Group

  • You can change the Security Group after the instance has launched
  • They are IPv6 Capable

Generally speaking, they are not interchangeable and there are more capabilities on the EC2-VPC SGs. You may read more about them on Differences Between Security Groups for EC2-Classic and EC2-VPC

Why do AWS DynamoDB and S3 use gateway VPC endpoints rather than interface endpoints?

I think this is historical in nature. S3 and DynamoDB were the first services to support VPC endpoints. The release of those VPC endpoint features pre-dates two important services that subsequently enabled interface endpoints: Network Load Balancer and AWS PrivateLink.

What is the best way to develop AWS Lambda functions locally on your laptop?

  • Separate the Lambda handler from your core logic.
  • Take advantage of execution context reuse to improve the performance of your function. Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations processed by the same instance of your function can reuse these resources, which saves execution time. To avoid potential data leaks across invocations, don’t use the execution context to store user data, events, or other information with security implications. If your function relies on mutable state that can’t be stored in memory within the handler, consider creating a separate function or separate versions of a function for each user.
  • Use AWS Lambda Environment Variables to pass operational parameters to your function. For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable.
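The environment-variable tip can be sketched as follows (a hypothetical handler; `OUTPUT_BUCKET` is an invented variable name used for illustration):

```python
import os

def handler(event, context=None):
    # The bucket name comes from an environment variable instead of
    # being hard-coded, so the same code works across environments.
    bucket = os.environ.get("OUTPUT_BUCKET", "fallback-bucket")
    return {"bucket": bucket, "key": event.get("key", "default")}

# Locally, simulate the Lambda environment configuration:
os.environ["OUTPUT_BUCKET"] = "my-output-bucket"
print(handler({"key": "report.csv"}))
# → {'bucket': 'my-output-bucket', 'key': 'report.csv'}
```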

How can I see if/when someone logs into my AWS Windows instance?

You can use VPC Flow Logs. The steps would be the following:

  • Enable VPC Flow Logs for the VPC your EC2 instance lives in. You can do this from the VPC console
  • Having VPC Flow Logs enabled will create a CloudWatch Logs log group
  • Find the Elastic Network Interface assigned to your EC2 instance. Also, get the private IP of your EC2 instance. You can do this from the EC2 console.
  • Find the CloudWatch Logs log stream for that ENI.
  • Search the log stream for records where your Windows instance’s IP is the destination IP, make sure the port is the one you’re looking for. You’ll see records that tell you if someone has been connecting to your EC2 instance. For example, there are bytes transferred, status=ACCEPT, log-status=OK. You will also know the source IP that connected to your instance.

I recommend using CloudWatch Logs Metric Filters, so you don’t have to do all this manually. Metric Filters will find the patterns I described in your CloudWatch Logs entries and will publish a CloudWatch metric. Then you can trigger an alarm that notifies you when someone logs in to your instance.
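Searching the flow log records can also be automated. The sketch below parses the default (version 2) flow log format and checks for accepted RDP connections (TCP port 3389) to the instance; the sample record values are invented:

```python
# Field order of the default VPC Flow Logs (version 2) record format.
FLOW_LOG_FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log(line):
    # Split a space-separated record into a field-name -> value dict.
    return dict(zip(FLOW_LOG_FIELDS, line.split()))

def is_rdp_login(record, instance_ip):
    # RDP listens on TCP 3389; protocol number 6 is TCP.
    return (record["dstaddr"] == instance_ip
            and record["dstport"] == "3389"
            and record["action"] == "ACCEPT")

sample = ("2 123456789012 eni-0123abcd 203.0.113.12 10.0.0.5 "
          "49152 3389 6 20 4249 1418530010 1418530070 ACCEPT OK")
rec = parse_flow_log(sample)
print(is_rdp_login(rec, "10.0.0.5"))  # → True
```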

Here are more details from the AWS Official Blog and the AWS documentation for VPC Flow Logs records:

VPC Flow Logs – Log and View Network Traffic Flows

Amazon Virtual Private Cloud

Also, there are 3rd-party tools that simplify all these steps for you and give you very nice visibility and alerts into what’s happening in your AWS network resources. I’ve tried Observable Networks and it’s great: Observable Networks

While enabling ports on AWS NAT gateway when you allow inbound traffic on port 80/443 , do you need to allow outbound traffic on the same ports or is it sufficient to allow outbound traffic on ephemeral ports (1024-65535)?

Typically outbound traffic is not blocked by NAT on any port, so you would not need to explicitly allow those, since they should already be allowed. Your firewall generally would have a rule to allow return traffic that was initiated outbound from inside your office.

Is AWS traffic between EC2 nodes in the same availability zone secure with respect to sending sensitive data?

According to Amazon’s documentation, it is impossible for one instance to sniff traffic bound for a different instance.

https://d0.awsstatic.com/whitepapers/aws-security-whitepaper.pdf

  • Packet sniffing by other tenants. It is not possible for a virtual instance running in promiscuous mode to receive or “sniff” traffic that is intended for a different virtual instance. While you can place your interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer located on the same physical host cannot listen to each other’s traffic. Attacks such as ARP cache poisoning do not work within Amazon EC2 and Amazon VPC. While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another’s data, as a standard practice you should encrypt sensitive traffic.

But as you can see, they still recommend that you should maintain encryption inside your network. We have taken the approach of terminating SSL at the external interface of the ELB, but then initiating SSL from the ELB to our back-end servers, and even further, to our (RDS) databases. It’s probably belt-and-suspenders, but in my industry it’s needed. Heck, we have some interfaces that require HTTPS and a VPN.

What’s the use case for S3 Pre-signed URL for uploading objects?

I get the use case of allowing access to private/premium content in S3 using a pre-signed URL that can be used to view or download the file until the set expiration time. But what's a real-life scenario in which a web app would need to generate a URI to give users temporary credentials to upload an object? Can't the same be done by using the SDK and exposing a REST API at the backend?

Asking this since I want to build a POC for this functionality in Java, but struggling to find a real-world use-case for the same

Pre-signed URLs are used to provide short-term access to a private object in your S3 bucket. They work by appending an AWS Access Key, an expiration time, and a Sigv4 signature as query parameters to the S3 object URL. There are two common use cases when you may want to use them:

  • Simple, occasional sharing of private files.
  • Frequent, programmatic access to view or upload a file in an application.

Imagine you may want to share a confidential presentation with a business partner, or you want to allow a friend to download a video file you’re storing in your S3 bucket. In both situations, you could generate a URL, and share it to allow the recipient short-term access.

There are a couple of different approaches for generating these URLs in an ad-hoc, one-off fashion, including:

  • Using the AWS Tools for Powershell.
  • Using the AWS CLI.

Source: Here
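The idea behind a pre-signed URL can be illustrated with a simplified signer using only the standard library. This is NOT real AWS Signature Version 4 (which involves scoped signing keys and canonical requests); it only shows the pattern of appending an expiry and an HMAC signature as query parameters:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

def presign(url, secret_key, expires_in=3600, now=None):
    # Simplified sketch: sign the URL plus its expiry with an HMAC so
    # the server can later verify that the link was issued by someone
    # holding the secret key and has not yet expired.
    expires = int(now if now is not None else time.time()) + expires_in
    to_sign = f"{url}?Expires={expires}"
    sig = hmac.new(secret_key.encode(), to_sign.encode(),
                   hashlib.sha256).hexdigest()
    return f"{to_sign}&{urlencode({'Signature': sig})}"

signed = presign("https://my-bucket.s3.us-east-1.amazonaws.com/report.pdf",
                 secret_key="not-a-real-key", expires_in=600)
print("Signature=" in signed)  # → True
```

Anyone holding this URL can access the object until the expiry, without having AWS credentials of their own, which is exactly the upload use case: the backend signs, the browser uploads directly to S3.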


FROM AWS:REINVENT 2021:

AWS on Air

Peter DeSantis Keynote

Join Peter DeSantis, Senior Vice President, Utility Computing and Apps, to learn how AWS has optimized its cloud infrastructure to run some of the world’s most demanding workloads and give your business a competitive edge.

Werner Vogels Keynote

Join Dr. Werner Vogels, CTO, Amazon.com, as he goes behind the scenes to show how Amazon is solving today’s hardest technology problems. Based on his experience working with some of the largest and most successful applications in the world, Dr. Vogels shares his insights on building truly resilient architectures and what that means for the future of software development.

Accelerating innovation with AI and ML

Applied artificial intelligence (AI) solutions, such as contact center intelligence (CCI), intelligent document processing (IDP), and media intelligence (MI), have had a significant market and business impact for customers, partners, and AWS. This session details how partners can collaborate with AWS to differentiate their products and solutions with AI and machine learning (ML). It also shares partner and customer success stories and discusses opportunities to help customers who are looking for turnkey solutions.

Application integration patterns for microservices

An implication of applying the microservices architectural style is that a lot of communication between components is done over the network. In order to achieve the full capabilities of microservices, this communication needs to happen in a loosely coupled manner. In this session, explore some fundamental application integration patterns based on messaging and connect them to real-world use cases in a microservices scenario. Also, learn some of the benefits that asynchronous messaging can have over REST APIs for communication between microservices.

Maintain application availability and performance with Amazon CloudWatch

Avoiding unexpected user behavior and maintaining reliable performance is crucial. This session is for application developers who want to learn how to maintain application availability and performance to improve the end user experience. Also, discover the latest on Amazon CloudWatch.

How Amazon.com transforms customer experiences through AI/ML

Amazon is transforming customer experiences through the practical application of AI and machine learning (ML) at scale. This session is for senior business and technology decision-makers who want to understand Amazon.com’s approach to launching and scaling ML-enabled innovations in its core business operations and toward new customer opportunities. See specific examples from various Amazon businesses to learn how Amazon applies AI/ML to shape its customer experience while improving efficiency, increasing speed, and lowering cost. Also hear the lessons the Amazon teams have learned from the cultural, process, and technical aspects of building and scaling ML capabilities across the organization.

Accelerating data-led migrations

Data has become a strategic asset. Customers of all sizes are moving data to the cloud to gain operational efficiencies and fuel innovation. This session details how partners can create repeatable and scalable solutions to help their customers derive value from their data, win new customers, and grow their business. It also discusses how to drive partner-led data migrations using AWS services, tools, resources, and programs, such as the AWS Migration Acceleration Program (MAP). Also, this session shares customer success stories from partners who have used MAP and other resources to help customers migrate to AWS and improve business outcomes.

Accelerate front-end web and mobile development with AWS Amplify

User-facing web and mobile applications are the primary touchpoint between organizations and their customers. To meet the ever-rising bar for customer experience, developers must deliver high-quality apps with both foundational and differentiating features. AWS Amplify helps front-end web and mobile developers build faster front to back. In this session, review Amplify’s core capabilities like authentication, data, and file storage and explore new capabilities, such as Amplify Geo and extensibility features for easier app customization with AWS services and better integration with existing deployment pipelines. Also learn how customers have been successful using Amplify to innovate in their businesses.

AWS Amplify: Build, deploy and scale web apps

AWS Amplify is a set of tools and services that makes it quick and easy for front-end web and mobile developers to build full-stack applications on AWS.

Amplify DataStore provides a programming model for leveraging shared and distributed data without writing additional code for offline and online scenarios, which makes working with distributed, cross-user data just as simple as working with local-only data.

AWS AppSync is a managed GraphQL API service

Amazon DynamoDB is a serverless key-value and document database that’s highly scalable

Amazon S3 allows you to store static assets

DevOps revolution

While DevOps has not changed much, the industry has fundamentally transformed over the last decade. Monolithic architectures have evolved into microservices. Containers and serverless have become the default. Applications are distributed on cloud infrastructure across the globe. The technical environment and tooling ecosystem has changed radically from the original conditions in which DevOps was created. So, what’s next? In this session, learn about the next phase of DevOps: a distributed model that emphasizes swift development, observable systems, accountable engineers, and resilient applications.

Innovation Day

Innovation Day is a virtual event that brings together organizations and thought leaders from around the world to share how cloud technology has helped them capture new business opportunities, grow revenue, and solve the big problems facing us today, and in the future. Featured topics include building the first human basecamp on the moon, the next generation F1 car, manufacturing in space, the Climate Pledge from Amazon, and building the city of the future at the foot of Mount Fuji.

Latest AWS Products and Services announced at re:invent 2021

Graviton 3:  AWS today announced the newest generation of its Arm-based Graviton processors: the Graviton 3. The company promises that the new chip will be 25 percent faster than the last-generation chips, with 2x faster floating-point performance and a 3x speedup for machine-learning workloads. AWS also promises that the new chips will use 60 percent less power.

Trn1: EC2 instances for training machine learning models for various applications

AWS Mainframe Modernization: Cut mainframe migration time by 2/3

AWS Private 5G: Deploy and manage your own private 5G network (Set up and scale a private mobile network in days)

Transactions for Governed Tables in Lake Formation: Automatically manage conflicts and errors

Serverless and On-Demand Analytics for Redshift, EMR, MSK, and Kinesis.

Amazon SageMaker Canvas: Create ML predictions without any ML experience or writing any code.

AWS IoT TwinMaker: A service that makes it easy to create and use real-time digital twins of real-world systems.

Amazon DevOps Guru for RDS: Automatically detect, diagnose, and resolve hard-to-find database issues.

Amazon DynamoDB Standard-Infrequent Access table class: Reduce costs by up to 60% while maintaining the same performance, durability, scaling, and availability as the Standard table class.
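Choosing the table class is a single parameter on CreateTable. A minimal sketch assuming boto3, with a hypothetical key schema; the only difference from a Standard table is the `TableClass` value.

```python
def standard_ia_table_spec(table_name: str) -> dict:
    """CreateTable arguments selecting the Standard-IA table class."""
    return {
        "TableName": table_name,
        "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
        "BillingMode": "PAY_PER_REQUEST",
        # The only change vs. a Standard table: the table class.
        "TableClass": "STANDARD_INFREQUENT_ACCESS",
    }

# boto3.client("dynamodb").create_table(**standard_ia_table_spec("audit-logs"))
```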

AWS Database Migration Service Fleet Advisor: Accelerate database migration with automated inventory and migration planning. This service makes it easier and faster to get your data to the cloud and match it with the correct database service. “DMS Fleet Advisor automatically builds an inventory of your on-premises database and analytics services by streaming data from on premises to Amazon S3. From there, we take it over. We analyze [the data] to match it with the appropriate AWS data store and then provide customized migration plans.”

Amazon SageMaker Ground Truth Plus: Deliver high-quality training datasets fast and reduce data labeling costs.

Amazon SageMaker Training Compiler: Accelerate model training by up to 50%.

Amazon SageMaker Inference Recommender: Reduce time to deploy from weeks to hours

Amazon SageMaker Serverless Inference: Lower cost of ownership with pay-per-use pricing

Amazon Kendra Experience Builder: Deploy intelligent search applications powered by Amazon Kendra with a few clicks.

Amazon Lex Automated Chatbot Designer: Drastically simplifies bot design with advanced natural language understanding.

Amazon SageMaker Studio Lab: No-cost, no-setup access to powerful machine learning technology.

AWS Cloud WAN: Build, manage, and monitor global wide area networks.

AWS Amplify Studio: Visually build complete, feature-rich apps in hours instead of weeks, with full control over the application code.

AWS Customer Carbon Footprint Tool: Track, measure, and forecast the carbon emissions generated by your AWS usage.

AWS Well-Architected Sustainability Pillar: Learn, measure, and improve your workloads using environmental best practices in cloud computing.

AWS re:Post: A reimagined Q&A experience for the AWS community, with answers from AWS experts.

How do you build something completely new?

From AWS re:Invent 2020:

Automate anything with AWS Systems Manager

You can automate any task that involves interaction with AWS and on-premises resources, including in multi-account and multi-Region environments, with AWS Systems Manager. In this session, learn more about three new Systems Manager launches at re:Invent—Change Manager, Fleet Manager, and Application Manager. In addition, learn how Systems Manager Automation can be used across multiple Regions and accounts, integrate with other AWS services, and extend to on-premises. This session takes a deep dive into how to author a custom runbook using an automation document, and how to execute automation anywhere.
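The custom runbook the session describes is just a JSON/YAML Automation document. A hypothetical minimal one, expressed here as a Python dict for illustration, stops a single EC2 instance via the generic `aws:executeAwsApi` action; the step and parameter names are this sketch's own, not from the session.

```python
# Hypothetical minimal Automation runbook content (schemaVersion 0.3).
RUNBOOK = {
    "schemaVersion": "0.3",
    "description": "Stop a single EC2 instance (illustrative only).",
    "parameters": {"InstanceId": {"type": "String"}},
    "mainSteps": [
        {
            "name": "stopInstance",
            # Generic action that calls any AWS API from a runbook step.
            "action": "aws:executeAwsApi",
            "inputs": {
                "Service": "ec2",
                "Api": "StopInstances",
                "InstanceIds": ["{{ InstanceId }}"],
            },
        }
    ],
}
```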

Deliver cloud operations at scale with AWS Managed Services

Learn how you can quickly build scaled AWS operations tooling to meet some of the most complex and compliance-sensitive operational requirements.

 

Turbocharging query execution on Amazon EMR

Learn about the performance improvements made in Amazon EMR for Apache Spark and Presto, giving Amazon EMR one of the fastest runtimes for analytics workloads in the cloud. This session dives deep into how AWS generates smart query plans in the absence of accurate table statistics. It also covers adaptive query execution—a technique to dynamically collect statistics during query execution—and how AWS uses dynamic partition pruning to generate query predicates for speeding up table joins. You also learn about execution improvements such as data prefetching and pruning of nested data types.
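The adaptive query execution and dynamic partition pruning described above correspond to ordinary Spark SQL configuration keys (names assume Spark 3.x); EMR tunes these defaults for you, but they can also be set per job. A minimal sketch:

```python
# Spark 3.x configuration keys behind the techniques described above.
ADAPTIVE_CONFS = {
    "spark.sql.adaptive.enabled": "true",  # re-plan using runtime statistics
    "spark.sql.adaptive.coalescePartitions.enabled": "true",
    "spark.sql.optimizer.dynamicPartitionPruning.enabled": "true",  # prune join inputs
}

def apply_confs(builder, confs=ADAPTIVE_CONFS):
    """Apply the confs to any SparkSession builder-like object with .config()."""
    for key, value in confs.items():
        builder = builder.config(key, value)
    return builder

# Usage: spark = apply_confs(SparkSession.builder.appName("etl")).getOrCreate()
```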

Detect machine learning (ML) model drift in production

 Explore how state-of-the-art algorithms built into Amazon SageMaker are used to detect declines in machine learning (ML) model quality. One of the big factors that can affect the accuracy of models is the difference in the data used to generate predictions and what was used for training. For example, changing economic conditions could drive new interest rates affecting home purchasing predictions. Amazon SageMaker Model Monitor automatically detects drift in deployed models and provides detailed alerts that help you identify the source of the problem so you can be more confident in your ML applications.

Amazon Lightsail: The easiest way to get started on AWS

Amazon Lightsail is AWS’s simple, virtual private server. In this session, learn more about Lightsail and its newest launches. Lightsail is designed for simple web apps, websites, and dev environments. This session reviews core product features, such as preconfigured blueprints, managed databases, load balancers, networking, and snapshots, and includes a demo of the most recent launches. Attend this session to learn more about how you can get up and running on AWS in the easiest way possible.

Deep dive into AWS Lambda security: Function isolation

This session dives into the security model behind AWS Lambda functions, looking at how you can isolate workloads, build multiple layers of protection, and leverage fine-grained authorization. You learn about the implementation, the open-source Firecracker technology that provides one of the most important layers, and what this means for how you build on Lambda. You also see how AWS Lambda securely runs your functions packaged and deployed as container images. Finally, you learn about SaaS, customization, and safe patterns for running your own customers’ code in your Lambda functions.

 

Red team vs. blue team in AWS: Learn to defend your cloud applications (sponsored by Check Point Software)

Unauthorized users and financially motivated third parties also have access to advanced cloud capabilities. This causes concerns and creates challenges for customers responsible for the security of their cloud assets. Join us as Roy Feintuch, chief technologist of cloud products, and Maya Horowitz, director of threat intelligence and research, face off in an epic battle of defense against unauthorized cloud-native attacks. In this session, Roy uses security analytics, threat hunting, and cloud intelligence solutions to dissect and analyze some sneaky cloud breaches so you can strengthen your cloud defense. This presentation is brought to you by Check Point Software, an AWS Partner.

Best practices for security governance in serverless applications

AWS provides services and features that your organization can leverage to improve the security of a serverless application. However, as organizations grow and developers deploy more serverless applications, how do you know if all of the applications are in compliance with your organization’s security policies? This session walks you through serverless security, and you learn about protections and guardrails that you can build to avoid misconfigurations and catch potential security risks.


How Amazon.com automates cash identification & matching with AWS AI/ML

The Amazon Cash application service matches incoming customer payments with accounts and open invoices, while an email ingestion service (EIS) processes more than 1 million semi-structured and unstructured remittance emails monthly. In this session, learn how this EIS classifies the emails, extracts invoice data from the emails, and then identifies the right invoices to close on Amazon financial platforms. Dive deep on how these services automated 89.5% of cash applications using AWS AI & ML services. Hear about how these services will eliminate the manual effort of 1000 cash application analysts in the next 10 years.

Understanding AWS Lambda streaming events

Dive into the details of using Amazon Kinesis Data Streams and Amazon DynamoDB Streams as event sources for AWS Lambda. This session walks you through how AWS Lambda scales along with these two event sources. It also covers best practices and challenges, including how to tune streaming sources for optimum performance and how to effectively monitor them.
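To make the batching behavior concrete, here is a minimal sketch of a Lambda handler for a Kinesis Data Streams event source (the per-record business logic is elided and the payload format is assumed to be JSON):

```python
import base64
import json

def handler(event, context=None):
    """Minimal AWS Lambda handler for a Kinesis Data Streams event source.

    Kinesis delivers records base64-encoded under Records[].kinesis.data;
    returning normally checkpoints the whole batch, while raising an
    exception causes the batch to be retried.
    """
    processed = 0
    for record in event.get("Records", []):
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # ... per-record business logic would go here ...
        processed += 1
    return {"processed": processed}
```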

Building real-time applications using Apache Flink

Build real-time applications using Apache Flink with Apache Kafka and Amazon Kinesis Data Streams. Apache Flink is a framework and engine for building streaming applications for use cases such as real-time analytics and complex event processing. This session covers best practices for building low-latency applications with Apache Flink when reading data from either Amazon MSK or Amazon Kinesis Data Streams. It also covers best practices for running low-latency Apache Flink applications using Amazon Kinesis Data Analytics and discusses AWS’s open-source contributions to this use case.


App modernization on AWS with Apache Kafka and Confluent Cloud

Learn how you can accelerate application modernization and benefit from the open-source Apache Kafka ecosystem by connecting your legacy, on-premises systems to the cloud. In this session, hear real customer stories about timely insights gained from event-driven applications built on an event streaming platform from Confluent Cloud running on AWS, which stores and processes historical data and real-time data streams. Confluent makes Apache Kafka enterprise-ready using infinite Kafka storage with Amazon S3 and multiple private networking options including AWS PrivateLink, along with self-managed encryption keys for storage volume encryption with AWS Key Management Service (AWS KMS).

BI at hyperscale: Quickly build and scale dashboards with Amazon QuickSight

Data-driven business intelligence (BI) decision making is more important than ever in this age of remote work. An increasing number of organizations are investing in data transformation initiatives, including migrating data to the cloud, modernizing data warehouses, and building data lakes. But what about the last mile—connecting the dots for end users with dashboards and visualizations? Come to this session to learn how Amazon QuickSight allows you to connect to your AWS data and quickly build rich and interactive dashboards with self-serve and advanced analytics capabilities that can scale from tens to hundreds of thousands of users, without managing any infrastructure and only paying for what you use.

 

Is there an Updated SAA-C03 Practice Exam?

As of this writing, the official SAA-C03 practice exam is not yet available; it will probably be about three more months before AWS releases the official practice exam for the new AWS Certified Solutions Architect Associate. In the meantime, you can try the SAA-C03 sample exam to get a better idea of what the topic coverage will be and how the scenarios will be presented.
The SAA-C03 sample exam PDF gives you a hint of what the real SAA-C03 exam will look like in your upcoming test. The sample questions also include explanations and reference links that you can study.

Top-paying Cloud certifications:

  1. Google Certified Professional Cloud Architect — $175,761/year
  2. AWS Certified Solutions Architect – Associate — $149,446/year
  3. Google Cloud Associate Engineer — $145,769/year
  4. Azure/Microsoft Cloud Solution Architect — $141,748/year
  5. AWS Certified Cloud Practitioner — $131,465/year
  6. Microsoft Certified: Azure Fundamentals — $126,653/year
  7. Microsoft Certified: Azure Administrator Associate — $125,993/year

AWS Certified Solution Architect Associate Exam Prep Quiz App

AWS Solutions Architect Associates SAA-C02 and SAA-C03 Certification Exam Prep
 
 
#AWS #SAAC02 #SAAC03 #SolutionsArchitect #AWSSAA #SAA #AWSCertification #AWSTraining #LearnAWS #CloudArchitect #SolutionsArchitect  #Djamgatech
 
AWS SAA Exam Prep App on iOs
 
AWS SAA Exam Prep App on android
 
AWS SAA Exam Prep App on Windows 10/11
 
AWS SAA App details and features
AWS Certified Solution Architect Associate Exam Prep

Download AWS Solution Architect Associate Exam Prep Pro App (No Ads, Full version with answers) for:

Android –  iOS – Windows 10 – Amazon Android

 

Download AWS Solution Architect Associate Exam Prep Quiz App for:

All Platforms (PWA) –  Android –  iOS – Windows 10  – Amazon Android

 

AWS Cloud Certifications Breaking News –  Testimonials – AWS Top Stories

  • Any Open Source projects out there that provide similar functionality to Isengard?
    by /u/AdFrequent4872 (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 24, 2022 at 8:09 am

    Off the back of an earlier discussion following Corey’s blog, has anyone come across anything with similar functionality? Ex-AWS here and while Burner Accounts don’t seem a challenge to mimic, Isengard is kinda complex. Basic requirement: Allow authorised individuals to arbitrarily spin up POC/demo accounts not tied to an existing org. https://www.reddit.com/r/aws/comments/rx156k/the_aws_service_i_hate_the_most/?utm_source=share&utm_medium=ios_app&utm_name=iossmf submitted by /u/AdFrequent4872 [link] [comments]

  • How to connect rds database via ssh tunnel
    by /u/devilismypet (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 24, 2022 at 6:42 am

    Well I launched a db instance in aws rds wihout public. This db can be connected from its vpc only. I have also launched an ec2 instance in same vpc. Now I want to connect to db instance using ssh tunnel from my local machine. I will also have to use port forwading. Please help submitted by /u/devilismypet [link] [comments]

  • Best way to manage customers on a AWS hosted web app?
    by /u/analyzeTimes (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 24, 2022 at 6:29 am

    I'm by no means experienced in web development, so I appreciate any help. I have a data service that is currently leveraging AWS S3. I need to set up a website that has 3 primary functions: Provides a landing page detailing the service. Provides a means for customers to sign up, create a profile and preferences for the data service, and manages subscription payments. Provides a GUI for users to interact with the data service. The backend is pretty much a stitching of AWS Lambda functions and and S3 buckets just so I get my MVP off the ground. My question is specifically directed toward point 2. Does this community have any suggestions as to how to facilitate customer profile generation/management/payments that ties in seamlessly with AWS and a payment provider (such as Stripe or PayPal)? I hope to be pointed toward an implementation strategy that allows for me to manage customers using as much COTS software as possible, but I welcome any suggestions! Additional context if needed: This is a personal project that allows for public use of my constantly changing dataset and service. Moreover, it helps me expand my knowledge of AWS using working examples. Thank you! submitted by /u/analyzeTimes [link] [comments]

  • What is the best way to group AWS actions in policies?
    by /u/daidpndnt_src (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 24, 2022 at 6:01 am

    For a set of actions on a service, they could either be grouped by resource, or a specific condition that they support. I want to see which way these actions actually make sense in maintaining post definition. For e.g, should I group them using a resource tag condition or come up with a naming convention and then bundle them using a resource type? It is also unclear, as to how the actions in one statement are evaluated against the resources for that statement? For e.g., what if I lump a bunch of RDS related actions that require different resource types and then list all the resources for these actions, will it fail? submitted by /u/daidpndnt_src [link] [comments]

  • Integrating Service Workbench and Session Manager
    by /u/boyter (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 24, 2022 at 5:32 am

    submitted by /u/boyter [link] [comments]

  • ServerlessQ 1.0 - A hosted SQS + Lambda Service
    by /u/sandro-_ (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 24, 2022 at 5:21 am

    submitted by /u/sandro-_ [link] [comments]

  • Call http endpoint from java lambda
    by /u/LegitAndroid (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 24, 2022 at 5:07 am

    Noob to micro services. I understand an http request needs to be made with appropriate endpoint url, api URI, and http verb What’s the best way to do this in Java to reduce overhead with retries, response parsing, error handling etc? submitted by /u/LegitAndroid [link] [comments]

  • Is there any way to trigger a lambda when a glue workflow run finishes?
    by /u/ManausBrazil (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 24, 2022 at 4:05 am

    I know that I can just call a lambda from the last job of a workflow, but I'm looking for a less invasive way. I found that I can trigger a lambda when a glue job finishes through cloud watch events, but and how about workflow runs? Is it possible? How to? submitted by /u/ManausBrazil [link] [comments]

  • Sorting user data in S3 bucket
    by /u/Aerostade (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 24, 2022 at 3:50 am

    Hi all Long story short, I'm using amplify and cognito to allow non AWS users to upload data into an S3 bucket. The trouble is, this data isn't sorted by anything other than object name and upload time. If personA uploads a dataset at the same time as personB, and they happen to use the same camera naming conventions, the data gets mixed together. I'd like to either pre-sort the data by creating "folders" for every upload, or post-sort the data according to the cognito user who uploaded it. I can't even find where that metadata is. THanks! ​ Joe submitted by /u/Aerostade [link] [comments]

  • what is the difference between the following two policies and why would you prefer one of the following
    by /u/sar009 (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 24, 2022 at 2:20 am

    1. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::my-bucket" }, { "Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::my-bucket/*" } ] } 2. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ["s3:ListBucket", "s3:GetObject"] "Resource": [ "arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*" ] } ] } submitted by /u/sar009 [link] [comments]

  • How does amazon dynamically assign ip addreses to ec2?
    by /u/Oxffff0000 (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 23, 2022 at 11:39 pm

    The reason I asked is because I am trying to figure out how resolv.conf is being populated. Back in the days, we set up a DHCP server and its scopes so that when a client machine broadcasts a request that it needs an ip, it will get an ip address along with the nameserver which will get placed in resolv.conf. I'm trying to figure out how where our ec2 instances are getting its resolv.conf modified. Is there an amazon dhcp service that I can look at? submitted by /u/Oxffff0000 [link] [comments]

  • Can I transfer my domain name and Wordpress site to AWS? I'm so over Hostgator.
    by /u/CoyoteBrance (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 23, 2022 at 11:09 pm

    Looking for answers from people who done things like this in the past. Any advice appreciated. Thank you! submitted by /u/CoyoteBrance [link] [comments]

  • Does anybody know of an end to end tutorial to set up a Sagemaker ground truth Custom Streaming Workflow?
    by /u/manueslapera (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 23, 2022 at 10:38 pm

    Ive been trying to find any decent article that goes end to end to set up an sns powered custom Ground truth workflow, so when a new manifest/source is added to the input s3 bucket the custom workflow kicks in and the individual worker annotations are aggregated. Any pointers to any end to end tutorial would be good. AWS docs dont go end to end and only show disparate pieces and demos. submitted by /u/manueslapera [link] [comments]

  • New C7g EC2 instances powered by AWS Graviton3
    by /u/joelrwilliams1 (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 23, 2022 at 10:30 pm

    James Hamilton introduces the new EC2 instances: https://www.youtube.com/watch?v=JY4EimMEi_A submitted by /u/joelrwilliams1 [link] [comments]

  • Is it more secure to run Lambda inside a VPC?
    by /u/bashtoni (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 23, 2022 at 9:30 pm

    submitted by /u/bashtoni [link] [comments]

  • C7g instance family powered by Graviton3 launches
    by /u/lizthegrey (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 23, 2022 at 9:26 pm

    https://aws.amazon.com/ec2/instance-types/c7g/ https://aws.amazon.com/blogs/aws/new-amazon-ec2-c7g-instances-powered-by-aws-graviton3-processors/ https://aws.amazon.com/about-aws/whats-new/2022/05/amazon-ec2-c7g-instances-powered-aws-graviton3-processors/ Our experience at Honeycomb from our preview is 30%-40% better performance, the instances cost 7% more each, ~= 25%-35% better price-performance. https://www.honeycomb.io/blog/present-future-arm-aws-graviton-honeycomb/ submitted by /u/lizthegrey [link] [comments]

  • New – Amazon EC2 C7g Instances, Powered by AWS Graviton3 Processors
    by Sébastien Stormacq (AWS News Blog) on May 23, 2022 at 9:02 pm

    I am excited to announce that Amazon Elastic Compute Cloud (Amazon EC2) C7g instances powered by the latest AWS Graviton3 processors that have been available in preview since re:Invent last year are now available for all. Let’s decompose the name C7g: the “C” instance family is designed for compute-intensive workloads. This is the 7th generation

  • What means a repeated column in the same slice - Redshift
    by /u/Alarming_Rest1557 (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 23, 2022 at 8:32 pm

    I have been doing a practice about Redshift where we have to search for the best way to optimize a DB. Watching the disk_usage per column in a customer table (default dist style was even), I changed it for the dist style key, my doubt is, what means that the same column is repeated in the same slice. In both styles, I have around 1.5 million registers per slice but look more distributed whit the dist style key. ​ Dist Style Even ​ Dist Style Key submitted by /u/Alarming_Rest1557 [link] [comments]

  • AWS Week In Review – May 23, 2022
    by Sébastien Stormacq (AWS News Blog) on May 23, 2022 at 7:40 pm

    This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS! This is the right place to quickly learn about recent AWS news from last week, in just about five minutes or less. This week, I have collected a couple of

  • How are you setting up aws landing zone using terraform?
    by /u/myth007 (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 23, 2022 at 6:28 pm

    I am working on a small project where need to create a multi-account setup (dev, prod, log archive, security). I want to do this using the AWS landing zone, but don't want to do it through the console. Any suggestion how this can be done using terraform? Once account is created, then I can do other things using specific terraform code like setting us EKS etc. submitted by /u/myth007 [link] [comments]

  • ec2-spot-interrupter: a simple CLI tool that triggers Amazon EC2 Spot Interruption Notifications and Rebalance Recommendations
    by /u/tobypadilla (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 23, 2022 at 5:30 pm

    submitted by /u/tobypadilla [link] [comments]

  • Properly Unit Testing Lambda Functions — With an Actual Production Example
    by /u/shadowsyntax (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 23, 2022 at 4:06 pm

    submitted by /u/shadowsyntax [link] [comments]

  • AWS Lambda Function URLs with Serverless Framework
    by /u/RichardGrant_ (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 23, 2022 at 3:14 pm

    submitted by /u/RichardGrant_ [link] [comments]

  • What is the best way to share an S3 bucket with multiple users that are NOT AWS users?
    by /u/bashalltheway (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 23, 2022 at 3:10 pm

    Just got an weird request and I honestly think these people have no clue of how this works but I will do my best to explain and ask for assistance and tips: We have one S3 bucket that is used for financial documents. These documents are written by an SAP instance, since it is on AWS, all good, we use IAM to manage this. Now they want to expose this S3 bucket to outside users that do not have an AWS account. This means that they need some sort of authentication to these files. I am so confused and I do not know why this is an requirement since we can create a group and put those users there, it's not like there are thousands of users, it's only a few. Still, do you guys have any suggestions on how to manage this? submitted by /u/bashalltheway [link] [comments]

  • vpn pricing
    by /u/xha1e (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 23, 2022 at 1:56 pm

    Trying to understand pricing here. I want to create a single vpn endpoint and client connection from my home network to an aws vpc. Do I have this right? The cost to maintain the idle connection is: AWS Site-to-Site VPN connection fee: There is an hourly fee for AWS Site-to-Site VPN, while connections are active. For the US East (Ohio) Region, the fee is $0.05 per hour. You pay $36.00 per month in connection fees. Plus the endpoint and client: AWS Client VPN endpoint association $0.10 per hour AWS Client VPN connection $0.05 per hour I'm getting $144 per month? Is that right? If so what other alternatives are there, I went with the aws vpn because it seemed simple and integrated in the ecosystem, but the cost is a lot for my use case. submitted by /u/xha1e [link] [comments]

  • FinOps across multiple AWS organizations
    by /u/SpiteHistorical6274 (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 23, 2022 at 1:08 pm

    I'm looking for general info/whitepapers/design patterns/reference implementations for how people manage FinOps across multiple AWS Organizations. All the material I've found so far focuses on a single organization. Does any happen to have any useful resources? submitted by /u/SpiteHistorical6274 [link] [comments]

  • Creating a discord bot card game, is S3 a good option?
    by /u/xpritee (Amazon Web Services (AWS): S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, VPC and more) on May 23, 2022 at 12:39 pm

    New to all these cloud storage. Am creating a card collecting discord bot, and am currently hosting on postimage but it's running into 503 sometimes and I suspect it's a rate limit. Here are the details: About 7k jpg images to host. Each average 40kb. Rarely upload after the initial 7k upload, maybe once in awhile I update with new cards (100-ish each time?) Need to download the images with rather quick response time (about 3s to download 3 images). Expect about 10k downloads per day (really rough estimate...) Will I overrun the free tier? If so, how much more do I expect to pay? Running the calculator on AWS tells me I shouldn't exceed $5 but I'm new to this and I don't trust my estimations. Thanks in advance! EDIT: I only need to host images, I realise you can run bots on the cloud but I just need the storage. submitted by /u/xpritee [link] [comments]

  • Database Specialty Course Prep
    by /u/Fawkzzz (AWS Certifications) on May 20, 2022 at 11:42 pm

    I'm considering the Database Specialty for my next certification. I saw there's a Stephane Maarek course, but I don't think he's very involved in it and passes off to another instructor who sounds like hot garbage. Are there any other reliable courses out there? I will pick up the Tutorial Dojo practice exams but there aren't eBooks available yet. submitted by /u/Fawkzzz [link] [comments]

  • INE for cloud training?
    by /u/Max-lower-back-Payne (AWS Certifications) on May 20, 2022 at 9:51 pm

    Has anyone used INE for AWS training? If so what did you think? submitted by /u/Max-lower-back-Payne [link] [comments]

  • Don't think I can afford Cantrill's AWS SOAA course ($48), is Stephane Maarek's course (on sale: £10.99) a good substitute?
    by /u/deadassmf (AWS Certifications) on May 20, 2022 at 6:08 pm

    (Should clarify I can technically "afford" $48, I'd just much rather pay less due to being a junior lol) Looked upon Reddit threads upon Reddit threads about the AWS Solutions Architect Associate exam and what course to use - by far Cantrill's is the most recommended and referred to as the "gold standard". When I visited his site though he seems to charge $48 to enrol onto the course, meanwhile Maarek's course on Udemy is currently on sale - down from £59.99 to £10.99 (for the next 5 days only!!!). I think if this was when Maarek's Udemy course was it's original £59.99 price then Cantrill's would be an easy winner, right? Cheaper and by far more recommended. I've seen some comments say that Maarek's is much less in terms of duration, only reads slides, and apparently doesn't cover as wide as Cantrill's, as well not having anything practical like Cantrill does. So I'm a little uncertain if it's a good substitute, even considering the current price difference? Context: Junior DevOps Engineer (security focused), I have 1yr exp as a junior and 1yr exp as an intern. submitted by /u/deadassmf [link] [comments]

  • Permission from the parent account through a user policy.
    by /u/kasun1988 (AWS Certifications) on May 20, 2022 at 12:18 pm

    For an IAM user to access resources in another account the following must be provided: Permission from the parent account through a user policy. Permission from the resource owner to the IAM user through a bucket policy, or the parent account through a bucket policy, bucket ACL or object ACL. Here What is meant by Permission from the parent account through a user policy. and resource owner to the IAM user submitted by /u/kasun1988 [link] [comments]

  • AWS solutions architect associate results
    by /u/greyskull57 (AWS Certifications) on May 20, 2022 at 11:10 am

    Hi guys, I gave aws exam yesterday, it's more than 24 hours since I have ended my test, but still waiting for result email, pass/fail. It's normal or can it go upto 5 days to just get my results? submitted by /u/greyskull57 [link] [comments]

  • Beginner Question
    by /u/californianoob (AWS Certifications) on May 20, 2022 at 5:05 am

    Hi everyone, I have a non-computer, manufacturing-related engineering degree with very little experience in the field, and I hate my job. I've heard of people without degrees or experience getting AWS certifications and getting hired easily by Amazon or other IT companies. If this is true, which certificate should I start with? All I have is one semester of C++ experience from 10 years ago when I was studying.

  • Associate Solutions Architect – Early Career 2022
    by /u/youngfrenc (AWS Certifications) on May 20, 2022 at 4:32 am

    Hey guys, just wanted to know if anyone has gotten through the AWS Solutions Architect hiring process that Amazon has. As I understand it, there are 2 phases: curriculum-based learning and on-the-job training, each lasting around 6 months. I wanted to know your opinions on this and whether or not it's worth it. Are any of the phases paid? Asking because I'm a broke recent college grad.

  • AWS Security Specialty Certification - Passed woot!
    by /u/Fawkzzz (AWS Certifications) on May 20, 2022 at 4:03 am

    I sat for the DevOps Pro a little less than a week ago and figured I would make an attempt at a Specialty exam next. Prep - I picked up the Tutorials Dojo eBook and practice exams a few days ago. Yesterday I went through both practice exams in review mode and scored a 55% and 65%. I skimmed parts of the eBook and made another attempt at both exams today, scoring an 83% and 85%. I took the AWS official practice exam yesterday and scored a 75%. I used Stephane Maarek's and Adrian Cantrill's courses for my pro and associate certs, and those helped cover a lot of services that show up on the Security exam. Thoughts / Impressions - The exam has a TON of choose-2-or-3 multiple-choice questions. The questions aren't as lengthy as the SAP, but are pretty similar in length to the DevOps Pro. I was also surprised to see multiple choices going from A-F on the exam. I flagged about 10 questions for review and changed most of my answers on those questions. I had about 30 minutes left on the clock when I finished due to the shorter questions. Services that come up a lot include, but are not limited to: AWS KMS (many questions), S3 permissions (at least 5 S3-related questions), CloudWatch Events (various questions relied on it), Secrets Manager & Parameter Store, Security Groups, Network ACLs, CloudTrail, STS, and recovery/quarantining of EC2 instances.

  • Can anyone recommend any good course material for the AWS CCP Exam?
    by /u/NephilimTheGiant (AWS Certifications) on May 20, 2022 at 1:13 am

    Hi everyone, I recently dove head first into A Cloud Guru and feel completely ripped off. Almost none of the course material was on the actual test, and I feel extremely discouraged because I feel like I actually know nothing now. Any input is appreciated, thanks!

  • AWS Cloud Practitioner vs AWS Cloud Architect Exam
    by /u/kakkrot95 (AWS Certifications) on May 19, 2022 at 8:47 pm

    Hi, so I am from a tech background: a Bachelor's in Computer Apps and two post-grad diplomas in wireless networks and network security. At the moment I'm working in a Technical Support position but looking to step into IT Help Desk and Service Desk positions. I honestly do not have much experience when it comes to a proper IT job, so I was planning to get a few certs to add to my resume and upskill a bit. I am mainly confused about which cert to prepare for. From what I gathered, Practitioner is for people who have no prior IT experience at all. Can someone provide some more information on both of the exams and which one will be suitable for me? PS: I went through almost 60% of the AWS Certified Solutions Architect - Associate 2020 course by Ryan Kroonenburg back in 2019, but I do not think I actually remember anything from it. I should be able to grab and digest the content from a new course pretty quickly. Thank you in advance

  • Just passed SAA-C02 with score of 845
    by /u/fadesfast (AWS Certifications) on May 19, 2022 at 8:40 pm

    Took the amazing course created by u/acantril and followed up with the practice exams by u/jon-bonso-tdojo. I cannot recommend these two learning sources enough. Adrian's teaching style is AMAZING at helping commit to memory the various AWS services, their use cases, when to use them, etc., and the TDojo practice exams by Jon Bonso give a very realistic look into what the exam will be like. Thanks to both of you for guiding me along this journey! Although I grew up around technology and worked for a year as a computer tech, my IT experience is quite limited. I began studying in late March and took the exam on Monday, passing with a score of 845. I used Pearson VUE Online and everything went smoothly. Although I did not receive an initial pass/fail upon completing the exam, I did receive an email notification within 24 hours. Moving forward, my biggest concerns are the lack of experience I have in the IT field, and the fact that I have a somewhat significant gap since I was last employed in early 2018 (due to college and family illnesses). I am already working through the cloud resume challenge, and intend to complete all of Adrian's advanced demo labs as well in order to build some experience. I know I have an uphill battle ahead of me to get myself into the field of cloud computing (ultimately a position as SA), but I will do whatever it takes to earn a position there. Please feel free to share any thoughts or advice for me moving forward. I would greatly appreciate any guidance!

  • Looking for a career change
    by /u/sho2wavey (AWS Certifications) on May 19, 2022 at 6:27 pm

    Hi guys, new to Reddit and hence this community. I've been working in a pharmacy job for 3 years. I'm 23 years old with no experience at all in IT except a course I did a couple years ago, which I failed miserably because I never saw myself in IT. As I've gotten older I've become bored with my job and I'm looking for something challenging. Not to brag, but I know for sure I'm smart enough. Would doing the AWS Solutions Architect be a smart career move for me, and what's the likelihood I can land a job with just this certification and obviously the basic school stuff?

  • Which course is best for AWS certified solution Architect - Associate?
    by /u/Arajgor (AWS Certifications) on May 19, 2022 at 4:51 pm

    I have researched a lot and a few names come up: A Cloud Guru, Stephane Maarek, Adrian Cantrill, Neal Davis, and many more. So which one is best for understanding AWS at the associate level, not just for passing the exam?

  • AWS Secrets Manager vs SSM Parameter Store?
    by /u/PerfectlyCooperative (AWS Certifications) on May 19, 2022 at 2:28 am

    Can anyone explain the differences and when to use either of these two?

  • Taking an Instructor-led Course After or Before Study Materials?
    by /u/g0stsec (AWS Certifications) on May 19, 2022 at 1:01 am

    Looking for recommendations on this. Background: I'm an IT professional with 20 years of experience, up and down the OSI model: end-user support, helpdesk, and enterprise service management. Solid background in networking, network security, systems administration (Linux and Windows), storage solutions, and some virtualization experience (VMware ESXi specifically). Opportunity: I have an opportunity to take a 3-day instructor-led course to prepare for the Solutions Architect Associate exam. I'm not under the impression that this instructor-led course is all I need; I expect to study materials for several weeks. Would you recommend taking the course soon (in the next few weeks) and then studying for weeks, or taking the course after studying for several weeks (based on your experience, if you have it)?

  • New to AWS Certs, do I need anything before AWS SCS?
    by /u/GroundbreakingMark4 (AWS Certifications) on May 18, 2022 at 10:19 pm

    Hi everyone! I'm new to AWS certs but have some experience penetration testing AWS environments. I was thinking of doing AWS Solutions Architect Associate (with probably Cloud Practitioner as part of my study) followed by AWS Security Specialty, but I wasn't sure if: a) there are any prerequisites for the Security Specialty or if I could jump straight in; b) anyone has jumped straight into the Security Specialty, or if any of the aforementioned certs (Solutions Architect and Practitioner) are recommended first. Thanks!

  • AWS Backup Now Supports Amazon FSx for NetApp ONTAP
    by Jeff Barr (AWS News Blog) on May 18, 2022 at 8:08 pm

    If you are a long-time reader of this blog, you know that I categorize some posts as “chocolate and peanut butter” in homage to an ancient (1970 or so) series of TV commercials for Reese’s Peanut Butter Cups. Today, I am happy to bring you the latest such post, combining AWS Backup and Amazon FSx

  • Is this a good deal? $1250 for AWS Solutions Architect Course and Certification. I am looking to get AWS Certified and prefer an actual instructor which this offers. I usually have a hard time with self study.
    by /u/soulreaver99 (AWS Certifications) on May 18, 2022 at 6:00 pm


  • How I passed Certified Cloud Practitioner Exam by studying < 15 hours (Tips)
    by /u/Adventurous-Sign4520 (AWS Certifications) on May 18, 2022 at 1:47 pm

    Hey everyone, I am writing this post to serve as a guide for folks who are looking for a quicker way to crack the exam. FYI, I have < 1 year of AWS experience. Before I started my prep I was aware of the high-level basics of EC2, Lambda, SQS, SNS, and RDS. I did the Ultimate AWS Certified Cloud Practitioner - 2022 course by Stephane Maarek and skipped the hands-on parts (you can do the hands-on if you are curious about something). I watched lectures at 1.25x, and after the end of every module I would go through the summary lecture and do the quizzes for the module. Before my exam, I went through all summary sections again and did all those practice quizzes again for each module. I appeared in the exam and, surprisingly, everything that was asked was covered in the course. Just my two cents for people who are in a bit of a rush: give yourself 2 days and schedule time blocks in your calendar to study for the exam. Edit 1: Create a document to remember a high-level summary of what each thing does. If the instructor mentions something important (or something is highlighted in bold in the slides), put it in your notes. Take a minute to guess (or recall) what the service does before the instructor talks about it in the summary. I skipped Section 20: Other Services, as it was likely not going to be in the exam.

  • SAP-C02 AWS Certified Solutions Architect - Professional certification exam is changing November 15, 2022. The last date to take the current exam is November 14, 2022
    by /u/HolmesChong (AWS Certifications) on May 18, 2022 at 1:29 pm

    Starting November 15, 2022, a new version of the AWS Certified Solutions Architect - Professional exam will be available. The AWS Certified Solutions Architect - Professional exam has been updated to align with the AWS Well-Architected Framework across all domains and will ensure the certification validates the latest AWS technical skills and cloud expertise. Please review the updated exam guide to learn what to expect and to help you prepare. If you are preparing for the current AWS Certified Solutions Architect - Professional exam, or need to recertify, you’ll want to make sure to take the current exam by November 14, 2022. https://aws.amazon.com/certification/coming-soon/

  • appspec.yaml or appspec.yml for a CodeDeploy deployment on an ECS cluster
    by /u/KeyCup2606 (AWS Certifications) on May 18, 2022 at 1:26 pm

    Hello, I'm having trouble with this AWS Developer certification question: https://preview.redd.it/2kc0egxak8091.png?width=1217&format=png&auto=webp&s=b0589fa4121477855259de1cf8ea1afa8f9831b2 But according to AWS, we can use appspec.yaml or appspec.yml: https://preview.redd.it/y7shtqajk8091.png?width=1418&format=png&auto=webp&s=5573403a3399267dc45de470511f304a4c68d4c5 I'm really confused. What's the correct answer?

  • Passed SOA-C02 with 848
    by /u/nonFungibleHuman (AWS Certifications) on May 18, 2022 at 12:33 pm

    So I finally received my results today. Thanks to everyone who posted their exam experience here. I took it in a test center (Pearson VUE) because of your recommendations and would 100% repeat the experience: flawless, and the labs went smoothly. Background: This is my second cert; last year I got the Developer Associate, and up to today I've got around 2 years working with AWS, lately on personal projects for learning purposes. Having experience with the console helps a ton with the labs, and doing such projects helped me grasp the knowledge better. I am a software developer with 5 years of experience and I want to jump into Architect or DevOps; that will depend on my new job. I used u/stephanemaarek's Udemy course and u/jon-bonso-tdojo's practice exams/labs, studied for 2 and a half months 1 to 2 hours daily, and took a bunch of notes in the form of flashcards (around 900 flashcards), which I revised daily (10-100 cards per day). On practice tests I was scoring around 80%. I did the one in Maarek's course and then the final exam in Jon Bonso's material; doing the exams in section mode and review mode helped me a lot to tackle the weak points, and the explanations of each answer are just amazing. I am going to focus now on skill development, so no certs for now, but after I decide which path to go (probably DevOps) I'll go for DevOps Pro.

  • AWS training materials vs courses
    by /u/FBAmike (AWS Certifications) on May 18, 2022 at 7:43 am

    Hi, at the associate level you see a lot of recommendations for Stephane's, Neal's, or Adrian's course, with some free resources mixed in (freeCodeCamp, etc.) and a few practice test options. I don't have a good sense of AWS' own training. Is it simply too shallow to pass with? Is it organized poorly? Is it a viable alternative? Can someone who has a good sense of the various paid courses and AWS training put them in context for me, so I can figure out how to approach this?

Download AWS Solution Architect Associate Exam Prep Pro App (No Ads, Full version with answers) for:

AWS SAA-C02 SAA-C03 Exam Prep

Android –  iOS – Windows 10 – Amazon Android

 

Posted on October 2, 2018. Updated May 16, 2022.

Top 100 AWS Certified Cloud Practitioner Exam Preparation Questions and Answers Dumps

AWS Cloud Practitioner CCP CLF-C01 Certification Exam Prep

Welcome to the Top 100 AWS Certified Cloud Practitioner Exam Preparation Questions and Answers Dumps :

Definition and Objectives,  

Top 100 Questions and Answers Dumps, 

2022 AWS Cloud Practitioner Exam Preparation

White papers,  

Courses, Labs and Training Materials,  

Exam info and details,  

References,  

Jobs,

 Others

AWS Certificates, 

AWS Cloud Support Engineer Job Interview Prep,  



Top 20 AWS Training Q&A , 


AWS Web Services Cheat Sheet,  

Latest Products & Services at AWS RE:INVENT

AWS Cloud Practitioner CCP CLF-C01 Certification Exam Prep

The AWS Certified Cloud Practitioner average salary is $131,465/year.

What is the AWS Certified Cloud Practitioner Exam?

The AWS Certified Cloud Practitioner Exam (CLF-C01) is an introduction to AWS services, and its intention is to examine the candidate's ability to define what the AWS Cloud is and its global infrastructure. It provides an overview of AWS core services, security aspects, pricing, and support services. The main objective is to provide an overall understanding of the Amazon Web Services Cloud platform, including its services, use cases, and benefits. [Get AWS CCP Practice Exam PDF Dumps here]

2022 AWS CCP CLF-C01 Practice Exam Course – Top 250+ Questions and Detailed Answers – Success Guaranteed – Save 50% with this link

AWS Certified Cloud Practitioner Exam Preparation

AWS CCP CLF-C01 on Android –  AWS CCP CLF-C01 on iOS –  AWS CCP CLF-C01 on Windows 10/11

AWS Cloud Practitioner Exam Prep - CCP CLF-C01

 
AWS CCP CLF-C01 on Android 
 

 
AWS CCP CLF-C01 on iOS
 
AWS Certified Cloud Practitioner Mock Exams Pro Windows10/11

AWS CCP CLF-C01 on Windows 10/11

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

Top

AWS Certified Cloud Practitioner Exam Prep (CLF-C01) Questions and Answers 

AWS Certified Cloud Practitioner Exam Certification Prep Quiz App

Download AWS Cloud Practitioner Exam Prep Pro App (No Ads, Full Version with Answers) for:

AWS Certified Cloud Practitioner Exam Preparation

AWS CCP CLF-C01 on Android –  AWS CCP CLF-C01 on iOS –  AWS CCP CLF-C01 on Windows 10/11

Below we are providing you with:

  • aws cloud practitioner exam questions
  • aws cloud practitioner sample questions
  • aws cloud practitioner exam dumps
  • aws cloud practitioner practice questions and answers
  • aws cloud practitioner practice exam questions and references

Q1: For auditing purposes, your company now wants to monitor all API activity for all regions in your AWS environment. What can you use to fulfill this new requirement?

  • A. For each region, enable CloudTrail and send all logs to a bucket in each region.
  • B. Enable CloudTrail for all regions.
  • C. Ensure one CloudTrail is enabled for all regions.
  • D. Use AWS Config to enable the trail for all regions.

Answer:

C.

Ensure one CloudTrail is enabled for all regions.
Turn on CloudTrail for all regions in your environment and CloudTrail will deliver log files from all regions to one S3 bucket.
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.

Reference:
AWS CloudTrail


Top


Q2: What is the best solution to provide secure access to an S3 bucket without using the internet?

  • A. Use a VPN connection.
  • B. Use an Internet Gateway.
  • C. Use a VPC Endpoint to access S3.
  • D. Use a NAT Gateway.

Answer:

C.


Use a VPC Endpoint to access S3.
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

AWS PrivateLink simplifies the security of data shared with cloud-based applications by eliminating the exposure of data to the public Internet.

Reference:
VPC Endpoint


Top

Q3: In the AWS Shared Responsibility Model, which of the following are the responsibility of AWS?

  • A. Securing Edge Locations
  • B. Encrypting data
  • C. Password policies
  • D. Decommissioning data

Answer:

A and D.

It is AWS's responsibility to secure edge locations and decommission data.
AWS responsibility “Security of the Cloud” – AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.

Reference:
AWS Shared Responsibility Model


Top

Q4: You have EC2 instances running at 90% utilization and you expect this to continue for at least a year. What type of EC2 instance would you choose to ensure your costs stay at a minimum?

  • A. Dedicated host instances
  • B. On-demand instances
  • C. Spot instances
  • D. Reserved instances

Answer:

D.

Reserved instances are the best choice for instances with continuous usage and offer a reduced cost because you purchase the instance for the entire year.
Amazon EC2 Reserved Instances (RI) provide a significant discount (up to 75%) compared to On-Demand pricing and provide a capacity reservation when used in a specific Availability Zone.

Reference:
AWS Reserved instances.
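
The cost difference for an always-on instance can be sketched with simple arithmetic. The hourly rate and the discount below are hypothetical placeholders (AWS quotes "up to 75%" off; real rates vary by instance type and region):

```python
# Back-of-the-envelope comparison of On-Demand vs Reserved pricing for an
# instance that runs essentially all year. Rates are made-up examples.
HOURS_PER_YEAR = 24 * 365          # 8,760 hours
ON_DEMAND_RATE = 0.10              # assumed On-Demand $/hour
RI_DISCOUNT = 0.40                 # assumed 40% Reserved discount

on_demand_cost = ON_DEMAND_RATE * HOURS_PER_YEAR
reserved_cost = ON_DEMAND_RATE * (1 - RI_DISCOUNT) * HOURS_PER_YEAR

print(f"On-Demand for 1 year: ${on_demand_cost:,.2f}")   # $876.00
print(f"Reserved for 1 year:  ${reserved_cost:,.2f}")    # $525.60
```

At sustained high utilization the Reserved Instance always wins; the longer the commitment, the bigger the gap.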


Top

Q5: What tool would you use to get an estimated monthly cost for your environment?

  • A. TCO Calculator
  • B. Simple Monthly Calculator
  • C. Cost Explorer
  • D. Consolidated Billing

Answer:

B. [Get AWS CCP Practice Exam PDF Dumps here]

AWS CCP CLF-C01 on Android –  AWS CCP CLF-C01 on iOS –  AWS CCP CLF-C01 on Windows 10/11

The AWS Simple Monthly Calculator helps customers and prospects estimate their monthly AWS bill more efficiently. Using this tool, they can add, modify and remove services from their ‘bill’ and it will recalculate their estimated monthly charges automatically.

Reference:
AWS Simple Monthly Calculator


Top

Q6: How do you make sure your organization does not exceed its monthly budget?


  • A. Sign up for the free alert under filing preferences in the AWS Management Console.
  • B. Set a schedule to regularly review the Billing and Cost Management dashboard each month.
  • C. Create an email alert in AWS Budgets.
  • D. In CloudWatch, create an alarm that triggers each time the limit is exceeded.

Answer:

C.
AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.
You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Reservation alerts are supported for Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache, and Amazon Elasticsearch reservations.

Reference:
AWS Budgets.
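
The AWS Budgets alert rule described above (fire when actual or forecasted spend crosses a threshold) can be sketched as a plain function. This is a local illustration only; the function name and the 80% default are assumptions, not an AWS API:

```python
def budget_alert(budget, actual, forecasted, threshold_pct=80):
    """Mimic the AWS Budgets alert rule: fire when actual or forecasted
    monthly spend crosses a percentage of the budget. Illustrative only."""
    limit = budget * threshold_pct / 100
    if actual >= limit:
        return f"ALERT: actual ${actual:.2f} crossed {threshold_pct}% of budget"
    if forecasted >= limit:
        return f"ALERT: forecasted ${forecasted:.2f} crossed {threshold_pct}% of budget"
    return None

# Forecasted spend of $900 crosses the $800 threshold on a $1,000 budget,
# so an alert fires even though actual spend is still under the limit.
print(budget_alert(budget=1000, actual=450, forecasted=900))
```

The forecasted-spend case is what makes AWS Budgets more useful than a simple end-of-month bill review: you are warned before the overage happens.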


Top

Q7: An Edge Location is a specialized AWS data center that works with which service?

  • A. Lambda
  • B. CloudWatch
  • C. CloudFront
  • D. Route 53

Answer:

C.
Lambda@Edge lets you run Lambda functions to customize the content that CloudFront delivers, executing the functions in AWS locations closer to the viewer.
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you’re serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

CloudFront speeds up the distribution of your content by routing each user request through the AWS backbone network to the edge location that can best serve your content. Typically, this is a CloudFront edge server that provides the fastest delivery to the viewer. Using the AWS network dramatically reduces the number of networks that your users’ requests must pass through, which improves performance. Users get lower latency—the time it takes to load the first byte of the file—and higher data transfer rates.

You also get increased reliability and availability because copies of your files (also known as objects) are now held (or cached) in multiple edge locations around the world.

Reference:
AWS Edge Locations


Top

Q8: What is the preferred method of linking 2 AWS accounts?

  • A. AWS Organizations
  • B. Cost Explorer
  • C. VPC Peering
  • D. Consolidated billing

Answer:

A.
AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. AWS Organizations includes account management and consolidated billing capabilities that enable you to better meet the budgetary, security, and compliance needs of your business.

Reference:
AWS Organizations.


Top

Q9: Which of the following services is most useful when a disaster recovery method is triggered in AWS?

  • A. Amazon Route 53
  • B. Amazon SNS
  • C. Amazon SQS
  • D. Amazon Inspector

Answer:

Answer: A.
Route 53 is a Domain Name System (DNS) service from AWS. When a disaster does occur, it is easy to switch to secondary sites using Route 53.
Amazon Route 53 is a highly available and scalable cloud DNS web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.

Reference: AWS Route 53/

Top

Q10: Which of the following disaster recovery deployment mechanisms has the highest downtime?

  • A. Pilot light
  • B. Warm standby
  • C. Multi Site
  • D. Backup and Restore


Answer: D.
The snapshot below from the AWS documentation shows the spectrum of disaster recovery methods. Backup and Restore sits at the slow end of the spectrum, so it has the highest downtime; moving toward the other end (Multi Site) gives users the least downtime.

AWS Disaster Recovery Techniques

Reference: AWS Disaster Recovery
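
The spectrum in the snapshot can be captured as an ordered list, from slowest recovery to fastest. The ordering follows the AWS disaster-recovery whitepaper; the list itself is just an illustration:

```python
# The four DR strategies ordered from highest downtime (slowest recovery)
# to lowest downtime (fastest recovery).
DR_SPECTRUM = [
    "Backup and Restore",  # highest downtime: rebuild everything from backups
    "Pilot Light",         # minimal core kept running, scale up on failover
    "Warm Standby",        # scaled-down copy of the full stack always running
    "Multi Site",          # active-active in two sites: lowest downtime
]

highest_downtime = DR_SPECTRUM[0]
print(highest_downtime)  # Backup and Restore
```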

Top


Q11: Your company is planning to host resources in the AWS Cloud. They want to use services which can be used to decouple resources hosted on the cloud. Which of the following services can help fulfil this requirement?

  • A. AWS EBS Volumes
  • B. AWS EBS Snapshots
  • C. AWS Glacier
  • D. AWS SQS

Answer:


D. AWS SQS: Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.

Reference: AWS Simple Queue Service Developer Guide
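
"Decoupling" means the producer and the consumer never call each other directly; they only share a queue. The sketch below uses Python's in-process `queue.Queue` as a stand-in for SQS (real SQS is a hosted service reached over the network; the service names here are made up):

```python
import queue

# Local stand-in for an SQS queue: the only thing the two services share.
message_queue = queue.Queue()

def order_service(order_id):
    """Producer: publishes a message and returns immediately."""
    message_queue.put({"order_id": order_id})

def fulfillment_service():
    """Consumer: drains messages at its own pace, independently."""
    processed = []
    while not message_queue.empty():
        processed.append(message_queue.get()["order_id"])
    return processed

for oid in (1, 2, 3):
    order_service(oid)       # producer does not wait for the consumer
result = fulfillment_service()
print(result)                # [1, 2, 3]
```

Either side can be scaled, restarted, or slowed down without breaking the other, which is the property the question is testing for.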

Top

Q12: If you have a set of frequently accessed files that are used on a daily basis, what S3 storage class should you store them in?

  • A. Infrequent Access
  • B. Fast Access
  • C. Reduced Redundancy
  • D. Standard

Answer:


D. Standard: The Standard storage class should be used for files that you access on a daily or very frequent basis.

Reference: AWS storage-classes/


AWS Cloud Practitioner CCP CLF-C01 Certification Exam Prep

Q13: What is the availability and durability rating of S3 Standard Storage Class?

Choose the correct answer:

  • A. 99.999999999% Durability and 99.99% Availability
  • B. 99.999999999% Availability and 99.90% Durability
  • C. 99.999999999% Durability and 99.00% Availability
  • D. 99.999999999% Availability and 99.99% Durability

Answer:


A. 99.999999999% Durability and 99.99% Availability
S3 Standard Storage class has a rating of 99.999999999% durability (referred to as 11 nines) and 99.99% availability.

Reference: AWS storage classes/
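
What "11 nines" of durability means in practice can be computed directly. AWS's own framing is that if you store 10,000,000 objects, you can on average expect to lose a single object once every 10,000 years:

```python
# Expected object loss at 99.999999999% (11 nines) annual durability.
durability = 0.99999999999
annual_loss_rate = 1 - durability      # ~1e-11 per object per year

objects_stored = 10_000_000
expected_losses_per_year = objects_stored * annual_loss_rate
years_per_single_loss = 1 / expected_losses_per_year

print(f"{expected_losses_per_year:.4f} objects lost per year")  # 0.0001
print(f"one loss every {years_per_single_loss:,.0f} years")     # 10,000
```

Availability (99.99%) is a separate figure: it measures how often the object is reachable, not whether it still exists.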

Top

Q14: What AWS database is primarily used to analyze data using standard SQL formatting, with compatibility with your existing business intelligence tools?

  • A. Redshift
  • B. RDS
  • C. DynamoDB
  • D. ElastiCache

Answer:


A. Redshift is a database offering that is fully-managed and used for data warehousing and analytics, including compatibility with existing business intelligence tools.

Reference: AWS redshift/

Top

Q15: What are the benefits of DynamoDB?

Choose the 3 correct answers:

  • A. Single-digit millisecond latency.
  • B. Supports multiple known NoSQL database engines like MariaDB and Oracle NoSQL.
  • C. Supports both document and key-value store data models.
  • D. Automatic scaling of throughput capacity.

Answer:


A, C, and D. DynamoDB does not use or support other NoSQL database engines; you only have access to DynamoDB's built-in engine.

Reference: AWS DynamoDB

Top



Q16: Which of the following are the benefits of AWS Organizations?

Choose the 2 correct answers:

  • A. Analyze cost before migrating to AWS.
  • B. Centrally manage access policies across multiple AWS accounts.
  • C. Automate AWS account creation and management.
  • D. Provide technical help (by AWS) for issues in your AWS account.

Answer:


B and C. AWS Organizations lets you centrally manage policies across multiple AWS accounts, automate AWS account creation and management, control access to AWS services, and consolidate billing across multiple AWS accounts.

Reference: AWS organizations/

Q17: There is a requirement to host a set of servers in the cloud for a short period of 3 months. Which of the following types of instances should be chosen to be cost-effective?

  • A. Spot Instances
  • B. On-Demand
  • C. No Upfront costs Reserved
  • D. Partial Upfront costs Reserved

Answer:


B. Since the requirement is just for 3 months, the most cost-effective option is to use On-Demand instances.

Reference: AWS pricing on-demand/
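To see why, here is a back-of-the-envelope comparison. The prices below are made up for illustration only (real rates vary by instance type and region):

```python
# Hypothetical prices for a single Linux instance (illustrative only):
ON_DEMAND_HOURLY = 0.10        # pay only while the instance runs
RESERVED_1YR_TOTAL = 500.00    # total commitment for a 1-year reservation

HOURS_PER_MONTH = 730

def on_demand_cost(months, hourly=ON_DEMAND_HOURLY):
    """Pay-as-you-go: you pay only for the months you actually run."""
    return months * HOURS_PER_MONTH * hourly

# For a 3-month workload, On-Demand beats committing to a full year:
three_month_on_demand = on_demand_cost(3)   # about 219
print(three_month_on_demand < RESERVED_1YR_TOTAL)
```

With a reservation you would pay for twelve months of capacity while only needing three, so On-Demand wins despite its higher hourly rate.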


Q18: Which of the following is not a disaster recovery deployment technique?

  • A. Pilot light
  • B. Warm standby
  • C. Single Site
  • D. Multi-Site

Answer:

C. Single Site is not a disaster recovery deployment technique. The AWS disaster recovery scenarios are backup and restore, pilot light, warm standby, and multi-site.

The following figure shows a spectrum for the four scenarios, arranged by how quickly a system can be available to users after a DR event.

AWS Disaster Recovery Techniques

Reference: Disaster Recovery



Q19: Which of the following are attributes that determine the cost of using the Simple Storage Service? Choose 2 answers from the options given below

  • A. The storage class used for the objects stored.
  • B. Number of S3 buckets.
  • C. The total size in gigabytes of all objects stored.
  • D. Using encryption in S3

Answer:


A. and C.

Below is a snapshot of the costing calculator for AWS S3.

AWS Certified Cloud Practitioner Exam: S3 storage cost estimator
Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers.

Reference: Calculator ; S3 storage classes

Q20: What endpoints are possible to send messages to with Simple Notification Service?

Choose the 3 correct answers:

  • A. SQS
  • B. SMS
  • C. FTP
  • D. Lambda

Answer:

A. B. and D. SNS can deliver notifications to Amazon SQS queues, SMS text messages, AWS Lambda functions, HTTP/S endpoints, and email, but not to FTP servers.
Reference: Using Amazon SNS for System-to-System Messaging with an HTTP/S Endpoint as a Subscriber
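Conceptually, SNS fans one published message out to every subscribed endpoint. This toy in-memory sketch (not the real boto3 API) illustrates the pattern:

```python
# Minimal in-memory sketch of SNS fan-out: one published message is
# delivered to every subscribed endpoint, regardless of protocol.
class Topic:
    def __init__(self):
        self.subscribers = []            # (protocol, handler) pairs

    def subscribe(self, protocol, handler):
        self.subscribers.append((protocol, handler))

    def publish(self, message):
        for protocol, handler in self.subscribers:
            handler(message)             # SNS pushes to each endpoint

received = {}
topic = Topic()
# SQS queues, SMS numbers, and Lambda functions are all valid SNS endpoints:
topic.subscribe("sqs",    lambda m: received.setdefault("sqs", m))
topic.subscribe("sms",    lambda m: received.setdefault("sms", m))
topic.subscribe("lambda", lambda m: received.setdefault("lambda", m))

topic.publish("order-created")
print(received)   # every endpoint received a copy of the same message
```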


Q21: What service helps you to aggregate logs from your EC2 instance? Choose one answer from the options below:

  • A. SQS
  • B. S3
  • C. Cloudtrail
  • D. Cloudwatch Logs

Answer:


D.

You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Logs.

Reference: AWS CloudWatch Logs


Q22: A company is deploying a new two-tier web application in AWS. The company wants to store their most frequently used data so that the response time for the application is improved. Which AWS service provides the solution for the company’s requirements?

  • A. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone
  • B. Amazon RDS for MySQL with Multi-AZ
  • C. Amazon ElastiCache
  • D. Amazon DynamoDB

Answer:


C.

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.

Reference: AWS elasticache/



Q23: You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 Instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost-effective way. Which of the following will meet your requirements?

  • A. Spot Instances
  • B. Reserved Instances
  • C. Dedicated Instances

On-Demand Instances

Answer:


A.

When you think of cost-effectiveness, the choice is between Spot and Reserved Instances. For a periodic processing job, the best option is Spot Instances, and since your application is designed to recover gracefully from Amazon EC2 instance failures, losing a Spot Instance is not an issue because your application can recover.

Reference: AWS EC2 spot instances



Q24: Which of the following features is associated with a Subnet in a VPC to protect against incoming traffic requests?

  • A. AWS Inspector
  • B. Subnet Groups
  • C. Security Groups
  • D. NACL

Answer:


D.

A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.

Reference: AWS VPC ACLs
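A small sketch of the NACL evaluation model: rules are numbered, checked in ascending rule-number order, and the first match decides the outcome, with an implicit default deny. (Security groups, by contrast, are stateful and contain only allow rules.)

```python
# Sketch of how a network ACL evaluates inbound rules: rules are checked
# in ascending rule-number order and the FIRST match decides allow/deny.
def evaluate_nacl(rules, port):
    for number, rule_port, action in sorted(rules):
        if rule_port == port or rule_port == "*":
            return action
    return "DENY"                  # implicit default deny if nothing matches

rules = [
    (100, 80,  "ALLOW"),   # allow HTTP
    (200, 22,  "DENY"),    # block SSH
    (300, "*", "ALLOW"),   # allow everything else
]
http_action  = evaluate_nacl(rules, 80)    # ALLOW
ssh_action   = evaluate_nacl(rules, 22)    # DENY (rule 200 wins over rule 300)
https_action = evaluate_nacl(rules, 443)   # ALLOW (catch-all rule 300)
print(http_action, ssh_action, https_action)
```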



Q25: A company is deploying a two-tier, highly available web application to AWS. Which service provides durable storage for static content while utilizing lower overall CPU resources for the web tier?

  • A. Amazon EBS volume
  • B. Amazon S3
  • C. Amazon EC2 instance store
  • D. Amazon RDS instance

Answer:


B. Amazon S3 is the default storage service that should be considered for companies. It provides durable storage for all static content.

Reference: S3 faqs


Top

Q26: What are characteristics of Amazon S3?
Choose 2 answers from the options given below.

  • A. S3 allows you to store objects of virtually unlimited size.
  • B. S3 allows you to store unlimited amounts of data.
  • C. S3 should be used to host relational database.
  • D. Objects are directly accessible via a URL.

Answer:


B. and D.

Each object does have a size limit in S3 (5 TB), but you can store virtually unlimited amounts of data. Also, each object gets a directly accessible URL.

Reference: AWS s3 faqs
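For example, the modern virtual-hosted-style URL for an object follows a predictable pattern (the bucket and key below are hypothetical):

```python
# Every S3 object is addressable at a predictable URL. The virtual-hosted
# style is https://<bucket>.s3.<region>.amazonaws.com/<key>.
from urllib.parse import quote

def s3_object_url(bucket, region, key):
    # quote() percent-encodes characters that are unsafe in a URL path
    return f"https://{bucket}.s3.{region}.amazonaws.com/{quote(key)}"

url = s3_object_url("my-photos", "us-east-1", "2019/vacation.jpg")
print(url)  # https://my-photos.s3.us-east-1.amazonaws.com/2019/vacation.jpg
```

Whether the URL is publicly readable still depends on the bucket's permissions, but the address itself always exists.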


Q26: When working out the cost of on-demand EC2 instances, which of the following are attributes that determine the cost of the EC2 instance? Choose 3 answers from the options given below

  • A. Instance Type
  • B. AMI Type
  • C. Region
  • D. Edge location

Answer:


A. B. and C.

See components making up the pricing below.

AWS AMI Pricing

Reference: AWS ec2 pricing on-demand/


Q27: You have a mission-critical application which must be globally available at all times. If this is the case, which of the below deployment mechanisms would you employ?

  • A. Deployment to multiple edge locations
  • B. Deployment to multiple Availability Zones
  • C. Deployment to multiple Data Centers
  • D. Deployment to multiple Regions

Answer:


D.

Regions represent different geographic locations and it is best to host your application across multiple regions for disaster recovery.

Reference: AWS regions availability zones


Q28: Which of the following are correct principles when designing cloud-based systems? Choose 2 answers from the options below

  • A. Build Tightly-coupled components
  • B. Build loosely-coupled components
  • C. Assume everything will fail
  • D. Use as many services as possible

Answer:


B. and C.

Always build components which are loosely coupled. This is so that even if one component does fail, the entire system does not fail. Also if you build with the assumption that everything will fail, then you will ensure that the right measures are taken to build a highly available and fault tolerant system.

Reference: AWS Well architected networks


Q29: You have 2 accounts under consolidated billing: one for Dev and the other for QA. The master account has purchased 3 reserved instances. The Dev department is currently using 2 reserved instances. The QA team is planning on using 3 instances of the same instance type. What is the pricing tier of the instances that can be used by the QA team?

  • A. No Reserved and 3 on-demand
  • B. One Reserved and 2 on-demand
  • C. Two Reserved and 1 on-demand
  • D. Three Reserved and no on-demand

Answer:


B.

Since all accounts are part of consolidated billing, the reserved instance pricing can be shared by all of them. Since 2 reserved instances are already used by the Dev team, one more can be used by the QA team. The remaining instances will be billed as on-demand instances.

Reference: AWS ec2 pricing reserved instances/
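The allocation can be sketched as a first-come-first-served draw against the shared RI pool (a simplified model of how the discount is applied account-wide):

```python
# Sketch of how a Reserved Instance discount is shared under consolidated
# billing: the RI rate applies to matching running instances across all
# linked accounts until the pool is exhausted.
def allocate(total_reserved, usage_by_team):
    """Return {team: (reserved, on_demand)} given the RI pool and usage."""
    remaining = total_reserved
    allocation = {}
    for team, count in usage_by_team:
        reserved = min(count, remaining)   # draw from the shared pool
        remaining -= reserved
        allocation[team] = (reserved, count - reserved)
    return allocation

# 3 RIs purchased; Dev already runs 2 instances, QA launches 3 more:
result = allocate(3, [("dev", 2), ("qa", 3)])
print(result)   # QA ends up with 1 reserved-rate instance and 2 on-demand
```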


Q30: Which one of the following features is normally present in all AWS Support plans?

  • A. 24/7 access to Customer Service
  • B. Access to all features in the Trusted Advisor
  • C. A technical Account Manager
  • D. A dedicated support person

Answer:


A.

AWS Support plans

Reference: AWS premium support compare plans


Q31: Which of the following storage mechanisms can be used to effectively store messages passed between distributed systems?

  • A. Amazon Glacier
  • B. Amazon EBS Volumes
  • C. Amazon EBS Snapshots
  • D. Amazon SQS

Answer:


D.

Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.

Reference: AWS Simple Queue Service
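The decoupling idea can be sketched with an in-memory queue (illustrative only, not the boto3 SQS API): the producer and consumer never call each other directly, so either side can fail or scale independently.

```python
# Minimal decoupling sketch in the spirit of SQS: a producer enqueues
# messages, and a consumer polls them later; the two components never
# invoke each other directly.
from queue import Queue

message_queue = Queue()

def producer(order_id):
    message_queue.put({"order_id": order_id})   # fire and forget

def consumer():
    processed = []
    while not message_queue.empty():
        processed.append(message_queue.get()["order_id"])
    return processed

producer(1)
producer(2)
drained = consumer()
print(drained)   # messages consumed later, in order, without coupling
```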


Q32: You are exploring what services AWS offers. You have a large number of data sets that need to be processed. Which of the following services can help fulfil this requirement?

  • A. EMR
  • B. S3
  • C. Glacier
  • D. Storage Gateway

Answer:


A. Amazon EMR helps you analyze and process vast amounts of data by distributing the computational work across a cluster of virtual servers running in the AWS Cloud. The cluster is managed using an open-source framework called Hadoop. Amazon EMR lets you focus on crunching or analyzing your data without having to worry about time-consuming setup, management, and tuning of Hadoop clusters or the compute capacity they rely on.

Reference: AWS Emr


Q33: Which of the following services allows you to analyze EC2 Instances against pre-defined security templates to check for vulnerabilities?

  • A. AWS Trusted Advisor
  • B. AWS Inspector
  • C. AWS WAF
  • D. AWS Shield

Answer:


B.

Amazon Inspector enables you to analyze the behavior of your AWS resources and helps you to identify potential security issues. Using Amazon Inspector, you can define a collection of AWS resources that you want to include in an assessment target. You can then create an assessment template and launch a security assessment run of this target.

Reference: AWS inspector introduction



Q34: Your company is planning to offload some of its batch processing workloads to AWS. These jobs can be interrupted and resumed at any time. Which of the following instance types would be the most cost-effective to use for this purpose?

  • A. On-Demand
  • B. Spot
  • C. Full Upfront Reserved
  • D. Partial Upfront Reserved

Answer:


B. Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks

Reference: AWS Spot Instances


Q35: Which of the following is not a category recommendation given by the AWS Trusted Advisor?

  • A. Security
  • B. High Availability
  • C. Performance
  • D. Fault tolerance

Answer:


B.

AWS Trusted advisor

Reference: AWS Trusted Advisor


Q36: Which of the below cannot be used to get data onto Amazon Glacier?

  • A. AWS Glacier API
  • B. AWS Console
  • C. AWS Glacier SDK
  • D. AWS S3 Lifecycle policies

Answer:


B.

Note that the AWS Console cannot be used to upload data onto Glacier. The console can only be used to create a Glacier vault which can be used to upload the data.

Reference: Uploading an archive in AWS


Q37: Which of the following from AWS can be used to transfer petabytes of data from on-premise locations to the AWS Cloud?

  • A. AWS Import/Export
  • B. AWS EC2
  • C. AWS Snowball
  • D. AWS Transfer

Answer:


C.

Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers, including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, secure, and can be as little as one-fifth the cost of high-speed Internet.

Reference: AWS snowball


Q38: Which of the following services allows you to analyze EC2 Instances against pre-defined security templates to check for vulnerabilities?

  • A. AWS Trusted Advisor
  • B. AWS Inspector
  • C. AWS WAF
  • D. AWS Shield

Answer:


B.

Amazon Inspector enables you to analyze the behavior of your AWS resources and helps you to identify potential security issues. Using Amazon Inspector, you can define a collection of AWS resources that you want to include in an assessment target. You can then create an assessment template and launch a security assessment run of this target.

Reference: AWS Inspector



Q39: Your company wants to move an existing Oracle database to the AWS Cloud. Which of the following services can help facilitate this move?

  • A. AWS Database Migration Service
  • B. AWS VM Migration Service
  • C. AWS Inspector
  • D. AWS Trusted Advisor

Answer:


A.

AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open source databases.

Reference: AWS dms



Q40: Which of the following features of AWS RDS allows for offloading reads from the database?

  • A. Cross region replication
  • B. Creating Read Replicas
  • C. Using snapshots
  • D. Using Multi-AZ feature

Answer:


B.

You can reduce the load on your source DB Instance by routing read queries from your applications to the read replica. Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.

Reference: AWS read replicas
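A minimal sketch of the routing idea, with hypothetical endpoint names: the application sends writes to the primary and spreads reads across the replicas.

```python
# Sketch of read/write routing with read replicas: writes go to the
# primary endpoint, reads are round-robined across replica endpoints
# to offload the primary.
import itertools

PRIMARY = "mydb.primary.example.com"            # hypothetical endpoints
REPLICAS = ["mydb.replica-1.example.com",
            "mydb.replica-2.example.com"]
_replica_cycle = itertools.cycle(REPLICAS)

def endpoint_for(statement):
    if statement.lstrip().upper().startswith("SELECT"):
        return next(_replica_cycle)             # read -> next replica
    return PRIMARY                              # write -> primary

read_1 = endpoint_for("SELECT * FROM users")    # replica-1
write_1 = endpoint_for("INSERT INTO users ...") # primary
read_2 = endpoint_for("SELECT 1")               # replica-2
print(read_1, write_1, read_2)
```

Real drivers and proxies implement the same split more robustly, but the principle is identical: read-heavy traffic scales out, write traffic stays on the source DB instance.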



Q41: Which of the following does AWS perform on your behalf for EBS volumes to make them less prone to failure?

  • A. Replication of the volume across Availability Zones
  • B. Replication of the volume in the same Availability Zone
  • C. Replication of the volume across Regions
  • D. Replication of the volume across Edge locations

Answer:


B.

When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to failure of any single hardware component

Reference: AWS EBS Volumes


Q42: Your company is planning to host a large e-commerce application on the AWS Cloud. One of their major concerns is Internet attacks such as DDoS attacks.

Which of the following services can help mitigate this concern. Choose 2 answers from the options given below

  • A. CloudFront
  • B. AWS Shield
  • C. AWS EC2
  • D. AWS Config

Answer:


A. and B.

One of the first techniques to mitigate DDoS attacks is to minimize the surface area that can be attacked, thereby limiting the options for attackers and allowing you to build protections in a single place. We want to ensure that we do not expose our application or resources to ports, protocols, or applications from where they do not expect any communication, thus minimizing the possible points of attack and letting us concentrate our mitigation efforts. In some cases, you can do this by placing your computation resources behind Content Distribution Networks (CDNs) and Load Balancers, and restricting direct Internet traffic to certain parts of your infrastructure like your database servers. In other cases, you can use firewalls or Access Control Lists (ACLs) to control what traffic reaches your applications.

Reference: ddos attack protection/


Q43: Which of the following are 2 ways that AWS allows to link accounts

  • A. Consolidating billing
  • B. AWS Organizations
  • C. Cost Explorer
  • D. IAM

Answer:


A. and B.

You can use the consolidated billing feature in AWS Organizations to consolidate payment for multiple AWS accounts or multiple AISPL accounts. With consolidated billing, you can see a combined view of AWS charges incurred by all of your accounts. You also can get a cost report for each member account that is associated with your master account. Consolidated billing is offered at no additional charge.

Reference: AWS Consolidated billing



Q44: Which of the following help with DDoS protection? Choose 2 answers from the options given below

  • A. Cloudfront
  • B. AWS Shield
  • C. AWS EC2
  • D. AWS Config

Answer:


A. and B.

One of the first techniques to mitigate DDoS attacks is to minimize the surface area that can be attacked, thereby limiting the options for attackers and allowing you to build protections in a single place. We want to ensure that we do not expose our application or resources to ports, protocols, or applications from where they do not expect any communication, thus minimizing the possible points of attack and letting us concentrate our mitigation efforts. In some cases, you can do this by placing your computation resources behind Content Distribution Networks (CDNs) and Load Balancers, and restricting direct Internet traffic to certain parts of your infrastructure like your database servers. In other cases, you can use firewalls or Access Control Lists (ACLs) to control what traffic reaches your applications.

Reference: AWS shield – ddos attack protection/


Q45: Which of the following can be used to call AWS services from programming languages?

  • A. AWS SDK
  • B. AWS Console
  • C. AWS CLI
  • D. AWS IAM

Answer:

A.
AWS SDKs are available for various programming languages. Using an SDK, you can then call the required AWS services from your code.

Reference: AWS tools

Q46: A company wants to host a self-managed database in AWS. How would you ideally implement this solution?

  • A. Using the AWS DynamoDB service
  • B. Using the AWS RDS service
  • C. Hosting a database on an EC2 Instance
  • D. Using the Amazon Aurora service

Answer:


C.

If you want a self-managed database, that means you want complete control over the database engine and the underlying infrastructure. In such a case you need to host the database on an EC2 Instance

Reference: AWS ec2


Q47: When creating security groups, which of the following are responsibilities of the customer? Choose 2 answers from the options given below.

  • A. Giving a name and description for the security group
  • B. Defining the rules as per the customer requirements.
  • C. Ensure the rules are applied immediately
  • D. Ensure the security groups are linked to the Elastic Network interface

Answer:


A. and B.

When you define security rules for EC2 Instances, you give the security group a name and description, and you write the rules it contains.

Reference: AWS using Network Security Groups


Q48: There is a requirement to host a database server for a minimum period of one year. Which of the following would result in the least cost?

  • A. Spot Instances
  • B. On-Demand
  • C. No Upfront costs Reserved
  • D. Partial Upfront costs Reserved

Answer:


D.

If the database is going to be used for a minimum of one year, then it is better to get Reserved Instances. You can save on costs, and if you use the partial upfront option, you get a better discount.

Reference: AWS Reserved Instances
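A rough comparison with made-up prices (real rates differ by instance type and region) shows why partial upfront usually wins over a full year:

```python
# Illustrative 1-year cost comparison (hypothetical prices, not real AWS
# rates). Upfront payments buy a lower effective hourly rate.
HOURS_PER_YEAR = 8760

costs = {
    "on_demand":                0.10 * HOURS_PER_YEAR,        # no commitment
    "no_upfront_reserved":      0.062 * HOURS_PER_YEAR,       # billed monthly
    "partial_upfront_reserved": 260 + 0.030 * HOURS_PER_YEAR, # fee + lower rate
}
cheapest = min(costs, key=costs.get)
print(cheapest)   # the partial upfront reservation is the lowest total
```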


Q49: Which of the below can be used to import data into Amazon Glacier?
Choose 3 answers from the options given below:

  • A. AWS Glacier API
  • B. AWS Console
  • C. AWS Glacier SDK
  • D. AWS S3 Lifecycle policies

Answer:


A. C. and D.

The AWS Console cannot be used to upload data onto Glacier. The console can only be used to create a Glacier vault which can be used to upload the data.

Reference: Uploading an archive in AWS


Q50: Which of the following can be used to secure EC2 Instances hosted in AWS? Choose 2 answers

  • A. Usage of Security Groups
  • B. Usage of AMIs
  • C. Usage of Network Access Control Lists
  • D. Usage of the Internet gateway

Answer:


A. and C.

Security groups act as a virtual firewall for your instance to control inbound and outbound traffic. A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets.

Reference: VPC Security Groups and Network Access Control List


Q51: Which of the following can be used to host virtual servers on AWS?

  • A. AWS IAM
  • B. AWS Server
  • C. AWS EC2
  • D. AWS Regions

Answer:


C.

AWS EC2

Reference: AWS ec2


Q52: You plan to deploy an application on AWS. This application needs to be PCI Compliant. Which of the below steps are needed to ensure the compliance? Choose 2 answers from the below list:

  • A. Choose AWS services which are PCI Compliant
  • B. Ensure the right steps are taken during application development for PCI Compliance
  • C. Ensure the AWS services are made PCI Compliant
  • D. Do an audit after the deployment of the application for PCI Compliance.

Answer:


A. and B. Choose AWS services that are already PCI compliant, and ensure your own application is developed in a compliant way. Under the shared responsibility model, the compliance of the underlying services is AWS's responsibility; customers cannot make the services themselves compliant.

Q53: Which tool can you use to forecast your AWS spending?

  • A. AWS organizations
  • B. Amazon Dev pay
  • C. AWS Trusted Advisor
  • D. AWS Cost explorer

Answer:


D.

AWS Cost Explorer lets you dive deeper into your cost and usage data to identify trends, pinpoint cost drivers, and detect anomalies.

Reference: AWS Cost Explorer Docs

Q54: The Trusted Advisor service provides insight regarding which four categories of an AWS account?

  • A. Security, fault tolerance, high availability, performance and Service Limits
  • B. Security, access control, high availability, performance and Service Limits
  • C. Performance, cost optimization, Security, fault tolerance and Service Limits
  • D. Performance, cost optimization, Access Control, Connectivity, and Service Limits

Answer:


C. Performance, cost optimization, Security, fault tolerance and Service Limits

Reference: AWS trusted advisor



Q55: As per the AWS Acceptable Use Policy, penetration testing of EC2 instances:

  • A. May be performed by AWS, and will be performed by AWS upon customer request
  • B. May be performed by AWS, and is periodically performed by AWS
  • C. Is expressly prohibited under all circumstances
  • D. May be performed by the customer on their own instances with prior authorization from AWS
  • E. May be performed by the customer on their own instances, only if performed from EC2 instances

Answer:


D. You need to take authorization from AWS before doing a penetration test on EC2 instances.

Reference: AWS pen testing



Q56: What is the AWS feature that enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket?

  • A. File Transfer
  • B. HTTP Transfer
  • C. Transfer Acceleration
  • D. S3 Acceleration

Answer:


C. Transfer Acceleration

Reference: AWS transfer acceleration examples
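Enabling acceleration on a bucket gives it a distinct accelerate endpoint that clients use instead of the regular regional endpoint, so requests enter AWS at the nearest edge location:

```python
# With Transfer Acceleration enabled, clients address the bucket through
# its accelerate endpoint rather than the regular regional endpoint.
def accelerate_endpoint(bucket):
    return f"https://{bucket}.s3-accelerate.amazonaws.com"

def regional_endpoint(bucket, region):
    return f"https://{bucket}.s3.{region}.amazonaws.com"

fast = accelerate_endpoint("my-bucket")            # hypothetical bucket name
slow = regional_endpoint("my-bucket", "us-east-1")
print(fast)
print(slow)
```

Only the endpoint changes; bucket contents and permissions are unaffected.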



Q56: What best describes an AWS region?

Choose the correct answer:

  • A. The physical networking connections between Availability Zones.
  • B. A specific location where an AWS data center is located.
  • C. A collection of DNS servers.
  • D. An isolated collection of AWS Availability Zones, of which there are many placed all around the world.

Answer:


D: An AWS region is an isolated geographical area that is composed of multiple AWS Availability Zones.

Reference: Concepts: Regions and Availability Zones



Q57: Which of the following is a factor when calculating Total Cost of Ownership (TCO) for the AWS Cloud?

  • A. The number of servers migrated to AWS
  • B. The number of users migrated to AWS
  • C. The number of passwords migrated to AWS
  • D. The number of keys migrated to AWS

Answer:

A. Running servers will incur costs. The number of running servers is one factor of Server Costs; a key component of AWS’s Total Cost of Ownership (TCO). Reference: AWS cost calculator


Q58: Which AWS Services can be used to store files? Choose 2 answers from the options given below:

  • A. Amazon CloudWatch
  • B. Amazon Simple Storage Service (Amazon S3)
  • C. Amazon Elastic Block Store (Amazon EBS)
  • D. AWS Config
  • E. Amazon Athena

B. and C. Amazon S3 is object storage built to store and retrieve any amount of data from anywhere. Amazon Elastic Block Store provides persistent block storage for Amazon EC2.

Reference: AWS s3 and AWS EBS

Q59: What best describes Amazon Web Services (AWS)?

Choose the correct answer:

  • A. AWS is the cloud.
  • B. AWS only provides compute and storage services.
  • C. AWS is a cloud services provider.
  • D. None of the above.

Answer:


C: AWS is defined as a cloud services provider. They provide hundreds of services, of which compute and storage are two examples (but by no means the only ones).
Reference: AWS

Q60: Which AWS service can be used as a global content delivery network (CDN) service?

  • A. Amazon SES
  • B. Amazon CloudTrail
  • C. Amazon CloudFront
  • D. Amazon S3

Answer:

C: Amazon CloudFront is a web service that gives businesses and web application developers an easy and cost-effective way to distribute content with low latency and high data transfer speeds. Like other AWS services, Amazon CloudFront is a self-service, pay-per-use offering, requiring no long-term commitments or minimum fees. With CloudFront, your files are delivered to end-users using a global network of edge locations.

Reference: AWS CloudFront


Q61: What best describes the concept of fault tolerance?

Choose the correct answer:

  • A. The ability for a system to withstand a certain amount of failure and still remain functional.
  • B. The ability for a system to grow in size, capacity, and/or scope.
  • C. The ability for a system to be accessible when you attempt to access it.
  • D. The ability for a system to grow and shrink based on demand.

Answer:


A: Fault tolerance describes the ability of a system (in our case a web application) to experience failure in some of its components and still remain accessible (highly available). Fault-tolerant web applications will have at least two web servers (in case one fails).

Reference: Designing fault tolerant applications


Q62: The firm you work for is considering migrating to AWS. They are concerned about cost and the initial investment needed. Which of the following features of AWS pricing helps lower the initial investment amount needed?

Choose 2 answers from the options given below:

  • A. The ability to choose the lowest cost vendor.
  • B. The ability to pay as you go
  • C. No upfront costs
  • D. Discounts for upfront payments

Answer:
B and C: The best features of moving to the AWS Cloud are no upfront costs and the ability to pay as you go, where the customer only pays for the resources needed. Reference: AWS pricing


Q63: What best describes the concept of elasticity?

Choose the correct answer:

  • A. The ability for a system to grow in size, capacity, and/or scope.
  • B. The ability for a system to grow and shrink based on demand.
  • C. The ability for a system to withstand a certain amount of failure and still remain functional.
  • D. The ability for a system to be accessible when you attempt to access it.

Answer:


B:

Elasticity (think of a rubber band) defines a system that can easily (and cost-effectively) grow and shrink based on required demand.

Reference: Cost optimization automating elasticity
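The grow-and-shrink behaviour can be sketched as a capacity function of demand, the way an Auto Scaling policy adds and removes instances:

```python
# Toy sketch of elasticity: capacity follows demand up AND down, instead
# of being fixed at peak size.
def desired_capacity(load, per_instance_capacity=100, minimum=1):
    # enough instances to serve the load, never dropping below the minimum
    return max(minimum, -(-load // per_instance_capacity))  # ceiling division

quiet = desired_capacity(50)    # light traffic -> 1 instance
spike = desired_capacity(950)   # traffic spike -> scale out to 10
idle  = desired_capacity(0)     # shrink back down, but keep the minimum
print(quiet, spike, idle)
```

Scalability (the next question, Q69) is the growing half of this picture; elasticity adds the cost-saving shrink on the way back down.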

Q64: Your company has started using AWS. Your IT Security team is concerned with the security of hosting resources in the Cloud. Which AWS service provides security optimization recommendations that could help the IT Security team secure resources using AWS?

  • A. AWS API Gateway
  • B. Reserved Instances
  • C. AWS Trusted Advisor
  • D. AWS Spot Instances

Answer:

C:

An online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment, Trusted Advisor provides real time guidance to help you provision your resources following AWS best practices. Reference: AWS trusted advisor

Q65: What is the relationship between AWS global infrastructure and the concept of high availability?

Choose the correct answer:

  • A. AWS is centrally located in one location and is subject to widespread outages if something happens at that one location.
  • B. AWS regions and Availability Zones allow for redundant architecture to be placed in isolated parts of the world.
  • C. Each AWS region handles different AWS services, and you must use all regions to fully use AWS.
  • D. None of the above

Answer


B.

As an AWS user, you can create your application's infrastructure and duplicate it. By placing duplicate infrastructure in multiple regions, high availability is created, because if one region fails you have a backup (in another region) to use.

Reference: RDS Concepts: Multi-AZ

Q66: You are hosting a number of EC2 Instances on AWS. You are looking to monitor CPU Utilization on the Instance. Which service would you use to collect and track performance metrics for AWS services?

  • A. Amazon CloudFront
  • B. Amazon CloudSearch
  • C. Amazon CloudWatch
  • D. AWS Managed Services


Answer:

C: Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Reference: AWS CloudWatch

Q67: Which of the following support plans give access to all the checks in the Trusted Advisor service?

Choose 2 answers from the options given below:

  • A. Basic
  • B. Business
  • C. Enterprise
  • D. None

Answer:
B and C: Reference: AWS Premium Support compare plans

Q68: Which of the following in AWS maps to a separate geographic location?

A. AWS Region
B. AWS Data Centers
C. AWS Availability Zone

Answer:


A: Amazon cloud computing resources are hosted in multiple locations world-wide. These locations are composed of AWS Regions and Availability Zones. Each AWS Region is a separate geographic area. Reference: AWS Regions and Availability Zones


Q69: What best describes the concept of scalability?

Choose the correct answer:

  • A. The ability for a system to grow and shrink based on demand.
  • B. The ability for a system to grow in size, capacity, and/or scope.
  • C. The ability for a system to be accessible when you attempt to access it.
  • D. The ability for a system to withstand a certain amount of failure and still remain functional.

Answer

B: Scalability refers to the concept of a system being able to easily (and cost-effectively) scale up. For web applications, this means the ability to easily add server capacity when demand requires.

Reference: AWS Auto Scaling

Q70: If you wanted to monitor all events in your AWS account, which of the below services would you use?

  • A. AWS CloudWatch
  • B. AWS CloudWatch logs
  • C. AWS Config
  • D. AWS CloudTrail

Answer:

Answer: D. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. Reference: AWS CloudTrail


Q71: What are the four primary benefits of using the cloud/AWS?

Choose the correct answer:

  • A. Fault tolerance, scalability, elasticity, and high availability.
  • B. Elasticity, scalability, easy access, limited storage.
  • C. Fault tolerance, scalability, sometimes available, unlimited storage
  • D. Unlimited storage, limited compute capacity, fault tolerance, and high availability.

Answer:

Answer: A. Fault tolerance, scalability, elasticity, and high availability are the four primary benefits of AWS/the cloud.

Q72: What best describes a simplified definition of the “cloud”?

Choose the correct answer:

  • A. All the computers in your local home network.
  • B. Your internet service provider
  • C. A computer located somewhere else that you are utilizing in some capacity.
  • D. An on-premise data center that your company owns.

Answer


Answer: C. The simplest definition of the cloud is a computer that is located somewhere else that you are utilizing in some capacity. AWS is a cloud services provider, as they provide access to computers they own (located at AWS data centers) that you use for various purposes.


Q73: Your development team is planning to host a development environment on the cloud. This consists of EC2 and RDS instances. This environment will probably only be required for 2 months.

Which types of instances would you use for this purpose?

  • A. On-Demand
  • B. Spot
  • C. Reserved
  • D. Dedicated

Answer:

Answer: A. The most cost-effective option would be to use On-Demand Instances. The AWS documentation gives the following additional information on On-Demand EC2 Instances: with On-Demand instances, you only pay for the EC2 instances you use. The use of On-Demand instances frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs. Reference: AWS EC2 On-Demand pricing

Q74: Which of the following can be used to secure EC2 Instances?

  • A. Security Groups
  • B. EC2 Lists
  • C. AWS Configs
  • D. AWS CloudWatch


Answer:

Answer: A. A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don’t specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC. Reference: VPC Security Groups
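To make the allow-list behavior concrete, here is a minimal conceptual model of security group ingress matching (plain Python, not the AWS API; the rule fields and function name are invented for illustration — traffic is denied unless some rule explicitly allows it):

```python
import ipaddress

def ingress_allowed(rules, protocol, port, source_ip):
    """Return True if any ingress rule permits this traffic.
    Security groups are allow-lists: anything not matched is denied."""
    for rule in rules:
        if rule["protocol"] != protocol:
            continue
        if not (rule["from_port"] <= port <= rule["to_port"]):
            continue
        if ipaddress.ip_address(source_ip) in ipaddress.ip_network(rule["cidr"]):
            return True
    return False

# HTTPS open to the world, SSH only from the private 10.0.0.0/16 range.
web_sg = [
    {"protocol": "tcp", "from_port": 443, "to_port": 443, "cidr": "0.0.0.0/0"},
    {"protocol": "tcp", "from_port": 22, "to_port": 22, "cidr": "10.0.0.0/16"},
]
```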

Q75: What is the purpose of a DNS server?

Choose the correct answer:

  • A. To act as an internet search engine.
  • B. To protect you from hacking attacks.
  • C. To convert common language domain names to IP addresses.
  • D. To serve web application content.

Answer:


Answer: C.

Domain name system servers act as a “third party” that provides the service of converting common language domain names to IP addresses (which are required for a web browser to properly make a request for web content).
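A one-line Python call makes the conversion concrete: `socket.gethostbyname` asks the system resolver (which in turn queries DNS servers) for the IPv4 address behind a name.

```python
import socket

def resolve(hostname):
    """Return the IPv4 address the system resolver reports for a hostname."""
    return socket.gethostbyname(hostname)

# "localhost" resolves via the local hosts file, so no network lookup is needed.
ip = resolve("localhost")
```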


Q76: What best describes the concept of high availability?

Choose the correct answer:

  • A. The ability for a system to grow in size, capacity, and/or scope.
  • B. The ability for a system to withstand a certain amount of failure and still remain functional.
  • C. The ability for a system to grow and shrink based on demand.
  • D. The ability for a system to be accessible when you attempt to access it.

Answer:


Answer: D.

High availability refers to the concept that something will be accessible when you try to access it. An object or web application is “highly available” when it is accessible a vast majority of the time.




Q77: What is the major difference between AWS’s RDS and DynamoDB database services?

Choose the correct answer:

  • A. RDS offers NoSQL database options, and DynamoDB offers SQL database options.
  • B. RDS offers one SQL database option, and DynamoDB offers many NoSQL database options.
  • C. RDS offers SQL database options, and DynamoDB offers a NoSQL database option.
  • D. None of the above

Answer:


Answer: C.

RDS is a SQL database service (that offers several database engine options), and DynamoDB is a NoSQL database option that only offers one NoSQL engine.


Q78: What are two open source in-memory engines supported by ElastiCache?

Choose the 2 correct answers:

  • A. CacheIt
  • B. Aurora
  • C. MemcacheD
  • D. Redis

Answer:


Answer: C and D.

Redis and Memcached.

Reference: AWS ElastiCache


Q79: What AWS database service is used for data warehousing of petabytes of data?

Choose the correct answer:

  • A. RDS
  • B. Elasticache
  • C. Redshift
  • D. DynamoDB

Answer:


Answer: C.

Redshift is a fully-managed data warehouse that is perfect for storing petabytes worth of data.

Reference: AWS Redshift

Q80: Which AWS service uses a combination of publishers and subscribers?

Choose the correct answer:

  • A. Lambda
  • B. RDS
  • C. EC2
  • D. SNS

Answer:


Answer: D.

In SNS, there are two types of clients: publishers and subscribers. Publishers send the message, and subscribers receive the message.

Reference: AWS SNS
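The publisher/subscriber relationship can be sketched as a toy in-memory topic (a conceptual model only, not the SNS API): every message published to the topic is fanned out to all subscribed endpoints.

```python
class Topic:
    """Toy model of the SNS publish/subscribe pattern: publishers send a
    message to a topic; every subscriber endpoint receives a copy."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        for deliver in self.subscribers:
            deliver(message)

inbox_a, inbox_b = [], []
alerts = Topic()
alerts.subscribe(inbox_a.append)   # e.g. an email endpoint
alerts.subscribe(inbox_b.append)   # e.g. an SMS endpoint
alerts.publish("disk usage high")  # both subscribers receive a copy
```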

Q81: What SQL database engine options are available in RDS?

Choose the 3 correct answers:

  • A. MySQL
  • B. MongoDB
  • C. PostgreSQL
  • D. MariaDB

Answer:


Answer: A, C, and D.

RDS offers the following SQL engine options: Aurora, MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server.


Q81: What is the name of AWS’s RDS SQL database engine?

Choose the correct answer:

  • A. Lightsail
  • B. Aurora
  • C. MySQL
  • D. SNS

Answer:


Answer: B. AWS created its own custom SQL database engine, which is called Aurora.

Reference: AWS Aurora

Q82: Under what circumstances would you choose to use the AWS service CloudTrail?

Choose the correct answer:

  • A. When you want to log what actions various IAM users are taking in your AWS account.
  • B. When you want a serverless compute platform.
  • C. When you want to collect and view resource metrics.
  • D. When you want to send SMS notifications based on events that occur in your account.

Answer:


Answer: A. When you want to log what actions various IAM users are taking in your AWS account.

Reference: AWS CloudTrail

Q83: If you want to monitor the average CPU usage of your EC2 instances, which AWS service should you use?

Choose the correct answer:

  • A. CloudMonitor
  • B. CloudTrail
  • C. CloudWatch
  • D. None of the above

Answer:


Answer: C. CloudWatch is used to collect, view, and track metrics for resources (such as EC2 instances) in your AWS account.

Reference: AWS CloudWatch

Q84: What is AWS’s relational database service?

Choose the correct answer:

  • A. ElastiCache
  • B. DynamoDB
  • C. RDS
  • D. Redshift

Answer:


Answer: C.

RDS offers SQL database options – otherwise known as relational databases.

Reference: AWS RDS


Q85: If you want to have SMS or email notifications sent to various members of your department with status updates on resources in your AWS account, what service should you choose?

Choose the correct answer:

  • A. SNS
  • B. GetSMS
  • C. RDS
  • D. STS

Answer:


Answer: A. Simple Notification Service (SNS) is what publishes messages to SMS and/or email endpoints.

Reference: AWS SNS


Q86: Which AWS service can provide a Desktop as a Service (DaaS) solution?

A. EC2

B. AWS Systems Manager

C. Amazon WorkSpaces

D. Elastic Beanstalk

Answer: C.

Amazon WorkSpaces is a managed, secure Desktop-as-a-Service (DaaS) solution. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe.

Q87: Your company has recently migrated large amounts of data to AWS S3 buckets, and now needs to discover and protect the sensitive data in those buckets. Which AWS service can do that?

A. GuardDuty

B. Amazon Macie

C. CloudTrail

D. AWS Inspector

Answer: B

Notes: Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.

Q88: Your Finance Department has instructed you to save costs wherever possible when using the AWS Cloud. You notice that using Reserved EC2 Instances on a 1-year contract will save money. What payment method will save the most money?

A: Deferred

B: Partial Upfront

C: All Upfront

D: No Upfront

Answer: C

Notes: With the All Upfront option, you pay for the entire Reserved Instance term with one upfront payment. This option provides you with the largest discount compared to On-Demand Instance pricing.
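The arithmetic behind the answer can be sketched with hypothetical prices (the upfront amounts and hourly rates below are made up — actual discounts vary by instance type and Region — but the ordering, All Upfront cheapest and No Upfront most expensive, is the point):

```python
# Hypothetical 1-year Reserved Instance pricing for one instance type.
HOURS_PER_YEAR = 8760

def total_cost(upfront, hourly_rate):
    """Total 1-year cost: one upfront payment plus the ongoing hourly charge."""
    return upfront + hourly_rate * HOURS_PER_YEAR

all_upfront     = total_cost(500.0, 0.0)     # one payment, no hourly charge
partial_upfront = total_cost(260.0, 0.030)   # smaller payment, reduced hourly rate
no_upfront      = total_cost(0.0,   0.063)   # nothing upfront, highest hourly rate
```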

Q89: A fantasy sports company needs to run an application for the length of a football season (5 months). They will run the application on an EC2 instance and there can be no interruption. Which purchasing option best suits this use case?

A. On-Demand

B. Reserved

C. Dedicated

D. Spot

Answer: A

Notes: This is not a long enough term to make Reserved Instances the better option. Plus, the application can’t be interrupted, which rules out Spot Instances. Dedicated Instances provide the option to bring along existing software licenses, and the scenario does not indicate a need to do this.
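A back-of-the-envelope comparison with hypothetical rates shows why: running On-Demand for only 5 months costs less than committing to a full 1-year Reserved term, even though the Reserved hourly rate is lower (both rates below are invented for illustration):

```python
# Hypothetical hourly rates for one instance type (illustrative only).
on_demand_rate = 0.10   # $/hour, pay only while running
reserved_rate  = 0.06   # $/hour, but committed for a full year

hours_5_months = 5 * 30 * 24
hours_1_year   = 365 * 24

on_demand_cost = on_demand_rate * hours_5_months   # stop paying after 5 months
reserved_cost  = reserved_rate * hours_1_year      # the commitment runs 12 months
```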

Q90: Your company is considering migrating its data center to the cloud. What are the advantages of the AWS cloud over an on-premises data center?

A. Replace upfront operational expenses with low variable operational expenses.

B. Maintain physical access to the new data center, but share responsibility with AWS.

C. Replace low variable costs with upfront capital expenses.

D. Replace upfront capital expenses with low variable costs.

Answer: D

Notes: All the hardware purchased upfront for a data center will be replaced by resources which are variable in nature with low upfront costs.

Q91:  You are leading a pilot program to try the AWS Cloud for one of your applications. You have been instructed to provide an estimate of your AWS bill. Which service will allow you to do this by manually entering your planned resources by service?

A. AWS CloudTrail

B. AWS Cost and Usage Report

C. AWS Pricing Calculator

D. AWS Cost Explorer

Answer: C

Notes: With the AWS Pricing Calculator, you can input the services you will use, and the configuration of those services, and get an estimate of the costs these services will accrue. AWS Pricing Calculator lets you explore AWS services, and create an estimate for the cost of your use cases on AWS.

Q92: Which AWS service would enable you to view the spending distribution in one of your AWS accounts?

A. AWS Spending Explorer

B. Billing Advisor

C. AWS Organizations

D. AWS Cost Explorer

Answer: D

Notes: AWS Cost Explorer is a free tool that you can use to view your costs and usage. You can view data up to the last 13 months, forecast how much you are likely to spend for the next three months, and get recommendations for what Reserved Instances to purchase. You can use AWS Cost Explorer to see patterns in how much you spend on AWS resources over time, identify areas that need further inquiry, and see trends that you can use to understand your costs. You can also specify time ranges for the data, and view time data by day or by month.

Q93: You are managing the company’s AWS account. The current support plan is Basic, but you would like to begin using Infrastructure Event Management. What support plan (that already includes Infrastructure Event Management without an additional fee) should you upgrade to?

A. Upgrade to Enterprise plan.

B. Do nothing. It is included in the Basic plan.

C. Upgrade to Developer plan.

D. Upgrade to the Business plan. No other steps are necessary.

Answer: A

Notes: AWS Infrastructure Event Management is a structured program available to Enterprise support customers (and Business Support customers for an additional fee) that helps you plan for large-scale events, such as product or application launches, infrastructure migrations, and marketing events.

With Infrastructure Event Management, you get strategic planning assistance before your event, as well as real-time support during these moments that matter most for your business.

Q94: You have decided to use the AWS Cost and Usage Report to track your EC2 Reserved Instance costs. To where can these reports be published?

A. Trusted Advisor

B. An S3 Bucket that you own.

C. CloudWatch

D. An AWS owned S3 Bucket.

Answer: B

Notes: The AWS Cost and Usage Reports (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use Cost and Usage Reports to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour or day, by product or product resource, or by tags that you define yourself. AWS updates the report in your bucket once a day in comma-separated value (CSV) format. You can view the reports using spreadsheet software such as Microsoft Excel or Apache OpenOffice Calc, or access them from an application using the Amazon S3 API.
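Since the report lands in your S3 bucket as CSV, it is straightforward to post-process. The sketch below sums costs per service from a tiny CUR-like extract (the column names here are simplified; real CUR files carry many more columns):

```python
import csv
import io
from collections import defaultdict

# A tiny CUR-like extract (illustrative; real reports are much wider).
report = """service,usage_type,cost
AmazonEC2,BoxUsage:t3.micro,10.40
AmazonEC2,EBS:VolumeUsage,3.10
AmazonS3,TimedStorage,1.25
"""

# Aggregate line-item costs by service.
cost_by_service = defaultdict(float)
for row in csv.DictReader(io.StringIO(report)):
    cost_by_service[row["service"]] += float(row["cost"])
```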

Q95: What can we do in AWS to receive the benefits of volume pricing for your multiple AWS accounts?

A. Use consolidated billing in AWS Organizations.

B. Purchase services in bulk from AWS Marketplace.

C. Use AWS Trusted Advisor

D. You will receive volume pricing by default.

Answer: A

Notes: You can use the consolidated billing feature in AWS Organizations to consolidate billing and payment for multiple AWS accounts or multiple Amazon Internet Services Pvt. Ltd (AISPL) accounts. You can combine the usage across all accounts in the organization to share the volume pricing discounts, Reserved Instance discounts, and Savings Plans. This can result in a lower charge for your project, department, or company than with individual standalone accounts.
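A small calculation with hypothetical volume tiers shows why consolidation helps: combined usage crosses into the cheaper tier, while neither account would on its own (the $/TB-month rates and the 50 TB tier boundary below are made up for illustration):

```python
# Hypothetical S3-style volume tiers: the first 50 TB at a higher rate,
# everything above at a lower rate (rates are invented).
def tiered_cost(tb):
    """Monthly storage cost for `tb` terabytes under a two-tier price."""
    first = min(tb, 50)
    rest = max(tb - 50, 0)
    return first * 23.0 + rest * 22.0   # $/TB-month

separate = tiered_cost(30) + tiered_cost(40)   # two standalone accounts
combined = tiered_cost(70)                     # one consolidated bill
```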

Q96: A gaming company is using the AWS Developer Tool Suite to develop, build, and deploy their applications. Which AWS service can be used to trace user requests from end-to-end through the application?

A. AWS X-Ray

B. CloudWatch

C. AWS Inspector

D. CloudTrail

Answer: A

Notes: AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.

Q97: A company needs to use a Load Balancer which can serve traffic at the TCP, and UDP layers. Additionally, it needs to handle millions of requests per second at very low latencies. Which Load Balancer should they use?

A. TCP Load Balancer

B. Application Load Balancer

C. Classic Load Balancer

D. Network Load Balancer

Answer: D

Notes: Network Load Balancer is best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Transport Layer Security (TLS) traffic where extreme performance is required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) and is capable of handling millions of requests per second while maintaining ultra-low latencies.

Q98: Your company is migrating its services to the AWS cloud. The DevOps team has heard about infrastructure as code, and wants to investigate this concept. Which AWS service would they investigate?

A. AWS CloudFormation

B. AWS Lambda

C. CodeCommit

D. Elastic Beanstalk

Answer: A

Notes: AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS.
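As a flavor of infrastructure as code, here is a minimal CloudFormation template built as a Python dict and serialized to JSON (the bucket name is a placeholder, and the deploy command in the comment assumes a configured AWS CLI):

```python
import json

# A minimal CloudFormation template: one S3 bucket resource.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "my-example-app-bucket"},
        }
    },
}

body = json.dumps(template, indent=2)
# You would deploy this with the AWS CLI, e.g.:
#   aws cloudformation deploy --template-file template.json --stack-name demo
```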

Q99: You have a MySQL database that you want to migrate to the cloud, and you need it to be significantly faster there. You are looking for a speed increase up to 5 times the current performance. Which AWS offering could you use?

A. Elasticache

B. Amazon Aurora

C. DynamoDB

D. Amazon RDS MySQL

Answer: B

Notes: Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases.

Q100: A developer is trying to programmatically retrieve information from an EC2 instance, such as public keys, IP address, and instance ID. From where can this information be retrieved?

A. Instance metadata

B. Instance Snapshot

C. CloudWatch Logs

D. Instance userdata

Answer: A

Notes: This type of data is stored in Instance metadata. Instance userdata does not retrieve the information mentioned, but can be used to help configure a new instance.
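The metadata service lives at a fixed link-local address that is only reachable from inside the instance. The helper below just constructs the request URL (the function name is invented for illustration; on a real instance you would fetch it, ideally with an IMDSv2 session token):

```python
# The instance metadata service endpoint, reachable only from within EC2.
IMDS_BASE = "http://169.254.169.254/latest/meta-data/"

def metadata_url(path):
    """Build the metadata URL for a given key, e.g. 'instance-id'."""
    return IMDS_BASE + path.lstrip("/")

# On the instance itself you could then run, for example:
#   urllib.request.urlopen(metadata_url("instance-id")).read()
url = metadata_url("instance-id")
```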

Q101: Why is AWS more economical than traditional data centers for applications with varying compute workloads?

A) Amazon EC2 costs are billed on a monthly basis.
B) Users retain full administrative access to their Amazon EC2 instances.
C) Amazon EC2 instances can be launched on demand when needed.
D) Users can permanently run enough instances to handle peak workloads.


Answer: C
Notes: The ability to launch instances on demand when needed allows users to launch and terminate instances in response to a varying workload. This is a more economical practice than purchasing enough on-premises servers to handle the peak load.
Reference:  Advantage of cloud computing

Q102: Which AWS service would simplify the migration of a database to AWS?

A) AWS Storage Gateway
B) AWS Database Migration Service (AWS DMS)
C) Amazon EC2
D) Amazon AppStream 2.0


Answer: B
Notes: AWS DMS helps users migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. AWS DMS can migrate data to and from most widely used commercial and open-source databases.
Reference: AWS DMS 

Q103: Which AWS offering enables users to find, buy, and immediately start using software solutions in their AWS environment?

A) AWS Config
B) AWS OpsWorks
C) AWS SDK
D) AWS Marketplace


Answer: D
Notes: AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that makes it easy to find, test, buy, and deploy software that runs on AWS.
Reference: AWS Marketplace

Q104: Which AWS networking service enables a company to create a virtual network within AWS?

A) AWS Config
B) Amazon Route 53
C) AWS Direct Connect
D) Amazon Virtual Private Cloud (Amazon VPC)


Answer: D
Notes: Amazon VPC lets users provision a logically isolated section of the AWS Cloud where users can launch AWS resources in a virtual network that they define.
Reference: VPC https://aws.amazon.com/vpc/

Q105: Which component of the AWS global infrastructure does Amazon CloudFront use to ensure low-latency delivery?

A) AWS Regions
B) Edge locations
C) Availability Zones
D) Virtual Private Cloud (VPC)


Answer: B
Notes: – To deliver content to users with lower latency, Amazon CloudFront uses a global network of points of presence (edge locations and regional edge caches) worldwide.
Reference: Amazon CloudFront – https://aws.amazon.com/cloudfront/

Q106: How would a system administrator add an additional layer of login security to a user’s AWS Management Console?

A) Use Amazon Cloud Directory
B) Audit AWS Identity and Access Management (IAM) roles
C) Enable multi-factor authentication
D) Enable AWS CloudTrail


Answer: C
Notes: – Multi-factor authentication (MFA) is a simple best practice that adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS Management Console, they will be prompted for their username and password (the first factor—what they know), as well as for an authentication code from their MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for AWS account settings and resources.
Reference: MFA – https://aws.amazon.com/iam/features/mfa/

Q107: Which service can identify the user that made the API call when an Amazon EC2 instance is terminated?

A) AWS Trusted Advisor
B) AWS CloudTrail
C) AWS X-Ray
D) AWS Identity and Access Management (AWS IAM)


Answer: B
Notes: – AWS CloudTrail helps users enable governance, compliance, and operational and risk auditing of their AWS accounts. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs and APIs.
Reference: AWS CloudTrail https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html

Q108: Which service would be used to send alerts based on Amazon CloudWatch alarms?

A) Amazon Simple Notification Service (Amazon SNS)
B) AWS CloudTrail
C) AWS Trusted Advisor
D) Amazon Route 53


Answer: A
Notes: Amazon SNS and Amazon CloudWatch are integrated so users can collect, view, and analyze metrics for every active SNS. Once users have configured CloudWatch for Amazon SNS, they can gain better insight into the performance of their Amazon SNS topics, push notifications, and SMS deliveries.
Reference: CloudWatch for Amazon SNS https://docs.aws.amazon.com/sns/latest/dg/sns-monitoring-using-cloudwatch.html

Q109: Where can a user find information about prohibited actions on the AWS infrastructure?

A) AWS Trusted Advisor
B) AWS Identity and Access Management (IAM)
C) AWS Billing Console
D) AWS Acceptable Use Policy


Answer: D
Notes: – The AWS Acceptable Use Policy provides information regarding prohibited actions on the AWS infrastructure.
Reference: AWS Acceptable Use Policy – https://aws.amazon.com/aup/

Q110: Which of the following is an AWS responsibility under the AWS shared responsibility model?

A) Configuring third-party applications
B) Maintaining physical hardware
C) Securing application access and data
D) Managing guest operating systems


Answer: B
Notes: – Maintaining physical hardware is an AWS responsibility under the AWS shared responsibility model.
Reference: AWS shared responsibility model https://aws.amazon.com/compliance/shared-responsibility-model/

Q111: Which recommendations are included in the AWS Trusted Advisor checks? (Select TWO.)

A) Amazon S3 bucket permissions
B) AWS service outages for services
C) Multi-factor authentication (MFA) use on the AWS account root user
D) Available software patches for Amazon EC2 instances

Answer: A and C
Notes: Trusted Advisor checks bucket permissions in Amazon S3 for buckets with open access permissions. Bucket permissions that grant list access to everyone can result in higher than expected charges if objects in the bucket are listed by unintended users at a high frequency. Bucket permissions that grant upload and delete access to all users create potential security vulnerabilities by allowing anyone to add, modify, or remove items in a bucket. This Trusted Advisor check examines explicit bucket permissions and associated bucket policies that might override the bucket permissions.
Trusted Advisor does not provide notifications for service outages. You can use the AWS Personal Health Dashboard to learn about AWS Health events that can affect your AWS services or account.
Trusted Advisor checks the root account and warns if MFA is not enabled.
Trusted Advisor does not provide information about the number of users in an AWS account.
Reference: AWS Trusted Advisor best practice checklist.

AWS CCP Exam Topics:

The AWS Cloud Practitioner exam is broken down into 4 domains

  • Cloud Concepts
  • Security and Compliance
  • Technology
  • Billing and Pricing.

AWS Certified Cloud Practitioner Exam Whitepapers:

AWS has provided whitepapers to help you understand the technical concepts. Below are the recommended whitepapers.

  • Overview of Amazon Web Services
  • Architecting for the Cloud: AWS Best Practices
  • How AWS Pricing works whitepaper.
  • The Total Cost of (Non) Ownership of Web Application in the Cloud
  • Compare AWS Support Plans


Online Training and Labs for AWS Cloud Certified Practitioner Exam

  • A Cloud Guru
  • Linux Academy
  • Udemy


AWS Cloud Practitioners Jobs

  • Jobs Now
  • Weworkremotely
  • StackOverflow AWS Jobs


AWS Certified Cloud Practitioner Exam info and details, How To:

The AWS Certified Cloud Practitioner Exam is a multiple choice, multiple answer exam. Here is the Exam Overview:

  • Certification Name: AWS Certified Cloud Practitioner.
  • Prerequisites for the Exam: None.
  • Exam Pattern: Multiple Choice Questions
  • Number of Questions: 65
  • Duration: 90 mins
  • Exam fees: US $100
  • Exam Guide on AWS Website
  • Available languages for tests: English, Japanese, Korean, Simplified Chinese
  • Read AWS whitepapers
  • Register for certification account here.
  • Prepare for Certification Here


Additional Information for reference

Below are some useful reference links that would help you to learn about AWS Practitioner Exam.

  • AWS certified cloud practitioner/
  • certification faqs
  • AWS Cloud Practitioner Certification Exam on Quora

Other Relevant and Recommended AWS Certifications

AWS Certification Exams Roadmap
AWS Certification Exams Roadmap

AWS Certified Cloud Practitioner

AWS Certified Solutions Architect – Associate

AWS Certified Solution Architect Exam Prep App: Free

AWS Certified Developer – Associate

AWS Certified SysOps Administrator – Associate

AWS Certified Solutions Architect – Professional

AWS Certified DevOps Engineer – Professional

AWS Certified Big Data Specialty

AWS Certified Advanced Networking.

AWS Certified Security – Specialty

Other AWS Certification Exams Questions and Answers Dumps:

Top 200 AWS Certified Associate SysOps Administrator Practice Quiz – Questions and Answers Dumps

Big Data and Data Analytics 101 – Top 50 AWS Certified Data Analytics – Specialty Questions and Answers Dumps

CyberSecurity 101 and Top 25 AWS Certified Security Specialty Questions and Answers Dumps

Networking 101 and Top 20 AWS Certified Advanced Networking Specialty Questions and Answers Dumps


Other AWS Facts and Summaries and Questions/Answers Dump

  • AWS S3 facts and summaries and Q&A Dump
  • AWS DynamoDB facts and summaries and Questions and Answers Dump
  • AWS EC2 facts and summaries and Questions and Answers Dump
  • AWS Serverless facts and summaries and Questions and Answers Dump
  • AWS Developer and Deployment Theory facts and summaries and Questions and Answers Dump
  • AWS IAM facts and summaries and Questions and Answers Dump
  • AWS Lambda facts and summaries and Questions and Answers Dump
  • AWS SQS facts and summaries and Questions and Answers Dump
  • AWS RDS facts and summaries and Questions and Answers Dump
  • AWS ECS facts and summaries and Questions and Answers Dump
  • AWS CloudWatch facts and summaries and Questions and Answers Dump
  • AWS SES facts and summaries and Questions and Answers Dump
  • AWS EBS facts and summaries and Questions and Answers Dump
  • AWS ELB facts and summaries and Questions and Answers Dump
  • AWS Autoscaling facts and summaries and Questions and Answers Dump
  • AWS VPC facts and summaries and Questions and Answers Dump
  • AWS KMS facts and summaries and Questions and Answers Dump
  • AWS Elastic Beanstalk facts and summaries and Questions and Answers Dump
  • AWS CodeBuild facts and summaries and Questions and Answers Dump
  • AWS CodeDeploy facts and summaries and Questions and Answers Dump
  • AWS CodePipeline facts and summaries and Questions and Answers Dump
  • Pros and Cons of Cloud Computing
  • Cloud Customer Insurance – Cloud Provider Insurance – Cyber Insurance

Below is a listing of AWS certification exam quiz apps for all platforms:

AWS CCP CLF-C01 on Android –  AWS CCP CLF-C01 on iOS –  AWS CCP CLF-C01 on Windows 10/11

AWS Certified Cloud practitioner Exam Prep FREE version: CCP, CLF-C01

Online Training and Labs for AWS Certified Solution Architect Associate Exam

  • A Cloud Guru
  • Linux Academy
  • Udemy


AWS Certified Solution Architect Associate Jobs

  • Jobs Now
  • Weworkremotely
  • StackOverflow AWS Jobs

AWS Certification and Training Apps for all platforms:

AWS Cloud practitioner FREE version:

AWS Certified Cloud Practitioner for the web (PWA)

AWS Certified Cloud practitioner Exam Prep App for iOS

AWS Certified Cloud practitioner Exam Prep App for Microsoft/Windows10

AWS Certified Cloud practitioner Exam Prep App for Android (Google Play Store)

AWS Certified Cloud practitioner Exam Prep App for Android (Amazon App Store)

AWS Certified Cloud practitioner Exam Prep App for Android (Huawei App Gallery)

AWS Solution Architect FREE version:

AWS Certified Solution Architect Associate Exam Prep App for iOS: 

Solution Architect Associate for Android Google Play

AWS Certified Solution Architect Associate Exam Prep App (PWA)

AWS Certified Solution Architect Associate Exam Prep App for Amazon android

AWS Certified Cloud practitioner Exam Prep App for Microsoft/Windows10

AWS Certified Cloud practitioner Exam Prep App for Huawei App Gallery

AWS Cloud Practitioner PRO Versions:

AWS Certified Cloud practitioner PRO Exam Prep App for iOS

AWS Certified Cloud Practitioner PRO Associate Exam Prep App for android google

AWS Certified Cloud practitioner Exam Prep App for Amazon android

AWS Certified Cloud practitioner Exam Prep App for Windows 10

AWS Certified Cloud practitioner Exam Prep PRO App for Android (Huawei App Gallery)

AWS Solution Architect PRO

AWS Certified Solution Architect Associate PRO versions for iOS

AWS Certified Solution Architect Associate PRO Exam Prep App for Android google

AWS Certified Solution Architect Associate PRO Exam Prep App for Windows10

AWS Certified Solution Architect Associate PRO Exam Prep App for Amazon android

Huawei App Gallery: Coming soon

AWS Certified Developer Associates Free version:

AWS Certified Developer Associates for Android (Google Play)

AWS Certified Developer Associates Web/PWA

AWS Certified Developer Associates for iOs

AWS Certified Developer Associates for Android (Huawei App Gallery)

AWS Certified Developer Associates for windows 10 (Microsoft App store)

Amazon App Store: Coming soon

AWS Developer Associates PRO version

PRO version with mock exam for android (Google Play)

PRO version with mock exam ios

AWS Certified Developer Associates PRO for Windows 10 (Microsoft App Store)

AWS Certified Developer Associates PRO for Android (Huawei App Gallery): Coming soon

Latest Cloud AWS Cloud Training Questions and Answers from around the Web:

Jon Bonso vs Stephane Maarek CCP Practice Exam Differences

Tutorialsdojo.com is the best in the market, IMO.

They have a long-standing reputation for quality.

I’ve used them, I’ve recommended them to friends and family and I recommend them to students of my AWS courses also.

And last but not least, the Djamgatech apps for iOS and Android.

Practice on the web directly here via the AWS Cloud Practitioner Exam Prep App

I would also recommend checking: Exam Digest

What is the difference between Amazon EC2 Savings Plans and Spot Instances?

Amazon EC2 Savings Plans are ideal for workloads that involve a consistent amount of compute usage over a 1-year or 3-year term.
With Amazon EC2 Savings Plans, you can reduce your compute costs by up to 72% over On-Demand costs.

Spot Instances are ideal for workloads with flexible start and end times, or that can withstand interruptions. With Spot Instances, you can reduce your compute costs by up to 90% over On-Demand costs.
Unlike Amazon EC2 Savings Plans, Spot Instances do not require contracts or a commitment to a consistent amount of compute usage.
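
As a rough worked example of what those maximum discounts mean, the sketch below applies them to a hypothetical On-Demand rate (the $0.10/hour figure is an assumption for illustration, not a real price list entry; actual prices vary by instance type and Region):

```python
# Hypothetical On-Demand rate, assumed for illustration only.
on_demand_hourly = 0.10  # USD per hour

savings_plan_hourly = on_demand_hourly * (1 - 0.72)  # up to 72% off On-Demand
spot_hourly = on_demand_hourly * (1 - 0.90)          # up to 90% off On-Demand

print(round(savings_plan_hourly, 3))  # 0.028
print(round(spot_hourly, 3))          # 0.01
```

At these maximum discounts, Spot would cost roughly a third of the Savings Plan rate, but without any usage commitment and with the risk of interruption.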

Amazon EBS vs Amazon EFS

An Amazon EBS volume stores data in a single Availability Zone.
To attach an Amazon EC2 instance to an EBS volume, both the Amazon EC2 instance and the EBS volume must reside within the same Availability Zone.

Amazon EFS is a regional service. It stores data in and across multiple Availability Zones.
The duplicate storage enables you to access data concurrently from all the Availability Zones in the Region where a file system is located. Additionally, on-premises servers can access Amazon EFS using AWS Direct Connect.

Which cloud deployment model allows you to connect public cloud resources to on-premises infrastructure?

Applications made available through hybrid deployments connect cloud resources to on-premises infrastructure and applications. For example, you might have an application that runs in the cloud but accesses data stored in your on-premises data center.


Which benefit of cloud computing helps you innovate and build faster?

Agility: The cloud gives you quick access to resources and services that help you build and deploy your applications faster.

Which developer tool allows you to write code within your web browser?

Cloud9 is an integrated development environment (IDE) that allows you to write code within your web browser.

Which method of accessing an EC2 instance requires both a private key and a public key?

SSH allows you to access an EC2 instance from your local laptop using a key pair, which consists of a private key and a public key.

Which service allows you to track the name of the user making changes in your AWS account?

CloudTrail tracks user activity and API calls in your account, which includes identity information (the user’s name, source IP address, etc.) about the API caller.

Which analytics service allows you to query data in Amazon S3 using Structured Query Language (SQL)?

Athena is a query service that makes it easy to analyze data in Amazon S3 using SQL.

Which machine learning service helps you build, train, and deploy models quickly?

SageMaker helps you build, train, and deploy machine learning models quickly.

Which EC2 storage mechanism is recommended when running a database on an EC2 instance?

EBS is a storage device you can attach to your instances and is a recommended storage option when you run databases on an instance.

Which storage service is a scalable file system that only works with Linux-based workloads?

EFS is an elastic file system for Linux-based workloads.

Djamgatech: AI Driven Certification Preparation: Azure AI, AWS Machine Learning Specialty, AWS Data Analytics, GCP ML, GCP PDE,

Which AWS service provides a secure and resizable compute platform with choice of processor, storage, networking, operating system, and purchase model?

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. Amazon EC2 offers the broadest and deepest compute platform with choice of processor, storage, networking, operating system, and purchase model. Amazon EC2.

Which services allow you to build hybrid environments by connecting on-premises infrastructure to AWS?

Site-to-site VPN allows you to establish a secure connection between your on-premises equipment and the VPCs in your AWS account.

Direct Connect allows you to establish a dedicated network connection between your on-premises network and AWS.

What service could you recommend to a developer to automate the software release process?

CodePipeline is a developer tool that allows you to continuously automate the software release process.

Which service allows you to practice infrastructure as code by provisioning your AWS resources via scripted templates?

CloudFormation allows you to provision your AWS resources via scripted templates.

Which machine learning service allows you to add image analysis to your applications?

Rekognition is a service that makes it easy to add image analysis to your applications.

Which services allow you to run containerized applications without having to manage servers or clusters?

Fargate removes the need for you to interact with servers or clusters as it provisions, configures, and scales clusters of virtual machines to run containers for you.

ECS lets you run your containerized Docker applications on both Amazon EC2 and AWS Fargate.

EKS lets you run your containerized Kubernetes applications on both Amazon EC2 and AWS Fargate.

Amazon S3 offers multiple storage classes. Which storage class is best for archiving data when you want the cheapest cost and don’t mind long retrieval times?

S3 Glacier Deep Archive offers the lowest cost and is used to archive data. You can retrieve objects within 12 hours.

Djamgatech App for iOS, Android, Windows: AWS CCP, AWS SAA-C02, AZ900, AZ104, GCP ACE, AWS DVA-C01, AWS DAS-C01, AWS SCS-C01, AZ AI-900, AZ303, AZ304, AZ204

In the shared responsibility model, what is the customer responsible for?

You are responsible for patching the guest OS, including updates and security patches.

You are responsible for firewall configuration and securing your application.

A company needs phone, email, and chat access 24 hours a day, 7 days a week. The response time must be less than 1 hour if a production system has a service interruption. Which AWS Support plan meets these requirements at the LOWEST cost?

The Business Support plan provides phone, email, and chat access 24 hours a day, 7 days a week. The Business Support plan has a response time of less than 1 hour if a production system has a service interruption.

For more information about AWS Support plans, see Compare AWS Support Plans.

Which Amazon EC2 pricing model adjusts based on supply and demand of EC2 instances?

Spot Instances are discounted more heavily when there is more capacity available in the Availability Zones.

For more information about Spot Instances, see Amazon EC2 Spot Instances.

Which of the following is an advantage of consolidated billing on AWS?

Consolidated billing is a feature of AWS Organizations. You can combine the usage across all accounts in your organization to share volume pricing discounts, Reserved Instance discounts, and Savings Plans. This solution can result in a lower charge compared to the use of individual standalone accounts.

For more information about consolidated billing, see Consolidated billing for AWS Organizations.

A company requires physical isolation of its Amazon EC2 instances from the instances of other customers. Which instance purchasing option meets this requirement?

With Dedicated Hosts, a physical server is dedicated for your use. Dedicated Hosts provide visibility and the option to control how you place your instances on an isolated, physical server. For more information about Dedicated Hosts, see Amazon EC2 Dedicated Hosts.

A company is hosting a static website from a single Amazon S3 bucket.  Which AWS service will achieve lower latency and high transfer speeds?

CloudFront is a web service that speeds up the distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. Content is cached in edge locations. Content that is repeatedly accessed can be served from the edge locations instead of the source S3 bucket. For more information about CloudFront, see Accelerate static website content delivery.

Which AWS service provides a simple and scalable shared file storage solution for use with Linux-based Amazon EC2 instances and on-premises servers?

Amazon EFS provides an elastic file system that lets you share file data without the need to provision and manage storage. It can be used with AWS Cloud services and on-premises resources, and is built to scale on demand to petabytes without disrupting applications. With Amazon EFS, you can grow and shrink your file systems automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

For more information about using Amazon EFS, see Walkthrough: Create and mount a file system on premises with AWS Direct Connect and VPN.

Which service allows you to generate encryption keys managed by AWS?

KMS allows you to generate and manage encryption keys. The keys generated by KMS are managed by AWS.

Which service can integrate with a Lambda function to automatically take remediation steps when it uncovers suspicious network activity when monitoring logs in your AWS account?

GuardDuty can perform automated remediation actions by leveraging Amazon CloudWatch Events and AWS Lambda. GuardDuty continuously monitors for threats and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes multiple AWS data sources, such as AWS CloudTrail event logs, Amazon VPC Flow Logs, and DNS logs.

Which service allows you to create access keys for someone needing to access AWS via the command line interface (CLI)?

IAM allows you to create users and generate access keys for users needing to access AWS via the CLI.

Which service allows you to record software configuration changes within your Amazon EC2 instances over time?

Config helps with recording compliance and configuration changes over time for your AWS resources.

Which service assists with compliance and auditing by offering a downloadable report that provides the status of passwords and MFA devices in your account?

IAM provides a downloadable credential report that lists all users in your account and the status of their various credentials, including passwords, access keys, and MFA devices.

Which service allows you to locate credit card numbers stored in Amazon S3?

Macie is a data privacy service that helps you uncover and protect your sensitive data, such as personally identifiable information (PII) like credit card numbers, passport numbers, social security numbers, and more.

How do you manage permissions for multiple users at once using AWS Identity and Access Management (IAM)?

An IAM group is a collection of IAM users. When you assign an IAM policy to a group, all users in the group are granted permissions specified by the policy.
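
For illustration, here is a minimal identity-based policy of the kind you might attach to an IAM group so every member gets the same S3 read access (the bucket name is hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Adding a user to the group grants these permissions; removing the user revokes them, with no per-user policy editing.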

Which service protects your web application from cross-site scripting attacks?

WAF helps protect your web applications from common web attacks, like SQL injection or cross-site scripting.
 

Which AWS Trusted Advisor real-time guidance recommendations are available for AWS Basic Support and AWS Developer Support customers?

Basic and Developer Support customers get 50 service limit checks.

Basic and Developer Support customers get security checks for “Specific Ports Unrestricted” on Security Groups.

Basic and Developer Support customers get security checks on S3 Bucket Permissions.

Which service allows you to simplify billing by using a single payment method for all your accounts?

Organizations offers consolidated billing that provides 1 bill for all your AWS accounts. This also gives you access to volume discounts.

Which AWS service usage will always be free even after the 12-month free tier plan has expired?

One million Lambda requests are always free each month.

What is the easiest way for a customer on the AWS Basic Support plan to increase service limits?

The Basic Support plan allows 24/7 access to Customer Service via email and the ability to open service limit increase support cases.

Which types of issues are covered by AWS Support?

“How to” questions about AWS services and features

Problems detected by health checks

Djamgatech: AI Driven Certification Preparation: Azure AI, AWS Machine Learning Specialty, AWS Data Analytics, GCP ML, GCP PDE,

Which features of AWS reduce your total cost of ownership (TCO)?

Sharing servers with others allows you to save money.

Elastic computing allows you to trade capital expense for variable expense.

You pay only for the computing resources you use with no long-term commitments.

Which service allows you to select and deploy operating system and software patches automatically across large groups of Amazon EC2 instances?

Systems Manager allows you to automate operational tasks across your AWS resources.

Which service provides the easiest way to set up and govern a secure, multi-account AWS environment?

Control Tower allows you to centrally govern and enforce the best use of AWS services across your accounts.

Which cost management tool gives you the ability to be alerted when the actual or forecasted cost and usage exceed your desired threshold?

Budgets allow you to improve planning and cost control with flexible budgeting and forecasting. You can choose to be alerted when your budget threshold is exceeded.

Which tool allows you to compare your estimated service costs per Region?

The Pricing Calculator allows you to get an estimate for the cost of AWS services. Comparing service costs per Region is a common use case.

Who can assist with accelerating the migration of legacy contact center infrastructure to AWS?

Professional Services is a global team of experts that can help you realize your desired business outcomes with AWS.

The AWS Partner Network (APN) is a global community of partners that helps companies build successful solutions with AWS.

Which cost management tool allows you to view costs from the past 12 months, current detailed costs, and forecasts costs for up to 3 months?

Cost Explorer allows you to visualize, understand, and manage your AWS costs and usage over time.

Which service reduces the operational overhead of your IT organization?

Managed Services implements best practices to maintain your infrastructure and helps reduce your operational overhead and risk.

How do I set up Failover on Amazon AWS Route53?

In simple configurations, you create a group of records that all have the same name and type, such as a group of weighted records with a type of A for example.com. In more complex configurations, you create a tree of records that route traffic based on multiple criteria. Read more …
 
 
  • How can a program running inside AWS EC2 determine which VPC and security group an incoming IP address or TCP connection belongs to, for application-layer firewalling?

    I assume it is your account where the VPCs are located, otherwise you can’t really discover the information you are looking for. On the EC2 server you could use AWS CLI or PowerShell-based scripts that query the IP information. Based on the IP you can find out what instance uses the network interface, what security groups are tied to it, and in which VPC the instance is hosted. Read more here…

     

  • What are some tips, tricks and gotchas when using AWS Lambda to connect to a VPC?

    When using AWS Lambda inside your VPC, your Lambda function will be allocated private IP addresses, and only private IP addresses, from your specified subnets. This means that you must ensure that your specified subnets have enough free address space for your Lambda function to scale up to. Each simultaneous invocation needs its own IP. Read more here…
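
A quick back-of-the-envelope check of that free-address-space concern (AWS reserves 5 addresses in every subnet: the network address, VPC router, DNS, one reserved for future use, and the broadcast address):

```python
# Usable IPv4 addresses in a subnet of a given prefix length,
# accounting for the 5 addresses AWS reserves per subnet.
def usable_ips(prefix_len: int) -> int:
    return 2 ** (32 - prefix_len) - 5

print(usable_ips(24))  # 251
print(usable_ips(20))  # 4091
```

So a /24 subnet caps out at roughly 251 concurrent IP-consuming invocations; a /20 gives you far more headroom.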

How do AWS step functions communicate with lambda functions which are in a VPC?

When a Lambda “is in a VPC”, it really means that its attached Elastic Network Interface is in the customer’s VPC and not the hidden VPC that AWS manages for Lambda.

The ENI is not related to the AWS Lambda management system that does the invocation (the data plane mentioned here). The AWS Step Function system can go ahead and invoke the Lambda through the API, and the network request for that can pass through the underlying VPC and host infrastructure.

Those Lambdas in turn can invoke other Lambdas directly through the API, or more commonly by decoupling them, such as through Amazon SQS used as a trigger. Read more ….

How do I invoke an AWS Lambda function programmatically?

public InvokeResult invoke(InvokeRequest request)

Invokes a Lambda function. You can invoke a function synchronously (and wait for the response), or asynchronously. To invoke a function asynchronously, set InvocationType to Event.

For synchronous invocation, details about the function response, including errors, are included in the response body and headers. For either invocation type, you can find more information in the execution log and trace.

When an error occurs, your function may be invoked multiple times. Retry behavior varies by error type, client, event source, and invocation type. For example, if you invoke a function asynchronously and it returns an error, Lambda executes the function up to two more times. For more information, see Retry Behavior.

For asynchronous invocation, Lambda adds events to a queue before sending them to your function. If your function does not have enough capacity to keep up with the queue, events may be lost. Occasionally, your function may receive the same event multiple times, even if no error occurs. To retain events that were not processed, configure your function with a dead-letter queue.

The status code in the API response doesn’t reflect function errors. Error codes are reserved for errors that prevent your function from executing, such as permissions errors, limit errors, or issues with your function’s code and configuration. For example, Lambda returns TooManyRequestsException if executing the function would cause you to exceed a concurrency limit at either the account level (ConcurrentInvocationLimitExceeded) or function level (ReservedFunctionConcurrentInvocationLimitExceeded).

For functions with a long timeout, your client might be disconnected during synchronous invocation while it waits for a response. Configure your HTTP client, SDK, firewall, proxy, or operating system to allow for long connections with timeout or keep-alive settings.

This operation requires permission for the lambda:InvokeFunction action. Read more…

How bad would it be to configure one AWS VPC for all my environments (dev, stg, prod) while creating 2 subnets (priv, pub) for each environment?

It depends highly on the budget. However, for my systems I always set different environments up in different VPCs. Why? Because they’re guaranteed to be isolated from one another, and VPCs are very easy to create and manage if you’ve automated them. The flip side is you pay a bit more for edge services like NAT Gateway and ALB, since you’ll have at least one per VPC.

 

What are the differences between default and non-default AWS VPCs?

Default VPC

  1. 1 per region
  2. a set VPC CIDR range … you can’t change it
  3. has everything configured by default: 1 subnet per AZ, an internet gateway, route tables, and subnets set to auto-assign public IPv4 by default.

Custom VPCs

  1. As many as you want per region (within limits)
  2. Customisable CIDR range
  3. Customisable subnet structure
  4. Nothing configured by default, you have to configure everything

Read more here…

 

 

What would be the effect if IPv4 stopped working suddenly, and only IPv6 was left standing?

If IPv4 stopped working, and IPv6 remained functional, through some magical means that prevented IPv4 from being fixed, there would be a few days of pandemonium while non-dual-stack networks and legacy IPv4-only networks flailed mightily, and then a whole bunch of IPv6-skilled network engineers would make a shit-ton of money in a short period of time going from ill-prepared network to ill-prepared network, one at a time, rolling out IPv6 across their infrastructure as quickly as possible. Read more here….

 
Why is the subnet mask important in determining the network address?

The subnet mask determines how many bits of the address form the network portion (and thus, indirectly, the size of the network block in terms of how many host addresses are available):

192.0.2.0, subnet mask 255.255.255.0 means that 192.0.2 is the significant portion of the network number, and that there are 8 bits left for host addresses (i.e. 192.0.2.0 thru 192.0.2.255)

192.0.2.0, subnet mask 255.255.255.128 means that 192.0.2.0 is the significant portion of the network number (first three octets and the most significant bit of the last octet), and that there are 7 bits left for host addresses (i.e. 192.0.2.0 thru 192.0.2.127)

When in doubt, envision the network number and subnet mask in base 2 (i.e. binary) and it will become much clearer. Read more here…
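
The same two examples can be checked directly with Python’s standard-library ipaddress module:

```python
import ipaddress

net = ipaddress.ip_network("192.0.2.0/24")   # mask 255.255.255.0
print(net.netmask, net.num_addresses)        # 255.255.255.0 256

half = ipaddress.ip_network("192.0.2.0/25")  # mask 255.255.255.128
print(half.netmask, half.num_addresses)      # 255.255.255.128 128
print(half[0], half[-1])                     # 192.0.2.0 192.0.2.127
```

The 8-bit host range yields 256 addresses, the 7-bit range 128, matching the ranges worked out above.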

 

What are some best practices securing my Amazon Virtual Private Cloud (VPC)?

IAM is the new perimeter.

Separate out the roles needed to do each job. (Assuming this is a corporate environment)

Have a role for EC2, another for Networking, another for IAM.

Everyone should not be admin. Everyone should not be able to add/remove IGWs, NAT gateways, alter security groups and NACLs, or set up peering connections.

Also, another thing… lock down full internet access. Limit to what is needed and that’s it. Read more here….

How can we setup AWS public-private subnet in VPC without NAT server?

Within a single VPC, the subnets’ route tables need to point to each other. This already works without additional routes because the VPC sets up the local route to cover the entire VPC CIDR.

Security groups are not used here since they are attached to instances, and not networks.

See: Amazon Virtual Private Cloud

The NAT EC2 instance (server), or AWS-provided NAT gateway is necessary only if the private subnet internal addresses need to make outbound connections. The NAT will translate the private subnet internal addresses to the public subnet internal addresses, and the AWS VPC Internet Gateway will translate these to external IP addresses, which can then go out to the Internet. Read more here ….

What are the applications (or workloads) that cannot be migrated on to cloud (AWS or Azure or GCP)?

A good example of workloads that currently are not in public clouds are mobile and fixed core telecom networks for tier 1 service providers. This is despite the fact that these core networks are increasingly software based and have largely been decoupled from the hardware. There are a number of reasons for this; for example, public cloud providers such as Azure and AWS do not offer the guaranteed availability required by telecom networks. These networks require 99.999% availability, typically referred to as telecom grade.

The regulatory environment frequently restricts hosting of subscriber data outside of the operators’ data centers or in another country, and key network functions such as lawful interception cannot contractually be hosted off-prem. Read more here….

How many CIDRs can we add to my own created VPC?

You can add up to 5 IPv4 CIDR blocks, or 1 IPv6 block per VPC. You can further segment the network by utilizing up to 200 subnets per VPC. Amazon VPC Limits. Read more …
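
To get a feel for how far 200 subnets goes, you can enumerate the possible subnets of a VPC CIDR with the standard library (the /16 and /24 sizes here are illustrative choices, not requirements):

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")   # one IPv4 CIDR block of a VPC
subnets = list(vpc.subnets(new_prefix=24))  # carve it into /24 subnets
print(len(subnets))                          # 256 possible /24s
print(subnets[0], subnets[-1])               # 10.0.0.0/24 10.0.255.0/24
```

So a single /16 block already offers more /24 subnets than the 200-per-VPC limit allows you to create.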

Why can’t a subnet’s CIDR be changed once it has been assigned?

Sure it can, but you’ll need to coordinate with the neighbors. You can merge two /25’s into a single /24 quite effortlessly if you control the entire range it covers. In practice you’ll see many tiny allocations in public IPv4 space, like /29’s and even smaller. Those are all assigned to different people. If you want to do a big shuffle there, you have a lot of coordinating to do, or accept the fallout from the breakage you cause. Read more…
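
The merge of two adjacent /25’s into a /24 can be demonstrated with the stdlib ipaddress module:

```python
import ipaddress

a = ipaddress.ip_network("192.0.2.0/25")
b = ipaddress.ip_network("192.0.2.128/25")

# collapse_addresses merges adjacent/overlapping blocks into the
# smallest covering set — here, a single /24.
merged = list(ipaddress.collapse_addresses([a, b]))
print(merged)  # [IPv4Network('192.0.2.0/24')]
```

The merge only works cleanly because the two halves are adjacent and under one owner’s control, which is exactly the coordination problem described above.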

Can one VPC talk to another VPC?

Yes, but a Virtual Private Cloud is usually built for the express purpose of being isolated from unwanted external traffic. I can think of several good reasons to encourage that sort of communication, so the idea is not without merit. Read more..
 

AWS CCP CLF-C01 on Android –  AWS CCP CLF-C01 on iOS –  AWS CCP CLF-C01 on Windows 10/11

What questions to expect in cloud support engineer deployment roles at AWS? 

Cloud Support Engineer (CSE) is a role which requires the following abilities:

  • Wide range of technical skills
  • Good communication and time management
  • Good knowledge about the AWS services, and how to leverage them to solve simple to complex problems.

As your question is related to the deployment Pod, you will probably be asked about deployment methods (such as blue-green deployments and A/B testing) as well as pipelining strategies. You might be asked during this interview to reason about a simple task and to code it (like parsing a log file). Also review the TCP/IP stack in depth, as well as the tools to troubleshoot it, for the networking round. You will eventually have some Linux questions; the range can vary from common CLI tools to Linux internals like signals, syscalls, file descriptors, and so on.

Last but not least, the Leadership Principles: I can only suggest that you prepare a story for each of them. You will quickly find which LPs they are looking for and will be able to give the right signal to your interviewer.

Finally, remember that there’s a debrief after the (usually 5) stages of your on-site interview, and more senior and convincing interviewers tend to defend their vote, so don’t screw up with them.

Be natural, focus on the question details and ask for confirmation, be cool but not too much. At the end of the day, remember that your job will be to understand customer issues and provide a solution, so treat your interviewers as if they were customers and they will see a successful CSE in you, be reassured and give you the job. 

Expect questions on CloudFormation, Terraform, AWS EC2/RDS, and stack-related topics.

It’s a high-tech call center. You are expected to take calls and chats from customers and give them technical advice. You will not be doing any of the cool stuff you did earlier (if you are coming from an engineering or DBA job). You will surely gain very good knowledge of multiple AWS services, and especially the one you are hired into; however, most of the knowledge will be theoretical rather than practical in day-to-day life.

It also depends on the support team you are being hired for. Networking or compute teams (Ec2) have different interview patterns vs database or big data support.

In any case, basics of OS and networking are critical to the interview. If you have a phone screen, we will be looking for basic to semi-advanced skills in these and in your speciality. For example, if you mention Oracle in your resume and you are interviewing for the database team, expect a flurry of those questions.

The other important aspect is the Amazon Leadership Principles. Half of your interview is based on LPs. If you do not have scenarios that demonstrate the LPs, you cannot expect to work here even if your technical skills are above average (having extraordinary skills is a different thing).

The overall interview itself will have 1 phone screen if you are interviewing in the US and 1–2 if outside the US. The onsite loop will be 4 rounds, 2 of which are technical (again divided into OS and networking, plus the specific speciality of the team you are interviewing for) and 2 of which are leadership principles, where we test your soft skills and management skills, as they are very important in this job. You need to have a strong viewpoint, disagree when it seems valid to do so, show empathy, and be a team player while showing the ability to pull things off individually as well. These skills are critical for cracking LP interviews.

You will NOT be asked to code or write queries, as it’s not part of the job, so you can concentrate on the theoretical part of the subject and also on your resume. We will grill you on topics mentioned on your resume to start with.

Traditional monolithic architectures are hard to scale: TRUE

“A monolith is something built from a single piece of material, historically rock; the term is normally used for an object made from one large piece of material.” – Non-Technical Definition. “A monolithic application has a single code base with multiple modules.”

Large Monolithic code-base (often spaghetti code) puts immense cognitive complexity on the developer’s head. As a result, the development velocity is poor. Granular scaling (i.e., scaling part of the application) is not possible. Polyglot programming or polyglot database is challenging.

Drawbacks of Monolithic Architecture

This simple approach has a limitation in size and complexity. The application becomes too large and complex to fully understand, making it hard to apply changes quickly and correctly. The size of the application can slow down start-up time, and you must redeploy the entire application on each update.

Sticky Sessions help increase your application’s scalability: FALSE

Sticky sessions, also known as session affinity, allow you to route a site user to the particular web server that is managing that individual user’s session. The session’s validity can be determined by a number of methods, including a client-side cookie or configurable duration parameters that can be set at the load balancer that routes requests to the web servers.

Some advantages of utilizing sticky sessions are that it is cost-effective, because you are storing sessions on the same web servers running your applications, and that retrieval of those sessions is generally fast because it eliminates network latency. A drawback of storing sessions on an individual node is that in the event of a failure, you are likely to lose the sessions that were resident on the failed node. In addition, if the number of your web servers changes, for example in a scale-up scenario, it’s possible that traffic may be unequally spread across the web servers, as active sessions may exist on particular servers. If not mitigated properly, this can hinder the scalability of your applications. Read more here …
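
The scale-out drawback can be seen with a toy hash-based affinity scheme (illustrative only — real load balancers use cookies or duration-based stickiness, not a bare hash):

```python
import hashlib

def route(session_id: str, servers: list) -> str:
    """Deterministically map a session to a server (naive affinity)."""
    h = int(hashlib.sha256(session_id.encode()).hexdigest(), 16)
    return servers[h % len(servers)]

sessions = [f"user-{i}" for i in range(1000)]
before = {s: route(s, ["web1", "web2", "web3"]) for s in sessions}
# Add a fourth server: most sessions now map to a different node,
# so their server-local session state is effectively lost.
after = {s: route(s, ["web1", "web2", "web3", "web4"]) for s in sessions}
moved = sum(before[s] != after[s] for s in sessions)
print(moved, "of", len(sessions), "sessions changed server")
```

This is why server-local session state fights elasticity: every change in fleet size invalidates a large fraction of the affinity mapping.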

AWS recommends replicating across Availability Zones for resiliency: TRUE

If you need to replicate your data or applications in an AWS Local Zone, AWS recommends that you use one of the following zones as the failover zone:

  • Another Local Zone

  • An Availability Zone in the Region that is not the parent zone. You can use the describe-availability-zones command to view the parent zone.

For more information about AWS Regions and Availability Zones, see AWS Global Infrastructure.

What are the benefits of AWS Cloud Computing?

  • Trade Capital expenses for variable expenses
  • Increase speed and agility
  • Benefit from massive economies at scale
  • Stop spending money on running and maintaining data centers
  • Stop guessing capacity
  • Go global in minutes

What is the default behavior for an EC2 instance when terminated?

After you terminate an instance, it remains visible in the console for a short while, and then the entry is automatically deleted. You cannot delete the terminated instance entry yourself. After an instance is terminated, resources such as tags and volumes are gradually disassociated from the instance and therefore may no longer be visible on the terminated instance after a short while.

When an instance terminates, the data on any instance store volumes associated with that instance is deleted.

By default, Amazon EBS root device volumes are automatically deleted when the instance terminates. However, by default, any additional EBS volumes that you attach at launch, or any EBS volumes that you attach to an existing instance, persist even after the instance terminates. This behavior is controlled by the volume’s DeleteOnTermination attribute, which you can modify.
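The DeleteOnTermination attribute can be changed on a running instance. A hedged boto3 sketch, where the device name and instance ID are illustrative placeholders:

```python
def keep_volume_mapping(device_name: str = "/dev/sdf") -> list:
    """BlockDeviceMappings payload that makes the attached volume persist on terminate."""
    return [{"DeviceName": device_name, "Ebs": {"DeleteOnTermination": False}}]

def persist_volume(instance_id: str, device_name: str = "/dev/sdf") -> None:
    """Modify a live instance (requires AWS credentials)."""
    import boto3
    boto3.client("ec2").modify_instance_attribute(
        InstanceId=instance_id,
        BlockDeviceMappings=keep_volume_mapping(device_name),
    )

# Example (not executed here; the instance ID is a placeholder):
# persist_volume("i-0123456789abcdef0")
```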

For more information, please visit: Terminate Your Instance

How do Amazon EC2 EBS burst credits work?

The documentation on General Purpose SSD (gp2) EBS volumes can be found at this page: New SSD-Backed Elastic Block Storage 

When you first launch an instance with gp2 volumes attached, you get an initial burst credit allowing for up to 30 minutes at 3,000 IOPS.

After the first 30 minutes, your volume will accrue credits as follows (taken directly from AWS documentation):

Within the General Purpose (SSD) implementation is a Token Bucket model that works as follows:

  • Each token represents an “I/O credit” that pays for one read or one write.
  • A bucket is associated with each General Purpose (SSD) volume, and can hold up to 5.4 million tokens.
  • Tokens accumulate at a rate of 3 per second per configured GB, up to the capacity of the bucket.
  • Tokens can be spent at up to 3,000 per second per volume.
  • The baseline performance of the volume is equal to the rate at which tokens are accumulated — 3 IOPS per GB.

In addition to this, gp2 volumes provide a baseline performance of 3 IOPS per GB, up to 1 TB (3,000 IOPS). Volumes larger than 1 TB no longer use the credit system, as they already provide a baseline of 3,000 IOPS. gp2 volumes have a cap of 10,000 IOPS regardless of volume size (so IOPS max out for volumes larger than about 3.3 TB).
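The token-bucket numbers above are easy to sanity-check. A small calculation, using only the figures quoted in this section (5.4 million credits, 3,000 IOPS burst, 3 IOPS/GB baseline, 10,000 IOPS cap):

```python
BUCKET_CAPACITY = 5_400_000   # I/O credits a full bucket can hold
BURST_RATE = 3_000            # max credit spend per second (IOPS)
MAX_GP2_IOPS = 10_000         # gp2 cap quoted in this article

def baseline_iops(size_gb: int) -> int:
    """Baseline = 3 IOPS per GB, capped at the gp2 maximum."""
    return min(3 * size_gb, MAX_GP2_IOPS)

def burst_seconds(size_gb: int) -> float:
    """How long a full bucket sustains the 3,000 IOPS burst rate.

    Credits drain at BURST_RATE but refill at the baseline rate, so the
    net drain is the difference. Volumes of 1,000 GB or more never need
    to burst: their baseline already meets or exceeds 3,000 IOPS.
    """
    net_drain = BURST_RATE - baseline_iops(size_gb)
    if net_drain <= 0:
        return float("inf")
    return BUCKET_CAPACITY / net_drain

print(baseline_iops(100))   # 300 IOPS baseline for a 100 GB volume
print(burst_seconds(100))   # 2000.0 seconds (~33 minutes) of full burst
```

So a 100 GB volume can burst at 3,000 IOPS for roughly half an hour before falling back to its 300 IOPS baseline.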

Is elastic IP service free if we associate it with any VM (EC2 server)?

Elastic IP addresses are free while they are associated with a running instance, so feel free to use one! When you stop an instance, its Elastic IP is no longer in use, so you will be charged in the meantime. The benefit is that the IP stays allocated to your account instead of being lost like an ordinary public IP. Once you start the instance, you just re-associate the address and you have your old IP again.

Here are the charges associated with the use of Elastic IP addresses:

* $0.00 per Elastic IP address while in use (associated with a running instance)

* $0.01 per non-attached Elastic IP address per complete hour

* $0.00 per Elastic IP address remap – first 100 remaps / month

* $0.10 per Elastic IP address remap – additional remaps / month over 100

If you require any additional information about pricing please reference the link below

Amazon EC2 Pricing – Amazon Web Services

The other costs are as outlined in the paragraph quoted above.
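At the $0.01/hour rate quoted above, the cost of leaving an Elastic IP allocated but unattached is easy to estimate:

```python
def unattached_eip_cost(hours: float, rate_per_hour: float = 0.01) -> float:
    """Charge for an allocated-but-unattached Elastic IP at the rate quoted above."""
    return round(hours * rate_per_hour, 2)

# An Elastic IP left unattached for a 30-day month (720 hours):
print(unattached_eip_cost(720))  # 7.2 (i.e., $7.20)
```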

How do I reduce my AWS EC2 cost? My AWS EC2 expenditure comprises 80% of my AWS bill.

The short answer to reducing your AWS EC2 costs – turn off your instances when you don’t need them.

Your AWS bill is just like any other utility bill: you get charged for however much you used that month. Don’t make the mistake of leaving your instances on 24/7 if you’re only using them during certain days and times (e.g., Monday–Friday, 9 to 5).

To automatically start and stop your instances, AWS offers an “EC2 scheduler” solution. A better option would be a cloud cost management tool that not only stops and starts your instances automatically, but also tracks your usage and makes sizing recommendations to optimize your cloud costs and maximize your time and savings.
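A scheduler like the one described can be sketched in a few lines of boto3. This is an illustrative sketch, not AWS’s EC2 Scheduler solution; the instance IDs are placeholders:

```python
def should_run(weekday: int, hour: int, start: int = 9, stop: int = 17) -> bool:
    """True during the Monday-to-Friday, 9-to-5 window (weekday 0 = Monday)."""
    return weekday < 5 and start <= hour < stop

def enforce_schedule(instance_ids: list) -> None:
    """Start or stop the given instances based on the current UTC time
    (requires AWS credentials; run this periodically, e.g. hourly)."""
    from datetime import datetime, timezone
    import boto3
    now = datetime.now(timezone.utc)
    ec2 = boto3.client("ec2")
    if should_run(now.weekday(), now.hour):
        ec2.start_instances(InstanceIds=instance_ids)
    else:
        ec2.stop_instances(InstanceIds=instance_ids)

# Example (not executed here; the instance ID is a placeholder):
# enforce_schedule(["i-0123456789abcdef0"])
```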

You could potentially save money using Reserved Instances. But in non-production environments such as dev, test, QA, and training, Reserved Instances are not your best bet. Why? These environments are less predictable; you may not know how many instances you need or when you will need them, so it’s better not to commit to usage charges you may waste. Instead, schedule such instances (for example, using ParkMyCloud). Scheduling instances to be up only 12 hours per day on weekdays will save you about 65% – better than all but the most restrictive 3-year RIs!
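The quoted savings figure checks out arithmetically: 12 hours × 5 days is 60 running hours out of the 168 hours in a week.

```python
HOURS_PER_WEEK = 24 * 7  # 168

def parked_savings(hours_up_per_day: float, days_per_week: int) -> float:
    """Percent saved vs. running 24/7, for an instance that is only up on a schedule."""
    running = hours_up_per_day * days_per_week
    return round(100 * (1 - running / HOURS_PER_WEEK), 1)

print(parked_savings(12, 5))  # 64.3 -- roughly the ~65% claimed above
```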

You can also save money with:

  • Spot Instances
  • AWS Dedicated Hosts & Dedicated Instances
  • Auto Scaling Groups
  • Rightsizing

What is the difference between an Instance, an AMI and a Snapshot in AWS? What are they used for?

Well, AWS is a web service provider which offers a set of services related to compute, storage, databases, networking and more to help businesses scale and grow.

All of these concepts relate to the AWS EC2 service, so let me start with an instance.

Instance:

  • An EC2 instance is similar to a server where you can host your websites or applications to make them available globally
  • It is highly scalable and works on the pay-as-you-go model
  • You can increase or decrease the capacity of these instances as per your requirements

AMI:

  • An AMI provides the information required to launch an EC2 instance
  • An AMI includes pre-configured templates of the operating system that runs on AWS
  • Users can launch multiple instances with the same configuration from a single AMI

Snapshot:

  • Snapshots are incremental backups of Amazon EBS volumes
  • Data on EBS volumes is backed up to Amazon S3 by taking point-in-time snapshots
  • Only the data unique to a snapshot is removed when that snapshot is deleted
  • Multiple EBS volumes can be created from these snapshots
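Creating such a point-in-time snapshot takes one API call. A hedged boto3 sketch; the volume ID and tag values are illustrative placeholders:

```python
def snapshot_request(volume_id: str, description: str) -> dict:
    """Keyword arguments for ec2.create_snapshot: a point-in-time, incremental backup."""
    return {
        "VolumeId": volume_id,
        "Description": description,
        "TagSpecifications": [
            {"ResourceType": "snapshot",
             "Tags": [{"Key": "CreatedBy", "Value": "nightly-backup"}]},
        ],
    }

def take_snapshot(volume_id: str, description: str = "nightly backup") -> str:
    """Create the snapshot (requires AWS credentials)."""
    import boto3
    snap = boto3.client("ec2").create_snapshot(**snapshot_request(volume_id, description))
    return snap["SnapshotId"]

# Example (not executed here; the volume ID is a placeholder):
# take_snapshot("vol-0123456789abcdef0")
```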

What are the main differences between a VPNs, VPS and VPC?

They are definitely all chalk and cheese to one another.

A VPN (Virtual Private Network) is essentially an encrypted “channel” connecting two networks, or a machine to a network, generally over the public internet.

A VPS (Virtual Private Server) is a rented virtual machine running on someone else’s hardware. AWS EC2 can be thought of as a VPS, but the term is usually used to describe low-cost products offered by lots of other hosting companies.

A VPC (Virtual Private Cloud) is a virtual network in AWS (Amazon Web Services). It can be divided into private and public subnets, have custom routing rules, have internal connections to other VPCs, etc. EC2 instances and other resources are placed in VPCs similarly to how physical data centers have operated for a very long time.

AWS CCP CLF-C01 on Android –  AWS CCP CLF-C01 on iOS –  AWS CCP CLF-C01 on Windows 10/11

What is the use of elastic IP in AWS?

An Elastic IP address is basically a static IP (IPv4) address that you can allocate to your resources.

Now, if you allocate the IP to a resource (and the resource is running), you are not charged anything. On the other hand, if you create an Elastic IP but do not allocate it to a resource (or the resource is not running), then you are charged a small amount (around $0.005 per hour, if I remember correctly).

Additional info about these:

You are limited to 5 Elastic IP addresses per region. If you require more than that, you can contact AWS support with a request for additional addresses. You need to have a good reason in order to be approved because IPv4 addresses are becoming a scarce resource.

In general, you should be good without Elastic IPs for most of the use-cases (as every EC2 instance has its own public IP, and you can use load balancers, as well as map most of the resources via Route 53).

One of the use-cases that I’ve seen is a client using an Elastic IP to make it easier to access a specific EC2 instance via RDP, as well as to deploy through Visual Studio: he targets the Elastic IP and thus does not have to watch for changes in the public IP (after stopping or rebooting).
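Allocating and attaching an Elastic IP for a use-case like that takes two boto3 calls. A sketch, with a placeholder instance ID:

```python
def associate_kwargs(instance_id: str, allocation_id: str) -> dict:
    """In a VPC, association uses the AllocationId rather than the raw IP address."""
    return {"InstanceId": instance_id, "AllocationId": allocation_id}

def attach_new_eip(instance_id: str) -> str:
    """Allocate an Elastic IP and attach it to the instance
    (requires AWS credentials)."""
    import boto3
    ec2 = boto3.client("ec2")
    alloc = ec2.allocate_address(Domain="vpc")   # reserve a static public IPv4 address
    ec2.associate_address(**associate_kwargs(instance_id, alloc["AllocationId"]))
    return alloc["PublicIp"]

# Example (not executed here; the instance ID is a placeholder):
# attach_new_eip("i-0123456789abcdef0")
```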

Why would you choose not to use AWS Transit Gateway instead of VPC peering?

At the time of writing, AWS Transit Gateway does not support inter-region attachments: the transit gateway and the attached VPCs must be in the same Region. VPC peering, by contrast, supports inter-region peering.

Difference between AWS Workspace and AWS Ec2 VM?

  • An EC2 instance is a server instance, whilst a WorkSpace is a Windows desktop instance
  • Both Windows Server and Windows workstation editions have desktops. Windows Server Core does not (and AWS doesn’t have an AMI for Windows Server Core that I could find).

  • It is possible to SSH into a Windows instance – this is done on port 22. You would not see a desktop when using SSH if you had enabled it. It is not enabled by default.

  • If you are seeing a desktop, I believe you’re “RDPing” to the Windows instance. This is done with the RDP protocol on port 3389.

  • Two different protocols and two different ports.
  • WorkSpaces doesn’t allow terminal or SSH services by default; you need to use the WorkSpaces client. You can still enable RDP and/or SSH, but this is not recommended.
  • WorkSpaces is a managed desktop service. AWS takes care of pre-built AMIs, software licenses, joining to a domain, scaling, etc.
  • What is Amazon EC2? Scalable, pay-as-you-go compute capacity in the cloud. Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.
  • What is Amazon WorkSpaces? Easily provision cloud-based desktops that allow end-users to access applications and resources. With a few clicks in the AWS Management Console, customers can provision a high-quality desktop experience for any number of users at a cost that is highly competitive with traditional desktops and half the cost of most virtual desktop infrastructure (VDI) solutions. End-users can access the documents, applications and resources they need with the device of their choice, including laptops, iPad, Kindle Fire, or Android tablets.
  • Amazon EC2 can be classified as a tool in the “Cloud Hosting” category, while Amazon WorkSpaces is grouped under “Virtual Desktop”.
  • Some of the features offered by Amazon EC2 are:

    • Elastic – Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds or even thousands of server instances simultaneously.
    • Completely Controlled – You have complete control of your instances. You have root access to each one, and you can interact with them as you would any machine.
    • Flexible – You have the choice of multiple instance types, operating systems, and software packages. Amazon EC2 allows you to select a configuration of memory, CPU, instance storage, and the boot partition size that is optimal for your choice of operating system and application.

    On the other hand, Amazon WorkSpaces provides the following key features:

    • Support Multiple Devices- Users can access their Amazon WorkSpaces using their choice of device, such as a laptop computer (Mac OS or Windows), iPad, Kindle Fire, or Android tablet.
    • Keep Your Data Secure and Available- Amazon WorkSpaces provides each user with access to persistent storage in the AWS cloud. When users access their desktops using Amazon WorkSpaces, you control whether your corporate data is stored on multiple client devices, helping you keep your data secure.
    • Choose the Hardware and Software you need- Amazon WorkSpaces offers a choice of bundles providing different amounts of CPU, memory, and storage so you can match your Amazon WorkSpaces to your requirements. Amazon WorkSpaces offers preinstalled applications (including Microsoft Office) or you can bring your own licensed software.

Amazon EBS vs Amazon EFS

An Amazon EBS volume stores data in a single Availability Zone.
To attach an Amazon EC2 instance to an EBS volume, both the Amazon EC2 instance and the EBS volume must reside within the same Availability Zone.

Amazon EFS is a regional service: it stores data in and across multiple Availability Zones.
This duplicated storage enables you to access data concurrently from all the Availability Zones in the Region where a file system is located. Additionally, on-premises servers can access Amazon EFS using AWS Direct Connect.


AWS Cloud Practitioner CCP CLF-C01 Certification Exam Prep

AWS Services Cheat Sheet:

Compute

Category | Service | Description

Instances (Virtual machines)
  • EC2 – Provides secure, resizable compute capacity in the cloud. It makes web-scale cloud computing easier for developers. EC2
  • EC2 Spot – Run fault-tolerant workloads for up to 90% off. EC2Spot
  • EC2 Auto Scaling – Automatically add or remove compute capacity to meet changes in demand. EC2_AutoScaling
  • Lightsail – Designed to be the easiest way to launch & manage a virtual private server with AWS. An easy-to-use cloud platform that offers everything needed to build an application or website. Lightsail
  • Batch – Enables developers, scientists, & engineers to easily & efficiently run hundreds of thousands of batch computing jobs on AWS. Fully managed batch processing at any scale. Batch

Containers
  • Elastic Container Service (ECS) – Highly secure, reliable, & scalable way to run containers. ECS
  • Elastic Container Registry (ECR) – Easily store, manage, & deploy container images. ECR
  • Elastic Kubernetes Service (EKS) – Fully managed Kubernetes service. EKS
  • Fargate – Serverless compute for containers. Fargate

Serverless
  • Lambda – Run code without thinking about servers. Pay only for the compute time you consume. Lambda

Edge and hybrid
  • Outposts – Run AWS infrastructure & services on premises for a truly consistent hybrid experience. Outposts
  • Snow Family – Collect and process data in rugged or disconnected edge environments. SnowFamily
  • Wavelength – Deliver ultra-low latency applications for 5G devices. Wavelength
  • VMware Cloud on AWS – Innovate faster, rapidly transition to the cloud, & work securely from any location. VMware_On_AWS
  • Local Zones – Run latency-sensitive applications closer to end-users. LocalZones


Networking and Content Delivery

Use case | Service | Description

Build a cloud network
  • VPC – Define and provision a logically isolated network for your AWS resources. VPC lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. VPC
  • Transit Gateway – Connect VPCs and on-premises networks through a central hub. This simplifies the network & puts an end to complex peering relationships. TransitGateway
  • PrivateLink – Provide private connectivity between VPCs & services hosted on AWS or on-premises, securely on the Amazon network. PrivateLink
  • Route 53 – Route users to Internet applications with a managed DNS service. Route 53 is a highly available & scalable cloud DNS web service. Route53

Scale your network design
  • Elastic Load Balancing – Automatically distributes incoming application traffic across multiple targets, such as EC2 instances, containers, IP addresses, & Lambda functions. ElasticLoadBalancing
  • Global Accelerator – Direct traffic through the AWS global network to improve global application performance; improves internet user performance by up to 60%. GlobalAccelerator

Secure your network traffic
  • Shield – A managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. Shield
  • WAF – A web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. WAF
  • Firewall Manager – A security management service which allows you to centrally configure & manage firewall rules across accounts & apps in an AWS Organization. FirewallManager

Build a hybrid IT network
  • VPN – Client – Client VPN solutions establish secure connections between on-premises networks, remote offices, client devices, & the AWS global network. VPN
  • VPN – Site to Site – Site-to-Site VPN creates a secure connection between a data center or branch office & AWS cloud resources. site_to_site
  • Direct Connect – A cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. DirectConnect

Content delivery networks
  • CloudFront – Securely deliver data, videos, applications, and APIs to customers globally with low latency and high transfer speeds; expedites distribution of static & dynamic web content. CloudFront

Build a network for microservices architectures
  • App Mesh – Provides application-level networking, making it easy to guide & control microservices running on AWS. AppMesh
  • API Gateway – Create, maintain, and secure your own REST and WebSocket APIs at any scale. APIGateway
  • Cloud Map – Discover AWS services connected to your applications; Cloud Map names & manages cloud resources. CloudMap


Storage

Service | Description

  • AWS S3 – S3 is the storehouse for the internet, i.e. object storage built to store & retrieve any amount of data from anywhere. S3
  • AWS Backup – An externally-accessible backup provider that makes it easier to align & optimize the backup of data across AWS services in the cloud. AWS_Backup
  • Amazon EBS – Amazon Elastic Block Store is a web service that provides block-level storage volumes. EBS
  • Amazon EFS – EFS offers file storage for the user’s Amazon EC2 instances. EFS
  • Amazon FSx – FSx supplies fully managed third-party file systems with native compatibility & characteristic sets for workloads. Available as FSx for Windows File Server (fully managed file storage built on Windows Server) & FSx for Lustre (fully managed high-performance file system integrated with S3). FSx_Windows FSx_Lustre
  • AWS Storage Gateway – A service which connects an on-premises software appliance with cloud-based storage. Storage_Gateway
  • AWS DataSync – Makes it simple & fast to move large amounts of data online between on-premises storage & S3, EFS, or FSx for Windows File Server. DataSync
  • AWS Transfer Family – Provides fully managed support for file transfers directly into & out of S3. Transfer_Family
  • AWS Snow Family – Highly secure, portable devices to collect & process data at the edge, and migrate data into and out of AWS. Snow_Family

Classification:
Object storage: S3
File storage services: Elastic File System, FSx for Windows File Server & FSx for Lustre
Block storage: EBS
Backup: AWS Backup
Data transfer:
  • Storage Gateway –> 3 types: Tape, File, Volume
  • Transfer Family –> SFTP, FTPS, FTP
Edge computing and storage: Snow Family –> Snowcone, Snowball, Snowmobile

Databases

Database type | Use cases | Service | Description

Relational – Traditional applications, ERP, CRM, e-commerce
  • Aurora, RDS, Redshift – RDS is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. Aurora RDS Redshift
Key-value – High-traffic web apps, e-commerce systems, gaming applications
  • DynamoDB – A fully administered NoSQL database service that offers quick and reliable performance with integrated scalability. DynamoDB
In-memory – Caching, session management, gaming leaderboards, geospatial applications
  • ElastiCache for Memcached & Redis – ElastiCache helps in setting up, managing, and scaling in-memory cache environments. Memcached Redis
Document – Content management, catalogs, user profiles
  • DocumentDB – DocumentDB (with MongoDB compatibility) is a fast, dependable, fully-managed database service that makes it easy to set up, operate, and scale MongoDB-compatible databases. DocumentDB
Wide column – High-scale industrial apps for equipment maintenance, fleet management, and route optimization
  • Keyspaces (for Apache Cassandra) – A scalable, highly available, managed Apache Cassandra–compatible database service. Keyspaces
Graph – Fraud detection, social networking, recommendation engines
  • Neptune – A fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. Neptune
Time series – IoT applications, DevOps, industrial telemetry
  • Timestream – A fast, scalable, serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day. Timestream
Ledger – Systems of record, supply chain, registrations, banking transactions
  • Quantum Ledger Database (QLDB) – A fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log owned by a central trusted authority. QLDB

Developer Tools

Service | Description

  • Cloud9 – A cloud-based IDE that enables the user to write, run, and debug code. Cloud9
  • CodeArtifact – A fully managed artifact repository service that makes it easy for organizations of any size to securely store, publish, & share software packages used in their software development process. CodeArtifact
  • CodeBuild – A fully managed service that compiles source code, runs unit tests, & generates artifacts ready to deploy. CodeBuild
  • CodeGuru – A developer tool powered by machine learning that provides intelligent recommendations for improving code quality & identifying an application’s most expensive lines of code. CodeGuru
  • Cloud Development Kit (AWS CDK) – An open source software development framework to define cloud application resources using familiar programming languages. CDK
  • CodeCommit – A version control service that enables the user to privately store & manage Git repositories in the AWS cloud. CodeCommit
  • CodeDeploy – A fully managed deployment service that automates software deployments to a variety of compute services such as EC2, Fargate, Lambda, & on-premises servers. CodeDeploy
  • CodePipeline – A fully managed continuous delivery service that helps automate release pipelines for fast & reliable app & infrastructure updates. CodePipeline
  • CodeStar – Enables you to quickly develop, build, & deploy applications on AWS. CodeStar
  • CLI – The AWS CLI is a unified tool to manage AWS services & control multiple services from the command line & automate them through scripts. CLI
  • X-Ray – Helps developers analyze & debug production, distributed applications, such as those built using a microservices architecture. X-Ray

Migration & Transfer services

Service | Description

  • Migration Evaluator – Build a data-driven business case for AWS. ME
  • Migration Hub – Provides a single location to track the progress of app migrations across multiple AWS & partner solutions. MigrationHub
  • Application Discovery Service – Helps enterprise customers plan migration projects by gathering information about their on-premises data centers. ADS
  • Server Migration Service (SMS) – An agentless service which makes it easier & faster to migrate thousands of on-premises workloads to AWS. SMS
  • Database Migration Service (DMS) – Helps migrate databases to AWS quickly & securely. DMS
  • CloudEndure Migration – Simplifies, expedites, & reduces the cost of cloud migration by offering a highly automated lift-&-shift solution. CloudEndure
  • VMware Cloud on AWS – Refer to the compute section.
  • DataSync – Refer to the storage section.
  • Transfer Family – Refer to the storage section.
  • Snow Family – Refer to the storage section.


SDKs & Toolkits

Service | Description

  • CDK – Uses the familiarity & expressive power of programming languages for modeling apps. CDK
  • Corretto – A no-cost, multiplatform, production-ready distribution of the OpenJDK. Corretto
  • Crypto Tools – Cryptography is hard to do safely & correctly. The AWS Crypto Tools libraries are designed to help everyone do cryptography right, even without special expertise. Crypto Tools
  • Serverless Application Model (SAM) – An open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, & event source mappings. SAM

Tools for developing and managing applications on AWS.

Security, Identity, & Compliance

Category | Service | Description

Identity & access management
  • Identity & Access Management (IAM) – Securely manage access to services and resources. IAM is a web service for safely controlling access to AWS services. IAM
  • Single Sign-On – SSO helps in simplifying & managing SSO access to AWS accounts & business applications. SSO
  • Cognito – Identity management for apps: add user sign-up, sign-in, & access control to web & mobile apps quickly and easily. Cognito
  • Directory Service – AWS Managed Microsoft Active Directory (AD) enables your directory-aware workloads & AWS resources to use managed Active Directory in AWS. DirectoryService
  • Resource Access Manager (RAM) – A service that enables you to easily & securely share AWS resources with any AWS account or within an AWS Organization. RAM
  • Organizations – Helps you centrally govern your environment as you grow and scale your workloads on AWS. Orgs

Detection
  • Security Hub – Gives a comprehensive view of security alerts & security posture across AWS accounts. SecurityHub
  • GuardDuty – A threat detection service that continuously monitors for malicious activity & unauthorized behavior to protect AWS accounts, workloads, & data stored in S3. GuardDuty
  • Inspector – A security vulnerability assessment service that improves the security & compliance of AWS resources. Inspector
  • Config – A service that enables you to assess, audit, & evaluate the configurations of AWS resources. Config
  • CloudTrail – A service that enables governance, compliance, operational auditing, & risk auditing of an AWS account. CloudTrail
  • IoT Device Defender – A fully managed service that helps secure a fleet of IoT devices. IoTDD

Infrastructure protection
  • Shield – A managed DDoS protection service that safeguards apps running on AWS. It provides always-on detection & automatic inline mitigations that minimize application downtime & latency. Shield
  • Web Application Firewall (WAF) – A web application firewall that helps protect web apps or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. WAF
  • Firewall Manager – Eases AWS WAF administration & maintenance activities across multiple accounts & resources. FirewallManager

Data protection
  • Macie – A fully managed data security & privacy service that uses ML & pattern matching to discover & protect sensitive data. Macie
  • Key Management Service (KMS) – Makes it easy to create & manage cryptographic keys & control their use across a wide range of AWS services & in your applications. KMS
  • CloudHSM – A cloud-based hardware security module (HSM) that enables you to easily generate & use your own encryption keys. CloudHSM
  • Certificate Manager – A service to easily provision, manage, & deploy public and private SSL/TLS certificates for use with AWS services & internal connected resources. ACM
  • Secrets Manager – Assists the user to safely encrypt, store, & retrieve credentials for databases & other services. SecretsManager

Incident response
  • Detective – Makes it easy to analyze, investigate, & quickly identify the root cause of potential security issues or suspicious activities. Detective
  • CloudEndure Disaster Recovery – Provides scalable, cost-effective business continuity for physical, virtual, & cloud servers. CloudEndure

Compliance
  • Artifact – A no-cost, self-service portal for on-demand access to AWS compliance reports; enables the user to download AWS security & compliance documents. Artifact

Data Lakes & Analytics

Category | Use case | Service | Description
Analytics | Interactive analytics | Athena | Athena is an interactive query service that makes it easy to analyze data in S3 using standard SQL.
| Big data processing | EMR | EMR is the industry-leading cloud big data platform for processing vast amounts of data using open-source tools such as Apache Spark, Hive, HBase, Flink, Hudi, and Presto.
| Data warehousing | Redshift | Redshift is a fast, widely used cloud data warehouse.
| Real-time analytics | Kinesis | Kinesis makes it easy to collect, process, and analyze real-time streaming data so you can get timely insights.
| Operational analytics | Elasticsearch Service (ES) | Elasticsearch Service is a fully managed service that makes it easy to deploy, secure, and run Elasticsearch cost-effectively at scale.
| Dashboards and visualizations | QuickSight | QuickSight is a fast, cloud-powered business intelligence service that makes it easy to deliver insights to everyone in an organization.
Data movement | Real-time data movement | 1) Amazon Managed Streaming for Apache Kafka (MSK) 2) Kinesis Data Streams 3) Kinesis Data Firehose 4) Kinesis Data Analytics 5) Kinesis Video Streams 6) Glue | MSK is a fully managed service that makes it easy to build and run applications that use Apache Kafka to process streaming data.
Data lake | Object storage | 1) S3 2) Lake Formation | Lake Formation is a service that makes it easy to set up a secure data lake in days. A data lake is a centralized, curated, and secured repository that stores all data, both in its original form and prepared for analysis.
| Backup and archive | 1) S3 Glacier 2) Backup | S3 Glacier and S3 Glacier Deep Archive are secure, durable, and extremely low-cost S3 storage classes for data archiving and long-term backup.
| Data catalog | 1) Glue 2) Lake Formation | See above.
| Third-party data | Data Exchange | Data Exchange makes it easy to find, subscribe to, and use third-party data in the cloud.
Predictive analytics and machine learning | Frameworks and interfaces | Deep Learning AMIs | Deep Learning AMIs provide machine learning practitioners and researchers with the infrastructure and tools to accelerate deep learning in the cloud, at any scale.
| Platform services | SageMaker | SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly.
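Services like Athena and Glue query data in place on S3, and they work best when a data lake's objects use Hive-style partitioned keys (for example, `year=2022/month=06/day=23/`). A rough sketch of building such keys; the prefix, table, and file names here are illustrative choices, not taken from any AWS API:

```python
from datetime import date

def partitioned_key(prefix: str, table: str, day: date, filename: str) -> str:
    """Build a Hive-style partitioned S3 object key, the layout Athena/Glue expect."""
    return (
        f"{prefix}/{table}/"
        f"year={day.year:04d}/month={day.month:02d}/day={day.day:02d}/"
        f"{filename}"
    )

key = partitioned_key("raw", "orders", date(2022, 6, 23), "orders-0001.parquet")
print(key)  # raw/orders/year=2022/month=06/day=23/orders-0001.parquet
```

With this layout, a query engine can prune whole partitions (e.g. scan only `year=2022/month=06/`) instead of reading every object.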

Containers

Use case | Service | Description
Store, encrypt, and manage container images | ECR | See the compute section.
Run containerized applications or build microservices | ECS | See the compute section.
Manage containers with Kubernetes | EKS | See the compute section.
Run containers without managing servers | Fargate | Fargate is a serverless compute engine for containers that works with both ECS and EKS.
Run containers with server-level control | EC2 | See the compute section.
Containerize and migrate existing applications | App2Container | App2Container (A2C) is a command-line tool for modernizing .NET and Java applications into containerized applications.
Quickly launch and manage containerized applications | Copilot | Copilot is a command-line interface (CLI) that enables customers to quickly launch and easily manage containerized applications on AWS.
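To make Fargate's "no servers to manage" model concrete, here is a minimal sketch of an ECS task definition in JSON form, built as a Python dict. The field names follow the ECS task-definition schema; the family name, image, and sizes are illustrative choices, not taken from the text above:

```python
import json

# A minimal Fargate task definition, sketched as the JSON you would register
# with ECS. Names and sizes are illustrative.
task_definition = {
    "family": "web-app",                      # hypothetical task family name
    "requiresCompatibilities": ["FARGATE"],   # run on Fargate: no EC2 instances to manage
    "networkMode": "awsvpc",                  # network mode required by Fargate
    "cpu": "256",                             # 0.25 vCPU
    "memory": "512",                          # 512 MiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "public.ecr.aws/nginx/nginx:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}

print(json.dumps(task_definition, indent=2))
```

Because `requiresCompatibilities` is `FARGATE`, capacity is specified only as task-level `cpu`/`memory`; there is no instance type anywhere in the definition.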

Serverless

Category | Service | Description
Compute | Lambda | Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume.
| Lambda@Edge | Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to your application's users, which improves performance and reduces latency.
| Fargate | See the containers section.
Storage | S3 | See the storage section.
| EFS | See the storage section.
Data stores | DynamoDB | DynamoDB is a key-value and document database that delivers single-digit-millisecond performance at any scale.
| Aurora Serverless | Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL- and PostgreSQL-compatible editions) in which the database automatically starts up, shuts down, and scales capacity up or down based on your application's needs.
| RDS Proxy | RDS Proxy is a fully managed, highly available database proxy for RDS that makes applications more scalable, more resilient to database failures, and more secure.
API proxy | API Gateway | API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
Application integration | SNS | SNS is a fully managed messaging service for both system-to-system and app-to-person (A2P) communication.
| SQS | SQS is a fully managed message queuing service that lets you decouple and scale microservices, distributed systems, and serverless applications.
| AppSync | AppSync is a fully managed service that makes it easy to develop GraphQL APIs by handling the heavy lifting of securely connecting to data sources such as DynamoDB and Lambda.
| EventBridge | EventBridge is a serverless event bus that makes it easy to connect applications using data from your own apps, integrated SaaS apps, and AWS services.
Orchestration | Step Functions | Step Functions is a serverless function orchestrator that makes it easy to sequence Lambda functions and multiple AWS services into business-critical applications.
Analytics | Kinesis | Kinesis makes it easy to collect, process, and analyze real-time streaming data so you can get timely insights.
| Athena | Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.
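At its core, a Lambda function is just a handler that receives an event and returns a result; you pay only while it runs. A minimal sketch, with a made-up event shape (Lambda itself imposes none on direct invocations):

```python
# A minimal Lambda handler sketch: the function Lambda invokes once per event.
# The "name" field in the event is an illustrative assumption, not a Lambda convention.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally you can exercise the handler exactly the way Lambda would:
print(handler({"name": "builder"}, None))  # {'statusCode': 200, 'body': 'Hello, builder!'}
```

Because the handler is a plain function, it is straightforward to unit test without any AWS infrastructure.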

AWS CCP CLF-C01 on Android –  AWS CCP CLF-C01 on iOS –  AWS CCP CLF-C01 on Windows 10/11

Applic­ation Integr­ation

Category | Service | Description
Messaging | SNS | Reliable, high-throughput pub/sub, SMS, email, and mobile push notifications.
| SQS | Message queue that sends, stores, and receives messages between application components at any volume.
| MQ | Message broker for Apache ActiveMQ that makes migration easy and enables hybrid architectures.
Workflows | Step Functions | Coordinate multiple AWS services into serverless workflows so you can build and update apps quickly.
API management | API Gateway | Create, publish, maintain, monitor, and secure APIs at any scale for serverless workloads and web apps.
| AppSync | Create a flexible API to securely access, manipulate, and combine data from one or more data sources.
Event bus | EventBridge | Build an event-driven architecture that connects application data from your own apps, SaaS, and AWS services.
| AppFlow | Automate the flow of data between SaaS applications and AWS services at nearly any scale, without code.
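EventBridge decides where to route an event by matching its fields against a pattern in which each field lists its accepted values. A simplified matcher sketch (real EventBridge patterns also support operators such as prefix and numeric matching, which are omitted here):

```python
# Simplified EventBridge-style pattern matching: an event matches when every
# field named in the pattern holds one of the listed values; nested objects
# (like "detail") are matched recursively.
def matches(pattern: dict, event: dict) -> bool:
    for key, allowed in pattern.items():
        if isinstance(allowed, dict):           # nested object, e.g. "detail"
            if not matches(allowed, event.get(key, {})):
                return False
        elif event.get(key) not in allowed:     # leaf: list of accepted values
            return False
    return True

pattern = {"source": ["aws.ec2"], "detail": {"state": ["terminated"]}}
event = {"source": "aws.ec2", "detail": {"state": "terminated"}}
print(matches(pattern, event))  # True
```

Fields the pattern does not mention are ignored, which is why producers can enrich events without breaking existing rules.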



Management & Governance Services

Category | Service | Description
Enable | Control Tower | The easiest way to set up and govern a new, secure, multi-account AWS environment.
| Organizations | Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS.
| Well-Architected Tool | The Well-Architected Tool helps you review the state of your workloads and compare them to the latest AWS architectural best practices.
| Budgets | Budgets lets you set custom budgets to track cost and usage, from the simplest to the most complex use cases.
| License Manager | License Manager makes it easier to manage software licenses from vendors such as Microsoft, SAP, Oracle, and IBM across AWS and on-premises environments.
Provision | CloudFormation | CloudFormation enables you to design and provision AWS infrastructure deployments predictably and repeatedly.
| Service Catalog | Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS.
| OpsWorks | OpsWorks provides a simple and flexible way to create and maintain stacks and applications.
| Marketplace | Marketplace is a digital catalog with thousands of software listings from independent software vendors that makes it easy to find, test, buy, and deploy software that runs on AWS.
Operate | CloudWatch | CloudWatch offers a reliable, scalable, and flexible monitoring solution that is easy to get started with.
| CloudTrail | CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account.
| Config | Config enables you to assess, audit, and evaluate the configurations of your AWS resources.
| Systems Manager | Systems Manager lets you plan, monitor, and automate administration tasks on your AWS resources.
| Cost & Usage Report | See the cost management section.
| Cost Explorer | See the cost management section.
| Managed Services | Managed Services operates your AWS infrastructure on your behalf.
| X-Ray | X-Ray helps developers analyze and debug distributed applications, such as those built using a microservices architecture.
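CloudFormation's "predictable and repeatable" provisioning comes from declaring resources in a template rather than clicking through the console. A minimal sketch of such a template, built here as JSON (the logical ID and bucket name are illustrative choices):

```python
import json

# A minimal CloudFormation template: one S3 bucket, declared rather than
# created by hand. The logical ID "LogsBucket" and the bucket name are
# hypothetical examples.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal example stack: a single S3 bucket.",
    "Resources": {
        "LogsBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-logs-bucket"},
        }
    },
}

print(json.dumps(template, indent=2))
```

Deploying the same template twice yields the same stack, which is exactly the repeatability the table describes.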


AWS Recommended security best practices

Turn on multifactor authentication (MFA) for the root account.
Turn on CloudTrail log file validation.
Enable CloudTrail multi-region logging.
Integrate CloudTrail with CloudWatch.
Enable access logging for CloudTrail S3 buckets.
Enable access logging for Elastic Load Balancing (ELB).
Enable Redshift audit logging.
Enable Virtual Private Cloud (VPC) flow logging.
Require multifactor authentication (MFA) to delete CloudTrail buckets.
Enable CloudTrail logging across all AWS accounts.
Turn on multifactor authentication for IAM users.
Enable IAM users for multi-mode access.
Attach IAM policies to groups or roles.
Rotate IAM access keys regularly, and standardize on a selected number of days.
Set up a strict password policy.
Set the password expiration period to 90 days and prevent password reuse.
Don't use expired SSL/TLS certificates.
Use HTTPS for CloudFront distributions.
Restrict access to the CloudTrail bucket.
Encrypt CloudTrail log files at rest.
Encrypt Elastic Block Store (EBS) databases.
Provision access to resources using IAM roles.
Ensure EC2 security groups don't have large ranges of ports open.
Configure EC2 security groups to restrict inbound access to EC2.
Avoid using root user accounts.
Use secure SSL ciphers when connecting between the client and ELB.
Use secure SSL versions when connecting between the client and ELB.
Use a standard naming (tagging) convention for EC2.
Encrypt RDS.
Ensure access keys are not being used with root accounts.
Use secure CloudFront SSL versions.
Enable the require_ssl parameter in all Redshift clusters.
Rotate SSH keys periodically.
Minimize the number of discrete security groups.
Reduce the number of IAM groups.
Terminate unused access keys.
Disable access for inactive or unused IAM users.
Remove unused IAM access keys.
Delete unused SSH public keys.
Restrict access to AMIs.
Restrict access to EC2 security groups.
Restrict access to RDS instances.
Restrict access to Redshift clusters.
Restrict outbound access.
Disallow unrestricted ingress access on uncommon ports.
Restrict access to well-known ports such as CIFS, FTP, ICMP, SMTP, SSH, and Remote Desktop.
Inventory and categorize all existing custom apps by the types of data stored, compliance requirements, and possible threats they face.
Involve IT security throughout the development process.
Grant as few privileges as possible to application users.
Enforce a single set of data loss prevention policies across custom applications and all other cloud services.
Encrypt highly sensitive data such as protected health information (PHI) or personally identifiable information (PII).
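Several of the checklist items above, such as avoiding large open port ranges, can be verified mechanically. A rough sketch, assuming the ingress rules are already in hand as dicts shaped like the FromPort/ToPort fields that EC2's DescribeSecurityGroups returns (the 100-port threshold is an arbitrary illustrative cutoff):

```python
# Flag ingress rules that open a suspiciously large port range. The rule dicts
# mimic the FromPort/ToPort shape of EC2 security group rules; the threshold
# of 100 ports is an illustrative choice, not an AWS-defined limit.
def wide_open_rules(rules: list, max_span: int = 100) -> list:
    flagged = []
    for rule in rules:
        span = rule["ToPort"] - rule["FromPort"] + 1  # inclusive port count
        if span > max_span:
            flagged.append(rule)
    return flagged

rules = [
    {"FromPort": 443, "ToPort": 443},      # fine: a single well-known port
    {"FromPort": 1024, "ToPort": 65535},   # flagged: huge ephemeral range
]
print(wide_open_rules(rules))  # [{'FromPort': 1024, 'ToPort': 65535}]
```

In practice a check like this would run periodically against rules fetched from the API, alerting before a wide-open range reaches production.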

AWS RE:INVENT 2021 – LATEST PRODUCTS AND SERVICES ANNOUNCED:

1- Read For Me

Read For Me launched at the 2021 AWS re:Invent Builders' Fair in Las Vegas. It is a web application that helps the visually impaired "hear" documents. With the help of AI services such as Amazon Textract, Amazon Comprehend, Amazon Translate, and Amazon Polly, and using an event-driven architecture and serverless technology, users upload a picture of a document, or anything with text, and within a few seconds "hear" that document in their chosen language.


2- Delivering code and architectures through AWS Proton and Git

Infrastructure operators are looking for ways to centrally define and manage the architecture of their services, while developers need to find a way to quickly and safely deploy their code. In this session, learn how to use AWS Proton to define architectural templates and make them available to development teams in a collaborative manner. Also, learn how to enable development teams to customize their templates so that they fit the needs of their services.

3- Accelerate front-end web and mobile development with AWS Amplify

User-facing web and mobile applications are the primary touchpoint between organizations and their customers. To meet the ever-rising bar for customer experience, developers must deliver high-quality apps with both foundational and differentiating features. AWS Amplify helps front-end web and mobile developers build faster front to back. In this session, review Amplify’s core capabilities like authentication, data, and file storage and explore new capabilities, such as Amplify Geo and extensibility features for easier app customization with AWS services and better integration with existing deployment pipelines. Also learn how customers have been successful using Amplify to innovate in their businesses.

4- Train ML models at scale with Amazon SageMaker, featuring Aurora

Today, AWS customers use Amazon SageMaker to train and tune millions of machine learning (ML) models with billions of parameters. In this session, learn about advanced SageMaker capabilities that can help you manage large-scale model training and tuning, such as distributed training, automatic model tuning, optimizations for deep learning algorithms, debugging, profiling, and model checkpointing, so that even the largest ML models can be trained in record time for the lowest cost. Then, hear from Aurora, a self-driving vehicle technology company, on how they use SageMaker training capabilities to train large perception models for autonomous driving using massive amounts of images, video, and 3D point cloud data.

AWS RE:INVENT 2020 – LATEST PRODUCTS AND SERVICES ANNOUNCED:

1-Modernize log analytics with Amazon Elasticsearch Service

Amazon Elasticsearch Service is uniquely positioned to handle log analytics workloads. With a multitude of open-source and AWS-native service options, users can assemble effective log data ingestion pipelines and couple these with Amazon Elasticsearch Service to build a robust, cost-effective log analytics solution. This session reviews patterns and frameworks leveraged by companies such as Capital One to build an end-to-end log analytics solution using Amazon Elasticsearch Service.
 
2-Achieve compliance as code using AWS Config
Many companies in regulated industries have met compliance requirements using AWS Config. They also need a record of the incidents generated by AWS Config in tools such as ServiceNow for audits and remediation. In this session, learn how you can achieve compliance as code using AWS Config. Through the creation of a noncompliant Amazon EC2 machine, this demo shows how AWS Config triggers an incident into a governance, risk, and compliance system for audit recording and remediation. The session also covers best practices for how to automate the setup process with AWS CloudFormation to support many teams.
 
3- Cost-optimize your enterprise workloads with Amazon EBS – Compute

Recent times have underscored the need to enable agility while maintaining the lowest total cost of ownership (TCO). In this session, learn about the latest volume types that further optimize your performance and cost, while enabling you to run newer applications on AWS with high availability. Dive deep into the latest AWS volume launches and cost-optimization strategies for workloads such as databases, virtual desktop infrastructure, and low-latency interactive applications.
4- Amazon Location Service: Enable apps with location features
Location data is a vital ingredient in today’s applications, enabling use cases from asset tracking to geomarketing. Now, developers can use the new Amazon Location Service to add maps, tracking, places, geocoding, and geofences to applications, easily, securely, and affordably. Join this session to see how to get started with the service and integrate high-quality location data from geospatial data providers Esri and HERE. Learn how to move from experimentation to production quickly with location capabilities. This session can help developers who require simple location data and those building sophisticated asset tracking, customer engagement, fleet management, and delivery applications.
 
5- Automate, track, and manage tasks with Amazon Connect Tasks
In this session, learn how Amazon Connect Tasks makes it easy for you to prioritize, assign, and track all the tasks that agents need to complete, including work in external applications needed to resolve customer issues (such as emails, cases, and social posts). Tasks provides a single place for agents to be assigned calls, chats, and tasks, ensuring agents are focused on the highest-priority work. Also, learn how you can use Tasks with Amazon Connect's workflow capabilities to automate task-related actions that don't require agent interaction. Come see how you can use Amazon Connect Tasks to increase customer satisfaction while improving agent productivity.
6- Solve customer issues quickly with Amazon Connect Wisdom
New agent-assist capabilities from Amazon Connect Wisdom make it easier and faster for agents to find the information they need to solve customer issues in real time. In this session, see how agents can use simple ML-powered search to find information stored across knowledge bases, wikis, and FAQs, like Salesforce and ServiceNow. Join the session to hear Traeger Pellet Grills discuss how it’s using these new features, along with Contact Lens for Amazon Connect, to deliver real-time recommendations to agents based on issues automatically detected during calls.
 
 

7- Introducing Amazon Managed Service for Grafana:

Grafana is a popular, open-source data visualization tool that enables you to centrally query and analyze observability data across multiple data sources. Learn how the new Amazon Managed Service for Grafana, announced with Grafana’s parent company Grafana Labs, solves common observability challenges. With the new fully managed service, you can monitor, analyze, and alarm on metrics, logs, and traces while offloading the operational management of security patching, upgrading, and resource scaling to AWS. This session also covers new Grafana capabilities such as advanced security features and native AWS service integrations to simplify configuration and onboarding of data sources.
 
8- Introducing Amazon Managed Service for Prometheus (AMP)

Prometheus is a popular open-source monitoring and alerting solution optimized for container environments. Customers love Prometheus for its active open-source community and flexible query language, using it to monitor containers across AWS and on-premises environments. Amazon Managed Service for Prometheus is a fully managed Prometheus-compatible monitoring service. In this session, learn how you can use the same open-source Prometheus data model, existing instrumentation, and query language to monitor performance with improved scalability, availability, and security without having to manage the underlying infrastructure.

9-Announcing AWS IoT Core for LoRaWAN
Today, enterprises use low-power, long-range wide-area network (LoRaWAN) connectivity to transmit data over long ranges, through walls and floors of buildings, and in commercial and industrial use cases. However, this requires companies to operate their own LoRa network server (LNS). In this session, learn how you can use LoRaWAN for AWS IoT Core to avoid time-consuming and undifferentiated development work, operational overhead of managing infrastructure, or commitment to costly subscription-based pricing from third-party service providers.
 


 
10-AWS CloudShell: The fastest way to get started with AWS CLI

AWS CloudShell is a free, browser-based shell available from the AWS console that provides a simple way to interact with AWS resources through the AWS command-line interface (CLI). In this session, see an overview of both AWS CloudShell and the AWS CLI, which when used together are the fastest and easiest ways to automate tasks, write scripts, and explore new AWS services. Also, see a demo of both services and how to quickly and easily get started with each.

11- Introducing AWS IoT SiteWise Edge
Industrial organizations use AWS IoT SiteWise to liberate their industrial equipment data in order to make data-driven decisions. Now with AWS IoT SiteWise Edge, you can collect, organize, process, and monitor your equipment data on premises before sending it to local or AWS Cloud destinations—all while using the same asset models, APIs, and functionality. Learn how you can extend the capabilities of AWS IoT SiteWise to the edge with AWS IoT SiteWise Edge.
 


12-AWS Fault Injection Simulator: Fully managed chaos engineering service

AWS Fault Injection Simulator is a fully managed chaos engineering service that helps you improve application resiliency by making it easy and safe to perform controlled chaos engineering experiments on AWS. In this session, see an overview of chaos engineering and AWS Fault Injection Simulator, and then see a demo of how to use AWS Fault Injection Simulator to make applications more resilient to failure.
 
13- Data lakes: Easily build, secure, and share with AWS Lake Formation
Organizations are breaking down data silos and building petabyte-scale data lakes on AWS to democratize access to thousands of end users. Since its launch, AWS Lake Formation has accelerated data lake adoption by making it easy to build and secure data lakes. In this session, AWS Lake Formation GM Mehul A. Shah showcases recent innovations enabling modern data lake use cases. He also introduces a new capability of AWS Lake Formation that enables fine-grained, row-level security and near-real-time analytics in data lakes.
 
14- Understand ML model predictions and biases with Amazon SageMaker Clarify
Machine learning (ML) models may generate predictions that are not fair, whether because of biased data, a model that contains bias, or bias that emerges over time as real-world conditions change. Likewise, closed-box ML models are opaque, making it difficult to explain to internal stakeholders, auditors, external regulators, and customers alike why models make predictions both overall and for individual inferences. In this session, learn how Amazon SageMaker Clarify is providing built-in tools to detect bias across the ML workflow including during data prep, after training, and over time in your deployed model.
 
15- Run Spark on Kubernetes with Amazon EMR on Amazon EKS
Amazon EMR on Amazon EKS introduces a new deployment option in Amazon EMR that allows you to run open-source big data frameworks on Amazon EKS. This session digs into the technical details of Amazon EMR on Amazon EKS, helps you understand benefits for customers using Amazon EMR or running open-source Spark on Amazon EKS, and discusses performance considerations.
 
16- Proactively monitor the health of your business using Amazon Lookout for Metrics
Finding unexpected anomalies in metrics can be challenging. Some organizations look for data that falls outside of arbitrary ranges; if the range is too narrow, they miss important alerts, and if it is too broad, they receive too many false alerts. In this session, learn about Amazon Lookout for Metrics, a fully managed anomaly detection service that is powered by machine learning and over 20 years of anomaly detection expertise at Amazon to quickly help organizations detect anomalies and understand what caused them. This session guides you through setting up your own solution to monitor for anomalies and showcases how to deliver notifications via various integrations with the service.
 


 
17- Improve application availability with ML-powered insights using Amazon DevOps Guru
As applications become increasingly distributed and complex, developers and IT operations teams need more automated practices to maintain application availability and reduce the time and effort spent detecting, debugging, and resolving operational issues manually. In this session, discover Amazon DevOps Guru, an ML-powered cloud operations service, informed by years of Amazon.com and AWS operational excellence, that provides an easy and automated way to improve an application’s operational performance and availability. See how you can transform your IT operations and reduce mean time to recovery (MTTR) with contextual insights.
 
18- ML-powered voice authentication with Amazon Connect Voice ID
Amazon Connect Voice ID provides real-time caller authentication that makes voice interactions in contact centers more secure and efficient. Voice ID uses machine learning to verify the identity of genuine customers by analyzing a caller’s unique voice characteristics. This allows contact centers to use an additional security layer that doesn’t rely on the caller answering multiple security questions, and it makes it easy to enroll and verify customers without disrupting the natural flow of the conversation. Join this session to see how fast and secure ML-based voice authentication can power your contact center.
 
19- Introducing EC2 G4ad instances for graphics-intensive apps
G4ad instances feature the latest AMD Radeon Pro V520 GPUs and second-generation AMD EPYC processors. These new instances deliver the best price performance in Amazon EC2 for graphics-intensive applications such as virtual workstations, game streaming, and graphics rendering. This session dives deep into these instances, ideal use cases, and performance benchmarks, and it provides a demo.
 


 
20- An introduction to Amazon ECS Anywhere
Amazon ECS Anywhere is a new capability that enables deployment of Amazon ECS tasks on customer-managed infrastructure. This session covers the evolution of Amazon ECS over time, including new on-premises capabilities to manage your hybrid footprint using a common fully managed control plane and API. You learn some foundational technical details and important tenets that AWS is using to design these capabilities, and the session ends with a short demo of Amazon ECS Anywhere.
 
21- Amazon Aurora Serverless v2: Instant scaling for demanding workloads
Amazon Aurora Serverless is an on-demand, auto scaling configuration of Amazon Aurora that automatically adjusts database capacity based on application demand. With Amazon Aurora Serverless v2, you can now scale database workloads instantly from hundreds to hundreds of thousands of transactions per second and adjust capacity in fine-grained increments to provide just the right amount of database resources. This session dives deep into Aurora Serverless v2 and shows how it can help you operate even the most demanding database workloads worry-free.
 
22- Bringing AWS benefits to all Apple developers with EC2 Mac instances
Apple delights its customers with stunning devices like iPhones, iPads, MacBooks, Apple Watches, and Apple TVs, and developers want to create applications that run on iOS, macOS, iPadOS, tvOS, watchOS, and Safari. In this session, learn how Amazon is innovating to improve the development experience for Apple applications. Come learn how AWS now enables you to develop, build, test, and sign Apple applications with the flexibility, scalability, reliability, and cost benefits of Amazon EC2.
 
23- Enable predictive maintenance for your industrial equipment: Amazon Monitron 
When industrial equipment breaks down, this means costly downtime. To avoid this, you perform maintenance at regular intervals, which is inefficient and increases your maintenance costs. Predictive maintenance allows you to plan the required repair at an optimal time before a breakdown occurs. However, predictive maintenance solutions can be challenging and costly to implement given the high costs and complexity of sensors and infrastructure. You also have to deal with the challenges of interpreting sensor data and accurately detecting faults in order to send alerts. Come learn how Amazon Monitron helps you solve these challenges by offering an out-of-the-box, end-to-end, cost-effective system.
 
24- Introduction to AQUA for Amazon Redshift 
As data grows, we need innovative approaches to get insight from all the information at scale and speed. AQUA is a new hardware-accelerated cache that uses purpose-built analytics processors to deliver up to 10 times better query performance than other cloud data warehouses by automatically boosting certain types of queries. It’s available in preview on Amazon Redshift RA3 nodes in select regions at no extra cost and without any code changes. Attend this session to understand how AQUA works and which analytic workloads will benefit the most from AQUA.
 
25- Amazon Lookout for Vision
Figuring out if a part has been manufactured correctly, or if a machine part is damaged, is vitally important. Making this determination usually requires people to inspect objects, which can be slow and error-prone. Some companies have applied automated image analysis (machine vision) to detect anomalies. While useful, these systems can be very difficult and expensive to maintain. In this session, learn how Amazon Lookout for Vision can automate visual inspection across your production lines in a few days. Get started in minutes, and perform visual inspection and identify product defects using as few as 30 images, with no machine learning (ML) expertise required.
 
26- AWS Proton: Automating infrastructure provisioning & code deployments
AWS Proton is a new service that enables infrastructure operators to create and manage common container-based and serverless application stacks and automate provisioning and code deployments through a self-service interface for their developers. Learn how infrastructure teams can empower their developers to use serverless and container technologies without them first having to learn, configure, and maintain the underlying resources.
 
27- Introducing Babelfish for Aurora PostgreSQL
Migrating applications from SQL Server to an open-source compatible database can be time-consuming and resource-intensive. Solutions such as the AWS Database Migration Service (AWS DMS) automate data and database schema migration, but there is often more work to do to migrate application code. This session introduces Babelfish for Aurora PostgreSQL, a new translation layer for Amazon Aurora PostgreSQL that enables Amazon Aurora to understand commands from applications designed to run on Microsoft SQL Server. Learn how Babelfish for Aurora PostgreSQL works to reduce the time, risk, and effort of migrating Microsoft SQL Server-based applications to Aurora, and see some of the capabilities that make this possible.
 
 
 
28- Make sense of health data with Amazon HealthLake
Over the past decade, we’ve witnessed a digital transformation in healthcare, with organizations capturing huge volumes of patient information. But this data is often unstructured and difficult to extract, with information trapped in clinical notes, insurance claims, recorded conversations, and more. In this session, explore how the new Amazon HealthLake service removes the heavy lifting of organizing, indexing, and structuring patient information to provide a complete view of each patient’s health record in the FHIR standard format. Come learn how to use prebuilt machine learning models to analyze and understand relationships in the data, identify trends, and make predictions, ultimately delivering better care for patients.
 


 
29- Introducing Amazon QuickSight Q: Ask questions on data & get answers in seconds
When business users want to ask new data questions that are not answered by existing business intelligence (BI) dashboards, they rely on BI teams to create or update data models and dashboards, which can take several weeks to complete. In this session, learn how Amazon QuickSight Q lets users simply enter their questions in the Q search bar and get answers in seconds. QuickSight Q uses natural language processing and semantic data understanding to make sense of the data. It extracts business terminology and intent from users' questions, retrieves the corresponding data from the source, and returns the answer in the form of a number, chart, or table in Amazon QuickSight.
 
30- Amazon ECR Public: Share, discover, deploy, and monetize container apps easily
When developers publish images publicly for anyone to find and use—whether for free or under license—they must make copies of common images and upload them to public websites and registries that do not offer the same availability commitment as Amazon ECR. This session explores a new Amazon public registry, Amazon ECR Public, built with AWS experience operating Amazon ECR. Here, developers can share georeplicated container software worldwide for anyone to discover and download. Developers can quickly publish public container images with a single command. Learn how anyone can browse and pull container software for use in their own applications.
 
31- Detect abnormal equipment behavior by analyzing sensor data
Industrial companies are constantly working to avoid unplanned downtime due to equipment failure and to improve operational efficiency. Over the years, they have invested in physical sensors, data connectivity, data storage, and dashboarding to monitor equipment and get real-time alerts. Current data analytics methods include single-variable thresholds and physics-based modeling approaches, which are not effective at detecting certain failure types and operating conditions. In this session, learn how Amazon Lookout for Equipment uses data from your sensors to detect abnormal equipment behavior so that you can take action before machine failures occur and avoid unplanned downtime.
 
32- Real-time ML analytics with Contact Lens for Amazon Connect
In this session, learn how Contact Lens for Amazon Connect enables your contact center supervisors to understand the sentiment of customer conversations, identify call drivers, evaluate compliance with company guidelines, and analyze trends. This can help supervisors train agents, replicate successful interactions, and identify crucial company and product feedback. Your supervisors can conduct fast full-text search on all transcripts to quickly troubleshoot customer issues. With real-time capabilities, you can get alerted to issues during live customer calls and deliver proactive assistance to agents while calls are in progress, improving customer satisfaction. Join this session to see how real-time ML-powered analytics can power your contact center.
 
33- Introducing 15 new Local Zones for ultra-low latency compute across the US
AWS Local Zones places compute, storage, database, and other select services closer to locations where no AWS Region exists today. Last year, AWS launched the first two Local Zones in Los Angeles, and organizations are using Local Zones to deliver applications requiring ultra-low-latency compute. AWS is launching Local Zones in 15 metro areas to extend access across the contiguous US. In this session, learn how you can run latency-sensitive portions of applications local to end users and resources in a specific geography, delivering single-digit millisecond latency for use cases such as media and entertainment content creation, real-time gaming, reservoir simulations, electronic design automation, and machine learning.
 
34- Personalized service with Amazon Connect Customer Profiles
Your customers expect a fast, frictionless, and personalized customer service experience. In this session, learn about Amazon Connect Customer Profiles, a new unified customer profile capability that allows agents to provide more personalized service during a call. Customer Profiles automatically brings together customer information from multiple applications, such as Salesforce, Marketo, Zendesk, ServiceNow, and Amazon Connect contact history, into a unified customer profile. With Customer Profiles, agents have the information they need, when they need it, directly in their agent application, resulting in improved customer satisfaction and call resolution times reduced by up to 15%.
 
35- Accelerate data preparation with Amazon SageMaker Data Wrangler
Preparing training data can be tedious. Amazon SageMaker Data Wrangler provides a faster, visual way to aggregate and prepare data for machine learning. In this session, learn how to use SageMaker Data Wrangler to connect to data sources and use prebuilt visualization templates and built-in data transforms to streamline the process of cleaning, verifying, and exploring data without having to write a single line of code. See a demonstration of how SageMaker Data Wrangler can be used to perform simple tasks as well as more advanced use cases. Finally, see how you can take your data preparation workflows into production with a single click.
 

Increase availability with AWS observability solutions

To provide access to critical resources when needed and also limit the potential financial impact of an application outage, a highly available application design is critical. In this session, learn how you can use Amazon CloudWatch and AWS X-Ray to increase the availability of your applications. Join this session to learn how AWS observability solutions can help you proactively detect, efficiently investigate, and quickly resolve operational issues. All of which help you manage and improve your application’s availability.
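To make the availability idea concrete, here is a minimal sketch of how an application might report its own availability as a custom metric in the shape CloudWatch's PutMetricData API expects. The namespace and dimension names are hypothetical examples, not from the session:

```python
# Sketch: build a PutMetricData-style payload reporting percent availability.
# The namespace "MyApp/Availability" and the dimension names are invented
# for illustration; pass the resulting dict to boto3's put_metric_data.

def build_availability_metric(service: str, success: int, total: int) -> dict:
    """Return a PutMetricData-style payload for a percent-availability metric."""
    availability = 100.0 * success / total if total else 0.0
    return {
        "Namespace": "MyApp/Availability",  # hypothetical namespace
        "MetricData": [
            {
                "MetricName": "AvailabilityPercent",
                "Dimensions": [{"Name": "Service", "Value": service}],
                "Value": availability,
                "Unit": "Percent",
            }
        ],
    }

payload = build_availability_metric("checkout", success=998, total=1000)
print(payload["MetricData"][0]["Value"])  # 99.8
```

An alarm on a metric like this is one way to detect an availability drop before customers report it.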


Securing your Amazon EKS applications: Best practices

Security is critical for your Kubernetes-based applications. Join this session to learn about the security features and best practices for Amazon EKS. This session covers encryption and other configurations and policies to keep your containers safe.

 
Andy Jassy Keynote: Live from Seattle, Andy Jassy takes the stage to share his insights and the latest news about AWS customers, products, and services.
 
AWS Partner Keynote
Don’t miss the AWS Partner Keynote with Doug Yeum, head of Global Partner Organization; Sandy Carter, vice president, Global Public Sector Partners and Programs; and Dave McCann, vice president, AWS Migration, Marketplace, and Control Services, to learn how AWS is helping partners modernize their businesses to help their customers transform.
 
Machine Learning Keynote
Join Swami Sivasubramanian for the first-ever Machine Learning Keynote, live at re:Invent. Hear how AWS is freeing builders to innovate on machine learning with the latest developments in AWS machine learning, demos of new technology, and insights from customers.
 
Infrastructure Keynote
Join Peter DeSantis, senior vice president of Global Infrastructure and Customer Support, to learn how AWS has optimized its cloud infrastructure to run some of the world’s most demanding workloads and give your business a competitive edge.
 
Werner Vogels Keynote – Watch First

Join Dr. Werner Vogels at 8:00AM (PST) as he goes behind the scenes to show how Amazon is solving today’s hardest technology problems. Based on his experience working with some of the largest and most successful applications in the world, Dr. Vogels shares his insights on building truly resilient architectures and what that means for the future of software development.

The evolution of cloud architecture
Cloud architecture has evolved over the years as the nature of adoption has changed and the level of maturity in our thinking continues to develop. In this session, Rudy Valdez, VP of Solutions Architecture and Training & Certification, walks through how cloud architecture has evolved and where it is heading.
 
Increasing innovation with serverless applications
Organizations around the world are minimizing operations and maximizing agility by developing with serverless building blocks. Join David Richardson, VP of Serverless, for a closer look at the serverless programming model, including event-driven architectures.
 
The extended cloud: AWS powers edge-to-cloud applications
AWS edge computing solutions provide infrastructure and software that move data processing and analysis as close to the endpoint where data is generated as required by customers. In this session, learn about new edge computing capabilities announced at re:Invent and how customers are using purpose-built edge solutions to extend the cloud to the edge.
 


 

Containers

Topics on simplifying container deployment, legacy workload migration using containers, optimizing costs for containerized applications, container architectural choices, and more.
 
Getting an insight into your Kubernetes applications

Do you need to know what’s happening with your applications that run on Amazon EKS? In this session, learn how you can combine open-source tools, such as Prometheus and Grafana, with Amazon CloudWatch using CloudWatch Container Insights. Come to this session for a demo of Prometheus metrics with Container Insights.

 
AWS Copilot: Simplifying container development

The hard part is done. You and your team have spent weeks poring over pull requests, building microservices and containerizing them. Congrats! But what do you do now? How do you get those services on AWS? How do you manage multiple environments? How do you automate deployments? AWS Copilot is a new command line tool that makes building, developing, and operating containerized applications on AWS a breeze. In this session, learn how AWS Copilot can help you and your team manage your services and deploy them to production, safely and delightfully.

 
Choosing your container data plane on AWS
Five years ago, if you talked about containers, the assumption was that you were running them on a Linux VM. Fast forward to today, and now that assumption is challenged—in a good way. Come to this session to explore the best data plane option to meet your needs. This session covers the advantages of different abstraction models (Amazon EC2 or AWS Fargate), the operating system (Linux or Windows), the CPU architecture (x86 or Arm), and the commercial model (Spot or On-Demand Instances).
 
 
GitOps compliant: How CommBank multiplied Amazon EKS clusters

In this session, learn how the Commonwealth Bank of Australia (CommBank) built a platform to run containerized applications in a regulated environment and then replicated it across multiple departments using Amazon EKS, AWS CDK, and GitOps. This session covers how to manage multiple multi-team Amazon EKS clusters across multiple AWS accounts while ensuring compliance and observability requirements and integrating Amazon EKS with AWS Identity and Access Management, Amazon CloudWatch, AWS Secrets Manager, Application Load Balancer, Amazon Route 53, and AWS Certificate Manager.

 
Getting up and running with Amazon EKS

Amazon EKS is a fully managed service that makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Join this session to learn about how Verizon runs its core applications on Amazon EKS at scale. Verizon also discusses how it worked with AWS to overcome several post-Amazon EKS migration challenges and ensured that the platform was robust.

 
Developing CI/CD pipelines with Amazon ECS and AWS Fargate

Containers have helped revolutionize modern application architecture. While managed container services have enabled greater agility in application development, coordinating safe deployments and maintainable infrastructure has become more important than ever. This session outlines how to integrate CI/CD best practices into deployments of your Amazon ECS and AWS Fargate services using pipelines and the latest in AWS developer tooling.

 
Securing your Amazon ECS applications: Best practices

With Amazon ECS, you can run your containerized workloads securely and with ease. In this session, learn how to utilize the full spectrum of Amazon ECS security features and its tight integrations with AWS security features to help you build highly secure applications.

 
Optimize costs and manage spend for containerized applications

Do you have to budget your spend for container workloads? Do you need to be able to optimize your spend in multiple services to reduce waste? If so, this session is for you. It walks you through how you can use AWS services and configurations to improve your cost visibility. You learn how you can select the best compute options for your containers to maximize utilization and reduce duplication. This, combined with various AWS purchase options, helps you ensure that you’re using the best options for your services and your budget.
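The core of the cost-visibility idea can be sketched in a few lines: compare the vCPU your tasks request against what they actually use to see utilization and waste. The price below is a placeholder, not a real AWS rate:

```python
# Hypothetical rightsizing sketch for container cost visibility: compare
# requested vs. actually used vCPU to estimate wasted spend. The
# price_per_vcpu_hour value is a made-up placeholder, not AWS pricing.

def utilization_report(requested_vcpu: float, used_vcpu: float,
                       price_per_vcpu_hour: float) -> dict:
    """Summarize utilization and the hourly cost of unused capacity."""
    utilization = used_vcpu / requested_vcpu
    wasted_vcpu = requested_vcpu - used_vcpu
    return {
        "utilization_pct": round(utilization * 100, 1),
        "wasted_cost_per_hour": round(wasted_vcpu * price_per_vcpu_hour, 4),
    }

report = utilization_report(requested_vcpu=4.0, used_vcpu=1.0,
                            price_per_vcpu_hour=0.04)  # placeholder rate
print(report)  # {'utilization_pct': 25.0, 'wasted_cost_per_hour': 0.12}
```

A task running at 25% utilization is a candidate for a smaller size or a different purchase option.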

 
AWS Fargate: Are serverless containers right for you?

You have a choice of approach when it comes to provisioning compute for your containers. Some users prefer to have more direct control of their instances, while others would rather do away with the operational heavy lifting. AWS Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. This session explores the benefits and considerations of running on Fargate or directly on Amazon EC2 instances. You hear about new and upcoming features and learn how Amenity Analytics benefits from the serverless operational model.

 
Containers at AWS: More options and power than ever before

Are you confused by the many choices of containers services that you can run on AWS? This session explores all your options and the advantages of each. Whether you are just beginning to learn Docker or are an expert with Kubernetes, join this session to learn how to pick the right services that would work best for you.


 
Modernizing with containers

Leading containers migration and modernization initiatives can be daunting, but AWS is making it easier. This session explores architectural choices and common patterns, and it provides real-world customer examples. Learn about core technologies to help you build and operate container environments at scale. Discover how abstractions can reduce the pain for infrastructure teams, operators, and developers. Finally, hear the AWS vision for how to bring it all together with improved usability for more business agility.

 
Improving observability with AWS App Mesh and Amazon ECS

As the number of services grow within an application, it becomes difficult to pinpoint the exact location of errors, reroute traffic after failures, and safely deploy code changes. In this session, learn how to integrate AWS App Mesh with Amazon ECS to export monitoring data and implement consistent communications control logic across your application. This makes it easy to quickly pinpoint the exact locations of errors and automatically reroute network traffic, keeping your container applications highly available and performing well.

 
Best practices for containerizing legacy applications

Enterprises are continually looking to develop new applications using container technologies and leveraging modern CI/CD tools to automate their software delivery lifecycles. This session highlights the types of applications and associated factors that make a candidate suitable to be containerized. It also covers best practices that can be considered as you embark on your modernization journey.


Looking at Amazon EKS through a networking lens

Because of its security, reliability, and scalability capabilities, Amazon Elastic Kubernetes Service (Amazon EKS) is used by organizations for their most sensitive and mission-critical applications. This session focuses on how Amazon EKS networking works with an Amazon VPC and how to expose your Kubernetes application using Elastic Load Balancing load balancers. It also looks at options for more efficient IP address utilization.

AWS networking best practices in large-scale migrations

Network design is a critical component in your large-scale migration journey. This session covers some of the real-world networking challenges faced when migrating to the cloud. You learn how to overcome these challenges by diving deep into topics such as establishing private connectivity to your on-premises data center and accelerating data migrations using AWS Direct Connect/Direct Connect gateway, centralizing and simplifying your networking with AWS Transit Gateway, and extending your private DNS into the cloud. The session also includes a discussion of related best practices.
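One concrete check from this kind of network design work: before connecting on-premises ranges over Direct Connect or Transit Gateway, confirm your VPC CIDRs do not overlap with data center ranges. A minimal sketch using only the Python standard library (the example CIDRs are made up):

```python
# Sketch of a pre-migration network check: detect overlapping CIDR ranges
# between on-premises networks and VPCs before setting up connectivity.
import ipaddress

def find_overlaps(on_prem_cidrs, vpc_cidrs):
    """Return (on_prem, vpc) pairs whose address ranges overlap."""
    overlaps = []
    for a in on_prem_cidrs:
        for b in vpc_cidrs:
            if ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b)):
                overlaps.append((a, b))
    return overlaps

conflicts = find_overlaps(["10.0.0.0/16", "192.168.10.0/24"],
                          ["10.0.5.0/24", "172.31.0.0/16"])
print(conflicts)  # [('10.0.0.0/16', '10.0.5.0/24')]
```

Catching an overlap like this before cutover is far cheaper than re-addressing a VPC afterward.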

Innovating on AWS in a 5G world

5G will be the catalyst for the next industrial revolution. In this session, come learn about key technical use cases for different industry segments that will be enabled by 5G and related technologies, and hear about the architectural patterns that will support these use cases. You also learn about AWS-enabled 5G reference architectures that incorporate AWS services.


How to choose the right instance type for ML inference

AWS offers a breadth and depth of machine learning (ML) infrastructure you can use through either a do-it-yourself approach or a fully managed approach with Amazon SageMaker. In this session, explore how to choose the proper instance for ML inference based on latency and throughput requirements, model size and complexity, framework choice, and portability. Join this session to compare and contrast compute-optimized CPU-only instances, such as Amazon EC2 C4 and C5; high-performance GPU instances, such as Amazon EC2 G4 and P3; cost-effective variable-size GPU acceleration with Amazon Elastic Inference; and highest performance/cost with Amazon EC2 Inf1 instances powered by custom-designed AWS Inferentia chips.
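The selection criteria the session names (latency, throughput, model size) can be expressed as a simple decision helper. The thresholds below are illustrative guesses, not AWS guidance; only the instance families come from the session description:

```python
# Hypothetical decision sketch for ML inference instance choice. The
# numeric thresholds are invented for illustration; the instance families
# (Inf1, G4/P3, C5, Elastic Inference) are the ones the session compares.

def suggest_instance_family(p99_latency_ms: float, model_size_mb: float,
                            requests_per_sec: float) -> str:
    if p99_latency_ms < 5 or requests_per_sec > 1000:
        return "inf1"          # AWS Inferentia for highest performance/cost
    if model_size_mb > 500:
        return "g4/p3"         # GPU instances for large, complex models
    if requests_per_sec < 10:
        return "c5 + Elastic Inference"  # variable-size GPU acceleration
    return "c5"                # compute-optimized CPU-only

print(suggest_instance_family(p99_latency_ms=50, model_size_mb=100,
                              requests_per_sec=5))
# c5 + Elastic Inference
```

In practice you would benchmark your actual model on candidate instances rather than rely on rules of thumb like these.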

Architectural patterns & best practices for workloads on VMware Cloud on AWS

When it comes to architecting your workloads on VMware Cloud on AWS, it is important to understand design patterns and best practices. Come join this session to learn how you can build well-architected cloud-based solutions for your VMware workloads. This session covers infrastructure designs with native AWS service integrations across compute, networking, storage, security, and operations. It also covers the latest announcements for VMware Cloud on AWS and how you can use these new features in your current architecture.

The cutover: Moving your traffic to the cloud

One of the most critical phases of executing a migration is moving traffic from your existing endpoints to your newly deployed resources in the cloud. This session discusses practices and patterns that can be leveraged to ensure a successful cutover to the cloud. The session covers preparation, tools and services, cutover techniques, rollback strategies, and engagement mechanisms to ensure a successful cutover.


DeepRacer

AWS DeepRacer is the fastest way to get rolling with machine learning. Developers of all skill levels can get hands-on, learning how to train reinforcement learning models in a cloud-based 3D racing simulator. Attend a session to get started, and then test your skills by competing for prizes and glory in an exciting autonomous car racing experience throughout re:Invent!

AWS DeepRacer gives you an interesting and fun way to get started with reinforcement learning (RL). RL is an advanced machine learning (ML) technique that takes a very different approach to training models than other ML methods. Its super power is that it learns very complex behaviors without requiring any labeled training data, and it can make short-term decisions while optimizing for a longer-term goal. AWS DeepRacer makes it fast and easy to build models in Amazon SageMaker and train, test, and iterate quickly and easily on the track in the AWS DeepRacer 3D racing simulator. 
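In DeepRacer, the behavior you train is shaped by a Python reward function you write. A minimal centerline-following reward, a common starter pattern, might look like this (the band thresholds are an illustrative choice, not an optimal racing policy):

```python
# A minimal reward function in the shape AWS DeepRacer expects: a
# reward_function(params) returning a float. This simple centerline-
# following reward is a common starter example, not a tuned racing policy.

def reward_function(params):
    """Reward the car for staying close to the track centerline."""
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Three bands around the centerline, rewarded progressively less.
    if distance_from_center <= 0.1 * track_width:
        reward = 1.0
    elif distance_from_center <= 0.25 * track_width:
        reward = 0.5
    elif distance_from_center <= 0.5 * track_width:
        reward = 0.1
    else:
        reward = 1e-3  # effectively off track
    return float(reward)

print(reward_function({"track_width": 1.0, "distance_from_center": 0.05}))  # 1.0
```

The simulator calls this function at each step, and the RL algorithm learns to maximize the cumulative reward over a lap.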

Build cloud-ready apps faster with Red Hat OpenShift Service on AWS (sponsored by Red Hat)
As more organizations are looking to migrate to the cloud, Red Hat OpenShift Service on AWS offers a proven, reliable, and consistent platform across the hybrid cloud. Red Hat and AWS recently announced a fully managed joint service that can be deployed directly from the AWS Management Console and can integrate with other AWS Cloud-native services. In this session, you learn about this new service, which delivers production-ready Kubernetes that many enterprises use on premises today, enhancing your ability to shift workloads to the AWS Cloud and making it easier to adopt containers and deploy applications faster. This presentation is brought to you by Red Hat, an AWS Partner.
 

Decoupling serverless workloads with Amazon EventBridge

Event-driven architecture can help you decouple services and simplify dependencies as your applications grow. In this session, you learn how Amazon EventBridge provides new options for developers who are looking to gain the benefits of this approach.
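The decoupling works because producers only emit events; EventBridge rules route them to whichever consumers subscribe. A sketch of an event entry in the shape the PutEvents call expects (the source, detail-type, and bus names here are hypothetical):

```python
# Sketch of an EventBridge-style event entry for a hypothetical order
# service. The "Source", "DetailType", and bus name are invented examples;
# a list of entries like this is what you pass to boto3's put_events.
import json
from datetime import datetime, timezone

def make_order_event(order_id: str, total: float) -> dict:
    """Build one PutEvents-style entry announcing a placed order."""
    return {
        "Source": "com.example.orders",       # hypothetical event source
        "DetailType": "OrderPlaced",          # hypothetical detail type
        "Detail": json.dumps({"orderId": order_id, "total": total}),
        "EventBusName": "default",
        "Time": datetime.now(timezone.utc).isoformat(),
    }

entry = make_order_event("o-123", 42.50)
print(json.loads(entry["Detail"])["orderId"])  # o-123
```

Downstream services (billing, shipping, analytics) each attach their own rule to the bus, so the order service never needs to know they exist.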


Deep dive on Amazon Timestream

Amazon Timestream is a fast, scalable, and serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day at as little as one-tenth the cost of relational databases. In this session, dive deep on Amazon Timestream features and capabilities, including its serverless automatic scaling architecture, its storage tiering that simplifies your data lifecycle management, its purpose-built query engine that lets you access and analyze recent and historical data together, and its built-in time series analytics functions that help you identify trends and patterns in your data in near-real time.
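To illustrate the built-in time series functions, here is a query template using Timestream SQL's `bin()` and `ago()` to aggregate the last hour of data into 5-minute buckets. The database, table, and measure names are placeholders; you would run the resulting string through the boto3 `timestream-query` client:

```python
# Illustrative Timestream-style SQL built with its time series functions
# bin() and ago(). "iot_db", "sensor_data", and "temperature" are
# placeholder names, not real resources.

def hourly_avg_query(database: str, table: str, measure: str) -> str:
    """Return a query averaging one measure into 5-minute bins over 1 hour."""
    return (
        f'SELECT bin(time, 5m) AS binned_time, '
        f'avg(measure_value::double) AS avg_{measure} '
        f'FROM "{database}"."{table}" '
        f"WHERE measure_name = '{measure}' AND time > ago(1h) "
        f'GROUP BY bin(time, 5m) ORDER BY binned_time'
    )

query = hourly_avg_query("iot_db", "sensor_data", "temperature")
print("ago(1h)" in query)  # True
```

Queries like this transparently span the memory and magnetic storage tiers, so recent and historical data can be analyzed together.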

Accelerating outcomes and migrations with Savings Plans

Savings Plans is a flexible pricing model that allows you to save up to 72 percent on Amazon EC2, AWS Fargate, and AWS Lambda. Many AWS users have adopted Savings Plans since its launch in November 2019 for the simplicity, savings, ease of use, and flexibility. In this session, learn how many organizations use Savings Plans to drive more migrations and business outcomes. Hear from Comcast on their compute transformation journey to the cloud and how it started with RIs. As their cloud usage evolved, they adopted Savings Plans to drive business outcomes such as new architecture patterns.

Learn how teams at Amazon rapidly release features at scale

The ability to deploy only configuration changes, separate from code, means you do not have to restart the applications or services that use the configuration and changes take effect immediately. In this session, learn best practices used by teams within Amazon to rapidly release features at scale. Learn about a pattern that uses AWS CodePipeline and AWS AppConfig that will allow you to roll out application configurations without taking applications out of service. This will help you ship features faster across complex environments or regions.
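The heart of the pattern is that the application re-reads its configuration on an interval instead of restarting. A minimal sketch, with an in-memory store standing in for AWS AppConfig's configuration-polling API:

```python
# Sketch of the configuration-separate-from-code pattern: the app polls a
# flag store and picks up changes without a restart. The in-memory dict
# stands in for fetching the latest configuration from AWS AppConfig.

class FeatureFlags:
    def __init__(self, fetch):
        self._fetch = fetch        # callable returning the latest config dict
        self._flags = fetch()

    def refresh(self):
        """Poll for new configuration; changes take effect immediately."""
        self._flags = self._fetch()

    def enabled(self, name: str) -> bool:
        return bool(self._flags.get(name, False))

store = {"new-checkout": False}
flags = FeatureFlags(lambda: dict(store))
print(flags.enabled("new-checkout"))  # False

store["new-checkout"] = True           # operator flips the flag, no redeploy
flags.refresh()
print(flags.enabled("new-checkout"))  # True
```

AppConfig adds what this sketch omits: validation, gradual rollout, and automatic rollback if a bad configuration triggers alarms.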

 

Top-paying Cloud certifications:

  1. Google Certified Professional Cloud Architect — $175,761/year
  2. AWS Certified Solutions Architect – Associate — $149,446/year
  3. Azure/Microsoft Cloud Solution Architect – $141,748/yr
  4. Google Cloud Associate Engineer – $145,769/yr
  5. AWS Certified Cloud Practitioner — $131,465/year
  6. Microsoft Certified: Azure Fundamentals — $126,653/year
  7. Microsoft Certified: Azure Administrator Associate — $125,993/year
The Cloud is the future: The AWS Certified Solutions Architect – Associate average salary is $149,446/year. Get Certified Now with the apps below:

AWS CCP CLF-C01 on Android –  AWS CCP CLF-C01 on iOS –  AWS CCP CLF-C01 on Windows 10/11

2022 AWS CCP CLF-C01 Practice Exam Course  – Top 300 Questions and Detailed Answers – Success Guaranteed – Save 50% with this link

AWS Cloud Practitioner CCP CLF-C01 Certification Exam Prep

AWS Cloud Practitioner Breaking News –  AWS CCP CLF-C01 Testimonials – AWS Top Stories

  • Suggest course for ML certificate
    by /u/RP_m_13 (AWS Certified Experts) on May 24, 2022 at 2:50 am

I'm nearly finished with the AWS Certified Developer Associate and also have good knowledge (both practical and theoretical) of Machine Learning and Deep Learning. Which course would you recommend for the AWS Machine Learning certificate? submitted by /u/RP_m_13 [link] [comments]

  • Offer acceptance email notifications is now available on AWS Marketplace
    by aws@amazon.com (Recent Announcements) on May 23, 2022 at 10:00 pm

    Today, AWS Marketplace announced general availability of Offer Acceptance Email Notifications which will notify users by email when a customer completes an offer subscription. With this launch, customers can now have real-time visibility into Offer Acceptance and Subscription by buyers, allowing them to track the overall progress of an AWS Marketplace transaction. Buyers, ISVs and Channel Partners can now receive relevant details like Agreement ID, Offer ID, and Customer details at the time of subscription, to initiate procurement workflows, internal order creation, revenue recognition and software provisioning/deployment. This feature is available for all AWS Marketplace product types.

  • Announcing new Amazon EC2 C7g instances powered by AWS Graviton3 processors
    by aws@amazon.com (Recent Announcements) on May 23, 2022 at 9:05 pm

    The latest generation compute optimized Amazon EC2 C7g instances are generally available. C7g instances are the first instances powered by the latest AWS Graviton3 processors and deliver up to 25% better performance over Graviton2-based C6g instances for a broad spectrum of applications such as application servers, microservices, batch processing, electronic design automation (EDA), gaming, video encoding, scientific modeling, distributed analytics, high performance computing (HPC), CPU-based machine learning (ML) inference, and ad serving.

  • Amazon Kinesis Data Analytics is now FedRAMP compliant
    by aws@amazon.com (Recent Announcements) on May 23, 2022 at 8:40 pm

    Amazon Kinesis Data Analytics is now authorized as FedRAMP Moderate in US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon) and as FedRAMP High in AWS GovCloud (US-West) and AWS GovCloud (US-East).

  • Amazon CloudFront now provides TLS version and cipher suite in CloudFront-Viewer-TLS-header
    by aws@amazon.com (Recent Announcements) on May 23, 2022 at 7:35 pm

    CloudFront now provides the CloudFront-Viewer-TLS header for use with origin request policies. CloudFront-Viewer-TLS is an HTTP header that includes the TLS version and cipher suite used to negotiate the viewer TLS connection. Previously, TLS information was available in CloudFront access logs to analyze previous requests. Now, customers can access the TLS version and cipher suite in each HTTP request to make real-time decisions such as restricting requests with outdated TLS versions. The CloudFront-Viewer-TLS header value uses the following syntax: <TLS version>:<Cipher Suite>. For example, TLSv1.2:ECDHE-RSA-AES128-SHA256.
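Using the `<TLS version>:<Cipher Suite>` syntax given above, the real-time decision the announcement mentions (restricting outdated TLS versions) can be sketched in a few lines:

```python
# Parse the CloudFront-Viewer-TLS header ("<TLS version>:<Cipher Suite>")
# and flag outdated protocol versions, so an origin can reject or warn on
# old clients. The set of versions treated as outdated is a policy choice.

OUTDATED = {"SSLv3", "TLSv1", "TLSv1.1"}

def parse_viewer_tls(header: str):
    """Split the header into (version, cipher_suite, is_outdated)."""
    version, _, cipher = header.partition(":")
    return version, cipher, version in OUTDATED

print(parse_viewer_tls("TLSv1.2:ECDHE-RSA-AES128-SHA256"))
# ('TLSv1.2', 'ECDHE-RSA-AES128-SHA256', False)
```

An origin application could use the third value to return an upgrade warning, or to refuse the request outright, without waiting for log analysis.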