AWS Certified Developer – Associate Practice Questions and Answers Dump
Your application reads commands from an SQS queue and sends them to web services hosted by your
partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost. How can you accommodate the partners’ broken web services without wasting your resources?
A. Create a delay queue and set DelaySeconds to 30 seconds
B. Requeue the message with a VisibilityTimeout of 30 seconds.
C. Create a dead letter queue and set the Maximum Receives to 3.
D. Requeue the message with a DelaySeconds of 30 seconds.
Answer:
C. After a message is taken from the queue and returned for the maximum number of retries, it is
automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.
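As a sketch of option C, the dead-letter queue is wired up through the source queue's RedrivePolicy attribute. The queue ARN below is a made-up placeholder; with boto3, the resulting dict would be passed to sqs.set_queue_attributes on the source queue.

```python
import json

# Hypothetical ARN for illustration; a real dead-letter queue ARN would come
# from creating the DLQ first.
DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:partner-commands-dlq"

def build_redrive_policy(dlq_arn, max_receive_count):
    """Build the RedrivePolicy queue attribute: after a message has been
    received max_receive_count times without being deleted, SQS moves it
    to the dead-letter queue instead of returning it to the source queue."""
    return {
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": str(max_receive_count),
        })
    }

attributes = build_redrive_policy(DLQ_ARN, 3)
```

Messages that exhaust their three receives land in the DLQ, where they wait until retrieved for inspection, so no command is lost and no resources are wasted on redelivery.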
A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently.
What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?
A. Amazon DynamoDB auto scaling
B. Amazon DynamoDB cross-region replication
C. Amazon DynamoDB Streams
D. Amazon DynamoDB Accelerator
Answer:
D. The AWS Documentation mentions the following:
DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:
As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
You are creating a DynamoDB table with the following attributes:
PurchaseOrderNumber (partition key)
CustomerID
PurchaseDate
TotalPurchaseValue
One of your applications must retrieve items from the table to calculate the total value of purchases for a
particular customer over a date range. What secondary index do you need to add to the table?
A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
TotalPurchaseValue attribute
B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
TotalPurchaseValue attribute
C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
TotalPurchaseValue attribute
D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
TotalPurchaseValue attribute
Answer:
C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition
key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the
TotalPurchaseValue into the index provides all the data needed to satisfy the use case.
Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
Local Secondary Indexes still rely on the original hash key. When you supply a table with hash+range, think of the LSI as hash+range1, hash+range2 … hash+range6: you get five more range attributes to query on. There is also only one provisioned throughput, shared with the table.
Global Secondary Indexes define a new paradigm: different hash/range keys per index.
This breaks the original usage of one hash key per table, which is also why, when defining a GSI, you are required to add provisioned throughput per index and pay for it.
Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.
Throughput :
Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table and another for the index. Both operations consume write capacity units from the table.
Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation consumes read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table and another for the index.
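To make the index mechanics concrete, here is a hypothetical boto3-style table definition for the purchase-order question above. Nothing is executed against AWS; the dict only shows the shape a CreateTable request with a GSI would take (attribute names come from the question, throughput numbers are arbitrary).

```python
# Hypothetical CreateTable request body for the purchase-order example.
table_definition = {
    "TableName": "PurchaseOrders",
    "KeySchema": [
        {"AttributeName": "PurchaseOrderNumber", "KeyType": "HASH"},
    ],
    "AttributeDefinitions": [
        {"AttributeName": "PurchaseOrderNumber", "AttributeType": "S"},
        {"AttributeName": "CustomerID", "AttributeType": "S"},
        {"AttributeName": "PurchaseDate", "AttributeType": "S"},
    ],
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "CustomerPurchases",
            # Different partition/sort keys from the base table: only a GSI
            # allows this.
            "KeySchema": [
                {"AttributeName": "CustomerID", "KeyType": "HASH"},
                {"AttributeName": "PurchaseDate", "KeyType": "RANGE"},
            ],
            # Project only what the query needs: the purchase value.
            "Projection": {
                "ProjectionType": "INCLUDE",
                "NonKeyAttributes": ["TotalPurchaseValue"],
            },
            # GSIs carry their own provisioned throughput, separate from
            # the table's.
            "ProvisionedThroughput": {
                "ReadCapacityUnits": 5,
                "WriteCapacityUnits": 5,
            },
        }
    ],
}
```

Querying CustomerPurchases with a CustomerID and a PurchaseDate range then returns exactly the projected TotalPurchaseValue data the use case needs.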
You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?
A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.
Answer:
D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return requests. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return requests.
You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?
A. Create and assign EIP to each instance
B. Create and attach a second IGW to the VPC.
C. Create and utilize a NAT Gateway
D. Connect to a VPN
Answer:
C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.
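The effect of option C can be sketched as the private subnet's route table. The VPC CIDR and NAT gateway ID below are assumed values for illustration only.

```python
# Hypothetical route table for the private subnet after adding a NAT gateway:
# traffic inside the VPC stays local, everything else goes out via the NAT.
private_route_table = [
    {"Destination": "10.0.0.0/16", "Target": "local"},  # assumed VPC CIDR
    {"Destination": "0.0.0.0/0", "Target": "nat-0a1b2c3d4e5f67890"},
]

def next_hop(route_table, destination_is_local):
    """Pick the route a packet takes: local VPC traffic vs. internet-bound."""
    wanted = "10.0.0.0/16" if destination_is_local else "0.0.0.0/0"
    for route in route_table:
        if route["Destination"] == wanted:
            return route["Target"]
```

Because the NAT gateway only translates outbound connections, instances can pull system updates while remaining unreachable from the open internet.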
What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?
A. Security Groups
B. Route Tables
C. Elastic Load Balancer
D. Auto Scaling
Answer:
D. Auto Scaling is designed specifically with elasticity in mind. It allows compute capacity to increase and decrease based on demand, thus creating elasticity in the architecture.
You’re writing a script with an AWS SDK that uses the AWS API Actions, and you want it to create an AMI from a non-EBS backed instance. Which API call occurs in the final step of creating the AMI?
A. RegisterImage
B. CreateImage
C. ami-register-image
D. ami-create-image
Answer:
A. It is RegisterImage. AWS API Actions follow this CamelCase capitalization and do not contain hyphens.
When dealing with session state in EC2-based applications using Elastic Load Balancers, which option is generally considered the best practice for managing user sessions?
A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
B. Permanently assigning users to specific instances and always routing their traffic to those instances
C. Using application-generated cookies to tie a user session to a particular instance for the cookie duration
D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance
Answer:
A. Storing session state in a shared caching layer such as ElastiCache keeps the web tier stateless, so any instance behind the ELB can serve any request. The other options all pin users to particular instances, which breaks down when an instance fails or is scaled in.
What is one key difference between an Amazon EBS-backed and an instance-store backed instance?
A. Autoscaling requires using Amazon EBS-backed instances
B. Virtual Private Cloud requires EBS backed instances
C. Amazon EBS-backed instances can be stopped and restarted without losing data
D. Instance-store backed instances can be stopped and restarted without losing data
Answer:
C. Instance-store backed images use “ephemeral” (temporary) storage. The storage is only available during the life of an instance. Rebooting an instance allows ephemeral data to persist; however, stopping and starting an instance removes all ephemeral storage.
After having created a new Linux instance on Amazon EC2 and downloaded the .pem file (called Toto.pem), you try to SSH into your IP address (54.1.132.33) using the following command.
ssh -i Toto.pem ec2-user@54.1.132.33
However you receive the following error.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
What is the most probable reason for this and how can you fix it?
A. You do not have root access on your terminal and need to use the sudo option for this to work.
B. You do not have enough permissions to perform the operation.
C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.
Answer:
D. You need to run something like: chmod 400 Toto.pem
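The fix can be demonstrated on a stand-in key file (an empty placeholder, not a real key):

```shell
# Create a placeholder key file and lock its permissions down.
touch Toto.pem
chmod 400 Toto.pem        # owner read-only; SSH will now accept the key file
stat -c '%a' Toto.pem     # shows the octal mode, now 400
# ssh -i Toto.pem ec2-user@54.1.132.33
```

SSH refuses any key file readable by other users, so anything looser than 400 (or 600) triggers the warning above.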
You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?
A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.
Answer:
D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain their data regardless of the instance state. If it were an instance store-backed root device, the answer would be different.
EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:
A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
B. Can only be used to launch EC2 instances in the same country as the AMI is stored
C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
D. Can be used to launch EC2 instances in any AWS region
Answer:
C. AMIs are only available in the region in which they were created. Even for the AWS-provided AMIs, AWS has copied the AMIs to each region for you. You cannot access an AMI from one region in another region; however, you can copy an AMI from one region to another.
Which of the following are benefits of using IAM groups? Choose 2 answers from the options given below.
A. The ability to create custom permission policies.
B. Assigning IAM permission policies to more than one user at a time.
C. Easier user/policy management.
D. Allowing EC2 instances to gain access to S3.
Answer:
B. and C.
A. is incorrect: This is a benefit of IAM generally or a benefit of IAM policies. But IAM groups don’t create policies, they have policies attached to them.
A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?
A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1.micro to m4.large.
B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.
Answer:
B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence. Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation. During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:
Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options at this level unless overridden.
Saved configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
Configuration files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle. Configuration files are executed in alphabetical order. For example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
Default values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.
If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or directly to the environment, it is stored as part of the environment's configuration. These settings can be removed with the AWS CLI or the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting is loaded from configuration files in the source bundle.
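A saved configuration or .ebextensions file expresses such settings in YAML. As a minimal sketch, a hypothetical .ebextensions/01instance.config setting the instance type might look like this (the file name is illustrative; the aws:autoscaling:launchconfiguration namespace is where Elastic Beanstalk keeps instance settings):

```yaml
# Hypothetical .ebextensions/01instance.config
# Sets the environment's EC2 instance type unless a saved configuration or a
# setting applied directly to the environment overrides it.
option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: m4.large
```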
Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?
A. Eventual Consistent Reads
B. Conditional reads for Consistency
C. Strongly Consistent Reads
D. Not possible
Answer:
C. This is stated clearly in the AWS documentation on read consistency for DynamoDB. Only with strongly consistent reads are you guaranteed to get the most up-to-date value after all prior writes have completed.
You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is required during the migration process. Which of the following steps would you consider for this requirement?
A. Create an Opswork stack and deploy the Docker containers
B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.
Answer:
B. The Elastic Beanstalk service is the ideal service to quickly provision development environments, and you can create environments that host Docker-based containers.
You’ve written an application that uploads objects to an S3 bucket. The size of the objects varies between 200 – 500 MB. You’ve seen that the application sometimes takes longer than expected to upload an object. You want to improve the performance of the application. Which of the following would you consider?
A. Create multiple threads and upload the objects in the multiple threads
B. Write the items in batches for better performance
C. Use the Multipart upload API
D. Enable versioning on the Bucket
Answer:
C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
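As a sketch of the first step, a client must decide part boundaries before initiating the upload. The helper below is a hypothetical illustration of that bookkeeping, not part of the S3 API itself.

```python
def plan_parts(object_size, part_size):
    """Split an upload of object_size bytes into (part_number, offset, length)
    tuples, as a client would before calling the Multipart Upload API."""
    parts = []
    offset = 0
    part_number = 1  # S3 part numbers start at 1
    while offset < object_size:
        length = min(part_size, object_size - offset)
        parts.append((part_number, offset, length))
        offset += length
        part_number += 1
    return parts

MB = 1024 * 1024
# A 350 MB object uploaded in 100 MB parts: 3 full parts plus one 50 MB part.
parts = plan_parts(350 * MB, 100 * MB)
```

Each part can then be uploaded on its own thread and retried independently, which is exactly where the performance gain over a single PUT comes from.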
A security system monitors 600 cameras, saving image metadata every minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?
A. 6000
B. 10
C. 3600
D. 600
Answer:
B. Write capacity for a DynamoDB table is specified as the number of 1 KB writes per second. Since each camera writes once per minute, divide the 600 writes by 60 to get the number of 1 KB writes per second. This gives a value of 10.
You can specify the Write capacity in the Capacity tab of the DynamoDB table.
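The arithmetic from the answer can be checked directly:

```python
CAMERAS = 600
ITEM_SIZE_KB = 1          # each metadata sample is 1 KB
INTERVAL_SECONDS = 60     # one write per camera per minute

# One write capacity unit = one 1 KB write per second, so spread the 600
# evenly distributed writes over the 60-second interval.
writes_per_second = CAMERAS / INTERVAL_SECONDS
required_wcu = writes_per_second * ITEM_SIZE_KB   # 10.0
```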
The AWS Certified Developer-Associate Examination (DVA-C01) is a pass or fail exam. The examination is scored against a minimum standard established by AWS professionals guided by certification industry best practices and guidelines.
Your results for the examination are reported as a score from 100 – 1000, with a minimum passing score of 720.
Domain 1: Deployment (22%)
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
1.2 Deploy applications using Elastic Beanstalk.
1.3 Prepare the application deployment package to be deployed to AWS.
1.4 Deploy serverless applications.
Domain 2: Security (26%)
2.1 Make authenticated calls to AWS services.
2.2 Implement encryption using AWS services.
2.3 Implement application authentication and authorization.
Domain 3: Development with AWS Services (30%)
3.1 Write code for serverless applications.
3.2 Translate functional requirements into application design.
3.3 Implement application design into application code.
3.4 Write code that interacts with AWS services by using APIs, SDKs, and AWS CLI.
Domain 4: Refactoring (10%)
4.1 Optimize application to best use AWS services and features.
4.2 Migrate existing application code to run on AWS.
Domain 5: Monitoring and Troubleshooting (12%)
5.1 Write code that can be monitored.
5.2 Perform root cause analysis on faults found in testing or production.
What is the AWS Certified Cloud Practitioner Exam?
The AWS Cloud Practitioner Exam is an introduction to AWS services, and its intention is to examine the candidate's ability to define what the AWS Cloud is and describe its global infrastructure. It provides an overview of AWS core services, security aspects, pricing, and support services. The main objective is to provide an overall understanding of the Amazon Web Services Cloud platform, covering the basics of AWS and cloud computing, including services, use cases, and benefits.
Which of the following services is most useful when a disaster recovery method is triggered in AWS?
A. Amazon Route 53
B. Amazon SNS
C. Amazon SQS
D. Amazon Inspector
Answer:
A. Route 53 is the Domain Name System service from AWS. When a disaster occurs, it is easy to switch over to secondary sites using Route 53.
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.
Which of the following disaster recovery deployment mechanisms has the highest downtime?
A. Pilot light
B. Warm standby
C. Multi Site
D. Backup and Restore
Answer:
D. The AWS Documentation describes a spectrum of disaster recovery techniques, arranged by how quickly a system can be restored after a DR event. Backup and Restore sits at the slow end of that spectrum, so it has the highest downtime for users.
Your company is planning to host resources in the AWS Cloud. They want to use services which can be used to decouple resources hosted on the cloud. Which of the following services can help fulfil this requirement?
A. AWS EBS Volumes
B. AWS EBS Snapshots
C. AWS Glacier
D. AWS SQS
Answer:
D. AWS SQS: Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.
What is the availability and durability rating of S3 Standard Storage Class?
Choose the correct answer:
A. 99.999999999% Durability and 99.99% Availability
B. 99.999999999% Availability and 99.90% Durability
C. 99.999999999% Durability and 99.00% Availability
D. 99.999999999% Availability and 99.99% Durability
Answer:
A. 99.999999999% Durability and 99.99% Availability
S3 Standard Storage class has a rating of 99.999999999% durability (referred to as 11 nines) and 99.99% availability.
What AWS database is primarily used to analyze data using standard SQL formatting, with compatibility with your existing business intelligence tools?
A. Redshift
B. RDS
C. DynamoDB
D. ElastiCache
Answer:
A. Redshift is a database offering that is fully-managed and used for data warehousing and analytics, including compatibility with existing business intelligence tools.
Which of the following are the benefits of AWS Organizations?
Choose the 2 correct answers:
A. Analyze cost before migrating to AWS.
B. Centrally manage access policies across multiple AWS accounts.
C. Automate AWS account creation and management.
D. Provide technical help (by AWS) for issues in your AWS account.
Answer:
B. and C.:
CENTRALLY MANAGE POLICIES ACROSS MULTIPLE AWS ACCOUNTS
AUTOMATE AWS ACCOUNT CREATION AND MANAGEMENT
CONTROL ACCESS TO AWS SERVICES
CONSOLIDATE BILLING ACROSS MULTIPLE AWS ACCOUNTS
There is a requirement to host a set of servers in the cloud for a short period of 3 months. Which of the following types of instances should be chosen to be cost-effective?
A. Spot Instances
B. On-Demand
C. No Upfront costs Reserved
D. Partial Upfront costs Reserved
Answer:
B. Since the requirement is just for 3 months, then the best cost effective option is to use On-Demand Instances.
Which of the following is not a disaster recovery deployment technique?
A. Pilot light
B. Warm standby
C. Single Site
D. Multi-Site
Answer:
C. The four disaster recovery scenarios (Backup and Restore, Pilot Light, Warm Standby, and Multi-Site) form a spectrum arranged by how quickly a system can be available to users after a DR event. Single Site is not one of them.
Which of the following are factors in the cost of using the Simple Storage Service? Choose 2 answers from the options given below.
A. The storage class used for the objects stored.
B. Number of S3 buckets.
C. The total size in gigabytes of all objects stored.
D. Using encryption in S3
Answer:
A. and C.: S3 pricing is based on, among other factors, the storage class used for objects and the total amount of data stored.
Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers.
What service helps you to aggregate logs from your EC2 instance? Choose one answer from the options below:
A. SQS
B. S3
C. CloudTrail
D. CloudWatch Logs
Answer:
D. You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Logs.
A company is deploying a new two-tier web application in AWS. The company wants to store their most frequently used data so that the response time for the application is improved. Which AWS service provides the solution for the company’s requirements?
A. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone
B. Amazon RDS for MySQL with Multi-AZ
C. Amazon ElastiCache
D. Amazon DynamoDB
Answer:
C. Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.
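The usual pattern here is cache-aside: check the cache first and fall back to the database on a miss. In the sketch below a plain dict stands in for an ElastiCache cluster, and db_lookup is a hypothetical placeholder for a slow disk-based query.

```python
cache = {}     # stands in for Redis/Memcached in ElastiCache
db_calls = 0   # counts how often the "database" is actually hit

def db_lookup(key):
    """Placeholder for a slow disk-based database query."""
    global db_calls
    db_calls += 1
    return f"value-for-{key}"

def get(key):
    """Serve from the in-memory cache when possible; on a miss, fall back
    to the database and populate the cache for subsequent reads."""
    if key not in cache:
        cache[key] = db_lookup(key)
    return cache[key]

first = get("user:42")   # miss: hits the database once
second = get("user:42")  # hit: served from memory
```

Frequently used data is served from memory after the first read, which is exactly the response-time improvement the question asks for.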
You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 Instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost-effective way. Which of the following will meet
your requirements?
A. Spot Instances
B. Reserved Instances
C. Dedicated Instances
D. On-Demand Instances
Answer:
A. For cost effectiveness, the choice is between Spot and Reserved Instances. For a periodic processing job, Spot Instances are best, and since your application is designed to recover gracefully from Amazon EC2 instance failures, losing a Spot Instance is not a problem because the application can recover.
Which of the following features is associated with a Subnet in a VPC to protect against Incoming traffic requests?
A. AWS Inspector
B. Subnet Groups
C. Security Groups
D. NACL
Answer:
D. A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.
A company is deploying a two-tier, highly available web application to AWS. Which service provides durable storage for static content while reducing the load on the web tier's CPU resources?
A. Amazon EBS volume
B. Amazon S3
C. Amazon EC2 instance store
D. Amazon RDS instance
Answer:
B. Amazon S3 is the default storage service to consider for static content. It provides durable storage and serves content directly, offloading the web tier.
What are characteristics of Amazon S3? Choose 2 answers from the options given below.
A. S3 allows you to store objects of virtually unlimited size.
B. S3 allows you to store unlimited amounts of data.
C. S3 should be used to host relational database.
D. Objects are directly accessible via a URL.
Answer:
B. and D.: Each individual object has a size limit in S3, but you can store a virtually unlimited amount of data. Also, each object gets a directly accessible URL.
When working on the costing for On-Demand EC2 instances, which of the following are attributes that determine the cost of the EC2 instance? Choose 3 answers from the options given below.
A. Instance Type
B. AMI Type
C. Region
D. Edge location
Answer:
A. B. C.: Instance type, AMI type, and region are all components of EC2 pricing. Edge locations are part of CloudFront, not EC2 pricing.
You have a mission-critical application which must be globally available at all times. If this is the case, which of the below deployment mechanisms would you employ?
A. Deployment to multiple edge locations
B. Deployment to multiple Availability Zones
C. Deployment to multiple Data Centers
D. Deployment to multiple Regions
Answer:
D. Regions represent different geographic locations and it is best to host your application across multiple regions for disaster recovery.
Which of the following are the right principles when designing cloud-based systems? Choose 2 answers from the options below.
A. Build Tightly-coupled components
B. Build loosely-coupled components
C. Assume everything will fail
D. Use as many services as possible
Answer:
B. and C. Always build components which are loosely coupled, so that even if one component fails, the entire system does not fail. And if you build with the assumption that everything will fail, you will ensure that the right measures are taken to build a highly available and fault-tolerant system.
You have 2 accounts under your AWS master account, one for Dev and the other for QA, all part of consolidated billing. The master account has purchased 3 Reserved Instances. The Dev department is currently using 2 Reserved Instances. The QA team is planning on using 3 instances of the same instance type. What is the pricing tier of the instances that can be used by the QA team?
A. No Reserved and 3 on-demand
B. One Reserved and 2 on-demand
C. Two Reserved and 1 on-demand
D. Three Reserved and no on-demand
Answer:
B. Since all accounts are part of consolidated billing, the pricing of Reserved Instances can be shared by all. Since 2 are already used by the Dev team, one more can be used by the QA team; the remaining instances will be On-Demand Instances.
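The arithmetic from the answer, spelled out:

```python
TOTAL_RESERVED = 3   # Reserved Instances purchased by the master account
DEV_RUNNING = 2      # Dev instances already consuming the RI discount
QA_PLANNED = 3       # instances the QA team plans to launch

# Under consolidated billing, unused reservations float to any linked account.
reserved_available_to_qa = TOTAL_RESERVED - DEV_RUNNING   # 1
on_demand_for_qa = QA_PLANNED - reserved_available_to_qa  # 2
```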
Which of the following storage mechanisms can be used to effectively store messages passed between distributed systems?
A. Amazon Glacier
B. Amazon EBS Volumes
C. Amazon EBS Snapshots
D. Amazon SQS
Answer:
D. Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.
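The decoupling idea can be sketched locally with Python's standard-library queue as a stand-in for SQS (this is an analogy, not the AWS API; real applications would use an AWS SDK such as boto3):

```python
# Local illustration of the decoupling pattern SQS provides: the producer
# and consumer only share a queue, never call each other directly.
from queue import Queue

messages = Queue()

def producer(q):
    # One component writes messages without knowing who consumes them.
    for i in range(3):
        q.put(f"order-{i}")

def consumer(q):
    # Another component reads at its own pace.
    received = []
    while not q.empty():
        received.append(q.get())
    return received

producer(messages)
print(consumer(messages))  # ['order-0', 'order-1', 'order-2']
```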
You are exploring the services AWS has on offer. You have a large number of data sets that need to be processed. Which of the following services can help fulfill this requirement?
A. EMR
B. S3
C. Glacier
D. Storage Gateway
Answer:
A. Amazon EMR helps you analyze and process vast amounts of data by distributing the computational work across a cluster of virtual servers running in the AWS Cloud. The cluster is managed using an open-source framework called Hadoop. Amazon EMR lets you focus on crunching or analyzing your data without having to worry about time-consuming setup, management, and tuning of Hadoop clusters or the compute capacity they rely on.
Which of the following services allows you to analyze EC2 Instances against pre-defined security templates to check for vulnerabilities?
A. AWS Trusted Advisor
B. Amazon Inspector
C. AWS WAF
D. AWS Shield
Answer:
B. Amazon Inspector enables you to analyze the behavior of your AWS resources and helps you to identify potential security issues. Using Amazon Inspector, you can define a collection of AWS resources that you want to include in an assessment target. You can then create an assessment template and launch a security assessment run of this target.
Your company is planning to offload some of its batch processing workloads onto AWS. These jobs can be interrupted and resumed at any time. Which of the following instance types would be the most cost-effective to use for this purpose?
A. On-Demand
B. Spot
C. Full Upfront Reserved
D. Partial Upfront Reserved
Answer:
B. Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot Instances are well suited for data analysis, batch jobs, background processing, and optional tasks.
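Why interruptible jobs suit Spot can be sketched with a checkpointing loop (a conceptual illustration, not AWS-specific code): progress is saved, so an interrupted run resumes where it left off instead of starting over.

```python
# Checkpointed batch job: if interrupted (as a Spot Instance can be),
# return the current position; a later run resumes from that checkpoint.
def run_batch(items, checkpoint, interrupt_at=None):
    """Process items in place starting from checkpoint; return new checkpoint."""
    for index in range(checkpoint, len(items)):
        if interrupt_at is not None and index == interrupt_at:
            return index                    # interruption: stop, keep progress
        items[index] = items[index] * 2     # the "work" on each item
    return len(items)

data = [1, 2, 3, 4]
resume_from = run_batch(data, checkpoint=0, interrupt_at=2)  # interrupted run
finished = run_batch(data, checkpoint=resume_from)           # resumed later
print(data, resume_from, finished)  # [2, 4, 6, 8] 2 4
```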
Which of the below cannot be used to get data onto Amazon Glacier?
A. AWS Glacier API
B. AWS Console
C. AWS Glacier SDK
D. AWS S3 Lifecycle policies
Answer:
B. Note that the AWS Console cannot be used to upload data onto Glacier. The console can only be used to create a Glacier vault, into which data can then be uploaded by other means.
Which of the following from AWS can be used to transfer petabytes of data from on-premises locations to the AWS Cloud?
A. AWS Import/Export
B. AWS EC2
C. AWS Snowball
D. AWS Transfer
Answer:
C. Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers, including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, secure, and can be as little as one-fifth the cost of high-speed Internet.
Your company wants to move an existing Oracle database to the AWS Cloud. Which of the following services can help facilitate this move?
A. AWS Database Migration Service
B. AWS VM Migration Service
C. AWS Inspector
D. AWS Trusted Advisor
Answer:
A. AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open source databases.
Which of the following features of AWS RDS allows for offloading reads from the database?
A. Cross region replication
B. Creating Read Replicas
C. Using snapshots
D. Using Multi-AZ feature
Answer:
B. You can reduce the load on your source DB Instance by routing read queries from your applications to the read replica. Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
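The routing idea can be sketched as follows (a naive conceptual router with made-up endpoint names, not an RDS feature; real drivers and proxies handle this more robustly):

```python
# Offloading reads: send SELECT statements to a read replica endpoint
# and everything else to the primary. Endpoint names are hypothetical.
PRIMARY = "mydb-primary.example.rds.amazonaws.com"
REPLICA = "mydb-replica.example.rds.amazonaws.com"

def pick_endpoint(sql):
    """Very naive router: reads go to the replica, writes to the primary."""
    return REPLICA if sql.strip().lower().startswith("select") else PRIMARY

print(pick_endpoint("SELECT * FROM orders"))      # replica endpoint
print(pick_endpoint("UPDATE orders SET id = 1"))  # primary endpoint
```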
Which of the following does AWS perform on your behalf for EBS volumes to make them less prone to failure?
A. Replication of the volume across Availability Zones
B. Replication of the volume in the same Availability Zone
C. Replication of the volume across Regions
D. Replication of the volume across Edge locations
Answer:
B. When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to failure of any single hardware component
Your company is planning to host a large e-commerce application on the AWS Cloud. One of its major concerns is Internet attacks such as DDoS attacks. Which of the following services can help mitigate this concern? Choose 2 answers from the options given below
A. CloudFront
B. AWS Shield
C. AWS EC2
D. AWS Config
Answer:
A. and B. : One of the first techniques to mitigate DDoS attacks is to minimize the surface area that can be attacked, thereby limiting the options for attackers and allowing you to build protections in a single place. Ensure that you do not expose your application or resources on ports, protocols, or applications where no communication is expected, thus minimizing the possible points of attack and letting you concentrate your mitigation efforts. In some cases, you can do this by placing your computation resources behind Content Distribution Networks (CDNs) and Load Balancers and restricting direct Internet traffic to certain parts of your infrastructure, like your database servers. In other cases, you can use firewalls or Access Control Lists (ACLs) to control what traffic reaches your applications.
Which of the following are 2 ways that AWS allows you to link accounts?
A. Consolidating billing
B. AWS Organizations
C. Cost Explorer
D. IAM
Answer:
A. and B. : You can use the consolidated billing feature in AWS Organizations to consolidate payment for multiple AWS accounts or multiple AISPL accounts. With consolidated billing, you can see a combined view of AWS charges incurred by all of your accounts. You can also get a cost report for each member account associated with your master account. Consolidated billing is offered at no additional charge.
Which of the following help with DDoS protection? Choose 2 answers from the options given below
A. CloudFront
B. AWS Shield
C. AWS EC2
D. AWS Config
Answer:
A. and B. : One of the first techniques to mitigate DDoS attacks is to minimize the surface area that can be attacked, thereby limiting the options for attackers and allowing you to build protections in a single place. Ensure that you do not expose your application or resources on ports, protocols, or applications where no communication is expected, thus minimizing the possible points of attack and letting you concentrate your mitigation efforts. In some cases, you can do this by placing your computation resources behind Content Distribution Networks (CDNs) and Load Balancers and restricting direct Internet traffic to certain parts of your infrastructure, like your database servers. In other cases, you can use firewalls or Access Control Lists (ACLs) to control what traffic reaches your applications.
A company wants to host a self-managed database in AWS. How would you ideally implement this solution?
A. Using the AWS DynamoDB service
B. Using the AWS RDS service
C. Hosting a database on an EC2 Instance
D. Using the Amazon Aurora service
Answer:
C. If you want a self-managed database, that means you want complete control over the database engine and the underlying infrastructure. In such a case, you need to host the database on an EC2 Instance.
There is a requirement to host a database server for a minimum period of one year. Which of the following would result in the least cost?
A. Spot Instances
B. On-Demand
C. No Upfront costs Reserved
D. Partial Upfront costs Reserved
Answer:
D. : If the database is going to be used for a minimum of one year, it is better to use Reserved Instances. You can save on costs, and with the Partial Upfront option you get a better discount.
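The cost comparison can be illustrated with made-up rates (the dollar figures below are hypothetical, chosen only to show the shape of the trade-off):

```python
# Why a Partial Upfront Reserved Instance beats On-Demand for a full
# year of steady use. All rates are illustrative, not real AWS prices.
HOURS_PER_YEAR = 24 * 365

on_demand_rate = 0.10   # $/hour, hypothetical
ri_upfront = 250.00     # one-time partial upfront payment, hypothetical
ri_hourly = 0.04        # discounted hourly rate, hypothetical

on_demand_cost = on_demand_rate * HOURS_PER_YEAR
reserved_cost = ri_upfront + ri_hourly * HOURS_PER_YEAR

print(round(on_demand_cost, 2), round(reserved_cost, 2))
# 876.0 600.4 -> Reserved is cheaper over the year
```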
Which of the below can be used to import data into Amazon Glacier? Choose 3 answers from the options given below:
A. AWS Glacier API
B. AWS Console
C. AWS Glacier SDK
D. AWS S3 Lifecycle policies
Answer:
A. C. and D. : The AWS Console cannot be used to upload data onto Glacier. The console can only be used to create a Glacier vault, into which data can then be uploaded via the Glacier API, the Glacier SDK, or S3 lifecycle policies.
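As one example of option D, an S3 lifecycle rule can transition objects to Glacier automatically. A minimal rule that archives objects under a hypothetical `logs/` prefix after 90 days might look like this:

```json
{
  "Rules": [
    {
      "ID": "archive-old-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```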
Which of the following can be used to secure EC2 Instances hosted in AWS? Choose 2 answers
A. Usage of Security Groups
B. Usage of AMIs
C. Usage of Network Access Control Lists
D. Usage of the Internet gateway
Answer:
A and C: Security groups act as a virtual firewall for your instance to control inbound and outbound traffic. A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets.
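The rule-matching behavior of a security group can be modeled conceptually as follows (a sketch of the idea only; real security groups are configured in AWS, not evaluated in application code, and the rules below are made up):

```python
# Conceptual model of security group inbound rules: traffic is allowed
# only if some rule matches its protocol, port, and source CIDR block.
from ipaddress import ip_address, ip_network

inbound_rules = [
    {"protocol": "tcp", "port": 443, "cidr": "0.0.0.0/0"},    # HTTPS from anywhere
    {"protocol": "tcp", "port": 22, "cidr": "10.0.0.0/16"},   # SSH from the VPC only
]

def is_allowed(protocol, port, source_ip, rules):
    """Return True if any rule matches the incoming traffic."""
    return any(
        r["protocol"] == protocol
        and r["port"] == port
        and ip_address(source_ip) in ip_network(r["cidr"])
        for r in rules
    )

print(is_allowed("tcp", 443, "203.0.113.9", inbound_rules))  # True
print(is_allowed("tcp", 22, "203.0.113.9", inbound_rules))   # False (outside VPC range)
```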
You plan to deploy an application on AWS. This application needs to be PCI compliant. Which of the below steps are needed to ensure compliance? Choose 2 answers from the below list:
A. Choose AWS services which are PCI compliant
B. Ensure the right steps are taken during application development for PCI compliance
C. Ensure the AWS services are made PCI compliant
D. Do an audit after the deployment of the application for PCI compliance.
Answer:
A. and B. Choose AWS services that are already in scope for PCI compliance, and build the application itself with PCI requirements in mind. You cannot make AWS services compliant yourself, and an audit after deployment does not by itself ensure compliance.
Which AWS Services can be used to store files? Choose 2 answers from the options given below:
A. Amazon CloudWatch
B. Amazon Simple Storage Service (Amazon S3)
C. Amazon Elastic Block Store (Amazon EBS)
D. AWS Config
E. Amazon Athena
Answer:
B. and C. Amazon S3 is object storage built to store and retrieve any amount of data from anywhere. Amazon Elastic Block Store provides persistent block storage for Amazon EC2.
Question: What best describes Amazon Web Services (AWS)?
Choose the correct answer:
A. AWS is the cloud.
B. AWS only provides compute and storage services.
C. AWS is a cloud services provider.
D. None of the above.
Answer:
C: AWS is defined as a cloud services provider. They provide hundreds of services, including (but not limited to) compute and storage.
Reference: https://aws.amazon.com/
Question: Which AWS service can be used as a global content delivery network (CDN) service?
A. Amazon SES
B. AWS CloudTrail
C. Amazon CloudFront
D. Amazon S3
Answer:
C: Amazon CloudFront is a web service that gives businesses and web application developers an easy and cost-effective way to distribute content with low latency and high data transfer speeds. Like other AWS services, Amazon CloudFront is a self-service, pay-per-use offering, requiring no long-term commitments or minimum fees. With CloudFront, your files are delivered to end users using a global network of edge locations. Reference: https://aws.amazon.com/cloudfront/
What best describes the concept of fault tolerance?
Choose the correct answer:
A. The ability for a system to withstand a certain amount of failure and still remain functional.
B. The ability for a system to grow in size, capacity, and/or scope.
C. The ability for a system to be accessible when you attempt to access it.
D. The ability for a system to grow and shrink based on demand.
Answer:
A: Fault tolerance describes the ability of a system (in our case a web application) to have failure in some of its components and still remain accessible (highly available). Fault-tolerant web applications will have at least two web servers (in case one fails).
Question: The firm you work for is considering migrating to AWS. They are concerned about cost and the initial investment needed. Which of the following features of AWS pricing helps lower the initial investment amount needed? Choose 2 answers from the options given below:
A. The ability to choose the lowest cost vendor.
B. The ability to pay as you go
C. No upfront costs
D. Discounts for upfront payments
Answer:
B and C: The best features of moving to the AWS Cloud are no upfront costs and the ability to pay as you go, where the customer only pays for the resources needed. Reference: https://aws.amazon.com/pricing/
Question: Your company has started using AWS. Your IT Security team is concerned with the security of hosting resources in the Cloud. Which AWS service provides security optimization recommendations that could help the IT Security team secure resources using AWS?
A. AWS API Gateway
B. Reserved Instances
C. AWS Trusted Advisor
D. AWS Spot Instances
Answer:
C: An online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment, Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices. Reference: https://aws.amazon.com/premiumsupport/trustedadvisor/
What is the relationship between AWS global infrastructure and the concept of high availability?
Choose the correct answer:
A. AWS is centrally located in one location and is subject to widespread outages if something happens at that one location.
B. AWS regions and Availability Zones allow for redundant architecture to be placed in isolated parts of the world.
C. Each AWS region handles different AWS services, and you must use all regions to fully use AWS.
D. None of the above
Answer:
B: As an AWS user, you can create your application's infrastructure and duplicate it. By placing duplicate infrastructure in multiple regions, high availability is achieved, because if one region fails you have a backup (in another region) to use.
Question: You are hosting a number of EC2 Instances on AWS. You are looking to monitor CPU Utilization on the Instance. Which service would you use to collect and track performance metrics for AWS services?
Answer:
C: Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Reference: https://aws.amazon.com/cloudwatch/
Question: What best describes the concept of scalability?
Choose the correct answer:
A. The ability for a system to grow and shrink based on demand.
B. The ability for a system to grow in size, capacity, and/or scope.
C. The ability for a system to be accessible when you attempt to access it.
D. The ability for a system to withstand a certain amount of failure and still remain functional.
Answer:
B: Scalability refers to the concept of a system being able to easily (and cost-effectively) scale up. For web applications, this means the ability to easily add server capacity when demand requires it.
Question: If you wanted to monitor all events in your AWS account, which of the below services would you use?
A. AWS CloudWatch
B. AWS CloudWatch logs
C. AWS Config
D. AWS CloudTrail
Answer:
D: AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk
auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. Reference: https://aws.amazon.com/cloudtrail/
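The kind of account-activity analysis CloudTrail enables can be sketched with simplified, made-up event records (the real CloudTrail event schema is richer than this):

```python
# Filtering simplified CloudTrail-style event records for security
# analysis: find identities whose console logins failed.
events = [
    {"eventName": "ConsoleLogin", "userIdentity": "alice", "errorCode": None},
    {"eventName": "DeleteBucket", "userIdentity": "bob", "errorCode": None},
    {"eventName": "ConsoleLogin", "userIdentity": "mallory", "errorCode": "AccessDenied"},
]

def failed_logins(trail):
    """Return the identities behind failed console logins."""
    return [
        e["userIdentity"]
        for e in trail
        if e["eventName"] == "ConsoleLogin" and e["errorCode"] is not None
    ]

print(failed_logins(events))  # ['mallory']
```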
What are the four primary benefits of using the cloud/AWS?
Choose the correct answer:
A. Fault tolerance, scalability, elasticity, and high availability.
B. Elasticity, scalability, easy access, limited storage.
C. Fault tolerance, scalability, sometimes available, unlimited storage
D. Unlimited storage, limited compute capacity, fault tolerance, and high availability.
Answer:
A: Fault tolerance, scalability, elasticity, and high availability are the four primary benefits of AWS/the cloud.
What best describes a simplified definition of the “cloud”?
Choose the correct answer:
A. All the computers in your local home network.
B. Your internet service provider.
C. A computer located somewhere else that you are utilizing in some capacity.
D. An on-premises data center that your company owns.
Answer:
C: The simplest definition of the cloud is a computer that is located somewhere else that you are utilizing in some capacity. AWS is a cloud services provider, as they provide access to computers they own (located at AWS data centers) that you use for various purposes.
Question: Your development team is planning to host a development environment on the cloud. This consists of EC2 and RDS instances. This environment will probably only be required for 2 months. Which types of instances would you use for this purpose?
A. On-Demand
B. Spot
C. Reserved
D. Dedicated
Answer:
A: The best and most cost-effective option would be to use On-Demand Instances. The AWS documentation gives the following additional information on On-Demand EC2 Instances: with On-Demand instances, you only pay for the EC2 instances you use. The use of On-Demand instances frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs. Reference: https://aws.amazon.com/ec2/pricing/on-demand/
Question: Which of the following can be used to secure EC2 Instances?
A. Security Groups
B. EC2 Lists
C. AWS Configs
D. AWS CloudWatch
Answer:
A: A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don't specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html
Exam Topics:
The AWS Cloud Practitioner exam is broken down into 4 domains:
Cloud Concepts
Security
Technology
Billing and Pricing.
What is the purpose of a DNS server?
Choose the correct answer:
A. To act as an internet search engine.
B. To protect you from hacking attacks.
C. To convert common language domain names to IP addresses.
D. To serve web application content.
Answer:
C: Domain name system servers act as a “third party” that provides the service of converting common language domain names to IP addresses (which are required for a web browser to properly make a request for web content).
What best describes the concept of high availability?
Choose the correct answer:
A. The ability for a system to grow in size, capacity, and/or scope.
B. The ability for a system to withstand a certain amount of failure and still remain functional.
C. The ability for a system to grow and shrink based on demand.
D. The ability for a system to be accessible when you attempt to access it.
Answer:
D: High availability refers to the concept that something will be accessible when you try to access it. An object or web application is “highly available” when it is accessible a vast majority of the time.
What is the major difference between AWS’s RDS and DynamoDB database services?
Choose the correct answer:
A. RDS offers NoSQL database options, and DynamoDB offers SQL database options.
B. RDS offers one SQL database option, and DynamoDB offers many NoSQL database options.
C. RDS offers SQL database options, and DynamoDB offers a NoSQL database option.
D. None of the above
Answer:
C. RDS is a SQL database service (that offers several database engine options), and DynamoDB is a NoSQL database option that only offers one NoSQL engine.
What are two open source in-memory engines supported by ElastiCache?
Answer: Redis and Memcached.
If you want to have SMS or email notifications sent to various members of your department with status updates on resources in your AWS account, what service should you choose?
Choose the correct answer:
A. SNS
B. GetSMS
C. RDS
D. STS
Answer:
A. Simple Notification Service (SNS) is what publishes messages to SMS and/or email endpoints.