What is the AWS Certified Developer Associate Exam?
The AWS Certified Developer – Associate examination is intended for individuals who perform a developer role. It validates an examinee’s ability to:
Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
Demonstrate proficiency in developing, deploying, and debugging cloud-based applications by using AWS
Recommended general IT knowledge
The target candidate should have the following:
In-depth knowledge of at least one high-level programming language
Understanding of application lifecycle management
The ability to write code for serverless applications
Understanding of the use of containers in the development process
Recommended AWS knowledge The target candidate should be able to do the following:
Use the AWS service APIs, CLI, and software development kits (SDKs) to write applications
Identify key features of AWS services
Understand the AWS shared responsibility model
Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy applications on AWS
Use and interact with AWS services
Apply basic understanding of cloud-native applications to write code
Write code by using AWS security best practices (for example, use IAM roles instead of secret and access keys in the code)
Author, maintain, and debug code modules on AWS
What is considered out of scope for the target candidate? The following is a non-exhaustive list of related job tasks that the target candidate is not expected to be able to perform. These items are considered out of scope for the exam:
Design architectures (for example, distributed system, microservices)
Design and implement CI/CD pipelines
Administer IAM users and groups
Administer Amazon Elastic Container Service (Amazon ECS)
Design AWS networking infrastructure (for example, Amazon VPC, AWS Direct Connect)
Understand compliance and licensing
Exam content
Response types
There are two types of questions on the exam:
– Multiple choice: Has one correct response and three incorrect responses (distractors)
– Multiple response: Has two or more correct responses out of five or more response options
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose. Distractors are generally plausible responses that match the content area. Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.
Unscored content The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.
Exam results The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines. Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720. Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels. Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam. Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.
Content outline This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context. The percentage in each domain represents only scored content.
Domain 1: Deployment 22%
Domain 2: Security 26%
Domain 3: Development with AWS Services 30%
Domain 4: Refactoring 10%
Domain 5: Monitoring and Troubleshooting 12%
Domain 1: Deployment
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
– Commit code to a repository and invoke build, test, and/or deployment actions
– Use labels and branches for version and release management
– Use AWS CodePipeline to orchestrate workflows against different environments
– Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS CodeDeploy for CI/CD purposes
– Perform a rollback based on the application deployment policy
1.2 Deploy applications using AWS Elastic Beanstalk.
– Utilize existing supported environments to define a new application stack
– Package the application
– Introduce a new application version into the Elastic Beanstalk environment
– Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable)
– Validate application health using the Elastic Beanstalk dashboard
– Use Amazon CloudWatch Logs to instrument application logging
1.3 Prepare the application deployment package to be deployed to AWS.
– Manage the dependencies of the code module (such as environment variables, config files, and static image files) within the package
– Outline the package/container directory structure and organize files appropriately
– Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)
1.4 Deploy serverless applications.
– Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template
– Manage environments in individual AWS services (e.g., differentiate between Development, Test, and Production in Amazon API Gateway)
Domain 2: Security
2.1 Make authenticated calls to AWS services.
– Communicate the required policy based on the least privileges required by the application
– Assume an IAM role to access a service
– Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)
2.2 Implement encryption using AWS services.
– Encrypt data at rest (client side; server side; envelope encryption) using AWS services
– Encrypt data in transit
2.3 Implement application authentication and authorization.
– Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools
– Use Amazon Cognito-provided credentials to write code that accesses AWS services
– Use Amazon Cognito Sync to synchronize user profiles and data
– Use developer-authenticated identities to interact between end user devices, backend authentication, and Amazon Cognito
Domain 3: Development with AWS Services
3.1 Write code for serverless applications.
– Compare and contrast server-based vs. serverless models (e.g., microservices, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications)
– Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, timeout, runtime, handler)
– Create an API endpoint using Amazon API Gateway
– Create and test appropriate API actions like GET and POST using the API endpoint
– Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes)
– Compute read/write capacity units for Amazon DynamoDB based on application requirements
– Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch Events, Amazon S3 events, Amazon Kinesis)
– Invoke an AWS Lambda function synchronously and asynchronously
3.2 Translate functional requirements into application design.
– Determine real-time vs. batch processing for a given use case
– Determine use of synchronous vs. asynchronous for a given use case
– Determine use of event vs. schedule/poll for a given use case
– Account for tradeoffs for consistency models in an application design
Domain 4: Refactoring
4.1 Optimize applications to best use AWS services and features.
– Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache)
– Apply an Amazon S3 naming scheme for optimal read performance
4.2 Migrate existing application code to run on AWS.
– Isolate dependencies
– Run the application as one or more stateless processes
– Develop in order to enable horizontal scalability
– Externalize state
Domain 5: Monitoring and Troubleshooting
5.1 Write code that can be monitored.
– Create custom Amazon CloudWatch metrics
– Perform logging in a manner available to systems operators
– Instrument application source code to enable tracing in AWS X-Ray
5.2 Perform root cause analysis on faults found in testing or production.
– Interpret the outputs from the logging mechanism in AWS to identify errors in logs
– Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues
– Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component
Which key tools, technologies, and concepts might be covered on the exam?
The following is a non-exhaustive list of the tools and technologies that could appear on the exam. This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam. The general tools and technologies in this list appear in no particular order. AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
– Analytics
– Application Integration
– Containers
– Cost and Capacity Management
– Data Movement
– Developer Tools
– Instances (virtual machines)
– Management and Governance
– Networking and Content Delivery
– Security
– Serverless
Management and Governance: – AWS CloudFormation – Amazon CloudWatch
Networking and Content Delivery: – Amazon API Gateway – Amazon CloudFront – Elastic Load Balancing
Security, Identity, and Compliance: – Amazon Cognito – AWS Identity and Access Management (IAM) – AWS Key Management Service (AWS KMS)
Storage: – Amazon S3
Out-of-scope AWS services and features
The following is a non-exhaustive list of AWS services and features that are not covered on the exam. These services and features do not represent every AWS offering that is excluded from the exam content. Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant. Out-of-scope AWS services and features include the following:
– AWS Application Discovery Service
– Amazon AppStream 2.0
– Amazon Chime
– Amazon Connect
– AWS Database Migration Service (AWS DMS)
– AWS Device Farm
– Amazon Elastic Transcoder
– Amazon GameLift
– Amazon Lex
– Amazon Machine Learning (Amazon ML)
– AWS Managed Services
– Amazon Mobile Analytics
– Amazon Polly
– Amazon QuickSight
– Amazon Rekognition
– AWS Server Migration Service (AWS SMS)
– AWS Service Catalog
– AWS Shield Advanced
– AWS Shield Standard
– AWS Snow Family
– AWS Storage Gateway
– AWS WAF
– Amazon WorkMail
– Amazon WorkSpaces
To succeed on the real exam, do not memorize the answers below. It is very important that you understand why each answer is right or wrong, and the concepts behind it, by carefully reading the reference documents cited in the answers.
AWS Certified Developer – Associate Practice Questions And Answers Dump
Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost. How can you accommodate the partners’ broken web services without wasting your resources?
A. Create a delay queue and set DelaySeconds to 30 seconds
B. Requeue the message with a VisibilityTimeout of 30 seconds.
C. Create a dead letter queue and set the Maximum Receives to 3.
D. Requeue the message with a DelaySeconds of 30 seconds.
C. After a message is taken from the queue and returned for the maximum number of retries, it is automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.
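As a rough illustration, here is how a dead letter queue with a maximum receive count of 3 might be wired up with boto3 (the queue names are hypothetical):

```python
import json
import boto3

sqs = boto3.client("sqs")

# Create the dead letter queue first and look up its ARN
dlq = sqs.create_queue(QueueName="partner-commands-dlq")
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# After 3 failed receives, SQS automatically moves a message to the DLQ,
# so undeliverable commands are preserved instead of cycling forever
sqs.create_queue(
    QueueName="partner-commands",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
        )
    },
)
```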
Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently. What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?
Answer: Amazon DynamoDB Accelerator (DAX). DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:
As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
Q2: You are creating a DynamoDB table with the following attributes:
PurchaseOrderNumber (partition key)
CustomerID
PurchaseDate
TotalPurchaseValue
One of your applications must retrieve items from the table to calculate the total value of purchases for a particular customer over a date range. What secondary index do you need to add to the table?
A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the TotalPurchaseValue into the index provides all the data needed to satisfy the use case.
Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
Local Secondary Indexes still rely on the original Hash Key. When you supply a table with hash+range, think about the LSI as hash+range1, hash+range2.. hash+range6. You get 5 more range attributes to query on. Also, there is only one provisioned throughput.
Global Secondary Indexes define a new paradigm – different hash/range keys per index. This breaks the original usage of one hash key per table. This is also why, when defining a GSI, you are required to add provisioned throughput per index and pay for it.
Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.
Throughput:
Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table and another for the index. Both operations will consume write capacity units from the table.
Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation consumes read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table and another for the index.
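To make this concrete, the following boto3 sketch (the table and index names are hypothetical) adds the GSI from the Q2 answer to an existing table; note the index's own ProvisionedThroughput block, as described above:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="PurchaseOrders",
    AttributeDefinitions=[
        {"AttributeName": "CustomerID", "AttributeType": "S"},
        {"AttributeName": "PurchaseDate", "AttributeType": "S"},
    ],
    GlobalSecondaryIndexUpdates=[
        {
            "Create": {
                "IndexName": "CustomerID-PurchaseDate-index",
                "KeySchema": [
                    {"AttributeName": "CustomerID", "KeyType": "HASH"},
                    {"AttributeName": "PurchaseDate", "KeyType": "RANGE"},
                ],
                # Project only the attribute the query actually needs
                "Projection": {
                    "ProjectionType": "INCLUDE",
                    "NonKeyAttributes": ["TotalPurchaseValue"],
                },
                # A GSI carries its own throughput, separate from the table
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 5,
                    "WriteCapacityUnits": 5,
                },
            }
        }
    ],
)
```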
Q5: Lambda allows you to upload code and dependencies for function packages:
A. Only from a directly uploaded zip file
B. Only via SFTP
C. Only from a zip file in AWS S3
D. From a zip file in AWS S3 or uploaded directly from elsewhere
D. Lambda deployment packages (code and dependencies) can be uploaded directly as a .zip file or referenced from a .zip file stored in Amazon S3.
Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?
A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.
D. Security groups are stateful, so you do NOT need an explicit outbound rule for return traffic. However, NACLs are stateless, so you MUST configure an explicit outbound rule for return traffic.
Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?
A. Create and assign EIP to each instance
B. Create and attach a second IGW to the VPC.
C. Create and utilize a NAT Gateway
D. Connect to a VPN
C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet. Reference: AWS Network Address Translation Gateway
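A minimal boto3 sketch of that setup, with hypothetical subnet and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in the PUBLIC subnet
alloc = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111", AllocationId=alloc["AllocationId"]
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Route the PRIVATE subnet's internet-bound traffic through the NAT gateway
ec2.create_route(
    RouteTableId="rtb-0bbb2222",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```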
Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?
A. Security Groups
B. Route Tables
C. Elastic Load Balancer
D. Auto Scaling
D. Auto Scaling is designed specifically with elasticity in mind. It allows compute power to increase and decrease based on demand, thus creating elasticity in the architecture. Reference: AWS Auto Scaling
Q11: You’re writing a script with an AWS SDK that uses AWS API actions to create an AMI for a non-EBS backed (instance store-backed) instance. Which API call occurs in the final step of creating the AMI?
A. RegisterImage
B. CreateImage
C. ami-register-image
D. ami-create-image
A. The final call is RegisterImage. All AWS API actions follow this capitalization style and do not contain hyphens.
Q12: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?
A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
B. Permanently assigning users to specific instances and always routing their traffic to those instances
C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance
A. Keeping instances stateless and storing session data in a distributed cache such as ElastiCache (Redis or Memcached) is generally considered the best practice; the sticky-session approaches in B, C, and D tie users to individual instances and break down when instances are replaced or scaled in.
Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?
A. Autoscaling requires using Amazon EBS-backed instances
B. Virtual Private Cloud requires EBS backed instances
C. Amazon EBS-backed instances can be stopped and restarted without losing data
D. Instance-store backed instances can be stopped and restarted without losing data
C. Instance-store backed images use “ephemeral” (temporary) storage, which is only available during the life of an instance. Rebooting an instance keeps ephemeral data intact, but stopping and starting an instance wipes all ephemeral storage.
Q15: After creating a new Linux instance on Amazon EC2 and downloading the .pem key file (called my_key.pem), you try to SSH into the instance using the following command: ssh -i my_key.pem ec2-user@52.2.222.22. However, you receive the following error: WARNING: UNPROTECTED PRIVATE KEY FILE! What is the most probable reason for this, and how can you fix it?
A. You do not have root access on your terminal and need to use the sudo option for this to work.
B. You do not have enough permissions to perform the operation.
C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.
D. You need to run something like: chmod 400 my_key.pem
Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?
A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.
D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.
Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:
A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
B. Can only be used to launch EC2 instances in the same country as the AMI is stored
C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
D. Can be used to launch EC2 instances in any AWS region
C. AMIs are only available in the region in which they were created. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another.
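The copy is a single API call; in this boto3 sketch (the AMI ID and name are hypothetical), the client is created in the destination region:

```python
import boto3

# Copy an AMI from us-east-1 into us-west-2; the copy gets a new AMI ID
ec2_west = boto3.client("ec2", region_name="us-west-2")
copy = ec2_west.copy_image(
    Name="my-app-ami-copy",
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion="us-east-1",
)
print(copy["ImageId"])  # the new, region-local AMI ID
```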
Q18: Which of the following statements is true about the Elastic File System (EFS)?
A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
B. EFS can be used by multiple EC2 instances simultaneously
C. EFS cannot be used by an instance using EBS
D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes
Q19: Which of the following are benefits of using IAM groups? (Select TWO.)
A. The ability to create custom permission policies.
B. Assigning IAM permission policies to more than one user at a time.
C. Easier user/policy management.
D. Allowing EC2 instances to gain access to S3.
B. and C.
A. is incorrect: This is a benefit of IAM generally or a benefit of IAM policies. But IAM groups don’t create policies, they have policies attached to them.
Q22: What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?
A. Amazon DynamoDB auto scaling
B. Amazon DynamoDB cross-region replication
C. Amazon DynamoDB Streams
D. Amazon DynamoDB Accelerator
D. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:
As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
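As a rough sketch, a DAX cluster can be provisioned through boto3; the names, node type, and role ARN below are hypothetical, and the IAM role must grant DAX access to the underlying DynamoDB table:

```python
import boto3

dax = boto3.client("dax")

dax.create_cluster(
    ClusterName="orders-dax",
    NodeType="dax.r4.large",
    ReplicationFactor=3,  # one primary node plus two read replicas
    IamRoleArn="arn:aws:iam::123456789012:role/DAXServiceRole",
)
```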
Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?
A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1.micro to m4.large.
B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.
B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence. Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation. During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:
Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options that apply at this level unless overridden.
Saved configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
Configuration files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle.
Configuration files are executed in alphabetical order. For example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
Default values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.
If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or applied directly to the environment, the setting is stored as part of the environment’s configuration. These settings can be removed with the AWS CLI or with the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.
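For illustration, the instance type can be supplied as an option setting at environment creation time; in this boto3 sketch the application name, environment name, and solution stack string are hypothetical:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

eb.create_environment(
    ApplicationName="my-app",
    EnvironmentName="my-app-prod",
    SolutionStackName="64bit Amazon Linux 2 v3.5.0 running Python 3.8",
    OptionSettings=[
        {
            # This namespace/option pair controls the EC2 instance type
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "InstanceType",
            "Value": "m4.large",
        }
    ],
)
```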
Q24: What statements are true about Availability Zones (AZs) and Regions?
A. There is only one AZ in each AWS Region
B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
C. AZs can be moved between AWS Regions based on your needs
D. There are (almost always) two or more AZs in each AWS Region
B. and D. Availability Zones are physically separated locations within a Region, engineered to be isolated from failures (including many natural disasters) in other AZs, and each Region contains two or more AZs. AZs cannot be moved between Regions.
Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?
A. Eventual Consistent Reads
B. Conditional reads for Consistency
C. Strongly Consistent Reads
D. Not possible
C. This is stated clearly in the AWS documentation on read consistency for DynamoDB: only with strongly consistent reads are you guaranteed to read the most up-to-date value, reflecting all prior successful writes.
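In code, the consistency mode is chosen per request. A minimal boto3 sketch with a hypothetical table and key:

```python
import boto3

table = boto3.resource("dynamodb").Table("Orders")  # hypothetical table

# ConsistentRead=True requests a strongly consistent read; the default
# (False) is an eventually consistent read at half the RCU cost
resp = table.get_item(Key={"OrderId": "1001"}, ConsistentRead=True)
item = resp.get("Item")
```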
Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers, and you need to ensure that minimal effort is required during the migration. Which of the following steps would you consider for this requirement?
A. Create an OpsWorks stack and deploy the Docker containers
B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.
B. The Elastic Beanstalk service is the ideal service to quickly provision development environments, and its environments can host Docker-based containers.
Q28: You’ve written an application that uploads objects onto an S3 bucket. The size of the objects varies between 200–500 MB. You’ve noticed that the application sometimes takes longer than expected to upload an object, and you want to improve its performance. Which of the following would you consider?
A. Create multiple threads and upload the objects in the multiple threads
B. Write the items in batches for better performance
C. Use the Multipart upload API
D. Enable versioning on the Bucket
C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
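With boto3, upload_file performs multipart uploads automatically once the object exceeds a threshold; this sketch (bucket and file names are hypothetical) tunes those thresholds explicitly:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Split large objects into 25 MB parts and upload up to 10 parts in parallel
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # use multipart above 100 MB
    multipart_chunksize=25 * 1024 * 1024,
    max_concurrency=10,
)
s3.upload_file("video.mp4", "my-upload-bucket", "uploads/video.mp4", Config=config)
```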
Q29: A security system monitors 600 cameras, saving image metadata every 1 minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?
A. 6000
B. 10
C. 3600
D. 600
B. Write capacity for a DynamoDB table is specified as the number of 1 KB writes per second. Since each camera writes once per minute, divide 600 by 60 to get the number of 1 KB writes per second, which gives a value of 10.
You can specify the Write capacity in the Capacity tab of the DynamoDB table.
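The same arithmetic, written out:

```python
# 600 cameras x 1 write per minute = 600 writes per minute
writes_per_second = 600 / 60   # = 10 writes per second
wcu_per_write = 1              # each item is 1 KB, and 1 WCU covers a 1 KB write
print(writes_per_second * wcu_per_write)  # 10.0 write capacity units
```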
Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?
A. Lazy loading
B. Write-through
C. Error retries
D. Exponential backoff
Answer – A. Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests the data from your data store, which returns it to your application. Your application then writes the data received from the store to the cache so it can be retrieved more quickly the next time it is requested. All other options are incorrect. Reference: Caching Strategies
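A minimal cache-aside (lazy loading) sketch in Python, assuming an ElastiCache for Redis endpoint (the hostname is hypothetical) and some db_query function that reads from RDS:

```python
import json
import redis  # client for an ElastiCache for Redis endpoint

cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_record(record_id, db_query, ttl_seconds=300):
    """Cache-aside (lazy loading): try the cache first, fall back to the DB."""
    key = f"record:{record_id}"
    cached = cache.get(key)
    if cached is not None:          # cache hit: no RDS call needed
        return json.loads(cached)
    record = db_query(record_id)    # cache miss: read from RDS
    cache.setex(key, ttl_seconds, json.dumps(record))  # populate for next time
    return record
```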
Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15–60 seconds. How should the Developer efficiently query the queue for new messages?
A. Use long polling
B. Set a custom visibility timeout
C. Use short polling
D. Implement exponential backoff
Answer – A. Long polling helps ensure that the application makes fewer receive requests over a given period, which is more cost effective. Since the messages only become available after 15–60 seconds and we don’t know exactly when, long polling is the better choice. Reference: Amazon SQS Long Polling
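A minimal boto3 sketch of long polling (the queue URL is hypothetical); WaitTimeSeconds up to 20 makes the call block until a message arrives or the wait expires:

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"

# Long polling: waits up to 20 seconds instead of returning empty immediately
resp = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
for msg in resp.get("Messages", []):
    print(msg["Body"])  # handle the message here
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```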
Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With the new Lambda function working as expected, which of the following deployment preferences will shift traffic from the original Lambda function to the new one in the shortest time frame?
A. Canary10Percent5Minutes
B. Linear10PercentEvery10Minutes
C. Canary10Percent15Minutes
D. Linear10PercentEvery1Minute
Answer – A. With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment and the remaining 90 percent is shifted after 5 minutes. Reference: Gradual Code Deployment
Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed a nested application from an Amazon S3 bucket?
A. AWS::Serverless::Api
B. AWS::Serverless::Application
C. AWS::Serverless::LayerVersion
D. AWS::Serverless::Function
Answer – B. The AWS::Serverless::Application resource in an AWS SAM template is used to embed a nested application from an Amazon S3 bucket. Reference: Declaring Serverless Resources
Q35: You are using AWS envelope encryption for encrypting all sensitive data. Which of the following is true with regard to envelope encryption?
A. Data is encrypted by an encrypted Data key which is further encrypted using an encrypted Master Key.
B. Data is encrypted by plaintext Data key which is further encrypted using encrypted Master Key.
C. Data is encrypted by encrypted Data key which is further encrypted using plaintext Master Key.
D. Data is encrypted by plaintext Data key which is further encrypted using plaintext Master Key.
Answer – D. With envelope encryption, unencrypted data is encrypted using a plaintext data key. That data key is in turn encrypted using a plaintext master key, which is securely stored in AWS KMS and known as a customer master key (CMK). Reference: AWS Key Management Service Concepts
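The flow can be seen in the KMS GenerateDataKey call, sketched here with a hypothetical CMK alias:

```python
import boto3

kms = boto3.client("kms")

# GenerateDataKey returns the data key twice: in plaintext (used to encrypt
# data locally, then discarded) and encrypted under the CMK (stored with
# the ciphertext so the key can be recovered later)
resp = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")

plaintext_key = resp["Plaintext"]       # encrypt your data with this, then discard it
encrypted_key = resp["CiphertextBlob"]  # persist this; call kms.decrypt() later
```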
Q36: You are developing an application that will be comprised of the following architecture:
A set of EC2 instances to process the videos.
These EC2 instances will be spun up by an Auto Scaling group.
SQS queues to maintain the processing messages.
There will be 2 pricing tiers.
How will you ensure that the premium customers’ videos are given more preference?
A. Create 2 Auto Scaling groups, one for normal and one for premium customers
B. Create 2 sets of EC2 instances, one for normal and one for premium customers
C. Create 2 SQS queues, one for normal and one for premium customers
D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.
Answer – C. The ideal option is to create 2 SQS queues; the application can then process messages from the high-priority (premium) queue first. The other options are not ideal: they would lead to extra costs and extra maintenance. Reference: SQS
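One possible polling pattern, sketched with hypothetical queue URLs: drain the premium queue first and fall back to the standard queue only when it is empty:

```python
import boto3

sqs = boto3.client("sqs")

PREMIUM_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/videos-premium"
STANDARD_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/videos-standard"

def next_message():
    """Return the next message to process, always preferring premium."""
    for url in (PREMIUM_URL, STANDARD_URL):
        resp = sqs.receive_message(QueueUrl=url, MaxNumberOfMessages=1)
        if resp.get("Messages"):
            return url, resp["Messages"][0]
    return None, None  # both queues empty
```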
Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?
A. CustomerID
B. CustomerName
C. Location
D. Age
Answer – A. Use high-cardinality attributes: attributes that have distinct values for each item, such as email ID, employee number, customer ID, session ID, or order ID. You can also use composite attributes, combining more than one attribute to form a unique key. Reference: Choosing the Right DynamoDB Partition Key
Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for network delays. Which of the following two mechanisms should he implement in the application?
A. Multiple SQS queues
B. Exponential backoff algorithm
C. Retries in your application code
D. Consider using the Java SDK.
Answer – B. and C. In addition to simple retries, each AWS SDK implements an exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values, and should be set based on the operation being performed, as well as other local factors, such as network latency. Reference: Error Retries and Exponential Backoff in AWS
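A generic sketch of jittered exponential backoff in Python (the retry counts and delays are illustrative defaults, not AWS-mandated values):

```python
import random
import time

def call_with_backoff(operation, max_retries=5, base_delay=0.1, max_delay=5.0):
    """Retry an operation with jittered exponential backoff."""
    for attempt in range(max_retries):
        try:
            return operation()
        except Exception:           # in real code, catch only retryable errors
            if attempt == max_retries - 1:
                raise               # give up after the final attempt
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * random.uniform(0.5, 1.0))  # add jitter
```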
Q39: An application is being developed that is going to write data to a DynamoDB table. You have to setup the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds. Each item is of size 6KB. The reads can be eventual consistent reads. What should be the read capacity that needs to be set on the table?
A. 10
B. 20
C. 6
D. 30
Answer – A
Since 300 items are read every 30 seconds, that is 300/30 = 10 items read per second. Each item is 6 KB in size, so 2 reads are required per item (one read capacity unit covers up to 4 KB). That gives a total of 2 × 10 = 20 reads per second. Since eventual consistency is acceptable, divide the number of reads (20) by 2, giving a read capacity of 10.
Q40: You are in charge of deploying an application that will be hosted on an EC2 Instance and sit behind an Elastic Load balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the below options can suffice this requirement?
A. Use AWS CloudTrail with your load balancer
B. Enable access logs on the load balancer
C. Use a CloudWatch Logs Agent
D. Create a custom metric CloudWatch filter on your load balancer
Answer – B. Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues. Reference: Access Logs for Your Application Load Balancer
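Access logging is a load balancer attribute; a boto3 sketch with a hypothetical ARN and bucket (the bucket policy must allow ELB log delivery to write to it):

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/my-lb/50dc6c495c0c9188"
    ),
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-elb-logs"},
    ],
)
```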
Q41: A static web site has been hosted on a bucket and is now being accessed by users. The JavaScript section of one of the web pages has been changed to access data hosted in another S3 bucket. Now that web page no longer loads in the browser. Which of the following can help alleviate the error?
A. Enable versioning for the underlying S3 bucket.
B. Enable Replication so that the objects get replicated to the other bucket
C. Enable CORS for the bucket
D. Change the Bucket policy for the bucket to allow access from the other bucket
Answer – C
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
Cross-Origin Resource Sharing: Use-case Scenarios
The following are example scenarios for using CORS:
Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.
Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.
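A CORS configuration along the lines of Scenario 1 could be applied with boto3 as follows (the bucket and origin are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Allow the static site's origin to make GET/PUT requests to this bucket
s3.put_bucket_cors(
    Bucket="my-data-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET", "PUT"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,  # how long browsers may cache the preflight
            }
        ]
    },
)
```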
Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below.
A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.
Answer – C. The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3. You can then provide access to the objects based on the key values generated via the user ID.
Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.
A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.
Answer – C. Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, and IoT telemetry data into your databases, data lakes, and data warehouses, or build your own real-time applications using this data. Reference: Amazon Kinesis
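Ingesting a log event into a stream is a single call; a boto3 sketch with a hypothetical stream name:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Each log event becomes one record; the partition key spreads records
# across shards (here, by user)
kinesis.put_record(
    StreamName="web-app-logs",
    Data=json.dumps({"user": "alice", "path": "/home", "ts": 1700000000}).encode("utf-8"),
    PartitionKey="alice",
)
```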
Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?
A. AWS Simple Storage Service
B. AWS DynamoDB
C. AWS RDS
D. AWS Redshift
Answer – B. DynamoDB is a good solution for storing session state: the latency of access to data is low, so it works well as a session-management data store. Reference: Scalable Session Handling in PHP Using Amazon DynamoDB
Q45: Your application currently interacts with a DynamoDB table, and records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the below features should be used when developing such a solution?
A. AWS DynamoDB Encryption
B. AWS DynamoDB Streams
C. AWS DynamoDB Accelerator
D. AWS Table Accelerator
Answer – B. From DynamoDB Streams Use Cases and Design Patterns: this post describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. We will consider how to manage the following scenarios:
How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
How do you trigger an event based on a particular transaction?
How do you audit or archive transactions?
How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?
Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states—success or failure. In other words, there is no partial completion. As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it’s consistent and can be rolled back before commit. You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time. AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region. All of the other options are incorrect since none of these would meet the core requirement. Reference: DynamoDB Streams Use Cases and Design Patterns
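Enabling the stream on the primary table is a one-call change, sketched here with boto3 (the table name is hypothetical); a Lambda trigger or stream consumer can then insert a corresponding record into the secondary table for every update:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Emit a time-ordered stream of item-level changes from the primary table
dynamodb.update_table(
    TableName="PrimaryTable",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",  # capture before/after images
    },
)
```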
Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and the scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?
A. Large Page size
B. Reduced page size
C. Parallel Scans
D. Sequential scans
Answer – B. When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table’s provisioned throughput. Reduce page size: because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling. Reference: Rate-Limited Scans in Amazon DynamoDB
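A paginated Scan with a reduced page size might look like this boto3 sketch (the table name and page size are hypothetical):

```python
import boto3

table = boto3.resource("dynamodb").Table("BigTable")  # hypothetical table

# A small Limit caps the RCUs each Scan page consumes, and the pause
# between pages lets other requests through without throttling
kwargs = {"Limit": 40}
while True:
    page = table.scan(**kwargs)
    for item in page["Items"]:
        print(item)  # process each item here
    if "LastEvaluatedKey" not in page:
        break        # no more pages
    kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
```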
Q47: Which of the following are correct ways of passing a stage variable to an HTTP URL? (Select TWO.)
A. http://example.com/${}/prod
B. http://example.com/${stageVariables.}/prod
C. http://${stageVariables.}.example.com/dev/operation
D. http://${stageVariables}.example.com/dev/operation
E. http://${}.example.com/dev/operation
F. http://example.com/${stageVariables}/prod
Answer – B. and C. A stage variable can be used as part of an HTTP integration URL in the following cases: a full URI without protocol, a full domain, a subdomain, a path, or a query string. In the above case, options B and C use the stage variable as a path and a subdomain, respectively. Reference: Amazon API Gateway Stage Variables Reference
Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes, which they use for their on-premises server configuration. Which of the following services would be ideal to use in this regard?
A. AWS Elastic Beanstalk
B. AWS OpsWorks
C. AWS CloudFormation
D. AWS SQS
Answer – B. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management. Reference: AWS OpsWorks
Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?
A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
B. Configure S3 bucket tags with your AWS access keys for the bucket hosting your website so that the application can query them for access.
C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.
Answer – C. With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since the AWS access keys should not be used. Reference: About Web Identity Federation
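The token exchange described above maps to a single STS call; in this boto3 sketch the role ARN and IdP token are hypothetical placeholders:

```python
import boto3

sts = boto3.client("sts")

# Exchange the IdP token (e.g. from Google) for temporary AWS credentials
# scoped to a pre-created IAM role
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/WebAppDynamoRole",
    RoleSessionName="web-user-session",
    WebIdentityToken="<token-from-identity-provider>",
)["Credentials"]

# Use the temporary credentials instead of embedding long-term keys
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```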
Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?
A. Cognito Data
B. Cognito Events
C. Cognito Streams
D. Cognito Callbacks
Answer – C Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can now configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams. Reference: Amazon Cognito Streams
Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? Choose 2 answers from the options given below
A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function
Answer: A and C. AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC. Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC
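For reference, attaching a function to a VPC comes down to supplying exactly those two lists. A minimal boto3 sketch (the function name and IDs are hypothetical):

```python
import boto3

lam = boto3.client("lambda")
# Subnet and security group IDs are placeholders; Lambda uses them to create ENIs.
lam.update_function_configuration(
    FunctionName="my-vpc-function",
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0abc5678"],
        "SecurityGroupIds": ["sg-0def9012"],
    },
)
```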
Q52: You’ve currently been tasked to migrate an existing on-premise environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant environments in the beanstalk service that would be suitable to host your application. What should you consider doing in this case?
A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
B. Consider using Cloudformation to deploy your environment to Elastic Beanstalk
C. Consider using Packer to create a custom platform
D. Consider deploying your application using the Elastic Container Service
Answer – C Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a Custom Image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software, for which Elastic Beanstalk doesn’t provide a platform out of the box. Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance. To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems—Ubuntu, RHEL, or Amazon Linux (see the flavor entry in Platform.yaml File Format for the exact version numbers)—and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings. Reference: AWS Elastic Beanstalk Custom Platforms
Q53: Company B is writing 10 items to a DynamoDB table every second. Each item is 15.5 KB in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.
A. 10
B. 160
C. 155
D. 16
Answer – B. Write capacity is provisioned in units of 1 KB per second, and each item’s size is rounded up to the next whole kilobyte. A 15.5 KB item therefore consumes 16 write capacity units, so writing 10 items per second requires 10 × 16 = 160 write capacity units. Reference: Read/Write Capacity Mode
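The same arithmetic as a quick Python check:

```python
import math

items_per_second = 10
item_size_kb = 15.5

# Writes are billed in 1 KB units, with item size rounded up to the next whole KB.
wcu = items_per_second * math.ceil(item_size_kb)
print(wcu)  # 160
```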
Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?
Answer: Continuous integration. CI is the practice of frequently merging code changes into a shared repository, where automated builds and tests surface bugs early in the release process.
Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?
A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.
Answer: C. Configuring notifications in the CodeCommit console creates a CloudWatch Events rule that publishes to an SNS topic; users subscribed to the topic receive an email whenever code is pushed to the repository.
Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?
A. Transforms
B. Outputs
C. Resources
D. Instances
Answer: C. The Resources section defines the resources you are provisioning. Outputs declares user-defined values relating to the resources you have built, which can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3. Reference: Resources
Q64: Which AWS service can be used to fully automate your entire release process?
A. CodeDeploy
B. CodePipeline
C. CodeCommit
D. CodeBuild
Answer: B. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?
A. Outputs
B. Transforms
C. Resources
D. Exports
Answer: A. Outputs declares user-defined values relating to the resources you have built, which can also be used as input to another CloudFormation stack. Reference: CloudFormation Outputs
Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?
A. Inputs
B. Resources
C. Transforms
D. Files
Answer: C. Transforms is used to reference code located in S3 and to specify the use of the Serverless Application Model (SAM) for Lambda deployments. Reference: Transforms
Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?
Answer: The AppSpec file (appspec.yml). It specifies the source files to copy and the lifecycle event hooks to run during each deployment.
Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?
A. Share the code using an EBS volume
B. Copy and paste the code into the template each time you need to use it
C. Use a CloudFormation nested stack
D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.
Answer: C. A nested stack lets you declare a common template as a resource inside other templates, so the same code can be re-used for common components such as load balancers or web servers.
Q72: Which of the following is an encrypted key used by KMS to encrypt your data?
A. Customer Managed Key
B. Encryption Key
C. Envelope Key
D. Customer Master Key
Answer: C. Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption: the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.
Q75: A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.) A. Compiled application code B. Java runtime environment C. References to the event sources D. Lambda execution role E. Application dependencies
Answer: A. E. Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your compiled application code and any dependencies; the runtime, execution role, and event sources are configured on the function separately. Reference: Lambda deployment packages.
Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package? A. A launch template for the Amazon EC2 Auto Scaling group B. A CodeDeploy AppSpec file C. An EC2 role that grants the application access to AWS services D. An IAM policy that grants the application access to AWS services
Answer: B. Notes: The CodeDeploy AppSpec (application specific) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file. Reference: CodeDeploy application specification (AppSpec) files. Category: Deployment
Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)
A. Create a new Lambda version every time a new code release needs testing. B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version. C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT. D. Create a new Lambda layer every time a new code release needs testing. E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.
Answer: A. B. Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes, without affecting the stable production version of the code. By creating separate aliases for Production and Development, systems can invoke the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. The $LATEST version is the newest published version. Reference: Lambda function versions.
Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.) A. Update event source mappings with the ARN of the Lambda layer. B. Point a Lambda alias to a new version of the Lambda function. C. Create a Lambda alias for each published version of the Lambda function. D. Point a Lambda alias to a new Lambda function alias. E. Update the event source mappings with the Lambda alias ARN.
Answer: B. E. Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version. Reference: Lambda function aliases. Category: Deployment
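A minimal boto3 sketch of that promotion step (the function name, alias, and version number are hypothetical): repointing the alias means every event source mapping that references the alias ARN keeps working unchanged.

```python
import boto3

lam = boto3.client("lambda")
# Promote version 7 to production by repointing the PROD alias.
# Event source mappings reference the alias ARN, so they need no update.
lam.update_alias(
    FunctionName="order-processor",  # hypothetical function
    Name="PROD",
    FunctionVersion="7",
)
```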
Q78: A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements? A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C). B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket. C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket. D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.
Answer: D. Notes: With SSE-C, the company supplies and manages the encryption keys while Amazon S3 performs the encryption and decryption; S3 default bucket encryption does not support customer-provided keys. Reference: Protecting data using server-side encryption with customer-provided encryption keys (SSE-C).
Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)
A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS). B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS). C. Use generated keys with the DynamoDB Encryption Client. D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs). E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).
Answer: A. C. Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK. Reference: Direct KMS Materials Provider. Category: Deployment
Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.) A. Create an AWS Lambda authorizer for the API. B. Create an Amazon Cognito authorizer for the API. C. Configure the authorizer for the API resource. D. Configure the API methods to use the authorizer. E. Configure the authorizer for the API stage.
Answer: B. D. Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API. Reference: Control access to a REST API using Amazon Cognito user pools as authorizer. Category: Security
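To make the two steps concrete, here is a hedged boto3 sketch (the API ID, resource ID, and user pool ARN are hypothetical) that creates a Cognito authorizer and attaches it to an API method:

```python
import boto3

apigw = boto3.client("apigateway")

# Step 1: create a Cognito user pool authorizer for the REST API.
authorizer = apigw.create_authorizer(
    restApiId="a1b2c3d4e5",  # hypothetical API ID
    name="CognitoUserPoolAuthorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"],
    identitySource="method.request.header.Authorization",
)

# Step 2: configure an API method to use that authorizer.
apigw.put_method(
    restApiId="a1b2c3d4e5",
    resourceId="res123",  # hypothetical resource ID
    httpMethod="GET",
    authorizationType="COGNITO_USER_POOLS",
    authorizerId=authorizer["id"],
)
```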
Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.) A. Authenticate to the Amazon Cognito identity pool directly. B. Authenticate to AWS Identity and Access Management (IAM) directly. C. Authenticate to the Amazon Cognito user pool directly. D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS). E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.
Answer: C. E. Notes: The Amazon Cognito user pool provides direct user authentication. The Amazon Cognito user pool also provides a federated authentication option with a third-party identity provider (IdP), including amazon.com. Reference: Adding User Pool Sign-in Through a Third Party. Category: Security
Q82: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.) A. Define an AWS Step Functions task for each Lambda function. B. Define an AWS Step Functions task for each workflow. C. Write code that polls the AWS Step Functions invocation to coordinate each workflow. D. Define an AWS Step Functions state machine for each workflow. E. Define an AWS Step Functions state machine for each Lambda function.
Answer: A. D. Notes: Step Functions is based on state machines and tasks. A state machine is a workflow: it expresses a number of states, their relationships, and their input and output. Tasks perform work by coordinating with other AWS services, such as Lambda. You can coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine, written in the Amazon States Language. Reference: Getting Started with AWS Step Functions (https://aws.amazon.com/step-functions/getting-started/). Category: Development
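A minimal sketch of that pattern, assuming hypothetical names and ARNs: one state machine per workflow, with each Lambda function wrapped in a Task state defined in the Amazon States Language.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# One workflow = one state machine; each Lambda function is a Task state.
definition = {
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-order",
            "Next": "NotifyCustomer",
        },
        "NotifyCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:notify-customer",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="order-processing-workflow",  # hypothetical name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)
```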
Content outline This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context. The percentage in each domain represents only scored content.
Domain 1: Deployment 22% Domain 2: Security 26% Domain 3: Development with AWS Services 30% Domain 4: Refactoring 10% Domain 5: Monitoring and Troubleshooting 12%
Domain 1: Deployment 1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns. – Commit code to a repository and invoke build, test and/or deployment actions – Use labels and branches for version and release management – Use AWS CodePipeline to orchestrate workflows against different environments – Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS CodeDeploy for CI/CD purposes – Perform a roll back plan based on application deployment policy
1.2 Deploy applications using AWS Elastic Beanstalk. – Utilize existing supported environments to define a new application stack – Package the application – Introduce a new application version into the Elastic Beanstalk environment – Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable) – Validate application health using Elastic Beanstalk dashboard – Use Amazon CloudWatch Logs to instrument application logging
1.3 Prepare the application deployment package to be deployed to AWS. – Manage the dependencies of the code module (like environment variables, config files and static image files) within the package – Outline the package/container directory structure and organize files appropriately – Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)
1.4 Deploy serverless applications. – Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template – Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)
Domain 2: Security 2.1 Make authenticated calls to AWS services. – Communicate required policy based on least privileges required by application. – Assume an IAM role to access a service – Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)
2.2 Implement encryption using AWS services. – Encrypt data at rest (client side; server side; envelope encryption) using AWS services – Encrypt data in transit
2.3 Implement application authentication and authorization. – Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools – Use Amazon Cognito-provided credentials to write code that accesses AWS services. – Use Amazon Cognito sync to synchronize user profiles and data – Use developer-authenticated identities to interact between end user devices, backend authentication, and Amazon Cognito
Domain 3: Development with AWS Services 3.1 Write code for serverless applications. – Compare and contrast server-based vs. serverless model (e.g., micro services, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications) – Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler) – Create an API endpoint using Amazon API Gateway – Create and test appropriate API actions like GET, POST using the API endpoint – Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes) – Compute read/write capacity units for Amazon DynamoDB based on application requirements – Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis) – Invoke an AWS Lambda function synchronously and asynchronously
3.2 Translate functional requirements into application design. – Determine real-time vs. batch processing for a given use case – Determine use of synchronous vs. asynchronous for a given use case – Determine use of event vs. schedule/poll for a given use case – Account for tradeoffs for consistency models in an application design
Domain 4: Refactoring 4.1 Optimize applications to best use AWS services and features. – Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache) – Apply an Amazon S3 naming scheme for optimal read performance
4.2 Migrate existing application code to run on AWS. – Isolate dependencies – Run the application as one or more stateless processes – Develop in order to enable horizontal scalability – Externalize state
Domain 5: Monitoring and Troubleshooting
5.1 Write code that can be monitored. – Create custom Amazon CloudWatch metrics – Perform logging in a manner available to systems operators – Instrument application source code to enable tracing in AWS X-Ray
5.2 Perform root cause analysis on faults found in testing or production. – Interpret the outputs from the logging mechanism in AWS to identify errors in logs – Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues – Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component
Which key tools, technologies, and concepts might be covered on the exam?
The following is a non-exhaustive list of the tools and technologies that could appear on the exam. This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam. The general tools and technologies in this list appear in no particular order. AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance: – Analytics – Application Integration – Containers – Cost and Capacity Management – Data Movement – Developer Tools – Instances (virtual machines) – Management and Governance – Networking and Content Delivery – Security – Serverless
Management and Governance: – AWS CloudFormation – Amazon CloudWatch
Networking and Content Delivery: – Amazon API Gateway – Amazon CloudFront – Elastic Load Balancing
Security, Identity, and Compliance: – Amazon Cognito – AWS Identity and Access Management (IAM) – AWS Key Management Service (AWS KMS)
Storage: – Amazon S3
Out-of-scope AWS services and features
The following is a non-exhaustive list of AWS services and features that are not covered on the exam. These services and features do not represent every AWS offering that is excluded from the exam content. Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant. Out-of-scope AWS services and features include the following: – AWS Application Discovery Service – Amazon AppStream 2.0 – Amazon Chime – Amazon Connect – AWS Database Migration Service (AWS DMS) – AWS Device Farm – Amazon Elastic Transcoder – Amazon GameLift – Amazon Lex – Amazon Machine Learning (Amazon ML) – AWS Managed Services – Amazon Mobile Analytics – Amazon Polly
– Amazon QuickSight – Amazon Rekognition – AWS Server Migration Service (AWS SMS) – AWS Service Catalog – AWS Shield Advanced – AWS Shield Standard – AWS Snow Family – AWS Storage Gateway – AWS WAF – Amazon WorkMail – Amazon WorkSpaces
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
AWS Certified Developer – Associate Practice Questions And Answers Dump
Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost. How can you accommodate the partners’ broken web services without wasting your resources?
A. Create a delay queue and set DelaySeconds to 30 seconds
B. Requeue the message with a VisibilityTimeout of 30 seconds.
C. Create a dead letter queue and set the Maximum Receives to 3.
D. Requeue the message with a DelaySeconds of 30 seconds.
C. After a message has been received and returned to the queue the maximum number of times (the Maximum Receives setting), it is automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.
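A hedged boto3 sketch of that configuration (the queue URL and DLQ ARN are placeholders): the redrive policy moves a message to the dead letter queue after three unsuccessful receives.

```python
import json
import boto3

sqs = boto3.client("sqs")
# After 3 receives without deletion, SQS moves the message to the DLQ.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/partner-commands",
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:partner-commands-dlq",
            "maxReceiveCount": "3",
        })
    },
)
```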
Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of reads operations to write operations will be 1000 to 1, with the same data being accessed frequently. What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?
A. Amazon DynamoDB auto scaling
B. Amazon DynamoDB cross-region replication
C. Amazon DynamoDB Streams
D. Amazon DynamoDB Accelerator
D. The AWS Documentation mentions the following:
DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios
As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
Q2: You are creating a DynamoDB table with the following attributes:
PurchaseOrderNumber (partition key)
CustomerID
PurchaseDate
TotalPurchaseValue
One of your applications must retrieve items from the table to calculate the total value of purchases for a particular customer over a date range. What secondary index do you need to add to the table?
A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the TotalPurchaseValue into the index provides all the data needed to satisfy the use case.
Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
Local Secondary Indexes still rely on the original Hash Key. When you supply a table with hash+range, think about the LSI as hash+range1, hash+range2.. hash+range6. You get 5 more range attributes to query on. Also, there is only one provisioned throughput.
Global Secondary Indexes defines a new paradigm – different hash/range keys per index. This breaks the original usage of one hash key per table. This is also why when defining GSI you are required to add a provisioned throughput per index and pay for it.
Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.
Throughput:
Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table and another for the index. Both operations will consume write capacity units from the table.
Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation will consume read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table and another for the index.
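To tie this back to Q2, here is a hedged boto3 sketch (the table and index names are hypothetical) that adds a GSI keyed on CustomerID/PurchaseDate with TotalPurchaseValue projected; note that the index gets its own provisioned throughput:

```python
import boto3

ddb = boto3.client("dynamodb")
# Adds a GSI to an existing table; a GSI can use a different partition key
# than the table and carries its own provisioned throughput.
ddb.update_table(
    TableName="PurchaseOrders",  # hypothetical table
    AttributeDefinitions=[
        {"AttributeName": "CustomerID", "AttributeType": "S"},
        {"AttributeName": "PurchaseDate", "AttributeType": "S"},
    ],
    GlobalSecondaryIndexUpdates=[{
        "Create": {
            "IndexName": "CustomerID-PurchaseDate-index",
            "KeySchema": [
                {"AttributeName": "CustomerID", "KeyType": "HASH"},
                {"AttributeName": "PurchaseDate", "KeyType": "RANGE"},
            ],
            # Project only the attribute the query needs.
            "Projection": {
                "ProjectionType": "INCLUDE",
                "NonKeyAttributes": ["TotalPurchaseValue"],
            },
            "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        }
    }],
)
```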
Q5: Lambda allows you to upload code and dependencies for function packages:
A. Only from a directly uploaded zip file
B. Only via SFTP
C. Only from a zip file in AWS S3
D. From a zip file in AWS S3 or uploaded directly from elsewhere
D. You can provide the deployment package as a .zip file uploaded directly, or reference a .zip file stored in an Amazon S3 bucket.
Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?
A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.
D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return requests. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return requests.
Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?
A. Create and assign EIP to each instance
B. Create and attach a second IGW to the VPC.
C. Create and utilize a NAT Gateway
D. Connect to a VPN
C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.
Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?
A. Security Groups
B. Route Tables
C. Elastic Load Balancer
D. Auto Scaling
D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.
Q11: You’re writing a script with an AWS SDK that uses AWS API actions to create AMIs from non-EBS-backed instances. Which API call occurs in the final step of creating an AMI?
A. RegisterImage
B. CreateImage
C. ami-register-image
D. ami-create-image
A. It is RegisterImage. All AWS API actions follow this capitalization convention and do not contain hyphens; hyphenated forms such as ami-register-image are CLI commands, not API actions.
Q12: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?
A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
B. Permanently assigning users to specific instances and always routing their traffic to those instances
C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance
A. Offloading session state to a caching layer such as ElastiCache (Redis or Memcached) keeps the EC2 instances stateless and is generally considered the best practice; cookie-based sticky sessions tie users to individual instances.
Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?
A. Autoscaling requires using Amazon EBS-backed instances
B. Virtual Private Cloud requires EBS backed instances
C. Amazon EBS-backed instances can be stopped and restarted without losing data
D. Instance-store backed instances can be stopped and restarted without losing data
C. Instance-store backed images use “ephemeral” storage (temporary). The storage is only available during the life of an instance. Rebooting an instance allows ephemeral data to persist. However, stopping and starting an instance will remove all ephemeral storage.
Q15: After having created a new Linux instance on Amazon EC2 and downloaded the key pair file (my_key.pem), you try to SSH into the instance’s public IP address using the following command: ssh -i my_key.pem ec2-user@52.2.222.22. However, you receive the following error: WARNING: UNPROTECTED PRIVATE KEY FILE! What is the most probable reason for this, and how can you fix it?
A. You do not have root access on your terminal and need to use the sudo option for this to work.
B. You do not have enough permissions to perform the operation.
C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.
D. You need to run something like: chmod 400 my_key.pem
Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?
A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.
D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.
Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:
A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
B. Can only be used to launch EC2 instances in the same country as the AMI is stored
C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
D. Can be used to launch EC2 instances in any AWS region
C. AMIs are only available in the region they are created. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another.
Q18: Which of the following statements is true about the Elastic File System (EFS)?
A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
B. EFS can be used by multiple EC2 instances simultaneously
C. EFS cannot be used by an instance using EBS
D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes
Q21: Which of the following are benefits of using IAM groups? Choose 2 answers from the options given below.
A. The ability to create custom permission policies.
B. Assigning IAM permission policies to more than one user at a time.
C. Easier user/policy management.
D. Allowing EC2 instances to gain access to S3.
B. and C.
A. is incorrect: this is a benefit of IAM generally, or a benefit of IAM policies. IAM groups don’t create policies; they have policies attached to them.
Q22: A developer is writing an application that will store data in a DynamoDB table, where the ratio of read operations to write operations is 1000 to 1 and the same data is accessed frequently. What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?
A. Amazon DynamoDB auto scaling
B. Amazon DynamoDB cross-region replication
C. Amazon DynamoDB Streams
D. Amazon DynamoDB Accelerator
D. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:
As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?
A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1 micro to m4.large.
B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.
B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence. Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation. During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:
Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options that apply at this level unless overridden.
Saved Configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
Configuration Files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle.
Configuration files are executed in alphabetical order. For example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
Default Values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.
If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or applied directly to the environment, the setting is stored as part of the environment’s configuration. These settings can be removed with the AWS CLI or with the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.
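As a rough sketch of option B in practice (the application, environment, and template names are hypothetical), a saved configuration can be applied at environment-creation time via boto3:

```python
import boto3

eb = boto3.client("elasticbeanstalk")
# "prod-m4large" would be a saved configuration that sets InstanceType to m4.large.
eb.create_environment(
    ApplicationName="my-web-app",       # hypothetical application
    EnvironmentName="my-web-app-prod",  # hypothetical environment
    TemplateName="prod-m4large",        # saved configuration to apply
    VersionLabel="v1",                  # application version to deploy
)
```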
Q24: What statements are true about Availability Zones (AZs) and Regions?
A. There is only one AZ in each AWS Region
B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
C. AZs can be moved between AWS Regions based on your needs
D. There are (almost always) two or more AZs in each AWS Region
B. and D. Each AWS Region contains multiple (almost always two or more) Availability Zones that are physically separated to limit the impact of natural disasters, and AZs are fixed to their Region; they cannot be moved.
Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?
A. Eventual Consistent Reads
B. Conditional reads for Consistency
C. Strongly Consistent Reads
D. Not possible
C. This is stated very clearly in the AWS documentation on read consistency for DynamoDB. Only with strongly consistent reads are you guaranteed to get the most up-to-date value after all prior successful writes.
Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is taken during the migration process. Which of the following steps would you consider for this requirement?
A. Create an Opswork stack and deploy the Docker containers
B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.
B. The Elastic Beanstalk service is the ideal service to quickly provision development environments. You can also create environments that host Docker-based containers.
Q28: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes a longer than expected time to upload the object. You want to improve the performance of the application. Which of the following would you consider?
A. Create multiple threads and upload the objects in the multiple threads
B. Write the items in batches for better performance
C. Use the Multipart upload API
D. Enable versioning on the Bucket
C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
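A hedged boto3 sketch of option C (the bucket and key names are hypothetical): lowering the multipart threshold makes the SDK split a 200–500 MB object into parts and upload them in parallel.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")
# Objects above 8 MB are uploaded via the multipart API, 4 parts at a time.
config = TransferConfig(multipart_threshold=8 * 1024 * 1024, max_concurrency=4)
s3.upload_file(
    "video.mp4",           # local file (placeholder)
    "my-upload-bucket",    # hypothetical bucket
    "videos/video.mp4",    # object key
    Config=config,
)
```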
Q29: A security system monitors 600 cameras, saving image metadata every minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?
A. 6000
B. 10
C. 3600
D. 600
B. The write capacity of a DynamoDB table is specified as the number of 1 KB writes per second. Since each write happens once per minute, divide 600 by 60 to get the number of 1 KB writes per second, which gives a value of 10.
You can specify the Write capacity in the Capacity tab of the DynamoDB table.
Q33: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?
A. Create and assign EIP to each instance
B. Create and attach a second IGW to the VPC.
C. Create and utilize a NAT Gateway
D. Connect to a VPN
C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet. Reference: AWS Network Address Translation Gateway
Q34: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?
A. Security Groups
B. Route Tables
C. Elastic Load Balancer
D. Auto Scaling
D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture. Reference: AWS Auto Scaling
Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?
A. Lazy loading
B. Write-through
C. Error retries
D. Exponential backoff
Answer – A Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. All other options are incorrect. Reference: Caching Strategies
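A minimal lazy-loading sketch, assuming a redis-py-style cache client and a hypothetical database helper: the data store is consulted only on a cache miss, and the result is written back to the cache for next time.

```python
import json

def get_user(user_id, cache, db):
    """Lazy loading: check the cache first; query the database only on a miss."""
    key = f"user:{user_id}"
    cached = cache.get(key)                 # redis-py-style get
    if cached is not None:
        return json.loads(cached)           # cache hit: no database call
    record = db.load_user(user_id)          # cache miss: hypothetical RDS query
    cache.set(key, json.dumps(record), ex=300)  # write back with a 5-minute TTL
    return record
```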
Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15–60 seconds. How should the Developer efficiently query the queue for new messages?
A. Use long polling
B. Set a custom visibility timeout
C. Use short polling
D. Implement exponential backoff
Answer – A Long polling helps ensure that the application makes fewer requests for messages over a given period of time. This is more cost-effective. Since the messages only arrive every 15–60 seconds and we don’t know exactly when they will be available, it is better to use long polling. Reference: Amazon SQS Long Polling
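A quick boto3 sketch (the queue URL is a placeholder): setting WaitTimeSeconds on ReceiveMessage enables long polling for up to 20 seconds per request.

```python
import boto3

sqs = boto3.client("sqs")
# Long polling: the call waits up to 20 seconds for a message to arrive
# instead of returning immediately, reducing the number of empty responses.
resp = sqs.receive_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/work-queue",
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
for message in resp.get("Messages", []):
    print(message["Body"])
```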
Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. Assuming the new Lambda function works as expected, which of the following will shift traffic from the original Lambda function to the new Lambda function in the shortest time frame?
A. Canary10Percent5Minutes
B. Linear10PercentEvery10Minutes
C. Canary10Percent15Minutes
D. Linear10PercentEvery1Minute
Answer – A With the Canary deployment preference type, traffic is shifted in two intervals. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first interval, and all remaining traffic is shifted after 5 minutes. Reference: Gradual Code Deployment
Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?
A. AWS::Serverless::Api
B. AWS::Serverless::Application
C. AWS::Serverless::Layerversion
D. AWS::Serverless::Function
Answer – B The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket. Reference: Declaring Serverless Resources
Q35: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the followings is True with regards to Envelope Encryption?
A. Data is encrypted by an encrypted Data key which is further encrypted using an encrypted Master Key.
B. Data is encrypted by plaintext Data key which is further encrypted using encrypted Master Key.
C. Data is encrypted by encrypted Data key which is further encrypted using plaintext Master Key.
D. Data is encrypted by plaintext Data key which is further encrypted using plaintext Master Key.
Answer – D With envelope encryption, unencrypted data is encrypted using a plaintext Data key. This Data key is further encrypted using a plaintext Master key. The plaintext Master key is securely stored in AWS KMS and is known as a Customer Master Key. Reference: AWS Key Management Service Concepts
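A short boto3 sketch of the first half of that flow (the key alias is hypothetical): GenerateDataKey returns both the plaintext data key, used locally and then discarded, and the same key encrypted under the master key, stored alongside the data.

```python
import boto3

kms = boto3.client("kms")
# The CMK never leaves KMS; you receive the data key in both forms.
resp = kms.generate_data_key(
    KeyId="alias/my-app-key",  # hypothetical CMK alias
    KeySpec="AES_256",
)
plaintext_key = resp["Plaintext"]       # encrypt data locally, then discard
encrypted_key = resp["CiphertextBlob"]  # store next to the ciphertext;
                                        # call kms.decrypt() on it later to recover the key
```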
Q36: You are developing an application that will be comprised of the following architecture –
A set of Ec2 instances to process the videos.
These (Ec2 instances) will be spun up by an autoscaling group.
SQS Queues to maintain the processing messages.
There will be 2 pricing tiers.
How will you ensure that the premium customers videos are given more preference?
A. Create 2 Auto Scaling groups, one for normal and one for premium customers
B. Create 2 sets of EC2 instances, one for normal and one for premium customers
C. Create 2 SQS queues, one for normal and one for premium customers
D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.
Answer – C The ideal option is to create 2 SQS queues; the application can then process messages from the high-priority (premium) queue first. The other options are not ideal, since they would lead to extra cost and extra maintenance. Reference: SQS
Q37: You are developing an application that will interact with a DynamoDB table. The table will receive a high volume of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure the best performance?
A. CustomerID
B. CustomerName
C. Location
D. Age
Answer – A Use high-cardinality attributes: attributes that have distinct values for each item, such as email ID, employee number, customer ID, session ID, or order ID. Alternatively, use composite attributes, combining more than one attribute to form a unique key. Reference: Choosing the right DynamoDB Partition Key
Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for network delays. Which of the following two mechanisms should he implement in the application?
A. Multiple SQS queues
B. Exponential backoff algorithm
C. Retries in your application code
D. Consider using the Java SDK.
Answer – B. and C. In addition to simple retries, each AWS SDK implements an exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval as well as a maximum number of retries. These are not necessarily fixed values and should be set based on the operation being performed, as well as other local factors such as network latency. Reference: Error Retries and Exponential Backoff in AWS
Q39: An application is being developed that is going to write data to a DynamoDB table. You have to setup the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds. Each item is of size 6KB. The reads can be eventual consistent reads. What should be the read capacity that needs to be set on the table?
A. 10
B. 20
C. 6
D. 30
Answer – A
Since 300 items are read every 30 seconds, the application reads 300/30 = 10 items per second. Each item is 6 KB, and a strongly consistent read unit covers 4 KB, so each item requires 2 read capacity units, for a total of 2 × 10 = 20 units per second. Because eventually consistent reads are acceptable, we divide 20 by 2, giving a read capacity of 10.
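To make the arithmetic concrete, here is the same calculation as a small Python sketch; the numbers are taken directly from the question.

```python
import math

items_per_second = 300 / 30                 # 10 items read per second
item_size_kb = 6
rcu_per_item = math.ceil(item_size_kb / 4)  # strongly consistent reads are billed in 4 KB units -> 2
strongly_consistent_rcu = items_per_second * rcu_per_item  # 20
eventually_consistent_rcu = strongly_consistent_rcu / 2    # eventual consistency halves the cost
print(eventually_consistent_rcu)            # 10.0
```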
Q40: You are in charge of deploying an application that will be hosted on an EC2 instance and sit behind an Elastic Load Balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the options below satisfies this requirement?
A. Use AWS CloudTrail with your load balancer
B. Enable access logs on the load balancer
C. Use a CloudWatch Logs Agent
D. Create a custom metric CloudWatch filter on your load balancer
Answer – B Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues. Reference: Access Logs for Your Application Load Balancer
Q41: A static web site has been hosted on a bucket and is now being accessed by users. The JavaScript section of one of the web pages has been changed to access data hosted in another S3 bucket. Now that web page no longer loads in the browser. Which of the following can help alleviate the error?
A. Enable versioning for the underlying S3 bucket.
B. Enable Replication so that the objects get replicated to the other bucket
C. Enable CORS for the bucket
D. Change the Bucket policy for the bucket to allow access from the other bucket
Answer – C
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:
Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.
Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.
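As a rough illustration of Scenario 1, the following boto3 sketch applies a CORS rule to the bucket; the bucket name and origin are the ones from the scenario, and the rule values are illustrative, not prescriptive.

```python
import boto3

s3 = boto3.client("s3")

# Allow the static-site origin to make GET and PUT requests against the bucket's API endpoint
s3.put_bucket_cors(
    Bucket="website",  # bucket name from Scenario 1
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET", "PUT"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,  # how long the browser may cache the preflight response
            }
        ]
    },
)
```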
Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below.
A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.
Answer – C The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3. You can then provide access to the objects based on key values generated via the user ID.
Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.
A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.
Answer – C Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data. Reference: Amazon Kinesis
Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?
A. AWS Simple Storage Service
B. AWS DynamoDB
C. AWS RDS
D. AWS Redshift
Answer – B DynamoDB is an alternative solution that can be used for session management. Its low data-access latency makes it a good data store for session state. Reference: Scalable Session Handling in PHP Using Amazon DynamoDB
Q45: Your application currently interacts with a DynamoDB table. Records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the features below should be used when developing such a solution?
A. AWS DynamoDB Encryption
B. AWS DynamoDB Streams
C. AWS DynamoDB Accelerator
D. AWS Table Accelerator
Answer – B From the AWS post DynamoDB Streams Use Cases and Design Patterns, which describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. Consider how to manage the following scenarios:
How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
How do you trigger an event based on a particular transaction?
How do you audit or archive transactions?
How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?
Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states: success or failure. In other words, there is no partial completion.

As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it’s consistent and can be rolled back before commit.

You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.

AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region.

All of the other options are incorrect since none of these would meet the core requirement. Reference: DynamoDB Streams Use Cases and Design Patterns
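For reference, enabling a stream on an existing table is a one-call change with boto3; the table name below is hypothetical, and a Lambda trigger (or another stream consumer) would then perform the insert into the secondary table.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Capture both the old and new item images for every modification
dynamodb.update_table(
    TableName="primary-table",  # hypothetical table name
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)
```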
Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and the scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?
A. Large Page size
B. Reduced page size
C. Parallel Scans
D. Sequential scans
Answer – B When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table’s provisioned throughput. Reduce page size: because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling. Reference: Rate-Limited Scans in Amazon DynamoDB
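A minimal boto3 sketch of such a reduced-page-size scan, assuming a hypothetical table name; the paginator’s PageSize setting maps to the Scan Limit parameter described above.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Each underlying Scan request reads at most 40 items instead of a full 1 MB page
paginator = dynamodb.get_paginator("scan")
for page in paginator.paginate(
    TableName="large-table",  # hypothetical table name
    PaginationConfig={"PageSize": 40},
):
    for item in page["Items"]:
        print(item)  # placeholder for real processing logic
```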
Q47: Which of the following are correct ways of passing a stage variable to an HTTP URL? (Select TWO.)
A. http://example.com/${}/prod
B. http://example.com/${stageVariables.}/prod
C. http://${stageVariables.}.example.com/dev/operation
D. http://${stageVariables}.example.com/dev/operation
E. http://${}.example.com/dev/operation
F. http://example.com/${stageVariables}/prod
Answer – B. and C. A stage variable can be used as part of an HTTP integration URL in the following ways: as a full URI without protocol, a full domain, a subdomain, a path, or a query string. In the cases above, option B uses the stage variable as a path and option C uses it as a subdomain. Reference: Amazon API Gateway Stage Variables Reference
Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes, which they use for their on-premises server configuration, in AWS. Which of the following services would be ideal to use in this regard?
A. AWS Elastic Beanstalk
B. AWS OpsWorks
C. AWS Cloudformation
D. AWS SQS
Answer – B AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management. Reference: AWS OpsWorks
Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?
A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
B. Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access.
C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.
Answer – C With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be used this way. Reference: About Web Identity Federation
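Under the hood, the token exchange is an STS call. Here is a sketch with boto3, assuming a hypothetical role ARN and a placeholder token; a browser app would normally let Amazon Cognito perform this exchange rather than calling STS directly.

```python
import boto3

sts = boto3.client("sts")

id_token = "<token returned by the Google/Facebook sign-in>"  # placeholder, supplied by the IdP

# Exchange the IdP-issued token for temporary credentials mapped to an IAM role
response = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/WebAppDynamoDBAccess",  # hypothetical role
    RoleSessionName="web-user-session",
    WebIdentityToken=id_token,
    DurationSeconds=3600,
)
credentials = response["Credentials"]  # temporary AccessKeyId, SecretAccessKey, SessionToken
```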
Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?
A. Cognito Data
B. Cognito Events
C. Cognito Streams
D. Cognito Callbacks
Answer – C Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams.
Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 Instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC. Choose 2 answers from the options given below
A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function
Answer: A and C. AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC. Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC
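As an illustration, the same configuration can be applied to an existing function with boto3; the function name, subnet ID, and security group ID below are hypothetical.

```python
import boto3

lambda_client = boto3.client("lambda")

# Supplying subnet and security group IDs makes Lambda create ENIs inside the VPC
lambda_client.update_function_configuration(
    FunctionName="vpc-scripts",                        # hypothetical function name
    VpcConfig={
        "SubnetIds": ["subnet-0123456789abcdef0"],     # hypothetical subnet ID
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # hypothetical security group ID
    },
)
```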
Q52: You’ve currently been tasked to migrate an existing on-premise environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant environments in the beanstalk service that would be suitable to host your application. What should you consider doing in this case?
A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
B. Consider using CloudFormation to deploy your environment to Elastic Beanstalk
C. Consider using Packer to create a custom platform
D. Consider deploying your application using the Elastic Container Service
Answer – C Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a custom image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software for which Elastic Beanstalk doesn’t provide a platform out of the box.

Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance.

To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems (Ubuntu, RHEL, or Amazon Linux; see the flavor entry in Platform.yaml File Format for the exact version numbers) and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings. Reference: AWS Elastic Beanstalk Custom Platforms
Q53: Company B is writing 10 items to a DynamoDB table every second. Each item is 15.5 KB in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.
A. 10
B. 160
C. 155
D. 16
Answer – B. Write capacity units are billed in 1 KB blocks, and item sizes are rounded up to the next 1 KB. Each 15.5 KB item therefore consumes 16 write capacity units, and writing 10 items per second requires 16 × 10 = 160 units. Reference: Read/Write Capacity Mode
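The write-side arithmetic as a quick Python sketch, using only the numbers from the question:

```python
import math

items_per_second = 10
item_size_kb = 15.5
wcu_per_item = math.ceil(item_size_kb / 1)  # writes are billed in 1 KB units -> 16
required_wcu = items_per_second * wcu_per_item
print(required_wcu)                          # 160
```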
Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other, and enables the identification of bugs early on in the release process?
Answer: Continuous integration (CI). With CI, developers merge their changes into a shared repository frequently, and automated builds and tests run on each merge, so integration problems and bugs surface early in the release process.
Q60: You want to receive an email whenever a user pushes code to a CodeCommit repository. How can you configure this?
A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
B. Configure a CloudWatch Events rule to send a message to SES, which will trigger an email to be sent whenever a user pushes code to the repository.
C. Configure Notifications in the console. This will create a CloudWatch Events rule to send a notification to an SNS topic, which will trigger an email to be sent to the user.
D. Configure a CloudWatch Events rule to send a message to SQS, which will trigger an email to be sent whenever a user pushes code to the repository.
Answer: C. Configuring notifications from the CodeCommit console creates the CloudWatch Events rule and SNS topic for you; users subscribed to the topic receive an email on each push.
Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?
A. Transforms
B. Outputs
C. Resources
D. Instances
Answer: C. The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack. Transform is used to reference code located in S3. Reference: Resources
Q64: Which AWS service can be used to fully automate your entire release process?
A. CodeDeploy
B. CodePipeline
C. CodeCommit
D. CodeBuild
Answer: B. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?
A. Outputs
B. Transforms
C. Resources
D. Exports
Answer: A. Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack. Reference: CloudFormation Outputs
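One way to wire this up programmatically is to read the first stack’s outputs with boto3 and feed them to the second stack as parameters; the stack name here is hypothetical.

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Collect the Outputs of one stack so they can be passed as Parameters to another
stack = cloudformation.describe_stacks(StackName="network-stack")["Stacks"][0]  # hypothetical stack
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])}
print(outputs)
```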
Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?
A. Inputs
B. Resources
C. Transforms
D. Files
Answer: C. Transform is used to reference code located in S3 and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments. Reference: Transforms
Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?
Answer: The AppSpec file (appspec.yml). It specifies the source files to copy to each instance and the lifecycle event hooks to run at each stage of the deployment.
Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?
A. Share the code using an EBS volume
B. Copy and paste the code into the template each time you need to use it
C. Use a CloudFormation nested stack
D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.
Answer: C. A nested stack lets you declare a common template as an AWS::CloudFormation::Stack resource inside other templates, so a reusable piece such as a load balancer or web server configuration is written once and referenced wherever it is needed.
Q72: Which of the following is an encrypted key used by KMS to encrypt your data?
A. Customer Managed Key
B. Encryption Key
C. Envelope Key
D. Customer Master Key
Answer: C. Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption: the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.
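A minimal sketch of the envelope pattern with boto3 and AWS KMS; the CMK alias is hypothetical.

```python
import boto3

kms = boto3.client("kms")

# Ask KMS for a fresh data key under the master key (CMK)
response = kms.generate_data_key(
    KeyId="alias/app-master-key",  # hypothetical CMK alias
    KeySpec="AES_256",
)
plaintext_key = response["Plaintext"]       # use locally to encrypt the data, then discard
encrypted_key = response["CiphertextBlob"]  # store this alongside the ciphertext

# Later, recover the plaintext key in order to decrypt the data
plaintext_key_again = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
```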
Q75: A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.) A. Compiled application code B. Java runtime environment C. References to the event sources D. Lambda execution role E. Application dependencies
Answer: A. E. Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your code and any dependencies, so it must include the compiled application code and the application dependencies; the runtime and execution role are provided by Lambda itself. Reference: Lambda deployment packages.
Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package? A. A launch template for the Amazon EC2 Auto Scaling group B. A CodeDeploy AppSpec file C. An EC2 role that grants the application access to AWS services D. An IAM policy that grants the application access to AWS services
Answer: B. Notes: The CodeDeploy AppSpec (application specific) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file. Reference: CodeDeploy application specification (AppSpec) files. Category: Deployment
Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)
A. Create a new Lambda version every time a new code release needs testing. B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version. C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT. D. Create a new Lambda layer every time a new code release needs testing. E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.
Answer: A. B. Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes, without affecting the stable production version of the code. By creating separate aliases for Production and Development, systems can initiate the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. The $LATEST version contains the most recent code changes. Reference: Lambda function versions.
Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.) A. Update event source mappings with the ARN of the Lambda layer. B. Point a Lambda alias to a new version of the Lambda function. C. Create a Lambda alias for each published version of the Lambda function. D. Point a Lambda alias to a new Lambda function alias. E. Update the event source mappings with the Lambda alias ARN.
Answer: B. E. Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version. Reference: Lambda function aliases. Category: Deployment
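A sketch of the promote step with boto3, assuming a hypothetical function name; because the event source mappings reference the alias ARN, repointing the alias is the only change needed.

```python
import boto3

lambda_client = boto3.client("lambda")

# Publish an immutable version from the current $LATEST code
version = lambda_client.publish_version(FunctionName="order-processor")["Version"]  # hypothetical name

# Repoint the alias; event source mappings that use the alias ARN stay untouched
lambda_client.update_alias(
    FunctionName="order-processor",
    Name="Production",
    FunctionVersion=version,
)
```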
Q78: A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements? A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C). B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket. C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket. D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.
Answer: D. Notes: With SSE-C, the customer supplies and manages the encryption keys while Amazon S3 performs the encryption and decryption as objects are written and read. Client-side encryption would not use Amazon S3 to perform the encryption, and S3 default bucket encryption does not support SSE-C. Category: Security
Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)
A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS). B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS). C. Use generated keys with the DynamoDB Encryption Client. D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs). E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).
Answer: A. C. Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK. Reference: Direct KMS Materials Provider. Category: Deployment
Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.) A. Create an AWS Lambda authorizer for the API. B. Create an Amazon Cognito authorizer for the API. C. Configure the authorizer for the API resource. D. Configure the API methods to use the authorizer. E. Configure the authorizer for the API stage.
Answer: B. D. Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API. Reference: Control access to a REST API using Amazon Cognito user pools as authorizer. Category: Security
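For illustration, both steps can be scripted with boto3 against a REST API; the API ID, resource ID, and user pool ARN below are hypothetical.

```python
import boto3

apigateway = boto3.client("apigateway")

# Step 1: create the Cognito user pool authorizer
authorizer = apigateway.create_authorizer(
    restApiId="a1b2c3d4e5",  # hypothetical API ID
    name="UserPoolAuthorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"],
    identitySource="method.request.header.Authorization",
)

# Step 2: configure an API method to use the authorizer
apigateway.update_method(
    restApiId="a1b2c3d4e5",
    resourceId="res123",     # hypothetical resource ID
    httpMethod="GET",
    patchOperations=[
        {"op": "replace", "path": "/authorizationType", "value": "COGNITO_USER_POOLS"},
        {"op": "replace", "path": "/authorizerId", "value": authorizer["id"]},
    ],
)
```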
Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.) A. Authenticate to the Amazon Cognito identity pool directly. B. Authenticate to AWS Identity and Access Management (IAM) directly. C. Authenticate to the Amazon Cognito user pool directly. D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS). E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.
Answer: C. E. Notes: The Amazon Cognito user pool provides direct user authentication. The Amazon Cognito user pool provides a federated authentication option with third-party identity provider (IdP), including amazon.com. Reference: Adding User Pool Sign-in Through a Third Party. Category: Security
Q82: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.) A. Define a AWS Step Functions task for each Lambda function. B. Define a AWS Step Functions task for each workflow. C. Write code that polls the AWS Step Functions invocation to coordinate each workflow. D. Define an AWS Step Functions state machine for each workflow. E. Define an AWS Step Functions state machine for each Lambda function.
Answer: A. D. Notes: Step Functions is based on state machines and tasks. A state machine is a workflow: it can be used to express a workflow as a number of states, their relationships, and their input and output. Tasks perform work by coordinating with other AWS services, such as Lambda. You can coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine, written in the Amazon States Language. Reference: Getting Started with AWS Step Functions.
Category: Development
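A sketch of one such workflow as a two-task state machine, created with boto3; all names and ARNs are hypothetical.

```python
import json
import boto3

stepfunctions = boto3.client("stepfunctions")

# Express the workflow as a finite state machine in Amazon States Language;
# each Task state delegates to one of the Lambda functions
definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
            "Next": "ChargeCustomer",
        },
        "ChargeCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-customer",
            "End": True,
        },
    },
}

stepfunctions.create_state_machine(
    name="OrderProcessingWorkflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",  # hypothetical role
)
```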
Q83: A company is migrating a web service to the AWS Cloud. The web service accepts requests by using HTTP (port 80). The company wants to use an AWS Lambda function to process HTTP requests. Which application design will satisfy these requirements? A. Create an Amazon API Gateway API. Configure proxy integration with the Lambda function. B. Create an Amazon API Gateway API. Configure non-proxy integration with the Lambda function. C. Configure the Lambda function to listen to inbound network connections on port 80. D. Configure the Lambda function as a target in the Application Load Balancer target group.
Answer: D. Notes: Elastic Load Balancing supports Lambda functions as a target for an Application Load Balancer. You can use load balancer rules to route HTTP requests to a function, based on the path or the header values. Then, process the request and return an HTTP response from your Lambda function. Reference: Using AWS Lambda with an Application Load Balancer. Category: Development
Q84: A company is developing an image processing application. When an image is uploaded to an Amazon S3 bucket, a number of independent and separate services must be invoked to process the image. The services do not have to be available immediately, but they must process every image. Which application design satisfies these requirements? A. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Each service pulls the message from the same queue. B. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Each service subscribes to the same topic. C. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe a separate Amazon Simple Notification Service (Amazon SNS) topic for each service to an Amazon SQS queue. D. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe a separate Simple Queue Service (Amazon SQS) queue for each service to the Amazon SNS topic.
Answer: D. Notes: Each service can subscribe to an individual Amazon SQS queue, which receives an event notification from the Amazon SNS topic. This is a fanout architectural implementation. Reference: Common Amazon SNS scenarios. Category: Development
Q85: A developer wants to implement Amazon EC2 Auto Scaling for a Multi-AZ web application. However, the developer is concerned that user sessions will be lost during scale-in events. How can the developer store the session state and share it across the EC2 instances? A. Write the sessions to an Amazon Kinesis data stream. Configure the application to poll the stream. B. Publish the sessions to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe each instance in the group to the topic. C. Store the sessions in an Amazon ElastiCache for Memcached cluster. Configure the application to use the Memcached API. D. Write the sessions to an Amazon Elastic Block Store (Amazon EBS) volume. Mount the volume to each instance in the group.
Answer: C. Notes: ElastiCache for Memcached is a distributed in-memory data store or cache environment in the cloud. It will meet the developer’s requirement of persistent storage and is fast to access. Reference: What is Amazon ElastiCache for Memcached?
Q86: A developer is integrating a legacy web application that runs on a fleet of Amazon EC2 instances with an Amazon DynamoDB table. There is no AWS SDK for the programming language that was used to implement the web application. Which combination of steps should the developer perform to make an API call to Amazon DynamoDB from the instances? (Select TWO.) A. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include an XML document that contains the request attributes. B. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include a JSON document that contains the request attributes. C. Sign the requests by using AWS access keys and Signature Version 4. D. Use an EC2 SSH key to calculate Signature Version 4 of the request. E. Provide the signature value through the HTTP X-API-Key header.
Answer: B. C. Notes: The HTTPS-based low-level AWS API for DynamoDB uses JSON as a wire protocol format. When you send HTTP requests to AWS, you sign the requests so that AWS can identify who sent them. Requests are signed with your AWS access key, which consists of an access key ID and secret access key. AWS supports two signature versions: Signature Version 4 and Signature Version 2. AWS recommends the use of Signature Version 4. Reference: Signing AWS API requests. Category: Development
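botocore (the library underneath the AWS SDK for Python) exposes the signing primitives, which makes the flow easy to see; the table name is hypothetical, and in the no-SDK scenario of the question the same SigV4 steps would be re-implemented in the application’s own language.

```python
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.session import Session

# Build the raw DynamoDB API request: a JSON body plus the X-Amz-Target header
request = AWSRequest(
    method="POST",
    url="https://dynamodb.us-east-1.amazonaws.com/",
    data='{"TableName": "example-table"}',  # hypothetical table name
    headers={
        "Content-Type": "application/x-amz-json-1.0",
        "X-Amz-Target": "DynamoDB_20120810.DescribeTable",
    },
)

# Sign with Signature Version 4 using the local AWS credentials
credentials = Session().get_credentials()
SigV4Auth(credentials, "dynamodb", "us-east-1").add_auth(request)

# request.headers now includes Authorization and X-Amz-Date; send with any HTTP client
```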
Q87: A developer has written several custom applications that read and write to the same Amazon DynamoDB table. Each time the data in the DynamoDB table is modified, this change should be sent to an external API. Which combination of steps should the developer perform to accomplish this task? (Select TWO.) A. Configure an AWS Lambda function to poll the stream and call the external API. B. Configure an event in Amazon EventBridge (Amazon CloudWatch Events) that publishes the change to an Amazon Managed Streaming for Apache Kafka (Amazon MSK) data stream. C. Create a trigger in the DynamoDB table to publish the change to an Amazon Kinesis data stream. D. Deliver the stream to an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the API to the topic. E. Enable DynamoDB Streams on the table.
Answer: A. E. Notes: If you enable DynamoDB Streams on a table, you can associate the stream Amazon Resource Name (ARN) with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table’s stream. Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. Reference: Tutorial: Process New Items with DynamoDB Streams and Lambda. Category: Monitoring
Q88: A company is migrating the create, read, update, and delete (CRUD) functionality of an existing Java web application to AWS Lambda. Which minimal code refactoring is necessary for the CRUD operations to run in the Lambda function? A. Implement a Lambda handler function. B. Import an AWS X-Ray package. C. Rewrite the application code in Python. D. Add a reference to the Lambda execution role.
Answer: A. Notes: Every Lambda function needs a Lambda-specific handler. Specifics of authoring vary between runtimes, but all runtimes share a common programming model that defines the interface between your code and the runtime code. You tell the runtime which method to run by defining a handler in the function configuration. The runtime runs that method. Next, the runtime passes in objects to the handler that contain the invocation event and context, such as the function name and request ID. Reference: Getting started with Lambda. Category: Refactoring
Q89: A company plans to use AWS log monitoring services to monitor an application that runs on premises. Currently, the application runs on a recent version of Ubuntu Server and outputs the logs to a local file. Which combination of steps should a developer perform to accomplish this goal? (Select TWO.) A. Update the application code to include calls to the agent API for log collection. B. Install the Amazon Elastic Container Service (Amazon ECS) container agent on the server. C. Install the unified Amazon CloudWatch agent on the server. D. Configure the long-term AWS credentials on the server to enable log collection by the agent. E. Attach an IAM role to the server to enable log collection by the agent.
Answer: C. D. Notes: The unified CloudWatch agent needs to be installed on the server. Ubuntu Server 18.04 is one of the many supported operating systems. When you install the unified CloudWatch agent on an on-premises server, you will specify a named profile that contains the credentials of the IAM user. Reference: Collecting metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent. Category: Monitoring
Q90: A developer wants to monitor invocations of an AWS Lambda function by using Amazon CloudWatch Logs. The developer added a number of print statements to the function code that write the logging information to the stdout stream. After running the function, the developer does not see any log data being generated. Why does the log data NOT appear in the CloudWatch logs? A. The log data is not written to the stderr stream. B. Lambda function logging is not automatically enabled. C. The execution role for the Lambda function did not grant permissions to write log data to CloudWatch Logs. D. The Lambda function outputs the logs to an Amazon S3 bucket.
Answer: C. Notes: The function needs permission to call CloudWatch Logs. Update the execution role to grant the permission. You can use the managed policy AWSLambdaBasicExecutionRole. Reference: Troubleshoot execution issues in Lambda. Category: Monitoring
Q91: Which of the following are best practices you should implement into ongoing deployments of your application? (Select THREE.)
A. Use stage variables to manage secrets across environments B. Create account-specific AWS SAM templates for each environment C. Use an AutoPublish alias D. Use traffic shifting with pre- and post-deployment hooks E. Test throughout the pipeline
Answer: C. D. E. Notes: An AutoPublish alias, traffic shifting with pre- and post-deployment hooks, and testing throughout the pipeline are recommended practices for ongoing deployments. Stage variables are not a safe place for secrets, and account-specific templates duplicate code across environments.
Q92: You are handing off maintenance of your new serverless application to an incoming team lead. Which recommendations would you make? (Select THREE.)
A. Keep up to date with the quotas and payload sizes for each AWS service you are using
B. Analyze production access patterns to identify potential improvements
C. Design your services to extend their life as long as possible
D. Minimize changes to your production application
E. Compare the value of using the latest first-class integrations versus using Lambda between AWS services
Answer: A. B. D.
Notes: Keep up to date with the quotas and payload sizes for each AWS service you are using, analyze production access patterns to identify potential improvements, and minimize changes to your production application.
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
AWS Certified Developer – Associate Practice Questions And Answers Dump
Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost. How can you accommodate the partners’ broken web services without wasting your resources?
A. Create a delay queue and set DelaySeconds to 30 seconds
B. Requeue the message with a VisibilityTimeout of 30 seconds.
C. Create a dead letter queue and set the Maximum Receives to 3.
D. Requeue the message with a DelaySeconds of 30 seconds.
C. After a message is taken from the queue and returned for the maximum number of retries, it is automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.
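A dead letter queue is attached through the source queue's RedrivePolicy attribute. A minimal boto3 sketch, with placeholder queue URL and ARN:

```python
import json
import boto3

sqs = boto3.client("sqs")

# Placeholders for illustration.
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/partner-commands"
dlq_arn = "arn:aws:sqs:us-east-1:123456789012:partner-commands-dlq"

# Messages received more than maxReceiveCount times move to the DLQ
# instead of being redelivered forever.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
        )
    },
)
```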
Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently. What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?
A. Amazon DynamoDB auto scaling
B. Amazon DynamoDB cross-region replication
C. Amazon DynamoDB Streams
D. Amazon DynamoDB Accelerator
D. The AWS Documentation mentions the following:
DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:
As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
Q2: You are creating a DynamoDB table with the following attributes:
PurchaseOrderNumber (partition key)
CustomerID
PurchaseDate
TotalPurchaseValue
One of your applications must retrieve items from the table to calculate the total value of purchases for a particular customer over a date range. What secondary index do you need to add to the table?
A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the TotalPurchaseValue into the index provides all the data needed to satisfy the use case.
Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
Local Secondary Indexes still rely on the original hash key. When you supply a table with hash+range, think of an LSI as hash+range1, hash+range2, …, hash+range6: you get five more range attributes to query on. There is also only one provisioned throughput, shared with the table.
Global Secondary Indexes define a new paradigm: different hash/range keys per index. This breaks the original usage of one hash key per table, which is also why, when defining a GSI, you are required to add provisioned throughput per index and pay for it.
Local Secondary Indexes can only be created when you are creating the table; there is no way to add a Local Secondary Index to an existing table, and once you create the index you cannot delete it.
Global Secondary Indexes can be created when you create the table or added to an existing table; deleting an existing Global Secondary Index is also allowed.
Throughput:
Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table and another for the index. Both operations consume write capacity units from the table.
Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation consumes read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table and another for the index.
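To make the Q2 answer concrete, here is a boto3 sketch that creates the table with the GSI described in option C; the table name, index name, and capacity values are illustrative:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="PurchaseOrders",  # illustrative name
    AttributeDefinitions=[
        {"AttributeName": "PurchaseOrderNumber", "AttributeType": "S"},
        {"AttributeName": "CustomerID", "AttributeType": "S"},
        {"AttributeName": "PurchaseDate", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "PurchaseOrderNumber", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    GlobalSecondaryIndexes=[
        {
            "IndexName": "CustomerID-PurchaseDate-index",
            "KeySchema": [
                {"AttributeName": "CustomerID", "KeyType": "HASH"},
                {"AttributeName": "PurchaseDate", "KeyType": "RANGE"},
            ],
            # Project only the attribute the query needs.
            "Projection": {
                "ProjectionType": "INCLUDE",
                "NonKeyAttributes": ["TotalPurchaseValue"],
            },
            # A GSI carries its own provisioned throughput, separate from the table.
            "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        }
    ],
)
```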
Q5: Lambda allows you to upload code and dependencies for function packages:
A. Only from a directly uploaded zip file
B. Only via SFTP
C. Only from a zip file in AWS S3
D. From a zip file in AWS S3 or uploaded directly from elsewhere
D. Lambda accepts function deployment packages either uploaded directly as a .zip file or referenced from an Amazon S3 bucket.
Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?
A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.
D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return requests. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return requests.
Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?
A. Create and assign EIP to each instance
B. Create and attach a second IGW to the VPC.
C. Create and utilize a NAT Gateway
D. Connect to a VPN
C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.
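A NAT gateway is created in the public subnet and the private subnet's route table points internet-bound traffic at it. A boto3 sketch with placeholder resource IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs: a public subnet, an Elastic IP allocation,
# and the private subnet's route table.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0pub1234567890abcd",
    AllocationId="eipalloc-0abc1234567890def",
)
nat_gateway_id = nat["NatGateway"]["NatGatewayId"]

# Route the private subnet's internet-bound traffic through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0priv1234567890abc",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gateway_id,
)
```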
Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?
A. Security Groups
B. Route Tables
C. Elastic Load Balancer
D. Auto Scaling
D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.
Q11: You're writing a script with an AWS SDK that uses AWS API actions to create AMIs for non-EBS-backed instances. Which API call occurs in the final step of creating an AMI?
A. RegisterImage
B. CreateImage
C. ami-register-image
D. ami-create-image
A. It is RegisterImage. All AWS API actions follow this capitalization convention and do not contain hyphens.
Q12: When dealing with session state in EC2-based applications behind Elastic Load Balancers, which option is generally considered the best practice for managing user sessions?
A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
B. Permanently assigning users to specific instances and always routing their traffic to those instances
C. Using application-generated cookies to tie a user session to a particular instance for the cookie duration
D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance
A. Keeping session state in a shared cache such as ElastiCache keeps the instances themselves stateless, so any instance can serve any user; the sticky-session approaches in B, C, and D tie users to instances and break down when instances are replaced.
Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?
A. Autoscaling requires using Amazon EBS-backed instances
B. Virtual Private Cloud requires EBS backed instances
C. Amazon EBS-backed instances can be stopped and restarted without losing data
D. Instance-store backed instances can be stopped and restarted without losing data
C. Instance-store backed images use “ephemeral” (temporary) storage that is only available during the life of an instance. Rebooting an instance allows ephemeral data to persist; however, stopping and starting an instance will remove all ephemeral storage.
Q15: After creating a new Linux instance on Amazon EC2 and downloading the private key file (my_key.pem), you try to SSH into the instance's IP address using the following command: ssh -i my_key.pem ec2-user@52.2.222.22. However, you receive the following error: WARNING: UNPROTECTED PRIVATE KEY FILE! What is the most probable reason for this, and how can you fix it?
A. You do not have root access on your terminal and need to use the sudo option for this to work.
B. You do not have enough permissions to perform the operation.
C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.
D. You need to run something like: chmod 400 my_key.pem
Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?
A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.
D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.
Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:
A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
B. Can only be used to launch EC2 instances in the same country as the AMI is stored
C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
D. Can be used to launch EC2 instances in any AWS region
C. AMIs are only available in the region in which they are created. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another.
Q18: Which of the following statements is true about the Elastic File System (EFS)?
A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
B. EFS can be used by multiple EC2 instances simultaneously
C. EFS cannot be used by an instance using EBS
D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes
Q21: Which of the following are valid benefits of using IAM groups? (Select TWO.)
A. The ability to create custom permission policies.
B. Assigning IAM permission policies to more than one user at a time.
C. Easier user/policy management.
D. Allowing EC2 instances to gain access to S3.
B. and C.
A. is incorrect: This is a benefit of IAM generally or a benefit of IAM policies. But IAM groups don’t create policies, they have policies attached to them.
Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?
A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1.micro to m4.large.
B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.
B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence. Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation. During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:
Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options that apply at this level unless overridden.
Saved configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
Configuration files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle.
Configuration files are executed in alphabetical order. For example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
Default values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.
If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or settings applied directly to the environment, the setting is stored as part of the environment's configuration. These settings can be removed with the AWS CLI or with the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.
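As a sketch of how an option setting such as the instance type is applied at environment creation, here is a hedged boto3 example; the application name, environment name, and solution stack string are hypothetical:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Names and the solution stack below are illustrative. Settings passed here
# sit at the highest precedence level ("applied directly to the environment").
eb.create_environment(
    ApplicationName="my-web-app",
    EnvironmentName="my-web-app-prod",
    SolutionStackName="64bit Amazon Linux 2 v3.3.13 running Python 3.8",
    OptionSettings=[
        {
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "InstanceType",
            "Value": "m4.large",
        }
    ],
)
```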
Q24: What statements are true about Availability Zones (AZs) and Regions?
A. There is only one AZ in each AWS Region
B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
C. AZs can be moved between AWS Regions based on your needs
D. There are (almost always) two or more AZs in each AWS Region
B. and D. Each AWS Region contains multiple Availability Zones (almost always two or more), and those AZs are geographically separated within the Region so that a natural disaster is unlikely to affect more than one of them at a time. AZs cannot be moved between Regions.
Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?
A. Eventual Consistent Reads
B. Conditional reads for Consistency
C. Strongly Consistent Reads
D. Not possible
C. This is explained clearly in the AWS documentation on read consistency for DynamoDB: only with strongly consistent reads are you guaranteed to read the most recent value after all prior successful writes have completed.
Q27: You've been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is taken during the migration process. Which of the following steps would you consider for this requirement?
A. Create an Opswork stack and deploy the Docker containers
B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.
B. The Elastic Beanstalk service is the ideal service to quickly provision development environments. You can also create environments that host Docker-based containers.
Q28: You've written an application that uploads objects onto an S3 bucket. The size of the objects varies between 200 MB and 500 MB. You've seen that the application sometimes takes longer than expected to upload an object, and you want to improve its performance. Which of the following would you consider?
A. Create multiple threads and upload the objects in the multiple threads
B. Write the items in batches for better performance
C. Use the Multipart upload API
D. Enable versioning on the Bucket
C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
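In practice, the boto3 high-level transfer manager drives the multipart upload API for you. A minimal sketch, with placeholder file, bucket, and key names:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Objects larger than multipart_threshold are uploaded in parallel parts.
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,  # switch to multipart above 8 MB
    multipart_chunksize=8 * 1024 * 1024,  # 8 MB parts
    max_concurrency=8,                    # parallel part uploads
)

# File, bucket, and key are placeholders.
s3.upload_file("report-500mb.bin", "my-upload-bucket", "report-500mb.bin", Config=config)
```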
Q29: A security system monitors 600 cameras, saving image metadata every 1 minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?
A. 6000
B. 10
C. 3600
D. 600
B. The write capacity of a DynamoDB table is specified as the number of 1 KB writes per second. Since each camera writes once per minute, divide the 600 writes per minute by 60 to get the number of 1 KB writes per second, which gives a value of 10.
You can specify the Write capacity in the Capacity tab of the DynamoDB table.
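The same arithmetic as a quick Python sanity check, using the values from the question:

```python
import math

cameras = 600
item_size_kb = 1                   # each metadata sample is 1 KB
writes_per_camera_per_s = 1 / 60   # one write per camera per minute

# One WCU = one 1 KB write per second; item sizes round up to the next 1 KB.
wcu = math.ceil(cameras * writes_per_camera_per_s) * math.ceil(item_size_kb / 1)
print(wcu)  # 10
```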
Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?
A. Lazy loading
B. Write-through
C. Error retries
D. Exponential backoff
Answer – A. Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests the data from your data store, which returns it to your application. Your application then writes the data received from the store to the cache so it can be retrieved more quickly the next time it is requested. All other options are incorrect. Reference: Caching Strategies
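A minimal lazy-loading (cache-aside) sketch using the redis-py client; the endpoint, key naming, TTL, and the query_rds_for_customer helper are all hypothetical:

```python
import json
import redis

# Placeholder ElastiCache (Redis) endpoint.
cache = redis.Redis(host="my-elasticache-endpoint", port=6379)
TTL_SECONDS = 300

def get_customer(customer_id):
    """Lazy loading: read the cache first, fall back to RDS only on a miss."""
    cached = cache.get(f"customer:{customer_id}")
    if cached is not None:
        return json.loads(cached)  # cache hit

    record = query_rds_for_customer(customer_id)  # hypothetical database call
    # Populate the cache so the next read is served from memory; the TTL
    # bounds how long stale data can live in the cache.
    cache.setex(f"customer:{customer_id}", TTL_SECONDS, json.dumps(record))
    return record
```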
Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15–60 seconds. How should the Developer efficiently query the queue for new messages?
A. Use long polling
B. Set a custom visibility timeout
C. Use short polling
D. Implement exponential backoff
Answer – A. Long polling helps ensure that the application makes fewer requests for messages over a given period, which is more cost-effective. Since the messages only arrive every 15–60 seconds and we don't know exactly when they will be available, it is better to use long polling. Reference: Amazon SQS Long Polling
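Long polling is enabled per receive call via WaitTimeSeconds. A boto3 sketch with a placeholder queue URL and a hypothetical process handler:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # placeholder

while True:
    # WaitTimeSeconds up to 20 enables long polling: the call blocks until a
    # message arrives or the wait expires, reducing empty responses and cost.
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
    )
    for message in resp.get("Messages", []):
        process(message["Body"])  # hypothetical handler
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```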
Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With the new Lambda function working as expected, which of the following will shift traffic from the original Lambda function to the new Lambda function in the shortest time frame?
A. Canary10Percent5Minutes
B. Linear10PercentEvery10Minutes
C. Canary10Percent15Minutes
D. Linear10PercentEvery1Minute
Answer – A. With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment, and all remaining traffic is shifted 5 minutes later. Reference: Gradual Code Deployment
Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?
A. AWS::Serverless::Api
B. AWS::Serverless::Application
C. AWS::Serverless::LayerVersion
D. AWS::Serverless::Function
Answer – B. The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket. Reference: Declaring Serverless Resources
Q35: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the followings is True with regards to Envelope Encryption?
A. Data is encrypted by an encrypted Data key which is further encrypted using an encrypted Master Key.
B. Data is encrypted by plaintext Data key which is further encrypted using encrypted Master Key.
C. Data is encrypted by encrypted Data key which is further encrypted using plaintext Master Key.
D. Data is encrypted by plaintext Data key which is further encrypted using plaintext Master Key.
Answer – D. With envelope encryption, unencrypted data is encrypted using a plaintext Data key, and that Data key is in turn encrypted using a plaintext Master key. The plaintext Master key is securely stored in AWS KMS and known as a Customer Master Key. Reference: AWS Key Management Service Concepts
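A boto3 sketch of the envelope pattern with AWS KMS; the key alias is a placeholder:

```python
import boto3

kms = boto3.client("kms")

# generate_data_key returns the data key twice: once in plaintext
# (for local encryption) and once encrypted under the CMK.
resp = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")  # alias is illustrative

plaintext_data_key = resp["Plaintext"]       # use to encrypt data locally, then discard
encrypted_data_key = resp["CiphertextBlob"]  # store alongside the ciphertext

# ... encrypt the data locally with plaintext_data_key ...

# Later, recover the plaintext data key from KMS to decrypt:
recovered_key = kms.decrypt(CiphertextBlob=encrypted_data_key)["Plaintext"]
```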
Q36: You are developing an application that will be composed of the following architecture:
A set of EC2 instances to process the videos.
These EC2 instances will be spun up by an Auto Scaling group.
SQS queues to maintain the processing messages.
There will be 2 pricing tiers.
How will you ensure that the premium customers' videos are given more preference?
A. Create 2 Autoscaling Groups, one for normal and one for premium customers
B. Create 2 set of Ec2 Instances, one for normal and one for premium customers
C. Create 2 SQS queues, one for normal and one for premium customers
D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.
Answer – C. The ideal option would be to create 2 SQS queues. Messages can then be processed by the application from the high-priority queue first. The other options are not ideal; they would lead to extra costs and extra maintenance. Reference: SQS
Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?
A. CustomerID
B. CustomerName
C. Location
D. Age
Answer – A. Use high-cardinality attributes: attributes that have distinct values for each item, such as email ID, employee number, customer ID, session ID, order ID, and so on. You can also use composite attributes, combining more than one attribute to form a unique key. Reference: Choosing the right DynamoDB Partition Key
Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for network delays. Which of the following two mechanisms should he implement in the application?
A. Multiple SQS queues
B. Exponential backoff algorithm
C. Retries in your application code
D. Consider using the Java SDK.
Answer – B. and C. In addition to simple retries, each AWS SDK implements an exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values and should be set based on the operation being performed, as well as other local factors, such as network latency. Reference: Error Retries and Exponential Backoff in AWS
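A minimal sketch of retries with exponential backoff and jitter; the retry limits and the operation being retried are illustrative:

```python
import random
import time

MAX_RETRIES = 5        # illustrative limits
MAX_DELAY_SECONDS = 30

def call_with_backoff(operation):
    """Retry a flaky call with progressively longer, jittered waits."""
    for attempt in range(MAX_RETRIES):
        try:
            return operation()
        except Exception:
            if attempt == MAX_RETRIES - 1:
                raise  # out of retries, surface the error
            # Waits grow as 1s, 2s, 4s, 8s... capped, plus random jitter.
            delay = min(2 ** attempt, MAX_DELAY_SECONDS) + random.uniform(0, 1)
            time.sleep(delay)
```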
Q39: An application is being developed that is going to write data to a DynamoDB table. You have to set up the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds, and each item is 6 KB in size. The reads can be eventually consistent reads. What read capacity should be set on the table?
A. 10
B. 20
C. 6
D. 30
Answer – A
Since 300 items are read every 30 seconds, that means (300/30) = 10 items are read every second. Since each item is 6 KB in size, 2 read capacity units (of 4 KB each) are required per item, for a total of 2 × 10 = 20 strongly consistent reads per second. Since eventual consistency is sufficient, we can divide the 20 reads by 2, which gives a read capacity of 10.
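The same calculation expressed in Python, using the values from the question:

```python
import math

items_per_second = 300 / 30              # 10 items read per second
rcu_per_item_strong = math.ceil(6 / 4)   # a 6 KB item needs two 4 KB read units

strong_rcu = items_per_second * rcu_per_item_strong  # 20 strongly consistent RCUs
eventual_rcu = strong_rcu / 2                        # eventually consistent reads cost half
print(eventual_rcu)  # 10.0
```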
Q40: You are in charge of deploying an application that will be hosted on an EC2 instance behind an Elastic Load Balancer. You have been asked to monitor the incoming connections to the Elastic Load Balancer. Which of the below options can satisfy this requirement?
A. Use AWS CloudTrail with your load balancer
B. Enable access logs on the load balancer
C. Use a CloudWatch Logs Agent
D. Create a custom metric CloudWatch filter on your load balancer
Answer – B Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues. Reference: Access Logs for Your Application Load Balancer
Q41: A static website is hosted in an S3 bucket and is being accessed by users. The JavaScript section of one of the web pages has been changed to access data hosted in another S3 bucket, and now that web page no longer loads in the browser. Which of the following can help alleviate the error?
A. Enable versioning for the underlying S3 bucket.
B. Enable Replication so that the objects get replicated to the other bucket
C. Enable CORS for the bucket
D. Change the Bucket policy for the bucket to allow access from the other bucket
Answer – C
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
Cross-Origin Resource Sharing use-case scenarios: The following are example scenarios for using CORS:
Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.
Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.
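A CORS rule is attached to the bucket holding the data. A boto3 sketch, with the bucket name and allowed origin as placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Bucket name and allowed origin are illustrative.
s3.put_bucket_cors(
    Bucket="my-data-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET", "PUT"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,  # how long browsers may cache the preflight result
            }
        ]
    },
)
```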
Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below.
A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.
Answer – C. The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3. You can then provide access to the objects based on key values generated via the user ID.
Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.
A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.
Answer – C. Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, and IoT telemetry data into your databases, data lakes, and data warehouses, or build your own real-time applications using this data. Reference: Amazon Kinesis
Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?
A. AWS Simple Storage Service
B. AWS DynamoDB
C. AWS RDS
D. AWS Redshift
Answer – B. DynamoDB is an ideal solution for storing session state: the latency of access to data is low, so it works well as a data store for session management. Reference: Scalable Session Handling in PHP Using Amazon DynamoDB
Q45: Your application currently interacts with a DynamoDB table, and records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the below features should be used when developing such a solution?
A. AWS DynamoDB Encryption
B. AWS DynamoDB Streams
C. AWS DynamoDB Accelerator
D. AWS Table Accelerator
Answer – B DynamoDB Streams Use Cases and Design Patterns This post describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. We will consider how to manage the following scenarios:
How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
How do you trigger an event based on a particular transaction?
How do you audit or archive transactions?
How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?
Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states: success or failure. In other words, there is no partial completion.
As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it's consistent and can be rolled back before commit.
You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.
AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region. All of the other options are incorrect since none of them would meet the core requirement. Reference: DynamoDB Streams Use Cases and Design Patterns
Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?
A. Large Page size
B. Reduced page size
C. Parallel Scans
D. Sequential scans
Answer – B. When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table's provisioned throughput. Reduce page size: because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling. Reference: Rate-Limited Scans in Amazon DynamoDB
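A boto3 sketch of a paginated scan with a reduced page size; the table name, the Limit value, and the handle function are hypothetical:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# A smaller Limit means smaller pages, fewer read units per request,
# and natural pauses between requests.
scan_kwargs = {"TableName": "MyLargeTable", "Limit": 100}  # placeholders

while True:
    page = dynamodb.scan(**scan_kwargs)
    for item in page["Items"]:
        handle(item)  # hypothetical per-item processing
    if "LastEvaluatedKey" not in page:
        break
    # Resume the scan where the previous page left off.
    scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
```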
Q47: Which of the following are correct ways of passing a stage variable to an HTTP URL? (Select TWO.)
A. http://example.com/${}/prod
B. http://example.com/${stageVariables.}/prod
C. http://${stageVariables.}.example.com/dev/operation
D. http://${stageVariables}.example.com/dev/operation
E. http://${}.example.com/dev/operation
F. http://example.com/${stageVariables}/prod
Answer – B. and C. A stage variable can be used as part of an HTTP integration URL in the following cases: a full URI without protocol, a full domain, a subdomain, a path, or a query string. In the above case, options B and C use the stage variable as a path and as a subdomain, respectively. Reference: Amazon API Gateway Stage Variables Reference
Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes, which they use for their on-premises server configuration, in AWS. Which of the following services would be ideal to use in this regard?
A. AWS Elastic Beanstalk
B. AWS OpsWorks
C. AWS Cloudformation
D. AWS SQS
Answer – B. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management. Reference: AWS OpsWorks
Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?
A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
B. Configure S3 bucket tags with your AWS access keys for the bucket hosting your website so that the application can query them for access.
C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.
Answer – C. With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be used. Reference: About Web Identity Federation
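A sketch of the token exchange with boto3; the role ARN is a placeholder, and idp_token stands in for the token returned by the Google/Facebook sign-in flow:

```python
import boto3

sts = boto3.client("sts")

idp_token = "<token from the identity provider>"  # placeholder

# Exchange the IdP token for temporary AWS credentials mapped to an IAM role.
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/WebAppDynamoDBRole",  # illustrative
    RoleSessionName="web-app-user",
    WebIdentityToken=idp_token,
)["Credentials"]

# Temporary credentials scoped to the role; no long-term keys are shipped.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```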
Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?
A. Cognito Data
B. Cognito Events
C. Cognito Streams
D. Cognito Callbacks
Answer – C. Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can configure a Kinesis stream to receive events as data is updated and synchronized; Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams.
Q51: You've developed a set of scripts using AWS Lambda. These scripts need to access EC2 instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? Choose 2 answers from the options given below.
A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function
Answer: A and C. AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC. Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC
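A boto3 sketch of supplying the VPC configuration at function creation; every identifier below is a placeholder:

```python
import boto3

lambda_client = boto3.client("lambda")

# All identifiers are illustrative placeholders.
with open("function.zip", "rb") as f:
    zipped_code = f.read()

lambda_client.create_function(
    FunctionName="vpc-script",
    Runtime="python3.9",
    Role="arn:aws:iam::123456789012:role/lambda-vpc-role",
    Handler="app.handler",
    Code={"ZipFile": zipped_code},
    # Subnet and security group IDs let Lambda create ENIs inside the VPC.
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234567890def"],
        "SecurityGroupIds": ["sg-0abc1234567890def"],
    },
)
```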
Q52: You've been tasked with migrating an existing on-premises environment into Elastic Beanstalk. The application does not make use of Docker containers, and you cannot see any relevant platform in the Elastic Beanstalk service that would be suitable to host your application. What should you consider doing in this case?
A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
B. Consider using Cloudformation to deploy your environment to Elastic Beanstalk
C. Consider using Packer to create a custom platform
D. Consider deploying your application using the Elastic Container Service
Answer – C Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a Custom Image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software, for which Elastic Beanstalk doesn’t provide a platform out of the box. Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance. To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems—Ubuntu, RHEL, or Amazon Linux (see the flavor entry in Platform.yaml File Format for the exact version numbers)—and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings. Reference: AWS Elastic Beanstalk Custom Platforms
Q53: Company B is writing 10 items to the Dynamo DB table every second. Each item is 15.5Kb in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.
A. 10
B. 160
C. 155
D. 16
Answer – B. Write capacity is expressed in units of 1 KB writes per second, and item sizes are rounded up to the next 1 KB. A 15.5 KB item therefore consumes 16 write capacity units per write, and 10 writes per second require 10 × 16 = 160 WCUs. Reference: Read/Write Capacity Mode
Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other, and enables the identification of bugs early on in the release process?
Answer: Continuous Integration (CI). With CI, developers regularly merge their changes into a shared repository, where automated builds and tests run against each merge.
Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?
A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.
Answer: C. Configuring notifications in the CodeCommit console creates a CloudWatch Events rule that publishes to an SNS topic, and subscribers to that topic receive an email when a user pushes code.
Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?
A. Transforms
B. Outputs
C. Resources
D. Instances
Answer: C. The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3. Reference: Resources
Q64: Which AWS service can be used to fully automate your entire release process?
A. CodeDeploy
B. CodePipeline
C. CodeCommit
D. CodeBuild
Answer: B. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?
A. Outputs
B. Transforms
C. Resources
D. Exports
Answer: A. Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack. Reference: CloudFormation Outputs
Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?
A. Inputs
B. Resources
C. Transforms
D. Files
Answer: C. Transforms is used to reference code located in S3 and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments. Reference: Transforms
Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?
Answer: The AppSpec file (appspec.yml), placed in the root of the application source bundle.
Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?
A. Share the code using an EBS volume
B. Copy and paste the code into the template each time you need to use it
C. Use a cloudformation nested stack
D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.
Answer: C. Nested stacks let you declare a common template as a resource (AWS::CloudFormation::Stack) inside other templates, so the same load balancer or web server definition can be reused across stacks.
Q72: Which of the following is an encrypted key used by KMS to encrypt your data?
A. Customer Managed Key
B. Encryption Key
C. Envelope Key
D. Customer Master Key
Answer: C. Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption: the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.
Q75: A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.)
A. Compiled application code
B. Java runtime environment
C. References to the event sources
D. Lambda execution role
E. Application dependencies
Answer: A. E. Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your code and any dependencies. Reference: Lambda deployment packages.
Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package?
A. A launch template for the Amazon EC2 Auto Scaling group
B. A CodeDeploy AppSpec file
C. An EC2 role that grants the application access to AWS services
D. An IAM policy that grants the application access to AWS services
Answer: B. Notes: The CodeDeploy AppSpec (application specific) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file. Reference: CodeDeploy application specification (AppSpec) files. Category: Deployment
Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)
A. Create a new Lambda version every time a new code release needs testing.
B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version.
C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT.
D. Create a new Lambda layer every time a new code release needs testing.
E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.
Answer: A. B. Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes, without affecting the stable production version of the code. By creating separate aliases for Production and Development, systems can initiate the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. The $LATEST version is the newest published version. Reference: Lambda function versions.
Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version's Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.)
A. Update event source mappings with the ARN of the Lambda layer.
B. Point a Lambda alias to a new version of the Lambda function.
C. Create a Lambda alias for each published version of the Lambda function.
D. Point a Lambda alias to a new Lambda function alias.
E. Update the event source mappings with the Lambda alias ARN.
Answer: B. E. Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version. Reference: Lambda function aliases. Category: Deployment
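To make the alias pattern concrete, here is a minimal boto3 sketch (the function name, alias name, and queue ARN are hypothetical placeholders, not part of the original question):

import boto3

lambda_client = boto3.client("lambda")

# Publish the current code as an immutable version
version = lambda_client.publish_version(FunctionName="order-processor")["Version"]

# Point the existing alias at the new version; event source mappings that
# reference the alias ARN pick up the change automatically
lambda_client.update_alias(
    FunctionName="order-processor",
    Name="live",
    FunctionVersion=version,
)

# Create the mapping once, against the alias ARN rather than a version ARN
alias_arn = lambda_client.get_alias(FunctionName="order-processor", Name="live")["AliasArn"]
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:orders",  # hypothetical queue
    FunctionName=alias_arn,
)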
Q78: A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements? A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C). B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket. C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket. D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.
Answer: D. Notes: With server-side encryption with customer-provided keys (SSE-C), the customer manages the encryption keys while Amazon S3 performs the encryption and decryption. Default bucket encryption (option A) cannot use SSE-C, and with client-side encryption (option B) Amazon S3 would not perform the encryption. Reference: Protecting data using server-side encryption with customer-provided encryption keys (SSE-C). Category: Security
Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)
A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS). B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS). C. Use generated keys with the DynamoDB Encryption Client. D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs). E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).
Answer: A. C. Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK. Reference: Direct KMS Materials Provider. Category: Deployment
Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.) A. Create an AWS Lambda authorizer for the API. B. Create an Amazon Cognito authorizer for the API. C. Configure the authorizer for the API resource. D. Configure the API methods to use the authorizer. E. Configure the authorizer for the API stage.
Answer: B. D. Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API. Reference: Control access to a REST API using Amazon Cognito user pools as authorizer. Category: Security
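As a rough boto3 sketch of how this could be wired up (the API ID, resource ID, and user pool ARN are hypothetical placeholders):

import boto3

apigw = boto3.client("apigateway")

# Create a Cognito user pool authorizer for the REST API
authorizer = apigw.create_authorizer(
    restApiId="a1b2c3",  # hypothetical API ID
    name="cognito-auth",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"],
    identitySource="method.request.header.Authorization",
)

# Attach the authorizer to an API method
apigw.update_method(
    restApiId="a1b2c3",
    resourceId="xyz123",  # hypothetical resource ID
    httpMethod="GET",
    patchOperations=[
        {"op": "replace", "path": "/authorizationType", "value": "COGNITO_USER_POOLS"},
        {"op": "replace", "path": "/authorizerId", "value": authorizer["id"]},
    ],
)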
Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.) A. Authenticate to the Amazon Cognito identity pool directly. B. Authenticate to AWS Identity and Access Management (IAM) directly. C. Authenticate to the Amazon Cognito user pool directly. D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS). E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.
Answer: C. E. Notes: The Amazon Cognito user pool provides direct user authentication. It also provides a federated authentication option with third-party identity providers (IdPs), including Login with Amazon. Reference: Adding User Pool Sign-in Through a Third Party. Category: Security
Q82: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.) A. Define an AWS Step Functions task for each Lambda function. B. Define an AWS Step Functions task for each workflow. C. Write code that polls the AWS Step Functions invocation to coordinate each workflow. D. Define an AWS Step Functions state machine for each workflow. E. Define an AWS Step Functions state machine for each Lambda function.
Answer: A. D. Notes: Step Functions is based on state machines and tasks. A state machine is a workflow: it expresses a number of states, their relationships, and their input and output. Tasks perform work by coordinating with other AWS services, such as Lambda. You can coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine, written in the Amazon States Language. Reference: Getting Started with AWS Step Functions (https://aws.amazon.com/step-functions/getting-started/). Category: Development
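To make the state machine idea concrete, here is a minimal boto3 sketch (function ARNs, the role ARN, and all names are hypothetical placeholders):

import json
import boto3

sfn = boto3.client("stepfunctions")

# Each Task state invokes one Lambda function; the state machine is the workflow
definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
            "Next": "ChargeCustomer",
        },
        "ChargeCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-customer",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="order-processing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/sfn-exec",  # hypothetical execution role
)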
Content outline This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context. The percentage in each domain represents only scored content.
Domain 1: Deployment 22% Domain 2: Security 26% Domain 3: Development with AWS Services 30% Domain 4: Refactoring 10% Domain 5: Monitoring and Troubleshooting 12%
Domain 1: Deployment 1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns. – Commit code to a repository and invoke build, test and/or deployment actions – Use labels and branches for version and release management – Use AWS CodePipeline to orchestrate workflows against different environments – Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS CodeDeploy for CI/CD purposes – Perform a roll back plan based on application deployment policy
1.2 Deploy applications using AWS Elastic Beanstalk. – Utilize existing supported environments to define a new application stack – Package the application – Introduce a new application version into the Elastic Beanstalk environment – Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable) – Validate application health using Elastic Beanstalk dashboard – Use Amazon CloudWatch Logs to instrument application logging
1.3 Prepare the application deployment package to be deployed to AWS. – Manage the dependencies of the code module (like environment variables, config files and static image files) within the package – Outline the package/container directory structure and organize files appropriately – Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)
1.4 Deploy serverless applications. – Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template – Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)
Domain 2: Security 2.1 Make authenticated calls to AWS services. – Communicate required policy based on least privileges required by application. – Assume an IAM role to access a service – Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)
2.2 Implement encryption using AWS services. – Encrypt data at rest (client side; server side; envelope encryption) using AWS services – Encrypt data in transit
2.3 Implement application authentication and authorization. – Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools – Use Amazon Cognito-provided credentials to write code that accesses AWS services – Use Amazon Cognito Sync to synchronize user profiles and data – Use developer-authenticated identities to interact between end user devices, backend authentication, and Amazon Cognito
Domain 3: Development with AWS Services 3.1 Write code for serverless applications. – Compare and contrast server-based vs. serverless model (e.g., micro services, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications) – Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler) – Create an API endpoint using Amazon API Gateway – Create and test appropriate API actions like GET, POST using the API endpoint – Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes) – Compute read/write capacity units for Amazon DynamoDB based on application requirements – Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis) – Invoke an AWS Lambda function synchronously and asynchronously
3.2 Translate functional requirements into application design. – Determine real-time vs. batch processing for a given use case – Determine use of synchronous vs. asynchronous for a given use case – Determine use of event vs. schedule/poll for a given use case – Account for tradeoffs for consistency models in an application design
Domain 4: Refactoring 4.1 Optimize applications to best use AWS services and features. – Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache) – Apply an Amazon S3 naming scheme for optimal read performance
4.2 Migrate existing application code to run on AWS. – Isolate dependencies – Run the application as one or more stateless processes – Develop in order to enable horizontal scalability – Externalize state
Domain 5: Monitoring and Troubleshooting
5.1 Write code that can be monitored. – Create custom Amazon CloudWatch metrics – Perform logging in a manner available to systems operators – Instrument application source code to enable tracing in AWS X-Ray
5.2 Perform root cause analysis on faults found in testing or production. – Interpret the outputs from the logging mechanism in AWS to identify errors in logs – Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues – Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component
Which key tools, technologies, and concepts might be covered on the exam?
The following is a non-exhaustive list of the tools and technologies that could appear on the exam. This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam. The general tools and technologies in this list appear in no particular order. AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance: – Analytics – Application Integration – Containers – Cost and Capacity Management – Data Movement – Developer Tools – Instances (virtual machines) – Management and Governance – Networking and Content Delivery – Security – Serverless
Management and Governance: – AWS CloudFormation – Amazon CloudWatch
Networking and Content Delivery: – Amazon API Gateway – Amazon CloudFront – Elastic Load Balancing
Security, Identity, and Compliance: – Amazon Cognito – AWS Identity and Access Management (IAM) – AWS Key Management Service (AWS KMS)
Storage: – Amazon S3
Out-of-scope AWS services and features
The following is a non-exhaustive list of AWS services and features that are not covered on the exam. These services and features do not represent every AWS offering that is excluded from the exam content. Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant. Out-of-scope AWS services and features include the following: – AWS Application Discovery Service – Amazon AppStream 2.0 – Amazon Chime – Amazon Connect – AWS Database Migration Service (AWS DMS) – AWS Device Farm – Amazon Elastic Transcoder – Amazon GameLift – Amazon Lex – Amazon Machine Learning (Amazon ML) – AWS Managed Services – Amazon Mobile Analytics – Amazon Polly
– Amazon QuickSight – Amazon Rekognition – AWS Server Migration Service (AWS SMS) – AWS Service Catalog – AWS Shield Advanced – AWS Shield Standard – AWS Snow Family – AWS Storage Gateway – AWS WAF – Amazon WorkMail – Amazon WorkSpaces
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
AWS Certified Developer – Associate Practice Questions And Answers Dump
Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost. How can you accommodate the partners’ broken web services without wasting your resources?
A. Create a delay queue and set DelaySeconds to 30 seconds
B. Requeue the message with a VisibilityTimeout of 30 seconds.
C. Create a dead letter queue and set the Maximum Receives to 3.
D. Requeue the message with a DelaySeconds of 30 seconds.
C. After a message has been received from the queue and returned for the maximum number of retries, it is automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.
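A minimal boto3 sketch of configuring such a redrive policy (queue URL and ARN are hypothetical placeholders):

import json
import boto3

sqs = boto3.client("sqs")

# After 3 failed receives, SQS moves the message to the dead-letter queue
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/commands",
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:commands-dlq",
            "maxReceiveCount": "3",
        })
    },
)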
Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently. What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?
A. Amazon DynamoDB auto scaling
B. Amazon DynamoDB cross-region replication
C. Amazon DynamoDB Streams
D. Amazon DynamoDB Accelerator
D. The AWS Documentation mentions the following:
DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios
As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
Q2: You are creating a DynamoDB table with the following attributes:
PurchaseOrderNumber (partition key)
CustomerID
PurchaseDate
TotalPurchaseValue
One of your applications must retrieve items from the table to calculate the total value of purchases for a particular customer over a date range. What secondary index do you need to add to the table?
A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the TotalPurchaseValue into the index provides all the data needed to satisfy the use case.
Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
Local Secondary Indexes still rely on the original hash key. When you supply a table with hash+range, think of the LSI as hash+range1, hash+range2 ... hash+range6: you get up to 5 more range attributes to query on. Also, an LSI has no provisioned throughput of its own; it shares the table's.
Global Secondary Indexes define a new paradigm: different hash/range keys per index. This breaks the original usage of one hash key per table, and it is also why, when defining a GSI, you are required to add provisioned throughput per index and pay for it.
Local Secondary Indexes can only be created when you create the table. There is no way to add a Local Secondary Index to an existing table, and once you create the index you cannot delete it.
Global Secondary Indexes can be created when you create the table or added to an existing table; deleting an existing Global Secondary Index is also allowed.
Throughput:
Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table and another for the index. Both operations consume write capacity units from the table.
Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation consumes read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table and another for the index.
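For illustration, a table like the one in Q2, with a global secondary index carrying its own provisioned throughput, could be created like this (table name and capacity values are hypothetical):

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Purchases",
    AttributeDefinitions=[
        {"AttributeName": "PurchaseOrderNumber", "AttributeType": "S"},
        {"AttributeName": "CustomerID", "AttributeType": "S"},
        {"AttributeName": "PurchaseDate", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "PurchaseOrderNumber", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    GlobalSecondaryIndexes=[
        {
            "IndexName": "CustomerID-PurchaseDate-index",
            "KeySchema": [
                {"AttributeName": "CustomerID", "KeyType": "HASH"},
                {"AttributeName": "PurchaseDate", "KeyType": "RANGE"},
            ],
            # Project only what the query needs
            "Projection": {"ProjectionType": "INCLUDE", "NonKeyAttributes": ["TotalPurchaseValue"]},
            # GSIs carry their own throughput, separate from the table
            "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        }
    ],
)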
Q5: Lambda allows you to upload code and dependencies for function packages:
A. Only from a directly uploaded zip file
B. Only via SFTP
C. Only from a zip file in AWS S3
D. From a zip file in AWS S3 or uploaded directly from elsewhere
D. A Lambda deployment package can be uploaded directly as a .zip file or referenced from an Amazon S3 bucket.
Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?
A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.
D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return requests. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return requests.
Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?
A. Create and assign EIP to each instance
B. Create and attach a second IGW to the VPC.
C. Create and utilize a NAT Gateway
D. Connect to a VPN
C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet. Reference: AWS Network Address Translation Gateway
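A minimal boto3 sketch of the setup (subnet and route table IDs are hypothetical placeholders):

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a public subnet
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0public",  # hypothetical public subnet
    AllocationId=eip["AllocationId"],
)

# Route the private subnet's internet-bound traffic through the NAT gateway
# (in practice, wait until the NAT gateway's state is "available" first)
ec2.create_route(
    RouteTableId="rtb-0private",  # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)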
Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?
A. Security Groups
B. Route Tables
C. Elastic Load Balancer
D. Auto Scaling
D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture. Reference: Amazon EC2 Auto Scaling
Q11: You’re writing a script that uses the AWS SDK and AWS API actions to create AMIs of instances that are not EBS-backed. Which API call occurs in the final step of creating an AMI?
A. RegisterImage
B. CreateImage
C. ami-register-image
D. ami-create-image
A. It is actually RegisterImage. AWS API actions follow this capitalization convention and do not contain hyphens.
Q12: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?
A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
B. Permanently assigning users to specific instances and always routing their traffic to those instances
C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance
A. The generally accepted best practice is to keep session state off the instances entirely: let the ELB distribute traffic to all instances and store session information in a shared cache such as ElastiCache running Redis or Memcached. Sticky sessions (options B, C, and D) tie users to a single instance, so sessions are lost if that instance fails.
Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?
A. Autoscaling requires using Amazon EBS-backed instances
B. Virtual Private Cloud requires EBS backed instances
C. Amazon EBS-backed instances can be stopped and restarted without losing data
D. Instance-store backed instances can be stopped and restarted without losing data
C. Instance-store backed images use “ephemeral” (temporary) storage. The storage is only available during the life of an instance. Rebooting an instance preserves ephemeral data; however, stopping and starting an instance removes all ephemeral storage.
Q15: After creating a new Linux instance on Amazon EC2 and downloading the private key file (my_key.pem), you try to SSH into the instance’s IP address (52.2.222.22) using the following command: ssh -i my_key.pem ec2-user@52.2.222.22 However, you receive the following error: @@@@@@@@ WARNING: UNPROTECTED PRIVATE KEY FILE! @@@@@@@@ What is the most probable reason for this, and how can you fix it?
A. You do not have root access on your terminal and need to use the sudo option for this to work.
B. You do not have enough permissions to perform the operation.
C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.
D. You need to run something like: chmod 400 my_key.pem
Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?
A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.
D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.
Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:
A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
B. Can only be used to launch EC2 instances in the same country as the AMI is stored
C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
D. Can be used to launch EC2 instances in any AWS region
C. AMIs are only available in the region they are created. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another.
Q18: Which of the following statements is true about the Elastic File System (EFS)?
A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
B. EFS can be used by multiple EC2 instances simultaneously
C. EFS cannot be used by an instance using EBS
D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes
Q: Which of the following are benefits of using IAM groups? (Select TWO.)
A. The ability to create custom permission policies.
B. Assigning IAM permission policies to more than one user at a time.
C. Easier user/policy management.
D. Allowing EC2 instances to gain access to S3.
B. and C.
A. is incorrect: This is a benefit of IAM generally or a benefit of IAM policies. But IAM groups don’t create policies, they have policies attached to them.
Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?
A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1.micro to m4.large.
B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.
B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence. Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation. During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:
Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options that apply at this level unless overridden.
Saved Configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
Configuration Files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle.
Configuration files are executed in alphabetical order. For example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
Default Values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.
If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or settings applied directly to the environment, the setting is stored as part of the environment’s configuration. These settings can be removed with the AWS CLI or with the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.
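For example, the instance type can be applied as an option setting directly to an environment with boto3 (the environment name is a hypothetical placeholder):

import boto3

eb = boto3.client("elasticbeanstalk")

# Apply the instance type directly to the environment (highest precedence)
eb.update_environment(
    EnvironmentName="my-app-prod",  # hypothetical environment
    OptionSettings=[
        {
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "InstanceType",
            "Value": "m4.large",
        }
    ],
)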
Q24: What statements are true about Availability Zones (AZs) and Regions?
A. There is only one AZ in each AWS Region
B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
C. AZs can be moved between AWS Regions based on your needs
D. There are (almost always) two or more AZs in each AWS Region
B. and D. Each AWS Region contains multiple (almost always two or more) Availability Zones, and the AZs within a region are geographically separated to help protect against natural disasters affecting more than one at a time. AZs cannot be moved between regions.
Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?
A. Eventual Consistent Reads
B. Conditional reads for Consistency
C. Strongly Consistent Reads
D. Not possible
C. This is provided very clearly in the AWS documentation on read consistency for DynamoDB: only with strongly consistent reads are you guaranteed to get the most up-to-date value after all prior successful writes have completed.
Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is taken during the migration process. Which of the following steps would you consider for this requirement?
A. Create an Opswork stack and deploy the Docker containers
B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.
B. The Elastic Beanstalk service is the ideal service to quickly provision development environments. You can also create environments which can be used to host Docker based containers.
Q28: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes a longer than expected time to upload the object. You want to improve the performance of the application. Which of the following would you consider?
A. Create multiple threads and upload the objects in the multiple threads
B. Write the items in batches for better performance
C. Use the Multipart upload API
D. Enable versioning on the Bucket
C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
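With the AWS SDK for Python, the transfer manager handles the three multipart steps for you; a minimal sketch (file, bucket, and key names are hypothetical):

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Use multipart upload for anything above 100 MB, with parallel part uploads
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=25 * 1024 * 1024,
    max_concurrency=8,
)

s3.upload_file("video.mp4", "my-bucket", "videos/video.mp4", Config=config)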
Q29: A security system monitors 600 cameras, saving image metadata every 1 minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?
A. 6000
B. 10
C. 3600
D. 600
B. The write capacity of a DynamoDB table is expressed as the number of 1 KB writes per second. In the above question, since each of the 600 cameras writes once per minute, we divide 600 by 60 to get the number of 1 KB writes per second, which gives a value of 10.
You can specify the Write capacity in the Capacity tab of the DynamoDB table.
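The arithmetic, as a quick Python sanity check:

# 600 cameras x 1 KB, spread evenly over 60 seconds
items_per_second = 600 / 60             # 10 writes per second
wcu_per_item = 1                        # one WCU covers one 1 KB write per second
print(items_per_second * wcu_per_item)  # 10.0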
Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?
A. Lazy loading
B. Write-through
C. Error retries
D. Exponential backoff
Answer – A Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests the data from your data store, which returns it to your application. Your application then writes the data received from the store to the cache so it can be retrieved more quickly the next time it is requested. All other options are incorrect. Reference: Caching Strategies
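A minimal lazy-loading sketch using the redis client (the endpoint, key names, and the query_customer helper are hypothetical):

import json
import redis

cache = redis.Redis(host="my-cluster.cache.amazonaws.com", port=6379)

def get_customer(customer_id, db):
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit
    record = db.query_customer(customer_id)     # cache miss: go to RDS (hypothetical helper)
    cache.setex(key, 300, json.dumps(record))   # populate the cache with a TTL
    return record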
Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15-60 seconds. How should the Developer efficiently query the queue for new messages?
A. Use long polling
B. Set a custom visibility timeout
C. Use short polling
D. Implement exponential backoff
Answer – A Long polling helps ensure that the application makes fewer requests for messages over a given period of time, which is more cost effective. Since the messages are only going to be available after 15 seconds and we don’t know exactly when they will arrive, it is better to use long polling. Reference: Amazon SQS Long Polling
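A minimal long-polling sketch with boto3 (the queue URL is a hypothetical placeholder):

import boto3

sqs = boto3.client("sqs")

# WaitTimeSeconds up to 20 enables long polling: the call blocks until a
# message arrives or the wait expires, reducing empty responses
response = sqs.receive_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/work",  # hypothetical
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
for message in response.get("Messages", []):
    print(message["Body"])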
Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With the new Lambda function working as expected, which of the following deployment preferences will shift traffic from the original Lambda function to the new Lambda function in the shortest time frame?
A. Canary10Percent5Minutes
B. Linear10PercentEvery10Minutes
C. Canary10Percent15Minutes
D. Linear10PercentEvery1Minute
Answer – A With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment, and the remaining traffic is shifted after 5 minutes. Reference: Gradual Code Deployment
Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from Amazon S3 buckets?
A. AWS::Serverless::Api
B. AWS::Serverless::Application
C. AWS::Serverless::Layerversion
D. AWS::Serverless::Function
Answer – B The AWS::Serverless::Application resource in an AWS SAM template is used to embed a nested application from Amazon S3 buckets. Reference: Declaring Serverless Resources
Q35: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the followings is True with regards to Envelope Encryption?
A. Data is encrypted by an encrypted Data key, which is further encrypted using an encrypted Master key.
B. Data is encrypted by a plaintext Data key, which is further encrypted using an encrypted Master key.
C. Data is encrypted by an encrypted Data key, which is further encrypted using a plaintext Master key.
D. Data is encrypted by a plaintext Data key, which is further encrypted using a plaintext Master key.
Answer – D With envelope encryption, unencrypted data is encrypted using a plaintext Data key. The plaintext Data key is then itself encrypted using the Master key, which is securely stored in AWS KMS and known as a Customer Master Key (CMK). Reference: AWS Key Management Service Concepts
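A minimal boto3 sketch of generating a data key for envelope encryption (the CMK alias is a hypothetical placeholder):

import boto3

kms = boto3.client("kms")

# KMS returns the data key twice: in plaintext (use it locally, then discard it)
# and encrypted under the CMK (store it alongside the ciphertext)
data_key = kms.generate_data_key(
    KeyId="alias/my-app-key",  # hypothetical CMK alias
    KeySpec="AES_256",
)
plaintext_key = data_key["Plaintext"]       # encrypt your data with this locally
encrypted_key = data_key["CiphertextBlob"]  # persist this; recover it later via kms.decrypt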
Q36: You are developing an application that will be comprised of the following architecture –
A set of EC2 instances to process the videos.
These EC2 instances will be spun up by an autoscaling group.
SQS Queues to maintain the processing messages.
There will be 2 pricing tiers.
How will you ensure that the premium customers’ videos are given more preference?
A. Create 2 Autoscaling Groups, one for normal and one for premium customers
B. Create 2 sets of EC2 Instances, one for normal and one for premium customers
C. Create 2 SQS queues, one for normal and one for premium customers
D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.
Answer – C The ideal option would be to create 2 SQS queues. Messages can then be processed by the application from the high priority queue first. The other options are not ideal: they would lead to extra costs and extra maintenance. Reference: SQS
Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?
A. CustomerID
B. CustomerName
C. Location
D. Age
Answer – A Use high-cardinality attributes. These are attributes that have distinct values for each item, like email ID, employee number, customer ID, session ID, order ID, and so on. Use composite attributes: try to combine more than one attribute to form a unique key. Reference: Choosing the right DynamoDB Partition Key
Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for network delays. Which of the following two mechanisms should he implement in the application?
A. Multiple SQS queues
B. Exponential backoff algorithm
C. Retries in your application code
D. Consider using the Java SDK.
Answer – B. and C. In addition to simple retries, each AWS SDK implements an exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values, and should be set based on the operation being performed, as well as other local factors, such as network latency. Reference: Error Retries and Exponential Backoff in AWS
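A minimal sketch of retries with exponential backoff and jitter (the parameter values are illustrative assumptions):

import random
import time

def call_with_backoff(operation, max_retries=5, base_delay=0.1, max_delay=5.0):
    """Retry a callable with exponential backoff and random jitter."""
    for attempt in range(max_retries):
        try:
            return operation()
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Progressively longer waits, capped at max_delay, with jitter
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))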
Q39: An application is being developed that is going to write data to a DynamoDB table. You have to setup the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds. Each item is of size 6KB. The reads can be eventual consistent reads. What should be the read capacity that needs to be set on the table?
A. 10
B. 20
C. 6
D. 30
Answer – A
Since there are 300 items read every 30 seconds, that means there are (300/30) = 10 items read every second. Since each item is 6 KB in size, 2 reads (of 4 KB each) will be required for each item. So we have a total of 2*10 = 20 reads per second. Since eventual consistency is sufficient, we can divide the number of reads (20) by 2, which gives a read capacity of 10.
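The same arithmetic as a quick Python check:

item_size_kb = 6
reads_per_second = 300 / 30                            # 10 items per second
rcu_per_item = -(-item_size_kb // 4)                   # ceil(6/4) = 2 units of 4 KB
strongly_consistent = reads_per_second * rcu_per_item  # 20
eventually_consistent = strongly_consistent / 2        # 10
print(eventually_consistent)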
Q40: You are in charge of deploying an application that will be hosted on an EC2 Instance and sit behind an Elastic Load balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the below options can suffice this requirement?
A. Use AWS CloudTrail with your load balancer
B. Enable access logs on the load balancer
C. Use a CloudWatch Logs Agent
D. Create a custom CloudWatch metric filter on your load balancer
Answer – B Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues. Reference: Access Logs for Your Application Load Balancer
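A minimal boto3 sketch of enabling access logs on an Application Load Balancer (the load balancer ARN and bucket name are hypothetical placeholders):

import boto3

elbv2 = boto3.client("elbv2")

# Turn on access logging and direct the logs to an S3 bucket
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-alb-logs"},
    ],
)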
Q41: A static web site has been hosted in a bucket and is now being accessed by users. The JavaScript section of one of the web pages has been changed to access data which is hosted in another S3 bucket. Now that same web page is no longer loading in the browser. Which of the following can help alleviate the error?
A. Enable versioning for the underlying S3 bucket.
B. Enable Replication so that the objects get replicated to the other bucket
C. Enable CORS for the bucket
D. Change the Bucket policy for the bucket to allow access from the other bucket
Answer – C
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:
Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.
Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.
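A minimal boto3 sketch of such a CORS configuration (the bucket name and origin mirror the scenario above and are hypothetical):

import boto3

s3 = boto3.client("s3")

# Allow the website origin to make GET and PUT requests against the bucket
s3.put_bucket_cors(
    Bucket="website",  # hypothetical bucket
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET", "PUT"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)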
Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below.
A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.
Answer – C The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3. You can then provide access to the objects based on key values generated via the user ID.
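A minimal boto3 sketch of requesting scoped-down temporary credentials after application-level authentication (user name, bucket, and prefix are hypothetical placeholders):

import json
import boto3

sts = boto3.client("sts")

# Grant this user temporary access to only their own S3 prefix
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::photo-bucket/users/user123/*",
    }],
}
creds = sts.get_federation_token(
    Name="user123",                  # hypothetical authenticated user
    Policy=json.dumps(policy),
    DurationSeconds=3600,
)["Credentials"]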
Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.
A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.
Answer – C Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data. Reference: Amazon Kinesis
Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?
A. AWS Simple Storage Service
B. AWS DynamoDB
C. AWS RDS
D. AWS Redshift
Answer – B DynamoDB is an alternative solution that can be used for session management. Its low data-access latency makes it a good data store for session state. Reference: Scalable Session Handling in PHP Using Amazon DynamoDB
Q45: Your application currently interacts with a DynamoDB table. Records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the below features should be used when developing such a solution?
A. AWS DynamoDB Encryption
B. AWS DynamoDB Streams
C. AWS DynamoDB Accelerator
D. AWS Table Accelerator
Answer – B DynamoDB Streams Use Cases and Design Patterns This post describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. We will consider how to manage the following scenarios:
How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
How do you trigger an event based on a particular transaction?
How do you audit or archive transactions?
How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?
Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states—success or failure. In other words, there is no partial completion. As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it’s consistent and can be rolled back before commit. You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time. AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region. All of the other options are incorrect since none of these would meet the core requirement. Reference: DynamoDB Streams Use Cases and Design Patterns
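A minimal boto3 sketch of enabling a stream on an existing table (the table name is a hypothetical placeholder):

import boto3

dynamodb = boto3.client("dynamodb")

# Capture both old and new images of changed items; a Lambda function or
# other stream consumer can then insert records into the secondary table
dynamodb.update_table(
    TableName="primary-table",  # hypothetical
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)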
Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and the scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?
A. Large Page size
B. Reduced page size
C. Parallel Scans
D. Sequential scans
Answer – B When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table’s provisioned throughput. Reduce page size: because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling. Reference: Rate-Limited Scans in Amazon DynamoDB
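A sketch of such a rate-limited scan with boto3; the page size and pause are illustrative values you would tune to your table's throughput headroom:

```python
import time

import boto3

ddb = boto3.client("dynamodb")


def rate_limited_scan(table_name: str, page_size: int = 40, pause: float = 0.1):
    """Scan in small pages, pausing between requests to smooth the read load."""
    kwargs = {"TableName": table_name, "Limit": page_size}
    while True:
        page = ddb.scan(**kwargs)
        yield from page.get("Items", [])
        last_key = page.get("LastEvaluatedKey")
        if not last_key:
            break  # no more pages
        kwargs["ExclusiveStartKey"] = last_key
        time.sleep(pause)  # illustrative pause between pages
```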
Q47: Which of the following are correct ways of passing a stage variable to an HTTP URL? (Select TWO.)
A. http://example.com/${}/prod
B. http://example.com/${stageVariables.}/prod
C. http://${stageVariables.}.example.com/dev/operation
D. http://${stageVariables}.example.com/dev/operation
E. http://${}.example.com/dev/operation
F. http://example.com/${stageVariables}/prod
Answer – B. and C. A stage variable can be used as part of an HTTP integration URL in the following cases: a full URI without protocol, a full domain, a subdomain, a path, or a query string. In this case, options B and C use a stage variable as a path and as a subdomain, respectively. Reference: Amazon API Gateway Stage Variables Reference
Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes, which they use for on-premises server configuration. Which of the following services would be ideal to use in this regard?
A. AWS Elastic Beanstalk
B. AWS OpsWorks
C. AWS Cloudformation
D. AWS SQS
Answer – B AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management. Reference: AWS OpsWorks
Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?
A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
B. Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access.
C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.
Answer – C With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be exposed or embedded this way. Reference: About Web Identity Federation
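A minimal sketch of the underlying token exchange with boto3; the role ARN is a placeholder, and the token is assumed to come from the IdP sign-in flow (in the browser, the AWS SDK for JavaScript performs this exchange for you):

```python
import boto3


def dynamodb_client_for_user(id_token: str):
    """Exchange an OIDC token for temporary AWS credentials, then use them."""
    sts = boto3.client("sts")
    resp = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/WebAppDynamoDBRole",  # placeholder role
        RoleSessionName="web-app-user",
        WebIdentityToken=id_token,  # token returned by the Google/Facebook sign-in
    )
    creds = resp["Credentials"]
    # The temporary credentials map to the IAM role's DynamoDB permissions.
    return boto3.client(
        "dynamodb",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```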
Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?
A. Cognito Data
B. Cognito Events
C. Cognito Streams
D. Cognito Callbacks
Answer – C Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can configure a Kinesis stream to receive events as data is updated and synchronized: Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since they do not provide this analytics capability. Reference: Amazon Cognito Streams
Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? Choose 2 answers from the options given below.
A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function
Answer: A and C. AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC. Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC
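For reference, a sketch of the equivalent API call with boto3; the function name, subnet IDs, and security group ID are placeholders:

```python
import boto3

lam = boto3.client("lambda")

# Attach the function to private subnets and a security group so that its
# elastic network interfaces are created inside the VPC.
lam.update_function_configuration(
    FunctionName="my-script",  # placeholder function name
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],  # placeholders
        "SecurityGroupIds": ["sg-0123abcd"],                  # placeholder
    },
)
```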
Q52: You’ve currently been tasked to migrate an existing on-premises environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant platforms in the Elastic Beanstalk service that would be suitable to host your application. What should you consider doing in this case?
A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
B. Consider using CloudFormation to deploy your environment to Elastic Beanstalk
C. Consider using Packer to create a custom platform
D. Consider deploying your application using the Elastic Container Service
Answer – C Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a custom image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software for which Elastic Beanstalk doesn’t provide a platform out of the box. Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance.

To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems (Ubuntu, RHEL, or Amazon Linux; see the flavor entry in the Platform.yaml File Format for the exact version numbers) and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings. Reference: AWS Elastic Beanstalk Custom Platforms
Q53: Company B is writing 10 items to a DynamoDB table every second. Each item is 15.5 KB in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.
A. 10
B. 160
C. 155
D. 16
Answer – B. One write capacity unit (WCU) represents one write per second for an item up to 1 KB in size; larger items consume additional WCUs, with the item size rounded up to the next whole kilobyte. A 15.5 KB item therefore consumes 16 WCUs per write, and 10 writes per second require 10 × 16 = 160 WCUs. Reference: Read/Write Capacity Mode
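The same arithmetic, as a quick Python check:

```python
import math

item_size_kb = 15.5
writes_per_second = 10

# One WCU covers one write per second for an item up to 1 KB;
# item sizes are rounded up to the next whole KB.
wcu_per_item = math.ceil(item_size_kb)            # 16
required_wcu = wcu_per_item * writes_per_second   # 160
print(required_wcu)
```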
Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other, and enables the identification of bugs early on in the release process?
Answer: Continuous integration (CI). With CI, developers merge their changes into a shared repository frequently, and automated builds and tests surface bugs early in the release process.
Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?
A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
C. Configure notifications in the console; this will create a CloudWatch Events rule to send a notification to an SNS topic which will trigger an email to be sent to the user.
D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.
Answer: C. Configuring notifications from the CodeCommit console creates a CloudWatch Events rule that publishes to an Amazon SNS topic; users subscribed to the topic receive an email whenever code is pushed to the repository.
Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?
A. Transforms
B. Outputs
C. Resources
D. Instances
Answer: C. The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack. Transform is used to reference code located in S3. Reference: Resources
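As an illustration, a minimal template declaring an EC2 instance and an RDS instance under Resources, submitted with boto3; every property value below is a placeholder:

```python
import json

import boto3

# A minimal template: both resources are declared under "Resources".
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
                "InstanceType": "t2.micro",
            },
        },
        "Database": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.micro",
                "AllocatedStorage": "20",
                "MasterUsername": "admin",
                "MasterUserPassword": "change-me",  # placeholder; use Secrets Manager in practice
            },
        },
    },
}

boto3.client("cloudformation").create_stack(
    StackName="my-app", TemplateBody=json.dumps(template)
)
```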
Q64: Which AWS service can be used to fully automate your entire release process?
A. CodeDeploy
B. CodePipeline
C. CodeCommit
D. CodeBuild
Answer: B. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?
A. Outputs
B. Transforms
C. Resources
D. Exports
Answer: A. Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack. Reference: CloudFormation Outputs
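To pass a value between stacks, you typically combine an Output with an Export and then read it elsewhere with Fn::ImportValue. A sketch of the two template fragments, written here as Python dicts mirroring the JSON template syntax (the logical IDs and export name are placeholders):

```python
# Fragment of the exporting stack's template: declare an Output with an Export.
exporting_template_outputs = {
    "Outputs": {
        "VpcId": {
            "Value": {"Ref": "MyVpc"},            # placeholder resource logical ID
            "Export": {"Name": "shared-vpc-id"},  # placeholder export name
        }
    }
}

# Fragment of the importing stack's template: read the exported value.
importing_property = {"VpcId": {"Fn::ImportValue": "shared-vpc-id"}}
```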
Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?
A. Inputs
B. Resources
C. Transforms
D. Files
Answer: C. The Transform section is used to reference code located in S3, and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments. Reference: Transforms
Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?
Answer: The AppSpec file (appspec.yml). It specifies the source files to copy to the instances and the lifecycle event hooks to run during each deployment.
Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?
A. Share the code using an EBS volume
B. Copy and paste the code into the template each time you need to use it
C. Use a CloudFormation nested stack
D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.
Answer: C. With nested stacks, you declare a common template as an AWS::CloudFormation::Stack resource inside other templates, so a reusable component such as a load balancer or web server is defined once and referenced wherever it is needed.
Q72: Which of the following is an encrypted key used by KMS to encrypt your data?
A. Customer Managed Key
B. Encryption Key
C. Envelope Key
D. Customer Master Key
Answer: C. Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption: the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.
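A minimal envelope-encryption sketch with boto3; the key alias is a placeholder for a KMS key you own:

```python
import boto3

kms = boto3.client("kms")

# Ask KMS for a data key: we receive the plaintext key (use locally, then
# discard) and the same key encrypted under the master key.
resp = kms.generate_data_key(
    KeyId="alias/my-app-key",  # placeholder key alias
    KeySpec="AES_256",
)
plaintext_key = resp["Plaintext"]        # encrypt your data with this locally
encrypted_key = resp["CiphertextBlob"]   # store this alongside the ciphertext

# Later, decrypt the stored data key to recover the plaintext key.
recovered = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
assert recovered == plaintext_key
```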
Q75: A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.)
A. Compiled application code
B. Java runtime environment
C. References to the event sources
D. Lambda execution role
E. Application dependencies
Answer: A. E. Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your compiled code and any dependencies; the runtime and the execution role are configured on the function itself rather than bundled into the package. Reference: Lambda deployment packages.
Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package?
A. A launch template for the Amazon EC2 Auto Scaling group
B. A CodeDeploy AppSpec file
C. An EC2 role that grants the application access to AWS services
D. An IAM policy that grants the application access to AWS services
Answer: B. Notes: The CodeDeploy AppSpec (application specification) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file. Reference: CodeDeploy application specification (AppSpec) files. Category: Deployment
Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)
A. Create a new Lambda version every time a new code release needs testing.
B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version.
C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT.
D. Create a new Lambda layer every time a new code release needs testing.
E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.
Answer: A. B. Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes without affecting the stable production version of the code. By creating separate aliases for Production and Development, clients can invoke the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version; using the functionality to update an alias and its linked version, the development team can update the required version as needed. $LATEST always points to the most recent code that has been uploaded, ahead of any published version. Reference: Lambda function versions.
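A sketch of this version-and-alias workflow with boto3, assuming the function and both aliases already exist (create_alias would be used the first time); the function name is a placeholder:

```python
import boto3

lam = boto3.client("lambda")

# Publish the current code as an immutable, numbered version.
version = lam.publish_version(FunctionName="order-processor")["Version"]  # placeholder name

# Point the Production alias at the published version, while Development
# continues to track the latest uploaded code.
lam.update_alias(FunctionName="order-processor", Name="Production",
                 FunctionVersion=version)
lam.update_alias(FunctionName="order-processor", Name="Development",
                 FunctionVersion="$LATEST")
```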
Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.)
A. Update event source mappings with the ARN of the Lambda layer.
B. Point a Lambda alias to a new version of the Lambda function.
C. Create a Lambda alias for each published version of the Lambda function.
D. Point a Lambda alias to a new Lambda function alias.
E. Update the event source mappings with the Lambda alias ARN.
Answer: B. E. Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version. Reference: Lambda function aliases. Category: Deployment
Q78: A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements?
A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C).
B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket.
C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket.
D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.
Answer: D. Notes: With SSE-C, the customer supplies and manages the encryption keys while Amazon S3 performs the encryption and decryption as objects are written and read. Default bucket encryption does not support SSE-C, and client-side encryption does not use Amazon S3 to perform the encryption.
Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)
A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS).
B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS).
C. Use generated keys with the DynamoDB Encryption Client.
D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs).
E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).
Answer: A. C. Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK. Reference: Direct KMS Materials Provider. Category: Deployment
Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.)
A. Create an AWS Lambda authorizer for the API.
B. Create an Amazon Cognito authorizer for the API.
C. Configure the authorizer for the API resource.
D. Configure the API methods to use the authorizer.
E. Configure the authorizer for the API stage.
Answer: B. D. Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API. Reference: Control access to a REST API using Amazon Cognito user pools as authorizer. Category: Security
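A sketch of the same setup through the REST API with boto3; the API ID, resource ID, and user pool ARN are placeholders:

```python
import boto3

apigw = boto3.client("apigateway")

# Create a Cognito user pool authorizer for the REST API.
authorizer = apigw.create_authorizer(
    restApiId="a1b2c3",  # placeholder API ID
    name="CognitoAuth",
    type="COGNITO_USER_POOLS",
    providerARNs=[
        "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_AbCdEfGhI"  # placeholder
    ],
    identitySource="method.request.header.Authorization",
)

# Attach the authorizer to a method so requests must carry a valid token.
apigw.update_method(
    restApiId="a1b2c3",
    resourceId="xyz123",  # placeholder resource ID
    httpMethod="GET",
    patchOperations=[
        {"op": "replace", "path": "/authorizationType", "value": "COGNITO_USER_POOLS"},
        {"op": "replace", "path": "/authorizerId", "value": authorizer["id"]},
    ],
)
```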
Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.)
A. Authenticate to the Amazon Cognito identity pool directly.
B. Authenticate to AWS Identity and Access Management (IAM) directly.
C. Authenticate to the Amazon Cognito user pool directly.
D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS).
E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.
Answer: C. E. Notes: The Amazon Cognito user pool provides direct user authentication. The Amazon Cognito user pool also provides a federated authentication option with a third-party identity provider (IdP), including amazon.com. Reference: Adding User Pool Sign-in Through a Third Party. Category: Security
Q82: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.)
A. Define an AWS Step Functions task for each Lambda function.
B. Define an AWS Step Functions task for each workflow.
C. Write code that polls the AWS Step Functions invocation to coordinate each workflow.
D. Define an AWS Step Functions state machine for each workflow.
E. Define an AWS Step Functions state machine for each Lambda function.
Answer: A. D. Notes: Step Functions is based on state machines and tasks. A state machine is a workflow: it expresses the workflow as a number of states, their relationships, and their input and output. Tasks perform work by coordinating with other AWS services, such as Lambda. You can coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine, written in the Amazon States Language. Reference: Getting Started with AWS Step Functions. Category: Development
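For illustration, a minimal two-task state machine in Amazon States Language, created with boto3; the Lambda function ARNs and the execution role ARN are placeholders:

```python
import json

import boto3

# Each Task state invokes one Lambda function; together they form one workflow.
definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",  # placeholder
            "Next": "ChargeCustomer",
        },
        "ChargeCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-customer",  # placeholder
            "End": True,
        },
    },
}

boto3.client("stepfunctions").create_state_machine(
    name="order-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",  # placeholder
)
```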
Q83: A company is migrating a web service to the AWS Cloud. The web service accepts requests by using HTTP (port 80). The company wants to use an AWS Lambda function to process HTTP requests. Which application design will satisfy these requirements?
A. Create an Amazon API Gateway API. Configure proxy integration with the Lambda function.
B. Create an Amazon API Gateway API. Configure non-proxy integration with the Lambda function.
C. Configure the Lambda function to listen to inbound network connections on port 80.
D. Configure the Lambda function as a target in the Application Load Balancer target group.
Answer: D. Notes: Elastic Load Balancing supports Lambda functions as a target for an Application Load Balancer. You can use load balancer rules to route HTTP requests to a function, based on the path or the header values. Then, process the request and return an HTTP response from your Lambda function. Reference: Using AWS Lambda with an Application Load Balancer. Category: Development
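A minimal sketch of a Lambda handler written for an ALB target: the load balancer delivers the HTTP request as the event and expects a response in this shape:

```python
def handler(event, context):
    """Handle an HTTP request forwarded by an Application Load Balancer."""
    path = event.get("path", "/")
    # The ALB expects statusCode/statusDescription/headers/body in the response.
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "text/plain"},
        "body": f"Hello from Lambda, you requested {path}",
    }
```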
Q84: A company is developing an image processing application. When an image is uploaded to an Amazon S3 bucket, a number of independent and separate services must be invoked to process the image. The services do not have to be available immediately, but they must process every image. Which application design satisfies these requirements?
A. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Each service pulls the message from the same queue.
B. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Each service subscribes to the same topic.
C. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe a separate Amazon Simple Notification Service (Amazon SNS) topic for each service to an Amazon SQS queue.
D. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe a separate Simple Queue Service (Amazon SQS) queue for each service to the Amazon SNS topic.
Answer: D. Notes: Each service can subscribe to an individual Amazon SQS queue, which receives an event notification from the Amazon SNS topic. This is a fanout architectural implementation. Reference: Common Amazon SNS scenarios. Category: Development
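A sketch of the fanout wiring with boto3; the topic, service, and queue names are placeholders, and the SQS queue policy that permits the topic to deliver messages is omitted for brevity:

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="image-uploaded")["TopicArn"]  # placeholder topic

# One queue per downstream service; each subscribes to the same topic.
for service in ("thumbnailer", "metadata-extractor", "moderation"):  # placeholder names
    queue_url = sqs.create_queue(QueueName=f"{service}-queue")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    # In practice, also attach a queue policy allowing this topic to send messages.
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
```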
Q85: A developer wants to implement Amazon EC2 Auto Scaling for a Multi-AZ web application. However, the developer is concerned that user sessions will be lost during scale-in events. How can the developer store the session state and share it across the EC2 instances?
A. Write the sessions to an Amazon Kinesis data stream. Configure the application to poll the stream.
B. Publish the sessions to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe each instance in the group to the topic.
C. Store the sessions in an Amazon ElastiCache for Memcached cluster. Configure the application to use the Memcached API.
D. Write the sessions to an Amazon Elastic Block Store (Amazon EBS) volume. Mount the volume to each instance in the group.
Answer: C. Notes: ElastiCache for Memcached is a distributed in-memory data store or cache environment in the cloud. It gives the application a session store that is shared across all instances in the group and is fast to access. Reference: What is Amazon ElastiCache for Memcached?
Q86: A developer is integrating a legacy web application that runs on a fleet of Amazon EC2 instances with an Amazon DynamoDB table. There is no AWS SDK for the programming language that was used to implement the web application. Which combination of steps should the developer perform to make an API call to Amazon DynamoDB from the instances? (Select TWO.)
A. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include an XML document that contains the request attributes.
B. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include a JSON document that contains the request attributes.
C. Sign the requests by using AWS access keys and Signature Version 4.
D. Use an EC2 SSH key to calculate Signature Version 4 of the request.
E. Provide the signature value through the HTTP X-API-Key header.
Answer: B. C. Notes: The HTTPS-based low-level AWS API for DynamoDB uses JSON as a wire protocol format. When you send HTTP requests to AWS, you sign the requests so that AWS can identify who sent them. Requests are signed with your AWS access key, which consists of an access key ID and secret access key. AWS supports two signature versions: Signature Version 4 and Signature Version 2. AWS recommends the use of Signature Version 4. Reference: Signing AWS API requests. Category: Development
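A sketch of signing a low-level DynamoDB request with Signature Version 4, using botocore's signer directly rather than a high-level client; the table name is a placeholder, and the third-party requests library is assumed to be available:

```python
import json

import requests  # third-party HTTP client, assumed available
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.session import Session

region = "us-east-1"
endpoint = f"https://dynamodb.{region}.amazonaws.com/"
body = json.dumps({"TableName": "Movies"})  # a low-level DescribeTable call; placeholder table

# Build the raw request with the DynamoDB JSON wire protocol headers.
request = AWSRequest(
    method="POST",
    url=endpoint,
    data=body,
    headers={
        "Content-Type": "application/x-amz-json-1.0",
        "X-Amz-Target": "DynamoDB_20120810.DescribeTable",
    },
)

# Sign with the AWS access key resolved from the environment or config files.
credentials = Session().get_credentials()
SigV4Auth(credentials, "dynamodb", region).add_auth(request)

response = requests.post(endpoint, headers=dict(request.headers), data=body)
print(response.json())
```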
Q87: A developer has written several custom applications that read and write to the same Amazon DynamoDB table. Each time the data in the DynamoDB table is modified, this change should be sent to an external API. Which combination of steps should the developer perform to accomplish this task? (Select TWO.)
A. Configure an AWS Lambda function to poll the stream and call the external API.
B. Configure an event in Amazon EventBridge (Amazon CloudWatch Events) that publishes the change to an Amazon Managed Streaming for Apache Kafka (Amazon MSK) data stream.
C. Create a trigger in the DynamoDB table to publish the change to an Amazon Kinesis data stream.
D. Deliver the stream to an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the API to the topic.
E. Enable DynamoDB Streams on the table.
Answer: A. E. Notes: If you enable DynamoDB Streams on a table, you can associate the stream Amazon Resource Name (ARN) with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table’s stream. Lambda polls the stream and invokes your function synchronously when it detects new stream records; the function can then call the external API. Reference: Tutorial: Process New Items with DynamoDB Streams and Lambda. Category: Monitoring
Q88: A company is migrating the create, read, update, and delete (CRUD) functionality of an existing Java web application to AWS Lambda. Which minimal code refactoring is necessary for the CRUD operations to run in the Lambda function?
A. Implement a Lambda handler function.
B. Import an AWS X-Ray package.
C. Rewrite the application code in Python.
D. Add a reference to the Lambda execution role.
Answer: A. Notes: Every Lambda function needs a Lambda-specific handler. Specifics of authoring vary between runtimes, but all runtimes share a common programming model that defines the interface between your code and the runtime code. You tell the runtime which method to run by defining a handler in the function configuration. The runtime runs that method. Next, the runtime passes in objects to the handler that contain the invocation event and context, such as the function name and request ID. Reference: Getting started with Lambda. Category: Refactoring
Q89: A company plans to use AWS log monitoring services to monitor an application that runs on premises. Currently, the application runs on a recent version of Ubuntu Server and outputs the logs to a local file. Which combination of steps should a developer perform to accomplish this goal? (Select TWO.)
A. Update the application code to include calls to the agent API for log collection.
B. Install the Amazon Elastic Container Service (Amazon ECS) container agent on the server.
C. Install the unified Amazon CloudWatch agent on the server.
D. Configure the long-term AWS credentials on the server to enable log collection by the agent.
E. Attach an IAM role to the server to enable log collection by the agent.
Answer: C. D. Notes: The unified CloudWatch agent needs to be installed on the server. Ubuntu Server 18.04 is one of the many supported operating systems. When you install the unified CloudWatch agent on an on-premises server, you will specify a named profile that contains the credentials of the IAM user. Reference: Collecting metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent. Category: Monitoring
Q90: A developer wants to monitor invocations of an AWS Lambda function by using Amazon CloudWatch Logs. The developer added a number of print statements to the function code that write the logging information to the stdout stream. After running the function, the developer does not see any log data being generated. Why does the log data NOT appear in the CloudWatch logs?
A. The log data is not written to the stderr stream.
B. Lambda function logging is not automatically enabled.
C. The execution role for the Lambda function did not grant permissions to write log data to CloudWatch Logs.
D. The Lambda function outputs the logs to an Amazon S3 bucket.
Answer: C. Notes: The function needs permission to call CloudWatch Logs. Update the execution role to grant the permission. You can use the managed policy AWSLambdaBasicExecutionRole. Reference: Troubleshoot execution issues in Lambda. Category: Monitoring
Q91: Which of the following are best practices you should implement into ongoing deployments of your application? (Select THREE.)
A. Use stage variables to manage secrets across environments
B. Create account-specific AWS SAM templates for each environment
C. Use an AutoPublish alias
D. Use traffic shifting with pre- and post-deployment hooks
E. Test throughout the pipeline
Answer: C. D. E. Notes: Secrets belong in a dedicated secrets store rather than in stage variables, and a single parameterized template is preferable to account-specific copies. An AutoPublish alias, traffic shifting with pre- and post-deployment hooks, and testing throughout the pipeline are ongoing-deployment best practices.
Q92: You are handing off maintenance of your new serverless application to an incoming team lead. Which recommendations would you make? (Select THREE.)
A. Keep up to date with the quotas and payload sizes for each AWS service you are using
B. Analyze production access patterns to identify potential improvements
C. Design your services to extend their life as long as possible
D. Minimize changes to your production application
E. Compare the value of using the latest first-class integrations versus using Lambda between AWS services
Answer: A. B. D. Notes: Keep up to date with the quotas and payload sizes for each AWS service you are using, analyze production access patterns to identify potential improvements, and minimize changes to your production application.
Q94: Your application needs to connect to an Amazon RDS instance on the backend. What is the best recommendation to the developer whose function must read from and write to the Amazon RDS instance?
A. Initialize the number of connections you want outside of the handler
B. Use the database TTL setting to clean up connections
C. Use reserved concurrency to limit the number of concurrent functions that would try to write to the database
D. Use the database proxy feature to provide connection pooling for the functions
Answer: D. Notes: Use the database proxy feature (Amazon RDS Proxy) to provide connection pooling for the functions; the proxy pools and shares database connections so that many concurrent function invocations do not exhaust the database’s connection limit.
Q95: A developer reports that a third-party library they need cannot be shared in the Lambda invocation environment. Which suggestion would you make?
A. Decrease the deployment package size
B. Set a provisioned concurrency of one so that the library doesn’t need to be shared across environments
C. Use reserved concurrency for the function that needs to use the library
D. Load the third-party library onto an Amazon EFS volume
Answer: D. Notes: Loading the third-party library onto an Amazon EFS volume mounted by the function makes it available across invocation environments without inflating the deployment package.
The AWS Certified Developer-Associate Examination (DVA-C01) is a pass or fail exam, scored against a minimum standard established by AWS professionals guided by certification industry best practices and guidelines. Results are reported as a scaled score of 100–1,000, with a minimum passing score of 720. The exam covers the following domains:
Domain 1: Deployment (22%)
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
1.2 Deploy applications using Elastic Beanstalk.
1.3 Prepare the application deployment package to be deployed to AWS.
1.4 Deploy serverless applications.

Domain 2: Security (26%)
2.1 Make authenticated calls to AWS services.
2.2 Implement encryption using AWS services.
2.3 Implement application authentication and authorization.

Domain 3: Development with AWS Services (30%)
3.1 Write code for serverless applications.
3.2 Translate functional requirements into application design.
3.3 Implement application design into application code.
3.4 Write code that interacts with AWS services by using APIs, SDKs, and AWS CLI.

Domain 4: Refactoring (10%)
4.1 Optimize application to best use AWS services and features.
4.2 Migrate existing application code to run on AWS.

Domain 5: Monitoring and Troubleshooting (10%)
5.1 Write code that can be monitored.
5.2 Perform root cause analysis on faults found in testing or production.
In this AWS tutorial, we are going to discuss how we can make the best use of AWS services to build a highly scalable and fault-tolerant configuration of EC2 instances. The use of load balancers and Auto Scaling groups supports a number of AWS best practices, including performance efficiency, reliability, and high availability.
Before we dive into this hands-on tutorial on how exactly we can build this solution, let’s have a brief recap of what an Auto Scaling group and a load balancer are.
Auto Scaling group (ASG)
An Auto Scaling group (ASG) is a logical grouping of EC2 instances that can scale out and scale in depending on pre-configured settings. By setting scaling policies on your ASG, you can choose how many EC2 instances are launched and terminated based on your application’s load. You can do this with manual, dynamic, scheduled, or predictive scaling.
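For example, a target-tracking policy (one form of dynamic scaling) can be attached with boto3; the group name and target value are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# The ASG adds or removes instances to hold average CPU near the target value.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="MyTestASG",  # placeholder group name
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # placeholder target utilization
    },
)
```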
Elastic Load Balancer (ELB)
An Elastic Load Balancer (ELB) is a name describing a number of services within AWS designed to distribute traffic across multiple EC2 instances in order to provide enhanced scalability, availability, security, and more. The particular type of load balancer we will be using today is an Application Load Balancer (ALB). The ALB is a Layer 7 load balancer designed to distribute HTTP/HTTPS traffic across multiple nodes, with added features such as TLS termination, sticky sessions, and complex routing configurations.
Getting Started
First of all, we open our AWS management console and head to the EC2 management console.
We scroll down on the left-hand side and select ‘Launch Templates’. A launch template is a configuration template that defines the settings for EC2 instances launched by the ASG.
Under Launch Templates, we will select “Create launch template”.
We specify the name ‘MyTestTemplate’ and use the same text in the description.
Under the ‘Auto Scaling guidance’ box, tick the box which says ‘Provide guidance to help me set up a template that I can use with EC2 Auto Scaling’ and scroll down to launch template contents.
When it comes to choosing our AMI (Amazon Machine Image) we can choose the Amazon Linux 2 under ‘Quick Start’.
The Amazon Linux 2 AMI is free tier eligible, and easy to use for our demonstration purposes.
Next, we select the ‘t2.micro’ under instance types, as this is also free tier eligible.
Under Network Settings, we create a new Security Group called ExampleSG in our default VPC, allowing inbound HTTP access from anywhere (0.0.0.0/0).
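For reference, the same launch template can be created programmatically; a sketch with boto3, where the AMI and security group IDs are placeholders (look up the current Amazon Linux 2 AMI for your Region):

```python
import boto3

ec2 = boto3.client("ec2")

# Equivalent of the console steps above.
ec2.create_launch_template(
    LaunchTemplateName="MyTestTemplate",
    VersionDescription="MyTestTemplate",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",   # placeholder Amazon Linux 2 AMI
        "InstanceType": "t2.micro",           # free tier eligible
        "SecurityGroupIds": ["sg-0123abcd"],  # ExampleSG from the step above
    },
)
```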