2022 AWS Certified Developer Associate Exam Preparation: Questions and Answers Dump

Welcome to AWS Certified Developer Associate Exam Preparation:

Definition and Objectives, Top 100 Questions and Answers dump, White papers, Courses, Labs and Training Materials, Exam info and details, References, Jobs, Others AWS Certificates

#AWS #Developer #AWSCloud #DVAC02 #AWSDeveloper #AWSDev #Djamgatech

What is the AWS Certified Developer Associate Exam?

This AWS Certified Developer-Associate Examination is intended for individuals who perform a Developer role. It validates an examinee’s ability to:

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications by using AWS

Recommended general IT knowledge
The target candidate should have the following:
– In-depth knowledge of at least one high-level programming language
– Understanding of application lifecycle management
– The ability to write code for serverless applications
– Understanding of the use of containers in the development process

Recommended AWS knowledge
The target candidate should be able to do the following:

  • Use the AWS service APIs, CLI, and software development kits (SDKs) to write applications
  • Identify key features of AWS services
  • Understand the AWS shared responsibility model
  • Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy applications on AWS
  • Use and interact with AWS services
  • Apply basic understanding of cloud-native applications to write code
  • Write code by using AWS security best practices (for example, use IAM roles instead of secret and access keys in the code)
  • Author, maintain, and debug code modules on AWS

What is considered out of scope for the target candidate?
The following is a non-exhaustive list of related job tasks that the target candidate is not expected to be able to perform. These items are considered out of scope for the exam:
  • Design architectures (for example, distributed system, microservices)
  • Design and implement CI/CD pipelines
  • Administer IAM users and groups
  • Administer Amazon Elastic Container Service (Amazon ECS)
  • Design AWS networking infrastructure (for example, Amazon VPC, AWS Direct Connect)
  • Understand compliance and licensing

Exam content
Response types
There are two types of questions on the exam:
– Multiple choice: Has one correct response and three incorrect responses (distractors)
– Multiple response: Has two or more correct responses out of five or more response options
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose.
Distractors are generally plausible responses that match the content area.
Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.


Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context.
The percentage in each domain represents only scored content.



Domain 1: Deployment 22%
Domain 2: Security 26%
Domain 3: Development with AWS Services 30%
Domain 4: Refactoring 10%
Domain 5: Monitoring and Troubleshooting 12%

Domain 1: Deployment
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
–  Commit code to a repository and invoke build, test and/or deployment actions
–  Use labels and branches for version and release management
–  Use AWS CodePipeline to orchestrate workflows against different environments
–  Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS CodeDeploy for CI/CD purposes
–  Perform a rollback based on the application deployment policy

1.2 Deploy applications using AWS Elastic Beanstalk.
–  Utilize existing supported environments to define a new application stack
–  Package the application
–  Introduce a new application version into the Elastic Beanstalk environment
–  Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with additional batch, immutable)
–  Validate application health using Elastic Beanstalk dashboard
–  Use Amazon CloudWatch Logs to instrument application logging


1.3 Prepare the application deployment package to be deployed to AWS.
–  Manage the dependencies of the code module (like environment variables, config files and static image files) within the package
–  Outline the package/container directory structure and organize files appropriately
–  Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)

1.4 Deploy serverless applications.
–  Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template
–  Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)

Domain 2: Security
2.1 Make authenticated calls to AWS services.
–  Communicate required policy based on least privileges required by application.
–  Assume an IAM role to access a service
–  Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)

2.2 Implement encryption using AWS services.
– Encrypt data at rest (client side; server side; envelope encryption) using AWS services
–  Encrypt data in transit

2.3 Implement application authentication and authorization.
– Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools
–  Use Amazon Cognito-provided credentials to write code that accesses AWS services
–  Use Amazon Cognito Sync to synchronize user profiles and data
–  Use developer-authenticated identities to interact between end user devices, backend authentication, and Amazon Cognito

Domain 3: Development with AWS Services
3.1 Write code for serverless applications.
– Compare and contrast server-based vs. serverless models (e.g., microservices, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications)
– Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, timeout, runtime, handler)
– Create an API endpoint using Amazon API Gateway
–  Create and test appropriate API actions like GET, POST using the API endpoint
–  Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes)
–  Compute read/write capacity units for Amazon DynamoDB based on application requirements
–  Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis)
–  Invoke an AWS Lambda function synchronously and asynchronously

3.2 Translate functional requirements into application design.
– Determine real-time vs. batch processing for a given use case
– Determine use of synchronous vs. asynchronous for a given use case
– Determine use of event vs. schedule/poll for a given use case
– Account for tradeoffs for consistency models in an application design

Domain 4: Refactoring
4.1 Optimize applications to best use AWS services and features.
– Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache)
– Apply an Amazon S3 naming scheme for optimal read performance

4.2 Migrate existing application code to run on AWS.
– Isolate dependencies
– Run the application as one or more stateless processes
– Develop in order to enable horizontal scalability
– Externalize state

Domain 5: Monitoring and Troubleshooting


5.1 Write code that can be monitored.
– Create custom Amazon CloudWatch metrics
– Perform logging in a manner available to systems operators
– Instrument application source code to enable tracing in AWS X-Ray

5.2 Perform root cause analysis on faults found in testing or production.
– Interpret the outputs from the logging mechanism in AWS to identify errors in logs
– Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues
– Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component

Which key tools, technologies, and concepts might be covered on the exam?

The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
– Analytics
– Application Integration
– Containers
– Cost and Capacity Management
– Data Movement
– Developer Tools
– Instances (virtual machines)
– Management and Governance
– Networking and Content Delivery
– Security
– Serverless

AWS services and features

Analytics:
– Amazon Elasticsearch Service (Amazon ES)
– Amazon Kinesis
Application Integration:
– Amazon EventBridge (Amazon CloudWatch Events)
– Amazon Simple Notification Service (Amazon SNS)
– Amazon Simple Queue Service (Amazon SQS)
– AWS Step Functions

Compute:
– Amazon EC2
– AWS Elastic Beanstalk
– AWS Lambda

Containers:
– Amazon Elastic Container Registry (Amazon ECR)
– Amazon Elastic Container Service (Amazon ECS)
– Amazon Elastic Kubernetes Service (Amazon EKS)

Database:
– Amazon DynamoDB
– Amazon ElastiCache
– Amazon RDS

Developer Tools:
– AWS CodeArtifact
– AWS CodeBuild
– AWS CodeCommit
– AWS CodeDeploy
– Amazon CodeGuru
– AWS CodePipeline
– AWS CodeStar
– AWS Fault Injection Simulator
– AWS X-Ray


Management and Governance:
– AWS CloudFormation
– Amazon CloudWatch

Networking and Content Delivery:
– Amazon API Gateway
– Amazon CloudFront
– Elastic Load Balancing

Security, Identity, and Compliance:
– Amazon Cognito
– AWS Identity and Access Management (IAM)
– AWS Key Management Service (AWS KMS)

Storage:
– Amazon S3

Out-of-scope AWS services and features

The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.
Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant.
Out-of-scope AWS services and features include the following:
– AWS Application Discovery Service
– Amazon AppStream 2.0
– Amazon Chime
– Amazon Connect
– AWS Database Migration Service (AWS DMS)
– AWS Device Farm
– Amazon Elastic Transcoder
– Amazon GameLift
– Amazon Lex
– Amazon Machine Learning (Amazon ML)
– AWS Managed Services
– Amazon Mobile Analytics
– Amazon Polly
– Amazon QuickSight
– Amazon Rekognition
– AWS Server Migration Service (AWS SMS)
– AWS Service Catalog
– AWS Shield Advanced
– AWS Shield Standard
– AWS Snow Family
– AWS Storage Gateway
– AWS WAF
– Amazon WorkMail
– Amazon WorkSpaces

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why each answer is right or wrong, and the concepts behind it, by carefully reading the reference documents in the answers.


AWS Certified Developer – Associate Practice Questions And Answers Dump

Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost.
How can you accommodate the partners’ broken web services without wasting your resources?

  • A. Create a delay queue and set DelaySeconds to 30 seconds
  • B. Requeue the message with a VisibilityTimeout of 30 seconds.
  • C. Create a dead letter queue and set the Maximum Receives to 3.
  • D. Requeue the message with a DelaySeconds of 30 seconds.


C. After a message is taken from the queue and returned for the maximum number of retries, it is automatically sent to a dead-letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.

Reference: Amazon SQS Dead-Letter Queues
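
As a rough sketch of how this is configured in code (the queue names, URL, and account ID below are made up), you can attach a redrive policy with boto3 so that SQS moves a message to the dead-letter queue after 3 failed receives:

    import json
    import boto3

    sqs = boto3.client("sqs")

    # Create the dead-letter queue and look up its ARN.
    dlq = sqs.create_queue(QueueName="partner-commands-dlq")
    dlq_arn = sqs.get_queue_attributes(
        QueueUrl=dlq["QueueUrl"], AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # After 3 receives without a delete, SQS moves the message to the DLQ.
    sqs.set_queue_attributes(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/partner-commands",
        Attributes={
            "RedrivePolicy": json.dumps(
                {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
            )
        },
    )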



Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently.
What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. The AWS Documentation mentions the following:

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX



Q2: You are creating a DynamoDB table with the following attributes:

  • PurchaseOrderNumber (partition key)
  • CustomerID
  • PurchaseDate
  • TotalPurchaseValue

One of your applications must retrieve items from the table to calculate the total value of purchases for a particular customer over a date range. What secondary index do you need to add to the table?

  • A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
  • B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
  • C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
  • D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute


C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the TotalPurchaseValue into the index provides all the data needed to satisfy the use case.

Reference: AWS DynamoDB Global Secondary Indexes
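
As a minimal sketch (table name, index name, and attribute types are assumed), creating such a Global Secondary Index with boto3 looks roughly like this:

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.create_table(
        TableName="PurchaseOrders",
        AttributeDefinitions=[
            {"AttributeName": "PurchaseOrderNumber", "AttributeType": "S"},
            {"AttributeName": "CustomerID", "AttributeType": "S"},
            {"AttributeName": "PurchaseDate", "AttributeType": "S"},
        ],
        KeySchema=[{"AttributeName": "PurchaseOrderNumber", "KeyType": "HASH"}],
        GlobalSecondaryIndexes=[
            {
                "IndexName": "CustomerID-PurchaseDate-index",
                "KeySchema": [
                    {"AttributeName": "CustomerID", "KeyType": "HASH"},
                    {"AttributeName": "PurchaseDate", "KeyType": "RANGE"},
                ],
                # Project only the attribute the query needs.
                "Projection": {
                    "ProjectionType": "INCLUDE",
                    "NonKeyAttributes": ["TotalPurchaseValue"],
                },
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 5,
                    "WriteCapacityUnits": 5,
                },
            }
        ],
        ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    )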

Difference between local and global indexes in DynamoDB

    • Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
    • Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
    • Local Secondary Indexes still rely on the original hash key. When you supply a table with hash+range, think of the LSIs as hash+range1, hash+range2, …, hash+range6. You get 5 more range attributes to query on. Also, there is only one provisioned throughput.
    • Global Secondary Indexes define a new paradigm: different hash/range keys per index. This breaks the original usage of one hash key per table. This is also why, when defining a GSI, you are required to add a provisioned throughput per index and pay for it.
    • Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
    • Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.

Throughput:

  • Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table and another for the index. Both operations will consume write capacity units from the table.
  • Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation consumes read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table and another for the index.



Q3: When referencing the time remaining for a Lambda function to run within the function’s code, you would use:

  • A. The event object
  • B. The timeLeft object
  • C. The remains object
  • D. The context object


D. The context object.

Reference: AWS Lambda



Q4: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context

D. event, context

    def handler_name(event, context):
        return some_value

Reference: AWS Lambda Function Handler in Python


Q5: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda Deployment Package


Q6: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime

C. Function code and libraries not included within the runtime

Reference: AWS Lambda Deployment Package in PowerShell


Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?

  • A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
  • B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
  • C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
  • D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.


D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return requests. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return requests.

Reference: Comparison of Security Groups and Network ACLs

AWS Security Groups and NACL



Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.

Reference: AWS Network Address Translation Gateway



Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.

Reference: AWS Auto Scaling




Q11: You’re writing a script with an AWS SDK that uses AWS API actions to create AMIs for non-EBS-backed instances. Which API call occurs in the final step of creating an AMI?

  • A. RegisterImage
  • B. CreateImage
  • C. ami-register-image
  • D. ami-create-image

A. It is RegisterImage. AWS API actions follow this PascalCase capitalization and don’t contain hyphens.

Reference: API RegisterImage


Q12: When dealing with session state in EC2-based applications using Elastic Load Balancers, which option is generally thought of as the best practice for managing user sessions?

  • A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
  • B. Permanently assigning users to specific instances and always routing their traffic to those instances
  • C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
  • D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance


Q13: Which API call would best be used to describe an Amazon Machine Image?

  • A. ami-describe-image
  • B. ami-describe-images
  • C. DescribeImage
  • D. DescribeImages

D. In general, API actions stick to the PascalCase style with the first letter of every word capitalized.

Reference: API DescribeImages
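
For illustration, here is a small example of calling DescribeImages through boto3, where the PascalCase API action maps to a snake_case method:

    import boto3

    ec2 = boto3.client("ec2")

    # DescribeImages (PascalCase API action) -> describe_images in boto3.
    response = ec2.describe_images(Owners=["self"])
    for image in response["Images"]:
        print(image["ImageId"], image.get("Name"))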


Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

  • A. Autoscaling requires using Amazon EBS-backed instances
  • B. Virtual Private Cloud requires EBS backed instances
  • C. Amazon EBS-backed instances can be stopped and restarted without losing data
  • D. Instance-store backed instances can be stopped and restarted without losing data


C. Instance store-backed images use “ephemeral” (temporary) storage that is only available during the life of an instance. Rebooting an instance allows ephemeral data to persist. However, stopping and starting an instance will erase all ephemeral storage.

Reference: What is the difference between EBS and Instance Store?


Q15: After creating a new Linux instance on Amazon EC2 and downloading the .pem key file (called my_key.pem), you try to SSH into the instance’s IP address (52.2.222.22) using the following command:

    ssh -i my_key.pem ec2-user@52.2.222.22

However, you receive the following error:

    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @ WARNING: UNPROTECTED PRIVATE KEY FILE! @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

What is the most probable reason for this and how can you fix it?

  • A. You do not have root access on your terminal and need to use the sudo option for this to work.
  • B. You do not have enough permissions to perform the operation.
  • C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
  • D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.

D. You need to run something like:

    chmod 400 my_key.pem


Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?

  • A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
  • B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
  • C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
  • D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.

D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.

Reference: AWS Root Device Storage


Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:

  • A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
  • B. Can only be used to launch EC2 instances in the same country as the AMI is stored
  • C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
  • D. Can be used to launch EC2 instances in any AWS region

C. AMIs are only available in the region they are created in. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another.

Reference: https://aws.amazon.com/amazon-linux-ami/


Q18: Which of the following statements is true about the Elastic File System (EFS)?

  • A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
  • B. EFS can be used by multiple EC2 instances simultaneously
  • C. EFS cannot be used by an instance using EBS
  • D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes


A. and B.

Reference: https://aws.amazon.com/efs/


Q19: IAM Policies, at a minimum, contain what elements?

  • A. ID
  • B. Effects
  • C. Resources
  • D. Sid
  • E. Principal
  • F. Actions

B. C. and F.

Effect – Use Allow or Deny to indicate whether the policy allows or denies access.

Resource – Specify a list of resources to which the actions apply.

Action – Include a list of actions that the policy allows or denies.

Id and Sid are not required fields in IAM policies; they are optional.

Reference: AWS IAM Access Policies
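
A minimal policy document showing just the required elements (the bucket ARN and policy name below are made up) could be created like this with boto3:

    import json
    import boto3

    iam = boto3.client("iam")

    # Effect, Action, and Resource are the required elements of each
    # statement; Id and Sid are optional.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": "arn:aws:s3:::example-bucket/*",
            }
        ],
    }

    iam.create_policy(
        PolicyName="ReadExampleBucket",
        PolicyDocument=json.dumps(policy_document),
    )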


Q20: What are the main benefits of IAM groups?

  • A. The ability to create custom permission policies.
  • B. Assigning IAM permission policies to more than one user at a time.
  • C. Easier user/policy management.
  • D. Allowing EC2 instances to gain access to S3.

B. and C.

A. is incorrect: this is a benefit of IAM generally, or a benefit of IAM policies. IAM groups don’t create policies; they have policies attached to them.

Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

 


Q21: What are benefits of using AWS STS?

  • A. Grant access to AWS resources without having to create an IAM identity for them
  • B. Since credentials are temporary, you don’t have to rotate or revoke them
  • C. Temporary security credentials can be extended indefinitely
  • D. Temporary security credentials can be restricted to a specific region




Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?

  • A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1.micro to m4.large.
  • B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
  • C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
  • D. Change the instance type value for the environment to m4.large by using the update autoscaling group CLI command.

B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence.
Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation.
During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:

  • Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options that apply at this level unless overridden.
  • Saved configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
  • Configuration files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle. Configuration files are executed in alphabetical order. For example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
  • Default values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.

If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or applied directly to the environment, the setting is stored as part of the environment’s configuration. These settings can be removed with the AWS CLI or with the EB CLI.
Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.

Reference: Managing EC2 features – Elastic Beanstalk
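
As a hedged sketch of the same idea (the application, environment, and solution stack names are made up), the instance-type option can also be supplied at environment creation through the API:

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # The InstanceType option lives in the
    # aws:autoscaling:launchconfiguration namespace.
    eb.create_environment(
        ApplicationName="my-web-app",
        EnvironmentName="my-web-app-prod",
        SolutionStackName="64bit Amazon Linux 2 v3.4.0 running Python 3.8",
        OptionSettings=[
            {
                "Namespace": "aws:autoscaling:launchconfiguration",
                "OptionName": "InstanceType",
                "Value": "m4.large",
            }
        ],
    )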

Q24: What statements are true about Availability Zones (AZs) and Regions?

  • A. There is only one AZ in each AWS Region
  • B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
  • C. AZs can be moved between AWS Regions based on your needs
  • D. There are (almost always) two or more AZs in each AWS Region


B and D.

Reference: AWS global infrastructure/


Q25: An AWS Region contains:

  • A. Edge Locations
  • B. Data Centers
  • C. AWS Services
  • D. Availability Zones


B. C. D. Edge locations are actually distinct locations that don’t explicitly fall within AWS regions.

Reference: AWS Global Infrastructure



Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?

  • A. Eventual Consistent Reads
  • B. Conditional reads for Consistency
  • C. Strongly Consistent Reads
  • D. Not possible


C. This is provided very clearly in the AWS documentation on read consistency for DynamoDB. Only with strongly consistent reads are you guaranteed to get the latest written value after all the writes have completed.

Reference: https://aws.amazon.com/dynamodb/faqs/



Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is required during the migration process. Which of the following steps would you consider for this requirement?

  • A. Create an OpsWorks stack and deploy the Docker containers
  • B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
  • C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
  • D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.


B. The Elastic Beanstalk service is the ideal service to quickly provision development environments. You can also create environments that host Docker-based containers.

Reference: Create and Deploy Docker in AWS



Q28: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200–500 MB. You’ve seen that the application sometimes takes longer than expected to upload the object. You want to improve the performance of the application. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket



C. All other options are invalid, since the best way to handle large object uploads to the S3 service is to use the Multipart Upload API. The Multipart Upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: you initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
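
A simple way to get multipart uploads (with parallel part uploads) is boto3’s managed transfer layer; a rough sketch, with made-up file and bucket names:

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Use multipart for anything over 100 MB: 25 MB parts, 10 parallel threads.
    config = TransferConfig(
        multipart_threshold=100 * 1024 * 1024,
        multipart_chunksize=25 * 1024 * 1024,
        max_concurrency=10,
    )

    s3.upload_file("video.mp4", "example-media-bucket", "video.mp4", Config=config)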



Q29: A security system monitors 600 cameras, saving image metadata every 1 minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?

  • A. 6000
  • B. 10
  • C. 3600
  • D. 600


B. The write capacity of a DynamoDB table is specified as the number of 1 KB writes per second. In the question above, since each camera writes once per minute, we divide 600 by 60 to get the number of 1 KB writes per second, which gives a value of 10.

You can specify the write capacity in the Capacity tab of the DynamoDB table.

Reference: AWS working with tables
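
The arithmetic can be checked in a couple of lines of Python:

    import math

    # 600 cameras each write one 1 KB item per minute, evenly spread over time.
    items_per_second = 600 / 60               # 10 writes per second
    wcu_per_item = math.ceil(1 / 1)           # one 1 KB write = 1 WCU
    print(items_per_second * wcu_per_item)    # 10 write capacity units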


Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?

  • A. Lazy loading
  • B. Write-through
  • C. Error retries
  • D. Exponential backoff



Answer – A
Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests the data from your data store, which returns the data to your application. Your application then writes the data received from the store to the cache so it can be retrieved more quickly the next time it is requested. All other options are incorrect.
Reference: Caching Strategies
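
A minimal lazy-loading (cache-aside) sketch in Python, assuming the redis-py package, a made-up ElastiCache Redis endpoint, and a caller-supplied function that queries RDS:

    import json
    import redis

    # Hypothetical ElastiCache Redis endpoint.
    cache = redis.Redis(host="my-cluster.cache.amazonaws.com", port=6379)

    def get_user(user_id, load_from_rds):
        """Check the cache first; query the database only on a cache miss."""
        key = f"user:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)              # cache hit
        record = load_from_rds(user_id)            # cache miss: go to RDS
        cache.setex(key, 300, json.dumps(record))  # populate the cache with a TTL
        return record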


Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15–60 seconds. How should the Developer efficiently query the queue for new messages?

  • A. Use long polling
  • B. Set a custom visibility timeout
  • C. Use short polling
  • D. Implement exponential backoff


Answer – A
Long polling helps ensure that the application makes fewer requests for messages over a given period, which is more cost-effective. Since the messages are only going to be available after 15 seconds and we don’t know exactly when they will be available, it is better to use long polling.
Reference: Amazon SQS Long Polling
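
A rough consumer loop using long polling with boto3 (the queue URL is made up; WaitTimeSeconds can be up to 20):

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"

    while True:
        # Long polling: the call blocks until a message arrives or the
        # 20-second wait expires, cutting down on empty responses.
        response = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,
        )
        for message in response.get("Messages", []):
            print(message["Body"])  # placeholder for real processing
            sqs.delete_message(
                QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"]
            )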


Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With the new Lambda function working as expected, which of the following will shift traffic from the original Lambda function to the new Lambda function in the shortest time frame?

  • A. Canary10Percent5Minutes
  • B. Linear10PercentEvery10Minutes
  • C. Canary10Percent15Minutes
  • D. Linear10PercentEvery1Minute


Answer – A
With the Canary deployment preference type, traffic is shifted in two intervals. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first interval, while the remaining traffic is shifted after 5 minutes.
Reference: Gradual Code Deployment


Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::Layerversion
  • D. AWS::Serverless::Function


Answer – B
The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket.
Reference: Declaring Serverless Resources


Q35: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the following is true with regard to Envelope Encryption?

  • A. Data is encrypted by an encrypted Data key, which is further encrypted using an encrypted Master Key.
  • B. Data is encrypted by a plaintext Data key, which is further encrypted using an encrypted Master Key.
  • C. Data is encrypted by an encrypted Data key, which is further encrypted using a plaintext Master Key.
  • D. Data is encrypted by a plaintext Data key, which is further encrypted using a plaintext Master Key.


Answer – D
With Envelope Encryption, unencrypted data is encrypted using a plaintext Data key. This Data key is further encrypted using a plaintext Master key. This plaintext Master key is securely stored in AWS KMS and is known as a Customer Master Key.
Reference: AWS Key Management Service Concepts
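
A short sketch of envelope encryption with the KMS API (the CMK alias is made up): GenerateDataKey returns the data key both in plaintext and encrypted under the master key.

    import boto3

    kms = boto3.client("kms")

    response = kms.generate_data_key(
        KeyId="alias/my-app-key",
        KeySpec="AES_256",
    )
    plaintext_key = response["Plaintext"]       # encrypt data locally, then discard
    encrypted_key = response["CiphertextBlob"]  # store alongside the encrypted data

    # Later: recover the plaintext data key from the stored encrypted copy.
    plaintext_key_again = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]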


Q36: You are developing an application that will be comprised of the following architecture –

  1. A set of Ec2 instances to process the videos.
  2. These (Ec2 instances) will be spun up by an autoscaling group.
  3. SQS Queues to maintain the processing messages.
  4. There will be 2 pricing tiers.

How will you ensure that the premium customers videos are given more preference?

  • A. Create 2 Autoscaling Groups, one for normal and one for premium customers
  • B. Create 2 set of Ec2 Instances, one for normal and one for premium customers
  • C. Create 2 SQS queues, one for normal and one for premium customers
  • D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.


Answer – C
The ideal option would be to create 2 SQS queues. Messages can then be processed by the application from the high-priority queue first. The other options are not ideal; they would lead to extra costs and extra maintenance.
Reference: SQS
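
A minimal sketch of the priority-polling idea (the queue URLs are made up): drain the premium queue first and fall back to the normal queue.

    import boto3

    sqs = boto3.client("sqs")
    PREMIUM_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/videos-premium"
    NORMAL_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/videos-normal"

    def next_message():
        """Return a message from the premium queue first, if any."""
        for queue_url in (PREMIUM_QUEUE, NORMAL_QUEUE):
            response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
            messages = response.get("Messages", [])
            if messages:
                return queue_url, messages[0]
        return None, None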


Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?

  • A. CustomerID
  • B. CustomerName
  • C. Location
  • D. Age


Answer – A
Use high-cardinality attributes: attributes that have distinct values for each item, such as email ID, employee number, customer ID, session ID, order ID, and so on.
Use composite attributes: try to combine more than one attribute to form a unique key.
Reference: Choosing the right DynamoDB Partition Key


Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for any network delays. Which of the following two mechanisms should he implement in the application?

  • A. Multiple SQS queues
  • B. Exponential backoff algorithm
  • C. Retries in your application code
  • D. Consider using the Java sdk.


Answer – B. and C.
In addition to simple retries, each AWS SDK implements an exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values, and should be set based on the operation being performed, as well as other local factors, such as network latency.
Reference: Error Retries and Exponential Backoff in AWS
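
A generic retry helper showing the idea (the parameter values are illustrative, not AWS defaults):

    import random
    import time

    def call_with_backoff(operation, max_retries=5, base_delay=0.1, max_delay=20.0):
        """Retry a flaky call with exponential backoff and jitter."""
        for attempt in range(max_retries):
            try:
                return operation()
            except Exception:
                if attempt == max_retries - 1:
                    raise
                # Progressively longer waits, capped at max_delay, with jitter.
                delay = min(max_delay, base_delay * (2 ** attempt))
                time.sleep(delay * random.uniform(0.5, 1.0))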


Q39: An application is being developed that is going to write data to a DynamoDB table. You have to set up the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds. Each item is of size 6 KB. The reads can be eventually consistent reads. What should be the read capacity that needs to be set on the table?

  • A. 10
  • B. 20
  • C. 6
  • D. 30


Answer – A

Since there are 300 items read every 30 seconds, there are (300 / 30) = 10 items read every second.
Since each item is 6 KB in size, 2 read capacity units (of 4 KB each) are required for each item.
So we have a total of 2 × 10 = 20 reads per second.
Since eventual consistency is sufficient, we can divide the number of reads (20) by 2, which gives a read capacity of 10.

Reference: Read/Write Capacity Mode
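
The same calculation as straight Python:

    import math

    items_per_second = 300 / 30            # 10 items read per second
    rcu_per_item = math.ceil(6 / 4)        # 6 KB item -> 2 units of 4 KB each
    strongly_consistent = items_per_second * rcu_per_item  # 20 RCUs
    eventually_consistent = strongly_consistent / 2        # 10 RCUs
    print(eventually_consistent)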



Q40: You are in charge of deploying an application that will be hosted on an EC2 instance and sit behind an Elastic Load Balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the below options can satisfy this requirement?

  • A. Use AWS CloudTrail with your load balancer
  • B. Enable access logs on the load balancer
  • C. Use a CloudWatch Logs Agent
  • D. Create a custom metric CloudWatch filter on your load balancer


Answer – B
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.
Reference: Access Logs for Your Application Load Balancer
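
For an Application Load Balancer, access logs can be switched on through the attributes API; a hedged sketch with a made-up ARN and bucket (the bucket also needs a policy that allows ELB log delivery):

    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.modify_load_balancer_attributes(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                        "loadbalancer/app/my-alb/abc123",
        Attributes=[
            {"Key": "access_logs.s3.enabled", "Value": "true"},
            {"Key": "access_logs.s3.bucket", "Value": "my-alb-logs"},
            {"Key": "access_logs.s3.prefix", "Value": "prod"},
        ],
    )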


Q41: A static website has been hosted on a bucket and is now being accessed by users. One of the web page’s JavaScript sections has been changed to access data that is hosted in another S3 bucket. Now that same web page is no longer loading in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket


Answer – C

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

Cross-Origin Resource Sharing use-case scenarios. The following are example scenarios for using CORS:

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.

Reference: Cross-Origin Resource Sharing (CORS)
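
A rough example of applying a CORS rule with boto3 (the bucket and origin are made up): allow the static-site origin to issue GET requests against the data bucket.

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_cors(
        Bucket="my-data-bucket",
        CORSConfiguration={
            "CORSRules": [
                {
                    "AllowedOrigins": [
                        "http://website.s3-website-us-east-1.amazonaws.com"
                    ],
                    "AllowedMethods": ["GET"],
                    "AllowedHeaders": ["*"],
                    "MaxAgeSeconds": 3000,
                }
            ]
        },
    )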



Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.


Answer – C
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on key values generated via the user ID.

Reference: The AWS Security Token Service (STS)
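As an illustration of option C, the sketch below uses STS to mint temporary credentials scoped to a single user's objects after the application has authenticated the user itself; the bucket name, user ID, and policy are hypothetical:

    import json
    import boto3

    sts = boto3.client("sts")

    user_id = "user-1234"  # set after application-level authentication
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::photo-bucket/{user_id}/*",
        }],
    }

    # Temporary, limited-privilege credentials for this user only.
    creds = sts.get_federation_token(
        Name=user_id,
        Policy=json.dumps(policy),
        DurationSeconds=3600,
    )["Credentials"]

    # An S3 client built from these credentials can only touch the user's prefix.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )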


Top

Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.

  • A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
  • B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
  • C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
  • D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map-reduce job to retrieve the required information on user counts.

Answer – C
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data.
Reference: Amazon Kinesis
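A minimal boto3 sketch of option C's ingestion side; the stream name and payload are hypothetical:

    import json
    import boto3

    kinesis = boto3.client("kinesis")

    # Each web-application log event is pushed to the stream as it happens,
    # so the subscribed log-processing application can report in real time.
    kinesis.put_record(
        StreamName="web-app-logs",
        Data=json.dumps({"user": "user-1234", "path": "/home", "status": 200}),
        PartitionKey="user-1234",
    )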

Top

 

Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?

  • A. AWS Simple Storage Service
  • B. AWS DynamoDB
  • C. AWS RDS
  • D. AWS Redshift

Answer – B
DynamoDB is a suitable solution for storing session state. Its low data-access latency makes it a good fit as a data store for session management.
Reference: Scalable Session Handling in PHP Using Amazon DynamoDB
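A minimal boto3 sketch of writing a session item; the table name and attributes are hypothetical, and the expires_at attribute could back a DynamoDB TTL setting so stale sessions are purged automatically:

    import time
    import boto3

    sessions = boto3.resource("dynamodb").Table("user-sessions")

    sessions.put_item(
        Item={
            "session_id": "abc123",                  # partition key
            "user_id": "user-1234",
            "expires_at": int(time.time()) + 3600,   # one-hour session
        }
    )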

Top

Q45: Your application currently interacts with a DynamoDB table. Records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the following features should be used when developing such a solution?

  • A. AWS DynamoDB Encryption
  • B. AWS DynamoDB Streams
  • C. AWS DynamoDB Accelerator
  • D. AWS Table Accelerator


Answer – B
DynamoDB Streams Use Cases and Design Patterns: this post describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. We will consider how to manage the following scenarios:

  • How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
  • How do you trigger an event based on a particular transaction?
  • How do you audit or archive transactions?
  • How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?

Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states: success or failure. In other words, there is no partial completion.

As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it's consistent and can be rolled back before commit.

You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.

AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region.

All of the other options are incorrect since none of them would meet the core requirement.
Reference: DynamoDB Streams Use Cases and Design Patterns
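One common way to implement the requirement is to attach an AWS Lambda function to the stream; the sketch below assumes a hypothetical secondary table and a string "id" key, and is only one possible design:

    import boto3

    audit_table = boto3.resource("dynamodb").Table("primary-table-audit")

    def handler(event, context):
        # Each record describes one item-level change captured by the stream.
        for record in event["Records"]:
            if record["eventName"] == "MODIFY":
                # NewImage uses the DynamoDB wire format, e.g. {"id": {"S": "123"}}.
                new_image = record["dynamodb"]["NewImage"]
                audit_table.put_item(
                    Item={
                        "id": new_image["id"]["S"],
                        "modified_at": int(record["dynamodb"]["ApproximateCreationDateTime"]),
                    }
                )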


Top

 

Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?

  • A. Large Page size
  • B. Reduced page size
  • C. Parallel Scans
  • D. Sequential scans

Answer – B
When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table's provisioned throughput.

Reduce page size: because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a "pause" between each request. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling.
Reference1: Rate-Limited Scans in Amazon DynamoDB

Reference2: Best Practices for Querying and Scanning Data
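A minimal boto3 sketch of a reduced-page-size scan; the table name, page size, and pause length are illustrative values to tune for your item sizes and provisioned throughput:

    import time
    import boto3

    table = boto3.resource("dynamodb").Table("large-table")

    items = []
    kwargs = {"Limit": 40}  # small page size instead of the default 1 MB page

    while True:
        page = table.scan(**kwargs)
        items.extend(page["Items"])
        if "LastEvaluatedKey" not in page:
            break
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
        time.sleep(0.1)  # brief pause so other critical requests are not throttled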


Top

 

Q47: Which of the following are correct ways of passing a stage variable to an HTTP URL? (Select TWO.)

  • A. http://example.com/${}/prod
  • B. http://example.com/${stageVariables.}/prod
  • C. http://${stageVariables.}.example.com/dev/operation
  • D. http://${stageVariables}.example.com/dev/operation
  • E. http://${}.example.com/dev/operation
  • F. http://example.com/${stageVariables}/prod


Answer – B. and C.
A stage variable can be used as part of an HTTP integration URL in the following ways:

  • A full URI without protocol
  • A full domain
  • A subdomain
  • A path
  • A query string

In the above case, option B uses a stage variable as a path and option C uses one as a subdomain.
Reference: Amazon API Gateway Stage Variables Reference

Top

Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes which they use for their on-premise configuration for servers in AWS. Which of the following service would be ideal to use in this regard?

  • A. AWS Elastic Beanstalk
  • B. AWS OpsWorks
  • C. AWS CloudFormation
  • D. AWS SQS


Answer – B
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management.
Reference: AWS OpsWorks

Top

 

Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?

  • A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
  • B. Configure S3 bucket tags with your AWS access keys for the bucket hosting your website so that the application can query them for access.
  • C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
  • D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.


Answer – C
With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application.

Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be used this way.
Reference: About Web Identity Federation
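As a rough sketch of the credential exchange (shown here with boto3 rather than the browser SDK the question describes; the role ARN and token are placeholders):

    import boto3

    sts = boto3.client("sts")

    # Exchange the token issued by Google/Facebook for temporary AWS credentials.
    creds = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/web-identity-dynamodb-role",
        RoleSessionName="app-user-session",
        WebIdentityToken="<token-from-identity-provider>",
    )["Credentials"]

    # DynamoDB access now uses short-lived credentials, not embedded API keys.
    dynamodb = boto3.resource(
        "dynamodb",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )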

Top

Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?

  • A. Cognito Data
  • B. Cognito Events
  • C. Cognito Streams
  • D. Cognito Callbacks


Answer – C
Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can now configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams.
Reference: Amazon Cognito Streams

Top

 

Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? Choose 2 answers from the options given below.

  • A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
  • B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
  • C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
  • D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function


Answer: A and C.
AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC.
Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC
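A minimal boto3 sketch of supplying those two settings; the function, subnet, and security group IDs are placeholders:

    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.update_function_configuration(
        FunctionName="my-vpc-function",
        VpcConfig={
            "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
            "SecurityGroupIds": ["sg-0123abcd"],
        },
    )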

Top

 

Q52: You’ve currently been tasked to migrate an existing on-premises environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant environments in the Elastic Beanstalk service that would be suitable to host your application. What should you consider doing in this case?

  • A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
  • B. Consider using Cloudformation to deploy your environment to Elastic Beanstalk
  • C. Consider using Packer to create a custom platform
  • D. Consider deploying your application using the Elastic Container Service


Answer – C
Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a custom image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software for which Elastic Beanstalk doesn't provide a platform out of the box.

Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform's software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance.

To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems (Ubuntu, RHEL, or Amazon Linux; see the flavor entry in Platform.yaml File Format for the exact version numbers) and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings.
Reference: AWS Elastic Beanstalk Custom Platforms

Top

Q53: Company B is writing 10 items to a DynamoDB table every second. Each item is 15.5 KB in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.

  • A. 10
  • B. 160
  • C. 155
  • D. 16


Answer – B.
One write capacity unit (WCU) represents one write per second for an item up to 1 KB in size, and item sizes are rounded up to the next whole KB. Each 15.5 KB item therefore consumes 16 WCUs, so writing 10 items per second requires 16 × 10 = 160 WCUs of provisioned write throughput.
Reference: Read/Write Capacity Mode
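A quick sanity check of that arithmetic in Python (a minimal sketch; the numbers come from the question):

    import math

    item_size_kb = 15.5
    writes_per_second = 10

    # One WCU covers one write per second of up to 1 KB,
    # with each item's size rounded up to the next whole KB.
    wcu = math.ceil(item_size_kb) * writes_per_second
    print(wcu)  # 160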

Top


Q54: Which AWS service can be used to automatically install your application code onto EC2, on-premises systems, and Lambda?

  • A. CodeCommit
  • B. X-Ray
  • C. CodeBuild
  • D. CodeDeploy


Answer: D

Reference: AWS CodeDeploy


Top

 

Q55: Which AWS service can be used to compile source code, run tests and package code?

  • A. CodePipeline
  • B. CodeCommit
  • C. CodeBuild
  • D. CodeDeploy


Answer: C.

Reference: AWS CodeBuild


Top

Q56: How can you prevent CloudFormation from deleting your entire stack on failure? (Choose 2)

  • A. Set the Rollback on failure radio button to No in the CloudFormation console
  • B. Set Termination Protection to Enabled in the CloudFormation console
  • C. Use the –disable-rollback flag with the AWS CLI
  • D. Use the –enable-termination-protection protection flag with the AWS CLI

Answer: A. and C.

Reference: Protecting a Stack From Being Deleted
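For option C, a minimal boto3 equivalent of the CLI flag; the stack name and template path are placeholders (termination protection, options B and D, addresses stack deletion rather than rollback on failure):

    import boto3

    cfn = boto3.client("cloudformation")

    with open("template.yml") as f:
        template_body = f.read()

    # DisableRollback mirrors the CLI's --disable-rollback flag, leaving
    # failed resources in place for debugging instead of rolling them back.
    cfn.create_stack(
        StackName="my-stack",
        TemplateBody=template_body,
        DisableRollback=True,
    )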

Top

Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?

  • A. Continuous Integration
  • B. Continuous Deployment
  • C. Continuous Delivery
  • D. Continuous Development
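
Answer: A.
Continuous Integration is the practice of merging code changes into a shared repository frequently, with automated builds and tests that surface bugs early in the release process.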

Top

Q58: When deploying application code to EC2, the AppSpec file can be written in which language?

  • A. JSON
  • B. JSON or YAML
  • C. XML
  • D. YAML
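
Answer: D.
For EC2/On-Premises deployments, the AppSpec file must be written in YAML. (JSON-formatted AppSpec files apply only to AWS Lambda and Amazon ECS deployments.)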

Top

 

Q59: Part of your CloudFormation deployment fails due to a misconfiguration. By default, what will happen?

  • A. CloudFormation will rollback only the failed components
  • B. CloudFormation will rollback the entire stack
  • C. Failed component will remain available for debugging purposes
  • D. CloudFormation will ask you if you want to continue with the deployment
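
Answer: B.
By default, CloudFormation rolls back the entire stack on failure, deleting any resources it created, so you are not left with a partially deployed stack.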


Top

Q60: You want to receive an email whenever a user pushes code to a CodeCommit repository. How can you configure this?

  • A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
  • B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
  • C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
  • D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.

Answer: C

Reference: Getting Started with Amazon SNS


Top

Q61: Which AWS service can be used to centrally store and version control your application source code, binaries, and libraries?

  • A. CodeCommit
  • B. CodeBuild
  • C. CodePipeline
  • D. ElasticFileSystem

Answer: A

Reference: AWS CodeCommit


Top

 

Q62: You are using CloudFormation to create a new S3 bucket. Which of the following sections would you use to define the properties of your bucket?

  • A. Conditions
  • B. Parameters
  • C. Outputs
  • D. Resources

Answer: D

Reference: Resources


Top

Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?

  • A. Transforms
  • B. Outputs
  • C. Resources
  • D. Instances

Answer: C.
The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3.
Reference: Resources

Top

Q64: Which AWS service can be used to fully automate your entire release process?

  • A. CodeDeploy
  • B. CodePipeline
  • C. CodeCommit
  • D. CodeBuild

Answer: B.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.

Reference: AWS CodePipeline


Top

 

Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?

  • A. Outputs
  • B. Transforms
  • C. Resources
  • D. Exports

Answer: A.
Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack.
Reference: CloudFormation Outputs
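A minimal template fragment showing the pattern; the logical resource name and export name are placeholders:

    Outputs:
      VpcId:
        Description: ID of the VPC created by this stack
        Value: !Ref MyVpc
        Export:
          Name: shared-vpc-id

    # A second stack's template can then import the value:
    #   VpcId: !ImportValue shared-vpc-id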

Top

 

Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?

  • A. Inputs
  • B. Resources
  • C. Transforms
  • D. Files

Answer: C.
Transforms is used to reference code located in S3 and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments.
Reference: Transforms

Top

Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?

  • A. buildspec.yml
  • B. appspec.json
  • C. appspec.yml
  • D. buildspec.json
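
Answer: C.
For EC2/On-Premises deployments, CodeDeploy requires a YAML-formatted file named appspec.yml that specifies the source files to copy and the lifecycle hook scripts to run.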

Top

 

Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?

  • A. Share the code using an EBS volume
  • B. Copy and paste the code into the template each time you need to use it
  • C. Use a CloudFormation nested stack
  • D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.

Answer: C.

Reference: Working with Nested Stacks

Top

Q69: In the CodeDeploy AppSpec file, what are hooks used for?

  • A. To reference AWS resources that will be used during the deployment
  • B. Hooks are reserved for future use
  • C. To specify files you want to copy during the deployment.
  • D. To specify scripts or functions that you want to run at set points in the deployment lifecycle

Answer: D.
The ‘hooks’ section for an EC2/On-Premises deployment contains mappings that link deployment lifecycle event hooks to one or more scripts.

Reference: AppSpec ‘hooks’ Section
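A minimal appspec.yml sketch for an EC2/On-Premises deployment; the paths and script names are hypothetical:

    version: 0.0
    os: linux
    files:
      - source: /app
        destination: /var/www/app
    hooks:
      BeforeInstall:
        - location: scripts/install_dependencies.sh
          timeout: 300
          runas: root
      ApplicationStart:
        - location: scripts/start_server.sh
          timeout: 60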

Top

 

Q70: Which command can you use to encrypt a plain text file using a CMK?

  • A. aws kms-encrypt
  • B. aws iam encrypt
  • C. aws kms encrypt
  • D. aws encrypt

Answer: C.
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext fileb://ExamplePlaintextFile --output text --query CiphertextBlob > C:\Temp\ExampleEncryptedFile.base64

Reference: AWS CLI Encrypt

Top

Q72: Which of the following is an encrypted key used by KMS to encrypt your data?

  • A. Customer Managed Key
  • B. Encryption Key
  • C. Envelope Key
  • D. Customer Master Key

Answer: C.
Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption.
Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.

Reference: Envelope Encryption
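A minimal boto3 sketch of envelope encryption; the key ID is a placeholder, and how you encrypt the data locally with the plaintext key is up to your application:

    import boto3

    kms = boto3.client("kms")

    # Ask KMS for a data (envelope) key generated under a CMK.
    resp = kms.generate_data_key(
        KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
        KeySpec="AES_256",
    )

    plaintext_key = resp["Plaintext"]       # encrypt data locally, then discard
    encrypted_key = resp["CiphertextBlob"]  # store alongside the encrypted data

    # Later, recover the plaintext key by asking KMS to decrypt the stored blob.
    plaintext_key_again = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]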

Top

 

Q73: Which of the following statements are correct? (Choose 2)

  • A. The Customer Master Key is used to encrypt and decrypt the Envelope Key or Data Key
  • B. The Envelope Key or Data Key is used to encrypt and decrypt plain text files.
  • C. The envelope Key or Data Key is used to encrypt and decrypt the Customer Master Key.
  • D. The Customer Master Key is used to encrypt and decrypt plain text files.

Answer: A. and B.

Reference: AWS Key Management Service Concepts

Top

Q74: Which of the following statements are correct in relation to KMS? (Choose 2)

  • A. KMS Encryption keys are regional
  • B. You cannot export your customer master key
  • C. You can export your customer master key.
  • D. KMS encryption Keys are global

Answer: A. and B.

Reference: AWS Key Management Service FAQs

Q75:  A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.)
A. Compiled application code
B. Java runtime environment
C. References to the event sources
D. Lambda execution role
E. Application dependencies


Answer: A. E.
Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your compiled code and any dependencies. The Java runtime, event source references, and the execution role are not part of the deployment package.
Reference: Lambda deployment packages.

Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package?
A. A launch template for the Amazon EC2 Auto Scaling group
B. A CodeDeploy AppSpec file
C. An EC2 role that grants the application access to AWS services
D. An IAM policy that grants the application access to AWS services


Answer: B.
Notes: The CodeDeploy AppSpec (application specific) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file.
Reference: CodeDeploy application specification (AppSpec) files.
Category: Deployment

Q77: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)

A. Create a new Lambda version every time a new code release needs testing.
B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version.
C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT.
D. Create a new Lambda layer every time a new code release needs testing.
E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.


Answer: A. B.
Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes, without affecting the stable production version of the code. By creating separate aliases for Production and Development, systems can initiate the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. The $LATEST version is the newest published version.
Reference: Lambda function versions.

For more information about Lambda layers, see Creating and sharing Lambda layers.

For more information about Lambda function aliases, see Lambda function aliases.

Category: Deployment

Q78: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.)
A. Update event source mappings with the ARN of the Lambda layer.
B. Point a Lambda alias to a new version of the Lambda function.
C. Create a Lambda alias for each published version of the Lambda function.
D. Point a Lambda alias to a new Lambda function alias.
E. Update the event source mappings with the Lambda alias ARN.


Answer: B. E.
Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version.
Reference: Lambda function aliases.
Category: Deployment
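A minimal boto3 sketch of the promote-by-alias pattern behind the last two questions; the function and alias names are placeholders:

    import boto3

    lambda_client = boto3.client("lambda")

    # Publish the current code as an immutable version...
    version = lambda_client.publish_version(FunctionName="order-processor")["Version"]

    # ...then repoint the alias; event source mappings that reference the
    # alias ARN keep working without any update.
    lambda_client.update_alias(
        FunctionName="order-processor",
        Name="Production",
        FunctionVersion=version,
    )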

Q79: A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements?
A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C).
B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket.
C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket.
D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.


Answer: D.
Notes: When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and removes the encryption key from memory.
Reference: Protecting data using server-side encryption with customer-provided encryption keys (SSE-C).

Category: Security
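A minimal boto3 sketch of an SSE-C upload and read-back; the bucket, key, and key-generation step are placeholders (in practice the company would manage this key in its own key store):

    import os
    import boto3

    s3 = boto3.client("s3")

    customer_key = os.urandom(32)  # 256-bit key managed by the company

    # S3 uses the supplied key for AES-256 encryption, then discards it.
    s3.put_object(
        Bucket="sensitive-user-data",
        Key="users/user-1234.json",
        Body=b'{"name": "..."}',
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=customer_key,
    )

    # The same key must be provided again to read the object back.
    obj = s3.get_object(
        Bucket="sensitive-user-data",
        Key="users/user-1234.json",
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=customer_key,
    )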

Q80: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)

A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS).
B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS).
C. Use generated keys with the DynamoDB Encryption Client.
D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs).
E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).


Answer: A. C.
Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK.
Reference: Direct KMS Materials Provider.
Category: Deployment

Q81: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.)
A. Create an AWS Lambda authorizer for the API.
B. Create an Amazon Cognito authorizer for the API.
C. Configure the authorizer for the API resource.
D. Configure the API methods to use the authorizer.
E. Configure the authorizer for the API stage.


Answer: B. D.
Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API.
Reference: Control access to a REST API using Amazon Cognito user pools as authorizer.
Category: Security

Q82: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.)
A. Authenticate to the Amazon Cognito identity pool directly.
B. Authenticate to AWS Identity and Access Management (IAM) directly.
C. Authenticate to the Amazon Cognito user pool directly.
D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS).
E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.


Answer: C. E.
Notes: The Amazon Cognito user pool provides direct user authentication. The Amazon Cognito user pool also provides a federated authentication option with third-party identity providers (IdPs), including amazon.com.
Reference: Adding User Pool Sign-in Through a Third Party.
Category: Security

Q83: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.)
A. Define a AWS Step Functions task for each Lambda function.
B. Define a AWS Step Functions task for each workflow.
C. Write code that polls the AWS Step Functions invocation to coordinate each workflow.
D. Define an AWS Step Functions state machine for each workflow.
E. Define an AWS Step Functions state machine for each Lambda function.
Answer: A. D.
Notes: Step Functions is based on state machines and tasks. A state machine is a workflow: it can be used to express a workflow as a number of states, their relationships, and their input and output. Tasks perform work by coordinating with other AWS services, such as Lambda. You can coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine, written in the Amazon States Language.
Reference: Getting Started with AWS Step Functions (https://aws.amazon.com/step-functions/getting-started/)
Category: Development
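For illustration, a minimal Amazon States Language definition with one Task state per Lambda function; the workflow, state names, and ARNs are hypothetical:

    {
      "Comment": "Order-processing workflow; one Task state per Lambda function",
      "StartAt": "ValidateOrder",
      "States": {
        "ValidateOrder": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
          "Next": "ChargePayment"
        },
        "ChargePayment": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-payment",
          "End": true
        }
      }
    }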


Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context.
The percentage in each domain represents only scored content.

Domain 1: Deployment 22%
Domain 2: Security 26%
Domain 3: Development with AWS Services 30%
Domain 4: Refactoring 10%
Domain 5: Monitoring and Troubleshooting 12%

Domain 1: Deployment
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
–  Commit code to a repository and invoke build, test and/or deployment actions
–  Use labels and branches for version and release management
–  Use AWS CodePipeline to orchestrate workflows against different environments
–  Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS
CodeDeploy for CI/CD purposes
–  Perform a roll back plan based on application deployment policy

1.2 Deploy applications using AWS Elastic Beanstalk.
–  Utilize existing supported environments to define a new application stack
–  Package the application
–  Introduce a new application version into the Elastic Beanstalk environment
–  Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable)
–  Validate application health using Elastic Beanstalk dashboard
–  Use Amazon CloudWatch Logs to instrument application logging

1.3 Prepare the application deployment package to be deployed to AWS.
–  Manage the dependencies of the code module (like environment variables, config files and static image files) within the package
–  Outline the package/container directory structure and organize files appropriately
–  Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)

1.4 Deploy serverless applications.
–  Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template
–  Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)

Domain 2: Security
2.1 Make authenticated calls to AWS services.
–  Communicate required policy based on least privileges required by application.
–  Assume an IAM role to access a service
–  Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)

2.2 Implement encryption using AWS services.
– Encrypt data at rest (client side; server side; envelope encryption) using AWS services
–  Encrypt data in transit

2.3 Implement application authentication and authorization.
– Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools
–  Use Amazon Cognito-provided credentials to write code that accesses AWS services.
–  Use Amazon Cognito sync to synchronize user profiles and data
–  Use developer-authenticated identities to interact between end user devices, backend
authentication, and Amazon Cognito

Domain 3: Development with AWS Services
3.1 Write code for serverless applications.
– Compare and contrast server-based vs. serverless model (e.g., micro services, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications)
– Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler)
– Create an API endpoint using Amazon API Gateway
–  Create and test appropriate API actions like GET, POST using the API endpoint
–  Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes)
–  Compute read/write capacity units for Amazon DynamoDB based on application requirements
–  Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis)
–  Invoke an AWS Lambda function synchronously and asynchronously

3.2 Translate functional requirements into application design.
– Determine real-time vs. batch processing for a given use case
– Determine use of synchronous vs. asynchronous for a given use case
– Determine use of event vs. schedule/poll for a given use case
– Account for tradeoffs for consistency models in an application design

Domain 4: Refactoring
4.1 Optimize applications to best use AWS services and features.
–  Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache)
–  Apply an Amazon S3 naming scheme for optimal read performance

4.2 Migrate existing application code to run on AWS.
– Isolate dependencies
– Run the application as one or more stateless processes
– Develop in order to enable horizontal scalability
– Externalize state

Domain 5: Monitoring and Troubleshooting

5.1 Write code that can be monitored.
– Create custom Amazon CloudWatch metrics
– Perform logging in a manner available to systems operators
– Instrument application source code to enable tracing in AWS X-Ray

5.2 Perform root cause analysis on faults found in testing or production.
– Interpret the outputs from the logging mechanism in AWS to identify errors in logs
– Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues
– Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component

Which key tools, technologies, and concepts might be covered on the exam?

The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
– Analytics
– Application Integration
– Containers
– Cost and Capacity Management
– Data Movement
– Developer Tools
– Instances (virtual machines)
– Management and Governance
– Networking and Content Delivery
– Security
– Serverless

AWS services and features

Analytics:
– Amazon Elasticsearch Service (Amazon ES)
– Amazon Kinesis
Application Integration:
– Amazon EventBridge (Amazon CloudWatch Events)
– Amazon Simple Notification Service (Amazon SNS)
– Amazon Simple Queue Service (Amazon SQS)
– AWS Step Functions

Compute:
– Amazon EC2
– AWS Elastic Beanstalk
– AWS Lambda

Containers:
– Amazon Elastic Container Registry (Amazon ECR)
– Amazon Elastic Container Service (Amazon ECS)
– Amazon Elastic Kubernetes Service (Amazon EKS)

Database:
– Amazon DynamoDB
– Amazon ElastiCache
– Amazon RDS

Developer Tools:
– AWS CodeArtifact
– AWS CodeBuild
– AWS CodeCommit
– AWS CodeDeploy
– Amazon CodeGuru
– AWS CodePipeline
– AWS CodeStar
– AWS Fault Injection Simulator
– AWS X-Ray

Management and Governance:
– AWS CloudFormation
– Amazon CloudWatch

Networking and Content Delivery:
– Amazon API Gateway
– Amazon CloudFront
– Elastic Load Balancing

Security, Identity, and Compliance:
– Amazon Cognito
– AWS Identity and Access Management (IAM)
– AWS Key Management Service (AWS KMS)

Storage:
– Amazon S3

Out-of-scope AWS services and features

The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.
Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant.
Out-of-scope AWS services and features include the following:
– AWS Application Discovery Service
– Amazon AppStream 2.0
– Amazon Chime
– Amazon Connect
– AWS Database Migration Service (AWS DMS)
– AWS Device Farm
– Amazon Elastic Transcoder
– Amazon GameLift
– Amazon Lex
– Amazon Machine Learning (Amazon ML)
– AWS Managed Services
– Amazon Mobile Analytics
– Amazon Polly

– Amazon QuickSight
– Amazon Rekognition
– AWS Server Migration Service (AWS SMS)
– AWS Service Catalog
– AWS Shield Advanced
– AWS Shield Standard
– AWS Snow Family
– AWS Storage Gateway
– AWS WAF
– Amazon WorkMail
– Amazon WorkSpaces

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

Top

AWS Certified Developer – Associate Practice Questions And Answers Dump

Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your
partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost.
How can you accommodate the partners’ broken web services without wasting your resources?

  • A. Create a delay queue and set DelaySeconds to 30 seconds
  • B. Requeue the message with a VisibilityTimeout of 30 seconds.
  • C. Create a dead letter queue and set the Maximum Receives to 3.
  • D. Requeue the message with a DelaySeconds of 30 seconds.


C. After a message is taken from the queue and returned for the maximum number of retries, it is
automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.

Reference: Amazon SQS Dead-Letter Queues
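A minimal boto3 sketch of option C; the queue URL and ARN are placeholders:

    import json
    import boto3

    sqs = boto3.client("sqs")

    # After 3 failed receives, SQS moves the message to the dead-letter queue
    # instead of returning it to the source queue indefinitely.
    sqs.set_queue_attributes(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/commands",
        Attributes={
            "RedrivePolicy": json.dumps({
                "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:commands-dlq",
                "maxReceiveCount": "3",
            })
        },
    )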


Top

Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently.
What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. The AWS Documentation mentions the following:

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

Q2: You are creating a DynamoDB table with the following attributes:

  • PurchaseOrderNumber (partition key)
  • CustomerID
  • PurchaseDate
  • TotalPurchaseValue

One of your applications must retrieve items from the table to calculate the total value of purchases for a
particular customer over a date range. What secondary index do you need to add to the table?

  • A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute
  • C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute


C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition
key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the
TotalPurchaseValue into the index provides all the data needed to satisfy the use case.

Reference: AWS DynamoDB Global Secondary Indexes

Difference between local and global indexes in DynamoDB

    • Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
    • Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
    • Local Secondary Indexes still rely on the original Hash Key. When you supply a table with hash+range, think about the LSI as hash+range1, hash+range2.. hash+range6. You get 5 more range attributes to query on. Also, there is only one provisioned throughput.
    • Global Secondary Indexes define a new paradigm – different hash/range keys per index.
      This breaks the original usage of one hash key per table. This is also why when defining GSI you are required to add a provisioned throughput per index and pay for it.
    • Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
    • Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.

Throughput :

  • Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table another for the index. Both operations will consume write capacity units from the table.
  • Global Secondary Indexes have their own provisioned throughput, when you query the index the operation will consume read capacity from the index, when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table another for the index*.
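
A minimal boto3 sketch of querying the index described in answer C; the table name, index name, and key values are placeholders:

    import boto3
    from boto3.dynamodb.conditions import Key

    table = boto3.resource("dynamodb").Table("PurchaseOrders")

    resp = table.query(
        IndexName="CustomerID-PurchaseDate-index",
        KeyConditionExpression=(
            Key("CustomerID").eq("cust-42")
            & Key("PurchaseDate").between("2022-01-01", "2022-03-31")
        ),
    )

    # TotalPurchaseValue is projected into the index, so no table reads occur.
    total = sum(item["TotalPurchaseValue"] for item in resp["Items"])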


Top


Q3: When referencing the remaining time left for a Lambda function to run within the function’s code you would use:

  • A. The event object
  • B. The timeLeft object
  • C. The remains object
  • D. The context object


D. The context object.

Reference: AWS Lambda


Top

Q4: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context
D. event, context

    def handler_name(event, context):
        return some_value

Reference: AWS Lambda Function Handler in Python

Top

Q5: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda Deployment Package

Top

Q6: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime

C. Function code and libraries not included within the runtime

Reference: AWS Lambda Deployment Package in PowerShell

Top

Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?

  • A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
  • B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
  • C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
  • D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.


D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return requests. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return requests.

Reference: Comparison of Security Groups and Network ACLs



Top

Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.

Reference: AWS Network Address Translation Gateway


Top

Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.

Reference: AWS Auto Scaling


Top

Q10: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere

D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda

Top

Q11: You’re writing a script with an AWS SDK that uses AWS API actions to create AMIs for non-EBS-backed instances. Which API call occurs in the final step of creating an AMI?

  • A. RegisterImage
  • B. CreateImage
  • C. ami-register-image
  • D. ami-create-image

A. It is RegisterImage. All AWS API actions follow this PascalCase capitalization and do not contain hyphens.

Reference: API RegisterImage

Top

Q12: When dealing with session state in EC2-based applications behind Elastic Load Balancers, which option is generally considered the best practice for managing user sessions?

  • A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
  • B. Permanently assigning users to specific instances and always routing their traffic to those instances
  • C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
  • D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance

A. Storing session data in a shared cache such as ElastiCache (Redis or Memcached) keeps state off the individual instances, so any instance behind the ELB can serve any user. The sticky-session approaches in B, C, and D tie users to specific instances and do not survive instance failure.

Top

Q13: Which API call would best be used to describe an Amazon Machine Image?

  • A. ami-describe-image
  • B. ami-describe-images
  • C. DescribeImage
  • D. DescribeImages

D. In general, API actions stick to the PascalCase style with the first letter of every word capitalized.

Reference: API DescribeImages

Top

Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

  • A. Autoscaling requires using Amazon EBS-backed instances
  • B. Virtual Private Cloud requires EBS backed instances
  • C. Amazon EBS-backed instances can be stopped and restarted without losing data
  • D. Instance-store backed instances can be stopped and restarted without losing data

C. Instance-store backed images use “ephemeral” storage (temporary). The storage is only available during the life of an instance. Rebooting an instance allows ephemeral data to persist. However, stopping and starting an instance removes all ephemeral storage.

Reference: What is the difference between EBS and Instance Store?

Top

Q15: After creating a new Linux instance on Amazon EC2 and downloading the .pem file (called Toto.pem), you try to SSH into your IP address (54.1.132.33) using the following command.
ssh -i Toto.pem ec2-user@54.1.132.33
However, you receive the following error.
@@@@@@@@ WARNING: UNPROTECTED PRIVATE KEY FILE! @ @@@@@@@@@@@@@@@@@@@
What is the most probable reason for this and how can you fix it?

  • A. You do not have root access on your terminal and need to use the sudo option for this to work.
  • B. You do not have enough permissions to perform the operation.
  • C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
  • D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.

D. You need to run something like: chmod 400 Toto.pem

Reference:

Top

Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?

  • A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
  • B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
  • C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
  • D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.

D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.

Reference: AWS Root Device Storage

Top

Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:

  • A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
  • B. Can only be used to launch EC2 instances in the same country as the AMI is stored
  • C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
  • D. Can be used to launch EC2 instances in any AWS region

C. AMIs are only available in the region in which they are created. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another.

Reference: https://aws.amazon.com/amazon-linux-ami/

Top

Q18: Which of the following statements is true about the Elastic File System (EFS)?

  • A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
  • B. EFS can be used by multiple EC2 instances simultaneously
  • C. EFS cannot be used by an instance using EBS
  • D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes

A. and B.

Reference: https://aws.amazon.com/efs/

Top

Q19: IAM Policies, at a minimum, contain what elements?

  • A. ID
  • B. Effects
  • C. Resources
  • D. Sid
  • E. Principal
  • F. Actions

B. C. and F.

Effect – Use Allow or Deny to indicate whether the policy allows or denies access.

Resource – Specify a list of resources to which the actions apply.

Action – Include a list of actions that the policy allows or denies.

Id and Sid are optional fields in IAM policies; they are not required.

Reference: AWS IAM Access Policies
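For illustration, a minimal identity-based policy; beyond the recommended Version field, each statement needs only Effect, Action, and Resource (the bucket name is hypothetical):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-example-bucket/*"
    }
  ]
}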

Top

Q20: What are the main benefits of IAM groups?

  • A. The ability to create custom permission policies.
  • B. Assigning IAM permission policies to more than one user at a time.
  • C. Easier user/policy management.
  • D. Allowing EC2 instances to gain access to S3.

B. and C.

A. is incorrect: This is a benefit of IAM generally or a benefit of IAM policies. But IAM groups don’t create policies, they have policies attached to them.

Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

 

Top

Q21: What are benefits of using AWS STS?

  • A. Grant access to AWS resources without having to create an IAM identity for them
  • B. Since credentials are temporary, you don’t have to rotate or revoke them
  • C. Temporary security credentials can be extended indefinitely
  • D. Temporary security credentials can be restricted to a specific region

A. and B. STS lets you grant temporary, limited-privilege access without creating an IAM identity, and because the credentials expire automatically they do not need to be rotated or revoked. C is false (temporary credentials cannot be extended indefinitely), and D is false (they are not restricted to a specific region).

Top

Q22: What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?

  • A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1.micro to m4.large.
  • B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
  • C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
  • D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.

B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence.
Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation.
During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:

  • Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options at this level unless overridden.
  • Saved Configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
  • Configuration Files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle. Configuration files are executed in alphabetical order. For example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
  • Default Values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.

If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or applied directly to the environment, the setting is stored as part of the environment’s configuration. These settings can be removed with the AWS CLI or with the EB CLI.
Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.

Reference: Managing ec2 features – Elastic beanstalk
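For illustration, a minimal sketch of a configuration file setting the instance type through the aws:autoscaling:launchconfiguration namespace (whether you place this in a saved configuration or an .ebextensions file determines its precedence, as described above):

option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: m4.large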

Q24: What statements are true about Availability Zones (AZs) and Regions?

  • A. There is only one AZ in each AWS Region
  • B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
  • C. AZs can be moved between AWS Regions based on your needs
  • D. There are (almost always) two or more AZs in each AWS Region

B and D.

Reference: AWS Global Infrastructure

Top

Q25: An AWS Region contains:

  • A. Edge Locations
  • B. Data Centers
  • C. AWS Services
  • D. Availability Zones


B. C. D. Edge locations are actually distinct locations that don’t explicitly fall within AWS regions.

Reference: AWS Global Infrastructure


Top

Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?

  • A. Eventual Consistent Reads
  • B. Conditional reads for Consistency
  • C. Strongly Consistent Reads
  • D. Not possible


C. This is stated clearly in the AWS documentation on read consistency for DynamoDB. Only with strongly consistent reads are you guaranteed to get the most up-to-date value, reflecting all prior successful writes.

Reference: https://aws.amazon.com/dynamodb/faqs/
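For illustration, a minimal boto3 sketch (the table name and key are hypothetical); passing ConsistentRead=True requests a strongly consistent read instead of the default eventually consistent one:

import boto3

table = boto3.resource("dynamodb").Table("my-example-table")
# ConsistentRead=True guarantees the response reflects all successful
# writes that completed before the read.
response = table.get_item(Key={"id": "item-1"}, ConsistentRead=True)
print(response.get("Item"))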


Top

Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimal effort is required during the migration process. Which of the following steps would you consider for this requirement?

  • A. Create an OpsWorks stack and deploy the Docker containers
  • B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
  • C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
  • D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.


B. The Elastic Beanstalk service is the ideal service to quickly provision development environments. You can also create environments which can be used to host Docker based containers.

Reference: Create and Deploy Docker in AWS


Top

Q28: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes longer than expected to upload the object. You want to improve the performance of the application. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket


C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
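For illustration, a minimal boto3 sketch (the file, bucket, and key names are hypothetical); upload_file switches to the multipart upload API automatically once the file crosses multipart_threshold:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,  # use multipart uploads above 8 MB
    multipart_chunksize=8 * 1024 * 1024,  # upload the object in 8 MB parts
    max_concurrency=4,                    # upload parts in parallel threads
)
s3.upload_file("video.mp4", "my-example-bucket", "uploads/video.mp4", Config=config)

Uploading the parts in parallel is what recovers the lost throughput for 200 – 500 MB objects.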


Top

Q29: A security system monitors 600 cameras, saving image metadata every 1 minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?

  • A. 6000
  • B. 10
  • C. 3600
  • D. 600

B. When you specify the write capacity of a table in DynamoDB, you specify it as the number of 1 KB writes per second. Since each camera writes once per minute, divide 600 by 60 to get the number of 1 KB writes per second, which gives a value of 10.

You can specify the Write capacity in the Capacity tab of the DynamoDB table.

Reference: AWS working with tables

Q30: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context


D. event, context

def handler_name(event, context):
    return some_value
Reference: AWS Lambda Function Handler in Python

Top

Q31: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere
Reference: AWS Lambda Deployment Package

Top

Q32: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime


C. Function code and libraries not included within the runtime
Reference: AWS Lambda Deployment Package in PowerShell

Top

Q33: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.
Reference: AWS Network Address Translation Gateway

Top

Q34: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.
Reference: AWS Auto Scaling

Top


Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?

  • A. Lazy loading
  • B. Write-through
  • C. Error retries
  • D. Exponential backoff

Answer – A
Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. All other options are incorrect.
Reference: Caching Strategies

Top

Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15-60 seconds. How should the Developer efficiently query the queue for new messages?

  • A. Use long polling
  • B. Set a custom visibility timeout
  • C. Use short polling
  • D. Implement exponential backoff


Answer – A
Long polling helps ensure that the application makes fewer requests for messages over a given period, which is more cost-effective. Since the messages only become available every 15-60 seconds and we don’t know exactly when, it is better to use long polling.
Reference: Amazon SQS Long Polling
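For illustration, a minimal boto3 sketch (the queue URL is hypothetical); setting WaitTimeSeconds above 0 turns the request into a long poll:

import boto3

sqs = boto3.client("sqs")
response = sqs.receive_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,  # long polling: wait up to 20 seconds for a message
)
for message in response.get("Messages", []):
    print(message["Body"])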

Top

Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With the new Lambda function working as expected, which of the following will shift traffic from the original Lambda function to the new Lambda function in the shortest time frame?

  • A. Canary10Percent5Minutes
  • B. Linear10PercentEvery10Minutes
  • C. Canary10Percent15Minutes
  • D. Linear10PercentEvery1Minute


Answer – A
With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment, and the remaining 90 percent is shifted 5 minutes later.
Reference: Gradual Code Deployment

Top

Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources embeds an application from an Amazon S3 bucket?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::Layerversion
  • D. AWS::Serverless::Function


Answer – B
The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket.
Reference: Declaring Serverless Resources

Top

Q35: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the followings is True with regards to Envelope Encryption?

  • A. Data is encrypted by an encrypted Data key which is further encrypted using an encrypted Master Key.
  • B. Data is encrypted by plaintext Data key which is further encrypted using encrypted Master Key.
  • C. Data is encrypted by encrypted Data key which is further encrypted using plaintext Master Key.
  • D. Data is encrypted by plaintext Data key which is further encrypted using plaintext Master Key.


Answer – D
With envelope encryption, unencrypted data is encrypted using a plaintext Data key. The Data key is in turn encrypted using a plaintext Master key. The plaintext Master key is securely stored in AWS KMS and is known as a Customer Master Key.
Reference: AWS Key Management Service Concepts

Top

Q36: You are developing an application that will be comprised of the following architecture –

  1. A set of Ec2 instances to process the videos.
  2. These (Ec2 instances) will be spun up by an autoscaling group.
  3. SQS Queues to maintain the processing messages.
  4. There will be 2 pricing tiers.

How will you ensure that the premium customers’ videos are given preference?

  • A. Create 2 Autoscaling Groups, one for normal and one for premium customers
  • B. Create 2 set of Ec2 Instances, one for normal and one for premium customers
  • C. Create 2 SQS queues, one for normal and one for premium customers
  • D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.


Answer – C
The ideal option is to create 2 SQS queues. The application can then process messages from the high-priority queue first. The other options are not ideal; they would lead to extra cost and extra maintenance.
Reference: SQS
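For illustration, a minimal boto3 sketch of the two-queue pattern (the queue URLs are hypothetical); the worker drains the premium queue first and falls back to the standard queue only when no premium messages are waiting:

import boto3

sqs = boto3.client("sqs")
PREMIUM = "https://sqs.us-east-1.amazonaws.com/123456789012/premium-videos"
STANDARD = "https://sqs.us-east-1.amazonaws.com/123456789012/standard-videos"

def next_message():
    # Check the premium queue before the standard queue on every poll.
    for queue_url in (PREMIUM, STANDARD):
        messages = sqs.receive_message(
            QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=1
        ).get("Messages", [])
        if messages:
            return queue_url, messages[0]
    return None, None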

Top

Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?

  • A. CustomerID
  • B. CustomerName
  • C. Location
  • D. Age


Answer- A
Use high-cardinality attributes. These are attributes that have distinct values for each item, like e-mail ID, employee_no, customer ID, session ID, order ID, and so on.
Use composite attributes. Try to combine more than one attribute to form a unique key.
Reference: Choosing the right DynamoDB Partition Key

Top

Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for any network delays. Which of the following two mechanisms should he implement in the application?

  • A. Multiple SQS queues
  • B. Exponential backoff algorithm
  • C. Retries in your application code
  • D. Consider using the Java SDK.


Answer- B. and C.
In addition to simple retries, each AWS SDK implements exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values, and should be set based on the operation being performed, as well as other local factors, such as network latency.
Reference: Error Retries and Exponential Backoff in AWS
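For illustration, a minimal sketch of retries with exponential backoff and jitter (call_operation is a hypothetical stand-in for any AWS SDK call that may be throttled):

import random
import time

def call_with_backoff(call_operation, max_retries=5, base_delay=0.1, max_delay=5.0):
    for attempt in range(max_retries):
        try:
            return call_operation()
        except Exception:
            if attempt == max_retries - 1:
                raise  # give up after the final retry
            # Progressively longer waits: base_delay * 2^attempt, capped,
            # with random jitter to avoid synchronized retries.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * random.random())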

Top

Q39: An application is being developed that is going to write data to a DynamoDB table. You have to set up the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds. Each item is 6 KB in size. The reads can be eventually consistent reads. What should be the read capacity that needs to be set on the table?

  • A. 10
  • B. 20
  • C. 6
  • D. 30


Answer – A

Since there are 300 items read every 30 seconds, there are (300/30) = 10 items read every second.
Since each item is 6 KB in size and reads are measured in 4 KB units, 2 read units are required for each item.
That gives a total of 2 × 10 = 20 reads per second.
Since eventual consistency is acceptable, we can divide the number of reads (20) by 2, giving a read capacity of 10.

Reference: Read/Write Capacity Mode
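For illustration, the arithmetic above as a small Python check (assuming the standard 4 KB read unit and that an eventually consistent read costs half a unit):

import math

items_per_second = 300 / 30                 # 10 items read per second
units_per_item = math.ceil(6 / 4)           # 6 KB items -> 2 units of 4 KB
strongly_consistent = items_per_second * units_per_item  # 20
eventually_consistent = strongly_consistent / 2          # 10
print(eventually_consistent)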


Top

Q40: You are in charge of deploying an application that will be hosted on an EC2 instance and sit behind an Elastic Load Balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the below options can satisfy this requirement?

  • A. Use AWS CloudTrail with your load balancer
  • B. Enable access logs on the load balancer
  • C. Use a CloudWatch Logs Agent
  • D. Create a custom metric CloudWatch filter on your load balancer


Answer – B
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.
Reference: Access Logs for Your Application Load Balancer

Top

Q41: A static website has been hosted in an S3 bucket and is now being accessed by users. One of the web page’s JavaScript sections has been changed to access data that is hosted in another S3 bucket. Now that same web page is no longer loading in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket


Answer – C

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.

Reference: Cross-Origin Resource Sharing (CORS)
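For illustration, a minimal boto3 sketch applying a CORS rule like the one in Scenario 1 (the bucket name and origin are hypothetical):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_cors(
    Bucket="website",
    CORSConfiguration={
        "CORSRules": [
            {
                # Allow the static-site origin to call the bucket's API endpoint.
                "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET", "PUT"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)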


Top

Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user Images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.


Answer- C
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on key values generated from the user ID.

Reference: The AWS Security Token Service (STS)


Top

Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.

  • A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
  • B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application which is configured to pull down your log files stored in Amazon S3.
  • C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
  • D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.

Answer – C
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data.
Reference: Amazon Kinesis

Top

Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?

  • A. AWS Simple Storage Service
  • B. AWS DynamoDB
  • C. AWS RDS
  • D. AWS Redshift

Answer – B
DynamoDB is an alternative solution that can be used for session storage. Its low data-access latency makes it well suited as a data store for session management.
Reference: Scalable Session Handling in PHP Using Amazon DynamoDB

Top

Q45: Your application currently interacts with a DynamoDB table. Records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the below features should be used when developing such a solution?

  • A. AWS DynamoDB Encryption
  • B. AWS DynamoDB Streams
  • C. AWS DynamoDB Accelerator
  • D. AWS Table Accelerator


Answer – B
The post DynamoDB Streams Use Cases and Design Patterns describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. It considers how to manage the following scenarios:

  • How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
  • How do you trigger an event based on a particular transaction?
  • How do you audit or archive transactions?
  • How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?

Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states: success or failure. In other words, there is no partial completion.

As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it’s consistent and can be rolled back before commit.

You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.

AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region. All of the other options are incorrect since none of these would meet the core requirement.
Reference: DynamoDB Streams Use Cases and Design Patterns
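For illustration, a minimal sketch of a Lambda handler subscribed to the table’s stream (the secondary table, its “pk” key schema, and the NEW_IMAGE stream view type are all assumptions):

import boto3

audit_table = boto3.resource("dynamodb").Table("my-audit-table")  # hypothetical secondary table

def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            # NewImage is present when the stream is configured with the
            # NEW_IMAGE (or NEW_AND_OLD_IMAGES) view type.
            audit_table.put_item(Item={
                "pk": str(record["dynamodb"]["Keys"]),         # hypothetical key schema
                "new_image": str(record["dynamodb"]["NewImage"]),
            })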


Top

Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and the scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?

  • A. Large Page size
  • B. Reduced page size
  • C. Parallel Scans
  • D. Sequential scans

Answer – B
When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table’s provisioned throughput.

Reduce page size: because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling.
Reference1: Rate-Limited Scans in Amazon DynamoDB

Reference2: Best Practices for Querying and Scanning Data
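For illustration, a minimal boto3 sketch of a paginated Scan with a reduced page size (the table name and page size are hypothetical):

import boto3

dynamodb = boto3.client("dynamodb")
kwargs = {"TableName": "my-example-table", "Limit": 40}  # smaller page size
while True:
    page = dynamodb.scan(**kwargs)
    for item in page["Items"]:
        print(item)  # stand-in for real per-item processing
    if "LastEvaluatedKey" not in page:
        break  # no more pages
    kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]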


Top

Q47: Which of the following is the correct way of passing a stage variable to an HTTP URL? (Select TWO.)

  • A. http://example.com/${}/prod
  • B. http://example.com/${stageVariables.}/prod
  • C. http://${stageVariables.}.example.com/dev/operation
  • D. http://${stageVariables}.example.com/dev/operation
  • E. http://${}.example.com/dev/operation
  • F. http://example.com/${stageVariables}/prod


Answer – B. and C.
A stage variable can be used as part of an HTTP integration URL in the following cases:

  • A full URI without protocol
  • A full domain
  • A subdomain
  • A path
  • A query string

In the above case, options B and C display the stage variable as a path and a subdomain, respectively.
Reference: Amazon API Gateway Stage Variables Reference

Top

Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes which they use for their on-premise configuration for servers in AWS. Which of the following service would be ideal to use in this regard?

  • A. AWS Elastic Beanstalk
  • B. AWS OpsWorks
  • C. AWS Cloudformation
  • D. AWS SQS


Answer – B
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management.
Reference: AWS OpsWorks

Top

Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?

  • A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
  • B. Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access.
  • C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
  • D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.


Answer – C
With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be used.
Reference: About Web Identity Federation

Top

Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?

  • A. Cognito Data
  • B. Cognito Events
  • C. Cognito Streams
  • D. Cognito Callbacks


Answer – C
Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can now configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams.
Reference:

Top

Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? Choose 2 answers from the options given below

  • A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
  • B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
  • C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
  • D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function


Answer: A and C.
AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC.
Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC
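For illustration, a minimal boto3 sketch attaching an existing function to a VPC (the function name, subnet IDs, and security group ID are hypothetical):

import boto3

lam = boto3.client("lambda")
lam.update_function_configuration(
    FunctionName="my-function",
    VpcConfig={
        # Lambda uses these to create the ENIs inside your VPC.
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)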

Top

Q52: You’ve currently been tasked with migrating an existing on-premises environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant environments in the Beanstalk service that would be suitable to host your application. What should you consider doing in this case?

  • A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
  • B. Consider using Cloudformation to deploy your environment to Elastic Beanstalk
  • C. Consider using Packer to create a custom platform
  • D. Consider deploying your application using the Elastic Container Service


Answer – C
Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a custom image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software for which Elastic Beanstalk doesn’t provide a platform out of the box.

Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance.

To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems (Ubuntu, RHEL, or Amazon Linux; see the flavor entry in Platform.yaml File Format for the exact version numbers) and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings.
Reference: AWS Elastic Beanstalk Custom Platforms

Top

Q53: Company B is writing 10 items to the DynamoDB table every second. Each item is 15.5 KB in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.

  • A. 10
  • B. 160
  • C. 155
  • D. 16


Answer – B.
Write capacity is expressed in 1 KB write units per second, and each item’s size is rounded up to the next 1 KB. A 15.5 KB item therefore consumes 16 write units, and 10 items per second × 16 units = 160 write capacity units.
Reference: Read/Write Capacity Mode

Top

Q54: Which AWS Service can be used to automatically install your application code onto EC2, on premises systems and Lambda?

  • A. CodeCommit
  • B. X-Ray
  • C. CodeBuild
  • D. CodeDeploy


Answer: D

Reference: AWS CodeDeploy


Top

Q55: Which AWS service can be used to compile source code, run tests and package code?

  • A. CodePipeline
  • B. CodeCommit
  • C. CodeBuild
  • D. CodeDeploy


Answer: C.

Reference: AWS CodeBuild


Top

Q56: How can you prevent CloudFormation from deleting your entire stack on failure? (Choose 2)

  • A. Set the Rollback on failure radio button to No in the CloudFormation console
  • B. Set Termination Protection to Enabled in the CloudFormation console
  • C. Use the –disable-rollback flag with the AWS CLI
  • D. Use the –enable-termination-protection protection flag with the AWS CLI

Answer: A. and C.

Reference: Protecting a Stack From Being Deleted
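For illustration, a sketch of the CLI form (the stack and template names are hypothetical):

aws cloudformation create-stack --stack-name my-stack --template-body file://template.yaml --disable-rollback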

Top

Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?

  • A. Continuous Integration
  • B. Continuous Deployment
  • C. Continuous Delivery
  • D. Continuous Development

Answer: A.
Continuous Integration is the practice of frequently merging code changes into a shared repository, with automated builds and tests that surface bugs early in the release process.

Top

Q58: When deploying application code to EC2, the AppSpec file can be written in which language?

  • A. JSON
  • B. JSON or YAML
  • C. XML
  • D. YAML

Answer: D.
For EC2/On-Premises deployments the AppSpec file must be written in YAML; only Lambda deployments also allow JSON.

Top

Q59: Part of your CloudFormation deployment fails due to a misconfiguration. By default, what will happen?

  • A. CloudFormation will rollback only the failed components
  • B. CloudFormation will rollback the entire stack
  • C. Failed component will remain available for debugging purposes
  • D. CloudFormation will ask you if you want to continue with the deployment


Answer: B.
By default, CloudFormation rolls back the entire stack if any part of the deployment fails.

Top

Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?

  • A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
  • B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
  • C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
  • D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.

Answer: C

Reference: Getting Started with Amazon SNS


Top

Q61: Which AWS service can be used to centrally store and version control your application source code, binaries and libraries

  • A. CodeCommit
  • B. CodeBuild
  • C. CodePipeline
  • D. ElasticFileSystem

Answer: A

Reference: AWS CodeCommit


Top

Q62: You are using CloudFormation to create a new S3 bucket, which of the following sections would you use to define the properties of your bucket?

  • A. Conditions
  • B. Parameters
  • C. Outputs
  • D. Resources

Answer: D

Reference: Resources
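For illustration, a minimal template sketch defining the bucket’s properties under Resources (the logical and bucket names are hypothetical):

Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-bucket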


Top

Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?

  • A. Transforms
  • B. Outputs
  • C. Resources
  • D. Instances

Answer: C.
The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3.
Reference: Resources

Top

Q64: Which AWS service can be used to fully automate your entire release process?

  • A. CodeDeploy
  • B. CodePipeline
  • C. CodeCommit
  • D. CodeBuild

Answer: B.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.

Reference: AWS CodePipeline


Top

Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?

  • A. Outputs
  • B. Transforms
  • C. Resources
  • D. Exports

Answer: A.
Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack.
Reference: CloudFormation Outputs
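For illustration, a minimal sketch of an Outputs section that exports a value (the names are hypothetical); another stack can then consume it with !ImportValue SharedBucketName:

Outputs:
  BucketName:
    Value: !Ref MyBucket
    Export:
      Name: SharedBucketName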

Top

Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?

  • A. Inputs
  • B. Resources
  • C. Transforms
  • D. Files

Answer: C.
Transforms is used to reference code located in S3 and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments.
Reference: Transforms

Top

Q67: You are deploying an application to a number of Ec2 instances using CodeDeploy. What is the name of the file
used to specify source files and lifecycle hooks?

  • A. buildspec.yml
  • B. appspec.json
  • C. appspec.yml
  • D. buildspec.json

Answer: C.
CodeDeploy uses appspec.yml to specify source files and lifecycle hooks; buildspec.yml belongs to CodeBuild.

Top

Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?

  • A. Share the code using an EBS volume
  • B. Copy and paste the code into the template each time you need to use it
  • C. Use a cloudformation nested stack
  • D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.

Answer: C.

Reference: Working with Nested Stacks

Top

Q69: In the CodeDeploy AppSpec file, what are hooks used for?

  • A. To reference AWS resources that will be used during the deployment
  • B. Hooks are reserved for future use
  • C. To specify files you want to copy during the deployment.
  • D. To specify, scripts or function that you want to run at set points in the deployment lifecycle

Answer: D.
The ‘hooks’ section for an EC2/On-Premises deployment contains mappings that link deployment lifecycle event hooks to one or more scripts.

Reference: AppSpec ‘hooks’ Section
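For illustration, a minimal appspec.yml sketch for an EC2/On-Premises deployment (the paths and script name are hypothetical); the hooks section maps the AfterInstall lifecycle event to a script:

version: 0.0
os: linux
files:
  - source: /src
    destination: /var/www/html
hooks:
  AfterInstall:
    - location: scripts/configure_server.sh
      timeout: 300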

Top

Q70: Which command can you use to encrypt a plain text file using CMK?

  • A. aws kms-encrypt
  • B. aws iam encrypt
  • C. aws kms encrypt
  • D. aws encrypt

Answer: C.
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext fileb://ExamplePlaintextFile --output text --query CiphertextBlob > C:\Temp\ExampleEncryptedFile.base64

Reference: AWS CLI Encrypt

Top

Q72: Which of the following is an encrypted key used by KMS to encrypt your data

  • A. Customer Managed Key
  • B. Encryption Key
  • C. Envelope Key
  • D. Customer Master Key

Answer: C.
Your Data key, also known as the Envelope key, is encrypted using the master key. This approach is known as envelope encryption.
Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.

Reference: Envelope Encryption

Top

Q73: Which of the following statements are correct? (Choose 2)

  • A. The Customer Master Key is used to encrypt and decrypt the Envelope Key or Data Key
  • B. The Envelope Key or Data Key is used to encrypt and decrypt plain text files.
  • C. The envelope Key or Data Key is used to encrypt and decrypt the Customer Master Key.
  • D. The Customer MasterKey is used to encrypt and decrypt plain text files.

Answer: A. and B.

Reference: AWS Key Management Service Concepts

Top

Q74: Which of the following statements are correct in relation to KMS? (Choose 2)

  • A. KMS Encryption keys are regional
  • B. You cannot export your customer master key
  • C. You can export your customer master key.
  • D. KMS encryption Keys are global

Answer: A. and B.

Reference: AWS Key Management Service FAQs

Q75:  A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.)
A. Compiled application code
B. Java runtime environment
C. References to the event sources
D. Lambda execution role
E. Application dependencies


Answer: A. E.
Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your code and any dependencies.
Reference: Lambda deployment packages.

Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package?
A. A launch template for the Amazon EC2 Auto Scaling group
B. A CodeDeploy AppSpec file
C. An EC2 role that grants the application access to AWS services
D. An IAM policy that grants the application access to AWS services


Answer: B.
Notes: The CodeDeploy AppSpec (application specification) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file.
Reference: CodeDeploy application specification (AppSpec) files.
Category: Deployment

Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)

A. Create a new Lambda version every time a new code release needs testing.
B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version.
C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT.
D. Create a new Lambda layer every time a new code release needs testing.
E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.


Answer: A. B.
Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes, without affecting the stable production version of the code. By creating separate aliases for Production and Development, systems can initiate the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. The $LATEST version is the newest published version.
Reference: Lambda function versions.

For more information about Lambda layers, see Creating and sharing Lambda layers.

For more information about Lambda function aliases, see Lambda function aliases.

Category: Deployment
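
A hedged boto3 sketch of the version-and-alias workflow described above; the function name is an assumption:

import boto3

lam = boto3.client("lambda")

# Publish the current code ($LATEST) as an immutable, numbered version.
version = lam.publish_version(FunctionName="my-function")["Version"]

# Point the Production alias at the new version; Development tracks $LATEST.
# (Use create_alias the first time; update_alias thereafter.)
lam.update_alias(FunctionName="my-function", Name="Production",
                 FunctionVersion=version)
lam.update_alias(FunctionName="my-function", Name="Development",
                 FunctionVersion="$LATEST")

Because event source mappings can reference the alias ARN instead of a version ARN, promoting a new version is just another update_alias call.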

Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.)
A. Update event source mappings with the ARN of the Lambda layer.
B. Point a Lambda alias to a new version of the Lambda function.
C. Create a Lambda alias for each published version of the Lambda function.
D. Point a Lambda alias to a new Lambda function alias.
E. Update the event source mappings with the Lambda alias ARN.


Answer: B. E.
Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version.
Reference: Lambda function aliases.
Category: Deployment

Q78:  A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements?
A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C).
B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket.
C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket.
D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.


Answer: D.
Notes: When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and removes the encryption key from memory.
Reference: Protecting data using server-side encryption with customer-provided encryption keys (SSE-C).

Category: Security
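
A short boto3 sketch of an SSE-C upload; the bucket and object key names are hypothetical, and in practice the customer-provided key must be stored and managed outside AWS:

import os
import boto3

s3 = boto3.client("s3")
customer_key = os.urandom(32)  # 256-bit key the company manages itself

s3.put_object(
    Bucket="my-sensitive-bucket",
    Key="users/profile.json",
    Body=b"sensitive user data",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,   # boto3 base64-encodes and hashes it for you
)

# The same key must be presented again to read the object back.
obj = s3.get_object(
    Bucket="my-sensitive-bucket",
    Key="users/profile.json",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)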

Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)

A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS).
B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS).
C. Use generated keys with the DynamoDB Encryption Client.
D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs).
E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).


Answer: A. C.
Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK.
Reference: Direct KMS Materials Provider.
Category: Deployment

Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.)
A. Create an AWS Lambda authorizer for the API.
B. Create an Amazon Cognito authorizer for the API.
C. Configure the authorizer for the API resource.
D. Configure the API methods to use the authorizer.
E. Configure the authorizer for the API stage.


Answer: B. D.
Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API.
Reference: Control access to a REST API using Amazon Cognito user pools as authorizer.
Category: Security
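
A boto3 sketch of the two steps; the API, resource, and user pool identifiers below are placeholders:

import boto3

apigw = boto3.client("apigateway")

# Step 1: create an Amazon Cognito authorizer for the REST API.
authorizer = apigw.create_authorizer(
    restApiId="a1b2c3d4e5",
    name="CognitoUserPoolAuth",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"],
    identitySource="method.request.header.Authorization",
)

# Step 2: configure an API method to use that authorizer.
apigw.put_method(
    restApiId="a1b2c3d4e5",
    resourceId="abc123",
    httpMethod="GET",
    authorizationType="COGNITO_USER_POOLS",
    authorizerId=authorizer["id"],
)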

Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.)
A. Authenticate to the Amazon Cognito identity pool directly.
B. Authenticate to AWS Identity and Access Management (IAM) directly.
C. Authenticate to the Amazon Cognito user pool directly.
D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS).
E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.


Answer: C. E.
Notes: The Amazon Cognito user pool provides direct user authentication. It also provides a federated authentication option with third-party identity providers (IdPs), including amazon.com.
Reference: Adding User Pool Sign-in Through a Third Party.
Category: Security

 
 

Q82: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.)
A. Define an AWS Step Functions task for each Lambda function.
B. Define an AWS Step Functions task for each workflow.
C. Write code that polls the AWS Step Functions invocation to coordinate each workflow.
D. Define an AWS Step Functions state machine for each workflow.
E. Define an AWS Step Functions state machine for each Lambda function.


Answer: A. D.
Notes: Step Functions is based on state machines and tasks. A state machine is a workflow: it expresses the workflow as a number of states, their relationships, and their input and output. Tasks perform work by coordinating with other AWS services, such as Lambda. You can coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine, written in the Amazon States Language.
Reference: Getting Started with AWS Step Functions.

Category: Development
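
As a sketch, one state machine per workflow might be created like this; the state names, Lambda ARNs, and role ARN are hypothetical:

import boto3

sfn = boto3.client("stepfunctions")

# Each Task state points at the Lambda function that implements that step.
definition = """{
  "StartAt": "ValidateOrder",
  "States": {
    "ValidateOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ValidateOrder",
      "Next": "ChargePayment"
    },
    "ChargePayment": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ChargePayment",
      "End": true
    }
  }
}"""

sfn.create_state_machine(
    name="OrderProcessingWorkflow",
    definition=definition,
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)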

Q83: A company is migrating a web service to the AWS Cloud. The web service accepts requests by using HTTP (port 80). The company wants to use an AWS Lambda function to process HTTP requests. Which application design will satisfy these requirements?
A. Create an Amazon API Gateway API. Configure proxy integration with the Lambda function.
B. Create an Amazon API Gateway API. Configure non-proxy integration with the Lambda function.
C. Configure the Lambda function to listen to inbound network connections on port 80.
D. Configure the Lambda function as a target in the Application Load Balancer target group.


Answer: D.
Notes: Elastic Load Balancing supports Lambda functions as a target for an Application Load Balancer. You can use load balancer rules to route HTTP requests to a function, based on the path or the header values. Then, process the request and return an HTTP response from your Lambda function.
Reference: Using AWS Lambda with an Application Load Balancer.
Category: Development
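
A minimal Python handler sketch that returns an HTTP response in the shape an Application Load Balancer expects:

def lambda_handler(event, context):
    # The ALB passes the HTTP request details (path, headers, body) in event.
    path = event.get("path", "/")
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "text/plain"},
        "body": "Hello from Lambda behind an ALB, you requested " + path,
    }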

Q84: A company is developing an image processing application. When an image is uploaded to an Amazon S3 bucket, a number of independent and separate services must be invoked to process the image. The services do not have to be available immediately, but they must process every image. Which application design satisfies these requirements?
A. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Each service pulls the message from the same queue.
B. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Each service subscribes to the same topic.
C. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe a separate Amazon Simple Notification Service (Amazon SNS) topic for each service to an Amazon SQS queue.
D. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe a separate Simple Queue Service (Amazon SQS) queue for each service to the Amazon SNS topic.


Answer: D.
Notes: Each service can subscribe to an individual Amazon SQS queue, which receives an event notification from the Amazon SNS topic. This is a fanout architectural implementation.
Reference: Common Amazon SNS scenarios.
Category: Development
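
A boto3 sketch of the fanout wiring; the topic and queue names are hypothetical, and the SQS access policy that allows the SNS topic to deliver messages is omitted for brevity:

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="image-uploaded")["TopicArn"]

# One queue per downstream service; each service gets its own copy of every event.
for service in ("thumbnailer", "metadata-extractor", "moderation"):
    queue_url = sqs.create_queue(QueueName=service + "-queue")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=["QueueArn"])["Attributes"]["QueueArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)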

Q85: A developer wants to implement Amazon EC2 Auto Scaling for a Multi-AZ web application. However, the developer is concerned that user sessions will be lost during scale-in events. How can the developer store the session state and share it across the EC2 instances?
A. Write the sessions to an Amazon Kinesis data stream. Configure the application to poll the stream.
B. Publish the sessions to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe each instance in the group to the topic.
C. Store the sessions in an Amazon ElastiCache for Memcached cluster. Configure the application to use the Memcached API.
D. Write the sessions to an Amazon Elastic Block Store (Amazon EBS) volume. Mount the volume to each instance in the group.


Answer: C.
Notes: ElastiCache for Memcached is a distributed in-memory data store or cache environment in the cloud. It will meet the developer’s requirement of persistent storage and is fast to access.
Reference: What is Amazon ElastiCache for Memcached?

Category: Development
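
A sketch using the pymemcache library; the ElastiCache configuration endpoint below is a placeholder:

from pymemcache.client.base import Client

# Hypothetical ElastiCache for Memcached configuration endpoint.
cache = Client(("my-sessions.abc123.cfg.use1.cache.amazonaws.com", 11211))

def save_session(session_id, data):
    # Any instance in the Auto Scaling group can write the session...
    cache.set(session_id, data, expire=3600)

def load_session(session_id):
    # ...and any other instance can read it back after a scale-in event.
    return cache.get(session_id)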

 
 
 

Q86: A developer is integrating a legacy web application that runs on a fleet of Amazon EC2 instances with an Amazon DynamoDB table. There is no AWS SDK for the programming language that was used to implement the web application. Which combination of steps should the developer perform to make an API call to Amazon DynamoDB from the instances? (Select TWO.)
A. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include an XML document that contains the request attributes.
B. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include a JSON document that contains the request attributes.
C. Sign the requests by using AWS access keys and Signature Version 4.
D. Use an EC2 SSH key to calculate Signature Version 4 of the request.
E. Provide the signature value through the HTTP X-API-Key header.


Answer: B. C.
Notes: The HTTPS-based low-level AWS API for DynamoDB uses JSON as a wire protocol format. When you send HTTP requests to AWS, you sign the requests so that AWS can identify who sent them. Requests are signed with your AWS access key, which consists of an access key ID and secret access key. AWS supports two signature versions: Signature Version 4 and Signature Version 2. AWS recommends the use of Signature Version 4.
Reference: Signing AWS API requests.
Category: Development
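
A sketch of a signed low-level GetItem call; botocore's SigV4 implementation stands in here for the hand-rolled signing the legacy language would need, and the table and key names are hypothetical:

import json
import urllib.request

import boto3
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

region = "us-east-1"
endpoint = "https://dynamodb." + region + ".amazonaws.com/"
body = json.dumps({"TableName": "Orders", "Key": {"OrderId": {"S": "1234"}}})

# Build the raw HTTPS request with the JSON payload and target header.
request = AWSRequest(method="POST", url=endpoint, data=body, headers={
    "Content-Type": "application/x-amz-json-1.0",
    "X-Amz-Target": "DynamoDB_20120810.GetItem",
})

# Sign it with the AWS access keys using Signature Version 4.
credentials = boto3.Session().get_credentials()
SigV4Auth(credentials, "dynamodb", region).add_auth(request)

response = urllib.request.urlopen(urllib.request.Request(
    endpoint, data=body.encode("utf-8"), headers=dict(request.headers)))
print(response.read().decode("utf-8"))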

Q87: A developer has written several custom applications that read and write to the same Amazon DynamoDB table. Each time the data in the DynamoDB table is modified, this change should be sent to an external API. Which combination of steps should the developer perform to accomplish this task? (Select TWO.)
A. Configure an AWS Lambda function to poll the stream and call the external API.
B. Configure an event in Amazon EventBridge (Amazon CloudWatch Events) that publishes the change to an Amazon Managed Streaming for Apache Kafka (Amazon MSK) data stream.
C. Create a trigger in the DynamoDB table to publish the change to an Amazon Kinesis data stream.
D. Deliver the stream to an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the API to the topic.
E. Enable DynamoDB Streams on the table.


Answer: A. E.
Notes: If you enable DynamoDB Streams on a table, you can associate the stream Amazon Resource Name (ARN) with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table’s stream. Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. You can enable DynamoDB Streams on a table to create an event that invokes an AWS Lambda function.
Reference: Tutorial: Process New Items with DynamoDB Streams and Lambda.
Category: Monitoring
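
A minimal handler sketch for the stream-to-external-API pattern; the API URL is a placeholder:

import json
import urllib.request

EXTERNAL_API = "https://api.example.com/changes"  # hypothetical endpoint

def lambda_handler(event, context):
    # Lambda delivers a batch of DynamoDB Streams records in event["Records"].
    for record in event["Records"]:
        change = {
            "eventName": record["eventName"],      # INSERT, MODIFY, or REMOVE
            "keys": record["dynamodb"].get("Keys"),
            "newImage": record["dynamodb"].get("NewImage"),
        }
        req = urllib.request.Request(
            EXTERNAL_API,
            data=json.dumps(change).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)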

 
 
 

Q88: A company is migrating the create, read, update, and delete (CRUD) functionality of an existing Java web application to AWS Lambda. Which minimal code refactoring is necessary for the CRUD operations to run in the Lambda function?
A. Implement a Lambda handler function.
B. Import an AWS X-Ray package.
C. Rewrite the application code in Python.
D. Add a reference to the Lambda execution role.


Answer: A.
Notes: Every Lambda function needs a Lambda-specific handler. Specifics of authoring vary between runtimes, but all runtimes share a common programming model that defines the interface between your code and the runtime code. You tell the runtime which method to run by defining a handler in the function configuration. The runtime runs that method. Next, the runtime passes in objects to the handler that contain the invocation event and context, such as the function name and request ID.
Reference: Getting started with Lambda.
Category: Refactoring

Top

Q89: A company plans to use AWS log monitoring services to monitor an application that runs on premises. Currently, the application runs on a recent version of Ubuntu Server and outputs the logs to a local file. Which combination of steps should a developer perform to accomplish this goal? (Select TWO.)
A. Update the application code to include calls to the agent API for log collection.
B. Install the Amazon Elastic Container Service (Amazon ECS) container agent on the server.
C. Install the unified Amazon CloudWatch agent on the server.
D. Configure the long-term AWS credentials on the server to enable log collection by the agent.
E. Attach an IAM role to the server to enable log collection by the agent.


Answer: C. D.
Notes: The unified CloudWatch agent needs to be installed on the server. Ubuntu Server 18.04 is one of the many supported operating systems. When you install the unified CloudWatch agent on an on-premises server, you will specify a named profile that contains the credentials of the IAM user.
Reference: Collecting metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent.
Category: Monitoring

Q90: A developer wants to monitor invocations of an AWS Lambda function by using Amazon CloudWatch Logs. The developer added a number of print statements to the function code that write the logging information to the stdout stream. After running the function, the developer does not see any log data being generated. Why does the log data NOT appear in the CloudWatch logs?
A. The log data is not written to the stderr stream.
B. Lambda function logging is not automatically enabled.
C. The execution role for the Lambda function did not grant permissions to write log data to CloudWatch Logs.
D. The Lambda function outputs the logs to an Amazon S3 bucket.


Answer: C.
Notes: The function needs permission to call CloudWatch Logs. Update the execution role to grant the permission. You can use the managed policy of AWSLambdaBasicExecutionRole.
Reference: Troubleshoot execution issues in Lambda.
Category: Monitoring
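
A one-call boto3 sketch of the fix; the role name is a placeholder:

import boto3

iam = boto3.client("iam")

# Grant the function's execution role permission to write to CloudWatch Logs.
iam.attach_role_policy(
    RoleName="my-lambda-execution-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)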

Q91: Which of the following are best practices you should implement into ongoing deployments of your application? (Select THREE.)

A. Use stage variables to manage secrets across environments
B. Create account-specific AWS SAM templates for each environment
C. Use an AutoPublish alias
D. Use traffic shifting with pre- and post-deployment hooks
E. Test throughout the pipeline


Answer: C. D. E.
Notes: Use an AutoPublish alias, Use traffic shifting with pre- and post-deployment hooks, Test throughout the pipeline
Reference: https://enoumen.com/2019/06/23/aws-solution-architect-associate-exam-prep-facts-and-summaries-questions-and-answers-dump/

Q92: You are handing off maintenance of your new serverless application to an incoming team lead. Which recommendations would you make? (Select THREE.)

A. Keep up to date with the quotas and payload sizes for each AWS service you are using

B. Analyze production access patterns to identify potential improvements

C. Design your services to extend their life as long as possible

D. Minimize changes to your production application

E. Compare the value of using the latest first-class integrations versus using Lambda between AWS services


Answer: A. B. D.

Notes: Keep up to date with the quotas and payload sizes for each AWS service you are using, analyze production access patterns to identify potential improvements, and minimize changes to your production application.


Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context.
The percentage in each domain represents only scored content.

Domain 1: Deployment 22%
Domain 2: Security 26%
Domain 3: Development with AWS Services 30%
Domain 4: Refactoring 10%
Domain 5: Monitoring and Troubleshooting 12%

Domain 1: Deployment
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
–  Commit code to a repository and invoke build, test and/or deployment actions
–  Use labels and branches for version and release management
–  Use AWS CodePipeline to orchestrate workflows against different environments
–  Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS
CodeDeploy for CI/CD purposes
–  Perform a roll back plan based on application deployment policy

1.2 Deploy applications using AWS Elastic Beanstalk.
–  Utilize existing supported environments to define a new application stack
–  Package the application
–  Introduce a new application version into the Elastic Beanstalk environment
–  Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable)
–  Validate application health using Elastic Beanstalk dashboard
–  Use Amazon CloudWatch Logs to instrument application logging

1.3 Prepare the application deployment package to be deployed to AWS.
–  Manage the dependencies of the code module (like environment variables, config files and static image files) within the package
–  Outline the package/container directory structure and organize files appropriately
–  Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)

1.4 Deploy serverless applications.
–  Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template
–  Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)

Domain 2: Security
2.1 Make authenticated calls to AWS services.
–  Communicate required policy based on least privileges required by application.
–  Assume an IAM role to access a service
–  Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)

2.2 Implement encryption using AWS services.
– Encrypt data at rest (client side; server side; envelope encryption) using AWS services
–  Encrypt data in transit

2.3 Implement application authentication and authorization.
– Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools
–  Use Amazon Cognito-provided credentials to write code that accesses AWS services.
–  Use Amazon Cognito sync to synchronize user profiles and data
–  Use developer-authenticated identities to interact between end user devices, backend
authentication, and Amazon Cognito

Domain 3: Development with AWS Services
3.1 Write code for serverless applications.
– Compare and contrast server-based vs. serverless model (e.g., micro services, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications)
– Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler)
– Create an API endpoint using Amazon API Gateway
–  Create and test appropriate API actions like GET, POST using the API endpoint
–  Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes)
–  Compute read/write capacity units for Amazon DynamoDB based on application requirements
–  Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis)
–  Invoke an AWS Lambda function synchronously and asynchronously

3.2 Translate functional requirements into application design.
– Determine real-time vs. batch processing for a given use case
– Determine use of synchronous vs. asynchronous for a given use case
– Determine use of event vs. schedule/poll for a given use case
– Account for tradeoffs for consistency models in an application design

Domain 4: Refactoring
4.1 Optimize applications to best use AWS services and features.
–  Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache)
–  Apply an Amazon S3 naming scheme for optimal read performance

4.2 Migrate existing application code to run on AWS.
– Isolate dependencies
– Run the application as one or more stateless processes
– Develop in order to enable horizontal scalability
– Externalize state

Domain 5: Monitoring and Troubleshooting

5.1 Write code that can be monitored.
– Create custom Amazon CloudWatch metrics
– Perform logging in a manner available to systems operators
– Instrument application source code to enable tracing in AWS X-Ray

5.2 Perform root cause analysis on faults found in testing or production.
– Interpret the outputs from the logging mechanism in AWS to identify errors in logs
– Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues
– Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component

Which key tools, technologies, and concepts might be covered on the exam?

The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
– Analytics
– Application Integration
– Containers
– Cost and Capacity Management
– Data Movement
– Developer Tools
– Instances (virtual machines)
– Management and Governance
– Networking and Content Delivery
– Security
– Serverless

AWS services and features

Analytics:
– Amazon Elasticsearch Service (Amazon ES)
– Amazon Kinesis
Application Integration:
– Amazon EventBridge (Amazon CloudWatch Events)
– Amazon Simple Notification Service (Amazon SNS)
– Amazon Simple Queue Service (Amazon SQS)
– AWS Step Functions

Compute:
– Amazon EC2
– AWS Elastic Beanstalk
– AWS Lambda

Containers:
– Amazon Elastic Container Registry (Amazon ECR)
– Amazon Elastic Container Service (Amazon ECS)
– Amazon Elastic Kubernetes Service (Amazon EKS)

Database:
– Amazon DynamoDB
– Amazon ElastiCache
– Amazon RDS

Developer Tools:
– AWS CodeArtifact
– AWS CodeBuild
– AWS CodeCommit
– AWS CodeDeploy
– Amazon CodeGuru
– AWS CodePipeline
– AWS CodeStar
– AWS Fault Injection Simulator
– AWS X-Ray

Management and Governance:
– AWS CloudFormation
– Amazon CloudWatch

Networking and Content Delivery:
– Amazon API Gateway
– Amazon CloudFront
– Elastic Load Balancing

Security, Identity, and Compliance:
– Amazon Cognito
– AWS Identity and Access Management (IAM)
– AWS Key Management Service (AWS KMS)

Storage:
– Amazon S3

Out-of-scope AWS services and features

The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.
Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant.
Out-of-scope AWS services and features include the following:
– AWS Application Discovery Service
– Amazon AppStream 2.0
– Amazon Chime
– Amazon Connect
– AWS Database Migration Service (AWS DMS)
– AWS Device Farm
– Amazon Elastic Transcoder
– Amazon GameLift
– Amazon Lex
– Amazon Machine Learning (Amazon ML)
– AWS Managed Services
– Amazon Mobile Analytics
– Amazon Polly

– Amazon QuickSight
– Amazon Rekognition
– AWS Server Migration Service (AWS SMS)
– AWS Service Catalog
– AWS Shield Advanced
– AWS Shield Standard
– AWS Snow Family
– AWS Storage Gateway
– AWS WAF
– Amazon WorkMail
– Amazon WorkSpaces

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

Top

AWS Certified Developer – Associate Practice Questions And Answers Dump

Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your
partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost.
How can you accommodate the partners’ broken web services without wasting your resources?

  • A. Create a delay queue and set DelaySeconds to 30 seconds
  • B. Requeue the message with a VisibilityTimeout of 30 seconds.
  • C. Create a dead letter queue and set the Maximum Receives to 3.
  • D. Requeue the message with a DelaySeconds of 30 seconds.


C. After a message is taken from the queue and returned for the maximum number of retries, it is
automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.

Reference: Amazon SQS Dead-Letter Queues
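
A boto3 sketch of attaching a dead-letter queue with Maximum Receives set to 3; the queue URL and DLQ ARN are placeholders:

import json
import boto3

sqs = boto3.client("sqs")

# After 3 failed receives, SQS moves the message to the dead-letter queue
# instead of returning it to the source queue again.
redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:commands-dlq",
    "maxReceiveCount": "3",
}
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/commands",
    Attributes={"RedrivePolicy": json.dumps(redrive_policy)},
)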


Top

Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently.
What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. The AWS Documentation mentions the following:

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

Q2: You are creating a DynamoDB table with the following attributes:

  • PurchaseOrderNumber (partition key)
  • CustomerID
  • PurchaseDate
  • TotalPurchaseValue

One of your applications must retrieve items from the table to calculate the total value of purchases for a
particular customer over a date range. What secondary index do you need to add to the table?

  • A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute
  • C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute


C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition
key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the
TotalPurchaseValue into the index provides all the data needed to satisfy the use case.

Reference: AWS DynamoDB Global Secondary Indexes

Difference between local and global indexes in DynamoDB

    • Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
    • Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
    • Local Secondary Indexes still rely on the original Hash Key. When you supply a table with hash+range, think about the LSI as hash+range1, hash+range2.. hash+range6. You get 5 more range attributes to query on. Also, there is only one provisioned throughput.
    • Global Secondary Indexes define a new paradigm – different hash/range keys per index.
      This breaks the original usage of one hash key per table. This is also why, when defining a GSI, you are required to add a provisioned throughput per index and pay for it.
    • Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
    • Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.

Throughput :

  • Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table another for the index. Both operations will consume write capacity units from the table.
  • Global Secondary Indexes have their own provisioned throughput, when you query the index the operation will consume read capacity from the index, when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table another for the index*.
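
A boto3 sketch of the Q2 use case, querying a GSI; the table and index names are assumptions:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("PurchaseOrders")

# Query the GSI by customer, restricted to a date range; TotalPurchaseValue
# is readable directly because it was projected into the index.
resp = table.query(
    IndexName="CustomerID-PurchaseDate-index",
    KeyConditionExpression=(
        Key("CustomerID").eq("C-1001")
        & Key("PurchaseDate").between("2022-01-01", "2022-03-31")
    ),
)
total = sum(item["TotalPurchaseValue"] for item in resp["Items"])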


Top


Q3: When referencing the remaining time left for a Lambda function to run within the function’s code you would use:

  • A. The event object
  • B. The timeLeft object
  • C. The remains object
  • D. The context object


D. The context object.

Reference: AWS Lambda
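
For example, in Python the remaining time is read from the context object:

def lambda_handler(event, context):
    # Milliseconds left before the function reaches its configured timeout.
    remaining = context.get_remaining_time_in_millis()
    print("Time remaining (ms):", remaining)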


Top

Q4: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context
D. event, context

def handler_name(event, context):
    return some_value

Reference: AWS Lambda Function Handler in Python

Top

Q5: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda Deployment Package

Top

Q6: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime

C. Function code and libraries not included within the runtime

Reference: AWS Lambda Deployment Package in PowerShell

Top

Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?

  • A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
  • B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
  • C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
  • D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.


D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return requests. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return requests.

Reference: Comparison of Security Groups and Network ACLs



Top

Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.

Reference: AWS Network Address Translation Gateway
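
A boto3 sketch of provisioning a NAT gateway; the subnet and route table IDs are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a public subnet.
allocation = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0abc1234567890def",        # public subnet
    AllocationId=allocation["AllocationId"],
)

# Send the private subnets' internet-bound traffic through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",       # private subnets' route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)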


Top

Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.

Reference: AWS Autoscalling


Top


Q11: You’re writing a script with an AWS SDK that uses AWS API actions to create AMIs for non-EBS-backed instances. Which API call occurs in the final step of creating an AMI?
  • A. RegisterImage
  • B. CreateImage
  • C. ami-register-image
  • D. ami-create-image

A. The final call is RegisterImage. AWS API actions follow this PascalCase capitalization and do not contain hyphens.

Reference: API RegisterImage

Top

Q12: When dealing with session state in EC2-based applications using Elastic Load Balancers, which option is generally thought of as the best practice for managing user sessions?

  • A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
  • B. Permanently assigning users to specific instances and always routing their traffic to those instances
  • C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
  • D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance

A. The best practice is to keep the web tier stateless and store session state in a distributed cache such as ElastiCache running Redis or Memcached, so that any instance behind the ELB can serve any user’s request.

Top

Q13: Which API call would best be used to describe an Amazon Machine Image?

  • A. ami-describe-image
  • B. ami-describe-images
  • C. DescribeImage
  • D. DescribeImages

D. In general, API actions stick to the PascalCase style with the first letter of every word capitalized.

Reference: API DescribeImages

Top

Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

  • A. Autoscaling requires using Amazon EBS-backed instances
  • B. Virtual Private Cloud requires EBS backed instances
  • C. Amazon EBS-backed instances can be stopped and restarted without losing data
  • D. Instance-store backed instances can be stopped and restarted without losing data


C. Instance-store backed images use “ephemeral” (temporary) storage, which is only available during the life of an instance. Rebooting an instance allows ephemeral data to persist; however, stopping and starting an instance will remove all ephemeral storage.

Reference: What is the difference between EBS and Instance Store?

Top

Q15: After having created a new Linux instance on Amazon EC2 and downloaded the .pem file (called Toto.pem), you try to SSH into your instance’s IP address (54.1.132.33) using the following command.
ssh -i Toto.pem ec2-user@54.1.132.33
However, you receive the following error.
@@@@@@@@ WARNING: UNPROTECTED PRIVATE KEY FILE! @@@@@@@@
What is the most probable reason for this and how can you fix it?

  • A. You do not have root access on your terminal and need to use the sudo option for this to work.
  • B. You do not have enough permissions to perform the operation.
  • C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
  • D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.

D. You need to run something like: chmod 400 Toto.pem

Reference:

Top

Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?

  • A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
  • B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
  • C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
  • D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.

D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.

Reference: AWS Root Device Storage

Top

Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:

  • A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
  • B. Can only be used to launch EC2 instances in the same country as the AMI is stored
  • C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
  • D. Can be used to launch EC2 instances in any AWS region

C. AMIs are only available in the region they are created. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another.

Reference: https://aws.amazon.com/amazon-linux-ami/

Top

Q18: Which of the following statements is true about the Elastic File System (EFS)?

  • A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
  • B. EFS can be used by multiple EC2 instances simultaneously
  • C. EFS cannot be used by an instance using EBS
  • D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes


A. and B.

Reference: https://aws.amazon.com/efs/

Top

Q19: IAM Policies, at a minimum, contain what elements?

  • A. ID
  • B. Effects
  • C. Resources
  • D. Sid
  • E. Principal
  • F. Actions

B. C. and F.

Effect – Use Allow or Deny to indicate whether the policy allows or denies access.

Resource – Specify a list of resources to which the actions apply.

Action – Include a list of actions that the policy allows or denies.

Id and Sid are not required fields in IAM policies; they are optional.

Reference: AWS IAM Access Policies

Top


Q20: What are the main benefits of IAM groups?

  • A. The ability to create custom permission policies.
  • B. Assigning IAM permission policies to more than one user at a time.
  • C. Easier user/policy management.
  • D. Allowing EC2 instances to gain access to S3.

B. and C.

A. is incorrect: This is a benefit of IAM generally or a benefit of IAM policies. But IAM groups don’t create policies, they have policies attached to them.

Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

 

Top

Q21: What are benefits of using AWS STS?

  • A. Grant access to AWS resources without having to create an IAM identity for them
  • B. Since credentials are temporary, you don’t have to rotate or revoke them
  • C. Temporary security credentials can be extended indefinitely
  • D. Temporary security credentials can be restricted to a specific region

A. and B.

With AWS STS you can grant access to AWS resources without having to create an IAM identity for each user, and because the credentials are temporary and expire automatically, you do not have to rotate or revoke them. Temporary credentials cannot be extended indefinitely.

Top


 

Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?

  • A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1 micro to m4.large.
  • B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
  • C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
  • D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.

B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence.
Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation.
During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:

  • Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options at this level unless overridden.
  • Saved configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
  • Configuration files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle. Configuration files are executed in alphabetical order; for example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
  • Default values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.

If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or directly to the environment, the setting is stored as part of the environment’s configuration. These settings can be removed with the AWS CLI or with the EB CLI.
Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.

Reference: Managing EC2 features – Elastic Beanstalk

Q24: What statements are true about Availability Zones (AZs) and Regions?

  • A. There is only one AZ in each AWS Region
  • B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
  • C. AZs can be moved between AWS Regions based on your needs
  • D. There are (almost always) two or more AZs in each AWS Region

B and D.

Reference: AWS Global Infrastructure

Top

Q25: An AWS Region contains:

  • A. Edge Locations
  • B. Data Centers
  • C. AWS Services
  • D. Availability Zones


B. C. D. Edge locations are distinct sites that do not necessarily fall within an AWS Region.

Reference: AWS Global Infrastructure


Top

Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?

  • A. Eventual Consistent Reads
  • B. Conditional reads for Consistency
  • C. Strongly Consistent Reads
  • D. Not possible


C. The AWS documentation is clear on this point regarding DynamoDB read consistency: only a strongly consistent read is guaranteed to return the most up-to-date value, reflecting all prior successful writes.

Reference: https://aws.amazon.com/dynamodb/faqs/
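
As an illustration, consistency is chosen per request through the ConsistentRead parameter; a minimal boto3 sketch follows (the table and key names are hypothetical).

import boto3

dynamodb = boto3.client("dynamodb")

# ConsistentRead=True asks DynamoDB for a strongly consistent read: the
# response reflects all writes that completed successfully before the read.
response = dynamodb.get_item(
    TableName="CameraMetadata",
    Key={"CameraId": {"S": "cam-001"}},
    ConsistentRead=True,
)
print(response.get("Item"))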


Top

Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers, and you need to ensure that minimal effort is required during the migration process. Which of the following steps would you consider for this requirement?

  • A. Create an OpsWorks stack and deploy the Docker containers
  • B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
  • C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
  • D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.


B. Elastic Beanstalk is the ideal service for quickly provisioning development environments, and it supports environments that host Docker-based containers out of the box.

Reference: Create and Deploy Docker in AWS


Top

Q28: You’ve written an application that uploads objects to an S3 bucket. The size of the objects varies between 200 MB and 500 MB. You’ve seen that the application sometimes takes longer than expected to upload an object, and you want to improve its performance. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket


C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
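
A minimal sketch using boto3’s managed transfer, which performs the initiate, upload-parts, and complete steps on your behalf once a file crosses the multipart threshold (the bucket, key, file name, and thresholds are hypothetical):

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Switch to multipart uploads above 100 MB, sending 50 MB parts with up to
# 10 concurrent threads; the SDK initiates the upload, sends the parts,
# and completes the multipart upload automatically.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=50 * 1024 * 1024,
    max_concurrency=10,
)

s3.upload_file("video-450mb.mp4", "my-upload-bucket", "videos/video-450mb.mp4", Config=config)

Note that max_concurrency gives you the parallelism of option A as a side effect of the multipart API, which is why option C is the answer rather than threading alone.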


Top

Q29: A security system monitors 600 cameras, saving image metadata every minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?

  • A. 6000
  • B. 10
  • C. 3600
  • D. 600

B. Write capacity for a DynamoDB table is specified as the number of 1 KB writes per second. Since each of the 600 cameras writes once per minute, divide 600 by 60 to get the number of 1 KB writes per second, which gives a value of 10.

You can specify the Write capacity in the Capacity tab of the DynamoDB table.

Reference: AWS working with tables
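
The arithmetic, as a quick sanity check in Python:

cameras = 600
writes_per_camera_per_minute = 1
item_size_kb = 1  # a 1 KB write consumes exactly one write capacity unit

writes_per_second = cameras * writes_per_camera_per_minute / 60
print(writes_per_second * item_size_kb)  # 10.0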

Q30: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context


D. event, context

def handler_name(event, context):
    # 'event' carries the invocation payload; 'context' exposes runtime
    # metadata such as the request ID and remaining execution time.
    return some_value
Reference: AWS Lambda Function Handler in Python

Top

Q31: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere
Reference: AWS Lambda Deployment Package

Top

Q32: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime


C. Function code and libraries not included within the runtime
Reference: AWS Lambda Deployment Package in PowerShell

Top

Q33: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.
Reference: AWS Network Address Translation Gateway

Top

Q34: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto Scaling is designed specifically with elasticity in mind. It increases and decreases compute power based on demand, thus creating elasticity in the architecture.
Reference: AWS Auto Scaling

Top


Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?

  • A. Lazy loading
  • B. Write-through
  • C. Error retries
  • D. Exponential backoff

Answer – A
Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. All other options are incorrect.
Reference: Caching Strategies

Top

Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15–60 seconds. How should the Developer efficiently query the queue for new messages?

  • A. Use long polling
  • B. Set a custom visibility timeout
  • C. Use short polling
  • D. Implement exponential backoff


Answer – A. Long polling ensures that the application makes fewer requests for messages over a given period, which is more cost-effective. Since a message only becomes available after 15 to 60 seconds and we don’t know exactly when it will arrive, long polling is the better choice.
Reference: Amazon SQS Long Polling
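
A minimal boto3 sketch of long polling (the queue URL is hypothetical); a WaitTimeSeconds of up to 20 makes the call block until a message arrives or the wait time expires, instead of returning empty responses immediately:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,  # long polling: wait up to 20 seconds for a message
)
for message in response.get("Messages", []):
    print(message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])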

Top

Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With the new Lambda function working as expected, which of the following will shift traffic from the original Lambda function to the new Lambda function in the shortest time frame?

  • A. Canary10Percent5Minutes
  • B. Linear10PercentEvery10Minutes
  • C. Canary10Percent15Minutes
  • D. Linear10PercentEvery1Minute


Answer – A
With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment, and the remaining 90 percent is shifted 5 minutes later.
Reference: Gradual Code Deployment

Top

Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::LayerVersion
  • D. AWS::Serverless::Function


Answer – B
The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket.
Reference: Declaring Serverless Resources

Top

Q35: You are using AWS envelope encryption for encrypting all sensitive data. Which of the following is true with regard to envelope encryption?

  • A. Data is encrypted by an encrypted Data key, which is further encrypted using an encrypted Master key.
  • B. Data is encrypted by a plaintext Data key, which is further encrypted using an encrypted Master key.
  • C. Data is encrypted by an encrypted Data key, which is further encrypted using a plaintext Master key.
  • D. Data is encrypted by a plaintext Data key, which is further encrypted using a plaintext Master key.


Answer – D
With envelope encryption, unencrypted data is encrypted using a plaintext Data key, and that Data key is in turn encrypted using a plaintext Master key. The Master key is stored securely in AWS KMS and is known as a Customer Master Key (CMK).
Reference: AWS Key Management Service Concepts
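
A minimal sketch of the envelope pattern, assuming a hypothetical KMS key alias and using the third-party cryptography package for the local AES step (both are illustrative choices, not mandated by the question). KMS returns the data key in both plaintext and encrypted form; the plaintext copy encrypts the data locally and is then discarded, while the encrypted copy is stored alongside the ciphertext.

import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")

# Ask KMS for a data key generated under the (hypothetical) master key.
key = kms.generate_data_key(KeyId="alias/my-master-key", KeySpec="AES_256")

# Encrypt locally with the plaintext data key, then discard that copy.
nonce = os.urandom(12)
ciphertext = AESGCM(key["Plaintext"]).encrypt(nonce, b"sensitive data", None)
del key["Plaintext"]

# Persist ciphertext, nonce, and the encrypted data key (key["CiphertextBlob"]).
# To decrypt later, call kms.decrypt on the blob to recover the data key.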

Top

 
Q36: You are developing an application that will be comprised of the following architecture –

  1. A set of EC2 instances to process the videos.
  2. These EC2 instances will be spun up by an Auto Scaling group.
  3. SQS Queues to maintain the processing messages.
  4. There will be 2 pricing tiers.

How will you ensure that the premium customers’ videos are given more preference?

  • A. Create 2 Autoscaling Groups, one for normal and one for premium customers
  • B. Create 2 set of Ec2 Instances, one for normal and one for premium customers
  • C. Create 2 SQS queues, one for normal and one for premium customers
  • D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.


Answer – C
The ideal option is to create two SQS queues; the application can then process messages from the high-priority queue first. The other options are not ideal, as they would add extra cost and maintenance without addressing message priority.
Reference: SQS
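
A minimal sketch of how the application side might honor the two queues (the queue URLs are hypothetical): always poll the premium queue first, falling back to the standard queue only when no premium messages are waiting.

import boto3

sqs = boto3.client("sqs")
PREMIUM_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/videos-premium"
STANDARD_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/videos-standard"

def next_batch():
    # Drain the premium queue first; only poll the standard queue when
    # the premium queue returns no messages.
    for url in (PREMIUM_URL, STANDARD_URL):
        response = sqs.receive_message(QueueUrl=url, MaxNumberOfMessages=10)
        messages = response.get("Messages", [])
        if messages:
            return url, messages
    return None, []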

Top

Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?

  • A. CustomerID
  • B. CustomerName
  • C. Location
  • D. Age


Answer – A
Use high-cardinality attributes: attributes that have distinct values for each item, such as email ID, employee number, customer ID, session ID, order ID, and so on.
Use composite attributes: try combining more than one attribute to form a unique key.
Reference: Choosing the right DynamoDB Partition Key

Top

Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for network delays. Which of the following two mechanisms should he implement in the application?

  • A. Multiple SQS queues
  • B. Exponential backoff algorithm
  • C. Retries in your application code
  • D. Consider using the Java SDK.


Answer – B. and C.
In addition to simple retries, each AWS SDK implements exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values, and should be set based on the operation being performed, as well as other local factors, such as network latency.
Reference: Error Retries and Exponential Backoff in AWS
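
A minimal, library-agnostic sketch of retries with exponential backoff and jitter (the function name and default values are illustrative; the AWS SDKs implement this internally):

import random
import time

def call_with_backoff(operation, max_retries=5, base_delay=0.5, max_delay=30.0):
    # Retry `operation` with progressively longer, randomized waits.
    for attempt in range(max_retries):
        try:
            return operation()
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Double the delay each attempt, cap it, and add jitter so
            # many clients do not retry in lockstep.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))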

Top

 
Q39: An application is being developed that is going to write data to a DynamoDB table. You have to set up the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds, and each item is 6 KB in size. The reads can be eventually consistent reads. What read capacity needs to be set on the table?

  • A. 10
  • B. 20
  • C. 6
  • D. 30


Answer – A

Since 300 items are read every 30 seconds, that means (300/30) = 10 items are read every second.
Since each item is 6 KB in size, 2 read capacity units (of 4 KB each) are required for each item.
That gives a total of 2 × 10 = 20 reads per second.
Since eventually consistent reads are acceptable, we can divide the number of reads (20) by 2, which gives a read capacity of 10.

Reference: Read/Write Capacity Mode


Top

Q40: You are in charge of deploying an application that will be hosted on an EC2 instance and sit behind an Elastic Load Balancer. You have been asked to monitor the incoming connections to the Elastic Load Balancer. Which of the options below satisfies this requirement?

  • A. Use AWS CloudTrail with your load balancer
  • B. Enable access logs on the load balancer
  • C. Use a CloudWatch Logs Agent
  • D. Create a custom metric CloudWatch filter on your load balancer


Answer – B
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.
Reference: Access Logs for Your Application Load Balancer

Top

Q41: A static website has been hosted on an S3 bucket and is now being accessed by users. The JavaScript section of one of the web pages has been changed to access data hosted in another S3 bucket, and that page is no longer loading in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket


Answer – C

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

Cross-Origin Resource Sharing: Use-case Scenarios. The following are example scenarios for using CORS:

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.

Reference: Cross-Origin Resource Sharing (CORS)
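
A minimal boto3 sketch of applying a CORS configuration like the one in Scenario 1 (the bucket name and origin are taken from the scenario for illustration):

import boto3

s3 = boto3.client("s3")

# Allow cross-origin GET and PUT requests from the website origin.
s3.put_bucket_cors(
    Bucket="website",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET", "PUT"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)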


Top

 
Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below.

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.


Answer – C
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on key values generated from the user ID.

Reference: The AWS Security Token Service (STS)
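
A minimal boto3 sketch of the pattern (the bucket name, user ID, and policy scope are hypothetical): the application authenticates the user itself, then asks STS for temporary credentials scoped to that user’s prefix in the bucket.

import json
import boto3

sts = boto3.client("sts")

# Limit the temporary credentials to the authenticated user's own prefix.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::photo-app-images/user-1234/*",
    }],
}

creds = sts.get_federation_token(
    Name="user-1234",
    Policy=json.dumps(policy),
    DurationSeconds=3600,
)["Credentials"]
# Hand AccessKeyId, SecretAccessKey, and SessionToken to the mobile client.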


Top

Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.

  • A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
  • B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
  • C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
  • D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.

Answer – C
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data.
Reference: Amazon Kinesis

Top

 
Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?

  • A. AWS Simple Storage Service
  • B. AWS DynamoDB
  • C. AWS RDS
  • D. AWS Redshift

Answer – B
DynamoDB is well suited to session management: data-access latency is low and the service scales automatically, which makes it a good data store for storing session state.
Reference: Scalable Session Handling in PHP Using Amazon DynamoDB

Top

Q45: Your application currently interacts with a DynamoDB table, and records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the features below should be used when developing such a solution?

  • A. AWS DynamoDB Encryption
  • B. AWS DynamoDB Streams
  • C. AWS DynamoDB Accelerator
  • D. AWS Table Accelerator


Answer – B
DynamoDB Streams Use Cases and Design Patterns: this post describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. We will consider how to manage the following scenarios:

  • How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
  • How do you trigger an event based on a particular transaction?
  • How do you audit or archive transactions?
  • How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?

Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states: success or failure. In other words, there is no partial completion.

As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it’s consistent and can be rolled back before commit.

You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.

AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region. All of the other options are incorrect since none of them would meet the core requirement.
Reference: DynamoDB Streams Use Cases and Design Patterns
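
A minimal sketch of a stream-triggered Lambda function that mirrors changes into a secondary table (the table name is hypothetical; the event shape is the standard DynamoDB Streams record format, assuming the stream view type includes new images):

import boto3

dynamodb = boto3.client("dynamodb")

def handler(event, context):
    # Invoked by the primary table's stream; copy each new item image
    # into the secondary table.
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            dynamodb.put_item(
                TableName="SecondaryTable",
                Item=record["dynamodb"]["NewImage"],
            )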


Top

 
Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?

  • A. Large Page size
  • B. Reduced page size
  • C. Parallel Scans
  • D. Sequential scans

Answer – B
When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table’s provisioned throughput.

Reduce page size: because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling.
Reference1: Rate-Limited Scans in Amazon DynamoDB

Reference2: Best Practices for Querying and Scanning Data
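
A minimal boto3 sketch of a reduced-page-size scan (the table name, page size, and pause are illustrative): the Limit parameter caps each page, and pagination continues via LastEvaluatedKey with a short pause between pages to smooth out the read load.

import time
import boto3

dynamodb = boto3.client("dynamodb")

def process(item):
    pass  # placeholder for your per-item logic

scan_kwargs = {"TableName": "LargeTable", "Limit": 40}  # small page size
while True:
    page = dynamodb.scan(**scan_kwargs)
    for item in page["Items"]:
        process(item)
    if "LastEvaluatedKey" not in page:
        break
    scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
    time.sleep(0.1)  # brief pause between pages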


Top

 
Q47: Which of the following are correct ways of passing a stage variable to an HTTP URL? (Select TWO.)

  • A. http://example.com/${}/prod
  • B. http://example.com/${stageVariables.}/prod
  • C. http://${stageVariables.}.example.com/dev/operation
  • D. http://${stageVariables}.example.com/dev/operation
  • E. http://${}.example.com/dev/operation
  • F. http://example.com/${stageVariables}/prod


Answer – B. and C.
A stage variable can be used as part of an HTTP integration URL in the following cases:

  • A full URI without protocol
  • A full domain
  • A subdomain
  • A path
  • A query string

In the question above, options B and C use the stage variable as a path and as a subdomain, respectively.
Reference: Amazon API Gateway Stage Variables Reference

Top

Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes which they use for their on-premise configuration for servers in AWS. Which of the following service would be ideal to use in this regard?

  • A. AWS Elastic Beanstalk
  • B. AWS OpsWorks
  • C. AWS Cloudformation
  • D. AWS SQS


Answer – B
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configuration of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management.
Reference: AWS OpsWorks

Top

 
Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?

  • A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
  • B. Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access.
  • C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
  • D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.


Answer – C
With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be stored or exposed this way.
Reference: About Web Identity Federation
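
A minimal boto3 sketch of the token exchange (the role ARN is hypothetical, and the identity token is assumed to have come from the Google/Facebook sign-in); note that no AWS credentials are required to call this API.

import boto3

sts = boto3.client("sts")

id_token = "<OIDC token returned by the identity provider>"  # assumed input

creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/WebAppDynamoDBRole",
    RoleSessionName="web-user-session",
    WebIdentityToken=id_token,
)["Credentials"]

# Use the temporary credentials for the scoped DynamoDB access.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)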

Top

Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?

  • A. Cognito Data
  • B. Cognito Events
  • C. Cognito Streams
  • D. Cognito Callbacks


Answer – C
Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can configure a Kinesis stream to receive events as data is updated and synchronized; Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams.
Reference: Amazon Cognito Streams

Top

 
Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? Choose 2 answers from the options given below.

  • A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
  • B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
  • C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
  • D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function


Answer: A and C.
AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC.
Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC
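
A minimal boto3 sketch of supplying that VPC configuration to an existing function (the function name, subnet IDs, and security group IDs are hypothetical):

import boto3

lambda_client = boto3.client("lambda")

# Subnets and security groups let Lambda create the ENIs it needs to
# reach resources inside the private VPC.
lambda_client.update_function_configuration(
    FunctionName="my-vpc-function",
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
        "SecurityGroupIds": ["sg-0123abcd"],
    },
)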

Top

 
Q52: You’ve been tasked with migrating an existing on-premises environment to Elastic Beanstalk. The application does not make use of Docker containers, and you don’t see any relevant environments in the Beanstalk service that would be suitable to host your application. What should you consider doing in this case?

  • A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
  • B. Consider using Cloudformation to deploy your environment to Elastic Beanstalk
  • C. Consider using Packer to create a custom platform
  • D. Consider deploying your application using the Elastic Container Service


Answer – C
Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a custom image in several ways. A custom platform lets you develop an entirely new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software for which Elastic Beanstalk doesn’t provide a platform out of the box. Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance.

To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems (Ubuntu, RHEL, or Amazon Linux; see the flavor entry in Platform.yaml File Format for the exact version numbers) and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings.
Reference: AWS Elastic Beanstalk Custom Platforms

Top

Q53: Company B is writing 10 items to a DynamoDB table every second. Each item is 15.5 KB in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.

  • A. 10
  • B. 160
  • C. 155
  • D. 16


Answer – B.
Write capacity is provisioned in units of 1 KB writes per second. A 15.5 KB item rounds up to 16 write capacity units per write, and 10 writes per second × 16 units = 160 write capacity units.
Reference: Read/Write Capacity Mode

Top

Q54: Which AWS Service can be used to automatically install your application code onto EC2, on premises systems and Lambda?

  • A. CodeCommit
  • B. X-Ray
  • C. CodeBuild
  • D. CodeDeploy


Answer: D

Reference: AWS CodeDeploy


Top

 
Q55: Which AWS service can be used to compile source code, run tests and package code?

  • A. CodePipeline
  • B. CodeCommit
  • C. CodeBuild
  • D. CodeDeploy


Answer: C.

Reference: AWS CodeBuild


Top

Q56: How can you prevent CloudFormation from deleting your entire stack on failure? (Choose 2)

  • A. Set the Rollback on failure radio button to No in the CloudFormation console
  • B. Set Termination Protection to Enabled in the CloudFormation console
  • C. Use the –disable-rollback flag with the AWS CLI
  • D. Use the –enable-termination-protection protection flag with the AWS CLI

Answer: A. and C.

Reference: Protecting a Stack From Being Deleted
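
A minimal boto3 sketch combining both protections (the stack name is hypothetical, and the template is assumed to live in a local file):

import boto3

cf = boto3.client("cloudformation")

# DisableRollback keeps failed resources in place for debugging;
# EnableTerminationProtection guards the stack against deletion.
cf.create_stack(
    StackName="my-app-stack",
    TemplateBody=open("template.yaml").read(),
    DisableRollback=True,
    EnableTerminationProtection=True,
)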

Top

Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?

  • A. Continuous Integration
  • B. Continuous Deployment
  • C. Continuous Delivery
  • D. Continuous Development
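
Answer: A.
Continuous Integration is the practice of merging code changes frequently, with automated builds and tests surfacing bugs early in the release process.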

Top

Q58: When deploying application code to EC2, the AppSpec file can be written in which language?

  • A. JSON
  • B. JSON or YAML
  • C. XML
  • D. YAML
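
Answer: D.
For EC2/on-premises deployments the AppSpec file must be written in YAML; only AppSpec files for Lambda deployments may be written in JSON or YAML.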

Top

 
Q59: Part of your CloudFormation deployment fails due to a misconfiguration. By default, what will happen?

  • A. CloudFormation will rollback only the failed components
  • B. CloudFormation will rollback the entire stack
  • C. Failed component will remain available for debugging purposes
  • D. CloudFormation will ask you if you want to continue with the deployment
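
Answer: B.
By default, CloudFormation rolls back the entire stack on failure, returning it to the last known stable state.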


Top

Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?

  • A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
  • B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
  • C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
  • D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.

Answer: C

Reference: Getting Started with Amazon SNS


Top

Q61: Which AWS service can be used to centrally store and version control your application source code, binaries, and libraries?

  • A. CodeCommit
  • B. CodeBuild
  • C. CodePipeline
  • D. ElasticFileSystem

Answer: A

Reference: AWS CodeCommit


Top

 
Q62: You are using CloudFormation to create a new S3 bucket, which of the following sections would you use to define the properties of your bucket?

  • A. Conditions
  • B. Parameters
  • C. Outputs
  • D. Resources

Answer: D

Reference: Resources
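
A minimal sketch of a template whose Resources section declares an S3 bucket, built as a Python dict and submitted with boto3 (the stack and bucket names are hypothetical):

import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyBucket": {
            "Type": "AWS::S3::Bucket",
            # Properties of the resource are defined here.
            "Properties": {"BucketName": "my-example-bucket-12345"},
        }
    },
}

boto3.client("cloudformation").create_stack(
    StackName="bucket-stack",
    TemplateBody=json.dumps(template),
)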


Top

Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?

  • A. Transforms
  • B. Outputs
  • C. Resources
  • D. Instances

Answer: C.
The Resources section defines the resources you are provisioning. Outputs is used to return user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack. Transform is used to reference code located in S3.
Reference: Resources

Top

Q64: Which AWS service can be used to fully automate your entire release process?

  • A. CodeDeploy
  • B. CodePipeline
  • C. CodeCommit
  • D. CodeBuild

Answer: B.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates

Reference: AWS CodePipeline


Top

 
Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?

  • A. Outputs
  • B. Transforms
  • C. Resources
  • D. Exports

Answer: A.
Outputs is used to return user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack.
Reference: CloudFormation Outputs
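
A minimal sketch of the two halves of a cross-stack reference (the names are hypothetical): the producing template exports an output, and a consuming template imports it with Fn::ImportValue.

# Outputs section of the producing stack's template:
producer_outputs = {
    "Outputs": {
        "BucketName": {
            "Value": {"Ref": "MyBucket"},
            "Export": {"Name": "shared-bucket-name"},  # exported for other stacks
        }
    }
}

# Fragment of a consuming stack's template that imports the exported value:
consumer_property = {"Fn::ImportValue": "shared-bucket-name"}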

Top

 
Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?

  • A. Inputs
  • B. Resources
  • C. Transforms
  • D. Files

Answer: C.
Transform is used to reference code located in S3 and to specify the use of the Serverless Application Model (SAM) for Lambda deployments.
Reference: Transforms

Top

Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?

  • A. buildspec.yml
  • B. appspec.json
  • C. appspec.yml
  • D. buildspec.json
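
Answer: C.
For EC2/on-premises deployments, CodeDeploy looks for a file named appspec.yml in the root of the revision to determine which source files to copy and which lifecycle hook scripts to run.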

Top

 
Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?

  • A. Share the code using an EBS volume
  • B. Copy and paste the code into the template each time you need to use it
  • C. Use a CloudFormation nested stack
  • D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.

Answer: C.

Reference: Working with Nested Stacks

Top

Q69: In the CodeDeploy AppSpec file, what are hooks used for?

  • A. To reference AWS resources that will be used during the deployment
  • B. Hooks are reserved for future use
  • C. To specify files you want to copy during the deployment.
  • D. To specify scripts or functions that you want to run at set points in the deployment lifecycle

Answer: D.
The ‘hooks’ section for an EC2/On-Premises deployment contains mappings that link deployment lifecycle event hooks to one or more scripts.

Reference: AppSpec ‘hooks’ Section

Top

 
Q70: Which command can you use to encrypt a plain text file using a CMK?

  • A. aws kms-encrypt
  • B. aws iam encrypt
  • C. aws kms encrypt
  • D. aws encrypt

Answer: C.
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext fileb://ExamplePlaintextFile --output text --query CiphertextBlob > C:\Temp\ExampleEncryptedFile.base64

Reference: AWS CLI Encrypt

Top

Q72: Which of the following is an encrypted key used by KMS to encrypt your data?

  • A. Customer Managed Key
  • B. Encryption Key
  • C. Envelope Key
  • D. Customer Master Key

Answer: C.
Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption.
Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.

Reference: Envelope Encryption

Top

 
Q73: Which of the following statements are correct? (Choose 2)

  • A. The Customer Master Key is used to encrypt and decrypt the Envelope Key or Data Key
  • B. The Envelope Key or Data Key is used to encrypt and decrypt plain text files.
  • C. The envelope Key or Data Key is used to encrypt and decrypt the Customer Master Key.
  • D. The Customer Master Key is used to encrypt and decrypt plain text files.

Answer: A. and B.

Reference: AWS Key Management Service Concepts

Top

Q74: Which of the following statements are correct in relation to KMS? (Choose 2)

  • A. KMS Encryption keys are regional
  • B. You cannot export your customer master key
  • C. You can export your customer master key.
  • D. KMS encryption Keys are global

Answer: A. and B.

Reference: AWS Key Management Service FAQs

Q75:  A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.)
A. Compiled application code
B. Java runtime environment
C. References to the event sources
D. Lambda execution role
E. Application dependencies


Answer: A. E.
Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your compiled application code and any dependencies; the runtime, event source references, and execution role are not part of the package.
Reference: Lambda deployment packages.

Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package?
A. A launch template for the Amazon EC2 Auto Scaling group
B. A CodeDeploy AppSpec file
C. An EC2 role that grants the application access to AWS services
D. An IAM policy that grants the application access to AWS services


Answer: B.
Notes: The CodeDeploy AppSpec (application specific) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file.
Reference: CodeDeploy application specification (AppSpec) files.
Category: Deployment

Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)

A. Create a new Lambda version every time a new code release needs testing.
B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version.
C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT.
D. Create a new Lambda layer every time a new code release needs testing.
E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.


Answer: A. B.
Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes, without affecting the stable production version of the code. By creating separate aliases for Production and Development, systems can initiate the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. The $LATEST version is the newest published version.
Reference: Lambda function versions.

For more information about Lambda layers, see Creating and sharing Lambda layers.

For more information about Lambda function aliases, see Lambda function aliases.

Category: Deployment
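
A minimal boto3 sketch of the version/alias flow (the function name is hypothetical, and the two aliases are assumed to have been created earlier with create_alias):

import boto3

lam = boto3.client("lambda")

# Freeze the current code as an immutable, numbered version.
version = lam.publish_version(FunctionName="order-processor")["Version"]

# Point Production at the stable version; keep Development on $LATEST.
lam.update_alias(FunctionName="order-processor", Name="Production", FunctionVersion=version)
lam.update_alias(FunctionName="order-processor", Name="Development", FunctionVersion="$LATEST")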

Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.)
A. Update event source mappings with the ARN of the Lambda layer.
B. Point a Lambda alias to a new version of the Lambda function.
C. Create a Lambda alias for each published version of the Lambda function.
D. Point a Lambda alias to a new Lambda function alias.
E. Update the event source mappings with the Lambda alias ARN.


Answer: B. E.
Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version.
Reference: Lambda function aliases.
Category: Deployment

Q78:  A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements?
A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C).
B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket.
C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket.
D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.


Answer: D.
Notes: When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and removes the encryption key from memory.
Reference: Protecting data using server-side encryption with customer-provided encryption keys (SSE-C).

Category: Security

Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)

A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS).
B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS).
C. Use generated keys with the DynamoDB Encryption Client.
D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs).
E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).


Answer: A. C.
Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK.
Reference: Direct KMS Materials Provider.
Category: Deployment

Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.)
A. Create an AWS Lambda authorizer for the API.
B. Create an Amazon Cognito authorizer for the API.
C. Configure the authorizer for the API resource.
D. Configure the API methods to use the authorizer.
E. Configure the authorizer for the API stage.


Answer: B. D.
Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API.
Reference: Control access to a REST API using Amazon Cognito user pools as authorizer.
Category: Security
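
A minimal boto3 sketch of wiring this up for a REST API (all IDs and ARNs are hypothetical): first create the Cognito authorizer, then attach it to a method.

import boto3

apigw = boto3.client("apigateway")

authorizer = apigw.create_authorizer(
    restApiId="a1b2c3d4e5",
    name="CognitoUserPoolAuthorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_Example1"],
    identitySource="method.request.header.Authorization",
)

# Require a valid user pool token on this method.
apigw.put_method(
    restApiId="a1b2c3d4e5",
    resourceId="abc123",
    httpMethod="GET",
    authorizationType="COGNITO_USER_POOLS",
    authorizerId=authorizer["id"],
)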

Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.)
A. Authenticate to the Amazon Cognito identity pool directly.
B. Authenticate to AWS Identity and Access Management (IAM) directly.
C. Authenticate to the Amazon Cognito user pool directly.
D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS).
E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.


Answer: C. E.
Notes: The Amazon Cognito user pool provides direct user authentication. The Amazon Cognito user pool provides a federated authentication option with third-party identity provider (IdP), including amazon.com.
Reference: Adding User Pool Sign-in Through a Third Party.
Category: Security

Q82: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.)
A. Define a AWS Step Functions task for each Lambda function.
B. Define a AWS Step Functions task for each workflow.
C. Write code that polls the AWS Step Functions invocation to coordinate each workflow.
D. Define an AWS Step Functions state machine for each workflow.
E. Define an AWS Step Functions state machine for each Lambda function.
Answer: A. D.
Notes: Step Functions is based on state machines and tasks. A state machine is a workflow: it can be used to express a workflow as a number of states, their relationships, and their input and output. Tasks perform work by coordinating with other AWS services, such as Lambda. You can coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine, written in the Amazon States Language.
Reference: Getting Started with AWS Step Functions – https://aws.amazon.com/step-functions/getting-started/
Category: Development
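
A minimal boto3 sketch of one such workflow (the function ARNs, role ARN, and names are hypothetical): the state machine is the workflow, and each Task state invokes one Lambda function.

import json
import boto3

sfn = boto3.client("stepfunctions")

# A two-task order workflow expressed in the Amazon States Language.
definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
            "Next": "ChargeCustomer",
        },
        "ChargeCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-customer",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="OrderProcessingWorkflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)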

Exam content

Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context.
The percentage in each domain represents only scored content.

Domain 1: Deployment 22%
Domain 2: Security 26%
Domain 3: Development with AWS Services 30%
Domain 4: Refactoring 10%
Domain 5: Monitoring and Troubleshooting 12%

Domain 1: Deployment
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
–  Commit code to a repository and invoke build, test and/or deployment actions
–  Use labels and branches for version and release management
–  Use AWS CodePipeline to orchestrate workflows against different environments
–  Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS
CodeDeploy for CI/CD purposes
–  Perform a rollback based on the application deployment policy

1.2 Deploy applications using AWS Elastic Beanstalk.
–  Utilize existing supported environments to define a new application stack
–  Package the application
–  Introduce a new application version into the Elastic Beanstalk environment
–  Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable)
–  Validate application health using Elastic Beanstalk dashboard
–  Use Amazon CloudWatch Logs to instrument application logging

1.3 Prepare the application deployment package to be deployed to AWS.
–  Manage the dependencies of the code module (like environment variables, config files and static image files) within the package
–  Outline the package/container directory structure and organize files appropriately
–  Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)

1.4 Deploy serverless applications.
–  Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template
–  Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)

Domain 2: Security
2.1 Make authenticated calls to AWS services.
–  Communicate the required policy based on the least privileges required by the application
–  Assume an IAM role to access a service
–  Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)

2.2 Implement encryption using AWS services.
– Encrypt data at rest (client side; server side; envelope encryption) using AWS services
–  Encrypt data in transit

2.3 Implement application authentication and authorization.
– Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools
–  Use Amazon Cognito-provided credentials to write code that accesses AWS services
–  Use Amazon Cognito sync to synchronize user profiles and data
–  Use developer-authenticated identities to interact between end user devices, backend
authentication, and Amazon Cognito

Domain 3: Development with AWS Services
3.1 Write code for serverless applications.
– Compare and contrast server-based vs. serverless model (e.g., micro services, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications)
– Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler)
– Create an API endpoint using Amazon API Gateway
–  Create and test appropriate API actions like GET, POST using the API endpoint
–  Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes)
–  Compute read/write capacity units for Amazon DynamoDB based on application requirements
–  Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis)
–  Invoke an AWS Lambda function synchronously and asynchronously

3.2 Translate functional requirements into application design.
– Determine real-time vs. batch processing for a given use case
– Determine use of synchronous vs. asynchronous for a given use case
– Determine use of event vs. schedule/poll for a given use case
– Account for tradeoffs for consistency models in an application design

Domain 4: Refactoring
4.1 Optimize applications to best use AWS services and features.
– Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache)
– Apply an Amazon S3 naming scheme for optimal read performance

4.2 Migrate existing application code to run on AWS.
– Isolate dependencies
– Run the application as one or more stateless processes
– Develop in order to enable horizontal scalability
– Externalize state

Domain 5: Monitoring and Troubleshooting

5.1 Write code that can be monitored.
– Create custom Amazon CloudWatch metrics
– Perform logging in a manner available to systems operators
– Instrument application source code to enable tracing in AWS X-Ray

5.2 Perform root cause analysis on faults found in testing or production.
– Interpret the outputs from the logging mechanism in AWS to identify errors in logs
– Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues
– Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component

Which key tools, technologies, and concepts might be covered on the exam?

The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
– Analytics
– Application Integration
– Containers
– Cost and Capacity Management
– Data Movement
– Developer Tools
– Instances (virtual machines)
– Management and Governance
– Networking and Content Delivery
– Security
– Serverless

AWS services and features

Analytics:
– Amazon Elasticsearch Service (Amazon ES)
– Amazon Kinesis
Application Integration:
– Amazon EventBridge (Amazon CloudWatch Events)
– Amazon Simple Notification Service (Amazon SNS)
– Amazon Simple Queue Service (Amazon SQS)
– AWS Step Functions

Compute:
– Amazon EC2
– AWS Elastic Beanstalk
– AWS Lambda

Containers:
– Amazon Elastic Container Registry (Amazon ECR)
– Amazon Elastic Container Service (Amazon ECS)
– Amazon Elastic Kubernetes Service (Amazon EKS)

Database:
– Amazon DynamoDB
– Amazon ElastiCache
– Amazon RDS

Developer Tools:
– AWS CodeArtifact
– AWS CodeBuild
– AWS CodeCommit
– AWS CodeDeploy
– Amazon CodeGuru
– AWS CodePipeline
– AWS CodeStar
– AWS Fault Injection Simulator
– AWS X-Ray

Management and Governance:
– AWS CloudFormation
– Amazon CloudWatch

Networking and Content Delivery:
– Amazon API Gateway
– Amazon CloudFront
– Elastic Load Balancing

Security, Identity, and Compliance:
– Amazon Cognito
– AWS Identity and Access Management (IAM)
– AWS Key Management Service (AWS KMS)

Storage:
– Amazon S3

Out-of-scope AWS services and features

The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.
Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant.
Out-of-scope AWS services and features include the following:
– AWS Application Discovery Service
– Amazon AppStream 2.0
– Amazon Chime
– Amazon Connect
– AWS Database Migration Service (AWS DMS)
– AWS Device Farm
– Amazon Elastic Transcoder
– Amazon GameLift
– Amazon Lex
– Amazon Machine Learning (Amazon ML)
– AWS Managed Services
– Amazon Mobile Analytics
– Amazon Polly

– Amazon QuickSight
– Amazon Rekognition
– AWS Server Migration Service (AWS SMS)
– AWS Service Catalog
– AWS Shield Advanced
– AWS Shield Standard
– AWS Snow Family
– AWS Storage Gateway
– AWS WAF
– Amazon WorkMail
– Amazon WorkSpaces

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

Top

AWS Certified Developer – Associate Practice Questions And Answers Dump

Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your
partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost.
How can you accommodate the partners’ broken web services without wasting your resources?

  • A. Create a delay queue and set DelaySeconds to 30 seconds
  • B. Requeue the message with a VisibilityTimeout of 30 seconds.
  • C. Create a dead letter queue and set the Maximum Receives to 3.
  • D. Requeue the message with a DelaySeconds of 30 seconds.


C. After a message is taken from the queue and returned for the maximum number of retries, it is
automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.

Reference: Amazon SQS Dead-Letter Queues
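
For illustration, a minimal boto3 sketch of attaching such a redrive policy (the queue URL and DLQ ARN below are placeholders):

import json
import boto3

sqs = boto3.client("sqs")

queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/commands"
dlq_arn = "arn:aws:sqs:us-east-1:123456789012:commands-dlq"

# After 3 failed receives, SQS moves the message to the dead-letter
# queue instead of redelivering it, so no command is lost.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": "3",
        })
    },
)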


Top

Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently.
What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. The AWS Documentation mentions the following:

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

Q2: You are creating a DynamoDB table with the following attributes:

  • PurchaseOrderNumber (partition key)
  • CustomerID
  • PurchaseDate
  • TotalPurchaseValue

One of your applications must retrieve items from the table to calculate the total value of purchases for a
particular customer over a date range. What secondary index do you need to add to the table?

  • A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute
  • C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute


C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition
key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the
TotalPurchaseValue into the index provides all the data needed to satisfy the use case.

Reference: AWS DynamoDB Global Secondary Indexes

Difference between local and global indexes in DynamoDB

    • Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
    • Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
    • Local Secondary Indexes still rely on the original Hash Key. When you supply a table with hash+range, think about the LSI as hash+range1, hash+range2.. hash+range6. You get 5 more range attributes to query on. Also, there is only one provisioned throughput.
    • Global Secondary Indexes define a new paradigm – different hash/range keys per index.
      This breaks the original usage of one hash key per table. This is also why, when defining a GSI, you are required to add a provisioned throughput per index and pay for it.
    • Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
    • Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.

Throughput:

  • Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table and another for the index. Both operations will consume write capacity units from the table.
  • Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation consumes read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table and another for the index.
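
As a sketch of the Q2 answer above, here is how that GSI could be declared with boto3 (the table name and throughput numbers are illustrative):

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="PurchaseOrders",
    AttributeDefinitions=[
        {"AttributeName": "PurchaseOrderNumber", "AttributeType": "S"},
        {"AttributeName": "CustomerID", "AttributeType": "S"},
        {"AttributeName": "PurchaseDate", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "PurchaseOrderNumber", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    GlobalSecondaryIndexes=[{
        "IndexName": "CustomerID-PurchaseDate-index",
        "KeySchema": [
            {"AttributeName": "CustomerID", "KeyType": "HASH"},
            {"AttributeName": "PurchaseDate", "KeyType": "RANGE"},
        ],
        # Project only the attribute the query needs.
        "Projection": {
            "ProjectionType": "INCLUDE",
            "NonKeyAttributes": ["TotalPurchaseValue"],
        },
        # A GSI carries its own provisioned throughput.
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
)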


Top


Q3: When referencing the remaining execution time within a Lambda function’s code, you would use:

  • A. The event object
  • B. The timeLeft object
  • C. The remains object
  • D. The context object


D. The context object.

Reference: AWS Lambda
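
A minimal handler showing the context object in use (get_remaining_time_in_millis is the standard method on the Python context object):

def handler(event, context):
    # Milliseconds left before the function hits its configured timeout.
    remaining_ms = context.get_remaining_time_in_millis()
    print(f"Time remaining: {remaining_ms} ms")
    return {"remaining_ms": remaining_ms}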


Top

Q4: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context
D. event, context

def handler_name(event, context):
    return some_value

Reference: AWS Lambda Function Handler in Python

Top

Q5: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda Deployment Package
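
Both paths are visible in the boto3 Lambda API; a sketch with placeholder names (bucket, role ARN, and file names are illustrative):

import boto3

lam = boto3.client("lambda")

# Option 1: deployment package referenced from S3.
lam.create_function(
    FunctionName="my-function",
    Runtime="python3.9",
    Role="arn:aws:iam::123456789012:role/lambda-exec-role",
    Handler="app.handler",
    Code={"S3Bucket": "my-deploy-bucket", "S3Key": "package.zip"},
)

# Option 2: upload the zip bytes directly.
with open("package.zip", "rb") as f:
    lam.update_function_code(FunctionName="my-function", ZipFile=f.read())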

Top

Q6: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime

C. Function code and libraries not included within the runtime

Reference: AWS Lambda Deployment Package in PowerShell

Top

Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?

  • A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
  • B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
  • C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
  • D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.


D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return requests. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return requests.

Reference: Comparison of Security Groups and Network ACLs

AWS Security Groups and NACL


Top

Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.

Reference: AWS Network Address Translation Gateway
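
A boto3 sketch of the moving parts (subnet and route table IDs are placeholders; the NAT gateway goes in the public subnet, the route in the private subnet’s route table):

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in the PUBLIC subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",
    AllocationId=eip["AllocationId"],
)

# Point the PRIVATE subnet's default route at the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)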


Top

Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.

Reference: AWS Auto Scaling


Top

Q10: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere

D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda

Top

Q11: You’re writing a script with an AWS SDK that uses AWS API actions to create AMIs of non-EBS-backed instances. Which API call occurs in the final step of creating an AMI?

  • A. RegisterImage
  • B. CreateImage
  • C. ami-register-image
  • D. ami-create-image

A. It is RegisterImage. AWS API actions follow this capitalization style and do not contain hyphens.

Reference: API RegisterImage

Top

Q12: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?

  • A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
  • B. Permanently assigning users to specific instances and always routing their traffic to those instances
  • C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
  • D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance

A. The best practice is to keep instances stateless and store session state in a distributed cache such as ElastiCache (Redis or Memcached); any instance can then serve any user, and sessions survive instance failure. Options B, C, and D all describe sticky sessions, which tie users to specific instances.

Top

Q13: Which API call would best be used to describe an Amazon Machine Image?

  • A. ami-describe-image
  • B. ami-describe-images
  • C. DescribeImage
  • D. DescribeImages

D. In general, API actions stick to the PascalCase style with the first letter of every word capitalized.

Reference: API DescribeImages

Top

Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

  • A. Autoscaling requires using Amazon EBS-backed instances
  • B. Virtual Private Cloud requires EBS backed instances
  • C. Amazon EBS-backed instances can be stopped and restarted without losing data
  • D. Instance-store backed instances can be stopped and restarted without losing data


C. Instance-store backed images use “ephemeral” (temporary) storage. The storage is only available during the life of an instance. Rebooting an instance allows ephemeral data to persist; however, stopping and starting an instance removes all ephemeral storage.

Reference: What is the difference between EBS and Instance Store?

Top

Q15: After having created a new Linux instance on Amazon EC2, and downloaded the .pem file (called Toto.pem) you try and SSH into your IP address (54.1.132.33) using the following command.
ssh -i Toto.pem ec2-user@54.1.132.33
However you receive the following error.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
What is the most probable reason for this and how can you fix it?

  • A. You do not have root access on your terminal and need to use the sudo option for this to work.
  • B. You do not have enough permissions to perform the operation.
  • C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
  • D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.

D. You need to run something like: chmod 400 Toto.pem


Top

Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?

  • A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
  • B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
  • C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
  • D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.

D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.

Reference: AWS Root Device Storage

Top

Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:

  • A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
  • B. Can only be used to launch EC2 instances in the same country as the AMI is stored
  • C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
  • D. Can be used to launch EC2 instances in any AWS region

C. AMIs are only available in the region in which they were created. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another.

Reference: https://aws.amazon.com/amazon-linux-ami/

Top

Q18: Which of the following statements is true about the Elastic File System (EFS)?

  • A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
  • B. EFS can be used by multiple EC2 instances simultaneously
  • C. EFS cannot be used by an instance using EBS
  • D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes


A. and B.

Reference: https://aws.amazon.com/efs/

Top

Q19: IAM Policies, at a minimum, contain what elements?

  • A. ID
  • B. Effects
  • C. Resources
  • D. Sid
  • E. Principal
  • F. Actions

B. C. and F.

Effect – Use Allow or Deny to indicate whether the policy allows or denies access.

Resource – Specify a list of resources to which the actions apply.

Action – Include a list of actions that the policy allows or denies.

Id and Sid are not required fields in IAM policies; they are optional.

Reference: AWS IAM Access Policies
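
A sketch of a policy containing only the required elements, created via boto3 (the policy name and bucket ARN are placeholders):

import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",                       # required
        "Action": ["s3:GetObject"],              # required
        "Resource": "arn:aws:s3:::my-bucket/*",  # required
    }],
}

iam.create_policy(
    PolicyName="ReadMyBucket",
    PolicyDocument=json.dumps(policy),
)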

Top


Q20: What are the main benefits of IAM groups?

  • A. The ability to create custom permission policies.
  • B. Assigning IAM permission policies to more than one user at a time.
  • C. Easier user/policy management.
  • D. Allowing EC2 instances to gain access to S3.

B. and C.

A. is incorrect: this is a benefit of IAM generally, or a benefit of IAM policies. IAM groups don’t create policies; they have policies attached to them.

Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

 

Top

Q21: What are benefits of using AWS STS?

  • A. Grant access to AWS resources without having to create an IAM identity for them
  • B. Since credentials are temporary, you don’t have to rotate or revoke them
  • C. Temporary security credentials can be extended indefinitely
  • D. Temporary security credentials can be restricted to a specific region

A. and B.

STS lets you grant access to AWS resources without creating an IAM identity for each user, and because the credentials are temporary and expire automatically, you do not have to rotate or revoke them. Temporary credentials cannot be extended indefinitely, and they are not restricted to a specific region.

Top


Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?

  • A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1.micro to m4.large.
  • B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
  • C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
  • D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.

B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence.
Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified, or removed after environment creation.
During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:

  • Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options that apply at this level unless overridden.
  • Saved configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
  • Configuration files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle. Configuration files are executed in alphabetical order. For example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
  • Default values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.

If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or applied directly to the environment, the setting is stored as part of the environment’s configuration. These settings can be removed with the AWS CLI or with the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.

Reference: Managing ec2 features – Elastic beanstalk
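
For comparison, the instance type can also be pinned in a configuration file; a minimal .ebextensions sketch using the standard launch configuration namespace:

# .ebextensions/instance.config
option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: m4.large

Remember from the precedence list above that a saved configuration, or a setting applied directly to the environment, would override this file.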

Q24: What statements are true about Availability Zones (AZs) and Regions?

  • A. There is only one AZ in each AWS Region
  • B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
  • C. AZs can be moved between AWS Regions based on your needs
  • D. There are (almost always) two or more AZs in each AWS Region


B and D.

Reference: AWS Global Infrastructure

Top

Q25: An AWS Region contains:

  • A. Edge Locations
  • B. Data Centers
  • C. AWS Services
  • D. Availability Zones


B. C. D. Edge locations are actually distinct locations that don’t explicitly fall within AWS regions.

Reference: AWS Global Infrastructure


Top

Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?

  • A. Eventual Consistent Reads
  • B. Conditional reads for Consistency
  • C. Strongly Consistent Reads
  • D. Not possible


C. This is provided very clearly in the AWS documentation on read consistency for DynamoDB. Only with strongly consistent reads are you guaranteed to read the latest value after all prior writes have completed.

Reference: https://aws.amazon.com/dynamodb/faqs/


Top

Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is required during the migration process. Which of the following steps would you consider for this requirement?

  • A. Create an OpsWorks stack and deploy the Docker containers
  • B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
  • C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
  • D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.


B. The Elastic Beanstalk service is the ideal service to quickly provision development environments. You can also create environments which can be used to host Docker based containers.

Reference: Create and Deploy Docker in AWS


Top

Q28: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes a longer than expected time to upload the object. You want to improve the performance of the application. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket



C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
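
With boto3, the high-level transfer manager performs the multipart upload for you; a sketch with placeholder names, forcing multipart above 100 MB and uploading parts in parallel:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # switch to multipart above 100 MB
    multipart_chunksize=50 * 1024 * 1024,   # 50 MB parts
    max_concurrency=8,                      # upload parts in parallel threads
)
s3.upload_file("video.mp4", "my-bucket", "videos/video.mp4", Config=config)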


Top

Q29: A security system monitors 600 cameras, saving image metadata every 1 minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?

  • A. 6000
  • B. 10
  • C. 3600
  • D. 600


B. When you specify the write capacity of a table in DynamoDB, you specify it as the number of 1 KB writes per second. Here the writes happen every minute, so divide 600 by 60 to get the number of 1 KB writes per second: 10.

You can specify the Write capacity in the Capacity tab of the DynamoDB table.

Reference: AWS working with tables
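
The arithmetic as a one-line sketch:

# 600 cameras x 1 KB, spread evenly over 60 seconds; 1 WCU = one 1 KB write/second.
writes_per_second = 600 / 60
print(writes_per_second)  # 10 -> 10 write capacity units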

Top

Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?

  • A. Lazy loading
  • B. Write-through
  • C. Error retries
  • D. Exponential backoff



Answer – A
Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. All other options are incorrect.
Reference: Caching Strategies
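
A minimal lazy-loading sketch, assuming an ElastiCache cluster running Redis and a placeholder query_user call standing in for the RDS lookup:

import json
import redis

cache = redis.Redis(host="my-cluster.cache.amazonaws.com", port=6379)

def get_user(user_id, db):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:              # cache hit: no RDS call
        return json.loads(cached)
    record = db.query_user(user_id)     # cache miss: fetch from RDS (placeholder)
    cache.setex(key, 300, json.dumps(record))  # write back with a 5-minute TTL
    return record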

Top

Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15–60 seconds. How should the Developer efficiently query the queue for new messages?

  • A. Use long polling
  • B. Set a custom visibility timeout
  • C. Use short polling
  • D. Implement exponential backoff


Answer – A
Long polling helps ensure that the application makes fewer requests for messages over a given period, which is more cost effective. Since the messages only arrive every 15–60 seconds and we don’t know exactly when they will be available, it is better to use long polling.
Reference: Amazon SQS Long Polling
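
A boto3 sketch of long polling (the queue URL is a placeholder); WaitTimeSeconds up to 20 makes the call block until a message arrives or the wait expires, instead of returning immediately:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/work"

resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,   # long polling
)
for msg in resp.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])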

Top

Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With the new Lambda function working as expected, which of the following will shift traffic from the original Lambda function to the new one in the shortest time frame?

  • A. Canary10Percent5Minutes
  • B. Linear10PercentEvery10Minutes
  • C. Canary10Percent15Minutes
  • D. Linear10PercentEvery1Minute


Answer – A
With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment, and the remaining 90 percent is shifted 5 minutes later.
Reference: Gradual Code Deployment
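
In a SAM template the preference is set on the function; a minimal fragment (other function properties elided):

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    AutoPublishAlias: live
    DeploymentPreference:
      Type: Canary10Percent5Minutes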

Top

Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::Layerversion
  • D. AWS::Serverless::Function


Answer – B
The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket.
Reference: Declaring Serverless Resources

Top

Q35: You are using AWS envelope encryption for encrypting all sensitive data. Which of the following is true with regard to envelope encryption?

  • A. Data is encrypted by an encrypted Data key, which is further encrypted using an encrypted Master Key.
  • B. Data is encrypted by a plaintext Data key, which is further encrypted using an encrypted Master Key.
  • C. Data is encrypted by an encrypted Data key, which is further encrypted using a plaintext Master Key.
  • D. Data is encrypted by a plaintext Data key, which is further encrypted using a plaintext Master Key.


Answer – D
With envelope encryption, unencrypted data is encrypted using a plaintext Data key. That Data key is then encrypted using a plaintext Master key. The plaintext Master key is securely stored in AWS KMS and is known as a Customer Master Key.
Reference: AWS Key Management Service Concepts
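
A boto3 sketch of the envelope pattern (the key alias is a placeholder); KMS hands back both forms of the data key in one call:

import boto3

kms = boto3.client("kms")

key = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")

plaintext_key = key["Plaintext"]       # encrypt your data with this, then discard it
encrypted_key = key["CiphertextBlob"]  # store this alongside the encrypted data

# Later, recover the plaintext data key to decrypt the data:
restored = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]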

Top

 

Q36: You are developing an application that will comprise the following architecture:

  1. A set of EC2 instances to process the videos.
  2. These EC2 instances will be spun up by an autoscaling group.
  3. SQS queues to maintain the processing messages.
  4. There will be 2 pricing tiers.

How will you ensure that the premium customers’ videos are given more preference?

  • A. Create 2 Autoscaling Groups, one for normal and one for premium customers
  • B. Create 2 sets of EC2 instances, one for normal and one for premium customers
  • C. Create 2 SQS queues, one for normal and one for premium customers
  • D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.


Answer – C
The ideal option would be to create 2 SQS queues. Messages can then be processed by the application from the high-priority queue first. The other options are not ideal; they would lead to extra costs and extra maintenance.
Reference: SQS
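
A sketch of how the application could honor the two tiers (queue URLs are placeholders): drain the premium queue first and fall back to the normal queue only when it is empty:

import boto3

sqs = boto3.client("sqs")
premium_q = "https://sqs.us-east-1.amazonaws.com/123456789012/premium"
normal_q = "https://sqs.us-east-1.amazonaws.com/123456789012/normal"

def next_batch():
    for url in (premium_q, normal_q):
        resp = sqs.receive_message(QueueUrl=url, MaxNumberOfMessages=10,
                                   WaitTimeSeconds=1)
        msgs = resp.get("Messages", [])
        if msgs:
            return url, msgs
    return None, []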

Top

Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?

  • A. CustomerID
  • B. CustomerName
  • C. Location
  • D. Age


Answer- A
Use high-cardinality attributes. These are attributes that have distinct values for each item, like email ID, employee number, customer ID, session ID, order ID, and so on.
Use composite attributes. Try to combine more than one attribute to form a unique key.
Reference: Choosing the right DynamoDB Partition Key

Top

Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for network delays. Which of the following two mechanisms should he implement in the application?

  • A. Multiple SQS queues
  • B. Exponential backoff algorithm
  • C. Retries in your application code
  • D. Consider using the Java sdk.


Answer- B. and C.
In addition to simple retries, each AWS SDK implements exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values, and should be set based on the operation being performed, as well as other local factors, such as network latency.
Reference: Error Retries and Exponential Backoff in AWS
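
A generic sketch of retries with exponential backoff and jitter (the AWS SDKs already do this internally; this is just the shape of the algorithm):

import random
import time

def call_with_backoff(operation, max_retries=5, base_delay=0.1, max_delay=5.0):
    for attempt in range(max_retries):
        try:
            return operation()
        except Exception:
            if attempt == max_retries - 1:
                raise                                  # retries exhausted
            # Progressively longer waits, capped and jittered.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * random.uniform(0.5, 1.0))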

Top

 

Q39: An application is being developed that is going to write data to a DynamoDB table. You have to set up the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds. Each item is of size 6 KB. The reads can be eventually consistent reads. What should be the read capacity that needs to be set on the table?

  • A. 10
  • B. 20
  • C. 6
  • D. 30


Answer – A

Since 300 items are read every 30 seconds, that means 300/30 = 10 items are read every second.
Since each item is 6 KB in size and one read capacity unit covers 4 KB, 2 read units are required for each item.
So we have a total of 2 × 10 = 20 strongly consistent reads per second.
Since eventual consistency is sufficient, we can divide the number of reads (20) by 2, giving a read capacity of 10.

Reference: Read/Write Capacity Mode
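
The same arithmetic as a sketch:

import math

items_per_second = 300 / 30                       # 10 items/second
units_per_item = math.ceil(6 / 4)                 # 6 KB at 4 KB per read unit -> 2
strong_reads = items_per_second * units_per_item  # 20
print(strong_reads / 2)                           # eventually consistent -> 10 RCUs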


Top

Q40: You are in charge of deploying an application that will be hosted on an EC2 Instance and sit behind an Elastic Load balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the below options can suffice this requirement?

  • A. Use AWS CloudTrail with your load balancer
  • B. Enable access logs on the load balancer
  • C. Use a CloudWatch Logs Agent
  • D. Create a custom metric CloudWatch filter on your load balancer


Answer – B
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.
Reference: Access Logs for Your Application Load Balancer
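
Access logs can be switched on via load balancer attributes; a boto3 sketch (the ARN and bucket are placeholders, and the bucket policy must permit ELB log delivery):

import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-lb/1234567890abcdef",
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-elb-logs"},
    ],
)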

Top

Q41: A static web site has been hosted on a bucket and is now being accessed by users. One of the web pages’ JavaScript sections has been changed to access data hosted in another S3 bucket. Now that same web page is no longer loading in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket


Answer – C

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

Cross-Origin Resource Sharing: Use-case Scenarios

The following are example scenarios for using CORS:

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.

Reference: Cross-Origin Resource Sharing (CORS)
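
A boto3 sketch of a CORS rule for Scenario 1 (the bucket and origin are placeholders):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="data-bucket",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
            "AllowedMethods": ["GET", "PUT"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,  # how long browsers may cache the preflight response
        }]
    },
)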


Top

 

Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user Images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS) to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.


Answer- C
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on the key values generated via the user ID.

Reference: The AWS Security Token Service (STS)
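
A boto3 sketch of handing an authenticated user temporary credentials via an assumed role (the role ARN and session name are placeholders):

import boto3

sts = boto3.client("sts")

creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/photo-app-user",
    RoleSessionName="user-1234",
    DurationSeconds=3600,          # the credentials expire on their own
)["Credentials"]

# Build an S3 client from the temporary credentials.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)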


Top

Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.

  • A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
  • B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
  • C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
  • D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.



Answer – C
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data.
Reference: Amazon Kinesis

Top

 

Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?

  • A. AWS Simple Storage Service
  • B. AWS DynamoDB
  • C. AWS RDS
  • D. AWS Redshift



Answer – B
DynamoDB is an alternative solution that can be used for session management. Its low data-access latency makes it well suited as a data store for session state.
Reference: Scalable Session Handling in PHP Using Amazon DynamoDB

Top

Q45: Your application currently interacts with a DynamoDB table. Records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the below features should be used when developing such a solution?

  • A. AWS DynamoDB Encryption
  • B. AWS DynamoDB Streams
  • C. AWS DynamoDB Accelerator
  • D. AWS Table Accelerator


Answer – B
DynamoDB Streams Use Cases and Design Patterns

This post describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. We will consider how to manage the following scenarios:

  • How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
  • How do you trigger an event based on a particular transaction?
  • How do you audit or archive transactions?
  • How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?

Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states: success or failure. In other words, there is no partial completion.

As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it’s consistent and can be rolled back before commit.

You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.

AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region. All of the other options are incorrect since none of these would meet the core requirement.
Reference: DynamoDB Streams Use Cases and Design Patterns
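As a sketch of how this is enabled programmatically (boto3; the Orders table is hypothetical), a stream can be turned on with update_table, after which a Lambda trigger can write the change records to the secondary table:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Enable a stream on a hypothetical existing table named "Orders".
    # NEW_AND_OLD_IMAGES captures both the before and after state of each item.
    dynamodb.update_table(
        TableName="Orders",
        StreamSpecification={
            "StreamEnabled": True,
            "StreamViewType": "NEW_AND_OLD_IMAGES",
        },
    )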



Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and the scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?

  • A. Large Page size
  • B. Reduced page size
  • C. Parallel Scans
  • D. Sequential scans

Answer – B
When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table’s provisioned throughput.

Reduce page size: because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling.
Reference1: Rate-Limited Scans in Amazon DynamoDB

Reference2: Best Practices for Querying and Scanning Data
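A minimal sketch of such a rate-friendly scan (boto3; the table name is supplied by the caller) that uses the Limit parameter and follows LastEvaluatedKey:

    import boto3

    dynamodb = boto3.client("dynamodb")

    def small_page_scan(table_name, page_size=40):
        # Small pages limit the read capacity consumed by each request.
        kwargs = {"TableName": table_name, "Limit": page_size}
        while True:
            response = dynamodb.scan(**kwargs)
            for item in response.get("Items", []):
                yield item
            last_key = response.get("LastEvaluatedKey")
            if not last_key:
                break
            kwargs["ExclusiveStartKey"] = last_key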



Q47: Which of the following are correct ways of passing a stage variable to an HTTP URL? (Select TWO.)

  • A. http://example.com/${}/prod
  • B. http://example.com/${stageVariables.}/prod
  • C. http://${stageVariables.}.example.com/dev/operation
  • D. http://${stageVariables}.example.com/dev/operation
  • E. http://${}.example.com/dev/operation
  • F. http://example.com/${stageVariables}/prod


Answer – B. and C.
A stage variable can be used as part of an HTTP integration URL in the following cases:

  • A full URI without protocol
  • A full domain
  • A subdomain
  • A path
  • A query string

In this case, option B uses the stage variable as a path and option C uses it as a subdomain.
Reference: Amazon API Gateway Stage Variables Reference


Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes, which they use for their on-premises server configuration. Which of the following services would be ideal to use in this regard?

  • A. AWS Elastic Beanstalk
  • B. AWS OpsWorks
  • C. AWS CloudFormation
  • D. AWS SQS


Answer – B
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management.
Reference: AWS OpsWorks


Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?

  • A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
  • B. Configure S3 bucket tags with your AWS access keys for the bucket hosting your website so that the application can query them for access.
  • C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
  • D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.


Answer – C
With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP)—such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP—receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be exposed in this way.
Reference: About Web Identity Federation
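In the browser this exchange is handled by the AWS SDK for JavaScript, but the flow can be sketched server-side with boto3 (the role ARN is hypothetical, and the token is whatever the IdP returned after sign-in):

    import boto3

    def dynamodb_client_from_idp_token(id_token):
        # Exchange the IdP token for temporary credentials mapped to an IAM role.
        sts = boto3.client("sts")
        creds = sts.assume_role_with_web_identity(
            RoleArn="arn:aws:iam::123456789012:role/WebIdentityDynamoDBRole",
            RoleSessionName="web-app-session",
            WebIdentityToken=id_token,
        )["Credentials"]
        # Use the temporary credentials to talk to DynamoDB.
        return boto3.client(
            "dynamodb",
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )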


Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?

  • A. Cognito Data
  • B. Cognito Events
  • C. Cognito Streams
  • D. Cognito Callbacks


Answer – C
Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can configure a Kinesis stream to receive events as data is updated and synchronized; Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. The other options are invalid: Cognito Events is for running Lambda functions in response to sync events, and Cognito Data and Cognito Callbacks are not Amazon Cognito features.
Reference: Amazon Cognito Streams


Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? (Choose 2 answers from the options given below.)

  • A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
  • B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
  • C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
  • D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function


Answer: A and C.
AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC.
Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC
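A minimal sketch of attaching an existing function to a VPC (boto3; the function name, subnet IDs, and security group ID are hypothetical):

    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.update_function_configuration(
        FunctionName="my-vpc-function",
        VpcConfig={
            "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
            "SecurityGroupIds": ["sg-0123456789abcdef0"],
        },
    )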


Q52: You’ve currently been tasked to migrate an existing on-premises environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant environments in the Elastic Beanstalk service that would be suitable to host your application. What should you consider doing in this case?

  • A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
  • B. Consider using CloudFormation to deploy your environment to Elastic Beanstalk
  • C. Consider using Packer to create a custom platform
  • D. Consider deploying your application using the Elastic Container Service


Answer – C
Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a custom image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software for which Elastic Beanstalk doesn’t provide a platform out of the box.

Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance.

To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems—Ubuntu, RHEL, or Amazon Linux (see the flavor entry in Platform.yaml File Format for the exact version numbers)—and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings.
Reference: AWS Elastic Beanstalk Custom Platforms

Top

Q53: Company B is writing 10 items to a DynamoDB table every second. Each item is 15.5 KB in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.

  • A. 10
  • B. 160
  • C. 155
  • D. 16


Answer – B.
Write capacity is provisioned in write capacity units (WCUs), where one WCU supports one write per second for an item up to 1 KB. Item sizes are rounded up to the next whole KB, so each 15.5 KB item consumes 16 WCUs. Writing 10 such items per second therefore requires 10 × 16 = 160 WCUs.
Reference: Read/Write Capacity Mode
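The same arithmetic as a quick sketch:

    import math

    item_size_kb = 15.5
    writes_per_second = 10

    # One WCU supports one write per second for an item up to 1 KB;
    # item sizes round up to the next whole KB.
    wcu_per_item = math.ceil(item_size_kb)   # 16
    required_wcu = wcu_per_item * writes_per_second
    print(required_wcu)                      # 160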


Q54: Which AWS service can be used to automatically install your application code onto EC2 instances, on-premises systems, and Lambda?

  • A. CodeCommit
  • B. X-Ray
  • C. CodeBuild
  • D. CodeDeploy


Answer: D

Reference: AWS CodeDeploy



Q55: Which AWS service can be used to compile source code, run tests and package code?

  • A. CodePipeline
  • B. CodeCommit
  • C. CodeBuild
  • D. CodeDeploy


Answer: C.

Reference: AWS CodeBuild



Q56: How can you prevent CloudFormation from deleting your entire stack on failure? (Choose 2)

  • A. Set the Rollback on failure radio button to No in the CloudFormation console
  • B. Set Termination Protection to Enabled in the CloudFormation console
  • C. Use the –disable-rollback flag with the AWS CLI
  • D. Use the –enable-termination-protection protection flag with the AWS CLI

Answer: A. and C.

Reference: Protecting a Stack From Being Deleted
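A sketch of option C via boto3 rather than the CLI (the stack name and template file are hypothetical); the DisableRollback parameter corresponds to the --disable-rollback flag:

    import boto3

    cloudformation = boto3.client("cloudformation")

    with open("template.yaml") as f:
        template_body = f.read()

    # Keep failed resources around for debugging instead of rolling back.
    cloudformation.create_stack(
        StackName="my-debug-stack",
        TemplateBody=template_body,
        DisableRollback=True,
    )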


Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?

  • A. Continuous Integration
  • B. Continuous Deployment
  • C. Continuous Delivery
  • D. Continuous Development

Answer: A.
Notes: Continuous Integration is the practice of frequently merging code changes into a shared repository, where automated builds and tests catch bugs early in the release process.

Q58: When deploying application code to EC2, the AppSpec file can be written in which language?

  • A. JSON
  • B. JSON or YAML
  • C. XML
  • D. YAML

Answer: D.
Notes: AppSpec files for EC2/On-Premises deployments must be written in YAML. (AppSpec files for Lambda and Amazon ECS deployments can be YAML or JSON.)

 

Q59: Part of your CloudFormation deployment fails due to a misconfiguration. By default, what will happen?

  • A. CloudFormation will rollback only the failed components
  • B. CloudFormation will rollback the entire stack
  • C. Failed component will remain available for debugging purposes
  • D. CloudFormation will ask you if you want to continue with the deployment


Answer: B.
Notes: By default, CloudFormation rolls back the entire stack when any part of the deployment fails.

Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?

  • A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
  • B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
  • C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
  • D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.

Answer: C

Reference: Getting Started with Amazon SNS



Q61: Which AWS service can be used to centrally store and version control your application source code, binaries and libraries?

  • A. CodeCommit
  • B. CodeBuild
  • C. CodePipeline
  • D. ElasticFileSystem

Answer: A

Reference: AWS CodeCommit



Q62: You are using CloudFormation to create a new S3 bucket, which of the following sections would you use to define the properties of your bucket?

  • A. Conditions
  • B. Parameters
  • C. Outputs
  • D. Resources

Answer: D

Reference: Resources



Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?

  • A. Transforms
  • B. Outputs
  • C. Resources
  • D. Instances

Answer: C.
The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in Amazon S3.
Reference: Resources


Q64: Which AWS service can be used to fully automate your entire release process?

  • A. CodeDeploy
  • B. CodePipeline
  • C. CodeCommit
  • D. CodeBuild

Answer: B.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.

Reference: AWS CodePipeline



Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?

  • A. Outputs
  • B. Transforms
  • C. Resources
  • D. Exports

Answer: A.
Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack.
Reference: CloudFormation Outputs


Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?

  • A. Inputs
  • B. Resources
  • C. Transforms
  • D. Files

Answer: C.
Transforms is used to reference code located in Amazon S3 and also to specify the use of the AWS Serverless Application Model (SAM) for Lambda deployments.
Reference: Transforms


Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?

  • A. buildspec.yml
  • B. appspec.json
  • C. appspec.yml
  • D. buildspec.json

Answer: C.
Notes: For EC2/On-Premises deployments, CodeDeploy uses a YAML-formatted file named appspec.yml to specify source files and lifecycle hooks.

 

Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?

  • A. Share the code using an EBS volume
  • B. Copy and paste the code into the template each time you need to use it
  • C. Use a CloudFormation nested stack
  • D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.

Answer: C.

Reference: Working with Nested Stacks


Q69: In the CodeDeploy AppSpec file, what are hooks used for?

  • A. To reference AWS resources that will be used during the deployment
  • B. Hooks are reserved for future use
  • C. To specify files you want to copy during the deployment.
  • D. To specify, scripts or function that you want to run at set points in the deployment lifecycle

Answer: D.
The ‘hooks’ section for an EC2/On-Premises deployment contains mappings that link deployment lifecycle event hooks to one or more scripts.

Reference: AppSpec ‘hooks’ Section


Q70: Which command can you use to encrypt a plain text file using a CMK?

  • A. aws kms-encrypt
  • B. aws iam encrypt
  • C. aws kms encrypt
  • D. aws encrypt

Answer: C.
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext fileb://ExamplePlaintextFile --output text --query CiphertextBlob > C:\Temp\ExampleEncryptedFile.base64

Reference: AWS CLI Encrypt


Q72: Which of the following is an encrypted key used by KMS to encrypt your data?

  • A. Customer Managed Key
  • B. Encryption Key
  • C. Envelope Key
  • D. Customer Master Key

Answer: C.
Your data key, also known as the envelope key, is encrypted under the master key; this approach is known as envelope encryption.
Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.

Reference: Envelope Encryption
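A short sketch of envelope encryption with boto3 (the CMK ID is hypothetical): GenerateDataKey returns the data key both in plaintext, for local use, and encrypted under the CMK, for storage:

    import boto3

    kms = boto3.client("kms")

    response = kms.generate_data_key(
        KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",  # hypothetical CMK
        KeySpec="AES_256",
    )
    plaintext_key = response["Plaintext"]       # encrypt data locally, then discard
    encrypted_key = response["CiphertextBlob"]  # store alongside the encrypted data

    # Later, recover the plaintext data key from the stored blob.
    restored = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
    assert restored == plaintext_key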


Q73: Which of the following statements are correct? (Choose 2)

  • A. The Customer Master Key is used to encrypt and decrypt the Envelope Key or Data Key
  • B. The Envelope Key or Data Key is used to encrypt and decrypt plain text files.
  • C. The envelope Key or Data Key is used to encrypt and decrypt the Customer Master Key.
  • D. The Customer Master Key is used to encrypt and decrypt plain text files.

Answer: A. and B.

Reference: AWS Key Management Service Concepts


 
 

Q74: Which of the following statements are correct in relation to KMS? (Choose 2)

  • A. KMS Encryption keys are regional
  • B. You cannot export your customer master key
  • C. You can export your customer master key.
  • D. KMS encryption Keys are global

Answer: A. and B.

Reference: AWS Key Management Service FAQs

Q75:  A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.)
A. Compiled application code
B. Java runtime environment
C. References to the event sources
D. Lambda execution role
E. Application dependencies


Answer: A. E.
Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your code and any dependencies.
Reference: Lambda deployment packages.

Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package?
A. A launch template for the Amazon EC2 Auto Scaling group
B. A CodeDeploy AppSpec file
C. An EC2 role that grants the application access to AWS services
D. An IAM policy that grants the application access to AWS services


Answer: B.
Notes: The CodeDeploy AppSpec (application specification) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file.
Reference: CodeDeploy application specification (AppSpec) files.
Category: Deployment

Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)

A. Create a new Lambda version every time a new code release needs testing.
B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version.
C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT.
D. Create a new Lambda layer every time a new code release needs testing.
E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.


Answer: A. B.
Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes, without affecting the stable production version of the code. By creating separate aliases for Production and Development, systems can initiate the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. The $LATEST version is the newest published version.
Reference: Lambda function versions.

For more information about Lambda layers, see Creating and sharing Lambda layers.

For more information about Lambda function aliases, see Lambda function aliases.

Category: Deployment

Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.)
A. Update event source mappings with the ARN of the Lambda layer.
B. Point a Lambda alias to a new version of the Lambda function.
C. Create a Lambda alias for each published version of the Lambda function.
D. Point a Lambda alias to a new Lambda function alias.
E. Update the event source mappings with the Lambda alias ARN.


Answer: B. E.
Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version.
Reference: Lambda function aliases.
Category: Deployment
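A sketch of the version-plus-alias flow from the last two questions (boto3; the function name is hypothetical). Event source mappings can then reference the alias ARN, which never changes:

    import boto3

    lambda_client = boto3.client("lambda")

    # Publish an immutable version from the current $LATEST code.
    version = lambda_client.publish_version(FunctionName="order-processor")["Version"]

    # Repoint the Production alias at the new version
    # (use create_alias the first time the alias is created).
    lambda_client.update_alias(
        FunctionName="order-processor",
        Name="Production",
        FunctionVersion=version,
    )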

Q78:  A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements?
A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C).
B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket.
C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket.
D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.


Answer: D.
Notes: When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and removes the encryption key from memory.
Reference: Protecting data using server-side encryption with customer-provided encryption keys (SSE-C).

Category: Security
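A sketch of SSE-C with boto3 (the bucket, key, and payload are hypothetical). The same customer-provided key must be supplied on every read, because AWS does not retain it:

    import os

    import boto3

    s3 = boto3.client("s3")

    customer_key = os.urandom(32)  # 256-bit key managed by the company

    s3.put_object(
        Bucket="sensitive-user-data",
        Key="users/123.json",
        Body=b'{"name": "example"}',
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=customer_key,
    )

    # Reading the object back requires the same key again.
    obj = s3.get_object(
        Bucket="sensitive-user-data",
        Key="users/123.json",
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=customer_key,
    )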

Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)

A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS).
B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS).
C. Use generated keys with the DynamoDB Encryption Client.
D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs).
E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).


Answer: A. C.
Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK.
Reference: Direct KMS Materials Provider.
Category: Security

Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.)
A. Create an AWS Lambda authorizer for the API.
B. Create an Amazon Cognito authorizer for the API.
C. Configure the authorizer for the API resource.
D. Configure the API methods to use the authorizer.
E. Configure the authorizer for the API stage.


Answer: B. D.
Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API.
Reference: Control access to a REST API using Amazon Cognito user pools as authorizer.
Category: Security

Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.)
A. Authenticate to the Amazon Cognito identity pool directly.
B. Authenticate to AWS Identity and Access Management (IAM) directly.
C. Authenticate to the Amazon Cognito user pool directly.
D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS).
E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.


Answer: C. E.
Notes: The Amazon Cognito user pool provides direct user authentication. The Amazon Cognito user pool provides a federated authentication option with third-party identity provider (IdP), including amazon.com.
Reference: Adding User Pool Sign-in Through a Third Party.
Category: Security

 
 

Q82: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.)
A. Define an AWS Step Functions task for each Lambda function.
B. Define an AWS Step Functions task for each workflow.
C. Write code that polls the AWS Step Functions invocation to coordinate each workflow.
D. Define an AWS Step Functions state machine for each workflow.
E. Define an AWS Step Functions state machine for each Lambda function.


Answer: A. D.
Notes: Step Functions is based on state machines and tasks. A state machine is a workflow: it expresses a number of states, their relationships, and their input and output. Tasks perform work by coordinating with other AWS services, such as Lambda. You coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine, written in the Amazon States Language.
Reference: Getting Started with AWS Step Functions.

Category: Development
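A minimal sketch of one workflow as a state machine (boto3; the Lambda and IAM role ARNs are hypothetical), with one Task state per Lambda function:

    import json

    import boto3

    sfn = boto3.client("stepfunctions")

    definition = {
        "StartAt": "ValidateOrder",
        "States": {
            "ValidateOrder": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
                "Next": "ChargePayment",
            },
            "ChargePayment": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-payment",
                "End": True,
            },
        },
    }

    sfn.create_state_machine(
        name="OrderProcessingWorkflow",
        definition=json.dumps(definition),
        roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
    )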

Q83: A company is migrating a web service to the AWS Cloud. The web service accepts requests by using HTTP (port 80). The company wants to use an AWS Lambda function to process HTTP requests. Which application design will satisfy these requirements?
A. Create an Amazon API Gateway API. Configure proxy integration with the Lambda function.
B. Create an Amazon API Gateway API. Configure non-proxy integration with the Lambda function.
C. Configure the Lambda function to listen to inbound network connections on port 80.
D. Configure the Lambda function as a target in the Application Load Balancer target group.


Answer: D.
Notes: Elastic Load Balancing supports Lambda functions as a target for an Application Load Balancer. You can use load balancer rules to route HTTP requests to a function, based on the path or the header values. Then, process the request and return an HTTP response from your Lambda function.
Reference: Using AWS Lambda with an Application Load Balancer.
Category: Development

Q84: A company is developing an image processing application. When an image is uploaded to an Amazon S3 bucket, a number of independent and separate services must be invoked to process the image. The services do not have to be available immediately, but they must process every image. Which application design satisfies these requirements?
A. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Each service pulls the message from the same queue.
B. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Each service subscribes to the same topic.
C. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe a separate Amazon Simple Notification Service (Amazon SNS) topic for each service to an Amazon SQS queue.
D. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe a separate Simple Queue Service (Amazon SQS) queue for each service to the Amazon SNS topic.


Answer: D.
Notes: Each service can subscribe to an individual Amazon SQS queue, which receives an event notification from the Amazon SNS topic. This is a fanout architectural implementation.
Reference: Common Amazon SNS scenarios.
Category: Development
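A sketch of the fanout wiring (boto3; the topic, queue, and service names are hypothetical, and the SQS access policy that allows SNS to deliver messages is omitted for brevity):

    import boto3

    sns = boto3.client("sns")
    sqs = boto3.client("sqs")

    topic_arn = sns.create_topic(Name="image-uploaded")["TopicArn"]

    # One queue per downstream service; each subscribes to the same topic,
    # so every service receives every S3 event notification.
    for service in ["thumbnailer", "metadata-extractor", "moderation"]:
        queue_url = sqs.create_queue(QueueName=service + "-queue")["QueueUrl"]
        queue_arn = sqs.get_queue_attributes(
            QueueUrl=queue_url, AttributeNames=["QueueArn"]
        )["Attributes"]["QueueArn"]
        sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)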

Q85: A developer wants to implement Amazon EC2 Auto Scaling for a Multi-AZ web application. However, the developer is concerned that user sessions will be lost during scale-in events. How can the developer store the session state and share it across the EC2 instances?
A. Write the sessions to an Amazon Kinesis data stream. Configure the application to poll the stream.
B. Publish the sessions to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe each instance in the group to the topic.
C. Store the sessions in an Amazon ElastiCache for Memcached cluster. Configure the application to use the Memcached API.
D. Write the sessions to an Amazon Elastic Block Store (Amazon EBS) volume. Mount the volume to each instance in the group.


Answer: C.
Notes: ElastiCache for Memcached is a distributed in-memory data store or cache environment in the cloud. It will meet the developer’s requirement of persistent storage and is fast to access.
Reference: What is Amazon ElastiCache for Memcached?

Category: Development
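A sketch of shared session storage using the third-party pymemcache library as one option (the ElastiCache endpoint is hypothetical):

    import json

    from pymemcache.client.base import Client

    # Hypothetical ElastiCache for Memcached configuration endpoint.
    cache = Client(("my-sessions.abc123.cfg.use1.cache.amazonaws.com", 11211))

    def put_session(session_id, data, ttl_seconds=3600):
        cache.set(session_id, json.dumps(data), expire=ttl_seconds)

    def get_session(session_id):
        raw = cache.get(session_id)
        return json.loads(raw) if raw else None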

 
 
 

Q86: A developer is integrating a legacy web application that runs on a fleet of Amazon EC2 instances with an Amazon DynamoDB table. There is no AWS SDK for the programming language that was used to implement the web application. Which combination of steps should the developer perform to make an API call to Amazon DynamoDB from the instances? (Select TWO.)
A. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include an XML document that contains the request attributes.
B. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include a JSON document that contains the request attributes.
C. Sign the requests by using AWS access keys and Signature Version 4.
D. Use an EC2 SSH key to calculate Signature Version 4 of the request.
E. Provide the signature value through the HTTP X-API-Key header.


Answer: B. C.
Notes: The HTTPS-based low-level AWS API for DynamoDB uses JSON as a wire protocol format. When you send HTTP requests to AWS, you sign the requests so that AWS can identify who sent them. Requests are signed with your AWS access key, which consists of an access key ID and secret access key. AWS supports two signature versions: Signature Version 4 and Signature Version 2. AWS recommends the use of Signature Version 4.
Reference: Signing AWS API requests.
Category: Development
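A sketch of a signed low-level call using botocore's signing helpers (the table and key are hypothetical; in a language without an AWS SDK you would implement Signature Version 4 yourself following the same steps):

    import boto3
    import requests  # any HTTP client works once the headers are signed
    from botocore.auth import SigV4Auth
    from botocore.awsrequest import AWSRequest

    region = "us-east-1"
    endpoint = "https://dynamodb." + region + ".amazonaws.com/"
    body = '{"TableName": "Orders", "Key": {"pk": {"S": "order-1"}}}'

    request = AWSRequest(
        method="POST",
        url=endpoint,
        data=body,
        headers={
            "Content-Type": "application/x-amz-json-1.0",
            "X-Amz-Target": "DynamoDB_20120810.GetItem",
        },
    )
    credentials = boto3.Session().get_credentials()
    SigV4Auth(credentials, "dynamodb", region).add_auth(request)

    response = requests.post(endpoint, headers=dict(request.headers), data=body)
    print(response.json())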

Q87: A developer has written several custom applications that read and write to the same Amazon DynamoDB table. Each time the data in the DynamoDB table is modified, this change should be sent to an external API. Which combination of steps should the developer perform to accomplish this task? (Select TWO.)
A. Configure an AWS Lambda function to poll the stream and call the external API.
B. Configure an event in Amazon EventBridge (Amazon CloudWatch Events) that publishes the change to an Amazon Managed Streaming for Apache Kafka (Amazon MSK) data stream.
C. Create a trigger in the DynamoDB table to publish the change to an Amazon Kinesis data stream.
D. Deliver the stream to an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the API to the topic.
E. Enable DynamoDB Streams on the table.


Answer: A. E.
Notes: If you enable DynamoDB Streams on a table, you can associate the stream Amazon Resource Name (ARN) with an Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table’s stream. Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. You can enable DynamoDB Streams on a table to create an event that invokes an AWS Lambda function.
Reference: Tutorial: Process New Items with DynamoDB Streams and Lambda.
Category: Monitoring
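A sketch of the Lambda function from option A (the external API URL is hypothetical); Lambda delivers a batch of stream records in the event parameter:

    import json
    import urllib.request

    EXTERNAL_API_URL = "https://api.example.com/changes"  # hypothetical

    def lambda_handler(event, context):
        for record in event["Records"]:
            change = {
                "event": record["eventName"],  # INSERT | MODIFY | REMOVE
                "keys": record["dynamodb"]["Keys"],
                "new_image": record["dynamodb"].get("NewImage"),
            }
            req = urllib.request.Request(
                EXTERNAL_API_URL,
                data=json.dumps(change).encode(),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)
        return {"processed": len(event["Records"])}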

 
 
 

Q88: A company is migrating the create, read, update, and delete (CRUD) functionality of an existing Java web application to AWS Lambda. Which minimal code refactoring is necessary for the CRUD operations to run in the Lambda function?
A. Implement a Lambda handler function.
B. Import an AWS X-Ray package.
C. Rewrite the application code in Python.
D. Add a reference to the Lambda execution role.


Answer: A.
Notes: Every Lambda function needs a Lambda-specific handler. Specifics of authoring vary between runtimes, but all runtimes share a common programming model that defines the interface between your code and the runtime code. You tell the runtime which method to run by defining a handler in the function configuration. The runtime runs that method. Next, the runtime passes in objects to the handler that contain the invocation event and context, such as the function name and request ID.
Reference: Getting started with Lambda.
Category: Refactoring


Q89: A company plans to use AWS log monitoring services to monitor an application that runs on premises. Currently, the application runs on a recent version of Ubuntu Server and outputs the logs to a local file. Which combination of steps should a developer perform to accomplish this goal? (Select TWO.)
A. Update the application code to include calls to the agent API for log collection.
B. Install the Amazon Elastic Container Service (Amazon ECS) container agent on the server.
C. Install the unified Amazon CloudWatch agent on the server.
D. Configure the long-term AWS credentials on the server to enable log collection by the agent.
E. Attach an IAM role to the server to enable log collection by the agent.


Answer: C. D.
Notes: The unified CloudWatch agent needs to be installed on the server. Ubuntu Server 18.04 is one of the many supported operating systems. When you install the unified CloudWatch agent on an on-premises server, you will specify a named profile that contains the credentials of the IAM user.
Reference: Collecting metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent.
Category: Monitoring

Q90: A developer wants to monitor invocations of an AWS Lambda function by using Amazon CloudWatch Logs. The developer added a number of print statements to the function code that write the logging information to the stdout stream. After running the function, the developer does not see any log data being generated. Why does the log data NOT appear in the CloudWatch logs?
A. The log data is not written to the stderr stream.
B. Lambda function logging is not automatically enabled.
C. The execution role for the Lambda function did not grant permissions to write log data to CloudWatch Logs.
D. The Lambda function outputs the logs to an Amazon S3 bucket.


Answer: C.
Notes: The function needs permission to call CloudWatch Logs. Update the execution role to grant the permission. You can use the managed policy of AWSLambdaBasicExecutionRole.
Reference: Troubleshoot execution issues in Lambda.
Category: Monitoring

Q91: Which of the following are best practices you should implement into ongoing deployments of your application? (Select THREE.)

A. Use stage variables to manage secrets across environments
B. Create account-specific AWS SAM templates for each environment
C. Use an AutoPublish alias
D. Use traffic shifting with pre- and post-deployment hooks
E. Test throughout the pipeline


Answer: C. D. E.
Notes: Use an AutoPublish alias, Use traffic shifting with pre- and post-deployment hooks, Test throughout the pipeline
Reference: https://enoumen.com/2019/06/23/aws-solution-architect-associate-exam-prep-facts-and-summaries-questions-and-answers-dump/

Q92: You are handing off maintenance of your new serverless application to an incoming team lead. Which recommendations would you make? (Select THREE.)

A. Keep up to date with the quotas and payload sizes for each AWS service you are using

B. Analyze production access patterns to identify potential improvements

C. Design your services to extend their life as long as possible

D. Minimize changes to your production application

E. Compare the value of using the latest first-class integrations versus using Lambda between AWS services


Answer: A. B. D.

Notes: Keep up to date with the quotas and payload sizes for each AWS service you are using. Analyze production access patterns to identify potential improvements. Minimize changes to your production application.
Reference: https://enoumen.com/2019/06/23/aws-solution-architect-associate-exam-prep-facts-and-summaries-questions-and-answers-dump/


Q94: Your application needs to connect to an Amazon RDS instance on the backend. What is the best recommendation to the developer whose function must read from and write to the Amazon RDS instance?

A. Initialize the number of connections you want outside of the handler

B. Use the database TTL setting to clean up connections

C. Use reserved concurrency to limit the number of concurrent functions that would try to write to the database

D. Use the database proxy feature to provide connection pooling for the functions


Answer: D.
Notes: Use the database proxy feature to provide connection pooling for the functions

Reference: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/automating-updates-to-serverless-apps.html
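A sketch of a function using an RDS Proxy endpoint (the endpoint and database name are hypothetical, as is the choice of the third-party pymysql driver); the connection is created outside the handler so warm invocations reuse it, and the proxy pools connections across concurrent function instances:

    import os

    import pymysql  # shipped in the deployment package or a Lambda layer

    connection = pymysql.connect(
        host="my-app-proxy.proxy-abc123.us-east-1.rds.amazonaws.com",
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database="orders",
    )

    def lambda_handler(event, context):
        with connection.cursor() as cursor:
            cursor.execute("SELECT COUNT(*) FROM orders")
            (count,) = cursor.fetchone()
        return {"order_count": count}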

 


Q95: A developer reports that a third-party library they need cannot be shared in the Lambda invocation environment. Which suggestion would you make?

A. Decrease the deployment package size

B. Set a provisioned concurrency of one so that the library doesn’t need to be shared across environments

C. Use reserved concurrency for the function that needs to use the library

D. Load the third-party library onto an Amazon EFS volume


Answer: D
Notes: Load the third-party library onto an Amazon EFS volume

Reference: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/automating-updates-to-serverless-apps.html

AWS Certified Developer Associate exam: Whitepapers

AWS has provided whitepapers to help you understand the technical concepts. Below are the recommended whitepapers for the AWS Certified Developer – Associate Exam.


Online Training and Labs for AWS Certified Developer Associates Exam


AWS Developer Associates Jobs


AWS Certified Developer-Associate Exam info and details, How To:


The AWS Certified Developer Associate exam is a multiple choice, multiple answer exam. Here is the Exam Overview:

  • Certification Name: AWS Certified Developer Associate.
  • Prerequisites for the Exam: None.
  • Exam Pattern: Multiple Choice Questions
  • The AWS Certified Developer-Associate Examination (DVA-C01) is a pass or fail exam. The examination is scored against a minimum standard established by AWS professionals guided by certification industry best practices and guidelines.
  • Your results for the examination are reported as a score from 100 – 1000, with a minimum passing score of 720.
  • Exam fees: US $150
  • Exam Guide on AWS Website
  • Available languages for tests: English, Japanese, Korean, Simplified Chinese
  • Read AWS whitepapers
  • Register for certification account here.
  • Prepare for Certification Here
  • Exam Content Outline

    Domain 1: Deployment (22%)
    1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
    1.2 Deploy applications using Elastic Beanstalk.
    1.3 Prepare the application deployment package to be deployed to AWS.
    1.4 Deploy serverless applications.
    Domain 2: Security (26%)
    2.1 Make authenticated calls to AWS services.
    2.2 Implement encryption using AWS services.
    2.3 Implement application authentication and authorization.
    Domain 3: Development with AWS Services (30%)
    3.1 Write code for serverless applications.
    3.2 Translate functional requirements into application design.
    3.3 Implement application design into application code.
    3.4 Write code that interacts with AWS services by using APIs, SDKs, and AWS CLI.
    Domain 4: Refactoring (10%)
    4.1 Optimize applications to best use AWS services and features.
    4.2 Migrate existing application code to run on AWS.
    Domain 5: Monitoring and Troubleshooting (10%)
    5.1 Write code that can be monitored.
    5.2 Perform root cause analysis on faults found in testing or production.
    TOTAL: 100%


AWS Certified Developer Associate exam: Additional Information for reference

Below are some useful reference links that would help you to learn about AWS Certified Developer Associate Exam.


Other Relevant and Recommended AWS Certifications

AWS Certification Exams Roadmap


Other AWS Facts and Summaries and Questions/Answers Dump



 

In this AWS tutorial, we discuss how to make the best use of AWS services to build a highly scalable and fault-tolerant configuration of EC2 instances. The use of load balancers and Auto Scaling groups supports a number of AWS best practices, including performance efficiency, reliability, and high availability.

Before we dive into this hands-on tutorial on how exactly we can build this solution, let’s have a brief recap on what an Auto Scaling group is, and what a Load balancer is.

Auto Scaling group (ASG)

An Auto Scaling group (ASG) is a logical grouping of instances that can scale out and scale in depending on pre-configured settings. By setting scaling policies for your ASG, you can choose how many EC2 instances are launched and terminated based on your application’s load. You can do this with manual, dynamic, scheduled or predictive scaling.

Elastic Load Balancer (ELB)

An Elastic Load Balancer (ELB) is a name describing a number of services within AWS designed to distribute traffic across multiple EC2 instances in order to provide enhanced scalability, availability, security and more. The particular type of Load Balancer we will be using today is an Application Load Balancer (ALB). The ALB is a Layer 7 Load Balancer designed to distribute HTTP/HTTPS traffic across multiple nodes – with added features such as TLS termination, Sticky Sessions and Complex routing configurations.

Getting Started

First of all, we open our AWS management console and head to the EC2 management console.

We scroll down on the left-hand side and select ‘Launch Templates’. A Launch Template is a configuration template which defines the settings for EC2 instances launched by the ASG.

Under Launch Templates, we will select “Create launch template”.

We specify the name ‘MyTestTemplate’ and use the same text in the description.

Under the ‘Auto Scaling guidance’ box, tick the box which says ‘Provide guidance to help me set up a template that I can use with EC2 Auto Scaling’ and scroll down to launch template contents.

When it comes to choosing our AMI (Amazon Machine Image) we can choose the Amazon Linux 2 under ‘Quick Start’.

The Amazon Linux 2 AMI is free tier eligible, and easy to use for our demonstration purposes.

Next, we select the ‘t2.micro’ under instance types, as this is also free tier eligible.

Under Network Settings, we create a new Security Group called ExampleSG in our default VPC, allowing HTTP access from everyone.
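The same launch template can be sketched with boto3 instead of the console (the AMI and security group IDs are placeholders to look up in your own account):

    import boto3

    ec2 = boto3.client("ec2")

    ec2.create_launch_template(
        LaunchTemplateName="MyTestTemplate",
        VersionDescription="MyTestTemplate",
        LaunchTemplateData={
            "ImageId": "ami-0abcdef1234567890",  # Amazon Linux 2 AMI in your Region
            "InstanceType": "t2.micro",
            "SecurityGroupIds": ["sg-0123456789abcdef0"],  # ExampleSG
        },
    )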



Other AWS Facts and Summaries and Questions/Answers Dump

AWS Developer Associate DVA-C01 Exam Prep

 
 
 

AWS Certifications Breaking News and Top Stories

  • Difference between Docker and Amazon Elastic Container Registry (ECR): Is Docker superior?
    by /u/DigitalSplendid

    Created a new container on AWS Lightsail. Need to understand if there are two ways to deploy a container through their menu options: Deployments or Images? Are Docker and ECR competing products? Is Docker superior to ECR? The reason is it appears AWS LIghtsail itself has made apparently mandatory to use Docker if images option chosen. Also when Deployments menu clicked: "Enter the image reference from a public registry, such as DockerHub." So they are mentioning DockerHub and not their own ECR! submitted by /u/DigitalSplendid [link] [comments]

  • Does this mean I pass the AWS ML Specialty?
    by /u/Soyun12

    This is the email I got https://preview.redd.it/4696fjpuqbyc1.jpg?width=750&format=pjpg&auto=webp&s=b9e8d66906ba7093e358557896812271224e21b1 submitted by /u/Soyun12 [link] [comments]

  • AWS CCP for GRC professionals?
    by /u/Head_Astronaut_2442

    I have 6 years of experience in cyber risk & governance and am a CISSP holder. I’m looking to develop fundamental cloud literacy and demonstrate to future employees that I have understanding of the cloud / generally pad my resume. Is the CCP an appropriate certificate? submitted by /u/Head_Astronaut_2442 [link] [comments]

  • Who to contact for support? Is there is a support phone number I can call?
    by /u/qqpp_ddbb

    So I've been throttled for over 24 hours now in amazon bedrock and I put in a support ticket, to no avail. Is there a phone number I can call to speak with support? It seems ridiculous that I'm being throttled/rate limited this hard for such a low number of requests. Here's some metrics. Let me know if more info is needed. I'm new to this stuff, sorry.. https://i.postimg.cc/GpQP5mrk/Metrics-Cloud-Watch-us-west-2.png Thanks in advance submitted by /u/qqpp_ddbb [link] [comments]

  • Need help reconnecting to AWS VM for assignment.
    by /u/clashofcam

    Hello, I've been troubleshooting for a while. I edited my Network Settings in my AWS windows 2019 VM. I believe I messed up the DNS. As soon as I clicked "OK" I got kicked off the virtual machine. I've tried tinkering with different things in the console but I cannot figure out how to change it to auto assign the DNS so I can get back on the Virtual machine. Any help is appreciated. Thank you! submitted by /u/clashofcam [link] [comments]

  • S3 Cross Account Access
    by /u/MrStu56

    Hello all, I'm hoping someone can help me here with an issue I'm battling with when it comes to cross account S3 access. I can't quite get my head around the permissions required, or perhaps more accurately where those permissions should be applied, and in which account. Here's the scenario: I have created a bucket in account A called test-100-bucket. In that bucket are the folders: "working", "ClientAccess" and "store". I'm looking to grant: List, Put, Get, Delete (i.e full read/write) on "working" and "ClientAccess" to a group of users in Account B called "AccountB/Wranglers" I'm looking to grant List, Put, Get, Delete (ie. full read/write) on "ClientAccess" (and any subfolders /*) to a specific user in AccountA (Lets call them AccountA/Client). AccountA/Client should not be able to see any of the other folders. I'm looking to restrict access to "store", so that AccountB/Wranglers can put, list, get but should not be able to delete anything. As the bucket owner I should be able to destroy the bucket and all the objects when required. Can someone help me with: What permissions are needed for the AccountB/Wranglers (and where those permissions are applied). Do I need an account or group in AccountA to handle this? What bucket policy(?) I need to apply to restrict the ClientAccess folder. What bucket or user policy (or both?) is needed for the "store" folder. Hope you can help, I've asked AmazonQ, ChatGPT and done a few youtube vids as well as the documentation but still can't seem to get it to work. If anyone has a good resource, (in crayons at this point) I'm happy to take a look. submitted by /u/MrStu56 [link] [comments]

  • Bedrock Agents with Guardrails
    by /u/ProfessorHuman

    Has anyone used guardrails with agents? I don’t see a way to associate a guardrail with an agent. Either in the api documentation or in the console. I see you can specify a guardrail in the invoke_model method of boto3 but that’s not with an agent. Docs seem to suggest it’s possible. But I see reference anywhere to how. submitted by /u/ProfessorHuman [link] [comments]

  • boto3 not finding (IAM) credentials when running docker container on ECS task, but does work locally... why not?
    by /u/that_dogs_wilin

    Hi, I'm new to setting things up on AWS so apologies in advance for dumb questions or if I use the wrong terminology (please correct me!). Here's what I'm trying to do: I want to create a docker container that will install necessary packages, and then run a python script. The python script will run for a bit and then write some output file, my_cool_dict.json. Then, I want to upload this file to my S3 bucket, and the task is done. Here's what I've done so far: I followed some AWS documentation. For setting up the AWS CLI, I did their recommended best practice of configuring AWS CLI to use the IAM identity center, here. I did the recommended SSO token provider config, and I know it works because I can locally push stuff to my S3 bucket, etc. I created an S3 bucket, and a ECR repository for my docker image. In IAM, I created a role, ecs_task_with_logging. To that role, I attached the policies AmazonECSTaskExecutionRolePolicy (so it can run an ECS task) and AmazonS3FullAccess (so it can upload to my S3 bucket). Locally, I built a docker image and pushed it to my ECR repo. In ECS, I created a cluster, then created a task definition that references the correct image URI, and the executionRoleArn of the role above. I also set up logging with python's logging lib / made a log group with CloudWatch / added the logging stuff to the task definition. To run the task, I go to the cluster, go to the Tasks tab, and use the default options, except for selecting "Task" under "Deployment configuration", and selecting the task definition. So I know all of that works, because the following successfully works: if my python script only does logging.info("Script finished running!"), I see it in CloudWatch! Therefore, I know the process of building the image, pushing it to ECR, creating the task definition, and running it on my ECS cluster must all work. I added s3 = boto3.client("s3") and s3.upload_file() to my python script. If I simply run the python script locally, this works (the file appears in the bucket). So I know the boto3 code is fine. If I do aws s3 cp my_cool_dict.json s3://my-bucket/my_cool_dict.json just in the terminal, it works. The step I'm struggling with is uploading the file from my container to my S3 bucket. If I try running the image even locally (which should just be running the script that just worked), I get the error botocore.exceptions.NoCredentialsError: Unable to locate credentials. If instead of using boto3, I put the aws s3 cp ... command in the Dockerfile, I get upload failed: ./my_cool_dict.json to s3://my-bucket/my_cool_dict.json Unable to locate credentials. Similarly, when I run the image (with either of those methods) on ECS, I get the same errors. Can anyone point me to something I could try to figure out why it can't find the credentials? submitted by /u/that_dogs_wilin [link] [comments]

  • Cost alert for the unknown account. Help wanted.
    by /u/moondaddi80

    On 30 April, I suddenly received a COST ALERT email. --- quoted start --- AWS Budget Notification April 30, 2024 AWS Account ************ Dear AWS Customer, You requested that we alert you when the actual cost associated with your My Monthly Cost Budget budget exceeds $4.25 for the current month. The month actual cost associated with this budget is $4.27. You can find additional details below and by accessing the AWS Budgets dashboard. Budget Name Budget Type Budgeted Amount Alert Type Alert Threshold ACTUAL Amount My Monthly Cost Budget Cost $5.00 ACTUAL > $4.25 $4.27 --- quoted end --- And today I got another email. --- quoted start --- AWS Budget Notification May 03, 2024 AWS Account ************ Dear AWS Customer, You requested that we alert you when the forecasted cost associated with your My Monthly Cost Budget budget exceeds $5.00 for the current month. The month forecasted cost associated with this budget is $10.04. You can find additional details below and by accessing the AWS Budgets dashboard. Budget Name Budget Type Budgeted Amount Alert Type Alert Threshold FORECASTED Amount My Monthly Cost Budget Cost $5.00 FORECASTED > $5.00 $10.04 --- quoted end --- I contacted AWS CS and they told me that if I don't know my sign-in credentials, they can't discuss anything related to the account further. So they asked me to give them the TRANSACTION ID, CREDIT CARD NO., etc. that was charged. But I couldn't give them transaction information yet, because I got an email but the charge hasn't happened on my credit card yet. The email I received the COST ALERT from is the account I had when I used AWS a few years ago, but I signed out and deleted the account after the 1 yr free tier ended, so now when I try to log in to AWS with this email, it gives me an error saying the account doesn't exist. What I'm wondering now are these things: From the cost alert email, it seems like the cost budget is $5, but when I look at the second cost alert, it shows $10 as the forecasted amount. Is this a situation where the cost will continue to increase until the end of the month because it exceeds the budget? Or will the cost only be incurred up to the budgeted amount of $5? In the body of the email, there is a 12-digit AWS account number. Is there any way I can log in with this? Luckily, I found the password for that AWS account in the password management application I use. submitted by /u/moondaddi80 [link] [comments]

  • SES Email Sending Request Refused
    by /u/barcodez

    I've used AWS for years, almost since the beginning, at huge and large companies. Love it and always thought it was great for innovation and small companies, as it really made things super easy for people to try out an idea without doing any heavy investment. So naturally, when I wanted to try out an idea, I picked AWS. I won't go into too much detail, but the idea involves sending some email for registration (standard stuff) and some notifications on account status, so I've configured all my settings and written some prototype code using the sandbox, and naturally wanted to try things with some external email accounts (belonging to me at first, then some friends, and then maybe a few referrals if things look good). Thus I requested production access. I sent a quick note explaining briefly what I was doing and 24hrs later got a note saying we need more info. Fair enough; they need to protect their email servers' reputations. I wrote a pretty long explanation covering off all the points they were interested in but did explain it was a prototype, so (a) I don't have examples of templates, it will just be short ASCII messages for now, (b) I'd only need to send a few emails whilst I get started, (c) bounces, complaints and unsubscribes would be handled, etc. I was refused; moreover, I was told in no uncertain terms I was considered a threat to other users of the service, and for security reasons they won't tell me why. I'm a bit annoyed if I'm honest, because I don't want to have to build out everything to answer some questions and provide templates when I'm really just fleshing out an idea, and sending <100 emails a month if that. Glad I've found out sooner rather than later. I'm considering just installing a mail server on EC2 with an Elastic IP, but having managed mail servers before, I know it's a time sink for very little reward, and I'm concerned they might take issue with this. I appreciate email is the wild west, but this level of gatekeeping around SMTP is a bit anti-innovation in my view. Should I just get on building the prototype and send mail some other way in the hope that once I have more answers they will enable me, or is it too risky? Anyone else been refused at the early stages and got accepted later? submitted by /u/barcodez [link] [comments]

  • Protection for unprotected port being probed in AWS
    by /u/lowkib

    Hey guys, so I have a scenario I would like some help with to get some ideas. AWS GuardDuty has alerted us that one of our unprotected ports is being probed from malicious IPs located in Russia and Poland. The port has been probed over 300 times this month. Although I know public-facing ports tend to get probed and this is relatively normal, the open port is related to our PAM solution, which significantly increases the risk, as this is the protection for our infrastructure and essentially our whole company. It has already been confirmed that the port cannot be closed, as it needs to be open for the PAM solution to work effectively. So now the question is how do we protect this instance while keeping the port open? Does anyone have any ideas? The ideas I've thought of are: Using some sort of VPN and whitelisting only known IPs to access our infrastructure. Using a WAF on the instance. This looks like it would be significant work and change to our infrastructure, as the instance in question has a network load balancer attached to it and you cannot directly add WAFs to instances with network load balancers. I believe it's possible, but we would have to re-design a lot. So my question is, what would you guys do in this situation? Any suggestions? Appreciate any help. submitted by /u/lowkib [link] [comments]
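    For what it's worth: if the Network Load Balancer preserves client source IPs (the default for instance-ID targets), the lightest-weight control is an allow-list on the instance's security group rather than a WAF. A sketch with placeholder group ID, port, and CIDRs; the revoke call assumes an existing 0.0.0.0/0 rule and will error if none exists:

        import boto3

        ec2 = boto3.client("ec2")

        SG_ID, PORT = "sg-0123456789abcdef0", 443          # placeholders
        ALLOWED = ["203.0.113.0/24", "198.51.100.7/32"]    # office/VPN egress IPs

        # Drop the wide-open rule first.
        ec2.revoke_security_group_ingress(
            GroupId=SG_ID,
            IpPermissions=[{"IpProtocol": "tcp", "FromPort": PORT, "ToPort": PORT,
                            "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
        )
        # Re-open only to the known networks.
        ec2.authorize_security_group_ingress(
            GroupId=SG_ID,
            IpPermissions=[{"IpProtocol": "tcp", "FromPort": PORT, "ToPort": PORT,
                            "IpRanges": [{"CidrIp": c, "Description": "PAM allow-list"}
                                         for c in ALLOWED]}],
        )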

  • CDK vs terraform
    by /u/LaserBoy9000

    I’ve never used Terraform before but understand that it’s the original scalable solution to the IaC problem. I have however used CDK quite often over the last year; I found that getting up to speed with TS was painful at first, but that type constraints were ultimately really helpful when debugging issues. Anyway, I’m curious what the community’s thoughts are on these tools. The obvious point in TF's favor is that with some tweaks, GCP, Azure etc. could be swapped out for AWS and vice versa. But I’d imagine that CDK gives you the most granular control over AWS resources and the ability to leverage new AWS features quickly. Thoughts? submitted by /u/LaserBoy9000 [link] [comments]

  • Evaluating the Impact of CloudFront on S3 Transfer Acceleration for Infrequent File Access in Private Cloud Storage
    by /u/yccheok

    My use case involves providing private cloud storage to individual users. Therefore, the same file will only be accessed by the same user, once every few days or even months. I am currently using S3 Transfer Acceleration to improve upload and download speeds. Here is my code snippet: s3_client = boto3.client( 's3', config=Config(s3={'use_accelerate_endpoint': True}) ) s3_client.generate_presigned_url In such a case, will CloudFront still be able to contribute to improving download and upload speeds? It seems to me that caching might not play a significant role here if a file is accessed only once throughout a day. submitted by /u/yccheok [link] [comments]
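    Caching indeed buys little when each file is fetched about once a day, so a CloudFront distribution would mostly act as a faster network path rather than a cache; Transfer Acceleration already provides that path for both uploads and presigned downloads. For completeness, here is the fragment above completed into a runnable sketch (bucket and key are placeholders):

        import boto3
        from botocore.config import Config

        # Presigned GET that points at the S3 Transfer Acceleration endpoint.
        s3_client = boto3.client(
            "s3",
            config=Config(s3={"use_accelerate_endpoint": True}),
        )
        url = s3_client.generate_presigned_url(
            "get_object",
            Params={"Bucket": "my-private-storage", "Key": "users/123/backup.bin"},
            ExpiresIn=3600,  # seconds
        )
        print(url)  # https://my-private-storage.s3-accelerate.amazonaws.com/...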

  • API data formats for internal Amazon services...
    by /u/crimson117

    Amazon famously mandated all systems must communicate over APIs two decades ago. How did this play out in terms of API request/response data structures? Was it all over the place? Was it a common set of fields/properties that each api conformed to? submitted by /u/crimson117 [link] [comments]

  • My personal website, hosted on AWS for years with no charges (minimal traffic), suddenly getting billed $8/month
    by /u/not_listed

    For years I've been hosting my personal website on AWS (simple S3) with like max 30 visits per month, so it never amounted to any billing. A few months ago I finally set up HTTPS support on my AWS account (by adding a CloudFront distribution). No change in traffic, but now I have been getting billed $8 a month. The charges cited are: AWS WAF Global-RuleV2 ($3), AWS WAF Global-WebACLV2 ($5). Can I avoid these charges somehow? If it makes a difference: I'm almost never in AWS, this personal website was just a set-it-and-forget-it thing. NameSilo is my domain manager. submitted by /u/not_listed [link] [comments]

  • A couple of noob questions about AMI choice. How risky is it to choose community AMIs? How relevant is the "Verified Provider" green seal? What is the pricing for Community AMIs?
    by /u/Mykoliux-1

    Hello. I am new to AWS and I wanted to launch an EC2 instance to host my hobby project. I chose to use Alpine Linux for this and the smallest EC2 size available (either t3.nano or t4g.nano). I started to look for an appropriate Amazon Machine Image (AMI), and in the marketplace I found "Alpine Linux on AWS", but it costs 0.006 USD/hour (4.32 USD/month). But I also saw some free alternatives in the "Community AMIs" section with the "Verified Provider" seal. I was curious how risky it is to use community AMIs compared to Marketplace AMIs? Is it safe to use AMIs with the "Verified Provider" seal from the Community section? Are all "Community AMIs" free? After selecting the one I need, I can't check the price anywhere; it just shows certain info (published date, architecture, etc.). submitted by /u/Mykoliux-1 [link] [comments]

  • one custom domain name to rule them all
    by /u/ManicQin

    Hi! We have multiple API gateways for internal services that we supply to our teams. Until now we had a few, and it was still easy to use AWS's API Gateway URL. Our API gateways are deployed as stacks with their Lambdas and needed resources. Team A deploys their service and team B deploys theirs. I want, with minimum interference to the teams, to provide a joint API that will redirect the requests to the correct APIs. Let's say that if team A's service URL is https://AAAAAA.execute-api.us-east-1.amazonaws.com/hello and B's service URL is https://AAAAAA.execute-api.us-east-1.amazonaws.com/bye I want a new API gateway: https://www.example.com/A_services/hello https://www.example.com/B_services/bye Can I use API Gateway to redirect a call to another? (Is that what happens when choosing HTTP integration on method creation?) Can I use it with private REST APIs? Also, I want a simple service discovery for the teams. How can I implement it? I thought about using boto3 to query API Gateway and retrieve all resources. I can run it from a Lambda and retrieve the information from the API: https://www.example.com/discovery Is there a better way? submitted by /u/ManicQin [link] [comments]
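    One hedged suggestion that avoids an extra proxy hop: API Gateway custom domain names support base path mappings, which route a path prefix on one domain to a specific API and stage. A sketch for REST APIs, assuming the domain and its ACM certificate already exist (API IDs and stage names are placeholders); custom domain support for private REST APIs has its own constraints, so check the current docs for that case:

        import boto3

        apigw = boto3.client("apigateway")  # REST API control plane

        for base_path, rest_api_id in [("A_services", "aaaa11"), ("B_services", "bbbb22")]:
            apigw.create_base_path_mapping(
                domainName="www.example.com",
                basePath=base_path,   # -> https://www.example.com/A_services/...
                restApiId=rest_api_id,
                stage="prod",
            )

    For simple service discovery, the same client can enumerate the mappings (get_base_path_mappings) behind a small Lambda serving /discovery, much as the post suggests.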

  • WebSocket doesn't work on EC2 instance (Amazon Linux)
    by /u/Temporary_Ask_0

    I have created a server that uses a WebSocket for a chat screen. When I connect my app to my local server, the WebSockets work perfectly: new sent messages show immediately, etc. But when I connect my app to my EC2 server, where I have git cloned the exact same source code, the WebSockets return GET /socket.io/?EIO=4&transport=websocket 404, while all the other fetches, posts, etc. work perfectly on the EC2 instance. Note: when I connect to the local server and connect to the WebSocket, I don't get a GET log in my console like the one above. If other information is needed, let me know. submitted by /u/Temporary_Ask_0 [link] [comments]

  • Passed SAA-C03!
    by /u/oradb2

    Will start by thanking everyone here for their valuable input, which was definitely helpful! I have been preparing on and off for about 6 months - was using Stephane Maarek's Udemy course but was able to complete only 60-70% of it, though it certainly seems like a nice resource! Got to know about TD tests here, so I ordered them and started using them about a month back - used topics mode first, then review mode (did 4 tests only). I also used TD's study guide (about 340 pages), which I find pretty good - but I think you can find most of the similar information in cheat sheets as well. When I got to know about the retake promotion, I thought let me just write the exam at my current level, and maybe I will prepare in more detail for my second try 🙂 I was shocked (pleasantly, of course) to clear the exam with 744! Here is my take for future exam takers - hoping it will help you guys in some way: Like many people have said here, you are never fully ready, so trust yourself and go for it; you should be fine (assuming you are at least at a certain level). I personally found the exam a little hard - the reason could be that I was probably not that well prepared! I tried to read about ALL the services and their details from TD's study guide, but I noticed that the exam focused more on the core services (S3, EC2, ELB, VPC, RDS, Route 53, WAF to name a few) - you should be very, very thorough with these core services and you will do well! I use AWS every day at work (managing RDS), so that probably helped me. All the best on your AWS certification journey, guys! submitted by /u/oradb2 [link] [comments]

  • Vouchers
    by /u/Important-Limit7092

    Can i use a 50% voucher to buy a full exam voucher? Thank you. submitted by /u/Important-Limit7092 [link] [comments]

  • Can I keep code and infra separate to support multiple deployments?
    by /u/HDAxom

    This came up during a design discussion and I want to validate it. I have mostly worked on serverless on AWS; I have not tried ECS or EKS or any containerized application on any platform. We have a Spring Boot application deployed on an ECS cluster (with Fargate). These two - app and infra - are in the same codebase, and for some reason others would also need access to this Spring Boot app and may deploy it on their own existing ECS cluster or EKS etc., so the infra can change. My thinking is that as long as I keep app and infra separate, even in the same codebase, and publish the app to Artifactory, anyone should be able to download the app and create their own infra around it - use an EKS or ECS cluster or an on-premise deployment. They are not bound by my container. I have provided a start script that can be configured in the container to boot, and this setup would work. I could try technical feasibility, but I was hoping to get an insight here in case someone has a view! Also, is this good practice - basically providing an application with or without a container to be consumed by others? Also not sure if this is an AWS question 🙂 but since we work exclusively on AWS, I would want feasibility here. submitted by /u/HDAxom [link] [comments]

  • AWS Lambda function (SAM) - How can I handle binary data from a containerized application? (.NET)
    by /u/gangelofilho

    Hi guys! I'm trying to build an API that basically receives a docx file and returns the docx file processed. I'm trying to use Lambda from AWS because I want to make this project free for anyone to use. The main application is written in C#/.NET. The container internally runs a Node.js application, which is called by the main app (.NET), just to process equations (this lib is only available in Node; that's why I had to do it this way to make it work). So now that I have the container working just fine, I wanted to use it in a Lambda. I tried everything to make it happen, but I'm not able to make it work. I tried everything from those tutorials: 1, 2, 3, 4. I have the source code here, hosted on my GitHub. If anyone here could help me with anything, I would be glad. This is the first project that I'm trying to create using AWS by myself. AWS is so hard to learn!! Thanks submitted by /u/gangelofilho [link] [comments]
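    One frequent stumbling block with containerized Lambdas behind API Gateway is binary handling: with proxy integration the docx travels base64-encoded, the REST API must have the content type registered as a binary media type (or */*), and the response must set isBase64Encoded. The poster's handler is .NET, but the response shape is runtime-independent; a Python sketch of that shape (the pass-through "processing" is a placeholder):

        import base64

        def handler(event, context):
            # Incoming docx arrives base64-encoded under proxy integration.
            if event.get("isBase64Encoded"):
                incoming = base64.b64decode(event["body"])
            else:
                incoming = (event.get("body") or "").encode()

            processed = incoming  # placeholder for the real docx processing

            return {
                "statusCode": 200,
                "headers": {"Content-Type": "application/vnd.openxmlformats-officedocument.wordprocessingml.document"},
                "isBase64Encoded": True,   # tells API Gateway to decode before sending
                "body": base64.b64encode(processed).decode(),
            }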

  • Options for running Python script on EC2
    by /u/DiabloSpear

    Hi guys, I just started my AWS journey and wanted to see if I am taking the right path. I have a Python script that I want to run - it downloads data from the internet and runs a simple neural network. My plan is as below: Upload my Python code to EC2, download data from the internet, and upload it to S3. Upload the data from S3 to EC2 (might seem redundant given step 1, but I wanted to practice transferring data between EC2 and S3, thus transferring data EC2 -> S3 -> EC2). Run the neural network. I think I am running into trouble on Step 1. One way I can think of is I can upload the Python script to S3, import it into the EC2 instance, and then run it? How would I run a Python script that is on my local PC on EC2? I read some blogs saying that you can use SSH, but I am not that savvy with terminals and shells (I honestly do not even know what they mean... I am mostly from a math background when it comes to machine learning), so I would like to stick with using the EC2 instance launch or Session Manager, unless I must use SSH. I also read on this sub that I can use Lambda - what is the main difference between Lambda and EC2? Thanks so much for the answers. submitted by /u/DiabloSpear [link] [comments]
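    A few pointers, hedged as general practice: attach an IAM role to the instance (instance profile) so no keys live on disk, use SSM Session Manager for a browser-based shell so SSH can be skipped entirely, and get the script onto the box via S3 (aws s3 cp) rather than from the local PC. Both S3 legs are then two lines of boto3; the bucket name is a placeholder:

        import boto3

        s3 = boto3.client("s3")          # credentials come from the instance role
        BUCKET = "my-training-data"      # placeholder

        # Step 1: after downloading data on the instance, push it to S3.
        s3.upload_file("data/raw.csv", BUCKET, "raw/raw.csv")

        # Step 2: pull it back down (the S3 -> EC2 leg of the practice loop).
        s3.download_file(BUCKET, "raw/raw.csv", "data/raw_copy.csv")

    As for Lambda vs EC2: Lambda runs short event-driven functions with a 15-minute cap and no server to manage, while EC2 is a full VM; a long neural-network run fits EC2 (or SageMaker) better.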

  • Do AWS certification on my behalf
    by /u/Early_Task3572

    Does anyone know a company or person to do an AWS certification on my behalf that will not scam me? Thank you. submitted by /u/Early_Task3572 [link] [comments]

  • How to deploy a general purpose DL pipeline on AWS?
    by /u/mr_birrd

    As I could just not find any clear description of my problem, I come here and hope you can help me. I have a general machine learning pipeline with a lot of code and different libraries, custom CUDA, PyTorch, etc., and I want to deploy it on AWS. I have a single prediction function which could be called that returns some data (images/point clouds). I will have a separate website that will call the model over a REST API. How do I deploy the model? I found out I need to dockerize, but how? What functions are expected for deployment, what structure, etc.? All I found are tutorials where I run experiments using sklearn on SageMaker, but this is not suitable. Thank you for any links or hints! submitted by /u/mr_birrd [link] [comments]
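    A common pattern, sketched under assumptions about the code: wrap the existing prediction function in a small HTTP server, make that server the container's CMD, and run the image on ECS/EC2 behind the REST API (SageMaker real-time endpoints expect exactly a /ping and /invocations pair). The predict stub below stands in for the poster's real function:

        from flask import Flask, jsonify, request

        app = Flask(__name__)

        def predict(payload):
            # Stand-in for the real prediction function (images/point clouds).
            return {"result": "ok", "n_inputs": len(payload or {})}

        @app.route("/ping", methods=["GET"])
        def ping():
            # Health check used by SageMaker/load balancers.
            return "", 200

        @app.route("/invocations", methods=["POST"])
        def invoke():
            return jsonify(predict(request.get_json()))

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=8080)

    The Dockerfile then just installs the CUDA/PyTorch dependencies, copies the code, and ends with CMD ["python", "serve.py"] (serve.py being this file).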

  • Bastion host shutting down "randomly"
    by /u/steadyvibin

    Hi all, we have an EC2 instance which we are using as a bastion host. We use this to sometimes get a production console, but mostly as an entry point for a data transfer tool (Census) to migrate some of our database from RDS to an external warehouse. We noticed our EC2 instance would shut down randomly throughout the day, so we installed the CloudWatch agent so we could monitor more metrics. The machine was not running out of memory, CPU or disk. So I went to check the logs, and there were no errors happening when the machine shut down. The user session would just be stopped. So to test a theory, I got a production console which would just print something out to the console every 15 seconds. This completely stopped the machine from going down. The instance is running an Ubuntu image. Has anyone experienced any similar problems or have any advice? I can provide more details if needed. submitted by /u/steadyvibin [link] [comments]

  • .NET on AWS show featuring David Fowler and Norm Johanson!
    by /u/fbouteruche

    Monday 6th will be an amazing day! The .NET on AWS show will feature David Fowler and Norm Johanson together to discuss .NET Aspire and AWS. 💻 Brandon Minnick, MBA and myself are so excited to welcome those amazing engineers 😍 It will be at 8am PST / 5pm CEST. Do not miss this episode; set a reminder by clicking here 👇 https://www.twitch.tv/aws/schedule?segmentID=cde4f63f-52fe-4ee1-9628-2fd055d0227b submitted by /u/fbouteruche [link] [comments]

  • Exam voucher
    by /u/Purple_Newspaper_133

    Hello all, where and how do I get an exam voucher or any sort of discount so I can schedule an exam? Btw, I still have an active school email ID from my college. Thanks submitted by /u/Purple_Newspaper_133 [link] [comments]

  • Dear APIGW, does "data out" mean data traversing the APIGW or egress to internet?
    by /u/Avansay

    Assuming this flow, does data coming off of lambda2 incur 2 egress fees, or just 1 at [firstapigw]? user --get--> (internet) -- get --> [firstapigw] --> lambda1 --> [AnotherApigw] --> lambda2 submitted by /u/Avansay [link] [comments]

  • Can I use my PayPal Balance to pay for AWS Certifications?
    by /u/MatthewGalloway

    Or do they only accept credit cards? submitted by /u/MatthewGalloway [link] [comments]

  • Route 53 question
    by /u/Nblearchangel

    I have a small hobby business and tried buying some domains a few years ago. I was successful with a couple, but the .com I really wanted was taken. I contacted GoDaddy to help broker the purchase but realized quickly the domain I wanted was way more expensive than I could afford. I canceled my service with GoDaddy and forgot about it. Fast forward to today: I was randomly going through my AWS bill and saw I actually have that domain listed in the UI. How is that possible? I definitely did not buy the domain. submitted by /u/Nblearchangel [link] [comments]

  • OnVue app on M2 Mac is freezing when launching the test simulation
    by /u/Snoo-44996

    I tried disabling stage manager and removing browser lock. Still I get the freezing circle. submitted by /u/Snoo-44996 [link] [comments]

  • CCP
    by /u/Upper_Aspect_4353

    Hi guys, I've been preparing for the CCP (Cloud Practitioner exam). Which main section should I focus on? I have only a month to pass this exam. I've been preparing through Stephane's CCP course. submitted by /u/Upper_Aspect_4353 [link] [comments]

  • Passed my CCP/CLF-02 exam yesterday!
    by /u/togamayo_mich

    Score: 821/1000 (Hoped for an 850-900, but that should do) Background: I do development and QA work at a large tech consultancy. We get AWS exam vouchers, but must show that we completed a relevant course before receiving one. I used GCP in a previous dev job, so I know how to use the platform's UI for doing specific tasks but didn't bother to learn cloud theory (one thing I regret). For learning and noting the concepts, I used: Amazon Cloud Quest for the Cloud Practitioner hands-on activities and some trivia questions from shooting drones. AWS Cloud Practitioner Essentials for the theory and course completion confirmation. While it doesn't cover all the smaller details, it does cover the major details that I can build my notes off. I also like how the course uses a coffee shop as an analogy for the cloud. AWS documentation, whitepapers, and the service pages (be it its overview, features, or pricing) in both English and German. Kanani Nirav's practice exams #1-10 for uncovering patchable knowledge gaps. As I'm a power Notion user who learns best from taking and condensing notes, I used this Notion template for: Organizing notes by exam guide domain and then task statement, before using the "knowledge of" and "skills in" bullet points to understand which concepts to include for each statement. For example, all of my notes under "Understand the benefits of and strategies for migration to the AWS Cloud" (1.3) cover the Cloud Adoption Framework and its action plan, the 6 "R's" of migration, the Snow transfer device family (Snowcone, Snowball, Snowmobile), the Amazon Transfer family, and misc migration-and-transfer related services such as DataSync. Tracking my practice exam attempts and answers. For incorrect answers, I noted the correct answer and (in some cases) why my answer was incorrect, such as "Use Config for auditing AWS change management, not CloudWatch". I also noted the corresponding concept(s) (in our case Config and CloudWatch) in the "topics to revisit" section, so I can re-study those concepts before taking another practice exam (and if applicable, adding more information to my notes). I repeated this process daily until I hit mid-80s the day before my exam. I booked my exam in-person as I'd rather avoid the same outcomes as some of the e-exam horror stories here. Smooth check-in process, I got a dry-erase board and marker as scrap, and I can stretch while sitting if needed. I also got my pass/fail results immediately after submitting my exam 50 minutes and two checks later. (If you're in Greater Vancouver, this is Ashton Testing Centre.) What's next? I'm interested in learning the concepts that'll help me get the respective Solutions Architect and Developer Associate certifications, but am unsure about where to start. Thankfully one of our senior technical architects (and mentors) holds both certificates, so I'm connecting with her tomorrow to understand how her certificates help her with day-to-day tasks (let alone her career). This should help me decide which certificate exam to prepare for first! submitted by /u/togamayo_mich [link] [comments]

  • What would you do?
    by /u/OkAcanthocephala1450

    My manager asked me to collaborate with some devs to create a platform for infra provisioning. Basically, what they are trying to do is a computer-vision scan of a diagram, asking AI to create the code and asking the user to provide input variables. The problem is that the second step is unnecessary, since we have preconfigured templates where a single API call is enough to get the template, ask for variables, and create the infrastructure. I said it is unnecessary to integrate AI (Bedrock), since it would just be complex and very, very slow. They said that if we don't use it, it will not be "AI integrated". What am I supposed to do? Continue supporting them on this stupid idea of integrating "AI", or what? submitted by /u/OkAcanthocephala1450 [link] [comments]

  • Barely passed SAA, but passed!!! U can do it!
    by /u/Spirited_Ad_490

    A little background: I started studying for Cloud Practitioner back in September and got it in January; I've been lurking in here ever since. For the last 3 months, I have been studying for SAA, and I passed the test a couple of days ago. I used Skill Builder to study. I assume any of the other courses would be similar and possibly even better. I used TD practice exams as well. I never scored more than 50% on any of them. I am not good at taking tests and find it so hard to focus. The TD cheat sheets are the most helpful. Definitely check those out, especially the ones that compare services. Each time I got a question wrong, I would open the cheat sheet for that service and take notes on it. The exam had a lot of basic services like S3, EC2 and whatever you'd expect. TD practice tests test you on a lot of random services that might not be on the exam. Don't ONLY focus on those; make sure you know the basic ones super well. I think I scored so low because I focused on knowing each service and not mastering the basic ones. You need a good balance of both. Scored a 735/1000, but after I took it I genuinely had no idea if I passed or not. Got the Credly badge in about 4 hours and the cert in about 9. Not like CP, where they tell you right away. submitted by /u/Spirited_Ad_490 [link] [comments]

  • Pearson Vue reschedule
    by /u/Sad-Buddy-5293

    Hello, I wanted to know if it is possible to reschedule my Pearson VUE exam for AWS in less than 24 hours if you are sick. I can't write this exam in my current condition, and it seems I am unable to reschedule the exam I am writing tomorrow. submitted by /u/Sad-Buddy-5293 [link] [comments]

  • Passed SAA-C03
    by /u/MFiziK

    Took me about 2.5 months, but I finally passed the Associate Solutions Architect exam after passing the Cloud Practitioner in January. A little background about me: I am a software engineer who graduated last May but still do not have any actual hands-on experience using any AWS services. I knew I would be using AWS soon in my new role, so I thought I should get familiar with it all. I followed everyone's advice on here and used Stephane Maarek's course on Udemy and reviewed questions on Tutorial Dojo. For the course, I recommend doing your best with watching all the hands-on tutorials and even trying them. I always got lazy on those lectures but eventually went back because a practice problem or two would end up asking something that you would need hands-on experience for. For the most part, I just kept working on the practice exams on TD until I was comfortably getting 70s/80s on them like everyone suggested. The exam itself is very focused on the core services like EC2, ELBs, Autoscaling groups, networking, and databases like Aurora, so definitely do not brush past these sections at all. Thank you for all the recommendations everyone, and I am happy to answer any questions that others may have. Also, good luck to anyone that's taking it soon! submitted by /u/MFiziK [link] [comments]

  • Passed DVA Yesterday
    by /u/stone4789

    The Pearson software was super laggy between every question, so it took twice as long as it should have. However, I passed and I’m probably never doing another one. I have SAA and ML, hopefully this will help me transition out of data science for good some day. If you’re prepping for the DVA, focus on lambdas and APIs. Luckily that’s all I’ve been doing at work lately. submitted by /u/stone4789 [link] [comments]

  • I just passed AWS CCP and now I'll study for AWS SAA, but I only have one month available!
    by /u/saske2k20

    I'm happy that I passed the AWS CCP (Stephane + Skill Builder), but now I'll focus on studying for the AWS SAA. I already have the Stephane course + TD, but I feel I have a big knowledge gap from the AWS CCP. The recommendation in this case is to do the Cantrill course, but the problem is the available time. I don't think I'll be able to finish and learn the Cantrill course + tests + labs in the time I have. Do you guys think, considering the time, it would be worth focusing on Stephane only to pass, and as soon as I pass, doing the Cantrill course to fill all the gaps? Thanks edit: typo submitted by /u/saske2k20 [link] [comments]

  • What's the most efficient roadmap?
    by /u/mildy-responsible

    Hello my fellow redditors, I have been working temporarily in Oracle Cloud Infrastructure for about a year. I have come to realize that although Oracle is cool, I want to move towards AWS. I was wondering what is the most efficient/best way to get the Solutions Architect certification. I checked the site and they have this new program called AWS Skill Builder. It's a $30 a month subscription. I don't know if that is still a good option? When I do get this cert, what would you recommend be my next step? submitted by /u/mildy-responsible [link] [comments]

  • I have a Professional and Specialty Level AWS Certification Exam Voucher; can I apply this voucher to an associate-level exam?
    by /u/AfterOcelot2727

    submitted by /u/AfterOcelot2727 [link] [comments]

  • Finally passed the AWS SAA in my 2nd attempt!!
    by /u/Begin_hunt

    Hello everyone, I have finally passed the AWS SAA-C03 on my second attempt, and here are my experiences/thoughts on the exam preparation and what went wrong in my first attempt. I'm currently working as an Information Security Engineer at a financial institution. As the majority of my job involves working with the AWS cloud, I wanted to challenge myself to learn new AWS services and pass the AWS Solutions Architect Associate certification (SAA). So, the goal was set, and now it's time to execute my plans. Domains: The AWS SAA examination tests individuals on four key domains: Designing Secure Architectures, Designing Resilient Architectures, Designing High-Performing Architectures, and Designing Cost-Optimized Architectures. You can find the AWS SAA exam guide to understand these areas in detail Link Preparation: After reading through the exam guide, I started my preparation with a Udemy course taught by the instructor Stephane Maarek. Stephane has well-structured course material that covers both the theoretical and practical aspects of the services. I would highly recommend starting with this course. It took me about 2 months to complete this course. I spent roughly 2 hours every day at night after work, and this helped me understand the services better. Small daily improvements or repetitions do matter in the long run. Later, I solved practice papers from Stephane and scored about 65–75% on the exams. I felt that the exam wording was a bit off in Stephane Maarek's exam papers, and it caused some confusion when I knew the right answer. Nonetheless, after completing several practice tests, I felt confident enough to tackle the exam and gave my first attempt, scoring 680 and failing. This hit me hard as I was so close to clearing the exam. Identifying gaps: After my first attempt, I took some time off and started to identify my weak areas, such as Designing Cost-Optimized and Resilient Architectures, and improving them. I had come across Tutorials Dojo (TD) practice tests and found them helpful. I'd highly recommend these learning materials and practice tests to supplement your knowledge. I also enjoyed solving practice tests from TD as they were worded very clearly and provided real-world architecture-based questions. I felt really confident after solving the practice tests and was scoring around 70–75% on these tests. SAA Exam: On the day of the exam, I did feel a bit nervous, but I was used to the exam pattern as I had taken multiple practice tests at this point. My strategy was to answer 5 questions in 10 minutes. This helped me to keep track of time and focus on one question at a time. Submitted the exam and received an email by midnight from AWS congratulating me and awarding a Credly badge. AWS Cert resources: Here are the resources I have used to pass the examination: Stephane Maarek Udemy course: Link Stephane Maarek practice tests: Link Tutorials Dojo practice tests: Link Tip: Focus on understanding the services and their implementation use cases, as several services are different but do similar jobs. Hope this helps 🙂 Good luck!! submitted by /u/Begin_hunt [link] [comments]

  • Passed SAA-C03!!
    by /u/Acceptable-Drink-495

    Hi guys!! I passed SAA with 815. I started preparing in Dec and planned to take the exam by mid-March. I was ready to take the exam, but I had a few interviews scheduled through March, so I postponed it, and btw I also got a job on 1st April. Yayyyy! Just a little background about me: I worked as a Patent Analyst for almost 3 years, but it was not my calling. I wanted to get into IT. I also shifted to another country around the same time I left that job. So for the last 2 years I was trying to get a job in IT, but there was a language barrier and I also had no experience in IT. So, coming back to the topic: I used the Adrian Cantrill course, which was just a chef's kiss 🤌🏻, AWS documentation, TD cheat sheets and TD practice exams (at the end of Feb). A week before the exam I just went back to the Cantrill course to brush up on all the architectures, and a day before the exam I did all the TD practice exams again. I mostly got above 80%, so I felt confident. A huge thanks to this sub, as I got to know about the course and the TD practice tests from here, and also to all the other members who mentioned the services they came across in the exam. My suggestion: focus more on the core services in the Cantrill course and everything related to them, like auto scaling, encryption, and backups for all storage and DB services. But also do refer to the documentation if you have doubts. For core services like S3, EBS, RDS, Aurora, DynamoDB, VPC, EC2, you should know things in depth. You can skip a few services that you come across during the TD tests which are just mentioned once or twice. Also do check out all the posts by other members of the sub who took the exam; I made sure I understood all the services they mentioned and it reallllllly helped! All the best to everyone else taking the exam! submitted by /u/Acceptable-Drink-495 [link] [comments]

  • Tips for Jon Bonso Udemy SAA?
    by /u/YoinkWB

    Hello. Just got CCP and moving on to SAA. For CCP I mainly used the ExamPro course: watched lectures and made Anki flashcards from all the notes and questions, then moved on to the practice exams. Trying the Udemy Jon Bonso course now. Is this just a collection of practice exams with explanations for each question? How did you utilize this material? If it matters, I started with zero background in CSPs. Studied a few hours each day for about 2 weeks and passed the CCP exam last night. Thanks. submitted by /u/YoinkWB [link] [comments]

  • Has anyone passed the Developer Associate DVA-C02 exam only using Neal Davis's course?
    by /u/r2dev0

    I have a few years of very light AWS experience with EC2, S3, and DynamoDB. I completed Neal's course and passed his exam with an 86% after a few tries. Is his exam pretty close to the real exam and if I am able to do okay with his exam, should I have a good chance of passing? submitted by /u/r2dev0 [link] [comments]

  • AWS Certification Study
    by /u/dojlee22

    I want to start studying for AWS cloud certification. Which place has the best online class? submitted by /u/dojlee22 [link] [comments]

  • Finally Got it
    by /u/Breif7

    Finally got my SAA certificate! It took me about 3 months of preparation, and thanks to the TD practice exams, which give a more solid idea of what the actual exam is going to be like. Now that I have my certificate, and I have some experience as a full-stack developer for web applications, I could start looking for a job related to web applications. They also mentioned certain additional benefits like the SME program - does someone know what it refers to, or what I have to do to participate? Thank you submitted by /u/Breif7 [link] [comments]

  • I created a FREE AWS Cloud Practitioner Preparation Exam web app📚💻
    by /u/fabricio85

    You can check out the quiz here. As someone who has gone through the challenge of obtaining this certification, I know how important it is to have access to quality study materials. That's why I decided to create this side project to help those pursuing this achievement who may not have the means to acquire paid resources. Throughout the practice exam, you'll find hundreds of questions covering the main topics of the actual exam. My goal is to provide a study experience that closely mirrors the real exam, so you can get used to the question format and the main knowledge domains, and enhance your overall knowledge. But don't underestimate hands-on practice! I strongly suggest using AWS Skill Builder for practical work! There are two types of practice exams: a "basic" one (35 questions, 40 minutes long) and a "full" one (65 questions, 90 minutes long). Both feature random questions for better studying! Key features: Real-time score tracking: No need to wait until the end to see how you're doing! The practice exam displays your percentage score as you progress through the questions. Review your answers: At the end of the practice exam, you can review which questions you answered correctly and incorrectly. This is one of the best methods for reinforcing your knowledge and learning from your mistakes. Completely free: No hidden costs or subscriptions. I believe everyone should have access to quality study materials, regardless of their financial situation. I'm already working on something similar for the SAA. Feel free to connect with me on LinkedIn to stay updated! submitted by /u/fabricio85 [link] [comments]

  • Hi, I am currently studying for the AWS Cloud Practitioner Exam. It seems like the questions just ask what a certain function/resource does in AWS, or when it would be a good time to use a certain function/resource based on a certain scenario? Am I missing any curveballs?
    by /u/orozcotech

    submitted by /u/orozcotech [link] [comments]

 

Reference: https://enoumen.com/2019/06/23/aws-solution-architect-associate-exam-prep-facts-and-summaries-questions-and-answers-dump/


AWS Certified Developer Associate exam: Whitepapers

AWS has provided whitepapers to help you understand the technical concepts. Below are the recommended whitepapers for the AWS Certified Developer – Associate Exam.


Online Training and Labs for AWS Certified Developer Associates Exam

The Cloud is the future: Get Certified now.
The AWS Certified Solution Architect Average Salary is: US $149,446/year. Get Certified with the App below:

 

AWS Developer Associates Jobs


AWS Certified Developer-Associate Exam info and details, How To:


The AWS Certified Developer Associate exam is a multiple choice, multiple answer exam. Here is the Exam Overview:

  • Certification Name: AWS Certified Developer Associate.
  • Prerequisites for the Exam: None.
  • Exam Pattern: Multiple Choice Questions
  • The AWS Certified Developer-Associate Examination (DVA-C01) is a pass or fail exam. The examination is scored against a minimum standard established by AWS professionals guided by certification industry best practices and guidelines.
  • Your results for the examination are reported as a score from 100 – 1000, with a minimum passing score of 720.
  • Exam fees: US $150
  • Exam Guide on AWS Website
  • Available languages for tests: English, Japanese, Korean, Simplified Chinese
  • Read AWS whitepapers
  • Register for certification account here.
  • Prepare for Certification Here
  • Exam Content Outline

    Domain: % of Examination

    Domain 1: Deployment (22%)
    1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
    1.2 Deploy applications using Elastic Beanstalk.
    1.3 Prepare the application deployment package to be deployed to AWS.
    1.4 Deploy serverless applications.

    Domain 2: Security (26%)
    2.1 Make authenticated calls to AWS services.
    2.2 Implement encryption using AWS services.
    2.3 Implement application authentication and authorization.

    Domain 3: Development with AWS Services (30%)
    3.1 Write code for serverless applications.
    3.2 Translate functional requirements into application design.
    3.3 Implement application design into application code.
    3.4 Write code that interacts with AWS services by using APIs, SDKs, and AWS CLI.

    Domain 4: Refactoring (10%)
    4.1 Optimize application to best use AWS services and features.
    4.2 Migrate existing application code to run on AWS.

    Domain 5: Monitoring and Troubleshooting (10%)
    5.1 Write code that can be monitored.
    5.2 Perform root cause analysis on faults found in testing or production.

    TOTAL: 100%


AWS Certified Developer Associate exam: Additional Information for reference

Below are some useful reference links that would help you to learn about AWS Certified Developer Associate Exam.



Other Relevant and Recommended AWS Certifications

AWS Certification Exams Roadmap




Other AWS Facts and Summaries and Questions/Answers Dump


 

In this AWS tutorial, we are going to discuss how we can make the best use of AWS services to build a highly scalable and fault-tolerant configuration of EC2 instances. The use of Load Balancers and Auto Scaling Groups falls under a number of best practices in AWS, including Performance Efficiency, Reliability, and high availability.

Before we dive into this hands-on tutorial on how exactly we can build this solution, let’s have a brief recap on what an Auto Scaling group is, and what a Load balancer is.

Auto Scaling group (ASG)

An Auto Scaling group (ASG) is a logical grouping of EC2 instances which can scale out and scale in depending on pre-configured settings. By setting scaling policies on your ASG, you can choose how many EC2 instances are launched and terminated based on your application’s load. You can do this with manual, dynamic, scheduled or predictive scaling.
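To make "scaling policies" concrete, here is a minimal target-tracking sketch in Python (boto3); the group and policy names are placeholders:

    import boto3

    asg = boto3.client("autoscaling")

    # Keep average CPU across the group near 50%; the ASG adds or removes
    # instances automatically to hold that target.
    asg.put_scaling_policy(
        AutoScalingGroupName="MyTestASG",      # placeholder group name
        PolicyName="cpu-target-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,
        },
    )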

Elastic Load Balancer (ELB)

An Elastic Load Balancer (ELB) is a name describing a number of services within AWS designed to distribute traffic across multiple EC2 instances in order to provide enhanced scalability, availability, security and more. The particular type of Load Balancer we will be using today is an Application Load Balancer (ALB). The ALB is a Layer 7 Load Balancer designed to distribute HTTP/HTTPS traffic across multiple nodes – with added features such as TLS termination, Sticky Sessions and Complex routing configurations.

Getting Started

First of all, we open our AWS management console and head to the EC2 management console.

We scroll down on the left-hand side and select ‘Launch Templates’. A Launch Template is a configuration template which defines the settings for EC2 instances launched by the ASG.

Under Launch Templates, we will select “Create launch template”.

We specify the name ‘MyTestTemplate’ and use the same text in the description.

Under the ‘Auto Scaling guidance’ box, tick the box which says ‘Provide guidance to help me set up a template that I can use with EC2 Auto Scaling’ and scroll down to launch template contents.

When it comes to choosing our AMI (Amazon Machine Image) we can choose the Amazon Linux 2 under ‘Quick Start’.

The Amazon Linux 2 AMI is free tier eligible, and easy to use for our demonstration purposes.

Next, we select the ‘t2.micro’ under instance types, as this is also free tier eligible.

Under Network Settings, we create a new Security Group called ExampleSG in our default VPC, allowing HTTP (port 80) access from anywhere (0.0.0.0/0).
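For readers who prefer to script it, the same launch template can be created with boto3. This is a sketch of the steps above, with a placeholder AMI ID (Amazon Linux 2 IDs vary by region) and a placeholder security group ID for ExampleSG:

    import boto3

    ec2 = boto3.client("ec2")

    # Mirrors the console walkthrough: name, description, AMI, instance
    # type, and the ExampleSG security group.
    ec2.create_launch_template(
        LaunchTemplateName="MyTestTemplate",
        VersionDescription="MyTestTemplate",
        LaunchTemplateData={
            "ImageId": "ami-0123456789abcdef0",   # placeholder Amazon Linux 2 AMI
            "InstanceType": "t2.micro",
            "SecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder ExampleSG ID
        },
    )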

AWS Developer Associate DVA-C01 Exam Prep

 
 
 

AWS Certifications Breaking News and Top Stories

  • Difference between Docker and Amazon Elastic Container Registry (ECR): Is Docker superior?
    by /u/DigitalSplendid

    Created a new container on AWS Lightsail. I need to understand: are there two ways to deploy a container through their menu options, Deployments or Images? Are Docker and ECR competing products? Is Docker superior to ECR? The reason I ask is that it appears AWS Lightsail itself has made it mandatory to use Docker if the Images option is chosen. Also, when the Deployments menu is clicked: "Enter the image reference from a public registry, such as DockerHub." So they are mentioning DockerHub and not their own ECR! submitted by /u/DigitalSplendid [link] [comments]

  • Does this mean I pass the AWS ML Specialty?
    by /u/Soyun12

    This is the email I got https://preview.redd.it/4696fjpuqbyc1.jpg?width=750&format=pjpg&auto=webp&s=b9e8d66906ba7093e358557896812271224e21b1 submitted by /u/Soyun12 [link] [comments]

  • AWS CCP for GRC professionals?
    by /u/Head_Astronaut_2442

    I have 6 years of experience in cyber risk & governance and am a CISSP holder. I’m looking to develop fundamental cloud literacy and demonstrate to future employers that I have an understanding of the cloud / generally pad my resume. Is the CCP an appropriate certificate? submitted by /u/Head_Astronaut_2442 [link] [comments]

  • Who to contact for support? Is there is a support phone number I can call?
    by /u/qqpp_ddbb

    So I've been throttled for over 24 hours now in Amazon Bedrock and I put in a support ticket, to no avail. Is there a phone number I can call to speak with support? It seems ridiculous that I'm being throttled/rate-limited this hard for such a low number of requests. Here are some metrics; let me know if more info is needed. I'm new to this stuff, sorry. https://i.postimg.cc/GpQP5mrk/Metrics-Cloud-Watch-us-west-2.png Thanks in advance submitted by /u/qqpp_ddbb [link] [comments]
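    A general note rather than a diagnosis: Bedrock throttles are per-model quotas managed in Service Quotas, so a quota-increase request there is the usual path, and in the meantime client-side adaptive retries soften ThrottlingExceptions. A sketch (region and retry counts are placeholders):

        import boto3
        from botocore.config import Config

        # Adaptive mode backs off and paces requests when throttled.
        bedrock = boto3.client(
            "bedrock-runtime",
            region_name="us-west-2",
            config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
        )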

  • Need help reconnecting to AWS VM for assignment.
    by /u/clashofcam

    Hello, I've been troubleshooting for a while. I edited my network settings in my AWS Windows 2019 VM. I believe I messed up the DNS. As soon as I clicked "OK" I got kicked off the virtual machine. I've tried tinkering with different things in the console, but I cannot figure out how to change it to auto-assign the DNS so I can get back on the virtual machine. Any help is appreciated. Thank you! submitted by /u/clashofcam [link] [comments]

  • S3 Cross Account Access
    by /u/MrStu56

    Hello all, I'm hoping someone can help me here with an issue I'm battling with when it comes to cross account S3 access. I can't quite get my head around the permissions required, or perhaps more accurately where those permissions should be applied, and in which account. Here's the scenario: I have created a bucket in account A called test-100-bucket. In that bucket are the folders: "working", "ClientAccess" and "store". I'm looking to grant: List, Put, Get, Delete (i.e full read/write) on "working" and "ClientAccess" to a group of users in Account B called "AccountB/Wranglers" I'm looking to grant List, Put, Get, Delete (ie. full read/write) on "ClientAccess" (and any subfolders /*) to a specific user in AccountA (Lets call them AccountA/Client). AccountA/Client should not be able to see any of the other folders. I'm looking to restrict access to "store", so that AccountB/Wranglers can put, list, get but should not be able to delete anything. As the bucket owner I should be able to destroy the bucket and all the objects when required. Can someone help me with: What permissions are needed for the AccountB/Wranglers (and where those permissions are applied). Do I need an account or group in AccountA to handle this? What bucket policy(?) I need to apply to restrict the ClientAccess folder. What bucket or user policy (or both?) is needed for the "store" folder. Hope you can help, I've asked AmazonQ, ChatGPT and done a few youtube vids as well as the documentation but still can't seem to get it to work. If anyone has a good resource, (in crayons at this point) I'm happy to take a look. submitted by /u/MrStu56 [link] [comments]

  • Bedrock Agents with Guardrails
    by /u/ProfessorHuman

    Has anyone used guardrails with agents? I don’t see a way to associate a guardrail with an agent. Either in the api documentation or in the console. I see you can specify a guardrail in the invoke_model method of boto3 but that’s not with an agent. Docs seem to suggest it’s possible. But I see reference anywhere to how. submitted by /u/ProfessorHuman [link] [comments]

  • boto3 not finding (IAM) credentials when running docker container on ECS task, but does work locally... why not?
    by /u/that_dogs_wilin

    Hi, I'm new to setting things up on AWS so apologies in advance for dumb questions or if I use the wrong terminology (please correct me!). Here's what I'm trying to do: I want to create a docker container that will install necessary packages, and then run a python script. The python script will run for a bit and then write some output file, my_cool_dict.json. Then, I want to upload this file to my S3 bucket, and the task is done. Here's what I've done so far: I followed some AWS documentation. For setting up the AWS CLI, I did their recommended best practice of configuring AWS CLI to use the IAM identity center, here. I did the recommended SSO token provider config, and I know it works because I can locally push stuff to my S3 bucket, etc. I created an S3 bucket, and a ECR repository for my docker image. In IAM, I created a role, ecs_task_with_logging. To that role, I attached the policies AmazonECSTaskExecutionRolePolicy (so it can run an ECS task) and AmazonS3FullAccess (so it can upload to my S3 bucket). Locally, I built a docker image and pushed it to my ECR repo. In ECS, I created a cluster, then created a task definition that references the correct image URI, and the executionRoleArn of the role above. I also set up logging with python's logging lib / made a log group with CloudWatch / added the logging stuff to the task definition. To run the task, I go to the cluster, go to the Tasks tab, and use the default options, except for selecting "Task" under "Deployment configuration", and selecting the task definition. So I know all of that works, because the following successfully works: if my python script only does logging.info("Script finished running!"), I see it in CloudWatch! Therefore, I know the process of building the image, pushing it to ECR, creating the task definition, and running it on my ECS cluster must all work. I added s3 = boto3.client("s3") and s3.upload_file() to my python script. If I simply run the python script locally, this works (the file appears in the bucket). So I know the boto3 code is fine. If I do aws s3 cp my_cool_dict.json s3://my-bucket/my_cool_dict.json just in the terminal, it works. The step I'm struggling with is uploading the file from my container to my S3 bucket. If I try running the image even locally (which should just be running the script that just worked), I get the error botocore.exceptions.NoCredentialsError: Unable to locate credentials. If instead of using boto3, I put the aws s3 cp ... command in the Dockerfile, I get upload failed: ./my_cool_dict.json to s3://my-bucket/my_cool_dict.json Unable to locate credentials. Similarly, when I run the image (with either of those methods) on ECS, I get the same errors. Can anyone point me to something I could try to figure out why it can't find the credentials? submitted by /u/that_dogs_wilin [link] [comments]

  • Cost alert for the unknown account. Help wanted.
    by /u/moondaddi80

    On 30 April, I suddenly received a COST ALERT email:

    --- quote start ---
    AWS Budget Notification, April 30, 2024. AWS Account ************. Dear AWS Customer, You requested that we alert you when the actual cost associated with your My Monthly Cost Budget budget exceeds $4.25 for the current month. The month's actual cost associated with this budget is $4.27. You can find additional details below and by accessing the AWS Budgets dashboard.
    Budget Name: My Monthly Cost Budget | Budget Type: Cost | Budgeted Amount: $5.00 | Alert Type: ACTUAL | Alert Threshold: > $4.25 | ACTUAL Amount: $4.27
    --- quote end ---

    And today I got another email:

    --- quote start ---
    AWS Budget Notification, May 03, 2024. AWS Account ************. Dear AWS Customer, You requested that we alert you when the forecasted cost associated with your My Monthly Cost Budget budget exceeds $5.00 for the current month. The month's forecasted cost associated with this budget is $10.04. You can find additional details below and by accessing the AWS Budgets dashboard.
    Budget Name: My Monthly Cost Budget | Budget Type: Cost | Budgeted Amount: $5.00 | Alert Type: FORECASTED | Alert Threshold: > $5.00 | FORECASTED Amount: $10.04
    --- quote end ---

    I contacted AWS CS and they told me that if I don't know my sign-in credentials, they can't discuss anything related to the account further. So they asked me to give them the TRANSACTION ID, CREDIT CARD NO., etc. that was charged. But I couldn't give them any transaction information yet, because I got the email but the charge hasn't appeared on my credit card yet.

    The email address I received the COST ALERT at belongs to the account I had when I used AWS a few years ago, but I signed out and deleted the account after the 1-year free tier ended, so now when I try to log in to AWS with this email, it gives me an error saying the account doesn't exist.

    What I'm wondering now is:

    - From the cost alert email, it seems like the cost budget is $5, but the second cost alert shows $10.04 as the forecasted amount. Is this a situation where the cost will continue to increase until the end of the month because it exceeds the budget? Or will the cost only be incurred up to the budgeted amount of $5?
    - In the body of the email there is a 12-digit AWS Account number. Is there any way I can log in with this? Luckily, I found the password for that AWS account in the password management application I use.

    submitted by /u/moondaddi80

  • SES Email Sending Request Refused
    by /u/barcodez

    I've used AWS for years, almost since the beginning, at huge and large companies. I love it and have always thought it was great for innovation and small companies, as it really made things super easy for people to try out an idea without any heavy investment. So naturally, when I wanted to try out an idea, I picked AWS.

    I won't go into too much detail, but the idea involves sending some email for registration (standard stuff) and some notifications on account status. I configured all my settings and wrote some prototype code using the sandbox, and naturally wanted to try things with some external email accounts (belonging to me at first, then some friends, and then maybe a few referrals if things looked good). Thus I requested production access.

    I sent a quick note explaining briefly what I was doing, and 24 hours later got a note saying they need more info. Fair enough; they need to protect their email servers' reputations. I wrote a pretty long explanation covering all the points they were interested in, but did explain it was a prototype, so (a) I don't have examples of templates, it will just be short ASCII messages for now, (b) I'd only need to send a few emails while I get started, and (c) bounces, complaints, and unsubscribes would be handled, etc.

    I was refused. Moreover, I was told in no uncertain terms that I was considered a threat to other users of the service, and for security reasons they won't tell me why. I'm a bit annoyed, if I'm honest, because I don't want to have to build out everything to answer some questions and provide templates when I'm really just fleshing out an idea and sending <100 emails a month, if that. Glad I found out sooner rather than later.

    I'm considering just installing a mail server on EC2 with an Elastic IP, but having managed mail servers before, I know it's a time sink for very little reward, and I'm concerned they might take issue with that too. I appreciate email is the wild west, but this level of gatekeeping around SMTP is a bit anti-innovation in my view.

    Should I just get on with building the prototype and send mail some other way, in the hope that once I have more answers they will enable me, or is it too risky? Has anyone else been refused at the early stages and gotten accepted later? submitted by /u/barcodez

 

AWS Developer Associate DVA-C01 Exam Prep
 
 
 

I Passed AWS Developer Associate Certification DVA-C01 Testimonials

Passed DVA-C01

Passed the Certified Developer Associate this week.

Primary study was Stephane Maarek’s course on Udemy.

I also used the Practice Exams by Stephane Maarek and Abhishek Singh.

I used Stephane’s course and practice exams for the Solutions Architect Associate as well, and find his course does a good job preparing you to pass the exams.

The practice exams were more challenging than the actual exam, so they are a good gauge to see if you are ready for the exam.

Haven’t decided if I’ll do another associate level certification next or try for the solutions architect professional.

Cleared AWS Certified Developer – Associate (DVA-C01)

 

I cleared the Developer Associate exam yesterday. I scored 873.
Actual exam experience: questions were focused mainly on Lambda, API Gateway, DynamoDB, CloudFront, and Cognito (you must know the proper difference between a user pool and an identity pool).
I found 3 questions just on Redis vs. Memcached (so maybe focus more here too, to know the exact use cases and differences). Other topics were CloudFormation, Elastic Beanstalk, STS, and EC2. The exam was a mix of too easy and too tough for me. Some questions were one-liners and some were too long.

Resources: The main resource I used was Udemy: the course by Stéphane Maarek and the practice exams by Neal Davis and Stéphane Maarek. These exams proved really good and helped me focus on the areas where I was lacking. They are up to the level of the actual exam; I found 3-4 of the exact same questions in the actual exam (this might be just luck!). So I feel Stéphane's course is more than sufficient and you can trust it. I had achieved the Solutions Architect Associate previously, so I knew the basics, and I took around 2 weeks for preparation, revising Stéphane's course as much as possible. In parallel, I took the practice exams mentioned above, which guided me on where to focus more.

Thanks to all of you and feel free to comment/DM me, if you think I can help you in anyway for achieving the same.

Passed the Developer Associate. My Notes.

  1. There were more SNS “fan out” questions than expected. None of the Udemy or Tutorial Dojo tests had that. (See the sketch after this list for what SNS fan-out looks like.)

  2. They aren’t joking about the 30-minute check-in, and they will expire your exam if you don’t check in at least 15 minutes beforehand.

  3. When you finish, do a review pass over the multi-select questions and check the number of required selections.

  4. Review again and look for gotcha phrases like “least operational cost”, “fastest solution”, “most secure”, and “least expensive”. Change your answer if you put what “you” would do.

  5. Watch out for questions that mention services that don’t really have anything to do with the problem.

  6. Look at every service mentioned in the question. You can probably think of a better stack for the solution, but just adhere to what they present.

  7. If you have no clue about an answer, start by ruling out the ones you KNOW are wrong and then guess.

  8. Take as many practice exams (like Tutorial Dojo’s) as you can. On review, filter the results to “incorrect”. Open another tab on the subject and read up, or bookmark it.

  9. I would get 50-60% on the first pass at each exam, then 85-95% after reading the answers and going through the open tabs and bookmarks.

  10. If taking the Pearson-proctored online test, download the OnVue app as soon as you can. Installing that thing made me miss the 15-minute window, and I had to rebook and pay. Their check-in is confusing between all the cert portals and their own site. Just use “Manage Pearson Exams” from the AWS cert portal to force auth to theirs.
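
As promised above, here is a minimal boto3 sketch of the SNS fan-out pattern those questions target: one topic delivering each published message to several SQS queues. Topic and queue names are placeholders.

```python
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# One topic fans out to several queues; every subscriber gets its own copy.
topic_arn = sns.create_topic(Name="orders")["TopicArn"]

for name in ("billing-queue", "shipping-queue"):
    queue_url = sqs.create_queue(QueueName=name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # A real setup also needs an SQS queue policy allowing the topic to
    # call sqs:SendMessage on each queue; omitted here for brevity.
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# A single publish is delivered to every subscribed queue independently.
sns.publish(TopicArn=topic_arn, Message=json.dumps({"order_id": 123}))
```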

 

New versions of the Developer (Associate) and DevOps Engineer (Professional) exams in Feb/March 2023

AWS Certified Developer – Associate

  • Current version: DVA-C01

  • New version: DVA-C02

  • Last day to take the current exam: 2023-02-27

  • Registration open for the updated exam: 2023-01-31

  • First day to take the updated exam: 2023-02-28

AWS Certified DevOps Engineer – Professional

  • Current version: DOP-C01

  • New version: DOP-C02

  • Last day to take the current exam: 2023-03-06

  • Registration open for the updated exam: 2023-01-31

  • First day to take the updated exam: 2023-03-07

Both updates were posted on 2022-10-04 on https://aws.amazon.com/certification/coming-soon/. The exam guides and practice questions are also available there.

Passed DVA-C01 scored 910

Last January, as usual, I chose some goals to achieve during 2022, including obtaining an AWS certification. In February I started studying using the Pluralsight platform. The course for developers is a good introduction but is messy at times. At the end of the course I had a big picture of the main AWS services, but I was really confused by all the different topics. Pluralsight also offers some labs that allowed me to practice with AWS services, because my work wasn't related to the cloud.

Then a senior engineer suggested I study using the ACloudGuru platform, and I loved their content. They focus only on the details necessary for the exam. During the videos they show useful lists that summarize services and tables that compare them. Moreover, they offer a fairly unrestricted playground where you can get hands-on. So I could try their labs, but also write my first CloudFormation scripts and Lambdas. There are also 4 practice exams that are very similar to the real questions. Even though their platform is buggy at times, the subscription was worth it.

At the same time I purchased the pack of practice questions from Jon Bonso on Udemy, and indeed the answer explanations are very detailed and very useful.

To get more practice, I added the questions I got wrong to the Anki desktop application and reviewed them on my phone whenever I could. I ended up with 150 different questions in Anki.

In the meantime, in August I left my software engineering job because my boss told me that I was not able to work with cloud technologies. In September I started a new job as a cloud engineer at a new company. Finally, in October I passed the Developer Associate exam with a score of 910.

I put a lot of effort into preparing for the exam and eventually I got it! However, I believe this is just my first step and I want to keep studying. My next objective is the Solutions Architect Associate certification.

AWS Certified Developer Associate Exam Prep – DVA-C01 DVA-C02

Top 100 AWS Solutions Architect Associate Certification Exam Questions and Answers Dump SAA-C03

Top 60 AWS Solution Architect Associate Exam Tips

Ace the 2023 AWS Solutions Architect Associate SAA-C03 Exam with Confidence

Pass the 2023 AWS Certified Machine Learning Specialty MLS-C01 Exam with Flying Colors

List of freely available programming books - What is the single most influential book every programmer should read?





