2022 AWS Certified Developer Associate Exam Preparation: Questions and Answers Dump

Welcome to AWS Certified Developer Associate Exam Preparation: definition and objectives, top 100 questions and answers dump, white papers, courses, labs and training materials, exam info and details, references, jobs, and other AWS certificates

AWS Developer Associate DVA-C01 Exam Prep

#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech

What is the AWS Certified Developer Associate Exam?

This AWS Certified Developer-Associate Examination is intended for individuals who perform a Developer role. It validates an examinee’s ability to:

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications by using AWS

Recommended general IT knowledge
The target candidate should have the following:
– In-depth knowledge of at least one high-level programming language
– Understanding of application lifecycle management
– The ability to write code for serverless applications
– Understanding of the use of containers in the development process

Recommended AWS knowledge
The target candidate should be able to do the following:

  • Use the AWS service APIs, CLI, and software development kits (SDKs) to write applications
  • Identify key features of AWS services
  • Understand the AWS shared responsibility model
  • Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy applications on AWS
  • Use and interact with AWS services
  • Apply basic understanding of cloud-native applications to write code
  • Write code by using AWS security best practices (for example, use IAM roles instead of secret and access keys in the code)
  • Author, maintain, and debug code modules on AWS

What is considered out of scope for the target candidate?
The following is a non-exhaustive list of related job tasks that the target candidate is not expected to be able to perform. These items are considered out of scope for the exam:
– Design architectures (for example, distributed system, microservices)
– Design and implement CI/CD pipelines
– Administer IAM users and groups
– Administer Amazon Elastic Container Service (Amazon ECS)
– Design AWS networking infrastructure (for example, Amazon VPC, AWS Direct Connect)
– Understand compliance and licensing

Exam content
Response types
There are two types of questions on the exam:
– Multiple choice: Has one correct response and three incorrect responses (distractors)
– Multiple response: Has two or more correct responses out of five or more response options
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose.
Distractors are generally plausible responses that match the content area.
Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.

Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context.
The percentage in each domain represents only scored content.

Domain 1: Deployment 22%
Domain 2: Security 26%
Domain 3: Development with AWS Services 30%
Domain 4: Refactoring 10%
Domain 5: Monitoring and Troubleshooting 12%

Domain 1: Deployment
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
–  Commit code to a repository and invoke build, test and/or deployment actions
–  Use labels and branches for version and release management
–  Use AWS CodePipeline to orchestrate workflows against different environments
–  Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS CodeDeploy for CI/CD purposes
–  Perform a rollback plan based on the application deployment policy

1.2 Deploy applications using AWS Elastic Beanstalk.
–  Utilize existing supported environments to define a new application stack
–  Package the application
–  Introduce a new application version into the Elastic Beanstalk environment
–  Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with additional batch, immutable)
–  Validate application health using Elastic Beanstalk dashboard
–  Use Amazon CloudWatch Logs to instrument application logging

1.3 Prepare the application deployment package to be deployed to AWS.
–  Manage the dependencies of the code module (like environment variables, config files and static image files) within the package
–  Outline the package/container directory structure and organize files appropriately
–  Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)

1.4 Deploy serverless applications.
–  Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template
–  Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)

Domain 2: Security
2.1 Make authenticated calls to AWS services.
–  Communicate the required policy based on the least privileges required by the application
–  Assume an IAM role to access a service
–  Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)
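The assume-role flow in 2.1 can be sketched in Python; the role ARN and session name below are illustrative placeholders, not values from the source, and the actual boto3 call appears only in a comment.

```python
def assume_role_params(role_arn, session_name, duration_seconds=900):
    """Build the parameter map for an STS AssumeRole call (sketch).

    The role ARN and session name are caller-supplied placeholders;
    nothing here is specific to any real account."""
    return {
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "DurationSeconds": duration_seconds,
    }

# With boto3 (not executed here), the temporary credentials would come back as:
#   creds = boto3.client("sts").assume_role(**assume_role_params(...))["Credentials"]
# On EC2 or Lambda, the SDK's default credential provider chain can pick up an
# instance/execution role automatically, which is why application code should
# not embed long-lived access keys.

params = assume_role_params(
    "arn:aws:iam::123456789012:role/app-role",  # hypothetical role ARN
    "app-session",
)
print(params["RoleSessionName"])
```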

2.2 Implement encryption using AWS services.
– Encrypt data at rest (client side; server side; envelope encryption) using AWS services
–  Encrypt data in transit

2.3 Implement application authentication and authorization.
– Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools
– Use Amazon Cognito-provided credentials to write code that accesses AWS services
– Use Amazon Cognito Sync to synchronize user profiles and data
– Use developer-authenticated identities to interact between end user devices, backend authentication, and Amazon Cognito

Domain 3: Development with AWS Services
3.1 Write code for serverless applications.
– Compare and contrast server-based vs. serverless model (e.g., micro services, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications)
– Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler)
– Create an API endpoint using Amazon API Gateway
–  Create and test appropriate API actions like GET, POST using the API endpoint
–  Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes)
–  Compute read/write capacity units for Amazon DynamoDB based on application requirements
–  Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis)
–  Invoke an AWS Lambda function synchronously and asynchronously
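The last bullet, invoking a Lambda function synchronously versus asynchronously, can be sketched as follows; the function name and payload are hypothetical, and the boto3 call itself is shown only in a comment.

```python
import json

def invoke_params(function_name, payload, asynchronous=False):
    """Parameters for a Lambda invocation: "RequestResponse" waits for the
    result (synchronous), while "Event" queues the invocation and returns
    immediately (asynchronous)."""
    return {
        "FunctionName": function_name,
        "InvocationType": "Event" if asynchronous else "RequestResponse",
        "Payload": json.dumps(payload),
    }

# With boto3 (not executed here):
#   boto3.client("lambda").invoke(**invoke_params("my-func", {"order_id": 1}))

print(invoke_params("my-func", {"order_id": 1}, asynchronous=True)["InvocationType"])
```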

3.2 Translate functional requirements into application design.
– Determine real-time vs. batch processing for a given use case
– Determine use of synchronous vs. asynchronous for a given use case
– Determine use of event vs. schedule/poll for a given use case
– Account for tradeoffs for consistency models in an application design

Domain 4: Refactoring
4.1 Optimize applications to best use AWS services and features.
– Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache)
– Apply an Amazon S3 naming scheme for optimal read performance

4.2 Migrate existing application code to run on AWS.
– Isolate dependencies
– Run the application as one or more stateless processes
– Develop in order to enable horizontal scalability
– Externalize state

Domain 5: Monitoring and Troubleshooting

5.1 Write code that can be monitored.
– Create custom Amazon CloudWatch metrics
– Perform logging in a manner available to systems operators
– Instrument application source code to enable tracing in AWS X-Ray

5.2 Perform root cause analysis on faults found in testing or production.
– Interpret the outputs from the logging mechanism in AWS to identify errors in logs
– Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues
– Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component

Which key tools, technologies, and concepts might be covered on the exam?

The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
– Analytics
– Application Integration
– Containers
– Cost and Capacity Management
– Data Movement
– Developer Tools
– Instances (virtual machines)
– Management and Governance
– Networking and Content Delivery
– Security
– Serverless

AWS services and features

Analytics:
– Amazon Elasticsearch Service (Amazon ES)
– Amazon Kinesis
Application Integration:
– Amazon EventBridge (Amazon CloudWatch Events)
– Amazon Simple Notification Service (Amazon SNS)
– Amazon Simple Queue Service (Amazon SQS)
– AWS Step Functions

Compute:
– Amazon EC2
– AWS Elastic Beanstalk
– AWS Lambda

Containers:
– Amazon Elastic Container Registry (Amazon ECR)
– Amazon Elastic Container Service (Amazon ECS)
– Amazon Elastic Kubernetes Services (Amazon EKS)

Database:
– Amazon DynamoDB
– Amazon ElastiCache
– Amazon RDS

Developer Tools:
– AWS CodeArtifact
– AWS CodeBuild
– AWS CodeCommit
– AWS CodeDeploy
– Amazon CodeGuru
– AWS CodePipeline
– AWS CodeStar
– AWS Fault Injection Simulator
– AWS X-Ray

Management and Governance:
– AWS CloudFormation
– Amazon CloudWatch

Networking and Content Delivery:
– Amazon API Gateway
– Amazon CloudFront
– Elastic Load Balancing

Security, Identity, and Compliance:
– Amazon Cognito
– AWS Identity and Access Management (IAM)
– AWS Key Management Service (AWS KMS)

Storage:
– Amazon S3

Out-of-scope AWS services and features

The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.
Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant.
Out-of-scope AWS services and features include the following:
– AWS Application Discovery Service
– Amazon AppStream 2.0
– Amazon Chime
– Amazon Connect
– AWS Database Migration Service (AWS DMS)
– AWS Device Farm
– Amazon Elastic Transcoder
– Amazon GameLift
– Amazon Lex
– Amazon Machine Learning (Amazon ML)
– AWS Managed Services
– Amazon Mobile Analytics
– Amazon Polly

– Amazon QuickSight
– Amazon Rekognition
– AWS Server Migration Service (AWS SMS)
– AWS Service Catalog
– AWS Shield Advanced
– AWS Shield Standard
– AWS Snow Family
– AWS Storage Gateway
– AWS WAF
– Amazon WorkMail
– Amazon WorkSpaces

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.


AWS Certified Developer – Associate Practice Questions And Answers Dump

Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your
partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost.
How can you accommodate the partners’ broken web services without wasting your resources?

  • A. Create a delay queue and set DelaySeconds to 30 seconds
  • B. Requeue the message with a VisibilityTimeout of 30 seconds.
  • C. Create a dead letter queue and set the Maximum Receives to 3.
  • D. Requeue the message with a DelaySeconds of 30 seconds.
C. After a message is taken from the queue and returned for the maximum number of retries, it is
automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.

Reference: Amazon SQS Dead-Letter Queues
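Answer C can be sketched with the SQS RedrivePolicy attribute; the queue ARN below is a made-up example, and the boto3 call appears only in a comment.

```python
import json

def redrive_policy(dead_letter_arn, max_receives=3):
    """SQS queue attributes that move a message to the dead-letter queue
    after it has been received (and returned) max_receives times, so
    undeliverable commands are retained rather than retried forever."""
    return {
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dead_letter_arn,
            "maxReceiveCount": str(max_receives),
        })
    }

# With boto3 (not executed here):
#   sqs.set_queue_attributes(QueueUrl=queue_url,
#                            Attributes=redrive_policy(dlq_arn))

attrs = redrive_policy("arn:aws:sqs:us-east-1:123456789012:partner-dlq")
print(json.loads(attrs["RedrivePolicy"])["maxReceiveCount"])
```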



Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently.
What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. The AWS Documentation mentions the following:

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX



Q2: You are creating a DynamoDB table with the following attributes:

  • PurchaseOrderNumber (partition key)
  • CustomerID
  • PurchaseDate
  • TotalPurchaseValue

One of your applications must retrieve items from the table to calculate the total value of purchases for a
particular customer over a date range. What secondary index do you need to add to the table?

  • A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute
  • C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute


C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition
key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the
TotalPurchaseValue into the index provides all the data needed to satisfy the use case.

Reference: AWS DynamoDB Global Secondary Indexes
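The index from answer C could be expressed as a definition like the following, in the shape boto3's create_table/update_table expects; the index name is illustrative.

```python
# Sketch of the global secondary index from answer C. The index name is
# illustrative, not from the source.
customer_date_gsi = {
    "IndexName": "CustomerID-PurchaseDate-index",
    "KeySchema": [
        {"AttributeName": "CustomerID", "KeyType": "HASH"},    # partition key
        {"AttributeName": "PurchaseDate", "KeyType": "RANGE"}, # sort key
    ],
    "Projection": {
        "ProjectionType": "INCLUDE",
        "NonKeyAttributes": ["TotalPurchaseValue"],
    },
}

# A query for one customer over a date range (not executed here) would then
# use a KeyConditionExpression such as:
#   "CustomerID = :cid AND PurchaseDate BETWEEN :start AND :end"
print(customer_date_gsi["KeySchema"][0]["AttributeName"])
```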

Difference between local and global indexes in DynamoDB

    • Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
    • Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
    • Local Secondary Indexes still rely on the original Hash Key. When you supply a table with hash+range, think about the LSI as hash+range1, hash+range2.. hash+range6. You get 5 more range attributes to query on. Also, there is only one provisioned throughput.
    • Global Secondary Indexes define a new paradigm – different hash/range keys per index. This breaks the original usage of one hash key per table. This is also why, when defining a GSI, you are required to add provisioned throughput per index and pay for it.
    • Local Secondary Indexes can only be created when you are creating the table; there is no way to add a Local Secondary Index to an existing table, and once you create the index you cannot delete it.
    • Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.

Throughput :

  • Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations: one for the table and one for the index. Both operations consume write capacity units from the table.
  • Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation consumes read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations: one for the table and one for the index.
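The capacity arithmetic behind these throughput rules can be sketched as follows, using the standard DynamoDB sizing rules (one read unit covers up to 4 KB strongly consistent, one write unit covers up to 1 KB); the function names are my own.

```python
import math

def read_capacity_units(reads_per_sec, item_kb, eventually_consistent=False):
    """One RCU = one strongly consistent read/sec of an item up to 4 KB;
    eventually consistent reads need half as many units."""
    units = reads_per_sec * math.ceil(item_kb / 4)
    return math.ceil(units / 2) if eventually_consistent else units

def write_capacity_units(writes_per_sec, item_kb):
    """One WCU = one write/sec of an item up to 1 KB (rounded up)."""
    return writes_per_sec * math.ceil(item_kb)

# 100 strongly consistent reads/sec of 6 KB items: 6 KB rounds up to two
# 4 KB units, so 200 RCUs are needed.
print(read_capacity_units(100, 6))
```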



Q3: When referencing the remaining time left for a Lambda function to run within the function’s code you would use:

  • A. The event object
  • B. The timeLeft object
  • C. The remains object
  • D. The context object


D. The context object.

Reference: AWS Lambda
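A minimal sketch of answer D follows; the handler name, the 1-second threshold, and the stand-in context class are illustrative, not from the source.

```python
def handler(event, context):
    """Uses the context object's get_remaining_time_in_millis() to decide
    whether there is enough time left to do more work."""
    remaining_ms = context.get_remaining_time_in_millis()
    if remaining_ms < 1000:  # illustrative threshold
        return {"status": "stopping", "remaining_ms": remaining_ms}
    return {"status": "ok", "remaining_ms": remaining_ms}

# Outside Lambda, the handler can be exercised with a stand-in context object:
class FakeContext:
    def get_remaining_time_in_millis(self):
        return 5000

print(handler({}, FakeContext())["status"])
```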



Q4: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context
D. event, context

def handler_name(event, context):
    return some_value

Reference: AWS Lambda Function Handler in Python


Q5: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda Deployment Package


Q6: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime

C. Function code and libraries not included within the runtime

Reference: AWS Lambda Deployment Package in PowerShell


Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?

  • A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
  • B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
  • C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
  • D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.


D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return traffic. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return traffic.

Reference: Comparison of Security Groups and Network ACLs

AWS Security Groups and NACL



Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.

Reference: AWS Network Address Translation Gateway



Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.

Reference: AWS Auto Scaling



Q10: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere

D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda


Q11: You’re writing a script with an AWS SDK that uses AWS API actions, and you want it to create AMIs from non-EBS-backed instances for you. Which API call occurs in the final step of creating an AMI?

  • A. RegisterImage
  • B. CreateImage
  • C. ami-register-image
  • D. ami-create-image

A. It is RegisterImage. AWS API actions follow this capitalization style and do not contain hyphens.

Reference: API RegisterImage


Q12: When dealing with session state in EC2-based applications behind an Elastic Load Balancer, which option is generally considered the best practice for managing user sessions?

  • A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
  • B. Permanently assigning users to specific instances and always routing their traffic to those instances
  • C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
  • D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance


Q13: Which API call would best be used to describe an Amazon Machine Image?

  • A. ami-describe-image
  • B. ami-describe-images
  • C. DescribeImage
  • D. DescribeImages

D. In general, API actions stick to the PascalCase style with the first letter of every word capitalized.

Reference: API DescribeImages


Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

  • A. Autoscaling requires using Amazon EBS-backed instances
  • B. Virtual Private Cloud requires EBS backed instances
  • C. Amazon EBS-backed instances can be stopped and restarted without losing data
  • D. Instance-store backed instances can be stopped and restarted without losing data


C. Instance-store backed instances use “ephemeral” (temporary) storage, which is only available during the life of the instance. Rebooting an instance allows ephemeral data to persist, but stopping and starting an instance removes all ephemeral storage. EBS-backed instances, by contrast, can be stopped and restarted without losing data.

Reference: What is the difference between EBS and Instance Store?


Q15: After having created a new Linux instance on Amazon EC2 and downloaded the .pem file (called my_key.pem), you try to SSH into your instance’s IP address (52.2.222.22) using the following command:
ssh -i my_key.pem ec2-user@52.2.222.22
However, you receive the following error:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE!   @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
What is the most probable reason for this and how can you fix it?

  • A. You do not have root access on your terminal and need to use the sudo option for this to work.
  • B. You do not have enough permissions to perform the operation.
  • C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
  • D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.

D. You need to run something like: chmod 400 my_key.pem

Reference:


Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?

  • A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
  • B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
  • C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
  • D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.

D. The question states that an EBS-backed root device is mounted at /dev/sda1. EBS volumes persist through reboots and stop/start cycles, but the root volume is deleted by default when the instance is terminated. If this were an instance-store volume, the answer would be different.

Reference: AWS Root Device Storage


Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:

  • A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
  • B. Can only be used to launch EC2 instances in the same country as the AMI is stored
  • C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
  • D. Can be used to launch EC2 instances in any AWS region

C. AMIs are only available in the region where they are created. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region; however, you can copy an AMI from one region to another.

Reference: https://aws.amazon.com/amazon-linux-ami/


Q18: Which of the following statements is true about the Elastic File System (EFS)?

  • A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
  • B. EFS can be used by multiple EC2 instances simultaneously
  • C. EFS cannot be used by an instance using EBS
  • D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes


A. and B.

Reference: https://aws.amazon.com/efs/


Q19: IAM Policies, at a minimum, contain what elements?

  • A. ID
  • B. Effects
  • C. Resources
  • D. Sid
  • E. Principal
  • F. Actions

B. C. and F.

Effect – Use Allow or Deny to indicate whether the policy allows or denies access.

Resource – Specify a list of resources to which the actions apply.

Action – Include a list of actions that the policy allows or denies.

Id and Sid are not required fields in IAM policies; they are optional.

Reference: AWS IAM Access Policies
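A minimal policy containing only the required elements from the answer can be sketched as follows; the bucket ARN is a placeholder, not from the source.

```python
import json

# Minimal IAM policy: Effect, Action, and Resource are required; Sid and Id
# are optional and omitted here. The bucket ARN is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::example-bucket/*"],
        }
    ],
}
print(json.dumps(policy, indent=2))
```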


Q20: What are the main benefits of IAM groups?

  • A. The ability to create custom permission policies.
  • B. Assigning IAM permission policies to more than one user at a time.
  • C. Easier user/policy management.
  • D. Allowing EC2 instances to gain access to S3.

B. and C.

A. is incorrect: This is a benefit of IAM generally or a benefit of IAM policies. But IAM groups don’t create policies, they have policies attached to them.

Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

 


Q21: What are benefits of using AWS STS?

  • A. Grant access to AWS resources without having to create an IAM identity for them
  • B. Since credentials are temporary, you don’t have to rotate or revoke them
  • C. Temporary security credentials can be extended indefinitely
  • D. Temporary security credentials can be restricted to a specific region

A. and B. Temporary security credentials expire after a set duration and cannot be extended indefinitely, and they are not restricted to a specific Region.

Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html

Top

Q22: What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

 

Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?

  • A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1.micro to m4.large.
  • B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
  • C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
  • D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.

B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence.
Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation.
During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:

  • Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options at this level unless overridden.
  • Saved configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
  • Configuration files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle.

    Configuration files are executed in alphabetical order. For example, .ebextensions/01run.config is executed before .ebextensions/02do.config.

  • Default values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.

If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or applied directly to the environment, it is stored as part of the environment’s configuration; these settings can be removed with the AWS CLI or the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting is loaded from the configuration files in the source bundle.

Reference: Managing EC2 features – Elastic Beanstalk
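As a sketch, the instance type can be expressed as an option setting. The snippet below uses the .ebextensions YAML format (the filename is hypothetical); the same option setting can equally be captured in a saved configuration, as the correct answer describes:

```yaml
# .ebextensions/instances.config (hypothetical filename)
option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: m4.large
```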

Q24: What statements are true about Availability Zones (AZs) and Regions?

  • A. There is only one AZ in each AWS Region
  • B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
  • C. AZs can be moved between AWS Regions based on your needs
  • D. There are (almost always) two or more AZs in each AWS Region


B and D.

Reference: AWS Global Infrastructure

Top

Q25: An AWS Region contains:

  • A. Edge Locations
  • B. Data Centers
  • C. AWS Services
  • D. Availability Zones


B. C. D. Edge locations are actually distinct locations that don’t explicitly fall within AWS regions.

Reference: AWS Global Infrastructure


Top

Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?

  • A. Eventual Consistent Reads
  • B. Conditional reads for Consistency
  • C. Strongly Consistent Reads
  • D. Not possible


C. This is stated very clearly in the AWS documentation on read consistency for DynamoDB: only with strongly consistent reads are you guaranteed to get the latest value after all successful writes have completed.

Reference: https://aws.amazon.com/dynamodb/faqs/
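As a sketch, a strongly consistent read is requested by setting the ConsistentRead flag. The helper, table, and key names below are hypothetical, but the fields map directly onto the DynamoDB GetItem API (e.g. boto3’s dynamodb.get_item(**request)):

```python
# Build a GetItem request that asks for a strongly consistent read.
def build_get_item_request(table_name, key):
    return {
        "TableName": table_name,
        "Key": key,
        "ConsistentRead": True,  # default is False (eventually consistent)
    }

request = build_get_item_request("Orders", {"OrderId": {"S": "1001"}})
```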


Top

Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is taken during the migration process. Which of the following steps would you consider for this requirement?

  • A. Create an Opswork stack and deploy the Docker containers
  • B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
  • C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
  • D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.


B. The Elastic Beanstalk service is the ideal service to quickly provision development environments. You can also create environments which can be used to host Docker based containers.

Reference: Create and Deploy Docker in AWS


Top

Q28: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200–500 MB. You’ve seen that the application sometimes takes longer than expected to upload the object. You want to improve the performance of the application. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket



C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
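The part arithmetic behind a multipart upload can be sketched as follows (the 100 MB part size is illustrative; S3 requires every part except the last to be at least 5 MB):

```python
import math

# How many parts a multipart upload needs for a given object size.
def part_count(object_size_mb, part_size_mb=100):
    return math.ceil(object_size_mb / part_size_mb)

# A 500 MB object split into 100 MB parts: the parts can be uploaded in
# parallel and the upload is then completed with a single call.
print(part_count(500))  # → 5
```

In practice the SDKs can do this automatically; for example, boto3’s TransferConfig exposes a multipart_threshold above which upload_file switches to multipart uploads.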


Top

Q29: A security system monitors 600 cameras, saving image metadata every 1 minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?

  • A. 6000
  • B. 10
  • C. 3600
  • D. 600


B. Write capacity for a DynamoDB table is specified as the number of 1 KB writes per second. Since each write happens only once per minute, divide 600 by 60 to get the number of 1 KB writes per second, which gives a value of 10.

You can specify the write capacity in the Capacity tab of the DynamoDB table.

Reference: AWS working with tables
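The arithmetic above can be sketched as a small helper (the function name is ours; one write capacity unit covers one write of up to 1 KB per second):

```python
import math

# Provisioned write-capacity estimate for DynamoDB.
def write_capacity_units(writes_per_minute, item_size_kb=1):
    writes_per_second = writes_per_minute / 60
    return writes_per_second * math.ceil(item_size_kb / 1)  # round up to 1 KB units

# 600 cameras, each writing 1 KB once per minute = 10 writes of 1 KB per second
print(write_capacity_units(600))  # → 10.0
```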

Q30: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context


D. event, context

def handler_name(event, context):
    return some_value

Reference: AWS Lambda Function Handler in Python

Top

Q31: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere
Reference: AWS Lambda Deployment Package

Top

Q32: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime


C. Function code and libraries not included within the runtime
Reference: AWS Lambda Deployment Package in PowerShell

Top

Q33: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.
Reference: AWS Network Address Translation Gateway

Top

Q34: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.
Reference: AWS Auto Scaling

Top


Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?

  • A. Lazy loading
  • B. Write-through
  • C. Error retries
  • D. Exponential backoff



Answer – A
Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. All other options are incorrect.
Reference: Caching Strategies

Top

Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15–60 seconds. How should the Developer efficiently query the queue for new messages?

  • A. Use long polling
  • B. Set a custom visibility timeout
  • C. Use short polling
  • D. Implement exponential backoff


Answer – A
Long polling helps ensure that the application makes fewer requests for messages over a given period of time, which is more cost-effective. Since the messages only become available after 15–60 seconds and we don’t know exactly when they will arrive, it is better to use long polling.
Reference: Amazon SQS Long Polling
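As a sketch, long polling is enabled per request via the wait-time parameter. The queue URL below is hypothetical; with boto3 these parameters map directly onto sqs.receive_message(**params):

```python
# Build a long-polling ReceiveMessage request.
def build_receive_request(queue_url):
    return {
        "QueueUrl": queue_url,
        "MaxNumberOfMessages": 10,
        "WaitTimeSeconds": 20,  # long polling: wait up to 20 s for a message (0 = short polling)
    }

params = build_receive_request("https://sqs.us-east-1.amazonaws.com/123456789012/jobs")
```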

Top

Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. Assuming the new Lambda function is working as expected, which of the following will shift traffic from the original Lambda function to the new one in the shortest time frame?

  • A. Canary10Percent5Minutes
  • B. Linear10PercentEvery10Minutes
  • C. Canary10Percent15Minutes
  • D. Linear10PercentEvery1Minute


Answer – A
With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment, and the remaining 90 percent is shifted 5 minutes later.
Reference: Gradual Code Deployment

Top

Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::LayerVersion
  • D. AWS::Serverless::Function


Answer – B
The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket (or from the AWS Serverless Application Repository).
Reference: Declaring Serverless Resources

Top

Q35: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the following is true with regards to Envelope Encryption?

  • A. Data is encrypted by an encrypted Data key which is further encrypted using an encrypted Master Key.
  • B. Data is encrypted by plaintext Data key which is further encrypted using encrypted Master Key.
  • C. Data is encrypted by encrypted Data key which is further encrypted using plaintext Master Key.
  • D. Data is encrypted by plaintext Data key which is further encrypted using plaintext Master Key.


Answer – D
With Envelope Encryption, unencrypted data is encrypted using a plaintext Data key. This Data key is in turn encrypted using a plaintext Master key. The plaintext Master key is securely stored in AWS KMS and is known as a Customer Master Key.
Reference: AWS Key Management Service Concepts

Top

 

Q36: You are developing an application that will be comprised of the following architecture –

  1. A set of EC2 instances to process the videos.
  2. These EC2 instances will be spun up by an Auto Scaling group.
  3. SQS queues to maintain the processing messages.
  4. There will be 2 pricing tiers.

How will you ensure that the premium customers’ videos are given more preference?

  • A. Create 2 Auto Scaling groups, one for normal and one for premium customers
  • B. Create 2 sets of EC2 instances, one for normal and one for premium customers
  • C. Create 2 SQS queues, one for normal and one for premium customers
  • D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.


Answer – C
The ideal option is to create 2 SQS queues; the application can then process messages from the high-priority queue first. The other options are not ideal, as they would lead to extra costs and extra maintenance.
Reference: SQS

Top

Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?

  • A. CustomerID
  • B. CustomerName
  • C. Location
  • D. Age


Answer- A
Use high-cardinality attributes. These are attributes that have distinct values for each item, such as e-mail ID, employee number, customer ID, session ID, order ID, and so on.
Use composite attributes. Try to combine more than one attribute to form a unique key.
Reference: Choosing the right DynamoDB Partition Key

Top

Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for network delays. Which of the following two mechanisms should he implement in the application?

  • A. Multiple SQS queues
  • B. Exponential backoff algorithm
  • C. Retries in your application code
  • D. Consider using the Java SDK.


Answer- B. and C.
In addition to simple retries, each AWS SDK implements exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values, and should be set based on the operation being performed, as well as other local factors, such as network latency.
Reference: Error Retries and Exponential Backoff in AWS
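A minimal sketch of exponential backoff with full jitter; the base delay and cap are illustrative values, not prescribed by any particular SDK:

```python
import random

# Compute progressively longer, jittered waits between retries.
def backoff_delays(max_retries, base=0.1, cap=20.0, seed=None):
    rng = random.Random(seed)
    delays = []
    for attempt in range(max_retries):
        ceiling = min(cap, base * (2 ** attempt))  # exponential growth, capped
        delays.append(rng.uniform(0, ceiling))     # full jitter spreads out retries
    return delays

delays = backoff_delays(5, seed=1)
```

Capping the delay and the number of retries matches the guidance above: both limits should be tuned to the operation and to local factors such as network latency.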

Top

 

Q39: An application is being developed that is going to write data to a DynamoDB table. You have to setup the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds. Each item is of size 6KB. The reads can be eventual consistent reads. What should be the read capacity that needs to be set on the table?

  • A. 10
  • B. 20
  • C. 6
  • D. 30


Answer – A

Since there are 300 items read every 30 seconds, that means there are (300/30) = 10 items read every second.
Since each item is 6 KB in size, 2 reads (of 4 KB each) are required for each item.
So we have a total of 2 × 10 = 20 reads per second.
Since eventual consistency is sufficient, we can divide the number of reads (20) by 2, giving a read capacity of 10.

Reference: Read/Write Capacity Mode
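The same arithmetic as a small helper (the function name is ours; one read capacity unit covers one strongly consistent read of up to 4 KB per second, and eventually consistent reads cost half):

```python
import math

# Provisioned read-capacity estimate for DynamoDB.
def read_capacity_units(items_per_second, item_size_kb, eventually_consistent=True):
    rcu_per_item = math.ceil(item_size_kb / 4)  # round up to 4 KB read units
    rcu = items_per_second * rcu_per_item
    return rcu / 2 if eventually_consistent else rcu

# 300 items / 30 s = 10 items per second, 6 KB each, eventually consistent
print(read_capacity_units(10, 6))  # → 10.0
```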


Top

Q40: You are in charge of deploying an application that will be hosted on an EC2 instance and sit behind an Elastic Load Balancer. You have been asked to monitor the incoming connections to the Elastic Load Balancer. Which of the below options can satisfy this requirement?

  • A. Use AWS CloudTrail with your load balancer
  • B. Enable access logs on the load balancer
  • C. Use a CloudWatch Logs Agent
  • D. Create a custom metric CloudWatch filter on your load balancer


Answer – B
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.
Reference: Access Logs for Your Application Load Balancer

Top

Q41: A static web site has been hosted on an S3 bucket and is now being accessed by users. The JavaScript section of one of the web pages has been changed to access data hosted in another S3 bucket. Now that web page is no longer loading in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket


Answer – C

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.

Reference: Cross-Origin Resource Sharing (CORS)
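As a sketch, here is a CORS rule (in the JSON form used by the S3 console and API) on the second bucket that allows the website’s origin to make GET and PUT requests; the origin shown is the example endpoint from Scenario 1:

```json
[
  {
    "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
    "AllowedMethods": ["GET", "PUT"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```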


Top

 

Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.


Answer- C
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on key values generated from the user ID.

Reference: The AWS Security Token Service (STS)
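As a sketch of option C, the application authenticates the user itself and then calls STS to mint temporary, user-scoped credentials. The role ARN, bucket name, and helper function below are hypothetical; the dictionary fields correspond to the STS AssumeRole API (e.g. boto3’s sts.assume_role(**request)):

```python
import json

# Build an AssumeRole request whose session policy limits the temporary
# credentials to one user's prefix in the photo bucket.
def build_assume_role_request(user_id):
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::photo-bucket/{user_id}/*",
        }],
    }
    return {
        "RoleArn": "arn:aws:iam::123456789012:role/photo-app-user",
        "RoleSessionName": f"user-{user_id}",
        "Policy": json.dumps(session_policy),  # session policy narrows the role's permissions
        "DurationSeconds": 3600,
    }

request = build_assume_role_request("alice")
```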


Top

Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.

  • A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
  • B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
  • C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
  • D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.



Answer – C
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data.
Reference: Amazon Kinesis

Top

 

Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?

  • A. AWS Simple Storage Service
  • B. AWS DynamoDB
  • C. AWS RDS
  • D. AWS Redshift



Answer – B
DynamoDB is a good solution for storing session state. Its low access latency makes it well suited as a data store for session management.
Reference: Scalable Session Handling in PHP Using Amazon DynamoDB

Top

Q45: Your application currently interacts with a DynamoDB table. Records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the below features should be used when developing such a solution?

  • A. AWS DynamoDB Encryption
  • B. AWS DynamoDB Streams
  • C. AWS DynamoDB Accelerator
  • D. AWS Table Accelerator


Answer – B
DynamoDB Streams Use Cases and Design Patterns This post describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. We will consider how to manage the following scenarios:

  • How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
  • How do you trigger an event based on a particular transaction?
  • How do you audit or archive transactions?
  • How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?

Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states: success or failure. In other words, there is no partial completion. As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it’s consistent and can be rolled back before commit.

You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.

AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region.

All of the other options are incorrect since none of them would meet the core requirement.
Reference: DynamoDB Streams Use Cases and Design Patterns
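A minimal sketch of a consumer reacting to item updates. The handler shape follows the Lambda/DynamoDB Streams event format, but the function itself and the write to the secondary table are hypothetical:

```python
# Lambda handler (hypothetical) consuming a DynamoDB Streams event.
# It reads the post-change image of inserted/updated items; a real function
# would write each image to the secondary table, e.g. with boto3's put_item.
def handler(event, context):
    replicated = 0
    for record in event.get("Records", []):
        if record.get("eventName") in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"]["NewImage"]  # item after the change
            # boto3 put_item(TableName="secondary-table", Item=new_image) would go here
            replicated += 1
    return {"replicated": replicated}
```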


Top

 

Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and the scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?

  • A. Large Page size
  • B. Reduced page size
  • C. Parallel Scans
  • D. Sequential scans

Answer – B

When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table’s provisioned throughput.

Reduce page size: because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request with a smaller page size uses fewer read operations and creates a “pause” between requests. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations allows your other critical requests to succeed without throttling.
Reference1: Rate-Limited Scans in Amazon DynamoDB

Reference2: Best Practices for Querying and Scanning Data


Top

 

Q47: Which of the following are correct ways of passing a stage variable to an HTTP URL? (Select TWO.)

  • A. http://example.com/${}/prod
  • B. http://example.com/${stageVariables.}/prod
  • C. http://${stageVariables.}.example.com/dev/operation
  • D. http://${stageVariables}.example.com/dev/operation
  • E. http://${}.example.com/dev/operation
  • F. http://example.com/${stageVariables}/prod


Answer – B. and C.
A stage variable can be used as part of an HTTP integration URL in the following cases:

  • A full URI without protocol
  • A full domain
  • A subdomain
  • A path
  • A query string

In the above case, options B and C show a stage variable used as a path and as a subdomain, respectively.
Reference: Amazon API Gateway Stage Variables Reference

Top

Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes, which they use for their on-premises server configuration, in AWS. Which of the following services would be ideal to use in this regard?

  • A. AWS Elastic Beanstalk
  • B. AWS OpsWorks
  • C. AWS CloudFormation
  • D. AWS SQS


Answer – B
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management.
Reference: AWS OpsWorks

Top

 

Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?

  • A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
  • B. Configure S3 bucket tags with your AWS access keys for the bucket hosting your website so that the application can query them for access.
  • C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
  • D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.


Answer – C
With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure because you don’t have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be stored in or exposed by the application.
Reference: About Web Identity Federation

Top

Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?

  • A. Cognito Data
  • B. Cognito Events
  • C. Cognito Streams
  • D. Cognito Callbacks


Answer – C
Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can now configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams.
Reference: Amazon Cognito Streams

Top

 

Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? Choose 2 answers from the options given below.

  • A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
  • B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
  • C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
  • D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function


Answer: A and C.
AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC.
Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC
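As a sketch, the two required pieces of information (answers A and C) map onto the VpcConfig structure that the Lambda API accepts. The subnet and security group IDs below are hypothetical; with boto3, this dict would be passed as the VpcConfig argument of update_function_configuration:

```python
# Hypothetical IDs for illustration; answers A and C supply these two lists.
vpc_config = {
    "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
    "SecurityGroupIds": ["sg-0123abcd"],
}

# With boto3 (not imported here), the call would look like:
# boto3.client("lambda").update_function_configuration(
#     FunctionName="my-function", VpcConfig=vpc_config)
print(sorted(vpc_config))  # ['SecurityGroupIds', 'SubnetIds']
```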

Top

 

Q52: You’ve currently been tasked with migrating an existing on-premises environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant environments in the Elastic Beanstalk service that would be suitable to host your application. What should you consider doing in this case?

  • A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
  • B. Consider using Cloudformation to deploy your environment to Elastic Beanstalk
  • C. Consider using Packer to create a custom platform
  • D. Consider deploying your application using the Elastic Container Service


Answer – C
Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a custom image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software for which Elastic Beanstalk doesn’t provide a platform out of the box.

Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance.

To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems—Ubuntu, RHEL, or Amazon Linux (see the flavor entry in Platform.yaml File Format for the exact version numbers)—and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings.
Reference: AWS Elastic Beanstalk Custom Platforms

Top

Q53: Company B is writing 10 items to a DynamoDB table every second. Each item is 15.5 KB in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.

  • A. 10
  • B. 160
  • C. 155
  • D. 16


Answer – B.
One write capacity unit (WCU) represents one write per second for an item up to 1 KB in size. For larger items, DynamoDB rounds the item size up to the next 1 KB, so a 15.5 KB item consumes 16 WCUs per write. Writing 10 such items every second therefore requires 10 × 16 = 160 WCUs.
Reference: Read/Write Capacity Mode
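The arithmetic behind answer B can be sketched as follows, assuming the standard rule that one write capacity unit covers one write per second of an item up to 1 KB:

```python
import math

def required_wcu(item_size_kb: float, writes_per_second: int) -> int:
    """Provisioned write capacity units needed for a steady write load."""
    # Item size is rounded up to the next 1 KB boundary per write.
    return math.ceil(item_size_kb) * writes_per_second

# 10 items per second, 15.5 KB each -> 16 WCUs per item -> 160 WCUs total.
print(required_wcu(15.5, 10))  # 160
```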

Top


Q54: Which AWS Service can be used to automatically install your application code onto EC2, on premises systems and Lambda?

  • A. CodeCommit
  • B. X-Ray
  • C. CodeBuild
  • D. CodeDeploy


Answer: D

Reference: AWS CodeDeploy


Top

 

Q55: Which AWS service can be used to compile source code, run tests and package code?

  • A. CodePipeline
  • B. CodeCommit
  • C. CodeBuild
  • D. CodeDeploy


Answer: C.

Reference: AWS CodeBuild


Top

Q56: How can you prevent CloudFormation from deleting your entire stack on failure? (Choose 2)

  • A. Set the Rollback on failure radio button to No in the CloudFormation console
  • B. Set Termination Protection to Enabled in the CloudFormation console
  • C. Use the –disable-rollback flag with the AWS CLI
  • D. Use the –enable-termination-protection protection flag with the AWS CLI

Answer: A. and C.

Reference: Protecting a Stack From Being Deleted

Top

Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?

  • A. Continuous Integration
  • B. Continuous Deployment
  • C. Continuous Delivery
  • D. Continuous Development

Answer: A

Reference: What is Continuous Integration?

Top

Q58: When deploying application code to EC2, the AppSpec file can be written in which language?

  • A. JSON
  • B. JSON or YAML
  • C. XML
  • D. YAML

Answer: D.
For EC2/On-Premises deployments, the AppSpec file must be written in YAML.

Reference: CodeDeploy AppSpec File Reference

Top

 

Q59: Part of your CloudFormation deployment fails due to a misconfiguration. By default, what will happen?

  • A. CloudFormation will rollback only the failed components
  • B. CloudFormation will rollback the entire stack
  • C. Failed component will remain available for debugging purposes
  • D. CloudFormation will ask you if you want to continue with the deployment

Answer: B

Reference: Troubleshooting AWS CloudFormation


Top

Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?

  • A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
  • B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
  • C. Configure Notifications in the console. This will create a CloudWatch Events rule to send a notification to an SNS topic, which will trigger an email to be sent to the user.
  • D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.

Answer: C

Reference: Getting Started with Amazon SNS


Top

Q61: Which AWS service can be used to centrally store and version control your application source code, binaries, and libraries?

  • A. CodeCommit
  • B. CodeBuild
  • C. CodePipeline
  • D. ElasticFileSystem

Answer: A

Reference: AWS CodeCommit


Top

 

Q62: You are using CloudFormation to create a new S3 bucket, which of the following sections would you use to define the properties of your bucket?

  • A. Conditions
  • B. Parameters
  • C. Outputs
  • D. Resources

Answer: D

Reference: Resources


Top

Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?

  • A. Transforms
  • B. Outputs
  • C. Resources
  • D. Instances

Answer: C.
The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3.
Reference: Resources
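A minimal sketch of how these sections fit together, expressed here as a Python dict mirroring a CloudFormation template (the logical ID AppBucket and the export name are hypothetical):

```python
import json

# Resources declares what is provisioned; Outputs publishes values that
# another stack can import via the Export name.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppBucket": {"Type": "AWS::S3::Bucket"},
    },
    "Outputs": {
        "AppBucketName": {
            "Value": {"Ref": "AppBucket"},
            "Export": {"Name": "AppBucketName"},
        },
    },
}
# Serialize to the JSON template body you would hand to CloudFormation.
template_body = json.dumps(template, indent=2)
print("Resources" in template)  # True
```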

Top

Q64: Which AWS service can be used to fully automate your entire release process?

  • A. CodeDeploy
  • B. CodePipeline
  • C. CodeCommit
  • D. CodeBuild

Answer: B.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.

Reference: AWS CodePipeline


Top

 

Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?

  • A. Outputs
  • B. Transforms
  • C. Resources
  • D. Exports

Answer: A.
Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack.
Reference: CloudFormation Outputs

Top

 

Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?

  • A. Inputs
  • B. Resources
  • C. Transforms
  • D. Files

Answer: C.
Transforms is used to reference code located in S3 and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments.
Reference: Transforms

Top

Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?

  • A. buildspec.yml
  • B. appspec.json
  • C. appspec.yml
  • D. buildspec.json

Answer: C.

Reference: CodeDeploy AppSpec File Reference

Top

 

Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?

  • A. Share the code using an EBS volume
  • B. Copy and paste the code into the template each time you need to use it
  • C. Use a CloudFormation nested stack
  • D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.

Answer: C.

Reference: Working with Nested Stacks

Top

Q69: In the CodeDeploy AppSpec file, what are hooks used for?

  • A. To reference AWS resources that will be used during the deployment
  • B. Hooks are reserved for future use
  • C. To specify files you want to copy during the deployment.
  • D. To specify, scripts or function that you want to run at set points in the deployment lifecycle

Answer: D.
The ‘hooks’ section for an EC2/On-Premises deployment contains mappings that link deployment lifecycle event hooks to one or more scripts.

Reference: AppSpec ‘hooks’ Section

Top

 

Q70: Which command can you use to encrypt a plain text file using a CMK?

  • A. aws kms-encrypt
  • B. aws iam encrypt
  • C. aws kms encrypt
  • D. aws encrypt

Answer: C.
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext fileb://ExamplePlaintextFile --output text --query CiphertextBlob > C:\Temp\ExampleEncryptedFile.base64

Reference: AWS CLI Encrypt

Top

Q72: Which of the following is an encrypted key used by KMS to encrypt your data?

  • A. Customer Managed Key
  • B. Encryption Key
  • C. Envelope Key
  • D. Customer Master Key

Answer: C.
Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption.
Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.

Reference: Envelope Encryption
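The key hierarchy can be illustrated with a toy sketch. XOR stands in for a real cipher here purely for illustration; this is not real cryptography (KMS uses authenticated encryption), but the flow of wrapping and unwrapping the data key is the same:

```python
import secrets

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """XOR 'cipher' used only to illustrate the key hierarchy."""
    return bytes(d ^ k for d, k in zip(data, key))

master_key = secrets.token_bytes(32)       # never leaves "KMS"
data_key = secrets.token_bytes(32)         # plaintext data key
plaintext = b"sensitive user record".ljust(32)  # padded to key length for the toy XOR

ciphertext = toy_cipher(plaintext, data_key)    # data encrypted under the data key
wrapped_key = toy_cipher(data_key, master_key)  # data key encrypted under the master key
# Store ciphertext + wrapped_key together; discard the plaintext data key.

# To decrypt: unwrap the data key with the master key, then decrypt the data.
recovered = toy_cipher(ciphertext, toy_cipher(wrapped_key, master_key))
print(recovered == plaintext)  # True
```

The point of the envelope: the bulk data is encrypted locally with a cheap symmetric data key, and only the small data key ever needs to be protected by (or sent to) the master key.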

Top

 

Q73: Which of the following statements are correct? (Choose 2)

  • A. The Customer Master Key is used to encrypt and decrypt the Envelope Key or Data Key
  • B. The Envelope Key or Data Key is used to encrypt and decrypt plain text files.
  • C. The envelope Key or Data Key is used to encrypt and decrypt the Customer Master Key.
  • D. The Customer Master Key is used to encrypt and decrypt plain text files.

Answer: A. and B.

Reference: AWS Key Management Service Concepts

Top

Q74: Which of the following statements are correct in relation to KMS? (Choose 2)

  • A. KMS Encryption keys are regional
  • B. You cannot export your customer master key
  • C. You can export your customer master key.
  • D. KMS encryption Keys are global

Answer: A. and B.

Reference: AWS Key Management Service FAQs

Q75:  A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.)
A. Compiled application code
B. Java runtime environment
C. References to the event sources
D. Lambda execution role
E. Application dependencies


Answer: A. E.
Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your code and any dependencies.
Reference: Lambda deployment packages.

Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package?
A. A launch template for the Amazon EC2 Auto Scaling group
B. A CodeDeploy AppSpec file
C. An EC2 role that grants the application access to AWS services
D. An IAM policy that grants the application access to AWS services


Answer: B.
Notes: The CodeDeploy AppSpec (application specific) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file.
Reference: CodeDeploy application specification (AppSpec) files.
Category: Deployment

Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)

A. Create a new Lambda version every time a new code release needs testing.
B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version.
C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT.
D. Create a new Lambda layer every time a new code release needs testing.
E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.


Answer: A. B.
Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes, without affecting the stable production version of the code. By creating separate aliases for Production and Development, systems can initiate the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. The $LATEST version is the newest published version.
Reference: Lambda function versions.

For more information about Lambda layers, see Creating and sharing Lambda layers.

For more information about Lambda function aliases, see Lambda function aliases.

Category: Deployment

Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.)
A. Update event source mappings with the ARN of the Lambda layer.
B. Point a Lambda alias to a new version of the Lambda function.
C. Create a Lambda alias for each published version of the Lambda function.
D. Point a Lambda alias to a new Lambda function alias.
E. Update the event source mappings with the Lambda alias ARN.


Answer: B. E.
Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version.
Reference: Lambda function aliases.
Category: Deployment
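The mechanics can be sketched as follows: a qualified Lambda ARN ends in a version number or an alias name, and an event source mapping that references the alias ARN never has to change when the alias is repointed. The account ID and function name below are hypothetical:

```python
def lambda_arn(region: str, account_id: str, function_name: str,
               qualifier: str = "") -> str:
    """Build a Lambda function ARN, optionally qualified by a version or alias."""
    base = f"arn:aws:lambda:{region}:{account_id}:function:{function_name}"
    return f"{base}:{qualifier}" if qualifier else base

version_arn = lambda_arn("us-east-1", "123456789012", "orders", "2")
alias_arn = lambda_arn("us-east-1", "123456789012", "orders", "live")

# Publishing version 3 and repointing the "live" alias changes nothing below:
print(alias_arn)  # arn:aws:lambda:us-east-1:123456789012:function:orders:live
```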

Q78:  A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements?
A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C).
B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket.
C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket.
D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.


Answer: D.
Notes: When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and removes the encryption key from memory.
Reference: Protecting data using server-side encryption with customer-provided encryption keys (SSE-C).

Category: Security
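A sketch of what an SSE-C upload supplies (the bucket and object key below are hypothetical). With boto3, these would be extra parameters to put_object; the SDK sends the key in request headers over TLS, and S3 discards it from memory after encrypting the object:

```python
import base64
import hashlib
import secrets

customer_key = secrets.token_bytes(32)  # 256-bit key the company manages itself

# Extra parameters for an SSE-C put_object call (boto3 not imported here).
sse_c_params = {
    "Bucket": "example-bucket",          # hypothetical
    "Key": "users/record.json",          # hypothetical
    "SSECustomerAlgorithm": "AES256",
    "SSECustomerKey": base64.b64encode(customer_key).decode(),
    "SSECustomerKeyMD5": base64.b64encode(hashlib.md5(customer_key).digest()).decode(),
}
print(sse_c_params["SSECustomerAlgorithm"])  # AES256
```

The same key must be supplied again on every GET, which is what keeps key management on the company's side while S3 performs the encryption, exactly as the question requires.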

Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)

A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS).
B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS).
C. Use generated keys with the DynamoDB Encryption Client.
D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs).
E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).


Answer: A. C.
Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK.
Reference: Direct KMS Materials Provider.
Category: Deployment

Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.)
A. Create an AWS Lambda authorizer for the API.
B. Create an Amazon Cognito authorizer for the API.
C. Configure the authorizer for the API resource.
D. Configure the API methods to use the authorizer.
E. Configure the authorizer for the API stage.


Answer: B. D.
Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API.
Reference: Control access to a REST API using Amazon Cognito user pools as authorizer.
Category: Security

Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.)
A. Authenticate to the Amazon Cognito identity pool directly.
B. Authenticate to AWS Identity and Access Management (IAM) directly.
C. Authenticate to the Amazon Cognito user pool directly.
D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS).
E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.


Answer: C. E.
Notes: The Amazon Cognito user pool provides direct user authentication. The Amazon Cognito user pool provides a federated authentication option with third-party identity provider (IdP), including amazon.com.
Reference: Adding User Pool Sign-in Through a Third Party.
Category: Security

Q82: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.)
A. Define a AWS Step Functions task for each Lambda function.
B. Define a AWS Step Functions task for each workflow.
C. Write code that polls the AWS Step Functions invocation to coordinate each workflow.
D. Define an AWS Step Functions state machine for each workflow.
E. Define an AWS Step Functions state machine for each Lambda function.
Answer: A. D.
Notes: Step Functions is based on state machines and tasks. A state machine is a workflow; tasks perform work by coordinating with other AWS services, such as Lambda. A workflow can be expressed as a number of states, their relationships, and their input and output. You can coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine, written in the Amazon States Language.
Reference: Getting Started with AWS Step Functions (https://aws.amazon.com/step-functions/getting-started/)
Category: Development
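A minimal Amazon States Language sketch of one such workflow: a state machine whose single Task state invokes a Lambda function (the function ARN is hypothetical):

```python
import json

# One state machine per workflow (answer D); one Task state per Lambda
# function in the workflow (answer A).
state_machine = {
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",  # a Task state does the work, here via Lambda
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-order",
            "End": True,
        }
    },
}
definition = json.dumps(state_machine)  # the definition string Step Functions accepts
print(json.loads(definition)["StartAt"])  # ProcessOrder
```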

What is considered out of scope for the target candidate?
The following is a non-exhaustive list of related job tasks that the target candidate is not expected to be able to perform. These items are considered out of scope for the exam:
  • Design architectures (for example, distributed system, microservices)
  • Design and implement CI/CD pipelines
  • Administer IAM users and groups
  • Administer Amazon Elastic Container Service (Amazon ECS)
  • Design AWS networking infrastructure (for example, Amazon VPC, AWS Direct Connect)
  • Understand compliance and licensing

Exam content
Response types
There are two types of questions on the exam:
– Multiple choice: Has one correct response and three incorrect responses (distractors)
– Multiple response: Has two or more correct responses out of five or more response options
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose.
Distractors are generally plausible responses that match the content area.
Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.

Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context.
The percentage in each domain represents only scored content.

Domain 1: Deployment 22%
Domain 2: Security 26%
Domain 3: Development with AWS Services 30%
Domain 4: Refactoring 10%
Domain 5: Monitoring and Troubleshooting 12%

Domain 1: Deployment
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
–  Commit code to a repository and invoke build, test and/or deployment actions
–  Use labels and branches for version and release management
–  Use AWS CodePipeline to orchestrate workflows against different environments
–  Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS CodeDeploy for CI/CD purposes
–  Perform a rollback plan based on application deployment policy

1.2 Deploy applications using AWS Elastic Beanstalk.
–  Utilize existing supported environments to define a new application stack
–  Package the application
–  Introduce a new application version into the Elastic Beanstalk environment
–  Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable)
–  Validate application health using Elastic Beanstalk dashboard
–  Use Amazon CloudWatch Logs to instrument application logging

1.3 Prepare the application deployment package to be deployed to AWS.
–  Manage the dependencies of the code module (like environment variables, config files and static image files) within the package
–  Outline the package/container directory structure and organize files appropriately
–  Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)

1.4 Deploy serverless applications.
–  Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template
–  Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)

Domain 2: Security
2.1 Make authenticated calls to AWS services.
–  Communicate required policy based on least privileges required by application.
–  Assume an IAM role to access a service
–  Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)

2.2 Implement encryption using AWS services.
– Encrypt data at rest (client side; server side; envelope encryption) using AWS services
–  Encrypt data in transit

2.3 Implement application authentication and authorization.
– Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools
–  Use Amazon Cognito-provided credentials to write code that accesses AWS services
–  Use Amazon Cognito sync to synchronize user profiles and data
–  Use developer-authenticated identities to interact between end user devices, backend
authentication, and Amazon Cognito

Domain 3: Development with AWS Services
3.1 Write code for serverless applications.
– Compare and contrast server-based vs. serverless models (e.g., microservices, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications)
– Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler)
– Create an API endpoint using Amazon API Gateway
–  Create and test appropriate API actions like GET, POST using the API endpoint
–  Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes)
–  Compute read/write capacity units for Amazon DynamoDB based on application requirements
–  Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis)
–  Invoke an AWS Lambda function synchronously and asynchronously

3.2 Translate functional requirements into application design.
– Determine real-time vs. batch processing for a given use case
– Determine use of synchronous vs. asynchronous for a given use case
– Determine use of event vs. schedule/poll for a given use case
– Account for tradeoffs for consistency models in an application design

Domain 4: Refactoring
4.1 Optimize applications to best use AWS services and features.
– Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache)
– Apply an Amazon S3 naming scheme for optimal read performance

4.2 Migrate existing application code to run on AWS.
– Isolate dependencies
– Run the application as one or more stateless processes
– Develop in order to enable horizontal scalability
– Externalize state

Domain 5: Monitoring and Troubleshooting

5.1 Write code that can be monitored.
– Create custom Amazon CloudWatch metrics
– Perform logging in a manner available to systems operators
– Instrument application source code to enable tracing in AWS X-Ray

5.2 Perform root cause analysis on faults found in testing or production.
– Interpret the outputs from the logging mechanism in AWS to identify errors in logs
– Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues
– Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component

Which key tools, technologies, and concepts might be covered on the exam?

The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
– Analytics
– Application Integration
– Containers
– Cost and Capacity Management
– Data Movement
– Developer Tools
– Instances (virtual machines)
– Management and Governance
– Networking and Content Delivery
– Security
– Serverless

AWS services and features

Analytics:
– Amazon Elasticsearch Service (Amazon ES)
– Amazon Kinesis
Application Integration:
– Amazon EventBridge (Amazon CloudWatch Events)
– Amazon Simple Notification Service (Amazon SNS)
– Amazon Simple Queue Service (Amazon SQS)
– AWS Step Functions

Compute:
– Amazon EC2
– AWS Elastic Beanstalk
– AWS Lambda

Containers:
– Amazon Elastic Container Registry (Amazon ECR)
– Amazon Elastic Container Service (Amazon ECS)
– Amazon Elastic Kubernetes Service (Amazon EKS)

Database:
– Amazon DynamoDB
– Amazon ElastiCache
– Amazon RDS

Developer Tools:
– AWS CodeArtifact
– AWS CodeBuild
– AWS CodeCommit
– AWS CodeDeploy
– Amazon CodeGuru
– AWS CodePipeline
– AWS CodeStar
– AWS Fault Injection Simulator
– AWS X-Ray

Management and Governance:
– AWS CloudFormation
– Amazon CloudWatch

Networking and Content Delivery:
– Amazon API Gateway
– Amazon CloudFront
– Elastic Load Balancing

Security, Identity, and Compliance:
– Amazon Cognito
– AWS Identity and Access Management (IAM)
– AWS Key Management Service (AWS KMS)

Storage:
– Amazon S3

Out-of-scope AWS services and features

The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.
Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant.
Out-of-scope AWS services and features include the following:
– AWS Application Discovery Service
– Amazon AppStream 2.0
– Amazon Chime
– Amazon Connect
– AWS Database Migration Service (AWS DMS)
– AWS Device Farm
– Amazon Elastic Transcoder
– Amazon GameLift
– Amazon Lex
– Amazon Machine Learning (Amazon ML)
– AWS Managed Services
– Amazon Mobile Analytics
– Amazon Polly

– Amazon QuickSight
– Amazon Rekognition
– AWS Server Migration Service (AWS SMS)
– AWS Service Catalog
– AWS Shield Advanced
– AWS Shield Standard
– AWS Snow Family
– AWS Storage Gateway
– AWS WAF
– Amazon WorkMail
– Amazon WorkSpaces

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why each answer is right or wrong, and the concepts behind it, by carefully reading the reference documents in the answers.

Top

AWS Certified Developer – Associate Practice Questions And Answers Dump

Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your
partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost.
How can you accommodate the partners’ broken web services without wasting your resources?

  • A. Create a delay queue and set DelaySeconds to 30 seconds
  • B. Requeue the message with a VisibilityTimeout of 30 seconds.
  • C. Create a dead letter queue and set the Maximum Receives to 3.
  • D. Requeue the message with a DelaySeconds of 30 seconds.
AWS Developer Associates DVA-C01 PRO

#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech


C. After a message is taken from the queue and returned for the maximum number of retries, it is
automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.

Reference: Amazon SQS Dead-Letter Queues
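As a sketch of answer C, the wiring between a source queue and its dead-letter queue is the queue's RedrivePolicy attribute. The queue names and ARN below are made up for illustration; the actual boto3 call is shown only as a comment.

```python
import json

def redrive_policy(dlq_arn, max_receive_count=3):
    """Build the RedrivePolicy attribute that routes a message to the
    dead-letter queue after max_receive_count failed receives."""
    return {
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": str(max_receive_count),
        })
    }

# With boto3 (not executed here), the policy would be applied like:
#   sqs = boto3.client("sqs")
#   sqs.set_queue_attributes(QueueUrl=source_queue_url,
#                            Attributes=redrive_policy(dlq_arn))
attrs = redrive_policy("arn:aws:sqs:us-east-1:123456789012:commands-dlq")
print(attrs["RedrivePolicy"])
```

Messages that repeatedly fail delivery end up in the DLQ instead of cycling through the source queue and consuming resources.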


Top

Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently.
What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. The AWS Documentation mentions the following:

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

Q2: You are creating a DynamoDB table with the following attributes:

  • PurchaseOrderNumber (partition key)
  • CustomerID
  • PurchaseDate
  • TotalPurchaseValue

One of your applications must retrieve items from the table to calculate the total value of purchases for a
particular customer over a date range. What secondary index do you need to add to the table?

  • A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute
  • C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute


C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition
key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the
TotalPurchaseValue into the index provides all the data needed to satisfy the use case.

Reference: AWS DynamoDB Global Secondary Indexes
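A sketch of the index from answer C as it might be declared in the GlobalSecondaryIndexes element of a boto3 create_table or update_table call; the index name and throughput values are illustrative.

```python
# Global secondary index: partition key CustomerID, sort key PurchaseDate,
# projecting only the TotalPurchaseValue non-key attribute.
gsi_spec = {
    "IndexName": "CustomerID-PurchaseDate-index",  # illustrative name
    "KeySchema": [
        {"AttributeName": "CustomerID", "KeyType": "HASH"},     # partition key
        {"AttributeName": "PurchaseDate", "KeyType": "RANGE"},  # sort key
    ],
    "Projection": {
        "ProjectionType": "INCLUDE",
        "NonKeyAttributes": ["TotalPurchaseValue"],
    },
    "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
}
print(gsi_spec["IndexName"])
```

A query against this index can then use CustomerID as the equality condition and a BETWEEN condition on PurchaseDate for the date range.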

Difference between local and global indexes in DynamoDB

    • Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
    • Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
    • Local Secondary Indexes still rely on the original Hash Key. When you supply a table with hash+range, think about the LSI as hash+range1, hash+range2.. hash+range6. You get 5 more range attributes to query on. Also, there is only one provisioned throughput.
    • Global Secondary Indexes define a new paradigm – different hash/range keys per index.
      This breaks the original usage of one hash key per table. This is also why, when defining a GSI, you are required to add provisioned throughput per index and pay for it.
    • Local Secondary Indexes can only be created when you are creating the table. There is no way to add a Local Secondary Index to an existing table, and once you create the index you cannot delete it.
    • Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.

Throughput :

  • Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table and one for the index. Both operations will consume write capacity units from the table.
  • Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation consumes read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table and one for the index.
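The capacity-unit arithmetic behind provisioned throughput can be sketched as a small helper. One read capacity unit covers one strongly consistent read per second of an item up to 4 KB (an eventually consistent read costs half), and one write capacity unit covers one write per second of an item up to 1 KB.

```python
import math

def read_capacity_units(item_size_bytes, reads_per_second, strongly_consistent=True):
    # Round the item size up to the next 4 KB boundary per read;
    # eventually consistent reads cost half as much.
    units = math.ceil(item_size_bytes / 4096) * reads_per_second
    return units if strongly_consistent else math.ceil(units / 2)

def write_capacity_units(item_size_bytes, writes_per_second):
    # Round the item size up to the next 1 KB boundary per write.
    return math.ceil(item_size_bytes / 1024) * writes_per_second

# A 6 KB item read 100 times per second:
print(read_capacity_units(6 * 1024, 100))        # → 200 (strongly consistent)
print(read_capacity_units(6 * 1024, 100, False)) # → 100 (eventually consistent)
```

This is the calculation the exam's "compute read/write capacity units" objective refers to.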


Top

The Cloud is the future: Get Certified now.
The AWS Certified Solution Architect Average Salary is: US $149,446/year. Get Certified with the App below:

AWS Developer Associate DVA-C01 Exam Prep

#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech

Q3: When referencing the remaining time left for a Lambda function to run within the function’s code you would use:

  • A. The event object
  • B. The timeLeft object
  • C. The remains object
  • D. The context object


D. The context object.

Reference: AWS Lambda
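As a minimal sketch of answer D: in Python, the context object exposes get_remaining_time_in_millis(). The stub context below only imitates that one method for illustration; in a real invocation the Lambda runtime supplies the object.

```python
class FakeContext:
    """Stand-in for the runtime-provided context object (illustrative)."""
    def get_remaining_time_in_millis(self):
        return 2500  # e.g. 2.5 s left before the configured timeout

def handler(event, context):
    # Ask the context object how much execution time remains.
    remaining = context.get_remaining_time_in_millis()
    if remaining < 3000:
        return "wrap up: %d ms left" % remaining
    return "plenty of time"

print(handler({}, FakeContext()))  # → wrap up: 2500 ms left
```

Checking the remaining time lets a function stop gracefully (e.g., checkpoint its work) before the runtime kills it at the timeout.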


Top

Q4: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context
D. event, context

def handler_name(event, context):
    return some_value

Reference: AWS Lambda Function Handler in Python

Top

Q5: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda Deployment Package

Top

Q6: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime

C. Function code and libraries not included within the runtime

Reference: AWS Lambda Deployment Package in PowerShell
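A minimal sketch of answer C: the deployment package is just a ZIP archive containing your handler plus any libraries the runtime doesn't already provide. The file names below are illustrative; in practice you would zip real files from your project directory.

```python
import os
import tempfile
import zipfile

def build_package(zip_path, files):
    """Write the given {archive_name: content} mapping into a ZIP file,
    mimicking a Lambda deployment package layout."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for arcname, content in files.items():
            zf.writestr(arcname, content)
    return zip_path

tmp = tempfile.mkdtemp()
pkg = build_package(os.path.join(tmp, "function.zip"), {
    # Function code at the archive root...
    "lambda_function.py": "def handler(event, context):\n    return 'ok'\n",
    # ...plus a vendored library the runtime does not include (illustrative).
    "mylib/__init__.py": "# vendored third-party library\n",
})
with zipfile.ZipFile(pkg) as zf:
    print(sorted(zf.namelist()))
```

Runtime binaries are not included; the Lambda runtime itself supplies those, which is why answers A, B, and D are wrong.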

Top

Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?

  • A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
  • B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
  • C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
  • D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.


D. Security groups are stateful, so you do NOT need an explicit outbound rule for return traffic. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return traffic.

Reference: Comparison of Security Groups and Network ACLs

AWS Security Groups and NACL
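As a sketch of the security-group side of this answer, this is the shape of an inbound SSH rule; the CIDR range and the (commented) security group ID are made up for illustration.

```python
# Inbound rule allowing SSH (TCP port 22) from an illustrative admin network.
ssh_ingress = {
    "IpProtocol": "tcp",
    "FromPort": 22,
    "ToPort": 22,
    "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "admin network"}],
}
# With boto3 (not executed here):
#   ec2 = boto3.client("ec2")
#   ec2.authorize_security_group_ingress(GroupId="sg-0123456789abcdef0",
#                                        IpPermissions=[ssh_ingress])
print(ssh_ingress["FromPort"])
```

Because the security group is stateful, this single inbound rule is enough; with a NACL you would also need an outbound rule for the ephemeral return ports.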


Top

Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.

Reference: AWS Network Address Translation Gateway


Top

Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.

Reference: AWS Auto Scaling


Top


Q11: You’re writing a script with an AWS SDK that uses AWS API actions, and you want it to create AMIs for non-EBS-backed instances. Which API call occurs in the final step of creating an AMI?

  • A. RegisterImage
  • B. CreateImage
  • C. ami-register-image
  • D. ami-create-image

A. It is RegisterImage. AWS API actions follow this PascalCase capitalization and do not contain hyphens.

Reference: API RegisterImage

Top

Q12: When dealing with session state in EC2-based applications using Elastic Load Balancers, which option is generally thought of as the best practice for managing user sessions?

  • A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
  • B. Permanently assigning users to specific instances and always routing their traffic to those instances
  • C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
  • D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance

A. The recommended practice is to keep the web instances stateless and store session data in a shared cache such as ElastiCache (Redis or Memcached), so the ELB can route any request to any instance. The sticky-session approaches in B, C, and D tie users to a single instance and lose session data if that instance fails.
Top

Q13: Which API call would best be used to describe an Amazon Machine Image?

  • A. ami-describe-image
  • B. ami-describe-images
  • C. DescribeImage
  • D. DescribeImages

D. In general, API actions stick to the PascalCase style with the first letter of every word capitalized.

Reference: API DescribeImages

Top

Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

  • A. Autoscaling requires using Amazon EBS-backed instances
  • B. Virtual Private Cloud requires EBS backed instances
  • C. Amazon EBS-backed instances can be stopped and restarted without losing data
  • D. Instance-store backed instances can be stopped and restarted without losing data


C. Instance-store backed images use “ephemeral” (temporary) storage that is only available during the life of an instance. Rebooting an instance allows ephemeral data to persist; however, stopping and starting an instance will remove all ephemeral storage.

Reference: What is the difference between EBS and Instance Store?

Top

Q15: After creating a new Linux instance on Amazon EC2 and downloading the .pem file (called Toto.pem), you try to SSH into the instance’s IP address (54.1.132.33) using the following command.
ssh -i Toto.pem ec2-user@54.1.132.33
However you receive the following error.
@@@@@@@@ WARNING: UNPROTECTED PRIVATE KEY FILE! @ @@@@@@@@@@@@@@@@@@@
What is the most probable reason for this and how can you fix it?

  • A. You do not have root access on your terminal and need to use the sudo option for this to work.
  • B. You do not have enough permissions to perform the operation.
  • C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
  • D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.

D. You need to run something like: chmod 400 Toto.pem
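The same `chmod 400` fix can be done from Python; the temporary key file below is created only for illustration.

```python
import os
import stat
import tempfile

# Create a throwaway file standing in for the downloaded key (illustrative).
key_path = os.path.join(tempfile.mkdtemp(), "Toto.pem")
open(key_path, "w").close()

# Leave only the owner-read bit set (the equivalent of `chmod 400`),
# so SSH stops rejecting the key as publicly viewable.
os.chmod(key_path, stat.S_IRUSR)
print(oct(stat.S_IMODE(os.stat(key_path).st_mode)))  # → 0o400
```

SSH refuses private keys that are readable by group or others, which is exactly what mode 400 rules out.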

Reference:

Top

Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?

  • A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
  • B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
  • C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
  • D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.

D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.

Reference: AWS Root Device Storage

Top

Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:

  • A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
  • B. Can only be used to launch EC2 instances in the same country as the AMI is stored
  • C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
  • D. Can be used to launch EC2 instances in any AWS region

C. AMIs are only available in the region they are created in. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another.

Reference: https://aws.amazon.com/amazon-linux-ami/

Top

Q18: Which of the following statements is true about the Elastic File System (EFS)?

  • A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
  • B. EFS can be used by multiple EC2 instances simultaneously
  • C. EFS cannot be used by an instance using EBS
  • D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes


A. and B.

Reference: https://aws.amazon.com/efs/

Top

Q19: IAM Policies, at a minimum, contain what elements?

  • A. ID
  • B. Effects
  • C. Resources
  • D. Sid
  • E. Principal
  • F. Actions

B. C. and F.

Effect – Use Allow or Deny to indicate whether the policy allows or denies access.

Resource – Specify a list of resources to which the actions apply.

Action – Include a list of actions that the policy allows or denies.

Id and Sid are not required fields in IAM policies; they are optional.

Reference: AWS IAM Access Policies
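A minimal sketch of a policy statement containing just those required elements (Effect, Action, Resource); the bucket ARN is illustrative.

```python
import json

# Minimal identity-based policy: each statement needs an Effect,
# a list of Actions, and the Resources they apply to.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::example-bucket/*"],  # illustrative ARN
    }],
}
print(json.dumps(policy, indent=2))
```

Sid and Id could be added for documentation, but the policy is valid without them.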

Top


Q20: What are the main benefits of IAM groups?

  • A. The ability to create custom permission policies.
  • B. Assigning IAM permission policies to more than one user at a time.
  • C. Easier user/policy management.
  • D. Allowing EC2 instances to gain access to S3.

B. and C.

A. is incorrect: this is a benefit of IAM policies generally, not of groups. IAM groups don’t create policies; they have policies attached to them.

Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

 

Top

Q21: What are benefits of using AWS STS?

  • A. Grant access to AWS resources without having to create an IAM identity for them
  • B. Since credentials are temporary, you don’t have to rotate or revoke them
  • C. Temporary security credentials can be extended indefinitely
  • D. Temporary security credentials can be restricted to a specific region

A. and B. AWS STS issues temporary credentials, so you can grant access to AWS resources without creating an IAM identity (for example, for federated users), and because the credentials expire automatically you do not have to rotate or revoke them. They cannot be extended indefinitely, and they are not restricted to a specific region.

Top


 

Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?

  • A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1 micro to m4.large.
  • B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
  • C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
  • D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.

B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence.
Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified, or removed after environment creation.
During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:

  • Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options at this level unless overridden.
  • Saved configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
  • Configuration files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle. Configuration files are executed in alphabetical order. For example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
  • Default values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.

If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or directly to the environment, it is stored as part of the environment’s configuration. These settings can be removed with the AWS CLI or the EB CLI.
Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.

Reference: Managing ec2 features – Elastic beanstalk
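The same instance-type option can also be sketched as an .ebextensions configuration file (which sits lower in the precedence order than a saved configuration). The namespace and option name below are real; the file name is illustrative.

```yaml
# .ebextensions/instance-type.config (illustrative file name)
option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: m4.large
```

A saved configuration captures the same kind of option settings as a named document that can be applied during environment creation, which is what answer B relies on.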

Q24: What statements are true about Availability Zones (AZs) and Regions?

  • A. There is only one AZ in each AWS Region
  • B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
  • C. AZs can be moved between AWS Regions based on your needs
  • D. There are (almost always) two or more AZs in each AWS Region


B and D.

Reference: AWS global infrastructure/

Top

Q25: An AWS Region contains:

  • A. Edge Locations
  • B. Data Centers
  • C. AWS Services
  • D. Availability Zones


B. C. D. Edge locations are actually distinct locations that don’t explicitly fall within AWS regions.

Reference: AWS Global Infrastructure


Top

Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?

  • A. Eventual Consistent Reads
  • B. Conditional reads for Consistency
  • C. Strongly Consistent Reads
  • D. Not possible


C. This is stated clearly in the AWS documentation on read consistency for DynamoDB. Only with strongly consistent reads are you guaranteed to get the most recent value after all prior successful writes have completed.

Reference: https://aws.amazon.com/dynamodb/faqs/
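A sketch of how answer C looks in practice: the parameters for a strongly consistent GetItem request, reusing a table and key from the earlier DynamoDB question for illustration.

```python
# Request parameters for a strongly consistent read; table and key
# names are illustrative.
request = {
    "TableName": "Orders",
    "Key": {"PurchaseOrderNumber": {"S": "PO-1001"}},
    "ConsistentRead": True,  # default is False (eventually consistent)
}
# With boto3 (not executed here):
#   dynamodb = boto3.client("dynamodb")
#   item = dynamodb.get_item(**request)
print(request["ConsistentRead"])
```

Strongly consistent reads cost twice as many read capacity units as eventually consistent reads, which is the trade-off behind this choice.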


Top

Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is required during the migration process. Which of the following steps would you consider for this requirement?

  • A. Create an Opswork stack and deploy the Docker containers
  • B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
  • C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
  • D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.


B. The Elastic Beanstalk service is ideal for quickly provisioning development environments. You can also create environments that host Docker-based containers.

Reference: Create and Deploy Docker in AWS


Top

Q28: You’ve written an application that uploads objects onto an S3 bucket. The size of the objects varies between 200 and 500 MB. You’ve seen that the application sometimes takes longer than expected to upload an object. You want to improve the performance of the application. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket



C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
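The three-step multipart process described above is handled automatically by boto3's managed transfer layer. The sketch below is illustrative only: the bucket, key, and tuning values are placeholders, not part of the question.

```python
# Sketch: multipart upload via boto3's managed transfer layer, which
# switches to the S3 Multipart Upload API above `multipart_threshold`.
# Bucket, key, and file names are placeholders.

MB = 1024 * 1024

def part_count(object_size, part_size=8 * MB):
    """Number of parts a multipart upload would use (ceiling division)."""
    return max(1, -(-object_size // part_size))

def upload_large_object(filename, bucket, key):
    """Upload a file, letting boto3 split it into parallel multipart chunks."""
    import boto3
    from boto3.s3.transfer import TransferConfig

    config = TransferConfig(
        multipart_threshold=8 * MB,  # objects above 8 MB use multipart upload
        multipart_chunksize=8 * MB,  # size of each uploaded part
        max_concurrency=10,          # upload parts in parallel threads
    )
    boto3.client("s3").upload_file(filename, bucket, key, Config=config)
```

A 500 MB object uploaded in 8 MB parts, for example, would be split into 63 parts that can be uploaded concurrently.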


Top

Q29: A security system monitors 600 cameras, saving image metadata every 1 minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?

  • A. 6000
  • B. 10
  • C. 3600
  • D. 600


B. When you specify the write capacity of a table in DynamoDB, you specify it as the number of 1 KB writes per second. In the above question, since each camera writes once every minute, we divide 600 by 60 to get the number of 1 KB writes per second. This gives a value of 10.

You can specify the Write capacity in the Capacity tab of the DynamoDB table.

Reference: AWS working with tables
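The capacity arithmetic above can be written out in a few lines (the 1 KB-per-write-unit rule is the standard DynamoDB provisioned-throughput rule):

```python
import math

# 600 cameras each write one 1 KB item per minute, evenly spread over time.
items_per_minute = 600
item_size_kb = 1

writes_per_second = items_per_minute / 60             # 10 writes/second
wcu_per_item = math.ceil(item_size_kb / 1)            # a 1 KB write needs 1 write unit
required_wcu = int(writes_per_second) * wcu_per_item  # 10 write capacity units
```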

Q30: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context


D. event, context

def handler_name(event, context):
    return some_value
Reference: AWS Lambda Function Handler in Python

Top

Q31: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere
Reference: AWS Lambda Deployment Package

Top

Q32: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime


C. Function code and libraries not included within the runtime
Reference: AWS Lambda Deployment Package in PowerShell

Top

Q33: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.
Reference: AWS Network Address Translation Gateway

Top

Q34: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.
Reference: AWS Auto Scaling

Top


Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?

  • A. Lazy loading
  • B. Write-through
  • C. Error retries
  • D. Exponential backoff

Answer:


Answer – A
Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. All other options are incorrect.
Reference: Caching Strategies
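The lazy-loading (cache-aside) flow described above can be sketched as follows. This is a minimal illustration: `cache` stands in for an ElastiCache client and `load_from_rds` for the RDS query, and both are placeholders, not real AWS APIs.

```python
# Cache-aside (lazy loading) sketch: check the cache first, and only hit
# the database on a miss, then populate the cache for next time.
import time

CACHE_TTL_SECONDS = 300  # illustrative expiry for cached entries

def get_record(cache, load_from_rds, key):
    """Return the record from cache, falling back to the database on a miss."""
    entry = cache.get(key)
    if entry is not None:
        value, expires_at = entry
        if time.time() < expires_at:      # cache hit and still fresh
            return value
    value = load_from_rds(key)            # cache miss (or expired): read from RDS
    cache[key] = (value, time.time() + CACHE_TTL_SECONDS)  # populate the cache
    return value
```

A plain dict works as the cache stand-in here; in practice the same pattern applies with a Redis or Memcached client.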

Top

Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15–60 seconds. How should the Developer efficiently query the queue for new messages?

  • A. Use long polling
  • B. Set a custom visibility timeout
  • C. Use short polling
  • D. Implement exponential backoff


Answer – A. Long polling ensures that the application makes fewer requests for messages over a given period of time, which is more cost-effective. Since the messages only become available after 15 seconds and we don’t know exactly when they will arrive, it is better to use long polling.
Reference: Amazon SQS Long Polling
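A long poll is just a `ReceiveMessage` call with `WaitTimeSeconds` set (up to the SQS maximum of 20). The sketch below is illustrative; the queue URL is a placeholder.

```python
# Long-polling sketch: WaitTimeSeconds > 0 makes SQS hold the request open
# until a message arrives (or the wait expires), instead of returning empty.

def receive_params(queue_url, wait_seconds=20):
    """Build the receive_message arguments for a long poll."""
    if not 0 < wait_seconds <= 20:
        raise ValueError("SQS caps long polling at 20 seconds")
    return {
        "QueueUrl": queue_url,
        "MaxNumberOfMessages": 10,  # batch up to 10 messages per request
        "WaitTimeSeconds": wait_seconds,
    }

def poll_forever(queue_url):
    """Continuously long-poll the queue and delete processed messages."""
    import boto3
    sqs = boto3.client("sqs")
    while True:
        resp = sqs.receive_message(**receive_params(queue_url))
        for msg in resp.get("Messages", []):
            print(msg["Body"])  # process the message here
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=msg["ReceiptHandle"])
```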

Top

Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With the new Lambda function working as expected, which of the following will shift traffic from the original Lambda function to the new Lambda function in the shortest time frame?

  • A. Canary10Percent5Minutes
  • B. Linear10PercentEvery10Minutes
  • C. Canary10Percent15Minutes
  • D. Linear10PercentEvery1Minute


Answer – A
With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment, and all remaining traffic is shifted after 5 minutes.
Reference: Gradual Code Deployment

Top

Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::Layerversion
  • D. AWS::Serverless::Function


Answer – B
The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket.
Reference: Declaring Serverless Resources

Top

Q35: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the followings is True with regards to Envelope Encryption?

  • A. Data is encrypted by an encrypted Data key which is further encrypted using an encrypted Master Key.
  • B. Data is encrypted by plaintext Data key which is further encrypted using encrypted Master Key.
  • C. Data is encrypted by encrypted Data key which is further encrypted using plaintext Master Key.
  • D. Data is encrypted by plaintext Data key which is further encrypted using plaintext Master Key.


Answer – D
With Envelope Encryption, unencrypted data is encrypted using a plaintext Data key. That Data key is in turn encrypted using a plaintext Master key. The plaintext Master key is securely stored in AWS KMS and is known as a Customer Master Key.
Reference: AWS Key Management Service Concepts

Top

 

Q36: You are developing an application that will be comprised of the following architecture –

  1. A set of EC2 instances to process the videos.
  2. These EC2 instances will be spun up by an Auto Scaling group.
  3. SQS queues to maintain the processing messages.
  4. There will be 2 pricing tiers.

How will you ensure that the premium customers’ videos are given more preference?

  • A. Create 2 Auto Scaling groups, one for normal and one for premium customers
  • B. Create 2 sets of EC2 instances, one for normal and one for premium customers
  • C. Create 2 SQS queues, one for normal and one for premium customers
  • D. Create 2 Elastic Load Balancers, one for normal and one for premium customers


Answer – C
The ideal option would be to create 2 SQS queues. Messages can then be processed by the application from the high-priority queue first. The other options are not ideal, since they would lead to extra costs and extra maintenance.
Reference: SQS
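The two-queue pattern can be sketched as below. This is a minimal illustration: `poll` stands in for an SQS `receive_message` call, and the queue names are placeholders.

```python
# Two-queue priority sketch: always try to drain the premium queue before
# touching the standard queue.

def next_message(poll, premium_queue, standard_queue):
    """Return the next message to process, preferring the premium queue."""
    for queue in (premium_queue, standard_queue):
        messages = poll(queue)
        if messages:
            return messages[0]
    return None  # both queues are currently empty
```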

Top

Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?

  • A. CustomerID
  • B. CustomerName
  • C. Location
  • D. Age


Answer- A
Use high-cardinality attributes. These are attributes that have distinct values for each item, such as e-mail ID, employee number, customer ID, session ID, order ID, and so on.
Use composite attributes. Try to combine more than one attribute to form a unique key.
Reference: Choosing the right DynamoDB Partition Key

Top

Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for network delays. Which two of the following mechanisms should he implement in the application?

  • A. Multiple SQS queues
  • B. Exponential backoff algorithm
  • C. Retries in your application code
  • D. Consider using the Java sdk.


Answer- B. and C.
In addition to simple retries, each AWS SDK implements exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values, and should be set based on the operation being performed, as well as other local factors, such as network latency.
Reference: Error Retries and Exponential Backoff in AWS
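The retry-with-backoff behavior described above can be sketched as follows. The base delay and cap here are illustrative values, not the SDK defaults, and the jitter variant shown ("full jitter") is one common choice.

```python
import random
import time

# Exponential backoff with jitter, similar to what the AWS SDKs implement
# internally: progressively longer, randomized waits between retries.

def backoff_delay(attempt, base=0.1, cap=20.0):
    """Delay before retry number `attempt` (0-based): capped exponential, jittered."""
    exp = min(cap, base * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ... up to the cap
    return random.uniform(0, exp)          # jitter spreads concurrent retries apart

def call_with_retries(operation, max_attempts=5):
    """Run `operation`, retrying with exponential backoff on any exception."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise                      # out of retries: surface the error
            time.sleep(backoff_delay(attempt))
```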

Top

 

Q39: An application is being developed that is going to write data to a DynamoDB table. You have to set up the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds. Each item is 6 KB in size. The reads can be eventually consistent reads. What should be the read capacity that needs to be set on the table?

  • A. 10
  • B. 20
  • C. 6
  • D. 30


Answer – A

Since 300 items are read every 30 seconds, that means (300/30) = 10 items are read every second.
Since each item is 6 KB in size, and read capacity units are measured in 4 KB blocks, 2 read units are required for each item.
So we have a total of 2 × 10 = 20 strongly consistent reads per second.
Since eventual consistency is sufficient, we can divide the number of reads (20) by 2, giving a read capacity of 10.

Reference: Read/Write Capacity Mode
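The read-capacity arithmetic can be checked in a few lines (4 KB per read unit and the halving for eventual consistency are the standard DynamoDB rules):

```python
import math

# 300 items every 30 seconds, 6 KB per item, eventually consistent reads.
items_per_second = 300 // 30                   # 10 items/second
rcu_per_item = math.ceil(6 / 4)                # reads billed in 4 KB units -> 2 RCUs
strongly_consistent_rcu = items_per_second * rcu_per_item  # 20 read units
required_rcu = strongly_consistent_rcu // 2    # eventual consistency halves the cost
```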


Top

Q40: You are in charge of deploying an application that will be hosted on an EC2 Instance and sit behind an Elastic Load balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the below options can suffice this requirement?

  • A. Use AWS CloudTrail with your load balancer
  • B. Enable access logs on the load balancer
  • C. Use a CloudWatch Logs Agent
  • D. Create a custom metric CloudWatch filter on your load balancer


Answer – B
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.
Reference: Access Logs for Your Application Load Balancer

Top

Q41: A static web site has been hosted on a bucket and is now being accessed by users. One of the web pages’ JavaScript sections has been changed to access data hosted in another S3 bucket. Now that same web page is no longer loading in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket


Answer – C

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

Cross-Origin Resource Sharing use-case scenarios: the following are example scenarios for using CORS.

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.

Reference: Cross-Origin Resource Sharing (CORS)
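A CORS rule like the one in Scenario 1 can be applied programmatically. The sketch below is illustrative; the bucket name and origin are placeholders, and the specific methods/headers allowed would depend on what the page's JavaScript actually does.

```python
# Sketch: enabling CORS on a bucket so JavaScript served from the website
# endpoint may call the bucket's REST API endpoint.

def cors_configuration(origin):
    """CORS rule allowing GET/PUT requests from a single origin."""
    return {
        "CORSRules": [{
            "AllowedOrigins": [origin],        # site the JavaScript is served from
            "AllowedMethods": ["GET", "PUT"],  # the cross-origin requests to allow
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,             # let browsers cache the preflight
        }]
    }

def apply_cors(bucket, origin):
    """Attach the CORS rule to the bucket."""
    import boto3
    boto3.client("s3").put_bucket_cors(
        Bucket=bucket,
        CORSConfiguration=cors_configuration(origin),
    )
```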


Top

 

Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS) to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.


Answer- C
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on key values generated from the user ID.

Reference: The AWS Security Token Service (STS)


Top

Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.

  • A. Publish your data to CloudWatch Logs, and configure your application to Autoscale to handle the load on demand.
  • B. Publish your log data to an Amazon S3 bucket.  Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application which is configured to pull down your log files stored in Amazon S3
  • C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
  • D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.

Answer:


Answer – C
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data.
Reference: Amazon Kinesis

Top

 

Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?

  • A. AWS Simple Storage Service
  • B. AWS DynamoDB
  • C. AWS RDS
  • D. AWS Redshift

Answer:


Answer – B
DynamoDB is an ideal solution for session management storage. Data access latency is low, so it works well as a data store for session management.
Reference: Scalable Session Handling in PHP Using Amazon DynamoDB

Top

Q45: Your application currently interacts with a DynamoDB table. Records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the below features should be used when developing such a solution?

  • A. AWS DynamoDB Encryption
  • B. AWS DynamoDB Streams
  • C. AWS DynamoDB Accelerator
  • D. AWS Table Accelerator


Answer – B
DynamoDB Streams Use Cases and Design Patterns: this post describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. We will consider how to manage the following scenarios:

  • How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
  • How do you trigger an event based on a particular transaction?
  • How do you audit or archive transactions?
  • How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?

Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states: success or failure. In other words, there is no partial completion.

As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it’s consistent and can be rolled back before commit.

You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.

AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region. All of the other options are incorrect since none of these would meet the core requirement.
Reference: DynamoDB Streams Use Cases and Design Patterns
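A common way to consume the stream is a Lambda function triggered by the table's stream. The sketch below is illustrative: the secondary table name is a placeholder, and it copies each inserted/updated item across.

```python
# Sketch: a Lambda handler subscribed to the primary table's DynamoDB stream,
# writing each inserted/updated item into a secondary table.

SECONDARY_TABLE = "secondary-table"  # placeholder name

def extract_updates(records):
    """Pull the new item image out of INSERT/MODIFY stream records."""
    return [
        r["dynamodb"]["NewImage"]
        for r in records
        if r["eventName"] in ("INSERT", "MODIFY")
    ]

def handler(event, context):
    import boto3
    client = boto3.client("dynamodb")
    for item in extract_updates(event["Records"]):
        # Stream images already use DynamoDB's typed JSON, so they can be
        # passed straight to put_item.
        client.put_item(TableName=SECONDARY_TABLE, Item=item)
```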


Top

 

Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and the scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?

  • A. Large Page size
  • B. Reduced page size
  • C. Parallel Scans
  • D. Sequential scans

Answer – B
When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table’s provisioned throughput.

Reduce page size: because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request.

For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling.
Reference1: Rate-Limited Scans in Amazon DynamoDB

Reference2: Best Practices for Querying and Scanning Data
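The reduced-page-size technique can be sketched as below. This is illustrative: `table` stands in for a boto3 Table resource, and the limit/pause values are placeholders you would tune.

```python
# Sketch: scanning with a small page size (the Limit parameter) and a pause
# between pages to smooth out consumed read capacity.
import time

def scan_pages(table, limit=40, pause=0.5):
    """Yield items page by page, at most `limit` items per Scan request."""
    kwargs = {"Limit": limit}
    while True:
        page = table.scan(**kwargs)
        yield from page.get("Items", [])
        if "LastEvaluatedKey" not in page:
            break  # no more pages
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]  # resume point
        time.sleep(pause)  # the "pause" between requests avoids read bursts
```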


Top

 

Q47: Which of the following are correct ways of passing a stage variable to an HTTP URL? (Select TWO.)

  • A. http://example.com/${}/prod
  • B. http://example.com/${stageVariables.}/prod
  • C. http://${stageVariables.}.example.com/dev/operation
  • D. http://${stageVariables}.example.com/dev/operation
  • E. http://${}.example.com/dev/operation
  • F. http://example.com/${stageVariables}/prod


Answer – B. and C.
A stage variable can be used as part of an HTTP integration URL in the following cases:

  • A full URI without protocol
  • A full domain
  • A subdomain
  • A path
  • A query string

In the above case, options B and C show the stage variable used as a path and as a subdomain respectively.
Reference: Amazon API Gateway Stage Variables Reference

Top

Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes, which they use for their on-premises server configuration, in AWS. Which of the following services would be ideal to use in this regard?

  • A. AWS Elastic Beanstalk
  • B. AWS OpsWorks
  • C. AWS Cloudformation
  • D. AWS SQS


Answer – B
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management.
Reference: AWS OpsWorks

Top

 

Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?

  • A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
  • B. Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access.
  • C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
  • D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.


Answer – C
With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be used in this way.
Reference: About Web Identity Federation
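The token-exchange step can be sketched with `AssumeRoleWithWebIdentity`. This is illustrative only: the role ARN, session name, and token are placeholders, and in a browser app the same exchange is typically done via the SDK for JavaScript or Amazon Cognito.

```python
# Sketch: exchanging an IdP token for temporary AWS credentials and building
# a role-scoped DynamoDB client from them.

def credential_kwargs(creds):
    """Map an STS Credentials dict onto boto3 client keyword arguments."""
    return {
        "aws_access_key_id": creds["AccessKeyId"],
        "aws_secret_access_key": creds["SecretAccessKey"],
        "aws_session_token": creds["SessionToken"],
    }

def dynamodb_client_for(idp_token, role_arn):
    """Temporary, role-scoped DynamoDB client for a federated web user."""
    import boto3
    sts = boto3.client("sts")
    resp = sts.assume_role_with_web_identity(
        RoleArn=role_arn,                      # IAM role with DynamoDB permissions
        RoleSessionName="web-user-session",    # placeholder session name
        WebIdentityToken=idp_token,            # token returned by the OIDC IdP
    )
    return boto3.client("dynamodb", **credential_kwargs(resp["Credentials"]))
```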

Top

Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?

  • A. Cognito Data
  • B. Cognito Events
  • C. Cognito Streams
  • D. Cognito Callbacks


Answer – C
Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can now configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams
Reference: Amazon Cognito Streams

Top

 

Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? Choose 2 answers from the options given below.

  • A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
  • B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
  • C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
  • D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function


Answer: A and C.
AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC.
Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC
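Supplying those IDs can be sketched as below. All the IDs and the function name are placeholders; the same `VpcConfig` block can also be given at creation time or in the console.

```python
# Sketch: attaching an existing Lambda function to a VPC by supplying
# subnet IDs and security group IDs.

def vpc_config(subnet_ids, security_group_ids):
    """Build the VpcConfig block Lambda needs to reach resources in a VPC."""
    return {
        "SubnetIds": list(subnet_ids),
        "SecurityGroupIds": list(security_group_ids),
    }

def attach_to_vpc(function_name, subnet_ids, security_group_ids):
    """Update the function so Lambda creates ENIs in the given subnets."""
    import boto3
    boto3.client("lambda").update_function_configuration(
        FunctionName=function_name,
        VpcConfig=vpc_config(subnet_ids, security_group_ids),
    )
```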

Top

 

Q52: You’ve currently been tasked to migrate an existing on-premises environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant environments in the Elastic Beanstalk service that would be suitable to host your application. What should you consider doing in this case?

  • A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
  • B. Consider using Cloudformation to deploy your environment to Elastic Beanstalk
  • C. Consider using Packer to create a custom platform
  • D. Consider deploying your application using the Elastic Container Service


Answer – C
Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a custom image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software for which Elastic Beanstalk doesn’t provide a platform out of the box.

Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance.

To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems—Ubuntu, RHEL, or Amazon Linux (see the flavor entry in Platform.yaml File Format for the exact version numbers)—and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings.
Reference: AWS Elastic Beanstalk Custom Platforms

Top

Q53: Company B is writing 10 items to the Dynamo DB table every second. Each item is 15.5Kb in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.

  • A. 10
  • B. 160
  • C. 155
  • D. 16


Answer – B.
Each write capacity unit (WCU) supports one write per second of an item up to 1 KB in size, and item sizes are rounded up to the next whole KB. A 15.5 KB item therefore consumes 16 WCUs per write, and 10 writes per second require 10 × 16 = 160 WCUs.
Reference: Read/Write Capacity Mode
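The arithmetic above is easy to check directly. A minimal sketch of the WCU sizing rule (one WCU per 1 KB written per second, item size rounded up to the next whole KB):

```python
import math

def required_wcu(items_per_second: int, item_size_kb: float) -> int:
    """One WCU = one write per second of an item up to 1 KB; size rounds up."""
    wcu_per_item = math.ceil(item_size_kb)  # 15.5 KB -> 16 WCUs per item
    return items_per_second * wcu_per_item

print(required_wcu(10, 15.5))  # -> 160
```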

Top

Q54: Which AWS Service can be used to automatically install your application code onto EC2, on premises systems and Lambda?

  • A. CodeCommit
  • B. X-Ray
  • C. CodeBuild
  • D. CodeDeploy


Answer: D

Reference: AWS CodeDeploy


Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q55: Which AWS service can be used to compile source code, run tests and package code?

  • A. CodePipeline
  • B. CodeCommit
  • C. CodeBuild
  • D. CodeDeploy


Answer: C.

Reference: AWS CodeBuild


Top

Q56: How can you prevent CloudFormation from deleting your entire stack on failure? (Choose 2)

  • A. Set the Rollback on failure radio button to No in the CloudFormation console
  • B. Set Termination Protection to Enabled in the CloudFormation console
  • C. Use the –disable-rollback flag with the AWS CLI
  • D. Use the –enable-termination-protection protection flag with the AWS CLI

Answer: A. and C.

Reference: Protecting a Stack From Being Deleted

Top

Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?

  • A. Continuous Integration
  • B. Continuous Deployment
  • C. Continuous Delivery
  • D. Continuous Development

Answer: A

Reference: What is Continuous Integration?

Top

Q58: When deploying application code to EC2, the AppSpec file can be written in which language?

  • A. JSON
  • B. JSON or YAML
  • C. XML
  • D. YAML

Answer: D.
The AppSpec file for an EC2/On-Premises deployment must be written in YAML. (An AppSpec file for an AWS Lambda deployment can be written in either JSON or YAML.)
Reference: CodeDeploy AppSpec File Reference

Top

 

Q59: Part of your CloudFormation deployment fails due to a misconfiguration. By default, what will happen?

  • A. CloudFormation will rollback only the failed components
  • B. CloudFormation will rollback the entire stack
  • C. Failed component will remain available for debugging purposes
  • D. CloudFormation will ask you if you want to continue with the deployment

Answer: B

Reference: Troubleshooting AWS CloudFormation


Top

Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?

  • A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
  • B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
  • C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
  • D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.

Answer: C

Reference: Getting Started with Amazon SNS


Top

Q61: Which AWS service can be used to centrally store and version control your application source code, binaries and libraries

  • A. CodeCommit
  • B. CodeBuild
  • C. CodePipeline
  • D. ElasticFileSystem

Answer: A

Reference: AWS CodeCommit


Top

 

Q62: You are using CloudFormation to create a new S3 bucket, which of the following sections would you use to define the properties of your bucket?

  • A. Conditions
  • B. Parameters
  • C. Outputs
  • D. Resources

Answer: D

Reference: Resources
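As an illustration of the Resources section, here is a minimal template skeleton built as a Python dictionary and rendered as JSON (the logical ID and bucket name are made-up examples):

```python
import json

# Minimal CloudFormation template: the bucket's configuration lives under
# Resources -> <LogicalId> -> Properties. "example-app-artifacts" is a
# hypothetical bucket name, not one from this document.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-app-artifacts"},
        }
    },
}

print(json.dumps(template, indent=2))
```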


Top

Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?

  • A. Transforms
  • B. Outputs
  • C. Resources
  • D. Instances

Answer: C.
The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built, and its values can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3.
Reference: Resources

Top

Q64: Which AWS service can be used to fully automate your entire release process?

  • A. CodeDeploy
  • B. CodePipeline
  • C. CodeCommit
  • D. CodeBuild

Answer: B.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.

Reference: AWS CodePipeline


Top

 

Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?

  • A. Outputs
  • B. Transforms
  • C. Resources
  • D. Exports

Answer: A.
Outputs is used to output user-defined data relating to the resources you have built, and its values can also be used as input to another CloudFormation stack.
Reference: CloudFormation Outputs
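A sketch of how one stack's output feeds another: the producer stack exports a value through Outputs/Export, and the consumer references it with Fn::ImportValue. The export name and resource IDs below are made-up examples:

```python
# Producer stack: exports the VPC ID through its Outputs section.
# "shared-vpc-id" is a hypothetical export name.
producer_stack = {
    "Resources": {
        "AppVpc": {
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": "10.0.0.0/16"},
        }
    },
    "Outputs": {
        "VpcId": {
            "Value": {"Ref": "AppVpc"},
            "Export": {"Name": "shared-vpc-id"},
        }
    },
}

# Consumer stack fragment: imports the exported value by name.
consumer_snippet = {"VpcId": {"Fn::ImportValue": "shared-vpc-id"}}
```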

Top

 

Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?

  • A. Inputs
  • B. Resources
  • C. Transforms
  • D. Files

Answer: C.
Transforms is used to reference code located in S3 and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments.
Reference: Transforms

Top

Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?

  • A. buildspec.yml
  • B. appspec.json
  • C. appspec.yml
  • D. buildspec.json

Answer: C.

Reference: CodeDeploy AppSpec File Reference

Top

 

Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?

  • A. Share the code using an EBS volume
  • B. Copy and paste the code into the template each time you need to use it
  • C. Use a cloudformation nested stack
  • D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.

Answer: C.

Reference: Working with Nested Stacks

Top

Q69: In the CodeDeploy AppSpec file, what are hooks used for?

  • A. To reference AWS resources that will be used during the deployment
  • B. Hooks are reserved for future use
  • C. To specify files you want to copy during the deployment.
  • D. To specify, scripts or function that you want to run at set points in the deployment lifecycle

Answer: D.
The ‘hooks’ section for an EC2/On-Premises deployment contains mappings that link deployment lifecycle event hooks to one or more scripts.

Reference: AppSpec ‘hooks’ Section
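As an illustration, a minimal appspec.yml for an EC2/On-Premises deployment might look like the sketch below; each hook maps a lifecycle event to one or more scripts. All file and script paths here are made-up examples:

```yaml
version: 0.0
os: linux
files:
  - source: /build/output        # hypothetical paths for illustration
    destination: /var/www/app
hooks:
  BeforeInstall:
    - location: scripts/stop_server.sh
      timeout: 60
      runas: root
  AfterInstall:
    - location: scripts/configure_app.sh
  ApplicationStart:
    - location: scripts/start_server.sh
  ValidateService:
    - location: scripts/health_check.sh
```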

Top

 

Q70: Which command can you use to encrypt a plain text file using CMK?

  • A. aws kms-encrypt
  • B. aws iam encrypt
  • C. aws kms encrypt
  • D. aws encrypt

Answer: C.
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext fileb://ExamplePlaintextFile --output text --query CiphertextBlob > C:\Temp\ExampleEncryptedFile.base64

Reference: AWS CLI Encrypt

Top

Q72: Which of the following is an encrypted key used by KMS to encrypt your data

  • A. Customer Managed Key
  • B. Encryption Key
  • C. Envelope Key
  • D. Customer Master Key

Answer: C.
Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption.
Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.

Reference: Envelope Encryption
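The pattern can be sketched in a few lines. This is a toy illustration of the envelope structure only, using a throwaway XOR keystream in place of real cryptography; with KMS, the master key never leaves the service and real AES encryption is used:

```python
import os
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream (NOT real cryptography), used only to show the pattern."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR with a key-derived stream; applying it twice restores the input.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

master_key = os.urandom(32)   # stand-in for the CMK held inside KMS
data_key = os.urandom(32)     # plaintext data (envelope) key

# Encrypt the data with the data key, then encrypt the data key under the
# master key; store the ciphertext and the encrypted data key together.
ciphertext = xor_crypt(b"sensitive payload", data_key)
encrypted_data_key = xor_crypt(data_key, master_key)

# To decrypt: recover the data key with the master key, then the data.
recovered_key = xor_crypt(encrypted_data_key, master_key)
plaintext = xor_crypt(ciphertext, recovered_key)
assert plaintext == b"sensitive payload"
```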

Top

 

Q73: Which of the following statements are correct? (Choose 2)

  • A. The Customer Master Key is used to encrypt and decrypt the Envelope Key or Data Key
  • B. The Envelope Key or Data Key is used to encrypt and decrypt plain text files.
  • C. The Envelope Key or Data Key is used to encrypt and decrypt the Customer Master Key.
  • D. The Customer Master Key is used to encrypt and decrypt plain text files.

Answer: A. and B.

Reference: AWS Key Management Service Concepts

Top

 
 

Q74: Which of the following statements are correct in relation to KMS? (Choose 2)

  • A. KMS Encryption keys are regional
  • B. You cannot export your customer master key
  • C. You can export your customer master key.
  • D. KMS encryption Keys are global

Answer: A. and B.

Reference: AWS Key Management Service FAQs

Q75:  A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.)
A. Compiled application code
B. Java runtime environment
C. References to the event sources
D. Lambda execution role
E. Application dependencies


Answer: A. E.
Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your code and any dependencies.
Reference: Lambda deployment packages.

Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package?
A. A launch template for the Amazon EC2 Auto Scaling group
B. A CodeDeploy AppSpec file
C. An EC2 role that grants the application access to AWS services
D. An IAM policy that grants the application access to AWS services


Answer: B.
Notes: The CodeDeploy AppSpec (application specific) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file.
Reference: CodeDeploy application specification (AppSpec) files.
Category: Deployment

Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)

A. Create a new Lambda version every time a new code release needs testing.
B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version.
C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT.
D. Create a new Lambda layer every time a new code release needs testing.
E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.


Answer: A. B.
Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes, without affecting the stable production version of the code. By creating separate aliases for Production and Development, systems can initiate the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. The $LATEST version is the newest published version.
Reference: Lambda function versions.

For more information about Lambda layers, see Creating and sharing Lambda layers.

For more information about Lambda function aliases, see Lambda function aliases.

Category: Deployment

Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.)
A. Update event source mappings with the ARN of the Lambda layer.
B. Point a Lambda alias to a new version of the Lambda function.
C. Create a Lambda alias for each published version of the Lambda function.
D. Point a Lambda alias to a new Lambda function alias.
E. Update the event source mappings with the Lambda alias ARN.


Answer: B. E.
Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version.
Reference: Lambda function aliases.
Category: Deployment

Q78:  A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements?
A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C).
B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket.
C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket.
D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.


Answer: D.
Notes: When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and removes the encryption key from memory.
Reference: Protecting data using server-side encryption with customer-provided encryption keys (SSE-C).

Category: Security

Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)

A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS).
B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS).
C. Use generated keys with the DynamoDB Encryption Client.
D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs).
E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).


Answer: A. C.
Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK.
Reference: Direct KMS Materials Provider.
Category: Deployment

Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.)
A. Create an AWS Lambda authorizer for the API.
B. Create an Amazon Cognito authorizer for the API.
C. Configure the authorizer for the API resource.
D. Configure the API methods to use the authorizer.
E. Configure the authorizer for the API stage.


Answer: B. D.
Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API.
Reference: Control access to a REST API using Amazon Cognito user pools as authorizer.
Category: Security

Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.)
A. Authenticate to the Amazon Cognito identity pool directly.
B. Authenticate to AWS Identity and Access Management (IAM) directly.
C. Authenticate to the Amazon Cognito user pool directly.
D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS).
E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.


Answer: C. E.
Notes: The Amazon Cognito user pool provides direct user authentication. The Amazon Cognito user pool provides a federated authentication option with third-party identity provider (IdP), including amazon.com.
Reference: Adding User Pool Sign-in Through a Third Party.
Category: Security

 
 

Q82: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.)
A. Define a AWS Step Functions task for each Lambda function.
B. Define a AWS Step Functions task for each workflow.
C. Write code that polls the AWS Step Functions invocation to coordinate each workflow.
D. Define an AWS Step Functions state machine for each workflow.
E. Define an AWS Step Functions state machine for each Lambda function.


Answer: A. D.
Notes: Step Functions is based on state machines and tasks. A state machine is a workflow: it can be used to express a workflow as a number of states, their relationships, and their input and output. Tasks perform work by coordinating with other AWS services, such as Lambda. You can coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine, written in the Amazon States Language.
Reference: Getting Started with AWS Step Functions.

Category: Development
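A minimal Amazon States Language definition makes the one-state-machine-per-workflow, one-Task-per-Lambda-function shape concrete. The workflow steps, account number, and function ARNs below are placeholders:

```python
import json

# One state machine per workflow; each Task state invokes one Lambda
# function. The ARNs are hypothetical examples.
order_workflow = {
    "Comment": "Order processing workflow (sketch)",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Next": "ChargePayment",
        },
        "ChargePayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge",
            "End": True,
        },
    },
}

print(json.dumps(order_workflow, indent=2))
```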

Q83: A company is migrating a web service to the AWS Cloud. The web service accepts requests by using HTTP (port 80). The company wants to use an AWS Lambda function to process HTTP requests. Which application design will satisfy these requirements?
A. Create an Amazon API Gateway API. Configure proxy integration with the Lambda function.
B. Create an Amazon API Gateway API. Configure non-proxy integration with the Lambda function.
C. Configure the Lambda function to listen to inbound network connections on port 80.
D. Configure the Lambda function as a target in the Application Load Balancer target group.


Answer: D.
Notes: Elastic Load Balancing supports Lambda functions as a target for an Application Load Balancer. You can use load balancer rules to route HTTP requests to a function, based on the path or the header values. Then, process the request and return an HTTP response from your Lambda function.
Reference: Using AWS Lambda with an Application Load Balancer.
Category: Development

Q84: A company is developing an image processing application. When an image is uploaded to an Amazon S3 bucket, a number of independent and separate services must be invoked to process the image. The services do not have to be available immediately, but they must process every image. Which application design satisfies these requirements?
A. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Each service pulls the message from the same queue.
B. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Each service subscribes to the same topic.
C. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe a separate Amazon Simple Notification Service (Amazon SNS) topic for each service to an Amazon SQS queue.
D. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe a separate Simple Queue Service (Amazon SQS) queue for each service to the Amazon SNS topic.


Answer: D.
Notes: Each service can subscribe to an individual Amazon SQS queue, which receives an event notification from the Amazon SNS topic. This is a fanout architectural implementation.
Reference: Common Amazon SNS scenarios.
Category: Development
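The fanout behavior in answer D can be sketched with an in-memory stand-in: one topic, one dedicated queue per service, and every published message copied to every queue. The service names and message fields below are made-up examples:

```python
from collections import defaultdict

class FanoutTopic:
    """In-memory sketch of SNS-to-SQS fanout (not an AWS client)."""

    def __init__(self):
        self.queues = defaultdict(list)  # queue name -> list of messages

    def subscribe(self, queue_name):
        self.queues[queue_name]          # create the subscriber's queue

    def publish(self, message):
        for queue in self.queues.values():
            queue.append(message)        # each subscriber gets its own copy

topic = FanoutTopic()
for service in ("thumbnailer", "metadata-extractor", "moderation"):
    topic.subscribe(service)

# Simulate the S3 event notification arriving at the topic.
topic.publish({"bucket": "uploads", "key": "cat.jpg"})
```

Because each service reads from its own queue, a slow or unavailable service does not cause any other service to miss the image, which is why answer D satisfies the "every image, not necessarily immediately" requirement.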

Q85: A developer wants to implement Amazon EC2 Auto Scaling for a Multi-AZ web application. However, the developer is concerned that user sessions will be lost during scale-in events. How can the developer store the session state and share it across the EC2 instances?
A. Write the sessions to an Amazon Kinesis data stream. Configure the application to poll the stream.
B. Publish the sessions to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe each instance in the group to the topic.
C. Store the sessions in an Amazon ElastiCache for Memcached cluster. Configure the application to use the Memcached API.
D. Write the sessions to an Amazon Elastic Block Store (Amazon EBS) volume. Mount the volume to each instance in the group.


Answer: C.
Notes: ElastiCache for Memcached is a distributed in-memory data store or cache environment in the cloud. It will meet the developer’s requirement of persistent storage and is fast to access.
Reference: What is Amazon ElastiCache for Memcached?

Category: Development

 
 
 

Q86: A developer is integrating a legacy web application that runs on a fleet of Amazon EC2 instances with an Amazon DynamoDB table. There is no AWS SDK for the programming language that was used to implement the web application. Which combination of steps should the developer perform to make an API call to Amazon DynamoDB from the instances? (Select TWO.)
A. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include an XML document that contains the request attributes.
B. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include a JSON document that contains the request attributes.
C. Sign the requests by using AWS access keys and Signature Version 4.
D. Use an EC2 SSH key to calculate Signature Version 4 of the request.
E. Provide the signature value through the HTTP X-API-Key header.


Answer: B. C.
Notes: The HTTPS-based low-level AWS API for DynamoDB uses JSON as a wire protocol format. When you send HTTP requests to AWS, you sign the requests so that AWS can identify who sent them. Requests are signed with your AWS access key, which consists of an access key ID and secret access key. AWS supports two signature versions: Signature Version 4 and Signature Version 2. AWS recommends the use of Signature Version 4.
Reference: Signing AWS API requests.
Category: Development
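The Signature Version 4 key-derivation chain described above can be written with only the standard library. This is a sketch of the derivation step only (not a full request signer); the secret key, date, and string to sign are placeholder values:

```python
import hmac
import hashlib

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """SigV4 key derivation: kSecret -> kDate -> kRegion -> kService -> kSigning."""
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# The derived key then signs the canonical "string to sign" for the request.
signing_key = derive_signing_key("EXAMPLE-SECRET-KEY", "20220101", "us-east-1", "dynamodb")
signature = hmac.new(signing_key, b"<string to sign>", hashlib.sha256).hexdigest()
```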

Q87: A developer has written several custom applications that read and write to the same Amazon DynamoDB table. Each time the data in the DynamoDB table is modified, this change should be sent to an external API. Which combination of steps should the developer perform to accomplish this task? (Select TWO.)
A. Configure an AWS Lambda function to poll the stream and call the external API.
B. Configure an event in Amazon EventBridge (Amazon CloudWatch Events) that publishes the change to an Amazon Managed Streaming for Apache Kafka (Amazon MSK) data stream.
C. Create a trigger in the DynamoDB table to publish the change to an Amazon Kinesis data stream.
D. Deliver the stream to an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the API to the topic.
E. Enable DynamoDB Streams on the table.


Answer: A. E.
Notes: If you enable DynamoDB Streams on a table, you can associate the stream Amazon Resource Name (ARN) with an Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table’s stream. Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. You can enable DynamoDB Streams on a table to create an event that invokes an AWS Lambda function.
Reference: Tutorial: Process New Items with DynamoDB Streams and Lambda.
Category: Monitoring
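The Lambda side of this pattern is a handler that receives batches of stream records and forwards each change. A minimal sketch, with the external API call left as a placeholder and a made-up sample event:

```python
def handler(event, context):
    """Invoked by Lambda with a batch of DynamoDB Streams records."""
    forwarded = []
    for record in event.get("Records", []):
        change = {
            "event": record["eventName"],            # INSERT / MODIFY / REMOVE
            "keys": record["dynamodb"].get("Keys", {}),
            "new_image": record["dynamodb"].get("NewImage"),
        }
        forwarded.append(change)
        # call_external_api(change)  # placeholder for the real HTTP call
    return {"forwarded": len(forwarded)}

# Hypothetical stream event with a single INSERT record.
sample_event = {
    "Records": [
        {
            "eventName": "INSERT",
            "dynamodb": {
                "Keys": {"pk": {"S": "user#1"}},
                "NewImage": {"pk": {"S": "user#1"}},
            },
        }
    ]
}
print(handler(sample_event, None))  # -> {'forwarded': 1}
```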

 
 
 

Q88: A company is migrating the create, read, update, and delete (CRUD) functionality of an existing Java web application to AWS Lambda. Which minimal code refactoring is necessary for the CRUD operations to run in the Lambda function?
A. Implement a Lambda handler function.
B. Import an AWS X-Ray package.
C. Rewrite the application code in Python.
D. Add a reference to the Lambda execution role.


Answer: A.
Notes: Every Lambda function needs a Lambda-specific handler. Specifics of authoring vary between runtimes, but all runtimes share a common programming model that defines the interface between your code and the runtime code. You tell the runtime which method to run by defining a handler in the function configuration. The runtime runs that method. Next, the runtime passes in objects to the handler that contain the invocation event and context, such as the function name and request ID.
Reference: Getting started with Lambda.
Category: Refactoring
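The required refactoring is simply wrapping the existing logic in a handler with the runtime's expected signature, shown here in Python for brevity (a Java handler implements the equivalent interface). The "operation" event field and the dispatch step are hypothetical:

```python
def lambda_handler(event, context):
    """The Lambda runtime calls this method with the invocation event and
    a context object; the existing CRUD code is dispatched from here."""
    operation = event.get("operation", "read")   # hypothetical event field
    # ... call the existing create/read/update/delete code here ...
    return {"statusCode": 200, "operation": operation}

print(lambda_handler({"operation": "create"}, None))
```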

Top

Q89: A company plans to use AWS log monitoring services to monitor an application that runs on premises. Currently, the application runs on a recent version of Ubuntu Server and outputs the logs to a local file. Which combination of steps should a developer perform to accomplish this goal? (Select TWO.)
A. Update the application code to include calls to the agent API for log collection.
B. Install the Amazon Elastic Container Service (Amazon ECS) container agent on the server.
C. Install the unified Amazon CloudWatch agent on the server.
D. Configure the long-term AWS credentials on the server to enable log collection by the agent.
E. Attach an IAM role to the server to enable log collection by the agent.


Answer: C. D.
Notes: The unified CloudWatch agent needs to be installed on the server. Ubuntu Server 18.04 is one of the many supported operating systems. When you install the unified CloudWatch agent on an on-premises server, you will specify a named profile that contains the credentials of the IAM user.
Reference: Collecting metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent.
Category: Monitoring

Q90: A developer wants to monitor invocations of an AWS Lambda function by using Amazon CloudWatch Logs. The developer added a number of print statements to the function code that write the logging information to the stdout stream. After running the function, the developer does not see any log data being generated. Why does the log data NOT appear in the CloudWatch logs?
A. The log data is not written to the stderr stream.
B. Lambda function logging is not automatically enabled.
C. The execution role for the Lambda function did not grant permissions to write log data to CloudWatch Logs.
D. The Lambda function outputs the logs to an Amazon S3 bucket.


Answer: C.
Notes: The function needs permission to call CloudWatch Logs. Update the execution role to grant the permission. You can use the managed policy of AWSLambdaBasicExecutionRole.
Reference: Troubleshoot execution issues in Lambda.
Category: Monitoring

Top

 

AWS Certified Developer Associate exam: Whitepapers

AWS has provided whitepapers to help you understand the technical concepts. Below are the recommended whitepapers for the AWS Certified Developer – Associate Exam.

Top

 

Online Training and Labs for AWS Certified Developer Associates Exam

The Cloud is the future: Get Certified now.
The AWS Certified Solution Architect Average Salary is: US $149,446/year. Get Certified with the App below:

 

Top

AWS Developer Associates Jobs

Top

AWS Certified Developer-Associate Exam info and details, How To:


 

The AWS Certified Developer Associate exam is a multiple choice, multiple answer exam. Here is the Exam Overview:

  • Certification Name: AWS Certified Developer Associate.
  • Prerequisites for the Exam: None.
  • Exam Pattern: Multiple Choice Questions
  • The AWS Certified Developer-Associate Examination (DVA-C01) is a pass or fail exam. The examination is scored against a minimum standard established by AWS professionals guided by certification industry best practices and guidelines.
  • Your results for the examination are reported as a score from 100 – 1000, with a minimum passing score of 720.
  • Exam fees: US $150
  • Exam Guide on AWS Website
  • Available languages for tests: English, Japanese, Korean, Simplified Chinese
  • Read AWS whitepapers
  • Register for certification account here.
  • Prepare for Certification Here
  • Exam Content Outline

    Domain % of Examination
    Domain 1: Deployment (22%)
    1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
    1.2 Deploy applications using Elastic Beanstalk.
    1.3 Prepare the application deployment package to be deployed to AWS.
    1.4 Deploy serverless applications
    22%
    Domain 2: Security (26%)
    2.1 Make authenticated calls to AWS services.
    2.2 Implement encryption using AWS services.
    2.3 Implement application authentication and authorization.
    26%
    Domain 3: Development with AWS Services (30%)
    3.1 Write code for serverless applications.
    3.2 Translate functional requirements into application design.
    3.3 Implement application design into application code.
    3.4 Write code that interacts with AWS services by using APIs, SDKs, and AWS CLI.
    30%
    Domain 4: Refactoring (10%)
    4.1 Optimize application to best use AWS services and features.
    4.2 Migrate existing application code to run on AWS.
    10%
    Domain 5: Monitoring and Troubleshooting (10%)
    5.1 Write code that can be monitored.
    5.2 Perform root cause analysis on faults found in testing or production.
    10%
    TOTAL 100%

Top

AWS Certified Developer Associate exam: Additional Information for reference

Below are some useful reference links that would help you to learn about AWS Certified Developer Associate Exam.

Top


 

Other Relevant and Recommended AWS Certifications

AWS Certification Exams Roadmap

Top


 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 
 


Other AWS Facts and Summaries and Questions/Answers Dump


 

AWS Certified Developer Associate exam: Whitepapers

AWS has provided whitepapers to help you understand the technical concepts. Below are the recommended whitepapers for the AWS Certified Developer – Associate Exam.


 

Online Training and Labs for the AWS Certified Developer Associate Exam



AWS Developer Associate Jobs


AWS Certified Developer-Associate Exam info and details, How To:


The AWS Certified Developer Associate exam is a multiple-choice, multiple-answer exam. Here is the exam overview:

  • Certification Name: AWS Certified Developer Associate.
  • Prerequisites for the Exam: None.
  • Exam Pattern: Multiple-choice and multiple-answer questions
  • The AWS Certified Developer-Associate Examination (DVA-C01) is a pass or fail exam. The examination is scored against a minimum standard established by AWS professionals guided by certification industry best practices and guidelines.
  • Your results for the examination are reported as a score from 100 – 1000, with a minimum passing score of 720.
  • Exam fees: US $150
  • Exam Guide on AWS Website
  • Available languages for tests: English, Japanese, Korean, Simplified Chinese
  • Read AWS whitepapers
  • Register for certification account here.
  • Prepare for Certification Here
  • Exam Content Outline

    Domain % of Examination
    Domain 1: Deployment (22%)
    1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
    1.2 Deploy applications using Elastic Beanstalk.
    1.3 Prepare the application deployment package to be deployed to AWS.
    1.4 Deploy serverless applications.
    Domain 2: Security (26%)
    2.1 Make authenticated calls to AWS services.
    2.2 Implement encryption using AWS services.
    2.3 Implement application authentication and authorization.
    Domain 3: Development with AWS Services (30%)
    3.1 Write code for serverless applications.
    3.2 Translate functional requirements into application design.
    3.3 Implement application design into application code.
    3.4 Write code that interacts with AWS services by using APIs, SDKs, and the AWS CLI.
    Domain 4: Refactoring (10%)
    4.1 Optimize applications to best use AWS services and features.
    4.2 Migrate existing application code to run on AWS.
    Domain 5: Monitoring and Troubleshooting (10%)
    5.1 Write code that can be monitored.
    5.2 Perform root cause analysis on faults found in testing or production.
    TOTAL: 100%

AWS Breaking News and Top Stories

  • EventBridge Input Transformer error: output cannot be found in the sample event.
    by /u/alex6219

    Having some issues getting my input transformer to work. I took a sample event from a "CreateWorkspaces" event in CloudTrail and am testing its output for when I get sent an email notification. (Some information has been replaced with "***" for privacy reasons.)

    Sample event:

    ```json
    {
      "eventVersion": "1.08",
      "userIdentity": {
        "type": "AssumedRole",
        "principalId": "***PRINCIPAL ID HERE***",
        "arn": "***ARN HERE***",
        "accountId": "***ACCOUNT ID HERE***",
        "accessKeyId": "ACCESS KEY HERE***",
        "sessionContext": {
          "sessionIssuer": {
            "type": "Role",
            "principalId": "***PRINCIPAL ID HERE***",
            "arn": "***ARN HERE***",
            "accountId": "***ACCOUNT ID HERE***",
            "userName": "***USERNAME HERE***"
          },
          "webIdFederationData": {},
          "attributes": {
            "creationDate": "2022-06-28T16:41:42Z",
            "mfaAuthenticated": "false"
          }
        }
      },
      "eventTime": "2022-06-28T16:44:12Z",
      "eventSource": "workspaces.amazonaws.com",
      "eventName": "CreateWorkspaces",
      "awsRegion": "***REGION HERE***",
      "sourceIPAddress": "AWS Internal",
      "userAgent": "AWS Internal",
      "requestParameters": {
        "workspaces": [{
          "rootVolumeEncryptionEnabled": false,
          "bundleId": "***BUNDLE ID HERE***",
          "userVolumeEncryptionEnabled": false,
          "userName": "***USERNAME HERE***",
          "directoryId": "***DIRECTORY ID HERE***",
          "tags": [],
          "workspaceProperties": {
            "userVolumeSizeGib": 100,
            "runningMode": "ALWAYS_ON",
            "computeTypeName": "PERFORMANCE",
            "rootVolumeSizeGib": 80
          }
        }],
        "requestContext": {
          "awsCustomerVia": "***AWSCUSTOMERVIA HERE***",
          "awsAccountId": "***AWSACCOUNTID HERE***"
        }
      },
      "responseElements": {
        "pendingRequests": [{
          "bundleId": "***BUNDLEID HERE***",
          "workspaceId": "***WORKSPACE ID HERE***",
          "directoryId": "***DIRECTORY ID HERE***",
          "state": "PENDING",
          "userName": "***USERNAME HERE***",
          "rootVolumeEncryptionEnabled": false,
          "userVolumeEncryptionEnabled": false
        }],
        "failedRequests": []
      },
      "requestID": "***REQUESTID HERE***",
      "eventID": "***EVENTID HERE***",
      "readOnly": false,
      "eventType": "AwsApiCall",
      "managementEvent": true,
      "recipientAccountId": "ACCOUNT ID HERE***",
      "eventCategory": "Management",
      "sessionCredentialFromConsole": "true"
    }
    ```

    Input path:

    ```json
    {
      "creationDate": "$.userIdentity.sessionContext.attributes.creationDate",
      "eventTime": "$.eventTime",
      "eventName": "$.eventName",
      "userName": "$.requestParameters.workspaces.userName",
      "workspaceId": "$.responseElements.pendingRequests.workspaceId"
    }
    ```

    Template:

    "Workspace <workspaceId> has been <eventName> by <userName> at <eventTime> on <creationDate>"

    When I click "Generate Output", I get a message saying: "Output can not be generated, because field "$.responseElements.pendingRequests.workspaceId" can not be found in the sample event." Is there something I'm missing, or am I doing this incorrectly? submitted by /u/alex6219 [link] [comments]
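    A hedged note on the likely cause: `pendingRequests` (and `workspaces`) are JSON arrays, so the input path needs an index, e.g. `$.responseElements.pendingRequests[0].workspaceId`. Also, when a CloudTrail event is delivered through EventBridge it is typically wrapped in an envelope, so the CloudTrail fields usually sit under `$.detail...`. A small Python sketch of the array issue (the sample dict and the `ws-example123` value are made up for the demo):

    ```python
    # Illustrative sketch: "pendingRequests" is a JSON array, so a bare field
    # path dead-ends at the list instead of reaching "workspaceId".
    sample = {
        "responseElements": {
            "pendingRequests": [
                {"workspaceId": "ws-example123", "state": "PENDING"},
            ],
        },
    }

    def resolve(doc, *steps):
        """Walk a parsed JSON document by dict keys / list indices; None if a step is missing."""
        for step in steps:
            try:
                doc = doc[step]
            except (KeyError, IndexError, TypeError):
                return None
        return doc

    # Mirrors "$.responseElements.pendingRequests.workspaceId": not found.
    assert resolve(sample, "responseElements", "pendingRequests", "workspaceId") is None
    # Mirrors "$.responseElements.pendingRequests[0].workspaceId": resolves.
    assert resolve(sample, "responseElements", "pendingRequests", 0, "workspaceId") == "ws-example123"
    ```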

  • Could Amplify be filling my Elastic Block Storage?
    by /u/lRulicol

    Hey there, I'm very new to AWS and working for a small startup. They wanted their web app hosted on AWS, so I went with Amplify and made a Next.js app. After a couple of days we realised our Elastic Block Store usage is about to reach the 30 GB free-tier limit, and I wanted to know if this could be because the whole web app gets stored there with each deploy I make. It would make sense, because I've made 27 deploys and each of them would have filled around 1 GB (counting all the dependencies being installed). Is my assumption right? Do you think I should make changes (apart from making far fewer deploys) in order to save costs? AWS is really hard to understand for a beginner, so please explain it like I'm five; there's not much I could find on this by googling. submitted by /u/lRulicol [link] [comments]

  • Can you always use StringEquals aws:ResourceTag/foo in an IAM policy?
    by /u/manlymatt83

    I have been reading through the IAM actions and conditionals documentation for a bunch of AWS services. It seems like the documentation for the resource ("*" vs. something specific) is always pretty accurate, but often times the conditionals column is empty even though aws:ResourceTag/foo == bar works. Is this something that is global to anything and just sometimes poorly documented? Appreciate any insight! submitted by /u/manlymatt83 [link] [comments]
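    For reference, a minimal policy sketch using the `aws:ResourceTag` condition key (the action `ec2:StartInstances`, account, and tag value here are illustrative, not from the post). To the question itself: `aws:ResourceTag` is a global condition key, but whether a given action actually honors it is service- and action-specific; the per-action condition-key column in the AWS Service Authorization Reference is the authoritative source, even though it is sometimes sparsely populated.

    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "ec2:StartInstances",
          "Resource": "*",
          "Condition": {
            "StringEquals": { "aws:ResourceTag/foo": "bar" }
          }
        }
      ]
    }
    ```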

  • Route 53 - No Zones
    by /u/no1bossman

    I'm working on a client email migration project. Long story short I'm not seeing zones from Route 53 at all under the context of the provided tenancy user accounts. DNS Lookup Tool - DNS Tools - MxToolbox states both domains are registered at Amazon, and then even makes a statement below the lookup results to say Amazon Route 53 is the hosting provider for both domains. To be clear I've never managed DNS records from AWS before. But Route 53 from both tenants shows no hosted zones. I have looked under all sub-sections of the Route 53 service but see no evidence of any data. I have already asked the client and waiting for a reply to see if any other AWS user accounts are to be used. But in saying that billing under the context of each user account shows the same structured billing information suggesting multiple zones being available. Both tenancies are not entitled to technical support under their basic support plans, hence why I have come here. https://preview.redd.it/sm9fdsw52g891.png?width=474&format=png&auto=webp&s=871adef4e232391e92efad0e9bebeb5e73b3be9d Any help would be greatly appreciated. https://preview.redd.it/l176nmbf1g891.png?width=1397&format=png&auto=webp&s=ef46a42d1ee0cac7944e6efd636a85361a9b83c7 submitted by /u/no1bossman [link] [comments]

  • Trying to simplify IoT comms with simple (Berkley) sockets
    by /u/LOLteacher

    I know of the IoT setup for a new EC2 for using web sockets in an elegant way, but I already have simple socket servers & clients that I wrote to run on my local LAN. It's been a long time since I exposed anything on a routable IP port, so I'm wondering if it might get blasted by traffic and jack up my costs. I'm using my own custom messaging formatting, so I will promptly ignore any gibberish, but I'm just worried about excessive server connections, if that's to be expected. Edit: After thinking a bit, can I read and write to/from a DB instance via only Lambda functions? There's no actual reason for me to keep a server around if I have the persistence I need in a DB. submitted by /u/LOLteacher [link] [comments]
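    On the edit in the post above: yes, a Lambda function can be the sole reader/writer of a database, which removes the need for an always-on socket server if persistence is all that is required. A minimal sketch, assuming a hypothetical DynamoDB table named `iot-messages` with partition key `device_id` and sort key `ts`:

    ```python
    # Sketch: a Lambda handler that persists a device message to DynamoDB.
    # Table name and key schema are assumptions, not real resources.
    import json
    import time

    def build_item(device_id, payload, ts=None):
        """Pure helper: shape a message into a DynamoDB item dict."""
        return {
            "device_id": device_id,
            "ts": int(ts if ts is not None else time.time()),
            "payload": json.dumps(payload),
        }

    def handler(event, context):
        import boto3  # imported lazily so build_item() stays testable offline
        table = boto3.resource("dynamodb").Table("iot-messages")
        item = build_item(event["device_id"], event.get("payload", {}))
        table.put_item(Item=item)
        return {"statusCode": 200, "body": json.dumps({"stored": item["ts"]})}
    ```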

  • Fargate - How to distribute compute
    by /u/dmorris87

    I am looking at Fargate as an option for running a containerized Python script. It's a batch process that needs to run on a daily schedule. The script pulls data from a database for several clients and does some data analysis. I feel the 4 vCPU, 30GB limits may not be sufficient. Is there a way to distribute the compute, e.g. multiple Docker containers? submitted by /u/dmorris87 [link] [comments]
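    One common pattern for the question above is to shard the client list and launch one Fargate task per shard, rather than scaling a single task (larger Fargate task sizes, up to 16 vCPU / 120 GB, may also be available, depending on region and platform version). A sketch; the cluster, task definition, container, and subnet names are placeholders:

    ```python
    # Sketch of fanning the daily batch out across several Fargate tasks: split
    # the client list into N near-equal chunks and launch one task per chunk.
    def chunk(clients, n_tasks):
        """Split a list into n_tasks contiguous chunks of near-equal size."""
        size, rem = divmod(len(clients), n_tasks)
        out, start = [], 0
        for i in range(n_tasks):
            end = start + size + (1 if i < rem else 0)
            out.append(clients[start:end])
            start = end
        return out

    def launch_tasks(client_ids, n_tasks):
        import boto3  # lazy import so chunk() stays testable without AWS
        ecs = boto3.client("ecs")
        for group in chunk(client_ids, n_tasks):
            ecs.run_task(
                cluster="batch-cluster",        # placeholder cluster name
                taskDefinition="daily-report",  # placeholder task definition
                launchType="FARGATE",
                networkConfiguration={"awsvpcConfiguration": {"subnets": ["subnet-0abc"]}},
                overrides={"containerOverrides": [{
                    "name": "worker",  # placeholder container name
                    # each task sees only its share of clients
                    "environment": [{"name": "CLIENT_IDS", "value": ",".join(group)}],
                }]},
            )
    ```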

  • Cross-account Resource Policy or Deployment Role for S3 Bucket?
    by /u/new-creation

    I'm setting up an S3 web hosting bucket in a workload account and will be deploying to it from a CI/CD account. Any thoughts on the pros and cons of: Creating a deployment role in the workload account that is assumed by the deployment role in the CI/CD account; or, Granting the deployment role in the CI/CD account access to the S3 bucket via a resource policy? So far all I've come up with is if there are other resources that need to be deployed to, but don't support cross-account access, (1) would be needed anyway. submitted by /u/new-creation [link] [comments]
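    For option (2), a bucket-policy sketch granting the CI/CD account's deployment role direct access (the account ID, role name, and bucket name below are placeholders). A point in favour of option (1): a dedicated role in the workload account gives you one place to audit and extend if other resources later need cross-account deployment, as the post already notes.

    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowCICDDeploy",
          "Effect": "Allow",
          "Principal": { "AWS": "arn:aws:iam::111111111111:role/cicd-deploy-role" },
          "Action": ["s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
          "Resource": [
            "arn:aws:s3:::example-web-bucket",
            "arn:aws:s3:::example-web-bucket/*"
          ]
        }
      ]
    }
    ```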

  • Is it necessary to use AWS Lambda for Android app development?
    by /u/alex_animation

    When I use the AWS GraphQL API to store data in S3, it gets stored. My doubt is: what is the use of AWS Lambda when data can be stored in S3 without it? Please help me understand AWS Lambda use cases. submitted by /u/alex_animation [link] [comments]
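    To the question above: Lambda is not required just to store data in S3; its value is running code in response to events. A sketch of one common use case, reacting to new S3 objects (e.g. after a GraphQL/AppSync flow stores data); the event shape is the standard S3 notification format, and the processing step is illustrative:

    ```python
    # Sketch: a Lambda handler triggered by S3 object-created notifications.
    def extract_s3_objects(event):
        """Return (bucket, key) pairs from an S3-notification event."""
        return [
            (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
            for r in event.get("Records", [])
        ]

    def handler(event, context):
        for bucket, key in extract_s3_objects(event):
            # e.g. generate a thumbnail, index the object, send a notification...
            print(f"post-processing s3://{bucket}/{key}")
    ```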

  • AWS Kinesis 500 Error when using AT_TIMESTAMP.
    by /u/Ashamed_Background46

    I am facing a bug when using Amazon Kinesis while building locally with localstack. I'm not sure if this is just a localstack problem or if it is a problem with my implementation/usage. If anyone might know the issue or have time to look through the stack trace, that would be awesome. Context: I am currently working on the streaming api. I am using AWS KCL to constantly consume from the Kinesis stream. In addition to streaming the latest records from Kinesis, if a customer wishes to start streaming from a particular time in the past, they would send the timestamp in the header. The streaming API would have that timestamp be the initial position that the Kinesis consumer starts at. However, I am running into this exception: java.util.concurrent.ExecutionException: software.amazon.awssdk.services.kinesis.model.KinesisException: null (Service: Kinesis, Status Code: 500, Request ID: null) I've also tried invoking the Kinesis API's directly, but I come across the same issue when I try to perform the GetShardIterator request with shardIteratorType AT_TIMESTAMP. 
​ Stack Trace: 2022-06-27 19:33:58.063 INFO 1821 --- [ Thread-4] s.a.kinesis.leases.KinesisShardDetector : Stream samplestream: listing shards with list shards request ListShardsRequest(StreamName=samplestream, ShardFilter=ShardFilter(Type=AT_TIMESTAMP, Timestamp=2022-06-27T14:12:00Z)) 2022-06-27 19:33:58.600 ERROR 1821 --- [ Thread-4] s.amazon.kinesis.leases.ShardSyncTask : Caught exception while sync'ing Kinesis shards and leases software.amazon.awssdk.services.kinesis.model.KinesisException: null (Service: Kinesis, Status Code: 500, Request ID: null) at software.amazon.awssdk.services.kinesis.model.KinesisException$BuilderImpl.build(KinesisException.java:95) ~[kinesis-2.17.209.jar:na] at software.amazon.awssdk.services.kinesis.model.KinesisException$BuilderImpl.build(KinesisException.java:55) ~[kinesis-2.17.209.jar:na] at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.unmarshall(AwsJsonProtocolErrorUnmarshaller.java:87) ~[aws-json-protocol-2.17.209.jar:na] at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:61) ~[aws-json-protocol-2.17.209.jar:na] at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:40) ~[aws-json-protocol-2.17.209.jar:na] at software.amazon.awssdk.core.http.MetricCollectingHttpResponseHandler.lambda$handle$0(MetricCollectingHttpResponseHandler.java:52) ~[sdk-core-2.17.209.jar:na] at software.amazon.awssdk.core.internal.util.MetricUtils.measureDurationUnsafe(MetricUtils.java:63) ~[sdk-core-2.17.209.jar:na] at software.amazon.awssdk.core.http.MetricCollectingHttpResponseHandler.handle(MetricCollectingHttpResponseHandler.java:52) ~[sdk-core-2.17.209.jar:na] at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler.lambda$prepare$0(AsyncResponseHandler.java:89) ~[sdk-core-2.17.209.jar:na] at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072) ~[na:na] at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[na:na] at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073) ~[na:na] at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler$BaosSubscriber.onComplete(AsyncResponseHandler.java:132) ~[sdk-core-2.17.209.jar:na] at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$DataCountingPublisher$1.onComplete(ResponseHandler.java:513) ~[netty-nio-client-2.17.209.jar:na] at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.runAndLogError(ResponseHandler.java:250) ~[netty-nio-client-2.17.209.jar:na] at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.access$600(ResponseHandler.java:75) ~[netty-nio-client-2.17.209.jar:na] at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onComplete(ResponseHandler.java:371) ~[netty-nio-client-2.17.209.jar:na] at software.amazon.awssdk.http.nio.netty.internal.nrs.HandlerPublisher.complete(HandlerPublisher.java:447) ~[netty-nio-client-2.17.209.jar:na] at software.amazon.awssdk.http.nio.netty.internal.nrs.HandlerPublisher.handlerRemoved(HandlerPublisher.java:435) ~[netty-nio-client-2.17.209.jar:na] at io.netty.channel.AbstractChannelHandlerContext.callHandlerRemoved(AbstractChannelHandlerContext.java:946) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved0(DefaultChannelPipeline.java:637) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:477) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:423) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at 
software.amazon.awssdk.http.nio.netty.internal.nrs.HttpStreamsHandler.removeHandlerIfActive(HttpStreamsHandler.java:361) ~[netty-nio-client-2.17.209.jar:na] at software.amazon.awssdk.http.nio.netty.internal.nrs.HttpStreamsHandler.handleReadHttpContent(HttpStreamsHandler.java:223) ~[netty-nio-client-2.17.209.jar:na] at software.amazon.awssdk.http.nio.netty.internal.nrs.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:199) ~[netty-nio-client-2.17.209.jar:na] at software.amazon.awssdk.http.nio.netty.internal.nrs.HttpStreamsClientHandler.channelRead(HttpStreamsClientHandler.java:173) ~[netty-nio-client-2.17.209.jar:na] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at software.amazon.awssdk.http.nio.netty.internal.LastHttpContentHandler.channelRead(LastHttpContentHandler.java:43) ~[netty-nio-client-2.17.209.jar:na] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at software.amazon.awssdk.http.nio.netty.internal.http2.Http2ToHttpInboundAdapter.onDataRead(Http2ToHttpInboundAdapter.java:87) ~[netty-nio-client-2.17.209.jar:na] at 
software.amazon.awssdk.http.nio.netty.internal.http2.Http2ToHttpInboundAdapter.channelRead0(Http2ToHttpInboundAdapter.java:49) ~[netty-nio-client-2.17.209.jar:na] at software.amazon.awssdk.http.nio.netty.internal.http2.Http2ToHttpInboundAdapter.channelRead0(Http2ToHttpInboundAdapter.java:42) ~[netty-nio-client-2.17.209.jar:na] at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) ~[netty-handler-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) 
~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.http2.AbstractHttp2StreamChannel$Http2ChannelUnsafe.doRead0(AbstractHttp2StreamChannel.java:901) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.http2.AbstractHttp2StreamChannel.fireChildRead(AbstractHttp2StreamChannel.java:555) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.http2.Http2MultiplexHandler.channelRead(Http2MultiplexHandler.java:180) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.http2.Http2FrameCodec.onHttp2Frame(Http2FrameCodec.java:707) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.http2.Http2FrameCodec$FrameListener.onDataRead(Http2FrameCodec.java:646) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.http2.Http2FrameListenerDecorator.onDataRead(Http2FrameListenerDecorator.java:36) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.http2.Http2EmptyDataFrameListener.onDataRead(Http2EmptyDataFrameListener.java:49) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onDataRead(DefaultHttp2ConnectionDecoder.java:307) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at 
io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onDataRead(Http2InboundFrameLogger.java:48) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readDataFrame(DefaultHttp2FrameReader.java:415) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:250) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:159) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:41) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:173) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.http2.DecoratingHttp2ConnectionDecoder.decodeFrame(DecoratingHttp2ConnectionDecoder.java:63) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:378) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:438) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510) ~[netty-codec-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:449) ~[netty-codec-4.1.77.Final.jar:4.1.77.Final] at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279) ~[netty-codec-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final] at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:995) ~[netty-common-4.1.77.Final.jar:4.1.77.Final] at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.77.Final.jar:4.1.77.Final] at java.base/java.lang.Thread.run(Thread.java:829) ~[na:na] 2022-06-27 19:33:58.604 ERROR 1821 --- [ Thread-4] s.amazon.kinesis.coordinator.Scheduler : Caught 
exception when initializing LeaseCoordinator software.amazon.awssdk.services.kinesis.model.KinesisException: null (Service: Kinesis, Status Code: 500, Request ID: null) at software.amazon.awssdk.services.kinesis.model.KinesisException$BuilderImpl.build(KinesisException.java:95) ~[kinesis-2.17.209.jar:na] at software.amazon.awssdk.services.kinesis.model.KinesisException$BuilderImpl.build(KinesisException.java:55) ~[kinesis-2.17.209.jar:na] at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.unmarshall(AwsJsonProtocolErrorUnmarshaller.java:87) ~[aws-json-protocol-2.17.209.jar:na] at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:61) ~[aws-json-protocol-2.17.209.jar:na] at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:40) ~[aws-json-protocol-2.17.209.jar:na] at software.amazon.awssdk.core.http.MetricCollectingHttpResponseHandler.lambda$handle$0(MetricCollectingHttpResponseHandler.java:52) ~[sdk-core-2.17.209.jar:na] at software.amazon.awssdk.core.internal.util.MetricUtils.measureDurationUnsafe(MetricUtils.java:63) ~[sdk-core-2.17.209.jar:na] at software.amazon.awssdk.core.http.MetricCollectingHttpResponseHandler.handle(MetricCollectingHttpResponseHandler.java:52) ~[sdk-core-2.17.209.jar:na] at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler.lambda$prepare$0(AsyncResponseHandler.java:89) ~[sdk-core-2.17.209.jar:na] at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072) ~[na:na] at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[na:na] at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073) ~[na:na] at 
software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler$BaosSubscriber.onComplete(AsyncResponseHandler.java:132) ~[sdk-core-2.17.209.jar:na]
    at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$DataCountingPublisher$1.onComplete(ResponseHandler.java:513) ~[netty-nio-client-2.17.209.jar:na]
    at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.runAndLogError(ResponseHandler.java:250) ~[netty-nio-client-2.17.209.jar:na]
    at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.access$600(ResponseHandler.java:75) ~[netty-nio-client-2.17.209.jar:na]
    at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onComplete(ResponseHandler.java:371) ~[netty-nio-client-2.17.209.jar:na]
    at software.amazon.awssdk.http.nio.netty.internal.nrs.HandlerPublisher.complete(HandlerPublisher.java:447) ~[netty-nio-client-2.17.209.jar:na]
    at software.amazon.awssdk.http.nio.netty.internal.nrs.HandlerPublisher.handlerRemoved(HandlerPublisher.java:435) ~[netty-nio-client-2.17.209.jar:na]
    at io.netty.channel.AbstractChannelHandlerContext.callHandlerRemoved(AbstractChannelHandlerContext.java:946) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved0(DefaultChannelPipeline.java:637) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:477) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:423) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at software.amazon.awssdk.http.nio.netty.internal.nrs.HttpStreamsHandler.removeHandlerIfActive(HttpStreamsHandler.java:361) ~[netty-nio-client-2.17.209.jar:na]
    at software.amazon.awssdk.http.nio.netty.internal.nrs.HttpStreamsHandler.handleReadHttpContent(HttpStreamsHandler.java:223) ~[netty-nio-client-2.17.209.jar:na]
    at software.amazon.awssdk.http.nio.netty.internal.nrs.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:199) ~[netty-nio-client-2.17.209.jar:na]
    at software.amazon.awssdk.http.nio.netty.internal.nrs.HttpStreamsClientHandler.channelRead(HttpStreamsClientHandler.java:173) ~[netty-nio-client-2.17.209.jar:na]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at software.amazon.awssdk.http.nio.netty.internal.LastHttpContentHandler.channelRead(LastHttpContentHandler.java:43) ~[netty-nio-client-2.17.209.jar:na]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at software.amazon.awssdk.http.nio.netty.internal.http2.Http2ToHttpInboundAdapter.onDataRead(Http2ToHttpInboundAdapter.java:87) ~[netty-nio-client-2.17.209.jar:na]
    at software.amazon.awssdk.http.nio.netty.internal.http2.Http2ToHttpInboundAdapter.channelRead0(Http2ToHttpInboundAdapter.java:49) ~[netty-nio-client-2.17.209.jar:na]
    at software.amazon.awssdk.http.nio.netty.internal.http2.Http2ToHttpInboundAdapter.channelRead0(Http2ToHttpInboundAdapter.java:42) ~[netty-nio-client-2.17.209.jar:na]
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) ~[netty-handler-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.AbstractHttp2StreamChannel$Http2ChannelUnsafe.doRead0(AbstractHttp2StreamChannel.java:901) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.AbstractHttp2StreamChannel.fireChildRead(AbstractHttp2StreamChannel.java:555) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.Http2MultiplexHandler.channelRead(Http2MultiplexHandler.java:180) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.Http2FrameCodec.onHttp2Frame(Http2FrameCodec.java:707) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.Http2FrameCodec$FrameListener.onDataRead(Http2FrameCodec.java:646) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.Http2FrameListenerDecorator.onDataRead(Http2FrameListenerDecorator.java:36) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.Http2EmptyDataFrameListener.onDataRead(Http2EmptyDataFrameListener.java:49) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onDataRead(DefaultHttp2ConnectionDecoder.java:307) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onDataRead(Http2InboundFrameLogger.java:48) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readDataFrame(DefaultHttp2FrameReader.java:415) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:250) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:159) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:41) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:173) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.DecoratingHttp2ConnectionDecoder.decodeFrame(DecoratingHttp2ConnectionDecoder.java:63) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:378) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:438) ~[netty-codec-http2-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510) ~[netty-codec-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:449) ~[netty-codec-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279) ~[netty-codec-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) ~[netty-transport-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:995) ~[netty-common-4.1.77.Final.jar:4.1.77.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.77.Final.jar:4.1.77.Final]
    at java.base/java.lang.Thread.run(Thread.java:829) ~[na:na]
submitted by /u/Ashamed_Background46 [link] [comments]

  • Taming Content Discovery Scaling Challenges with Hexagons and Elasticsearch
    by /u/BeeComprehensive9027

    At DoorDash we use consumers' locations to pull relevant stores and items into carousels, but we faced "fan-out" scaling challenges as our platform grew and too many stores were being called by the campaign system. To solve this we evaluated several technologies, including S2, H3, Elasticsearch, and Geohash. We ended up building a solution with H3, which divides a market's geography into hexagons that can be filtered by Elasticsearch and served to consumers. Read the technical guide to learn the details of how we solved this massive scaling challenge: https://doordash.engineering/2022/06/28/taming-content-discovery-scaling-challenges-with-hexagons-and-elasticsearch/ submitted by /u/BeeComprehensive9027 [link] [comments]
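For readers unfamiliar with geospatial indexing, the Geohash scheme mentioned above as one of the evaluated options can be sketched in a few lines of pure Python: alternately halve the longitude and latitude ranges, interleave the resulting bits, and emit base-32 characters. This is an illustrative sketch of the general technique, not DoorDash's H3-based implementation.

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # Geohash alphabet (no a, i, l, o)

def geohash_encode(lat, lon, precision=9):
    """Encode a lat/lon pair into a Geohash string of the given length."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    bits, bit_count, even, chars = 0, 0, True, []
    while len(chars) < precision:
        # Alternate between longitude (even bits) and latitude (odd bits).
        rng, val = (lon_rng, lon) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        bits <<= 1
        if val >= mid:
            bits |= 1
            rng[0] = mid   # keep the upper half of the range
        else:
            rng[1] = mid   # keep the lower half of the range
        even = not even
        bit_count += 1
        if bit_count == 5:  # every 5 bits become one base-32 character
            chars.append(BASE32[bits])
            bits, bit_count = 0, 0
    return "".join(chars)
```

Nearby points share long Geohash prefixes, which is why a prefix filter in a search engine can approximate a radius query; hexagonal grids like H3 avoid the uneven cell shapes that Geohash rectangles suffer from near range boundaries.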

  • Redshift Data Shares: Objects not showing up on consumer
    by /u/elus

    Environment details: two Redshift clusters, RA3.XLPLUS, 2 nodes each. Goal: implement a data share between the two clusters (producer and consumer). Steps taken: followed the steps from this AWS video. Problem: running the query select * from svv_datashares on the consumer cluster yields the expected data, but select * from svv_datashare_objects returns an empty set. Has anyone run into this issue before, or have a set of things to check? submitted by /u/elus [link] [comments]

  • Is Shared Memory possible in a Multi-Model Sagemaker Endpoint for HuggingFace adapters?
    by /u/Vimlearner

    I'm in a situation where I will be deploying many (possibly hundreds) of "Adapter" models based on the Hugging Face AdapterHub (https://adapterhub.ml/). A primary advantage of using adapters is that two of them can share a base layer of nodes in a network, and it's quite easy, by hand, to load a base layer and two adapters on top. My question is: how can adapters be leveraged in multi-model SageMaker endpoints? Specifically, I'd like SageMaker to perceive A and B as two distinct "models" and yet, somehow, have those two models share the memory of the base node layer on which they were trained in common. Is this possible? I would VERY much appreciate any advice/guidance. submitted by /u/Vimlearner [link] [comments]

  • RDS ingest ideas
    by /u/Responsible-Ad-7038

    Right now I have a task to ingest large (tens of GB) CSV datasets located on S3 into an RDS PostgreSQL database. I have been trying things like https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PostgreSQL.S3Import.html, but the issue I hit there is that I need a solution that is as dynamic as possible: I don't initially know all the column names, or even the table names. Due to regulations I am not able to use Athena or Firehose. What I've landed on is a Spark job in Glue; this works, but I don't know if it's the most efficient method available. Does anyone have any thoughts on more efficient methods? submitted by /u/Responsible-Ad-7038 [link] [comments]
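When the table and column names are only known at load time, one option is to generate both the DDL and the aws_s3.table_import_from_s3 call from the CSV header itself, loading every column as TEXT and casting later once the data has been inspected. A minimal sketch (the table, bucket, and key names are hypothetical; the aws_s3 extension is the one described in the RDS S3Import guide linked above):

```python
import csv
import io

def make_import_sql(table, header_line, bucket, key, region="us-east-1"):
    """Build (CREATE TABLE, import) SQL statements from a CSV header row."""
    # Parse the header with the csv module so quoted column names survive.
    cols = next(csv.reader(io.StringIO(header_line)))
    quoted = [f'"{c}"' for c in cols]
    # All columns as TEXT: types aren't known up front, cast in a later step.
    col_defs = ", ".join(q + " TEXT" for q in quoted)
    col_list = ", ".join(quoted)
    create = f'CREATE TABLE IF NOT EXISTS "{table}" ({col_defs});'
    copy = (
        f"SELECT aws_s3.table_import_from_s3('{table}', '{col_list}', "
        "'(FORMAT csv, HEADER true)', "
        f"aws_commons.create_s3_uri('{bucket}', '{key}', '{region}'));"
    )
    return create, copy
```

A driver script would list the S3 objects, read just the first line of each, and run the two generated statements per file; note this naive sketch does not escape quotes inside identifiers, which a production version would need to handle.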

  • Is it safe to use Lambda environment variables to store API credentials for a personal project?
    by /u/emerald-wind

    I want to work on a personal project that uses the Reddit API in Python with PRAW, with read-only access. Because of read-only access, the only credentials for calling the API would be the client ID, client secret, and user_agent of my project. If I'm the sole user of my AWS account, would it be safe to store these credentials as environment variables in my Lambda function? If so, would you recommend configuring "Encryption in transit" for these environment variables? submitted by /u/emerald-wind [link] [comments]
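Whatever the storage choice, the handler itself stays the same: read the values from the environment at runtime instead of hard-coding them. A minimal sketch (the variable names and the user-agent string are made up for illustration; the PRAW client construction is left as a comment since it needs real credentials):

```python
import os

def lambda_handler(event, context):
    # Credentials come from Lambda environment variables rather than being
    # hard-coded; os.environ.get returns None when a key is missing.
    client_id = os.environ.get("REDDIT_CLIENT_ID")
    client_secret = os.environ.get("REDDIT_CLIENT_SECRET")
    user_agent = os.environ.get("REDDIT_USER_AGENT", "my-personal-app/0.1")
    if not client_id or not client_secret:
        raise RuntimeError("Reddit credentials are not configured")
    # reddit = praw.Reddit(client_id=client_id, client_secret=client_secret,
    #                      user_agent=user_agent)  # read-only client
    return {"user_agent": user_agent}
```

For a single-user account this is a common pattern; moving the secret into Secrets Manager or SSM Parameter Store mainly buys rotation and audit, not correctness.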

  • All I see in my company's lambdas are imports and definitions of functions. The lambdas are called by object creation events. Where are these functions actually called?
    by /u/TheNowAndHere

    At some point, somewhere, something interprets the defined functions within these Lambdas and invokes them. Where is that happening? Is that just "what is done" when it's triggered? Is Lambda saying, "Hey, I see you have an __init__.py and 7 other .py files, let me invoke all of the functions defined therein at once"? Thanks in advance for your help! submitted by /u/TheNowAndHere [link] [comments]
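To make the dispatch concrete: Lambda does not call every function in the package. It imports the module named in the function's Handler setting (for example "lambda_function.lambda_handler") and calls only that one function, passing the triggering event; everything else runs only if the handler calls it. A minimal sketch with a fake S3 object-created event (function and key names are illustrative):

```python
def helper_parse_key(record):
    # An ordinary helper function: it runs only when lambda_handler calls it,
    # never because Lambda "discovered" it in the module.
    return record["s3"]["object"]["key"]

def lambda_handler(event, context):
    # This is the single entry point named in the Handler configuration,
    # e.g. "lambda_function.lambda_handler". Lambda passes the S3
    # object-created event here as `event`.
    keys = [helper_parse_key(r) for r in event.get("Records", [])]
    return {"keys": keys}
```

So the "interpreting" happens inside the Lambda runtime: on a cold start it imports the module (running top-level imports and definitions once), then for each invocation it calls the configured handler with the event payload.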

  • Using CodeBuild for CICD
    by /u/Equivalent_Form_9717

    Hi all, I am setting up a standard workflow for people to use dbt, Redshift, and Git (branching, version control). I have completed the CI portion of the pipeline by writing a buildspec.yml for CodeBuild to pick up when code is pushed or a pull request is opened within feature branches; CodeBuild currently runs and does the automated tests with this GitHub webhook trigger. However, I am not sure how to generalise the CI build file so that it can determine what branch it is on (feature_*, release, prod) and pull in the correct environment variables to run the "continuous deployment". I have thought about having separate YAML files. What best practices should I consider when implementing CD? Essentially, it should be able to deploy to the Redshift environment immediately. Any documentation/advice is welcome! submitted by /u/Equivalent_Form_9717 [link] [comments]
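One way to keep a single buildspec is to branch on the Git ref that CodeBuild exposes in the CODEBUILD_WEBHOOK_HEAD_REF environment variable (e.g. "refs/heads/feature_x") and derive the environment name from it. A hedged sketch; the branch-to-environment mapping below is invented for illustration:

```python
import os

def environment_for_ref(ref):
    # Map a Git ref to a deployment environment so one buildspec can serve
    # every branch. CodeBuild sets CODEBUILD_WEBHOOK_HEAD_REF to the
    # triggering ref for webhook builds, e.g. "refs/heads/feature_x".
    branch = ref.rsplit("/", 1)[-1] if ref else ""
    if branch.startswith("feature_"):
        return "dev"
    if branch == "release":
        return "staging"
    if branch in ("prod", "main"):
        return "prod"
    return "dev"  # safe default for unknown refs

if __name__ == "__main__":
    # In the buildspec: ENV=$(python3 pick_env.py) then export dbt/Redshift
    # variables for that environment before running the deploy step.
    print(environment_for_ref(os.environ.get("CODEBUILD_WEBHOOK_HEAD_REF", "")))
```

The buildspec then sources per-environment variables (dbt target, Redshift host) keyed on the printed name, rather than maintaining one YAML file per branch.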

  • How do I retrieve basic server metrics such as CPU utilization or RAM usage?
    by /u/greedness

    I am trying to retrieve those basic metrics for internal use; how do I go about doing so? I am very new to AWS, so please bear with me. From my limited research it looks like I am going to have to get them from CloudWatch. I would need to put the metric data into Logs, send it to S3, then get it from S3. Is this the simplest way to do it? If so, how do I go about doing it? submitted by /u/greedness [link] [comments]
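There is usually no need for the Logs-to-S3 detour: CPU utilization can be read straight from the CloudWatch API. A sketch of the parameters one would pass to boto3's cloudwatch.get_metric_statistics (the instance ID is a placeholder; note that EC2 publishes CPUUtilization automatically, while memory metrics require installing the CloudWatch agent on the instance):

```python
from datetime import datetime, timedelta, timezone

def cpu_metric_query(instance_id, hours=1):
    # Parameter dict for boto3: cloudwatch.get_metric_statistics(**params).
    # CPUUtilization lives in the AWS/EC2 namespace; RAM usage would come
    # from the CWAgent namespace once the CloudWatch agent is installed.
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
        "Period": 300,            # 5-minute buckets
        "Statistics": ["Average"],
    }
```

Calling boto3.client("cloudwatch").get_metric_statistics(**cpu_metric_query("i-...")) returns the datapoints directly, with no S3 round-trip.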

  • Running SSM Scripts via Lambda
    by /u/bitbythecron

    AWS here. I have CloudFormation-managed infrastructure including an application load balancer (ALB) sitting in front of an Auto Scaling group (ASG) of EC2 instances. HTTP requests hit the balancer, the balancer distributes those requests across members of the ASG, and the ASG scales up/down depending on traffic/load. I'm trying to achieve automated deployments to these ASG-managed EC2 instances. This means:
    - When the ASG creates a new node to be added to the balancer pool (target group), I need it to run a deployment process.
    - I want to manually invoke a redeployment to any EC2 instances from a GitHub Action, so if the ASG target group has 3 EC2s in it and I run the deployment from inside the GH Action, all 3 instances get the changes deployed to them.
    - I also want to be able to manually stop and restart any EC2 instances in the target group and run start/stop scripts for proper + graceful start/stop cleanup.
    So I have 3 shell scripts to manage EC2 instance lifecycles:
    - deploy.sh, which copies executable artifacts (the app server) from S3 to the correct location on the EC2 instance
    - start.sh, which starts the app server
    - stop.sh, which shuts down the app server gracefully (performing cleanup, etc.)
    I'm trying to figure out how to invoke these scripts correctly:
    - When a new EC2 instance is created by the ASG, I want to run deploy.sh && start.sh on that node; let's call this the Scale Up Event.
    - When I manually run a GitHub Action, I want stop.sh && deploy.sh && start.sh to run on all/any EC2 instances inside the ASG target group (this shuts down the app server, runs the deployment of the new code, and then restarts the app server); let's call this the Deploy Event.
    Ideally, there would be a way to invoke stop.sh and start.sh through Lambda/SSM without having to SSH into each and every EC2 instance and run them manually. So my tentative pipeline for the Deploy Event is: the GitHub Action calls a Deployer Lambda, and the Deployer Lambda calls SSM and tells it to run the scripts against all the instances of the target group. My tentative pipeline for the Scale Up Event is: add a CloudWatch Alarm on the ASG that triggers when the target group scales up; the Alarm invokes the same Deployer Lambda above, but scoped to the newly created EC2 instance; the Deployer Lambda calls SSM and tells it to run the scripts against the new EC2. The reason I tentatively place Lambda in there is that I do not believe GitHub Actions can invoke SSM capabilities directly, and I don't believe CloudWatch Alarms can either, but both can invoke Lambdas. First I would ask whether I'm on track or way off base, and if not, what a proper pipeline should look like here. Assuming I'm more or less barking up the right tree: how would I invoke deploy.sh, start.sh and stop.sh from Lambda/SSM? Can anyone provide basic pseudo-code examples to help me see the forest through the trees here? Thanks in advance! submitted by /u/bitbythecron [link] [comments]
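For the "run scripts via Lambda/SSM" part, the usual mechanism is SSM Run Command with the stock AWS-RunShellScript document. A sketch of the request payload a Deployer Lambda would pass to boto3's ssm.send_command (the /opt/myapp path and script names mirror the post and are illustrative; instances must be running the SSM agent with an instance profile that allows Systems Manager):

```python
def build_deploy_command(instance_ids, scripts=("stop.sh", "deploy.sh", "start.sh")):
    # Shape of the request for boto3: ssm.send_command(**params).
    # AWS-RunShellScript is the stock SSM document that runs arbitrary
    # shell commands on the targeted instances.
    commands = [f"bash /opt/myapp/{s}" for s in scripts]
    return {
        "InstanceIds": list(instance_ids),
        "DocumentName": "AWS-RunShellScript",
        "Parameters": {"commands": commands},
    }
```

For the Deploy Event the Lambda would look up the target group's instance IDs and pass all of them; for the Scale Up Event it would pass just the new instance ID (an EventBridge rule on the ASG's EC2 Instance Launch Successful event carries that ID, which may be simpler than a CloudWatch Alarm).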

  • How to receive email alert whenever a workspace is created/destroyed by any user?
    by /u/alex6219

    I want to receive email alerts whenever a workspace is newly created or deleted. I'm assuming I need to set up a CloudTrail trail and then create an SNS topic? EDIT: I successfully created a CloudTrail "data event" for the workspaces and subscribed an SNS topic to it. Now I receive an email whenever a workspace is created or terminated. However, the email doesn't show any information, only that an event happened and was put in the S3 bucket. submitted by /u/alex6219 [link] [comments]
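The empty emails are expected with that setup: the trail's SNS notification only announces log-file delivery to S3, not the event contents. A common alternative (a sketch; verify the field values against the WorkSpaces CloudTrail documentation) is an EventBridge rule that matches the relevant API calls and targets the SNS topic directly, so the message body carries the event detail:

```python
import json

# Hypothetical EventBridge event pattern matching WorkSpaces API calls
# recorded by CloudTrail. A rule with this pattern can target an SNS
# topic directly, so the notification includes who did what and when.
event_pattern = {
    "source": ["aws.workspaces"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"eventName": ["CreateWorkspaces", "TerminateWorkspaces"]},
}

print(json.dumps(event_pattern, indent=2))
```

The printed JSON is what goes into the rule's event-pattern field in the EventBridge console or in PutRule.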

  • Setting up a local WordPress dev environment based on a snapshot from AWS Lightsail
    by /u/okashii

    I'm hoping that you can help me wrap my head around how to do this. I recently started a new job that involves working on WordPress websites. I'm very comfortable with the actual programming side, but setting up the environment is new to me. The site is hosted on Lightsail and is quite complicated, with many plugins. I want to be able to download a snapshot from Lightsail, work on it locally, and then deploy my changes back to the Lightsail instance. I think the previous developer was a bit of a cowboy and just made changes directly in the admin console and theme editor, but I want to be a bit more careful. I'd like to have version control, and I've seen some information on using a Git repo for deployment as well as version control. However, so far in my research people are only using Git for their WordPress theme files, not the entire application. I also worry about database changes, as I don't think those can be passed on through Git. This is probably a stupid question, but is it possible to have the entire application and database stored in Git, and then deploy that directly to Lightsail? As you can probably tell, there are a lot of gaps in my knowledge. For my personal projects I use Local by Flywheel for local development and deployment, but that's not an option in this case. I'd appreciate any resources, tips, tricks or tutorials you could send my way. I've been googling but I haven't hit upon the right search yet. I'm not sure the best place to post this, so I'm going to cross-post it to both r/aws and r/Wordpress to hopefully find someone who can help. If there are other subreddits that might be more appropriate, please point me to them. submitted by /u/okashii [link] [comments]

  • Best way to automatically Start build in AWS CodeBuild on Push new code
    by /u/xtrzx8

    I want users to be able to write their code on a local machine and have a new Docker image built for them with CodeBuild after they push their new code. I'm not sure what the best way is to start a build in CodeBuild after a user pushes code to CodeCommit. CodeBuild has only time-based triggers, but I want to start a new build every time a user pushes new code to the repository. I don't want to use CodePipeline, because I'm working in a restricted environment where I can't create/edit IAM policies and roles: it's easier for me to make one ticket for one role for CodeBuild than to make a ticket for every new CodePipeline. I found the push-to-existing-branch event trigger for Lambda; is that the best way, or is there something better? submitted by /u/xtrzx8 [link] [comments]
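A Lambda isn't strictly required here: an EventBridge (CloudWatch Events) rule can match CodeCommit pushes and target the CodeBuild project directly via StartBuild. A sketch of the event pattern (verify the detail fields against the CodeCommit event documentation; the rule's target still needs a role allowed to call codebuild:StartBuild, which fits the one-role-per-CodeBuild ticketing described above):

```python
import json

# Hypothetical EventBridge pattern that fires on any branch update in a
# CodeCommit repository; the rule's target is the CodeBuild project, so
# no Lambda or CodePipeline sits in between.
codecommit_push_pattern = {
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Repository State Change"],
    "detail": {
        "event": ["referenceCreated", "referenceUpdated"],
        "referenceType": ["branch"],
    },
}

print(json.dumps(codecommit_push_pattern, indent=2))
```

Scoping the rule to one repository is done by adding the repository ARN to the rule's resources field rather than to this pattern.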

  • CloudFormation insisting on provisioning resources only in Default VPC
    by /u/bitbythecron

    I created and ran a CloudFormation "base template" (JSON file) that created a VPC in my region, along with some subnets, routing tables, IG, NAT gateway, etc. I did not specify anything in the Outputs of that template since I'm still new to CF and wasn't sure whether that was necessary or not. So the template ran successfully and my region now has 2 VPCs in it: vpc-12345 --> a VPC that has been there for a while and was provisioned manually; this is the Default VPC in the region vpc-23456 --> the VPC my CF base template created I am now running a 2nd template that will add application-layer infra on top of the VPC created in the base "layer" (base template mentioned above). Things like load balancers, DBs, auto scaling groups, etc. I have many input Parameters defined for this 2nd template, one of which is the VPC to operate inside of: "Parameters": { "VpcId": { "Type": "AWS::EC2::VPC::Id", "Description": "VpcId of your existing Virtual Private Cloud (VPC)", "ConstraintDescription": "Must be the VPC Id of an existing Virtual Private Cloud." }, ... Of the many things created in this template, one of them is a security group named myapp-instance-sg: "InstanceSecurityGroup": { "Type": "AWS::EC2::SecurityGroup", "Properties": { "GroupName": "myapp-instance-sg", "GroupDescription": "Enable HTTP/8089 from the load balancer only", "SecurityGroupIngress": [{ "CidrIp": "0.0.0.0/0", "IpProtocol": "tcp", "FromPort": "8089", "ToPort": "8089" }], "Tags": [{ "Key": "environment", "Value": { "Ref": "EnvironmentSlug" } }, { "Key": "Application", "Value": { "Ref": "AWS::StackId" } } ], "VpcId": { "Ref": "VpcId" } } }, When I run this 2nd app template, I make sure to specify the VpcId parameter correctly: aws cloudformation create-stack --stack-name myapp-stack-dev --template-body file:///Users/myuser/myapp/app.json --parameters ParameterKey=VpcId,ParameterValue=vpc-23456 ... When this runs, the stack fails and rolls back in the UI. 
When I look to see why it failed, I see the following error: The security group 'myapp-instance-sg' does not exist in the default VPC 'vpc-12345'. Why is CF trying to use the default VPC for the region (vpc-12345) when I am clearly specifying and wiring the SG to use vpc-23456? Do I need to specify something in the outputs of the base template, so I can subsequently use it in this app template? Does CF only work on Default VPCs for some reason? Something else?!? Thanks in advance! Edit: here is the entire CF template in all its glory: { "AWSTemplateFormatVersion": "2010-09-09", "Description": "Use this template to provision/maintain application-layer infrastructure for a non-production myapp service.", "Parameters": { "ServiceSlug": { "Type": "String", "Description": "Lower-cased, hyphenated slug/abbreviation of the service ('core' for 'myapp Core', etc.). Used to name and tag the service and its related resources." }, "EnvironmentSlug": { "Type": "String", "Description": "Lower-cased, hyphenated slug/abbreviation of the environment (staging, dev, demo, sandbox, sbx, etc.). Used to name and tag the environment across all resources in the created VPC." }, "VpcId": { "Type": "AWS::EC2::VPC::Id", "Description": "VpcId of your existing Virtual Private Cloud (VPC)", "ConstraintDescription": "Must be the VPC Id of an existing Virtual Private Cloud." }, "BalancerSubnets": { "Type": "List<AWS::EC2::Subnet::Id>", "Description": "The SubnetId of the subnet where the balancer will live", "ConstraintDescription": "Must be a valid subnet ID associated. Should be residing in the selected Virtual Private Cloud." }, "InstanceSubnets": { "Type": "List<AWS::EC2::Subnet::Id>", "Description": "The list of SubnetIds of the subnets where the service EC2 instances will live.", "ConstraintDescription": "Must be a list of at least two existing subnets associated with at least two different availability zones. They should be residing in the selected Virtual Private Cloud." 
}, "InstanceType": { "Description": "Application Server (Service) EC2 instance type", "Type": "String", "Default": "t2.micro", "AllowedValues": ["t1.micro", "t2.nano", "t2.micro", "t2.small", "t2.medium", "t2.large", "m1.small", "m1.medium", "m1.large", "m1.xlarge", "m2.xlarge", "m2.2xlarge", "m2.4xlarge", "m3.medium", "m3.large", "m3.xlarge", "m3.2xlarge", "m4.large", "m4.xlarge", "m4.2xlarge", "m4.4xlarge", "m4.10xlarge", "c1.medium", "c1.xlarge", "c3.large", "c3.xlarge", "c3.2xlarge", "c3.4xlarge", "c3.8xlarge", "c4.large", "c4.xlarge", "c4.2xlarge", "c4.4xlarge", "c4.8xlarge", "g2.2xlarge", "g2.8xlarge", "r3.large", "r3.xlarge", "r3.2xlarge", "r3.4xlarge", "r3.8xlarge", "i2.xlarge", "i2.2xlarge", "i2.4xlarge", "i2.8xlarge", "d2.xlarge", "d2.2xlarge", "d2.4xlarge", "d2.8xlarge", "hi1.4xlarge", "hs1.8xlarge", "cr1.8xlarge", "cc2.8xlarge", "cg1.4xlarge"], "ConstraintDescription": "Must be a valid EC2 instance type." }, "NotificationEmail": { "Description": "Email address to notify if there are any scaling operations", "Type": "String", "AllowedPattern": "([a-zA-Z0-9_\\-\\.]+)@((\\[[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.)|(([a-zA-Z0-9\\-]+\\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\\]?)", "ConstraintDescription": "Must be a valid Email address." }, "KeyName": { "Description": "The EC2 Key Pair to allow SSH access to the instances", "Type": "AWS::EC2::KeyPair::KeyName", "ConstraintDescription": "Must be the name of an existing EC2 KeyPair." }, "CertificateArn": { "Description": "ARN of the issued and verified SSL Certificate to use for the load balancer", "Type": "String", "ConstraintDescription": "Must be a valid Certificate ARN." }, "HealthCheckPort": { "Description": "Port number that the balancer will use for checking the health of the backing EC2 instances", "Type": "Number", "ConstraintDescription": "Must be a valid port number and should match the actual port the application servers are listening/serving from." 
}, "InstanceProfileArn": { "Description": "ARN of the IAM Instance Profile to use for the auto-scaling group-managed EC2 instances", "Type": "String", "ConstraintDescription": "Must be a valid Instance Profile ARN." }, "InstanceProfileName": { "Description": "Name of the IAM Instance Profile to use for the auto-scaling group-managed EC2 instances", "Type": "String", "ConstraintDescription": "Must be a valid IAM Instance Profile Name." } }, "Resources": { "ScalingSnsTopic": { "Type": "AWS::SNS::Topic", "Properties": { "Subscription": [{ "Endpoint": { "Ref": "NotificationEmail" }, "Protocol": "Email" }] } }, "InstanceSecurityGroup": { "Type": "AWS::EC2::SecurityGroup", "Properties": { "GroupName": "myapp-instance-sg", "GroupDescription": "Enable HTTP/8089 from the load balancer only", "SecurityGroupIngress": [{ "CidrIp": "0.0.0.0/0", "IpProtocol": "tcp", "FromPort": "8089", "ToPort": "8089" }], "Tags": [{ "Key": "environment", "Value": { "Ref": "EnvironmentSlug" } }, { "Key": "Application", "Value": { "Ref": "AWS::StackId" } } ], "VpcId": { "Ref": "VpcId" } } }, "ServiceLaunchTemplate": { "Type": "AWS::EC2::LaunchTemplate", "Properties": { "LaunchTemplateName": { "Fn::Join": ["-", [{ "Ref": "ServiceSlug" }, "launch-template"]] }, "LaunchTemplateData": { "IamInstanceProfile": { "Arn": { "Ref": "InstanceProfileArn" }, "Name": { "Ref": "InstanceProfileName" } }, "ImageId": "ami-09e67e426f25ce0d7", "InstanceType": { "Ref": "InstanceType" }, "KeyName": { "Ref": "KeyName" }, "Monitoring": { "Enabled": true }, "Placement": { "Tenancy": "dedicated" }, "SecurityGroups": ["myapp-instance-sg"] } } }, "ServiceScalingGroup": { "Type": "AWS::AutoScaling::AutoScalingGroup", "Properties": { "VPCZoneIdentifier": { "Ref": "InstanceSubnets" }, "LaunchTemplate": { "LaunchTemplateId": { "Ref": "ServiceLaunchTemplate" }, "Version": { "Fn::GetAtt": ["ServiceLaunchTemplate", "LatestVersionNumber"] } }, "MinSize": "1", "MaxSize": "5", "TargetGroupARNs": [{ "Ref": "AlbTargetGroup" }], 
"NotificationConfiguration": { "TopicARN": { "Ref": "ScalingSnsTopic" }, "NotificationTypes": ["autoscaling:EC2_INSTANCE_LAUNCH", "autoscaling:EC2_INSTANCE_LAUNCH_ERROR", "autoscaling:EC2_INSTANCE_TERMINATE", "autoscaling:EC2_INSTANCE_TERMINATE_ERROR" ] }, "Tags": [{ "Key": "environment", "Value": { "Ref": "EnvironmentSlug" }, "PropagateAtLaunch": true }, { "Key": "Application", "Value": { "Ref": "AWS::StackId" }, "PropagateAtLaunch": true } ] }, "CreationPolicy": { "ResourceSignal": { "Timeout": "PT8M", "Count": "1" } }, "UpdatePolicy": { "AutoScalingRollingUpdate": { "MinInstancesInService": "1", "MaxBatchSize": "1", "PauseTime": "PT8M", "WaitOnResourceSignals": "true" } } }, "ServiceScaleUpPolicy": { "Type": "AWS::AutoScaling::ScalingPolicy", "Properties": { "AdjustmentType": "ChangeInCapacity", "AutoScalingGroupName": { "Ref": "ServiceScalingGroup" }, "Cooldown": "60", "ScalingAdjustment": "1" } }, "ServiceScaleDownPolicy": { "Type": "AWS::AutoScaling::ScalingPolicy", "Properties": { "AdjustmentType": "ChangeInCapacity", "AutoScalingGroupName": { "Ref": "ServiceScalingGroup" }, "Cooldown": "60", "ScalingAdjustment": "-1" } }, "CpuHighAlarm": { "Type": "AWS::CloudWatch::Alarm", "Properties": { "AlarmDescription": "Scale-up if CPU > 90% for 10 minutes", "MetricName": "CPUUtilization", "Namespace": "AWS/EC2", "Statistic": "Average", "Period": "300", "EvaluationPeriods": "2", "Threshold": "90", "AlarmActions": [{ "Ref": "ServiceScaleUpPolicy" }], "Dimensions": [{ "Name": "AutoScalingGroupName", "Value": { "Ref": "ServiceScalingGroup" } }], "ComparisonOperator": "GreaterThanThreshold" } }, "CpuLowAlarm": { "Type": "AWS::CloudWatch::Alarm", "Properties": { "AlarmDescription": "Scale-down if CPU < 70% for 10 minutes", "MetricName": "CPUUtilization", "Namespace": "AWS/EC2", "Statistic": "Average", "Period": "300", "EvaluationPeriods": "2", "Threshold": "70", "AlarmActions": [{ "Ref": "ServiceScaleDownPolicy" }], "Dimensions": [{ "Name": "AutoScalingGroupName", "Value": 
{ "Ref": "ServiceScalingGroup" } }], "ComparisonOperator": "LessThanThreshold" } }, "BalancerSecurityGroup": { "Type": "AWS::EC2::SecurityGroup", "Properties": { "GroupDescription": "Enable public access HTTP from the IG only", "SecurityGroupIngress": [{ "CidrIp": "0.0.0.0/0", "IpProtocol": "tcp", "FromPort": "80", "ToPort": "80" }], "Tags": [{ "Key": "environment", "Value": { "Ref": "EnvironmentSlug" } }, { "Key": "Application", "Value": { "Ref": "AWS::StackId" } } ], "VpcId": { "Ref": "VpcId" } } }, "ApplicationLoadBalancer": { "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer", "Properties": { "LoadBalancerAttributes": [{ "Key": "access_logs.s3.enabled", "Value": "true" }, { "Key": "access_logs.s3.bucket", "Value": "myapp-logs" }, { "Key": "access_logs.s3.prefix", "Value": { "Fn::Join": ["/", [{ "Ref": "EnvironmentSlug" }, { "Ref": "ServiceSlug" }]] } } ], "SecurityGroups": [{ "Ref": "BalancerSecurityGroup" }], "Subnets": { "Ref": "BalancerSubnets" } } }, "AlbListener": { "Type": "AWS::ElasticLoadBalancingV2::Listener", "Properties": { "DefaultActions": [{ "Type": "forward", "TargetGroupArn": { "Ref": "AlbTargetGroup" } }], "LoadBalancerArn": { "Ref": "ApplicationLoadBalancer" }, "Port": "443", "Protocol": "HTTPS", "Certificates": [{ "CertificateArn": { "Ref": "CertificateArn" } }], "Tags": [{ "Key": "environment", "Value": { "Ref": "EnvironmentSlug" } }, { "Key": "Application", "Value": { "Ref": "AWS::StackId" } } ] } }, "TestingOnlyAlbListener": { "Type": "AWS::ElasticLoadBalancingV2::Listener", "Properties": { "DefaultActions": [{ "Type": "forward", "TargetGroupArn": { "Ref": "AlbTargetGroup" } }], "LoadBalancerArn": { "Ref": "ApplicationLoadBalancer" }, "Port": "80", "Protocol": "HTTP", "Tags": [{ "Key": "environment", "Value": { "Ref": "EnvironmentSlug" } }, { "Key": "Application", "Value": { "Ref": "AWS::StackId" } } ] } }, "AlbTargetGroup": { "Type": "AWS::ElasticLoadBalancingV2::TargetGroup", "Properties": { "HealthCheckIntervalSeconds": 30, 
"HealthCheckTimeoutSeconds": 5, "HealthyThresholdCount": 2, "Port": { "Ref": "HealthCheckPort" }, "Protocol": "HTTP", "UnhealthyThresholdCount": 5, "VpcId": { "Ref": "VpcId" } } } }, "Outputs": {} } submitted by /u/bitbythecron [link] [comments]

  • AWS configuration management
    by /u/MohammadNour97

    Is there a service that provides AWS configuration management? For example, if I want to save my settings for a specific service, is that possible? submitted by /u/MohammadNour97 [link] [comments]

  • Does this mean that I passed the cloud practitioner? I never saw a pass/fail screen I also don’t have a pass/fail status on Pearson
    by /u/ElChappyShampoo


  • I made this to help me pass whilst studying for my Cloud Practitioner exam
    by /u/justajolt

    Cloud Practitioner practice tool. I've found it useful, so I've polished it up a little and here it is. I usually do a session of 40-50 questions, then study the stuff that the report says I'm rubbish at! The drill mode is especially harsh in exposing what I do and don't know. Fill yer boots.

  • Interview with Amazon
    by /u/PuzzleheadedSet3615

    Just received an email from AWS for a Sr. Cloud Support Engineer role, but I don't remember applying for a senior position. I was aiming for entry level. What do I do?

  • Amazon EKS improves control plane scaling and update speed by up to 4x
    by /u/shadowsyntax


  • Materials allowed during the exam
    by /u/cult_of_me

    Hello fellow exam takers! Regarding SAA-C02: are any materials (personal notes, for example) allowed to be used during the test, or am I to rely on my memory only? I could not find any mention of it in the documentation. Thanks!

  • Did anyone feel confident after taking an exam but later find out you had failed?
    by /u/PacePossible1408

    I just took the Cloud Practitioner test and feel really confident about it. I found the test surprisingly easy. But since the test is easy, I need to get more questions correct to pass. When I took the Udemy practice tests, I only passed 3/6, so I'm worried that my confidence is misplaced. It sucks that I have to wait for results. I only used A Cloud Guru as a resource; I took practice tests on Udemy but didn't take any courses on there.

  • Cannot check into my certification exam
    by /u/johnsmithx0

    I scheduled an exam with Pearson VUE. I ran the system requirements check twice, once on the day I registered and again 30 minutes before the exam. I was given the green light to just check in on the exam date, but the day arrived and I couldn't check in. I am confused about the start time; I don't know if it's local time or some other time zone. I scheduled it for 9:45 p.m. and was checking in at 9:15 as per the instructions, and no matter how many times I ran the software I could not check in. Is "Etc/GMT+8 - GMT-08:00" another time zone? Is it EST? I had to reschedule, and luckily I was able to for tomorrow. I am concerned I am going to run into the same issue again. Can someone who's taken the exam with Pearson VUE please let me know if you ran into the same issue, and/or what I can do to avoid it tomorrow?

  • I have the Developer Associate exam in 2 weeks. I have 6 months of experience with AWS and have completed Stephane Maarek's courses. Any tips or suggestions would be helpful. TIA.
    by /u/ninjafrmIn


  • Maarek or Cantrill course for SAA-C02?
    by /u/um_rr

    I recently passed the CCP and am moving on to the SAA. I hope to take and pass the exam by August 30th, which is the last date for the current C02 exam, and wanted to start prepping by choosing a course that would give me a good understanding of the material. I have fairly basic AWS experience in my work and took Maarek's CCP course, which gave me a good understanding of the basics. I've also already purchased the TD/Bonso SAA test prep course. What course would you recommend I take to get a good understanding of the material and pass the SAA before the new exam comes out?

  • SAA course on ACG
    by /u/TranceSoulBrother

    For those that have studied for the SAA on ACG, how up to date is the course? I couldn't easily find when it was made: 2021 or 2022 or whatever. Is it worth spending time on it vs. just the Cantrill course?

  • Just took the SAA-c02 exam; a word of warning!
    by /u/wheatleyjj

    Wanted to jot something down whilst this was all still fresh in my mind. I have been using AWS for ~5-6 years, but not in an architecting capacity; only my last year has been in a large organisation. I had the urge to get some AWS qualifications whilst talking with my teams, so picked up the CCP exam, which was pretty easy without prep.

    Moving on to the solution architect one, I thought I'd run through some of the material listed on the certification overview (I've never paid for explicit courses for other certs before, so why start now… right?). Using the Twitch power hour series from last year, the practice exam and the practice question set, along with some YouTube content, I felt relatively confident until I'd just taken it. It feels like there has been a shift in service focus/question depth that just wasn't touched in the power hour series specifically. I'm hoping that it was limited to the 15 unscored beta questions with SAA-C03 coming in August, but I can't help but dread that I've underestimated the exam.

    We will see tomorrow, but I would say to take some extra time looking for up-to-date resources (like this sub recommends: /u/acantril's / Stephane's courses), just to make sure you don't get too flustered when you see something you don't understand.

  • AWS Perimeter: a new open source tool to check your AWS accounts for public resources, resources shared with untrusted accounts, and insecure network configurations
    by /u/bobtbot


  • Just passed my Cloud Practitioner Exam in two weeks, with little to no knowledge of computer lingo starting out. Some tips I recommend.
    by /u/Inner-Bus1926

    Highly recommend watching YouTube videos of free Cloud Practitioner practice exams. Download apps with Cloud Practitioner exam questions so your study doesn't stop when you're in the bathroom. Think of real-life scenarios and how what you've learned so far can be applied. If you're unsure of something, ALWAYS Google it or watch another video of it being explained. My methodology in learning is: if you can't explain it, you didn't learn it. Lastly, the exam did consist of a lot of Well-Architected Framework and billing questions. Make sure those are well understood.

  • No pass/fail status or screen after exam - Cloud Practitioner
    by /u/ElChappyShampoo

    Took the CCP this morning and felt like I did well, but there was no pass/fail screen, nor is there a pass/fail status on Pearson VUE. The only status I have is "Exam Delivered Successfully". Anyone have a similar experience? Let me know!

  • Stuck Trying to Login
    by /u/Iconically_Lost

    Hi, so I am trying to log into the AWS Training and Certification portal, and am failing to sign in since it doesn't give me the option. I'm at "Sign in or Sign Up" and the only options I get are company email, partner, or AWS employee; I'm none of those. I have done AWS exams in the past and hold several certs, so I do have a valid Training account.

    It does give a 4th option: Amazon (shopping). If you click on this, it will sign you in with your shopping account and then ask you to give it permission. So the problem is: the shopping account uses the same email, and I do not wish to grant my shopping account access to my certifications. As per the FAQ, I should be able to use the AWS Training account I used previously without linking it to my shopping account, but I cannot figure out how to do that, or how to view my current certs or book new exams. So, has anyone managed to log into their certification portal with just a regular email?

    EDIT: Half the issue with linking the shopping account to my cert portal is that the shopping account has my name shortened, but the certs are under my full legal (passport/driver's licence) name. I don't want to use my full name for shopping; I like using the shortened version. I can't be the first person having this issue, as I spoke to two people in the office and both had the same issue a while ago. Apparently it was a nightmare to get AWS to correct it.

  • [Free] AWS Certified Solutions Architect Associate (SAA-C02) Self-study Resources
    by /u/Anastasia_IT

    Hi community, we have new updates and fixes on the AWS Certified Solutions Architect Associate (SAA-C02) path, which you might find useful. It's absolutely free to enroll: https://examsdigest.com/courses/amazon-aws-saa-c02/

    Changelog:
    [improve] Domain 2.0 Design High-Performing Architectures
    [improve] Domain 3.0 Design Secure Applications and Architectures

    Happy learning, Anastasia

  • Passed SAA-C02 in 3 weeks
    by /u/Taron_G

    Hi all, got my pass for AWS SAA-C02, and I thought I'd do a write-up of what I did and how the process went for me.

    I started studying almost exactly 3 weeks before the date of the exam. My resources were Stephane Maarek's Udemy course, Maarek's set of 6 practice exams, and the set of 6 Tutorials Dojo practice exams (Jon Bonso). I did not read any whitepapers or other resources. I did the Udemy course at between 1.5x and 2x speed and skipped the hands-on videos; I did not do any of the hands-on exercises. I was impatient to start doing the practice tests, so I took the first of Maarek's set when I was only 45% of the way through the course. I continued to work through the course, doing an exam every so often, and did the last 2 exams after I had completed the course. I only took each practice exam once, but reviewed the questions I got wrong several times. My scores were: 66%, 61%, 60%, 69%, 70% and 66%. I then started doing the Tutorials Dojo practice exams. Again, I only took each exam once; my scores were: 77%, 74%, 83%, 77%, 75% and 86%.

    I felt that Maarek's practice exams were unbalanced compared to Tutorials Dojo/Jon Bonso, as I scored the same on the first Maarek test, when I had done less than half the course, as I did on the last test when I had finished it. That doesn't seem right. They were still definitely worth doing, however, and helped plug a lot of gaps that either weren't in the course or didn't stick with me for whatever reason. I would suggest they need some rebalancing somehow.

    The real exam was more similar to the Tutorials Dojo practice exams, but with fewer easy questions. Most questions were longer, asking about multiple services in one question, and the answers were also longer, taking more time to read and parse than either set of practice exams. The overall difficulty of the exam was not too bad, and was about what I was expecting. I completed the real exam with 40 minutes to spare, which I used to review flagged questions.
    I flagged about half the questions in the exam, but after reviewing them I only changed 1 answer. I had no issues with the exam proctors, and they said nothing to me after the initial check-in, or 'greet' as they call it. I did have a clear desk and an uncluttered room.

    After submitting I got no pass/fail result, only this text, which I think is now standard for the exam: "Thank you for taking the AWS Certified Solutions Architect - Associate exam. After your exam, AWS evaluates compliance with testing policies and procedures and the validity of exam results before the results are officially posted in your AWS Certification Account. This process can take up to 5 business days. When your results are available, you will receive an email stating your exam results and score report are available for download under Previous Exams. If your results are not available after 5 business days or if you have any questions about the AWS Certification program, please contact AWS Training and Certification Support (https://www.aws.training/Support)."

    After nearly 24 hours the result appeared in the AWS certification 'Exam history' section as a pass (but no email, so check the portal), with a link to a PDF which contained the score and a breakdown of the sections. I scored 833, which isn't bad after 3 weeks of study, though I should note that I do have previous experience in cloud, just no previous certs. I took the course and the cert to plug any gaps in my knowledge. It turns out that you can use AWS professionally and still only know a limited amount about a limited set of services: the ones that you directly use. I feel that taking the cert gave me a much better overview of AWS services and how they fit together, but actually doing real-world projects with the new services, and comparing them to other cloud offerings, is required to really have a reliable overview of the subject. Next I will be looking into Azure, GCP and Kubernetes. Hope this info helps someone!

  • Need help figuring out how to study for AWS CCP
    by /u/analuciferase

    I'm pretty new to AWS and have no experience in the cloud. The biggest domain that I'm struggling with is Technology. What are some resources that I can use to study the Technology domain?

  • AWS Security Architect - Coding Interview
    by /u/xaveri88

    Is there anyone who applied for a Security Architect position at AWS Canada and is able to share their experience? The recruiter also told me that there will be a live coding exam, and I am curious to know what they will be asking me to code. Thanks.

  • Yet another SAA-C02 completion post
    by /u/PokingLaughingCrying

    Hey everyone! I passed the SAA-C02 a few days ago with a score of 845/1000! I started studying mid-March 2022 with u/acantril's course. Absolutely incredible resource and worth the time it takes to complete. My studies with that finished at the beginning of June. Following up on that I used u/stephanemaarek's course, but I skipped the labs and just focused on the content as a refresher. While I was doing this I practiced with Tutorials Dojo exams; I felt the difficulty was on par with the exam and was scoring between 85-90% on them. Thanks u/jon-bonso-tdojo! I'm really glad I did the u/stephanemaarek course as well; specifically, the machine learning section the day before the exam definitely snagged me some points. I had 2 questions on it! Mostly just picking the correct service to use. I also had a handful of questions about HPC, which was interesting; more than I thought I would! Not sure what's next for me. I completed the CKA earlier this year and have a cloud resume developed from the Cloud Resume Challenge. The last piece is implementing testing in the deployment pipeline. Maybe I'll try to use Terraform instead of CloudFormation. Who knows!

  • Is an MBA useless for an entry-level cloud role?
    by /u/PM_40

    I have an MBA and was thinking about which cloud role could use some MBA-type skills: documentation, process analysis, management. Is an MBA useless for an entry-level cloud role? I am planning to do some certificates to get into the cloud space.

  • Free AWS Mini Project - Hybrid DNS AWS <=> ONPREM (with videos)
    by /u/acantril

    Hi everyone! I just wanted to throw another free mini project out there for everyone to use, another one with videos: https://github.com/acantril/learn-cantrill-io-labs/tree/master/aws-hybrid-dns

    I maintain an AWS mini projects repo at https://github.com/acantril/learn-cantrill-io-labs. These are mini projects which help improve your practical implementation skills; I've had students tell me these have made the difference in interview situations. Today I added my AWS Hybrid DNS mini project to the list (https://github.com/acantril/learn-cantrill-io-labs/tree/master/aws-hybrid-dns), which includes videos. The videos are linked there, or available on YouTube here: https://www.youtube.com/watch?v=UmPTavtAB9s&list=PLTk5ZYSbd9MjGUpHNvjhGiy2SESdYZwce (I uploaded 6K-resolution versions; YouTube is still processing above 5K). In this one, we simulate a hybrid network environment with AWS on one side and a simulated on-premises environment on the other. We implement network connectivity and then, by creating inbound & outbound Route 53 endpoints, we connect the Linux named-based DNS to AWS, allowing bi-directional resolution (AWS => ONPREM & ONPREM => AWS). But wait, there's more...

    I've been progressively going through my mini projects repo and adding video guides to the most popular ones. With this one you will use CodeCommit as a repo, build Docker images with CodeBuild, and use CodePipeline to deploy the Docker image to ECS Fargate (a simple "container of cats" dockerized application): https://github.com/acantril/learn-cantrill-io-labs/tree/master/aws-codepipeline-catpipeline If you want to create a dynamic BGP VPN, check this one out: https://github.com/acantril/learn-cantrill-io-labs/blob/master/aws-hybrid-bgpvpn If you fancy creating a working web identity federation application, check this out: https://github.com/acantril/learn-cantrill-io-labs/tree/master/aws-cognito-web-identity-federation Or if you want to create a serverless reminder application using S3, API Gateway, Lambda, Step Functions, SNS and SES, check this out: https://github.com/acantril/learn-cantrill-io-labs/tree/master/aws-serverless-pet-cuddle-o-tron And if you want to do a full architecture evolution of a monolithic web app through to a scalable and self-healing architecture, this is the one for you: https://github.com/acantril/learn-cantrill-io-labs/tree/master/aws-elastic-wordpress-evolution

    I'm going to go through as many of these as I can and add high-quality video guides, but these are the ones so far. If you do enjoy them, please spread the word. They are taken from my courses, but these are usable for free! If you want notifications on new video guides, subscribe to my channel: https://youtube.com/c/learncantrill
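For the on-prem side of the hybrid DNS project described above, the ONPREM => AWS direction is typically a conditional forward from named (BIND) to the Route 53 inbound endpoint IPs. A hypothetical named.conf fragment is sketched below; the zone name and endpoint IPs are placeholder assumptions, not values from the project:

```
// Hypothetical named.conf fragment (on-premises side).
// Forward queries for the AWS private hosted zone to the two
// Route 53 *inbound* resolver endpoint IPs in the VPC.
zone "aws.example.internal" {
    type forward;
    forward only;
    forwarders { 10.16.8.10; 10.16.16.10; };
};
```

The reverse direction (AWS => ONPREM) is handled on the AWS side by an outbound resolver endpoint with a forwarding rule pointing at the on-prem DNS servers, so neither side needs to host the other's zone data.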

  • Need AWS certification ASAP super important
    by /u/AkCute

    Basically, I'm going to Thailand in 30 days with my friends, but my dad says he will cancel the tickets if I don't get an AWS cert by then. I will also be traveling during this time but can dedicate 2-3 hours per day to study. Please let me know what the fastest way to get some certification is.

  • Failed exam for AWS Certified Developer - Associate
    by /u/RP_m_13

    Hey guys, I have been preparing for AWS Certified Developer - Associate. I finished the Cantrill course, then used Tutorials Dojo practice exams along with Stephane Maarek's practice exams, where I consistently landed 80%+. But I just went for the certification and failed it. I'm still waiting for detailed results, because 5 days have not passed since my attempt. I feel really bad because it was my first AWS certification ever, and my first try. And I do not know what to practice or do in these 14 days before the next attempt. Any tips or advice?

    UPD: Just got the results; landed a 702 score.

  • Exam Rules (Associate Developer)
    by /u/ISpamThereforeIStink

    I'm going for my Associate Developer certificate in the next couple of weeks. Feeling fairly good about the material itself, but I've always been pretty bad at exams. I dunno if I have, like, undiagnosed ADD, but in general with passive activities like reading, watching movies, and yes, taking exams, my mind tends to drift around a lot and it's hard to stay focused. I do fine with more active things like video games and playing music, which is probably why I like coding.

    In taking some practice exams, I'm realizing I stay more focused and tend to do better when I'm able to take notes on a blank piece of paper. Even literally just writing things like "ASG --> EC2" when reading a question helps a TON. I feel like I know the answer to this already, but there's no physical pen & paper note-taking of any kind allowed, correct? I've seen some people mention an online whiteboard, kinda in Microsoft Paint style; is this the case for Pearson & PSI? What about in person & online? Any accommodations I'm not seeing that could help with this? Appreciate any help/advice folks have!

    TLDR: What's the deal with taking notes during the exam?

  • SAA-C02 - Woke up to this email. Encouragement for those who will take it soon
    by /u/ThrivingNomadic

    To my surprise, I woke up to this email this morning. Scored 798. I took the exam Saturday mid-day and wasn't expecting an answer until Monday, but it came Sunday morning. This post is only to serve as encouragement for those who will take it soon.

    TIMELINE: I was in a bit of a rush. My timeline was one month, because I will be making a cross-country move next month. Then I found out this version of the exam was retiring at the end of August, which put on even more pressure. I spent the past month studying 3-5 hours a day, every single day.

    BACKGROUND: I work at home in helpdesk with an insane amount of downtime. I would have about 2 hours of actual work per day, then study for the remainder of my shift/day. My work actually encourages using the downtime and earning certs they reimburse for.

    STUDY SOURCES: Before knowing about this sub, I searched YouTube for 'AWS cert' and came across NetworkChuck's AWS video. He recommended Anthony Sequeira on Udemy; no surprise, as I am sure he gets an actual cut of the deal. But after completing this course, there was SO much missing. I still felt lost. The material just wasn't there. I was not prepared to take the exam at all. It was a good beginner-level intro course for Practitioner, definitely not Solutions Architect. Then I found this sub and went with Stéphane Maarek (Udemy) and the https://tutorialsdojo.com/ practice exams.

    Stéphane Maarek (Udemy): I could not recommend this highly enough for hands-on. Actually doing it hands-on makes you remember a lot of things. His 825-page slide deck was SUPER beneficial, as it served as a cheat sheet for me. Whenever I needed to remember or reference what a certain service was for, I used the search function of the slides and BOOM. I finished the Udemy course in 2 weeks. These slides ARE NOT FREE; you must purchase the Udemy course to access them.
    https://tutorialsdojo.com/: I still needed something to supplement the videos, and I also highly recommend the practice exams here. The way the questions are worded is very similar to the actual exam. It helped me know what to look for, how to use the elimination technique, and how to search for keywords and key concepts within each question. I consistently scored 60-70s on my first round of practice exams, but I was not satisfied. After taking another week to study, I was consistently scoring 77-85%. The cheat-sheet section and the Comparison of AWS Services section were also game changers for me, available free here: https://tutorialsdojo.com/comparison-of-aws-services/

    ADVICE: If you are bilingual, you can get an extra +30 minutes accommodation. I am Spanish but took the test in English. You don't have to call for this accommodation; you literally select it from the accommodations page, and it was instantly approved. This saved my butt, as I flagged so many questions that it took exactly the remaining 30 minutes to revisit them. There are other accommodations available on the list. Eat something light, as you will be sitting still for a good bit. I took my test at home. If you are in an apartment, CLOSE YOUR WINDOW. I have kids that play outside constantly; if the proctor hears this, they will absolutely not hesitate to end your exam. I was getting worried that too many of my answers were the "A" option, like an insane amount; it became a distractor. Process of elimination is key; every instructor will tell you this. When you are consistently scoring 80s on the above practice exams, then you are ready.

    A warning: I feel like I definitely got some beta questions from the new C03 exam, as there was a keyword I had not heard any instructor talk about. There were three questions that mentioned "AWS Data Lake", either in the question itself or in the answers. This was the first time I heard of this term. I knew the concept, but I had no idea this was an AWS service. I went back to my study materials and confirmed no instructor covered this keyword. Cheers!

  • Passed the SAA-C02! Some insight from my experience
    by /u/Incompl

    First of all, I just wanted to say that I've lurked this subreddit for a while, and I'm amazed at the support and positivity I've seen here. I wanted to contribute a tiny bit and post about my experience and the information I looked for when searching about this exam.

    tl;dr: Passed with a score of 893. First AWS cert, taken in person with Pearson VUE, as I did not want to deal with the virtual setup and risk any technical issues. Did not receive results immediately; results received within 24 hours. Main study materials over 2 months: Udemy - Stephane Maarek (training and exam); Tutorials Dojo - Jon Bonso practice exams; Udemy - Neal Davis practice exams; AWS official docs and FAQs; 1 whitepaper (AWS Well-Architected Framework); Tutorials Dojo resources (cheat sheets and comparison of services).

    Background: Software engineer for 10+ years. Hands-on AWS experience for 7+ years, with varying levels of exposure to and usage of the following services: S3, Lambda, SQS, SNS, EC2, EBS, ASG, RDS, ELB, EMR, Route 53, CloudWatch, IAM, and Athena. Almost no hands-on experience with: anything related to VPC (only worked lightly with security groups), Aurora, EFS, API Gateway, Kinesis, CloudFront, Storage Gateway, FSx, EFA, Beanstalk, WAF.

    Studying Approach: While I did have a good amount of hands-on experience prior to starting my study, some of my knowledge was quite out of date (I did not know S3 was strongly consistent), and I only had bits and pieces of experience (I had only worked with ALBs, no experience with NLBs). I knew the basic concepts of S3, but I didn't know anything about all the other storage classes such as S3-IA or Glacier. I had no idea EBS volumes were in a single AZ. I only knew about the weighted routing policy in Route 53, and so on. So while using these services did give me a leg up on having at least a baseline knowledge, I felt like I had to rebuild my fundamental understanding of all services and the latest and greatest features.
    I did the majority of my learning and preparation for my exam through the resources I'll talk about in a bit.

    Step 1: Initial training - building the fundamental knowledge. Over the course of about 2 months, I went through Stephane Maarek's course on Udemy. I think if you have never used AWS before, going in order probably makes the most sense. However, since I had used multiple services before, I skipped ahead to the VPC section and spent a considerable amount of time there to understand all the different services, while taking notes. Once I had a decent enough understanding, I went through the course from the beginning. While I wanted to speed through the content (I initially tried 1.5x speed), I was having a tough time retaining the knowledge. I decided to take it slow and take a bunch of notes, and really try to expand my baseline knowledge.

    Step 2: Testing my understanding. After talking to my colleagues and reading this subreddit, I bought the Jon Bonso Tutorials Dojo tests. My goal here was to test my fundamental understanding and identify my gaps. I only took the exams in timed mode, to simulate the pacing I would need. First-pass test scores below: Set 1: 73.85%, Set 2: 66.15%, Set 3: 76.92%, Set 4: 75.38%, Set 5: 70.77%, Set 6: 75.38%. I also took Stephane's exam at the end of his course, where I scored 75%.

    Step 3: Reviewing my understanding. After my tests, I would go through every incorrect question first, to understand why I got it wrong. Sometimes, it was a matter of choosing wrong between two similar choices (I had a tough time understanding what the difference was between an interface endpoint and a gateway endpoint). Other times, I was careless when reading the question and answers (e.g., the question asks for options which are not suitable). Other times, I just had no idea what the service was even capable of (no idea you could do Expedited Retrievals for Glacier).
    The TD exams are great, as they directly explain every single option, as well as link to resources where you can read up more about it. Some things that I struggled to remember were specific facts, so I took some time to memorize those (the minimum storage-duration days for the different S3 storage classes, and IOPS for the varying EBS volume types, for example).

    Step 4: Filling in the gaps. While Stephane's course covers a lot of content, I really wanted to shore up my understanding and fill in the gaps. While the practice test scores probably indicated that I could take the test and pass, I didn't want to risk having to retake it. So I went through various other resources, which included: the Tutorials Dojo comparison of AWS services and study guide; various AWS service FAQs (good to read through to confirm understanding; I would definitely recommend at least skimming these, focusing primarily on the services that Stephane covers in his course); AWS documentation (you could read through all of it, but I used the detailed docs specifically to learn about the services which came up on the practice exams that I didn't fully understand, such as gateway endpoint vs. interface endpoint); one whitepaper, the AWS Well-Architected Framework; and going through Stephane's course and my notes again.

    Step 5: Retaking practice exams and final prep. When I was a few days out from the exam, and at least a week+ after taking the initial Bonso exams, I wanted to retake them. I did not end up retaking all of them, but my scores were as follows: Set 1: 81.54%, Set 2: 95.38%, Set 3: 95.38%, Set 4: 100%. I definitely remembered some of the questions and answers, so these scores are definitely inflated. However, I was happy that I was now able to understand the questions which had tripped me up before. I finally wanted to do a couple more tests for practice, so I took a few of Neal Davis's practice exams on Udemy.
    Test scores below: 81%, 76%, 86%. By this point, I was pretty exhausted from taking so many practice exams and reading through so many materials, so I decided to stop at 3 exams. The morning of the exam, I lightly skimmed some of the more lightly covered services (mainly the machine learning ones mentioned in Stephane's course) to remind myself of what those are, as well as the TD study materials, such as the cheat sheets and comparison of services.

    Step 6: Taking the actual exam. By this point, I was fairly well versed in the timing required for the exam, so my pace was pretty solid. I went through and answered every question with about 50 minutes to spare, while flagging about 15 questions which I wasn't 100% certain about. Some of them varied in uncertainty (some I was 80% sure of the right answer, some I was 50-50). I used the remaining time to review every single question to make sure I had read it correctly (to make sure I didn't trip up on the wording or understanding). After the exam, I was fairly confident I had passed, but wasn't entirely sure about my score. Finished the exam about 1:30 pm on Friday, and received the results Saturday morning at about 5:30 AM with a score of 893, which was higher than I had expected.

    Final words: I would highly recommend all the materials I used in my studying. Stephane's course is excellent for covering all the topics, and he gives some helpful tips on what you will probably be tested on in the exam. Both Jon's and Neal's exams were extremely helpful for explaining why answers were right and why others were wrong. I also think in terms of level of difficulty, they are very similar to the actual exam, though this is subjective, as it depends on your level of knowledge and comfort with all the different services.
Here are some random tips and observations from my experience:

- I noticed a few questions on the real exam that were identical (or very similar) to ones I saw in the Jon Bonso and Neal Davis practice exams. Taking all those practice exams and reviewing them closely definitely helped in these cases.
- Remember that while there are 65 questions, 15 are unscored. Of course, I don't know which ones are unscored, but don't panic if you see a service that you've barely seen or barely remember from studying. Just flag it for review later, or try to eliminate as many options as possible before answering. Chances are these are the unscored questions, assuming you went through all these materials. I definitely remember at least one question which referenced some machine learning services.
- I am a very visual person, so I did my best to visualize all network-related questions while studying, such as VPCs, ELBs, Route 53, and CloudFront. While I did get a whiteboard and used it for one VPC-related question, it helped to be able to visualize these things in my head without having to write them out.
- Try to really understand and remember what makes a specific service unique or different from the others. This is covered in the various resources I have already mentioned, and it will depend on your specific level of understanding.
- Read the questions and answers carefully. This is important, as there are always specific keywords that indicate what the solution is looking for (cost, availability, durability, time periods, etc.). Especially with the long-winded questions and long answers, it's easy to glaze over them and misinterpret what the proposed solution is. When there are long answers with 3+ services in the picture, usually one of them is obviously wrong, so it's easy to eliminate. Definitely try to eliminate as many answers as possible before attempting to answer the question.
If you are unable to at least narrow it down to two answers during your practice exams, read up on the incorrect answers to understand why they are incorrect. You're probably still lacking some knowledge if you're unable to do so. Thanks for reading my long post, and good luck to you all on your certification journey. submitted by /u/Incompl

 