AWS Certification Exam Prep: DynamoDB Facts, Summaries and Questions/Answers.

DynamoDB


AWS Certification Exam Prep: DynamoDB facts and summaries, AWS DynamoDB Top 10 Questions and Answers Dump

Definition 1: Amazon DynamoDB is a fully managed proprietary NoSQL database service that supports key-value and document data structures, offered by Amazon.com as part of the Amazon Web Services portfolio. DynamoDB exposes a similar data model to, and derives its name from, Dynamo, but has a different underlying implementation: Dynamo had a multi-master design requiring the client to resolve version conflicts, whereas DynamoDB uses synchronous replication across multiple data centers for high durability and availability.

Definition 2: DynamoDB is a fast and flexible non-relational database service for any scale. DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS so that they don’t have to worry about hardware provisioning, setup and configuration, throughput capacity planning, replication, software patching, or cluster scaling.

Amazon DynamoDB explained

  • Fully Managed
  • Fast, consistent Performance
  • Fine-grained access control
  • Flexible

AWS DynamoDB Facts and Summaries

  1. Amazon DynamoDB is a low-latency NoSQL database.
  2. DynamoDB consists of Tables, Items, and Attributes.
  3. DynamoDB supports both document and key-value data models.
  4. DynamoDB supported document formats are JSON, HTML, and XML.
  5. DynamoDB has 2 types of Primary Keys: Partition Key, and a combination of Partition Key + Sort Key (Composite Key).
  6. DynamoDB has 2 consistency models: Strongly Consistent / Eventually Consistent.
  7. DynamoDB access is controlled using IAM policies.
  8. DynamoDB has fine-grained access control using the IAM Condition parameter dynamodb:LeadingKeys to allow users to access only the items where the partition key value matches their user ID.
  9. DynamoDB Indexes enable fast queries on specific data columns
  10. DynamoDB indexes give you a different view of your data based on alternative Partition / Sort Keys.
  11. DynamoDB Local Secondary Indexes must be created when you create your table; they have the same partition key as your table, but a different sort key.
  12. DynamoDB Global Secondary Indexes can be created at any time: at table creation or after. They can have a different partition key and a different sort key from your table.
  13. A DynamoDB Query operation finds items in a table using only the primary key attributes: you provide the primary key name and a distinct value to search for.
  14. A DynamoDB Scan operation examines every item in the table. By default, it returns all data attributes.
  15. DynamoDB Query operation is generally more efficient than a Scan.
  16. With DynamoDB, you can reduce the impact of a query or scan by setting a smaller page size which uses fewer read operations.
  17. To optimize DynamoDB performance, isolate scan operations to specific tables and segregate them from your mission-critical traffic.
  18. To optimize DynamoDB performance, try Parallel scans rather than the default sequential scan.
  19. To optimize DynamoDB performance: Avoid using scan operations if you can: design tables in a way that you can use Query, Get, or BatchGetItems APIs.
  20. When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity.
  21. DynamoDB Provisioned Throughput is measured in Capacity Units.
    • 1 Write Capacity Unit = one 1 KB write per second.
    • 1 Read Capacity Unit = one strongly consistent 4 KB read, or two eventually consistent 4 KB reads, per second. Eventually consistent reads give the maximum read performance.
  22. What is the maximum throughput that can be provisioned for a single DynamoDB table?
    DynamoDB is designed to scale without limits. However, if you want to exceed throughput rates of 10,000 write capacity units or 10,000 read capacity units for an individual table, you must contact AWS to increase it.
    If you want to provision more than 20,000 write capacity units or 20,000 read capacity units from a single subscriber account, you must first contact AWS to request a limit increase.
  23. DynamoDB Performance: DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications.
    • As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds
    • DAX improves response times for Eventually Consistent reads only.
    • With DAX, you point your API calls to the DAX cluster instead of your table.
    • If the item you are querying is in the cache, DAX will return it; otherwise, it will perform an Eventually Consistent GetItem operation against your DynamoDB table.
    • DAX reduces operational and application complexity by providing a managed service that is API compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
    • DAX is not suitable for write-intensive applications or applications that require Strongly Consistent reads.
    • For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
  24. DynamoDB Performance: ElastiCache
    • In-memory cache sits between your application and database
    • 2 different caching strategies: Lazy Loading and Write-Through. Lazy Loading only caches the data when it is requested.
    • ElastiCache node failures are not fatal; they just result in lots of cache misses.
    • Avoid stale data by implementing a TTL.
    • The Write-Through strategy writes data into the cache whenever there is a change to the database. Data is never stale.
    • Write-Through penalty: each write involves a write to the cache. An ElastiCache node failure means that data is missing until it is added or updated in the database.
    • ElastiCache is wasted resources if most of the data is never used.
  25. Time To Live (TTL) for DynamoDB allows you to define when items in a table expire so that they can be automatically deleted from the database. TTL is provided at no extra cost as a way to reduce storage usage and reduce the cost of storing irrelevant data without using provisioned throughput. With TTL enabled on a table, you can set a timestamp for deletion on a per-item basis, allowing you to limit storage usage to only those records that are relevant.
  26. DynamoDB Security: DynamoDB uses a CMK to generate and encrypt a unique data key for the table, known as the table key. An AWS Owned or AWS Managed CMK can be used to generate and encrypt table keys. The AWS Owned CMK is free of charge, while the AWS Managed CMK is chargeable. DynamoDB originally supported only AWS Owned and AWS Managed CMKs for encryption at rest; customer managed CMKs are now supported as well.
  27. Amazon DynamoDB offers fully managed encryption at rest. DynamoDB encryption at rest provides enhanced security by encrypting your data at rest using an AWS Key Management Service (AWS KMS) managed encryption key for DynamoDB. This functionality eliminates the operational burden and complexity involved in protecting sensitive data.
  28. DynamoDB is an alternative solution for session management storage. Its low data-access latency makes it a good data store for session state.
  29. DynamoDB Streams Use Cases and Design Patterns:
    How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
    How do you trigger an event based on a particular transaction?
    How do you audit or archive transactions?
    How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?
    As a NoSQL database, DynamoDB was not originally designed to support transactions. Although client-side libraries are available to mimic transaction capabilities, they are neither scalable nor cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions, to ensure that they are consistent and can be rolled back before commit.

    You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.

    AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region.

  30. By default, 20 global secondary indexes are allowed per table.
  31. What is one key difference between a global secondary index and a local secondary index?
    A local secondary index must have the same partition key as the main table.
  32. How many tables can an AWS account have per region? 256
  33. How many secondary indexes (global and local combined) are allowed per table? (by default): 25
    You can define up to 5 local secondary indexes and 20 global secondary indexes per table (by default) – for a total of 25.
  34. How can you increase your DynamoDB table limit in a region?
    By contacting AWS and requesting a limit increase
  35. For any AWS account, there is an initial limit of 256 tables per region.
  36. The minimum length of a partition key value is 1 byte. The maximum length is 2048 bytes.
  37. The minimum length of a sort key value is 1 byte. The maximum length is 1024 bytes.
  38. For tables with local secondary indexes, there is a 10 GB size limit per partition key value. A table with local secondary indexes can store any number of items, as long as the total size for any one partition key value does not exceed 10 GB.
  39. The following diagram shows a local secondary index named LastPostIndex. Note that the partition key is the same as that of the Thread table, but the sort key is LastPostDateTime.

    AWS DynamoDB secondary indexes example
  40. Relational vs Non Relational (SQL vs NoSQL)
Relational vs Non Relational
SQL vs NoSQL
SQL vs NoSQL in AWS
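The capacity-unit arithmetic from facts 21–22 can be sketched as small Python helpers. This is an illustration only (the function names are mine, not an AWS API); item sizes round up to the relevant unit boundary:

```python
import math

def required_wcu(item_size_kb, writes_per_second):
    """1 WCU = one 1 KB write per second; item sizes round up to the next 1 KB."""
    return math.ceil(math.ceil(item_size_kb) * writes_per_second)

def required_rcu(item_size_kb, reads_per_second, eventually_consistent=False):
    """1 RCU = one strongly consistent 4 KB read per second, or two
    eventually consistent 4 KB reads per second."""
    units = math.ceil(item_size_kb / 4) * reads_per_second
    if eventually_consistent:
        units /= 2  # eventual consistency doubles effective read throughput
    return math.ceil(units)
```

For example, ten 1 KB writes per second need 10 WCU, while ten eventually consistent 4 KB reads per second need only 5 RCU.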

Top
Reference: AWS DynamoDB

AWS DynamoDB Questions and Answers Dumps

Q0: What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:


  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX
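To make the cache-hit/cache-miss behavior described above concrete, here is a toy read-through item cache in Python. This illustrates the pattern only; it is not the real DAX client, and the class and callback names are invented:

```python
class ItemCacheSimulation:
    """Toy model of DAX's read-through item cache: hits are served from
    memory, misses fall through to an eventually consistent GetItem."""

    def __init__(self, get_item_from_table):
        self._get_item = get_item_from_table  # stand-in for the DynamoDB call
        self._cache = {}
        self.misses = 0

    def get_item(self, key):
        if key in self._cache:   # cache hit: served without touching the table
            return self._cache[key]
        self.misses += 1         # cache miss: eventually consistent table read
        item = self._get_item(key)
        self._cache[key] = item
        return item
```

Repeated reads for the same key hit the cache, which is why DAX pays off most for read-heavy workloads with repeated reads of individual keys.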


Top

Q2: A security system monitors 600 cameras, saving image metadata every minute to an Amazon DynamoDB table. Each sample is 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?

  • A. 6000
  • B. 10
  • C. 3600
  • D. 600

B. When you specify the write capacity of a table in DynamoDB, you specify it as the number of 1 KB writes per second. In the question above, since each camera writes once per minute, we divide 600 by 60 to get the number of 1 KB writes per second. This gives a value of 10.

You can specify the Write capacity in the Capacity tab of the DynamoDB table.

Reference: AWS working with tables
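The arithmetic behind answer B can be checked in a few lines (the variable names are mine):

```python
CAMERAS = 600
SAMPLE_SIZE_KB = 1        # each metadata sample is 1 KB, i.e. one write unit
INTERVAL_SECONDS = 60     # one sample per camera per minute

writes_per_second = CAMERAS / INTERVAL_SECONDS      # 600 / 60 = 10.0
required_wcu = int(writes_per_second * SAMPLE_SIZE_KB)
```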

Q3: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?

  • A. CustomerID
  • B. CustomerName
  • C. Location
  • D. Age


Answer- A
Use high-cardinality attributes. These are attributes that have distinct values for each item, such as email_id, employee_no, customer_id, session_id, order_id, and so on.
Use composite attributes. Try to combine more than one attribute to form a unique key.
Reference: Choosing the right DynamoDB Partition Key
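A quick way to see why CustomerID is the right choice is to compare the cardinality of the candidate attributes. Below is a sketch over hypothetical order records (the data and attribute names are illustrative):

```python
# Hypothetical order records; attribute names mirror the answer options.
orders = [
    {"customer_id": f"C{n:05d}", "customer_name": f"Name{n % 50}",
     "location": "NYC" if n % 2 else "LA", "age": 30 + n % 3}
    for n in range(1000)
]

def cardinality(records, attr):
    """Number of distinct values an attribute takes -> how evenly a partition
    key built on that attribute would spread reads and writes."""
    return len({record[attr] for record in records})
```

Here customer_id yields 1000 distinct partition-key values, while location yields only 2, so keys built on location would concentrate traffic on a couple of partitions.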

Top


Q4: A DynamoDB table is set with a Read Throughput capacity of 5 RCU. Which of the following read configuration will provide us the maximum read throughput?

  • A. Read capacity set to 5 for 4KB reads of data at strong consistency
  • B. Read capacity set to 5 for 4KB reads of data at eventual consistency
  • C. Read capacity set to 15 for 1KB reads of data at strong consistency
  • D. Read capacity set to 5 for 1KB reads of data at eventual consistency
Answer: B.
The calculation of throughput capacity for option B would be:
Read capacity (5) × read size (4 KB) = 20 KB per second.
Since eventual consistency is used, we can double the read throughput to 20 × 2 = 40 KB per second.

Reference: Read/Write Capacity Mode
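The four options can be compared with a small helper that applies the RCU formula (an illustrative sketch; the names are mine):

```python
def max_read_kb_per_sec(rcu, read_size_kb, eventually_consistent):
    """KB/s a table can serve: each RCU covers one 4 KB strongly consistent
    read per second, and eventual consistency doubles that."""
    per_unit_kb = min(read_size_kb, 4)  # one RCU never covers more than 4 KB
    throughput = rcu * per_unit_kb
    return throughput * 2 if eventually_consistent else throughput

options = {
    "A": max_read_kb_per_sec(5, 4, eventually_consistent=False),   # 20 KB/s
    "B": max_read_kb_per_sec(5, 4, eventually_consistent=True),    # 40 KB/s
    "C": max_read_kb_per_sec(15, 1, eventually_consistent=False),  # 15 KB/s
    "D": max_read_kb_per_sec(5, 1, eventually_consistent=True),    # 10 KB/s
}
```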

Top

Q5: Your team is developing a solution that will make use of DynamoDB tables. Due to the nature of the application, the data is needed across a couple of regions across the world. Which of the following would help reduce the latency of requests to DynamoDB from different regions?

  • A. Enable Multi-AZ for the DynamoDB table
  • B. Enable global tables for DynamoDB
  • C. Enable Indexes for the table
  • D. Increase the read and write throughput for the table
Answer: B
Amazon DynamoDB global tables provide a fully managed solution for deploying a multi-region, multi-master database, without having to build and maintain your own replication solution. When you create a global table, you specify the AWS regions where you want the table to be available. DynamoDB performs all of the necessary tasks to create identical tables in these regions, and propagates ongoing data changes to all of them.
Reference: Global Tables
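As a sketch, the request for the legacy (2017-version) CreateGlobalTable API is just the table name plus a replication group. The dict-building below shows the request shape you would pass to boto3's low-level DynamoDB client, assuming identical tables with DynamoDB Streams enabled already exist in each region:

```python
def global_table_request(table_name, regions):
    """Request shape for the legacy CreateGlobalTable API. Identical tables,
    with DynamoDB Streams enabled, must already exist in every listed region."""
    return {
        "GlobalTableName": table_name,
        "ReplicationGroup": [{"RegionName": region} for region in regions],
    }

# Usage sketch (not executed here):
#   boto3.client("dynamodb", region_name="us-east-1").create_global_table(
#       **global_table_request("Orders", ["us-east-1", "eu-west-1"]))
```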

Top

Q6: An application is currently accessing a DynamoDB table. Currently, the table's queries are performing well, but changes have been made to the application and its performance is starting to degrade. After looking at the changes, you see that the queries are making use of an attribute which is not the partition key. Which of the following would be the adequate change to make to resolve the issue?

  • A. Add an index for the DynamoDB table
  • B. Change all the queries to ensure they use the partition key
  • C. Enable global tables for DynamoDB
  • D. Change the read capacity on the table


Answer: A
Amazon DynamoDB provides fast access to items in a table by specifying primary key values. However, many applications might benefit from having one or more secondary (or alternate) keys available, to allow efficient access to data with attributes other than the primary key. To address this, you can create one or more secondary indexes on a table, and issue Query or Scan requests against these indexes.

A secondary index is a data structure that contains a subset of attributes from a table, along with an alternate key to support Query operations. You can retrieve data from the index using a Query, in much the same way as you use Query with a table. A table can have multiple secondary indexes, which gives your applications access to many different query patterns.

Reference: Improving Data Access with Secondary Indexes
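Once such an index exists, a query is routed through it simply by naming it. Below is a sketch of the low-level Query request shape (the table, index, and attribute names are placeholders I chose for illustration):

```python
def gsi_query_params(table_name, index_name, key_attr, key_value):
    """Low-level Query request that targets a secondary index instead of the
    base table's primary key. All names here are placeholders."""
    return {
        "TableName": table_name,
        "IndexName": index_name,  # route the query through the index
        "KeyConditionExpression": "#k = :v",
        "ExpressionAttributeNames": {"#k": key_attr},
        "ExpressionAttributeValues": {":v": {"S": key_value}},
    }

# Usage sketch (not executed here):
#   boto3.client("dynamodb").query(
#       **gsi_query_params("Orders", "StatusIndex", "order_status", "SHIPPED"))
```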

Top

Q7: Company B has created an e-commerce site using DynamoDB and is designing a products table that includes items purchased and the users who purchased the item.
When creating a primary key on a table which of the following would be the best attribute for the partition key? Select the BEST possible answer.

  • A. None of these are correct.
  • B. user_id where there are many users to few products
  • C. category_id where there are few categories to many products
  • D. product_id where there are few products to many users
Answer: B.
When designing tables, it is important for the data to be distributed evenly across the entire table. It is a performance best practice to choose a partition key that has many distinct values relative to the number of items, for example many users to few products. An example of bad design would be a partition key of product_id where there are few products but many users.
Reference: Partition Keys and Sort Keys

Top



Q8: Which API call can be used to retrieve up to 100 items at a time or 16 MB of data from a DynamoDB table?

  • A. BatchItem
  • B. GetItem
  • C. BatchGetItem
  • D. ChunkGetItem
Answer: C. BatchGetItem

The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.

A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem will return a partial result if the response size limit is exceeded, the table’s provisioned throughput is exceeded, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.

Reference: API-Specific Limits
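A minimal retry loop for the UnprocessedKeys behavior might look like the following. The client callable is injected so the sketch stays self-contained; in practice you would pass a boto3 client's batch_get_item method and add exponential backoff between retries:

```python
def batch_get_all(batch_get_item, request_items):
    """Drain a BatchGetItem request, re-issuing UnprocessedKeys until empty.
    `batch_get_item` is any callable with the BatchGetItem signature."""
    results = {}
    pending = request_items
    while pending:
        response = batch_get_item(RequestItems=pending)
        for table, items in response.get("Responses", {}).items():
            results.setdefault(table, []).extend(items)
        pending = response.get("UnprocessedKeys", {})  # retry leftovers
    return results
```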

Top

Q9: Which DynamoDB limits can be raised by contacting AWS support?

  • A. The number of hash keys per account
  • B. The maximum storage used per account
  • C. The number of tables per account
  • D. The number of local secondary indexes per account
  • E. The number of provisioned throughput units per account


Answer: C. and E.

For any AWS account, there is an initial limit of 256 tables per region.
AWS places some default limits on the throughput you can provision.
These are the limits unless you request a higher amount.
To request a service limit increase see https://aws.amazon.com/support.

Reference: Limits in DynamoDB


Top

Q10: Which approach below provides the least impact to provisioned throughput on the “Product” table?

  • A. Create an “Images” DynamoDB table to store the Image with a foreign key constraint to
    the “Product” table
  • B. Add an image data type to the “Product” table to store the images in binary format
  • C. Serialize the image and store it in multiple DynamoDB tables
  • D. Store the images in Amazon S3 and add an S3 URL pointer to the “Product” table item
    for each image


Answer: D.

Amazon DynamoDB currently limits the size of each item that you store in a table (see Limits in DynamoDB). If your application needs to store more data in an item than the DynamoDB size limit permits, you can try compressing one or more large attributes, or you can store them as an object in Amazon Simple Storage Service (Amazon S3) and store the Amazon S3 object identifier in your DynamoDB item.
Compressing large attribute values can let them fit within item limits in DynamoDB and reduce your storage costs. Compression algorithms such as GZIP or LZO produce binary output that you can then store in a Binary attribute type.
Reference: Best Practices for Storing Large Items and Attributes
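Here is a sketch of the S3-pointer pattern from the answer, combined with the attribute-compression tip. The item layout, attribute names, and bucket structure are illustrative assumptions, not a fixed schema:

```python
import gzip

def product_item(product_id, image_bytes, bucket):
    """Item for the S3-pointer pattern: the full-size image lives in S3 and
    only a URL pointer is stored in DynamoDB. A small thumbnail is kept
    inline, gzip-compressed into a Binary attribute."""
    s3_key = f"product-images/{product_id}.jpg"
    return {
        "product_id": {"S": product_id},
        "image_url": {"S": f"https://{bucket}.s3.amazonaws.com/{s3_key}"},
        "thumbnail": {"B": gzip.compress(image_bytes)},  # compressed binary attribute
    }
```

The large object goes to S3 (with a separate put_object call), while the DynamoDB item stays far below the item-size limit.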


Top

Q11: You’re creating a DynamoDB database for hosting forums. Your “thread” table contains the forum name, and each “forum name” can have one or more “subjects”. What primary key type would you give the thread table in order to allow more than one subject to be tied to the forum primary key name?

  • A. Hash
  • B. Range and Hash
  • C. Primary and Range
  • D. Hash and Range

Answer: D.
Each forum name can have one or more subjects. In this case, ForumName is the hash attribute and Subject is the range attribute.

Reference: DynamoDB keys
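Defining such a composite key in a CreateTable request looks like this. This is a sketch: the attribute names follow the ForumName/Subject example above, and the billing mode is an assumption I added to keep the request complete:

```python
def thread_table_params(table_name="Thread"):
    """CreateTable request for a composite (hash + range) primary key:
    ForumName is the hash key and Subject is the range key."""
    return {
        "TableName": table_name,
        "KeySchema": [
            {"AttributeName": "ForumName", "KeyType": "HASH"},   # partition key
            {"AttributeName": "Subject", "KeyType": "RANGE"},    # sort key
        ],
        "AttributeDefinitions": [
            {"AttributeName": "ForumName", "AttributeType": "S"},
            {"AttributeName": "Subject", "AttributeType": "S"},
        ],
        "BillingMode": "PAY_PER_REQUEST",
    }

# Usage sketch (not executed here):
#   boto3.client("dynamodb").create_table(**thread_table_params())
```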

Top

Amazon Aurora explained:

  • High scalability
  • High availability and durability
  • High Performance
  • Multi Region

Amazon ElastiCache Explained

  • In-Memory data store
  • High availability and reliability
  • Fully managed
  • Supports two popular open-source engines: Redis and Memcached

Amazon Redshift explained

  • Fast, fully managed, petabyte-scale data warehouse
  • Supports wide range of open data formats
  • Allows you to run SQL queries against large unstructured data in Amazon Simple Storage Service
  • Integrates with popular Business Intelligence (BI) and Extract, Transform, Load (ETL) solutions.

Amazon Neptune Explained

  • Fully managed graph database
  • Supports open graph APIs
  • Used in Social Networking

AWS Certification Exam Prep: S3 Facts, Summaries, Questions and Answers


AWS S3 Facts and summaries, AWS S3 Top 10 Questions and Answers Dump

Definition 1: Amazon S3 or Amazon Simple Storage Service is a “simple storage service” offered by Amazon Web Services that provides object storage through a web service interface. Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to run its global e-commerce network.

Definition 2: Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

AWS S3 Explained graphically:

Amazon S3 Explained graphically

AWS S3 Facts and summaries

  1. S3 is a universal namespace, meaning each S3 bucket you create must have a unique name that is not being used by anyone else in the world.
  2. S3 is object based: i.e., it allows you to upload files.
  3. Files can be from 0 bytes to 5 TB.
  4. What is the maximum length, in bytes, of a DynamoDB range primary key attribute value?
    The maximum length of a DynamoDB range (sort) primary key attribute value is 1024 bytes (NOT 256 bytes).
  5. S3 has unlimited storage.
  6. Files are stored in Buckets.
  7. Read after write consistency for PUTS of new Objects
  8. Eventual Consistency for overwrite PUTS and DELETES (can take some time to propagate)
  9. S3 Storage Classes/Tiers:
    • S3 Standard (durable, immediately available, frequently accesses)
    • Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering): It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access.
    • S3 Standard-Infrequent Access – S3 Standard-IA (durable, immediately available, infrequently accessed)
    • S3 – One Zone-Infrequent Access – S3 One Zone-IA: Same as IA; however, data is stored in a single Availability Zone only
    • S3 – Reduced Redundancy Storage (data that is easily reproducible, such as thumbnails, etc.)
    • Glacier – Archived data, where you can wait 3-5 hours before accessing

    You can have a bucket that has different objects stored in S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA.

  10. The default URL for S3 hosted websites lists the bucket name first, followed by s3-website-<region>.amazonaws.com. Example: enoumen.com.s3-website-us-east-1.amazonaws.com
  11. Core fundamentals of an S3 object
    • Key (name)
    • Value (data)
    • Version (ID)
    • Metadata
    • Sub-resources (used to manage bucket-specific configuration)
      • Bucket Policies, ACLs,
      • CORS
      • Transfer Acceleration
  12. Object-based storage only for files
  13. Not suitable to install OS on.
  14. Successful uploads will generate an HTTP 200 status code.
  15. S3 Security – Summary
    • By default, all newly created buckets are PRIVATE.
    • You can set up access control to your buckets using:
      • Bucket Policies – Applied at the bucket level
      • Access Control Lists – Applied at an object level.
    • S3 buckets can be configured to create access logs, which log all requests made to the S3 bucket. These logs can be written to another bucket.
  16. S3 Encryption
    • Encryption In-Transit (SSL/TLS)
    • Encryption At Rest:
      • Server side Encryption (SSE-S3, SSE-KMS, SSE-C)
      • Client Side Encryption
    • Remember that we can use a Bucket policy to prevent unencrypted files from being uploaded by creating a policy which only allows requests which include the x-amz-server-side-encryption parameter in the request header.
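Such a policy can be sketched as follows. The bucket name is hypothetical, and with boto3 the JSON would be applied via put_bucket_policy; the Null condition denies any PutObject request that omits the x-amz-server-side-encryption header:

```python
import json

BUCKET = "my-example-bucket"  # hypothetical bucket name

# Deny any PutObject request that does not include the
# x-amz-server-side-encryption header.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
    }],
}

policy_json = json.dumps(policy, indent=2)
# boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=policy_json)
print(policy_json)
```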
  17. S3 CORS (Cross Origin Resource Sharing):
    CORS defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.

    • Used to enable cross origin access for your AWS resources, e.g. S3 hosted website accessing javascript or image files located in another bucket. By default, resources in one bucket cannot access resources located in another. To allow this we need to configure CORS on the bucket being accessed and enable access for the origin (bucket) attempting to access.
    • Always use the S3 website URL, not the regular bucket URL. E.g. use http://acloudguru.s3-website.eu-west-2.amazonaws.com rather than https://s3-eu-west-2.amazonaws.com/acloudguru
  18. S3 CloudFront:
    • Edge locations are not just read-only – you can write to them too (i.e., PUT an object onto them).
    • Objects are cached for the life of the TTL (Time to Live)
    • You can clear cached objects, but you will be charged. (Invalidation)
  19. S3 Performance optimization – 2 main approaches to Performance Optimization for S3:
    • GET-Intensive Workloads – Use CloudFront
    • Mixed Workloads – Avoid sequential key names for your S3 objects. Instead, add a random prefix, such as a hex hash, to the key name to prevent multiple objects from being stored on the same partition. (Note: AWS's 2018 request-rate increase removed this guidance; see Q7 below.)
      • mybucket/7eh4-2019-03-04-15-00-00/cust1234234/photo1.jpg
      • mybucket/h35d-2019-03-04-15-00-00/cust1234234/photo2.jpg
      • mybucket/o3n6-2019-03-04-15-00-00/cust1234234/photo3.jpg
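A hex prefix of the kind shown above can be derived like this. randomized_key is a hypothetical helper, and, as the Q7 answer later in this dump explains, AWS's 2018 request-rate increase means this randomization is no longer required:

```python
import hashlib

def randomized_key(customer_id: str, timestamp: str, filename: str) -> str:
    # Derive a short hex prefix from the object's identity so that
    # sequentially named objects land on different partitions.
    digest = hashlib.md5(f"{customer_id}/{filename}".encode()).hexdigest()
    return f"{digest[:4]}-{timestamp}/{customer_id}/{filename}"

key = randomized_key("cust1234234", "2019-03-04-15-00-00", "photo1.jpg")
print(key)  # key with a 4-character hex prefix, like the examples above
```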
  20. The best way to handle large objects uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts.
  21. You can enable versioning on a bucket, even if that bucket already has objects in it. The already existing objects, though, will show their versions as null. All new objects will have version IDs.
  22. Bucket names cannot start with a . or - character. S3 bucket names can contain both the . and - characters. There can only be one . or one - between labels. E.g. mybucket-com and mybucket.com are valid names, but mybucket--com and mybucket..com are not valid bucket names.
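The rules in this item can be turned into a small validator. This sketch implements the rules exactly as stated above; note that AWS's current naming rules differ in some details (for example, adjacent hyphens are permitted today):

```python
import re

def valid_bucket_name(name: str) -> bool:
    # Implements the rules stated in this item: 3-63 characters of
    # lowercase letters, digits, '.' and '-'; no leading or trailing
    # '.' or '-'; no repeated or mixed separators between labels.
    if not 3 <= len(name) <= 63:
        return False
    if name[0] in ".-" or name[-1] in ".-":
        return False
    if any(pair in name for pair in ("..", "--", ".-", "-.")):
        return False
    return re.fullmatch(r"[a-z0-9.-]+", name) is not None

print(valid_bucket_name("mybucket.com"))   # True
print(valid_bucket_name("mybucket--com"))  # False
```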
  23. What is the maximum number of S3 buckets allowed per AWS account (by default)? 100
  24. You successfully upload an item to the us-east-1 region. You then immediately make another API call and attempt to read the object. What will happen?
    Read-after-write consistency allows you to retrieve objects immediately after creation in Amazon S3, so the read will return the new object. All AWS regions provide read-after-write consistency for PUTs of new objects. (Since December 2020, S3 delivers strong read-after-write consistency for all PUT and DELETE operations; older exam material treated overwrites and deletes as eventually consistent, where you could sometimes get stale results after recent changes.)
  25. S3 bucket policies require a Principal to be defined. Review the access policy elements in the AWS IAM JSON policy documentation.
  26. What checksums does Amazon S3 employ to detect data corruption?

    Amazon S3 uses a combination of Content-MD5 checksums and cyclic redundancy checks (CRCs) to detect data corruption. Amazon S3 performs these checksums on data at rest and repairs any corruption using redundant data. In addition, the service calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.
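The Content-MD5 value mentioned here is the base64-encoded binary MD5 digest of the payload; a client can compute it so S3 can verify the upload end to end. A minimal sketch:

```python
import base64
import hashlib

def content_md5(data: bytes) -> str:
    # Base64-encoded binary MD5 digest, as expected by the
    # Content-MD5 request header.
    return base64.b64encode(hashlib.md5(data).digest()).decode("ascii")

header = content_md5(b"hello world")
print(header)
# With boto3: s3.put_object(Bucket=..., Key=..., Body=data, ContentMD5=header)
```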

Top
Reference: AWS S3

AWS S3 Top 10 Questions and Answers Dump

Q0: You’ve written an application that uploads objects onto an S3 bucket. The size of the objects varies between 200 and 500 MB. You’ve seen that the application sometimes takes longer than expected to upload an object. You want to improve the performance of the application. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket


Answer: C. The other options do not address upload performance for large objects; the best way to handle large object uploads to S3 is the Multipart Upload API, which enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: you initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
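The part arithmetic behind a multipart upload can be sketched as below. plan_parts is a hypothetical helper; a real upload would then call create_multipart_upload, upload_part for each part, and complete_multipart_upload through the SDK:

```python
import math

MIN_PART = 5 * 1024 * 1024   # S3's 5 MiB minimum (all parts except the last)
MAX_PARTS = 10_000           # S3's per-upload part limit

def plan_parts(object_size: int, part_size: int = 8 * 1024 * 1024):
    """Return (number_of_parts, size_of_last_part) for a multipart upload."""
    if part_size < MIN_PART:
        raise ValueError("part size below S3's 5 MiB minimum")
    parts = math.ceil(object_size / part_size)
    if parts > MAX_PARTS:
        raise ValueError("too many parts; use a larger part size")
    last = object_size - (parts - 1) * part_size
    return parts, last

# A 500 MB object (the upper bound in Q0) split into 8 MiB parts:
print(plan_parts(500 * 1000 * 1000))  # → (60, 5072128)
```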

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html


Top

Q2: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::Layerversion
  • D. AWS::Serverless::Function


Answer – B
The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket.
Reference: Declaring Serverless Resources

Top

Q3: A static web site has been hosted on a bucket and is now being accessed by users. One of the web pages’ JavaScript sections has been changed to access data that is hosted in another S3 bucket. Now that same web page is no longer loading in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket


Answer – C

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.
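For Scenario 1 above, the bucket's CORS configuration could look like the following. This is a sketch in the shape boto3's put_bucket_cors expects; the origin is the hypothetical website endpoint from the scenario:

```python
# CORS rules allowing the website origin to make GET and PUT
# requests against the bucket's REST API endpoint.
cors_configuration = {
    "CORSRules": [{
        "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
        "AllowedMethods": ["GET", "PUT"],
        "AllowedHeaders": ["*"],
        "MaxAgeSeconds": 3000,
    }]
}
# boto3.client("s3").put_bucket_cors(
#     Bucket="website", CORSConfiguration=cors_configuration)
print(cors_configuration["CORSRules"][0]["AllowedOrigins"][0])
```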

Reference: Cross-Origin Resource Sharing (CORS)


Top

Q4: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below.

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.


Answer: C
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on key values generated from the user ID.
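One common pattern for this approach is to scope the temporary credentials to a per-user key prefix. A sketch, where user_scoped_policy is a hypothetical helper and the bucket and user ID are placeholders; the resulting JSON would be passed as the Policy parameter to sts.get_federation_token or sts.assume_role:

```python
import json

def user_scoped_policy(bucket: str, user_id: str) -> str:
    # Limit the temporary credentials to this user's own key prefix.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket}/{user_id}/*",
        }],
    }
    return json.dumps(policy)

print(user_scoped_policy("photo-share-images", "user-42"))
```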

Reference: The AWS Security Token Service (STS)


Top

Q5: Both ACLs and Bucket Policies can be used to grant access to S3 buckets. Which of the following statements is true about ACLs and Bucket policies?

  • A. Bucket Policies are Written in JSON and ACLs are written in XML
  • B. ACLs can be attached to S3 objects or S3 Buckets
  • C. Bucket Policies and ACLs are written in JSON
  • D. Bucket policies are only attached to s3 buckets, ACLs are only attached to s3 objects
Answer: A. and B.
Only Bucket Policies are written in JSON, ACLs are written in XML.
While Bucket policies are indeed only attached to S3 buckets, ACLs can be attached to S3 Buckets OR S3 Objects.
Reference:

Top


Q6: What are good options to improve S3 performance when you have significantly high numbers of GET requests?

  • A. Introduce random prefixes to S3 objects
  • B. Introduce random suffixes to S3 objects
  • C. Setup CloudFront for S3 objects
  • D. Migrate commonly used objects to Amazon Glacier
Answer: C
CloudFront caching is an excellent way to avoid putting extra strain on the S3 service and to improve the response times of requests by caching data closer to users at CloudFront edge locations.
S3 Transfer Acceleration optimizes the TCP protocol and adds additional intelligence between the client and the S3 bucket, making S3 Transfer Acceleration a better choice if higher throughput is desired. If you have objects that are smaller than 1 GB or if the data set is less than 1 GB in size, you should consider using Amazon CloudFront’s PUT/POST commands for optimal performance.
Reference: Amazon S3 Transfer Acceleration

Top

Q7: If an application is storing hourly log files from thousands of instances from a high-traffic web site, which naming scheme would give optimal performance on S3?

  • A. Sequential
  • B. HH-DD-MM-YYYY-log_instanceID
  • C. YYYY-MM-DD-HH-log_instanceID
  • D. instanceID_log-HH-DD-MM-YYYY
  • E. instanceID_log-YYYY-MM-DD-HH


Answer: A. B. C. D. and E.
Amazon S3 now provides increased performance to support at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data, which can save significant processing time for no additional charge. Each S3 prefix can support these request rates, making it simple to increase performance significantly.
This S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance. That means you can now use logical or sequential naming patterns in S3 object naming without any performance implications.

Reference: Amazon S3 Announces Increased Request Rate Performance


Top

Q8: You are working with the S3 API and receive an error message: 409 Conflict. What is the possible cause of this error?

  • A. You’re attempting to remove a bucket without emptying the contents of the bucket first.
  • B. You’re attempting to upload an object to the bucket that is greater than 5TB in size.
  • C. Your request does not contain the proper metadata.
  • D. Amazon S3 is having internal issues.

Answer: A.
Attempting to delete a bucket that still contains objects returns a 409 Conflict (BucketNotEmpty) error; you must empty the bucket before deleting it.

Top

Q9: You created three S3 buckets – “mywebsite.com”, “downloads.mywebsite.com”, and “www.mywebsite.com”. You uploaded your files and enabled static website hosting. You specified both of the default documents under the “enable static website hosting” header. You also set the “Make Public” permission for the objects in each of the three buckets. You create the Route 53 aliases for the three buckets. You are going to have your end users test your websites by browsing to http://mywebsite.com/error.html, http://downloads.mywebsite.com/index.html, and http://www.mywebsite.com. What problems will your testers encounter?

  • A. http://mywebsite.com/error.html will not work because you did not set a value for the error.html file
  • B. There will be no problems, all three sites should work.
  • C. http://www.mywebsite.com will not work because the URL does not include a file name at the end of it.
  • D. http://downloads.mywebsite.com/index.html will not work because the “downloads” prefix is not a supported prefix for S3 websites using Route 53 aliases

Answer: B.
It used to be that the only allowed domain prefix when creating Route 53 aliases for S3 static websites was the “www” prefix. However, this is no longer the case: you can now use other subdomains.

Reference: Hosting a Static Website on Amazon S3

Top

Q10: Which of the following is NOT a common S3 API call?

  • A. UploadPart
  • B. ReadObject
  • C. PutObject
  • D. DownloadBucket

Top

Other AWS Facts and Summaries

2022 AWS Certified Developer Associate Exam Preparation: Questions and Answers Dump


2022 AWS Certified Developer Associate Exam Preparation: Questions and Answers Dump.

Welcome to AWS Certified Developer Associate Exam Preparation:

Definition and Objectives, Top 100 Questions and Answers dump, White papers, Courses, Labs and Training Materials, Exam info and details, References, Jobs, Others AWS Certificates

#AWS #Developer #AWSCloud #DVAC02 #AWSDeveloper #AWSDev #Djamgatech

What is the AWS Certified Developer Associate Exam?

This AWS Certified Developer-Associate Examination is intended for individuals who perform a Developer role. It validates an examinee’s ability to:

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications by using AWS

Recommended general IT knowledge
The target candidate should have the following:
– In-depth knowledge of at least one high-level programming language
– Understanding of application lifecycle management
– The ability to write code for serverless applications
– Understanding of the use of containers in the development process

Recommended AWS knowledge
The target candidate should be able to do the following:

  • Use the AWS service APIs, CLI, and software development kits (SDKs) to write applications
  • Identify key features of AWS services
  • Understand the AWS shared responsibility model
  • Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy applications on AWS
  • Use and interact with AWS services
  • Apply basic understanding of cloud-native applications to write code
  • Write code by using AWS security best practices (for example, use IAM roles instead of secret and access keys in the code)
  • Author, maintain, and debug code modules on AWS

What is considered out of scope for the target candidate?
The following is a non-exhaustive list of related job tasks that the target candidate is not expected to be able to perform. These items are considered out of scope for the exam:
– Design architectures (for example, distributed system, microservices)
– Design and implement CI/CD pipelines
– Administer IAM users and groups
– Administer Amazon Elastic Container Service (Amazon ECS)
– Design AWS networking infrastructure (for example, Amazon VPC, AWS Direct Connect)
– Understand compliance and licensing

Exam content
Response types
There are two types of questions on the exam:
– Multiple choice: Has one correct response and three incorrect responses (distractors)
– Multiple response: Has two or more correct responses out of five or more response options
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose.
Distractors are generally plausible responses that match the content area.
Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.

Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context.
The percentage in each domain represents only scored content.

Domain 1: Deployment 22%
Domain 2: Security 26%
Domain 3: Development with AWS Services 30%
Domain 4: Refactoring 10%
Domain 5: Monitoring and Troubleshooting 12%

Domain 1: Deployment
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
–  Commit code to a repository and invoke build, test and/or deployment actions
–  Use labels and branches for version and release management
–  Use AWS CodePipeline to orchestrate workflows against different environments
–  Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS
CodeDeploy for CI/CD purposes
–  Perform a rollback based on the application deployment policy

1.2 Deploy applications using AWS Elastic Beanstalk.
–  Utilize existing supported environments to define a new application stack
–  Package the application
–  Introduce a new application version into the Elastic Beanstalk environment
–  Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable)
–  Validate application health using Elastic Beanstalk dashboard
–  Use Amazon CloudWatch Logs to instrument application logging

1.3 Prepare the application deployment package to be deployed to AWS.
–  Manage the dependencies of the code module (like environment variables, config files and static image files) within the package
–  Outline the package/container directory structure and organize files appropriately
–  Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)

1.4 Deploy serverless applications.
–  Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template
–  Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)

Domain 2: Security
2.1 Make authenticated calls to AWS services.
–  Communicate required policy based on least privileges required by application.
–  Assume an IAM role to access a service
–  Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)

2.2 Implement encryption using AWS services.
– Encrypt data at rest (client side; server side; envelope encryption) using AWS services
–  Encrypt data in transit
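In code, the at-rest options above differ only in the parameters sent with the upload. A sketch, where the bucket, key, and KMS key alias are placeholders:

```python
# Server-side encryption parameters for an S3 upload.
sse_s3 = {"ServerSideEncryption": "AES256"}      # SSE-S3 (S3-managed keys)
sse_kms = {"ServerSideEncryption": "aws:kms",    # SSE-KMS
           "SSEKMSKeyId": "alias/my-app-key"}    # placeholder key alias

# With boto3 (encryption in transit comes from using the HTTPS endpoint):
# s3.put_object(Bucket="my-bucket", Key="report.pdf", Body=data, **sse_kms)
print(sse_s3, sse_kms)
```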

2.3 Implement application authentication and authorization.
– Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools
–  Use Amazon Cognito-provided credentials to write code that accesses AWS services.
–  Use Amazon Cognito sync to synchronize user profiles and data
–  Use developer-authenticated identities to interact between end user devices, backend
authentication, and Amazon Cognito

Domain 3: Development with AWS Services
3.1 Write code for serverless applications.
– Compare and contrast server-based vs. serverless model (e.g., micro services, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications)
– Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler)
– Create an API endpoint using Amazon API Gateway
–  Create and test appropriate API actions like GET, POST using the API endpoint
–  Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes)
–  Compute read/write capacity units for Amazon DynamoDB based on application requirements
–  Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis)
–  Invoke an AWS Lambda function synchronously and asynchronously
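The DynamoDB capacity calculation named in this objective can be sketched as follows. rcu and wcu are hypothetical helpers built on the documented rules: one RCU covers one strongly consistent read per second of an item up to 4 KB (eventually consistent reads need half as many), and one WCU covers one write per second of an item up to 1 KB:

```python
import math

def rcu(item_kb: float, reads_per_sec: int, strongly_consistent: bool = True) -> int:
    # Each read consumes ceil(item size / 4 KB) units; eventually
    # consistent reads need half as many.
    units = math.ceil(item_kb / 4) * reads_per_sec
    return units if strongly_consistent else math.ceil(units / 2)

def wcu(item_kb: float, writes_per_sec: int) -> int:
    # Each write consumes ceil(item size / 1 KB) units.
    return math.ceil(item_kb) * writes_per_sec

print(rcu(6, 10))                             # → 20
print(rcu(6, 10, strongly_consistent=False))  # → 10
print(wcu(1.5, 10))                           # → 20
```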

3.2 Translate functional requirements into application design.
– Determine real-time vs. batch processing for a given use case
– Determine use of synchronous vs. asynchronous for a given use case
– Determine use of event vs. schedule/poll for a given use case
– Account for tradeoffs for consistency models in an application design


Domain 4: Refactoring
4.1 Optimize applications to best use AWS services and features.
– Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache)
– Apply an Amazon S3 naming scheme for optimal read performance

4.2 Migrate existing application code to run on AWS.
– Isolate dependencies
– Run the application as one or more stateless processes
– Develop in order to enable horizontal scalability
– Externalize state

Domain 5: Monitoring and Troubleshooting

5.1 Write code that can be monitored.
– Create custom Amazon CloudWatch metrics
– Perform logging in a manner available to systems operators
– Instrument application source code to enable tracing in AWS X-Ray
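A custom metric of the kind described in 5.1 is just a datum published to a namespace. A sketch, where the namespace, metric name, and dimension are hypothetical:

```python
# The payload shape for publishing a custom CloudWatch metric.
metric_payload = {
    "Namespace": "MyApp",                    # hypothetical namespace
    "MetricData": [{
        "MetricName": "OrdersProcessed",     # hypothetical metric
        "Dimensions": [{"Name": "Stage", "Value": "prod"}],
        "Value": 1.0,
        "Unit": "Count",
    }],
}
# boto3.client("cloudwatch").put_metric_data(**metric_payload)
print(metric_payload["MetricData"][0]["MetricName"])
```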

5.2 Perform root cause analysis on faults found in testing or production.
– Interpret the outputs from the logging mechanism in AWS to identify errors in logs
– Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues
– Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component

Which key tools, technologies, and concepts might be covered on the exam?

The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
– Analytics
– Application Integration
– Containers
– Cost and Capacity Management
– Data Movement
– Developer Tools
– Instances (virtual machines)
– Management and Governance
– Networking and Content Delivery
– Security
– Serverless

AWS services and features

Analytics:
– Amazon Elasticsearch Service (Amazon ES)
– Amazon Kinesis
Application Integration:
– Amazon EventBridge (Amazon CloudWatch Events)
– Amazon Simple Notification Service (Amazon SNS)
– Amazon Simple Queue Service (Amazon SQS)
– AWS Step Functions

Compute:
– Amazon EC2
– AWS Elastic Beanstalk
– AWS Lambda

Containers:
– Amazon Elastic Container Registry (Amazon ECR)
– Amazon Elastic Container Service (Amazon ECS)
– Amazon Elastic Kubernetes Services (Amazon EKS)

Database:
– Amazon DynamoDB
– Amazon ElastiCache
– Amazon RDS

Developer Tools:
– AWS CodeArtifact
– AWS CodeBuild
– AWS CodeCommit
– AWS CodeDeploy
– Amazon CodeGuru
– AWS CodePipeline
– AWS CodeStar
– AWS Fault Injection Simulator
– AWS X-Ray

Management and Governance:
– AWS CloudFormation
– Amazon CloudWatch

Networking and Content Delivery:
– Amazon API Gateway
– Amazon CloudFront
– Elastic Load Balancing

Security, Identity, and Compliance:
– Amazon Cognito
– AWS Identity and Access Management (IAM)
– AWS Key Management Service (AWS KMS)

Storage:
– Amazon S3

Out-of-scope AWS services and features

The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.
Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant.
Out-of-scope AWS services and features include the following:
– AWS Application Discovery Service
– Amazon AppStream 2.0
– Amazon Chime
– Amazon Connect
– AWS Database Migration Service (AWS DMS)
– AWS Device Farm
– Amazon Elastic Transcoder
– Amazon GameLift
– Amazon Lex
– Amazon Machine Learning (Amazon ML)
– AWS Managed Services
– Amazon Mobile Analytics
– Amazon Polly

– Amazon QuickSight
– Amazon Rekognition
– AWS Server Migration Service (AWS SMS)
– AWS Service Catalog
– AWS Shield Advanced
– AWS Shield Standard
– AWS Snow Family
– AWS Storage Gateway
– AWS WAF
– Amazon WorkMail
– Amazon WorkSpaces

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

Top

AWS Certified Developer – Associate Practice Questions And Answers Dump

Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your
partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost.
How can you accommodate the partners’ broken web services without wasting your resources?

  • A. Create a delay queue and set DelaySeconds to 30 seconds
  • B. Requeue the message with a VisibilityTimeout of 30 seconds.
  • C. Create a dead letter queue and set the Maximum Receives to 3.
  • D. Requeue the message with a DelaySeconds of 30 seconds.
2022 AWS Certified Developer Associate Exam Preparation:  Questions and Answers Dump
AWS Developer Associates DVA-C01 PRO
 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 


C. After a message is taken from the queue and returned for the maximum number of retries, it is
automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.

Reference: Amazon SQS Dead-Letter Queues
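Answer C could be applied with a sketch like the following. The queue names and ARN are hypothetical, but the `RedrivePolicy` JSON keys (`deadLetterTargetArn`, `maxReceiveCount`) are the real SQS attribute format used by `set_queue_attributes`:

```python
import json

def redrive_policy(dlq_arn, max_receives=3):
    """Build the RedrivePolicy attribute for an SQS source queue.

    After a message has been received `max_receives` times without being
    deleted, SQS moves it to the dead-letter queue instead of requeueing it.
    """
    return {
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": str(max_receives),
        })
    }

# Applying it with boto3 would look roughly like:
#   sqs = boto3.client("sqs")
#   sqs.set_queue_attributes(QueueUrl=source_queue_url,
#                            Attributes=redrive_policy(dlq_arn))
attrs = redrive_policy("arn:aws:sqs:us-east-1:123456789012:commands-dlq")
print(json.loads(attrs["RedrivePolicy"])["maxReceiveCount"])  # 3
```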


Top

Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently.
What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. The AWS Documentation mentions the following:

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

Q2: You are creating a DynamoDB table with the following attributes:

  • PurchaseOrderNumber (partition key)
  • CustomerID
  • PurchaseDate
  • TotalPurchaseValue

One of your applications must retrieve items from the table to calculate the total value of purchases for a
particular customer over a date range. What secondary index do you need to add to the table?

  • A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute
  • C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute


C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition
key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the
TotalPurchaseValue into the index provides all the data needed to satisfy the use case.

Reference: AWS DynamoDB Global Secondary Indexes

Difference between local and global indexes in DynamoDB

    • Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
    • Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
    • Local Secondary Indexes still rely on the original hash key. When you supply a table with hash+range, think of the LSIs as hash+range1, hash+range2, …, hash+range6: you get 5 more range attributes to query on. Also, there is only one provisioned throughput.
    • Global Secondary Indexes define a new paradigm – different hash/range keys per index.
      This breaks the original usage of one hash key per table, which is also why, when defining a GSI, you are required to add provisioned throughput per index and pay for it.
    • Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
    • Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.

Throughput :

  • Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there are two write operations: one for the table and another for the index. Both consume write capacity units from the table.
  • Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation consumes read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there are two write operations: one for the table and another for the index.
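The GSI from answer C could be queried with the low-level DynamoDB `Query` API. This sketch only builds the request parameters; the table and index names are hypothetical, while the `KeyConditionExpression` syntax is the real DynamoDB expression format:

```python
def build_gsi_query(customer_id, start_date, end_date):
    """Query parameters for a GSI with partition key CustomerID and
    sort key PurchaseDate, projecting TotalPurchaseValue."""
    return {
        "TableName": "PurchaseOrders",                 # hypothetical name
        "IndexName": "CustomerID-PurchaseDate-index",  # hypothetical name
        "KeyConditionExpression":
            "CustomerID = :cid AND PurchaseDate BETWEEN :start AND :end",
        "ExpressionAttributeValues": {
            ":cid": {"S": customer_id},
            ":start": {"S": start_date},
            ":end": {"S": end_date},
        },
        "ProjectionExpression": "TotalPurchaseValue",
    }

params = build_gsi_query("C123", "2021-01-01", "2021-12-31")
# With boto3: ddb = boto3.client("dynamodb"); ddb.query(**params)
```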


Top

The Cloud is the future: Get Certified now.
The AWS Certified Solution Architect Average Salary is: US $149,446/year. Get Certified with the App below:

AWS Developer Associate DVA-C01 Exam Prep

Q3: When referencing the remaining time left for a Lambda function to run within the function’s code you would use:

  • A. The event object
  • B. The timeLeft object
  • C. The remains object
  • D. The context object


D. The context object.

Reference: AWS Lambda
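In Python, the context object exposes `get_remaining_time_in_millis()`, which is a real method of the Lambda context. A minimal sketch, with a local test double standing in for the context Lambda would pass:

```python
def handler(event, context):
    """Abort early when less than one second of execution time remains."""
    remaining_ms = context.get_remaining_time_in_millis()
    if remaining_ms < 1000:
        return {"status": "aborted", "remaining_ms": remaining_ms}
    return {"status": "ok", "remaining_ms": remaining_ms}

class FakeContext:
    """Hypothetical stand-in for the real Lambda context, for local testing."""
    def __init__(self, ms):
        self._ms = ms
    def get_remaining_time_in_millis(self):
        return self._ms

print(handler({}, FakeContext(500)))  # {'status': 'aborted', 'remaining_ms': 500}
```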


Top

Q4: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context
D. event, context

def handler_name(event, context):
    return some_value

Reference: AWS Lambda Function Handler in Python

Top

Q5: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda Deployment Package
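Either way, what you upload is a zip archive. A sketch of building one in memory; the file name `lambda_function.py` follows the default Python handler convention, and the commented `Code` parameter shapes are the real `create_function` options for direct upload versus S3:

```python
import io
import zipfile

def build_package(handler_source, extra_files=None):
    """Zip the function code (plus any libraries not in the runtime)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("lambda_function.py", handler_source)
        for name, data in (extra_files or {}).items():
            zf.writestr(name, data)
    return buf.getvalue()

pkg = build_package("def lambda_handler(event, context):\n    return 'ok'\n")
# Direct upload: lambda_client.create_function(..., Code={"ZipFile": pkg})
# From S3:       lambda_client.create_function(..., Code={"S3Bucket": b, "S3Key": k})
```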

Top

Q6: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime

C. Function code and libraries not included within the runtime


Reference: AWS Lambda Deployment Package in PowerShell

Top

Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?

  • A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
  • B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
  • C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
  • D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.


D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return requests. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return requests.

Reference: Comparison of Security Groups and Network ACLs

AWS Security Groups and NACL


Top

Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.

Reference: AWS Network Address Translation Gateway


Top

Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.

Reference: AWS Auto Scaling


Top


Q11: You’re writing a script with an AWS SDK that uses AWS API actions to create AMIs for non-EBS-backed instances. Which API call occurs in the final step of creating an AMI?

  • A. RegisterImage
  • B. CreateImage
  • C. ami-register-image
  • D. ami-create-image

A. RegisterImage. AWS API actions follow this PascalCase capitalization and do not contain hyphens.

Reference: API RegisterImage

Top

Q12: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?

  • A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
  • B. Permanently assigning users to specific instances and always routing their traffic to those instances
  • C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
  • D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance

Top

Q13: Which API call would best be used to describe an Amazon Machine Image?

  • A. ami-describe-image
  • B. ami-describe-images
  • C. DescribeImage
  • D. DescribeImages

D. In general, API actions stick to the PascalCase style with the first letter of every word capitalized.

Reference: API DescribeImages

Top

Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

  • A. Autoscaling requires using Amazon EBS-backed instances
  • B. Virtual Private Cloud requires EBS backed instances
  • C. Amazon EBS-backed instances can be stopped and restarted without losing data
  • D. Instance-store backed instances can be stopped and restarted without losing data


C. Instance-store backed images use “ephemeral” (temporary) storage, which is only available during the life of an instance. Rebooting an instance allows ephemeral data to persist, but stopping and starting an instance removes all ephemeral storage.

Reference: What is the difference between EBS and Instance Store?

Top

Q15: After having created a new Linux instance on Amazon EC2 and downloaded the key file (my_key.pem), you try to SSH into your instance’s IP address (52.2.222.22) using the following command.
ssh -i my_key.pem ec2-user@52.2.222.22
However, you receive the following error.
@@@@@@@@@@@@@@@ WARNING: UNPROTECTED PRIVATE KEY FILE! @@@@@@@@@@@@@@@
What is the most probable reason for this and how can you fix it?

  • A. You do not have root access on your terminal and need to use the sudo option for this to work.
  • B. You do not have enough permissions to perform the operation.
  • C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
  • D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.

D. You need to run something like: chmod 400 my_key.pem
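The same fix can be sketched in Python for cross-checking: `0o400` is owner-read-only (`stat.S_IRUSR`), exactly what `chmod 400` sets. The temp file here is only a stand-in for your .pem file:

```python
import os
import stat
import tempfile

def lock_down(pem_path):
    """Make a key file readable by the owner only (equivalent of chmod 400)."""
    os.chmod(pem_path, stat.S_IRUSR)
    # Return the resulting permission bits for verification.
    return stat.S_IMODE(os.stat(pem_path).st_mode)

# Demo on a throwaway file standing in for my_key.pem:
with tempfile.NamedTemporaryFile(suffix=".pem", delete=False) as f:
    f.write(b"-----BEGIN RSA PRIVATE KEY-----\n")
    path = f.name

print(oct(lock_down(path)))  # 0o400
os.unlink(path)
```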

Reference:

Top

Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?

  • A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
  • B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
  • C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
  • D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.

D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.

Reference: AWS Root Device Storage

Top

Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:

  • A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
  • B. Can only be used to launch EC2 instances in the same country as the AMI is stored
  • C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
  • D. Can be used to launch EC2 instances in any AWS region

C. AMIs are only available in the region they are created. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another.

Reference: https://aws.amazon.com/amazon-linux-ami/

Top

Q18: Which of the following statements is true about the Elastic File System (EFS)?

  • A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
  • B. EFS can be used by multiple EC2 instances simultaneously
  • C. EFS cannot be used by an instance using EBS
  • D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes


A. and B.

Reference: https://aws.amazon.com/efs/

Top

Q19: IAM Policies, at a minimum, contain what elements?

  • A. ID
  • B. Effects
  • C. Resources
  • D. Sid
  • E. Principle
  • F. Actions

B. C. and F.

Effect – Use Allow or Deny to indicate whether the policy allows or denies access.

Resource – Specify a list of resources to which the actions apply.

Action – Include a list of actions that the policy allows or denies.

Id and Sid aren’t required fields in IAM policies; they are optional.

Reference: AWS IAM Access Policies

Top


Q20: What are the main benefits of IAM groups?

  • A. The ability to create custom permission policies.
  • B. Assigning IAM permission policies to more than one user at a time.
  • C. Easier user/policy management.
  • D. Allowing EC2 instances to gain access to S3.

B. and C.

A. is incorrect: This is a benefit of IAM generally or a benefit of IAM policies. But IAM groups don’t create policies, they have policies attached to them.

Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

 

Top

Q21: What are benefits of using AWS STS?

  • A. Grant access to AWS resources without having to create an IAM identity for them
  • B. Since credentials are temporary, you don’t have to rotate or revoke them
  • C. Temporary security credentials can be extended indefinitely
  • D. Temporary security credentials can be restricted to a specific region

A. and B. AWS STS lets you grant trusted users temporary, limited-privilege credentials without creating an IAM identity for them, and because the credentials expire on their own, you do not have to rotate or revoke them. Temporary credentials cannot be extended indefinitely and are not restricted to a specific region.

Top


 

Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?

  • A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1.micro to m4.large.
  • B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
  • C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
  • D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.

B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence.
Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified, or removed after environment creation.
During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:

  • Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options at this level unless overridden.
  • Saved configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
  • Configuration files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle. Configuration files are executed in alphabetical order; for example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
  • Default values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.

If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or applied directly to the environment, it is stored as part of the environment’s configuration; these settings can be removed with the AWS CLI or the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.

Reference: Managing EC2 features – Elastic Beanstalk
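The precedence rules above amount to first-match-wins over an ordered list of sources. A toy sketch of that rule, not Beanstalk’s actual implementation:

```python
# Highest precedence first, mirroring the documented order.
PRECEDENCE = ["direct", "saved_configuration", "ebextensions", "default"]

def resolve(option, sources):
    """Return (source, value) from the highest-precedence source that sets it."""
    for level in PRECEDENCE:
        if option in sources.get(level, {}):
            return level, sources[level][option]
    return None, None

sources = {
    "default": {"InstanceType": "t1.micro"},
    "saved_configuration": {"InstanceType": "m4.large"},  # answer B's approach
}
print(resolve("InstanceType", sources))  # ('saved_configuration', 'm4.large')
```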

Q24: What statements are true about Availability Zones (AZs) and Regions?

  • A. There is only one AZ in each AWS Region
  • B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
  • C. AZs can be moved between AWS Regions based on your needs
  • D. There are (almost always) two or more AZs in each AWS Region


B and D.

Reference: AWS global infrastructure/

Top

Q25: An AWS Region contains:

  • A. Edge Locations
  • B. Data Centers
  • C. AWS Services
  • D. Availability Zones


B. C. D. Edge locations are actually distinct locations that don’t explicitly fall within AWS regions.

Reference: AWS Global Infrastructure


Top

Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?

  • A. Eventual Consistent Reads
  • B. Conditional reads for Consistency
  • C. Strongly Consistent Reads
  • D. Not possible


C. This is stated very clearly in the AWS documentation on read consistency for DynamoDB: only with strongly consistent reads are you guaranteed to receive the most up-to-date value, reflecting all prior successful writes.

Reference: https://aws.amazon.com/dynamodb/faqs/
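A strongly consistent read is requested per call via the real `ConsistentRead` parameter of `GetItem`. This sketch only builds the request (table and key are hypothetical):

```python
def get_item_request(table, key, consistent=True):
    """Request parameters for a strongly consistent GetItem."""
    return {
        "TableName": table,
        "Key": key,
        "ConsistentRead": consistent,  # False (the default) = eventually consistent
    }

params = get_item_request("Orders", {"OrderId": {"S": "42"}})
# With boto3: ddb = boto3.client("dynamodb"); ddb.get_item(**params)
```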


Top

Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is taken during the migration process. Which of the following steps would you consider for this requirement?

  • A. Create an Opswork stack and deploy the Docker containers
  • B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
  • C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
  • D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.


B. The Elastic Beanstalk service is the ideal service to quickly provision development environments. You can also create environments which can be used to host Docker based containers.

Reference: Create and Deploy Docker in AWS


Top

Q28: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes longer than expected to upload the object. You want to improve the performance of the application. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket



C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
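Planning the parts is simple arithmetic under S3’s real limits (minimum part size 5 MiB except for the last part, at most 10,000 parts). The 8 MiB default here mirrors boto3’s managed-transfer chunk size, which handles all of this automatically via `TransferConfig`:

```python
import math

MIN_PART = 5 * 1024 * 1024   # S3 minimum part size (except the last part)
MAX_PARTS = 10_000           # S3 maximum number of parts per upload

def plan_parts(object_size, part_size=8 * 1024 * 1024):
    """Number of parts a multipart upload of `object_size` bytes needs."""
    if part_size < MIN_PART:
        raise ValueError("part size below the S3 minimum")
    parts = math.ceil(object_size / part_size)
    if parts > MAX_PARTS:
        raise ValueError("too many parts: increase the part size")
    return parts

print(plan_parts(500 * 1024 * 1024))  # 63 parts for a 500 MB object
```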


Top

Q29: A security system monitors 600 cameras, saving image metadata every 1 minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?

  • A. 6000
  • B. 10
  • C. 3600
  • D. 600


B. When you specify the write capacity of a DynamoDB table, you specify it as the number of 1 KB writes per second. Since each write in this question happens once per minute, divide 600 by 60 to get the number of 1 KB writes per second. This gives a value of 10.

You can specify the Write capacity in the Capacity tab of the DynamoDB table.

Reference: AWS working with tables
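The arithmetic above as a one-screen check:

```python
cameras = 600
item_size_kb = 1        # a 1 KB item costs 1 write capacity unit per write
interval_seconds = 60   # one metadata sample per camera per minute

# Required write throughput = total 1 KB writes spread over each second.
writes_per_second = cameras * item_size_kb / interval_seconds
print(writes_per_second)  # 10.0 -> provision 10 WCU
```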

Top

Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?

  • A. Lazy loading
  • B. Write-through
  • C. Error retries
  • D. Exponential backoff

Answer:


Answer – A
Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. All other options are incorrect.
Reference: Caching Strategies
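The lazy-loading (cache-aside) pattern described above can be sketched with plain Python dicts standing in for the ElastiCache cluster and the RDS data store (both stand-ins are hypothetical):

```python
# Lazy loading (cache-aside) sketch: plain dicts stand in for
# the ElastiCache cluster and the RDS data store.
cache = {}
database = {"user:1": {"name": "Ada"}}

def get_record(key):
    value = cache.get(key)        # 1. ask the cache first
    if value is None:             # 2. cache miss: read from the data store
        value = database.get(key)
        if value is not None:
            cache[key] = value    # 3. populate the cache for the next request
    return value
```

The first call for a key hits the database; subsequent calls are served from the cache until the entry expires or is evicted.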

Top

Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15-60 seconds. How should the Developer efficiently query the queue for new messages?

  • A. Use long polling
  • B. Set a custom visibility timeout
  • C. Use short polling
  • D. Implement exponential backoff


Answer – A
Long polling helps ensure that the application makes fewer requests for messages over a given period of time, which is more cost effective. Since the messages only become available after 15-60 seconds and we don’t know exactly when they will arrive, it is better to use long polling.
Reference: Amazon SQS Long Polling
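As a sketch, long polling is enabled by setting WaitTimeSeconds (up to 20) on the receive call; the parameter names follow the SQS ReceiveMessage API, while the queue URL is a hypothetical placeholder:

```python
# Build the parameters for an SQS long-polling receive call.
def build_receive_params(queue_url, wait_seconds=20):
    # WaitTimeSeconds > 0 enables long polling: the call blocks until a
    # message arrives or the wait expires, cutting down empty responses.
    return {
        "QueueUrl": queue_url,
        "MaxNumberOfMessages": 10,
        "WaitTimeSeconds": wait_seconds,
    }

# With boto3 this would be used as:
# messages = boto3.client("sqs").receive_message(**build_receive_params(url))
```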

Top

Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. Assuming the new Lambda function works as expected, which of the following will shift traffic from the original Lambda function to the new Lambda function in the shortest time frame?

  • A. Canary10Percent5Minutes
  • B. Linear10PercentEvery10Minutes
  • C. Canary10Percent15Minutes
  • D. Linear10PercentEvery1Minute


Answer – A
With the Canary deployment preference type, traffic is shifted in two intervals. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first interval, and the remaining traffic is shifted after 5 minutes.
Reference: Gradual Code Deployment
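In a SAM template this is expressed through the function’s DeploymentPreference property; a minimal sketch (function name, handler, and runtime are hypothetical):

```yaml
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    Runtime: python3.9
    AutoPublishAlias: live          # required for gradual deployments
    DeploymentPreference:
      Type: Canary10Percent5Minutes # 10% first, remaining 90% after 5 minutes
```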

Top

Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources embeds an application from an Amazon S3 bucket?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::Layerversion
  • D. AWS::Serverless::Function


Answer – B
The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket.
Reference: Declaring Serverless Resources

Top

Q35: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the following is true with regard to Envelope Encryption?

  • A. Data is encrypted by encrypted Data key which is further encrypted using encrypted Master Key.
  • B. Data is encrypted by plaintext Data key which is further encrypted using encrypted Master Key.
  • C. Data is encrypted by encrypted Data key which is further encrypted using plaintext Master Key.
  • D. Data is encrypted by plaintext Data key which is further encrypted using plaintext Master Key.


Answer – D
With Envelope Encryption, unencrypted data is encrypted using a plaintext Data key. This Data key is further encrypted using a plaintext Master key. The plaintext Master key is securely stored in AWS KMS and is known as a Customer Master Key.
Reference: AWS Key Management Service Concepts
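The flow can be illustrated with a toy sketch; XOR stands in for real encryption and a local variable stands in for the KMS-held master key, so this is purely illustrative, not real cryptography:

```python
import secrets

def xor_bytes(data, key):
    # Toy cipher standing in for AES; do not use for real encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

master_key = secrets.token_bytes(32)   # in practice held inside AWS KMS
data_key = secrets.token_bytes(32)     # plaintext data key

# 1. Encrypt the data with the plaintext data key.
ciphertext = xor_bytes(b"sensitive data", data_key)
# 2. Encrypt the data key with the plaintext master key, then discard the
#    plaintext data key; store ciphertext + encrypted data key together.
encrypted_data_key = xor_bytes(data_key, master_key)

# Decryption reverses the envelope: recover the data key, then the data.
recovered_key = xor_bytes(encrypted_data_key, master_key)
assert xor_bytes(ciphertext, recovered_key) == b"sensitive data"
```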

Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q36: You are developing an application that will be comprised of the following architecture –

  1. A set of Ec2 instances to process the videos.
  2. These (Ec2 instances) will be spun up by an autoscaling group.
  3. SQS Queues to maintain the processing messages.
  4. There will be 2 pricing tiers.

How will you ensure that the premium customers’ videos are given more preference?

  • A. Create 2 Autoscaling Groups, one for normal and one for premium customers
  • B. Create 2 set of Ec2 Instances, one for normal and one for premium customers
  • C. Create 2 SQS queues, one for normal and one for premium customers
  • D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.


Answer – C
The ideal option would be to create 2 SQS queues. Messages can then be processed by the application from the high-priority queue first. The other options are not ideal, since they would lead to extra costs and extra maintenance.
Reference: SQS
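The polling logic can be sketched as follows, with Python lists standing in for the two SQS queues (names hypothetical); the worker drains the premium queue before touching the standard one:

```python
# Poll the premium queue first; only fall back to the standard queue
# when no premium messages are waiting.
def next_message(premium_queue, standard_queue):
    for queue in (premium_queue, standard_queue):
        if queue:                 # stands in for an SQS receive_message call
            return queue.pop(0)
    return None                   # both queues empty
```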

Top

Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?

  • A. CustomerID
  • B. CustomerName
  • C. Location
  • D. Age


Answer- A
Use high-cardinality attributes. These are attributes that have distinct values for each item, such as email ID, employee number, customer ID, session ID, order ID, and so on.
Use composite attributes. Try to combine more than one attribute to form a unique key.
Reference: Choosing the right DynamoDB Partition Key

Top

Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for network delays. Which two of the following mechanisms should he implement in the application?

  • A. Multiple SQS queues
  • B. Exponential backoff algorithm
  • C. Retries in your application code
  • D. Consider using the Java sdk.


Answer- B. and C.
In addition to simple retries, each AWS SDK implements an exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries after consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. These are not necessarily fixed values, and should be set based on the operation being performed, as well as other local factors, such as network latency.
Reference: Error Retries and Exponential Backoff in AWS
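A retry loop with exponential backoff and a capped delay can be sketched as follows (the jitter strategy and default caps are illustrative choices, not AWS SDK defaults):

```python
import random
import time

def retry_with_backoff(operation, max_retries=5, base_delay=0.1, max_delay=5.0):
    """Call operation(); on failure wait up to 2^attempt * base_delay
    (with full jitter), capped at max_delay, before retrying."""
    for attempt in range(max_retries):
        try:
            return operation()
        except Exception:
            if attempt == max_retries - 1:
                raise  # give up after the final retry
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * random.random())  # full jitter
```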

Top

 

Q39: An application is being developed that is going to write data to a DynamoDB table. You have to setup the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds. Each item is of size 6KB. The reads can be eventual consistent reads. What should be the read capacity that needs to be set on the table?

  • A. 10
  • B. 20
  • C. 6
  • D. 30


Answer – A

Since there are 300 items read every 30 seconds, there are (300/30) = 10 items read every second.
Since each item is 6KB in size, 2 reads (at 4KB per read capacity unit) are required for each item.
That gives a total of 2 * 10 = 20 reads per second.
Since eventually consistent reads are acceptable, we can divide the number of reads (20) by 2, giving a read capacity of 10.

Reference: Read/Write Capacity Mode
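The same arithmetic can be expressed as a small helper (a sketch of the standard read-capacity formula, not an AWS API):

```python
import math

def read_capacity_units(items_per_sec, item_size_kb, eventually_consistent=True):
    # One read capacity unit = one strongly consistent read of up to 4 KB
    # per second; eventually consistent reads need half as many units.
    units_per_item = math.ceil(item_size_kb / 4)
    rcu = items_per_sec * units_per_item
    if eventually_consistent:
        rcu /= 2
    return math.ceil(rcu)

print(read_capacity_units(300 / 30, 6))  # -> 10
```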


Top

Q40: You are in charge of deploying an application that will be hosted on an EC2 instance and sit behind an Elastic Load Balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the options below can satisfy this requirement?

  • A. Use AWS CloudTrail with your load balancer
  • B. Enable access logs on the load balancer
  • C. Use a CloudWatch Logs Agent
  • D. Create a custom metric CloudWatch filter on your load balancer


Answer – B
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.
Reference: Access Logs for Your Application Load Balancer

Top

Q41: A static website has been hosted in an S3 bucket and is now being accessed by users. The JavaScript section of one of the web pages has been changed to access data hosted in another S3 bucket. Now that web page no longer loads in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket


Answer – C

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

Cross-Origin Resource Sharing use-case scenarios. The following are example scenarios for using CORS:

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.

Reference: Cross-Origin Resource Sharing (CORS)
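A minimal sketch of the bucket CORS configuration for Scenario 1, expressed as the dictionary boto3’s put_bucket_cors expects (bucket and origin follow the example names above):

```python
# CORS rules allowing the static-website origin to make GET/PUT requests
# against the bucket's S3 API endpoint.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
            "AllowedMethods": ["GET", "PUT"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,  # how long browsers may cache the preflight
        }
    ]
}

# Applied with boto3 as:
# boto3.client("s3").put_bucket_cors(
#     Bucket="website", CORSConfiguration=cors_configuration)
```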


Top

 

Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.


Answer- C
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on the key values generated via the user ID.

Reference: The AWS Security Token Service (STS)


Top

Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.

  • A. Publish your data to CloudWatch Logs, and configure your application to Autoscale to handle the load on demand.
  • B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
  • C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
  • D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.

Answer:


Answer – C
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data.
Reference: Amazon Kinesis

Top

 

Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?

  • A. AWS Simple Storage Service
  • B. AWS DynamoDB
  • C. AWS RDS
  • D. AWS Redshift

Answer:


Answer – B
DynamoDB is an ideal solution for session management. Its low data-access latency makes it well suited as a data store for user sessions.
Reference: Scalable Session Handling in PHP Using Amazon DynamoDB

Top

Q45: Your application currently interacts with a DynamoDB table. Records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the features below should be used when developing such a solution?

  • A. AWS DynamoDB Encryption
  • B. AWS DynamoDB Streams
  • C. AWS DynamoDB Accelerator
  • D. AWS Table Accelerator


Answer – B
The DynamoDB Streams use cases and design patterns cover several common scenarios you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. Consider how to manage the following scenarios:

  • How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
  • How do you trigger an event based on a particular transaction?
  • How do you audit or archive transactions?
  • How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?

Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states: success or failure. In other words, there is no partial completion. As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions, to ensure that they are consistent and can be rolled back before commit.

You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.

AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region. All of the other options are incorrect since none of them would meet the core requirement.
Reference: DynamoDB Streams Use Cases and Design Patterns
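As a sketch, a Lambda function subscribed to the table’s stream could copy each change into the secondary table; the record layout follows the DynamoDB Streams event format, while the secondary-table write is left as a comment (a hypothetical boto3 call):

```python
# Minimal DynamoDB Streams processor: collect the new image of every
# inserted or modified item so it can be written to a secondary table.
def handler(event, context=None):
    copied = []
    for record in event.get("Records", []):
        if record.get("eventName") in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"].get("NewImage", {})
            # boto3.resource("dynamodb").Table("secondary").put_item(...)
            copied.append(new_image)
    return copied
```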


Top

 

Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?

  • A. Large Page size
  • B. Reduced page size
  • C. Parallel Scans
  • D. Sequential scans

Answer – B
When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table’s provisioned throughput.

Reduce page size: because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling.
Reference1: Rate-Limited Scans in Amazon DynamoDB

Reference2: Best Practices for Querying and Scanning Data
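The worked example in the explanation can be checked with a short helper (a sketch of the read-unit arithmetic, not an AWS API):

```python
import math

def scan_page_read_ops(item_kb, page_items, strongly_consistent=False):
    # One read operation covers 4 KB strongly consistent, or 8 KB
    # eventually consistent.
    page_kb = item_kb * page_items
    unit_kb = 4 if strongly_consistent else 8
    return math.ceil(page_kb / unit_kb)

print(scan_page_read_ops(4, 40))                            # -> 20
print(scan_page_read_ops(4, 40, strongly_consistent=True))  # -> 40
```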


Top

 

Q47: Which of the following are correct ways of passing a stage variable to an HTTP URL? (Select TWO.)

  • A. http://example.com/${}/prod
  • B. http://example.com/${stageVariables.}/prod
  • C. http://${stageVariables.}.example.com/dev/operation
  • D. http://${stageVariables}.example.com/dev/operation
  • E. http://${}.example.com/dev/operation
  • F. http://example.com/${stageVariables}/prod


Answer – B. and C.
A stage variable can be used as part of an HTTP integration URL in the following cases:

  • A full URI without protocol
  • A full domain
  • A subdomain
  • A path
  • A query string

In the above case, option B uses the stage variable as a path and option C uses it as a subdomain.
Reference: Amazon API Gateway Stage Variables Reference

Top

Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes, which they use for their on-premises server configuration, in AWS. Which of the following services would be ideal to use in this regard?

  • A. AWS Elastic Beanstalk
  • B. AWS OpsWorks
  • C. AWS Cloudformation
  • D. AWS SQS


Answer – B
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management.
Reference: AWS OpsWorks

Top

 

Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?

  • A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
  • B. Configure S3 bucket tags with your AWS access keys for the bucket hosting your website so that the application can query them for access.
  • C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
  • D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.


Answer – C
With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be used.
Reference: About Web Identity Federation

Top

Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?

  • A. Cognito Data
  • B. Cognito Events
  • C. Cognito Streams
  • D. Cognito Callbacks


Answer – C
Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All the other options are invalid, since Cognito Streams is the feature designed for this purpose.
Reference: Amazon Cognito Streams

Top

 

Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? Choose 2 answers from the options given below.

  • A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
  • B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
  • C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
  • D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function


Answer: A and C.
AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC.
Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC
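The two required pieces of information map onto the VpcConfig argument of the Lambda API; a sketch with hypothetical IDs (the boto3 call is commented out):

```python
# Subnet IDs and security group IDs are the VPC-specific settings a
# Lambda function needs to reach resources inside a private VPC.
vpc_config = {
    "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],
    "SecurityGroupIds": ["sg-0ccc3333"],
}

# boto3.client("lambda").update_function_configuration(
#     FunctionName="my-func", VpcConfig=vpc_config)
```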

Top

 

Q52: You’ve currently been tasked to migrate an existing on-premises environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant environments in the Elastic Beanstalk service that would be suitable to host your application. What should you consider doing in this case?

  • A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
  • B. Consider using Cloudformation to deploy your environment to Elastic Beanstalk
  • C. Consider using Packer to create a custom platform
  • D. Consider deploying your application using the Elastic Container Service


Answer – C
Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a Custom Image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software for which Elastic Beanstalk doesn’t provide a platform out of the box.

Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance.

To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems (Ubuntu, RHEL, or Amazon Linux; see the flavor entry in Platform.yaml File Format for the exact version numbers) and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings.
Reference: AWS Elastic Beanstalk Custom Platforms

Top

Q53: Company B is writing 10 items to a DynamoDB table every second. Each item is 15.5KB in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.

  • A. 10
  • B. 160
  • C. 155
  • D. 16


Answer – B.
Each item is 15.5KB in size. Since one write capacity unit covers a write of up to 1KB per second, each item requires 16 write capacity units (15.5KB rounded up to 16KB). Writing 10 items per second therefore requires 10 * 16 = 160 write capacity units.
Reference: Read/Write Capacity Mode
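The write-capacity arithmetic can be sketched as a small helper (one write unit per 1 KB, item size rounded up; not an AWS API):

```python
import math

def write_capacity_units(items_per_sec, item_size_kb):
    # One write capacity unit = one write of up to 1 KB per second,
    # so a 15.5 KB item consumes ceil(15.5) = 16 units.
    return items_per_sec * math.ceil(item_size_kb)

print(write_capacity_units(10, 15.5))  # -> 160
```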

Top


Q54: Which AWS Service can be used to automatically install your application code onto EC2, on premises systems and Lambda?

  • A. CodeCommit
  • B. X-Ray
  • C. CodeBuild
  • D. CodeDeploy


Answer: D

Reference: AWS CodeDeploy


Top

 

Q55: Which AWS service can be used to compile source code, run tests and package code?

  • A. CodePipeline
  • B. CodeCommit
  • C. CodeBuild
  • D. CodeDeploy


Answer: C.

Reference: AWS CodeBuild


Top

Q56: How can you prevent CloudFormation from deleting your entire stack on failure? (Choose 2)

  • A. Set the Rollback on failure radio button to No in the CloudFormation console
  • B. Set Termination Protection to Enabled in the CloudFormation console
  • C. Use the –disable-rollback flag with the AWS CLI
  • D. Use the –enable-termination-protection protection flag with the AWS CLI

Answer: A. and C.

Reference: Protecting a Stack From Being Deleted
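As a sketch, both approaches correspond to a parameter on the CreateStack API (stack name and template URL are hypothetical; the boto3 call is commented out):

```python
# DisableRollback mirrors the console's "Rollback on failure: No" setting
# and the CLI's --disable-rollback flag: a failed stack is left in place
# for debugging instead of being rolled back and deleted.
create_stack_params = {
    "StackName": "my-stack",
    "TemplateURL": "https://example.com/template.yaml",
    "DisableRollback": True,
}

# boto3.client("cloudformation").create_stack(**create_stack_params)
```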

Top

Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?

  • A. Continuous Integration
  • B. Continuous Deployment
  • C. Continuous Delivery
  • D. Continuous Development

Answer: A.
Continuous Integration is the practice of merging code changes frequently into a shared repository, where automated builds and tests identify bugs early in the release process.
Top

Q58: When deploying application code to EC2, the AppSpec file can be written in which language?

  • A. JSON
  • B. JSON or YAML
  • C. XML
  • D. YAML

Answer: D.
For EC2/On-Premises deployments, the AppSpec file must be written in YAML. (AppSpec files for Lambda deployments can be written in JSON or YAML.)
Top

 

Q59: Part of your CloudFormation deployment fails due to a misconfiguration. By default, what will happen?

  • A. CloudFormation will rollback only the failed components
  • B. CloudFormation will rollback the entire stack
  • C. Failed component will remain available for debugging purposes
  • D. CloudFormation will ask you if you want to continue with the deployment

Answer: B.
By default, CloudFormation rolls back the entire stack if any part of the deployment fails.
Top

Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?

  • A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
  • B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
  • C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
  • D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.

Answer: C

Reference: Getting Started with Amazon SNS


Top

Q61: Which AWS service can be used to centrally store and version control your application source code, binaries and libraries

  • A. CodeCommit
  • B. CodeBuild
  • C. CodePipeline
  • D. ElasticFileSystem

Answer: A

Reference: AWS CodeCommit


Top

 

Q62: You are using CloudFormation to create a new S3 bucket, which of the following sections would you use to define the properties of your bucket?

  • A. Conditions
  • B. Parameters
  • C. Outputs
  • D. Resources

Answer: D

Reference: Resources


Top

Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?

  • A. Transforms
  • B. Outputs
  • C. Resources
  • D. Instances

Answer: C.
The Resources section defines the resources you are provisioning. Outputs is used to return user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3.
Reference: Resources
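A minimal template sketch showing an S3 bucket defined under Resources, with its name exported under Outputs for use by another stack (logical and export names are hypothetical):

```yaml
Resources:
  MyBucket:
    Type: AWS::S3::Bucket   # the bucket's settings would go under Properties
Outputs:
  BucketName:
    Value: !Ref MyBucket    # returns the bucket name
    Export:
      Name: my-bucket-name  # importable by other stacks via Fn::ImportValue
```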

Top

Q64: Which AWS service can be used to fully automate your entire release process?

  • A. CodeDeploy
  • B. CodePipeline
  • C. CodeCommit
  • D. CodeBuild

Answer: B.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.

Reference: AWS CodePipeline


Top

 

Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?

  • A. Outputs
  • B. Transforms
  • C. Resources
  • D. Exports

Answer: A.
Outputs is used to output user-defined data relating to the resources you have built, and its values can also be used as input to another CloudFormation stack.
Reference: CloudFormation Outputs
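As a sketch, an Outputs section with an Export lets a second stack consume the value via Fn::ImportValue (the logical names and the export name "shared-bucket-name" here are hypothetical):

```python
import json

# Outputs section exporting a value for cross-stack use.
# "MyBucket" refers to a resource defined elsewhere in the same template.
outputs = {
    "Outputs": {
        "BucketName": {
            "Description": "Bucket name shared with other stacks",
            "Value": {"Ref": "MyBucket"},
            "Export": {"Name": "shared-bucket-name"},
        }
    }
}

# A consuming template would reference the export like this:
importer = {"Fn::ImportValue": "shared-bucket-name"}
print(json.dumps(outputs, indent=2))
```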

Top

 

Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?

  • A. Inputs
  • B. Resources
  • C. Transforms
  • D. Files

Answer: C.
Transforms is used to reference code located in S3 and also to specify the use of the AWS Serverless Application Model (SAM) for Lambda deployments.
Reference: Transforms

Top

Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?

  • A. buildspec.yml
  • B. appspec.json
  • C. appspec.yml
  • D. buildspec.json

Answer: C.
For EC2/On-Premises deployments, the AppSpec file must be a YAML-formatted file named appspec.yml, placed in the root of the application source directory.

Reference: CodeDeploy application specification (AppSpec) files

Top

 

Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?

  • A. Share the code using an EBS volume
  • B. Copy and paste the code into the template each time you need to use it
  • C. Use a CloudFormation nested stack
  • D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.

Answer: C.

Reference: Working with Nested Stacks

Top

Q69: In the CodeDeploy AppSpec file, what are hooks used for?

  • A. To reference AWS resources that will be used during the deployment
  • B. Hooks are reserved for future use
  • C. To specify files you want to copy during the deployment.
  • D. To specify scripts or functions that you want to run at set points in the deployment lifecycle

Answer: D.
The ‘hooks’ section for an EC2/On-Premises deployment contains mappings that link deployment lifecycle event hooks to one or more scripts.

Reference: AppSpec ‘hooks’ Section
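As an illustration, the hooks section of an EC2/On-Premises appspec.yml maps lifecycle events to scripts; modeled here as a Python dict with hypothetical script paths and timeouts:

```python
# The appspec.yml 'hooks' section, modeled as a Python dict for illustration.
# The script paths and timeout values are made up; the event names
# (BeforeInstall, AfterInstall, ApplicationStart, ValidateService) are
# standard CodeDeploy lifecycle events for EC2/On-Premises deployments.
hooks = {
    "BeforeInstall": [{"location": "scripts/stop_server.sh", "timeout": 300}],
    "AfterInstall": [{"location": "scripts/configure.sh", "timeout": 300}],
    "ApplicationStart": [{"location": "scripts/start_server.sh", "timeout": 300}],
    "ValidateService": [{"location": "scripts/health_check.sh", "timeout": 60}],
}

for event, scripts in hooks.items():
    for script in scripts:
        print(f"{event}: run {script['location']}")
```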

Top

 

Q70: Which command can you use to encrypt a plaintext file using a customer master key (CMK)?

  • A. aws kms-encrypt
  • B. aws iam encrypt
  • C. aws kms encrypt
  • D. aws encrypt

Answer: C.
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext fileb://ExamplePlaintextFile --output text --query CiphertextBlob > C:\Temp\ExampleEncryptedFile.base64

Reference: AWS CLI Encrypt

Top

Q72: Which of the following is an encrypted key used by KMS to encrypt your data?

  • A. Customer Managed Key
  • B. Encryption Key
  • C. Envelope Key
  • D. Customer Master Key

Answer: C.
Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption.
Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.

Reference: Envelope Encryption
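The envelope pattern can be illustrated with a toy sketch. The XOR keystream below is NOT real cryptography; it only shows the structure: the data key encrypts the data, and the master key encrypts the data key:

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy symmetric "cipher": XOR data with a SHA-256-derived keystream.
    # Illustrative only; do not use for real encryption.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

master_key = os.urandom(32)   # stands in for the CMK, which never leaves KMS
data_key = os.urandom(32)     # plaintext data key, used locally

ciphertext = keystream_xor(data_key, b"sensitive payload")
encrypted_data_key = keystream_xor(master_key, data_key)   # the "envelope"

# Decryption: recover the data key with the master key, then the data.
recovered_key = keystream_xor(master_key, encrypted_data_key)
plaintext = keystream_xor(recovered_key, ciphertext)
print(plaintext)
```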

Top

 

Q73: Which of the following statements are correct? (Choose 2)

  • A. The Customer Master Key is used to encrypt and decrypt the Envelope Key or Data Key
  • B. The Envelope Key or Data Key is used to encrypt and decrypt plain text files.
  • C. The Envelope Key or Data Key is used to encrypt and decrypt the Customer Master Key.
  • D. The Customer Master Key is used to encrypt and decrypt plain text files.

Answer: A. and B.

Reference: AWS Key Management Service Concepts

Top

Q74: Which of the following statements are correct in relation to KMS? (Choose 2)

  • A. KMS Encryption keys are regional
  • B. You cannot export your customer master key
  • C. You can export your customer master key.
  • D. KMS encryption Keys are global

Answer: A. and B.

Reference: AWS Key Management Service FAQs

Q75:  A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.)
A. Compiled application code
B. Java runtime environment
C. References to the event sources
D. Lambda execution role
E. Application dependencies


Answer: A. E.
Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your code and any dependencies.
Reference: Lambda deployment packages.
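Conceptually the deployment package is just an archive of compiled application code plus its dependencies, and nothing else (no runtime, no IAM role, no event source references). A sketch using Python's zipfile, with invented file names:

```python
import io
import zipfile

# Build a Lambda-style deployment package in memory: application code plus
# a bundled dependency. File names here are hypothetical.
package = io.BytesIO()
with zipfile.ZipFile(package, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("handler.py", "def handler(event, context):\n    return 'ok'\n")
    zf.writestr("vendored/somelib/__init__.py", "# bundled dependency\n")

# Reopen and list contents, as a sanity check.
package.seek(0)
with zipfile.ZipFile(package) as zf:
    names = zf.namelist()
print(names)
```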

Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package?
A. A launch template for the Amazon EC2 Auto Scaling group
B. A CodeDeploy AppSpec file
C. An EC2 role that grants the application access to AWS services
D. An IAM policy that grants the application access to AWS services


Answer: B.
Notes: The CodeDeploy AppSpec (application specification) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file.
Reference: CodeDeploy application specification (AppSpec) files.
Category: Deployment

Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)

A. Create a new Lambda version every time a new code release needs testing.
B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version.
C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT.
D. Create a new Lambda layer every time a new code release needs testing.
E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.


Answer: A. B.
Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes without affecting the stable production version of the code. By creating separate aliases for Production and Development, systems can invoke the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. The $LATEST version is the latest unpublished version of the function code.
Reference: Lambda function versions.

For more information about Lambda layers, see Creating and sharing Lambda layers.

For more information about Lambda function aliases, see Lambda function aliases.

Category: Deployment
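The version/alias mechanics can be sketched with a toy model (the class and method names below are invented for illustration; the real operations are Lambda's PublishVersion and UpdateAlias APIs):

```python
# Toy model: publishing freezes a numbered, immutable version of $LATEST,
# while aliases are movable pointers to versions.
class ToyLambdaFunction:
    def __init__(self):
        self.versions = {"$LATEST": "initial code"}
        self.aliases = {}
        self._next = 1

    def update_code(self, code):
        self.versions["$LATEST"] = code        # only $LATEST is mutable

    def publish_version(self):
        version = str(self._next)
        self._next += 1
        self.versions[version] = self.versions["$LATEST"]
        return version

    def point_alias(self, alias, version):
        self.aliases[alias] = version

    def resolve(self, alias):
        return self.versions[self.aliases[alias]]

fn = ToyLambdaFunction()
fn.update_code("v1 code")
stable = fn.publish_version()            # freeze for production
fn.point_alias("Production", stable)
fn.point_alias("Development", "$LATEST")

fn.update_code("experimental code")      # Production is unaffected
print(fn.resolve("Production"), "|", fn.resolve("Development"))
```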

Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.)
A. Update event source mappings with the ARN of the Lambda layer.
B. Point a Lambda alias to a new version of the Lambda function.
C. Create a Lambda alias for each published version of the Lambda function.
D. Point a Lambda alias to a new Lambda function alias.
E. Update the event source mappings with the Lambda alias ARN.


Answer: B. E.
Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version.
Reference: Lambda function aliases.
Category: Deployment

Q78:  A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements?
A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C).
B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket.
C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket.
D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.


Answer: D.
Notes: When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and removes the encryption key from memory.
Reference: Protecting data using server-side encryption with customer-provided encryption keys (SSE-C).

Category: Security

Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)

A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS).
B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS).
C. Use generated keys with the DynamoDB Encryption Client.
D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs).
E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).


Answer: A. C.
Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK.
Reference: Direct KMS Materials Provider.
Category: Deployment

Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.)
A. Create an AWS Lambda authorizer for the API.
B. Create an Amazon Cognito authorizer for the API.
C. Configure the authorizer for the API resource.
D. Configure the API methods to use the authorizer.
E. Configure the authorizer for the API stage.


Answer: B. D.
Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API.
Reference: Control access to a REST API using Amazon Cognito user pools as authorizer.
Category: Security

Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.)
A. Authenticate to the Amazon Cognito identity pool directly.
B. Authenticate to AWS Identity and Access Management (IAM) directly.
C. Authenticate to the Amazon Cognito user pool directly.
D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS).
E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.


Answer: C. E.
Notes: The Amazon Cognito user pool provides direct user authentication. The Amazon Cognito user pool also provides a federated authentication option with third-party identity providers (IdPs), including amazon.com.
Reference: Adding User Pool Sign-in Through a Third Party.
Category: Security

Question: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.)
A. Define an AWS Step Functions task for each Lambda function.
B. Define an AWS Step Functions task for each workflow.
C. Write code that polls the AWS Step Functions invocation to coordinate each workflow.
D. Define an AWS Step Functions state machine for each workflow.
E. Define an AWS Step Functions state machine for each Lambda function.
Answer: A. D.
Notes: Step Functions is based on state machines and tasks. A state machine is a workflow. Tasks perform work by coordinating with other AWS services, such as Lambda. A state machine is a workflow. It can be used to express a workflow as a number of states, their relationships, and their input and output. You can coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine, written in the Amazon States Language.
Reference: Getting Started with AWS Step Functions (https://aws.amazon.com/step-functions/getting-started/)
Category: Development
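A minimal Amazon States Language sketch of such a workflow, with one state machine per workflow and one Task state per Lambda function (the function ARNs below are placeholders):

```python
import json

# One state machine = one workflow; each Task state invokes a Lambda function.
# The account ID and function names in the ARNs are placeholders.
state_machine = {
    "Comment": "Order processing workflow",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Next": "ChargePayment",
        },
        "ChargePayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge",
            "End": True,
        },
    },
}
print(json.dumps(state_machine, indent=2))
```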

Welcome to AWS Certified Developer Associate Exam Preparation: Definition and Objectives, Top 100 Questions and Answers dump, White papers, Courses, Labs and Training Materials, Exam info and details, References, Jobs, Others AWS Certificates

AWS Developer Associate DVA-C01 Exam Prep

What is the AWS Certified Developer Associate Exam?

This AWS Certified Developer-Associate Examination is intended for individuals who perform a Developer role. It validates an examinee’s ability to:

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications by using AWS

Recommended general IT knowledge
The target candidate should have the following:
– In-depth knowledge of at least one high-level programming language
– Understanding of application lifecycle management
– The ability to write code for serverless applications
– Understanding of the use of containers in the development process

Recommended AWS knowledge
The target candidate should be able to do the following:

  • Use the AWS service APIs, CLI, and software development kits (SDKs) to write applications
  • Identify key features of AWS services
  • Understand the AWS shared responsibility model
  • Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy applications on AWS
  • Use and interact with AWS services
  • Apply basic understanding of cloud-native applications to write code
  • Write code by using AWS security best practices (for example, use IAM roles instead of secret and access keys in the code)
  • Author, maintain, and debug code modules on AWS

What is considered out of scope for the target candidate?
The following is a non-exhaustive list of related job tasks that the target candidate is not expected to be able to perform. These items are considered out of scope for the exam:
– Design architectures (for example, distributed system, microservices)
– Design and implement CI/CD pipelines

  • Administer IAM users and groups
  • Administer Amazon Elastic Container Service (Amazon ECS)
  • Design AWS networking infrastructure (for example, Amazon VPC, AWS Direct Connect)
  • Understand compliance and licensing

Exam content
Response types
There are two types of questions on the exam:
– Multiple choice: Has one correct response and three incorrect responses (distractors)
– Multiple response: Has two or more correct responses out of five or more response options
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose.
Distractors are generally plausible responses that match the content area.
Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.

Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context.
The percentage in each domain represents only scored content.

Domain 1: Deployment 22%
Domain 2: Security 26%
Domain 3: Development with AWS Services 30%
Domain 4: Refactoring 10%
Domain 5: Monitoring and Troubleshooting 12%

Domain 1: Deployment
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
–  Commit code to a repository and invoke build, test and/or deployment actions
–  Use labels and branches for version and release management
–  Use AWS CodePipeline to orchestrate workflows against different environments
–  Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS
CodeDeploy for CI/CD purposes
–  Perform a roll back plan based on application deployment policy

1.2 Deploy applications using AWS Elastic Beanstalk.
–  Utilize existing supported environments to define a new application stack
–  Package the application
–  Introduce a new application version into the Elastic Beanstalk environment
–  Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable)
–  Validate application health using Elastic Beanstalk dashboard
–  Use Amazon CloudWatch Logs to instrument application logging

1.3 Prepare the application deployment package to be deployed to AWS.
–  Manage the dependencies of the code module (like environment variables, config files and static image files) within the package
–  Outline the package/container directory structure and organize files appropriately
–  Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)

1.4 Deploy serverless applications.
–  Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template
–  Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)

Domain 2: Security
2.1 Make authenticated calls to AWS services.
–  Communicate required policy based on least privileges required by application.
–  Assume an IAM role to access a service
–  Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)

2.2 Implement encryption using AWS services.
– Encrypt data at rest (client side; server side; envelope encryption) using AWS services
–  Encrypt data in transit

2.3 Implement application authentication and authorization.
– Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools
–  Use Amazon Cognito-provided credentials to write code that access AWS services.
–  Use Amazon Cognito sync to synchronize user profiles and data
–  Use developer-authenticated identities to interact between end user devices, backend
authentication, and Amazon Cognito

Domain 3: Development with AWS Services
3.1 Write code for serverless applications.
– Compare and contrast server-based vs. serverless model (e.g., micro services, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications)
– Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler)
– Create an API endpoint using Amazon API Gateway
–  Create and test appropriate API actions like GET, POST using the API endpoint
–  Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes)
–  Compute read/write capacity units for Amazon DynamoDB based on application requirements
–  Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis)
–  Invoke an AWS Lambda function synchronously and asynchronously

3.2 Translate functional requirements into application design.
– Determine real-time vs. batch processing for a given use case
– Determine use of synchronous vs. asynchronous for a given use case
– Determine use of event vs. schedule/poll for a given use case
– Account for tradeoffs for consistency models in an application design

Domain 4: Refactoring
4.1 Optimize applications to best use AWS services and features.
– Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache)
– Apply an Amazon S3 naming scheme for optimal read performance

4.2 Migrate existing application code to run on AWS.
– Isolate dependencies
– Run the application as one or more stateless processes
– Develop in order to enable horizontal scalability
– Externalize state

Domain 5: Monitoring and Troubleshooting

5.1 Write code that can be monitored.
– Create custom Amazon CloudWatch metrics
– Perform logging in a manner available to systems operators
– Instrument application source code to enable tracing in AWS X-Ray

5.2 Perform root cause analysis on faults found in testing or production.
– Interpret the outputs from the logging mechanism in AWS to identify errors in logs
– Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues
– Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component

Which key tools, technologies, and concepts might be covered on the exam?

The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
– Analytics
– Application Integration
– Containers
– Cost and Capacity Management
– Data Movement
– Developer Tools
– Instances (virtual machines)
– Management and Governance
– Networking and Content Delivery
– Security
– Serverless

AWS services and features

Analytics:
– Amazon Elasticsearch Service (Amazon ES)
– Amazon Kinesis
Application Integration:
– Amazon EventBridge (Amazon CloudWatch Events)
– Amazon Simple Notification Service (Amazon SNS)
– Amazon Simple Queue Service (Amazon SQS)
– AWS Step Functions

Compute:
– Amazon EC2
– AWS Elastic Beanstalk
– AWS Lambda

Containers:
– Amazon Elastic Container Registry (Amazon ECR)
– Amazon Elastic Container Service (Amazon ECS)
– Amazon Elastic Kubernetes Services (Amazon EKS)

Database:
– Amazon DynamoDB
– Amazon ElastiCache
– Amazon RDS

Developer Tools:
– AWS CodeArtifact
– AWS CodeBuild
– AWS CodeCommit
– AWS CodeDeploy
– Amazon CodeGuru
– AWS CodePipeline
– AWS CodeStar
– AWS Fault Injection Simulator
– AWS X-Ray

Management and Governance:
– AWS CloudFormation
– Amazon CloudWatch

Networking and Content Delivery:
– Amazon API Gateway
– Amazon CloudFront
– Elastic Load Balancing

Security, Identity, and Compliance:
– Amazon Cognito
– AWS Identity and Access Management (IAM)
– AWS Key Management Service (AWS KMS)

Storage:
– Amazon S3

Out-of-scope AWS services and features

The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.
Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant.
Out-of-scope AWS services and features include the following:
– AWS Application Discovery Service
– Amazon AppStream 2.0
– Amazon Chime
– Amazon Connect
– AWS Database Migration Service (AWS DMS)
– AWS Device Farm
– Amazon Elastic Transcoder
– Amazon GameLift
– Amazon Lex
– Amazon Machine Learning (Amazon ML)
– AWS Managed Services
– Amazon Mobile Analytics
– Amazon Polly

– Amazon QuickSight
– Amazon Rekognition
– AWS Server Migration Service (AWS SMS)
– AWS Service Catalog
– AWS Shield Advanced
– AWS Shield Standard
– AWS Snow Family
– AWS Storage Gateway
– AWS WAF
– Amazon WorkMail
– Amazon WorkSpaces

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

Top

AWS Certified Developer – Associate Practice Questions And Answers Dump

Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your
partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost.
How can you accommodate the partners’ broken web services without wasting your resources?

  • A. Create a delay queue and set DelaySeconds to 30 seconds
  • B. Requeue the message with a VisibilityTimeout of 30 seconds.
  • C. Create a dead letter queue and set the Maximum Receives to 3.
  • D. Requeue the message with a DelaySeconds of 30 seconds.


C. After a message is taken from the queue and returned for the maximum number of retries, it is
automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.

Reference: Amazon SQS Dead-Letter Queues
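The redrive behavior can be sketched with a toy queue model (a maxReceiveCount of 3, matching option C; the dict structure is invented for illustration):

```python
from collections import deque

# Toy redrive policy: after MAX_RECEIVES failed receives, a message moves to
# the dead-letter queue instead of being retried forever.
MAX_RECEIVES = 3

main_queue = deque([{"body": "call partner endpoint", "receives": 0}])
dead_letter_queue = deque()

def receive_and_fail():
    """Simulate one failed delivery attempt (the partner endpoint is down)."""
    msg = main_queue.popleft()
    msg["receives"] += 1
    if msg["receives"] >= MAX_RECEIVES:
        dead_letter_queue.append(msg)   # kept for forensic analysis, not lost
    else:
        main_queue.append(msg)          # returned to the queue for retry

while main_queue:
    receive_and_fail()

print(len(dead_letter_queue))
```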


Top

Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently.
What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. The AWS Documentation mentions the following:

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX
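The benefit of a read-through cache for this 1000:1 read-heavy workload can be sketched in plain Python (a conceptual stand-in for DAX, not its real API):

```python
# Toy read-through cache: repeated reads of a hot key hit the in-memory
# cache, so the backing table is read only once. Data is invented.
table = {"user#1": {"name": "Ada"}}   # stand-in for a DynamoDB table
cache = {}
table_reads = 0

def get_item(key):
    global table_reads
    if key in cache:                  # cache hit: served from memory
        return cache[key]
    table_reads += 1                  # cache miss: read capacity is consumed
    item = table[key]
    cache[key] = item
    return item

for _ in range(1000):
    get_item("user#1")
print(table_reads)
```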


Top

Q2: You are creating a DynamoDB table with the following attributes:

  • PurchaseOrderNumber (partition key)
  • CustomerID
  • PurchaseDate
  • TotalPurchaseValue

One of your applications must retrieve items from the table to calculate the total value of purchases for a
particular customer over a date range. What secondary index do you need to add to the table?

  • A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute
  • C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute


C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition
key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the
TotalPurchaseValue into the index provides all the data needed to satisfy the use case.

Reference: AWS DynamoDB Global Secondary Indexes
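The access pattern this GSI enables can be simulated in plain Python (the data is invented; a real query would use the DynamoDB Query API with a key condition on the index's partition and sort keys):

```python
from datetime import date

# Simulate querying a GSI with partition key CustomerID and sort key
# PurchaseDate, with TotalPurchaseValue projected into the index.
items = [
    {"CustomerID": "C1", "PurchaseDate": date(2023, 1, 5), "TotalPurchaseValue": 100},
    {"CustomerID": "C1", "PurchaseDate": date(2023, 3, 9), "TotalPurchaseValue": 250},
    {"CustomerID": "C2", "PurchaseDate": date(2023, 2, 1), "TotalPurchaseValue": 999},
]

def query_gsi(customer_id, start, end):
    # Partition-key equality plus sort-key range, as a GSI query would do.
    return [
        i for i in items
        if i["CustomerID"] == customer_id and start <= i["PurchaseDate"] <= end
    ]

total = sum(i["TotalPurchaseValue"]
            for i in query_gsi("C1", date(2023, 1, 1), date(2023, 12, 31)))
print(total)
```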

Difference between local and global indexes in DynamoDB

    • Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
    • Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
    • Local Secondary Indexes still rely on the original hash key. When you supply a table with hash+range, think of the LSI as hash+range1, hash+range2, ..., hash+range6. You get 5 more range attributes to query on. Also, there is only one provisioned throughput.
    • Global Secondary Indexes define a new paradigm – different hash/range keys per index.
      This breaks the original usage of one hash key per table. This is also why, when defining a GSI, you are required to add a provisioned throughput per index and pay for it.
    • Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
    • Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.

Throughput:

  • Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table and another for the index. Both operations consume write capacity units from the table.
  • Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation consumes read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table and another for the index.


Top

The Cloud is the future: Get Certified now.
The AWS Certified Solution Architect Average Salary is: US $149,446/year. Get Certified with the App below:

AWS Developer Associate DVA-C01 Exam Prep
 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q3: When referencing the remaining time left for a Lambda function to run within the function’s code you would use:

  • A. The event object
  • B. The timeLeft object
  • C. The remains object
  • D. The context object


D. The context object.
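A minimal sketch of using the context object inside a handler. `get_remaining_time_in_millis()` is the real method exposed by the Lambda context; the `FakeContext` stub below is hypothetical, existing only so the handler can be exercised locally:

```python
# Inside a handler, the context object exposes the remaining time budget.
def handler(event, context):
    remaining_ms = context.get_remaining_time_in_millis()
    if remaining_ms < 1000:
        return {"status": "skipped", "reason": "not enough time left"}
    return {"status": "ok", "remaining_ms": remaining_ms}

class FakeContext:  # hypothetical stand-in for the Lambda-provided context
    def get_remaining_time_in_millis(self):
        return 30000

print(handler({}, FakeContext()))  # {'status': 'ok', 'remaining_ms': 30000}
```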

Reference: AWS Lambda


Top

Q4: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context
D. event, context

def handler_name(event, context):
    return some_value

Reference: AWS Lambda Function Handler in Python

Top

Q5: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda Deployment Package

Top

Q6: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime

C. Function code and libraries not included within the runtime

Reference: AWS Lambda Deployment Package in PowerShell

Top

Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?

  • A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
  • B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
  • C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
  • D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.


D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return traffic. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return traffic.

Reference: Comparison of Security Groups and Network ACLs

AWS Security Groups and NACL


Top

Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.

Reference: AWS Network Address Translation Gateway


Top

Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.

Reference: AWS Auto Scaling


Top

Q10: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere

D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda

Top

Q11: You’re writing a script with an AWS SDK that uses AWS API actions to create AMIs for non-EBS backed instances. Which API call occurs in the final step of creating an AMI?

  • A. RegisterImage
  • B. CreateImage
  • C. ami-register-image
  • D. ami-create-image

A. RegisterImage. AWS API actions follow this capitalization style and do not contain hyphens.

Reference: API RegisterImage

Top

Q12: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?

  • A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
  • B. Permanently assigning users to specific instances and always routing their traffic to those instances
  • C. Using application-generated cookies to tie a user session to a particular instance for the cookie duration
  • D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance

A. Offloading session state to a distributed cache such as ElastiCache keeps the EC2 instances stateless, which is the generally recommended practice; the sticky-session approaches in B, C and D tie users to individual instances and break down when an instance is replaced or scaled in.
Top

Q13: Which API call would best be used to describe an Amazon Machine Image?

  • A. ami-describe-image
  • B. ami-describe-images
  • C. DescribeImage
  • D. DescribeImages

D. In general, API actions stick to the PascalCase style with the first letter of every word capitalized.

Reference: API DescribeImages

Top

Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

  • A. Autoscaling requires using Amazon EBS-backed instances
  • B. Virtual Private Cloud requires EBS backed instances
  • C. Amazon EBS-backed instances can be stopped and restarted without losing data
  • D. Instance-store backed instances can be stopped and restarted without losing data


C. Instance-store backed images use “ephemeral” (temporary) storage, which is only available during the life of an instance. Rebooting an instance preserves ephemeral data; stopping and starting an instance, however, discards all ephemeral storage. EBS-backed instances keep their root volume data across stop/start.

Reference: What is the difference between EBS and Instance Store?

Top

Q15: After creating a new Linux instance on Amazon EC2 and downloading the .pem file (called Toto.pem), you try to SSH into your instance’s IP address (54.1.132.33) using the following command.
ssh -i Toto.pem ec2-user@54.1.132.33
However, you receive the following error.
@@@@@@@@ WARNING: UNPROTECTED PRIVATE KEY FILE! @@@@@@@@
What is the most probable reason for this and how can you fix it?

  • A. You do not have root access on your terminal and need to use the sudo option for this to work.
  • B. You do not have enough permissions to perform the operation.
  • C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
  • D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.

D. You need to run something like: chmod 400 Toto.pem
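The same permission fix can be sketched in Python (a stand-in for the shell command; the throwaway temp file here takes the place of a real Toto.pem):

```python
import os
import stat
import tempfile

# Restrict a key file to owner-read only, the equivalent of: chmod 400 Toto.pem
fd, key_path = tempfile.mkstemp(suffix=".pem")
os.close(fd)

os.chmod(key_path, 0o400)

mode = stat.S_IMODE(os.stat(key_path).st_mode)
print(oct(mode))  # 0o400 — readable by the owner only, as SSH requires
```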

Reference:

Top

Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?

  • A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
  • B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
  • C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
  • D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.

D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.

Reference: AWS Root Device Storage

Top

Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:

  • A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
  • B. Can only be used to launch EC2 instances in the same country as the AMI is stored
  • C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
  • D. Can be used to launch EC2 instances in any AWS region

C. AMIs are only available in the region they are created. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another.

Reference: https://aws.amazon.com/amazon-linux-ami/

Top

Q18: Which of the following statements is true about the Elastic File System (EFS)?

  • A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
  • B. EFS can be used by multiple EC2 instances simultaneously
  • C. EFS cannot be used by an instance using EBS
  • D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes


A. and B.

Reference: https://aws.amazon.com/efs/

Top

Q19: IAM Policies, at a minimum, contain what elements?

  • A. ID
  • B. Effects
  • C. Resources
  • D. Sid
  • E. Principle
  • F. Actions

B. C. and F.

Effect – Use Allow or Deny to indicate whether the policy allows or denies access.

Resource – Specify a list of resources to which the actions apply.

Action – Include a list of actions that the policy allows or denies.

Id and Sid are not required fields in IAM policies; they are optional.
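A minimal policy containing only the required elements can be sketched as a JSON document. The bucket ARN and action below are arbitrary examples:

```python
import json

# A minimal IAM policy document: each statement needs Effect, Action, and
# Resource (Version is customary; Id and Sid are optional).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::example-bucket/*"],  # example ARN
        }
    ],
}

doc = json.dumps(policy)
statement = json.loads(doc)["Statement"][0]
print(sorted(statement))  # ['Action', 'Effect', 'Resource']
```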

Reference: AWS IAM Access Policies

Top


Q20: What are the main benefits of IAM groups?

  • A. The ability to create custom permission policies.
  • B. Assigning IAM permission policies to more than one user at a time.
  • C. Easier user/policy management.
  • D. Allowing EC2 instances to gain access to S3.

B. and C.

A. is incorrect: This is a benefit of IAM generally or a benefit of IAM policies. But IAM groups don’t create policies, they have policies attached to them.

Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

 

Top

Q21: What are benefits of using AWS STS?

  • A. Grant access to AWS resources without having to create an IAM identity for them
  • B. Since credentials are temporary, you don’t have to rotate or revoke them
  • C. Temporary security credentials can be extended indefinitely
  • D. Temporary security credentials can be restricted to a specific region

A. and B. STS lets you grant access to AWS resources without having to create an IAM identity, and because the credentials are temporary and expire automatically, you don’t have to rotate or revoke them. (C is false: temporary credentials cannot be extended indefinitely.)
Top

Q22: What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

 

Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?

  • A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1.micro to m4.large.
  • B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
  • C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
  • D. Change the instance type value for the environment to m4.large by using the update autoscaling group CLI command.

B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence.
Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation.
During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:

  • Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options at this level unless overridden.
  • Saved configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
  • Configuration files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle. Configuration files are executed in alphabetical order; for example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
  • Default values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.

If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or directly to the environment, it is stored as part of the environment’s configuration and can be removed with the AWS CLI or the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.
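The precedence order can be modeled as layered dictionary merges, highest precedence last. The option names and values below are made up for illustration:

```python
# Toy resolver for Elastic Beanstalk option precedence (highest wins):
# direct settings > saved configuration > .ebextensions files > defaults.
defaults = {"InstanceType": "t1.micro", "MinSize": "1"}
ebextensions = {"InstanceType": "t2.small", "EnvVar": "dev"}
saved_config = {"InstanceType": "m4.large"}
direct = {"MinSize": "2"}

def resolve(*layers_low_to_high):
    merged = {}
    for layer in layers_low_to_high:  # later (higher-precedence) layers override
        merged.update(layer)
    return merged

effective = resolve(defaults, ebextensions, saved_config, direct)
print(effective["InstanceType"])  # m4.large — the saved configuration wins here
```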

Reference: Managing ec2 features – Elastic beanstalk

Q24: What statements are true about Availability Zones (AZs) and Regions?

  • A. There is only one AZ in each AWS Region
  • B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
  • C. AZs can be moved between AWS Regions based on your needs
  • D. There are (almost always) two or more AZs in each AWS Region


B and D.

Reference: AWS global infrastructure/

Top

Q25: An AWS Region contains:

  • A. Edge Locations
  • B. Data Centers
  • C. AWS Services
  • D. Availability Zones


B. C. D. Edge locations are actually distinct locations that don’t explicitly fall within AWS regions.

Reference: AWS Global Infrastructure


Top

Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?

  • A. Eventual Consistent Reads
  • B. Conditional reads for Consistency
  • C. Strongly Consistent Reads
  • D. Not possible


C. This is provided very clearly in the AWS documentation on read consistency for DynamoDB. Only with strongly consistent reads are you guaranteed to get the most recent value, reflecting all successful prior writes.

Reference: https://aws.amazon.com/dynamodb/faqs/


Top

Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is required during the migration process. Which of the following steps would you consider for this requirement?

  • A. Create an OpsWorks stack and deploy the Docker containers
  • B. Create an application and environment for the Docker containers in the Elastic Beanstalk service
  • C. Create an EC2 instance. Install Docker and deploy the necessary containers.
  • D. Create an EC2 instance. Install Docker and deploy the necessary containers. Add an Auto Scaling group for scalability of the containers.


B. The Elastic Beanstalk service is the ideal service to quickly provision development environments. You can also create environments which can be used to host Docker based containers.

Reference: Create and Deploy Docker in AWS


Top

Q28: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes a longer than expected time to upload the object. You want to improve the performance of the application. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket



C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
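A back-of-envelope sketch of how a multipart upload would split an object. The 5 MiB minimum part size (for all parts but the last) is an S3 rule; the 100 MiB part size chosen here is an arbitrary example, not a recommendation:

```python
import math

MIN_PART = 5 * 1024 * 1024  # S3 minimum size for every part except the last

def plan_parts(object_size, part_size):
    """Return (number of parts, size of the final part) for a multipart upload."""
    if part_size < MIN_PART:
        raise ValueError("part size below the S3 5 MiB minimum")
    n = math.ceil(object_size / part_size)
    last = object_size - (n - 1) * part_size
    return n, last

n, last = plan_parts(500 * 1024 * 1024, 100 * 1024 * 1024)
print(n)  # a 500 MiB object in 100 MiB parts -> 5 parts, uploadable in parallel
```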

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html


Top

Q29: A security system monitors 600 cameras, saving image metadata every minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?

  • A. 6000
  • B. 10
  • C. 3600
  • D. 600


B. The write capacity of a DynamoDB table is expressed as the number of 1 KB writes per second. Since each camera writes once per minute, divide 600 by 60 to get the number of 1 KB writes per second, which gives a value of 10.

You can specify the Write capacity in the Capacity tab of the DynamoDB table.

Reference: AWS working with tables


Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?

  • A. Lazy loading
  • B. Write-through
  • C. Error retries
  • D. Exponential backoff

Answer – A
Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. All other options are incorrect.
Reference: Caching Strategies

Top

Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15-60 seconds. How should the Developer efficiently query the queue for new messages?

  • A. Use long polling
  • B. Set a custom visibility timeout
  • C. Use short polling
  • D. Implement exponential backoff


Answer – A
Long polling helps ensure that the application makes fewer requests for messages over a given period of time, which is more cost-effective. Since the messages only become available after 15 seconds at the earliest and we don’t know exactly when they will arrive, it is better to use long polling.
Reference: Amazon SQS Long Polling

Top

Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. Assuming the new Lambda function works as expected, which of the following will shift traffic from the original Lambda function to the new Lambda function in the shortest time frame?

  • A. Canary10Percent5Minutes
  • B. Linear10PercentEvery10Minutes
  • C. Canary10Percent15Minutes
  • D. Linear10PercentEvery1Minute


Answer – A
With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment and the remaining 90 percent is shifted after 5 minutes.
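The four options can be compared with a rough timing model (Canary: all traffic moved after one interval; Linear: equal increments per interval until 100%). This is a simplification of CodeDeploy's actual schedule, for intuition only:

```python
def canary_total_minutes(interval_minutes):
    # two shifts: first increment immediately, the remainder after one interval
    return interval_minutes

def linear_total_minutes(percent_per_step, interval_minutes):
    # equal increments per interval until 100% is reached
    steps = 100 // percent_per_step
    return steps * interval_minutes

options = {
    "Canary10Percent5Minutes": canary_total_minutes(5),
    "Linear10PercentEvery10Minutes": linear_total_minutes(10, 10),
    "Canary10Percent15Minutes": canary_total_minutes(15),
    "Linear10PercentEvery1Minute": linear_total_minutes(10, 1),
}
fastest = min(options, key=options.get)
print(fastest, options[fastest])  # Canary10Percent5Minutes 5
```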
Reference: Gradual Code Deployment

Top

Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::Layerversion
  • D. AWS::Serverless::Function


Answer – B
The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket.
Reference: Declaring Serverless Resources

Top

Q35: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the followings is True with regards to Envelope Encryption?

  • A. Data is encrypted by an encrypted Data key which is further encrypted using an encrypted Master Key.
  • B. Data is encrypted by a plaintext Data key which is further encrypted using an encrypted Master Key.
  • C. Data is encrypted by an encrypted Data key which is further encrypted using a plaintext Master Key.
  • D. Data is encrypted by a plaintext Data key which is further encrypted using a plaintext Master Key.


Answer – D
With envelope encryption, unencrypted data is encrypted using a plaintext data key. That data key is then further encrypted using the plaintext master key, which is securely stored in AWS KMS and known as a Customer Master Key.
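The pattern can be illustrated with a toy example. XOR stands in for a real cipher and `os.urandom` for KMS key generation; only the envelope *structure* (data encrypted under a data key, data key wrapped under a master key) is the point here:

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real cipher; XOR is its own inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

master_key = os.urandom(16)   # in reality, lives inside KMS and never leaves it
data_key = os.urandom(16)     # generated per object

ciphertext = xor(b"sensitive data", data_key)   # encrypt data with the data key
wrapped_key = xor(data_key, master_key)         # encrypt the data key with the master key

# Decrypt: unwrap the data key first, then decrypt the data.
recovered = xor(ciphertext, xor(wrapped_key, master_key))
print(recovered)  # b'sensitive data'
```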
Reference: AWS Key Management Service Concepts

Top

 

Q36: You are developing an application that will be comprised of the following architecture:

  1. A set of EC2 instances to process the videos.
  2. These EC2 instances will be spun up by an Auto Scaling group.
  3. SQS queues to maintain the processing messages.
  4. There will be 2 pricing tiers.

How will you ensure that the premium customers’ videos are given more preference?

  • A. Create 2 Autoscaling Groups, one for normal and one for premium customers
  • B. Create 2 set of Ec2 Instances, one for normal and one for premium customers
  • C. Create 2 SQS queues, one for normal and one for premium customers
  • D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.


Answer – C
The ideal option would be to create 2 SQS queues. Messages can then be processed by the application from the high-priority queue first. The other options are not ideal; they would lead to extra costs and extra maintenance.
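The two-queue pattern can be simulated locally; the deques below stand in for the premium and standard SQS queues, and the message names are made up:

```python
from collections import deque

# Always drain the premium queue before taking anything from the standard one.
premium = deque(["p-video-1", "p-video-2"])
standard = deque(["s-video-1"])

def next_message():
    for q in (premium, standard):   # premium checked first
        if q:
            return q.popleft()
    return None

order = [next_message() for _ in range(3)]
print(order)  # ['p-video-1', 'p-video-2', 's-video-1']
```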
Reference: SQS

Top

Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?

  • A. CustomerID
  • B. CustomerName
  • C. Location
  • D. Age


Answer – A
Use high-cardinality attributes: attributes that have distinct values for each item, such as email_id, employee_no, customer_id, session_id, order_id, and so on.
You can also use composite attributes: try to combine more than one attribute to form a unique key.
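A rough illustration of why high-cardinality keys spread load: hash each candidate key value into a small number of buckets (standing in for partitions) and count how many buckets actually receive traffic. The values are synthetic:

```python
import hashlib

def buckets_used(values, num_buckets=10):
    """Count distinct buckets touched when hashing the given key values."""
    def h(v):
        return int(hashlib.sha256(v.encode()).hexdigest(), 16) % num_buckets
    return len({h(v) for v in values})

customer_ids = [f"cust-{i}" for i in range(1000)]   # high cardinality
locations = ["US", "EU", "APAC"]                    # low cardinality

# High-cardinality keys touch (almost) every bucket; low-cardinality keys
# concentrate all traffic on a handful of buckets.
print(buckets_used(customer_ids), buckets_used(locations))
```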
Reference: Choosing the right DynamoDB Partition Key

Top

Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for network delays. Which of the following two mechanisms should he implement in the application?

  • A. Multiple SQS queues
  • B. Exponential backoff algorithm
  • C. Retries in your application code
  • D. Consider using the Java SDK.


Answer – B. and C.
In addition to simple retries, each AWS SDK implements an exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values, and should be set based on the operation being performed, as well as other local factors, such as network latency.
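A minimal sketch of retries with exponential backoff and jitter, the strategy the SDKs implement internally. The `flaky()` stub is hypothetical, simulating two transient failures before success:

```python
import random
import time

def with_backoff(fn, max_retries=5, base=0.01, cap=1.0):
    """Retry fn on ConnectionError with capped, jittered exponential delays."""
    for attempt in range(max_retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise                      # retries exhausted
            delay = min(cap, base * 2 ** attempt) * random.random()  # jitter
            time.sleep(delay)

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:                     # fail twice, then succeed
        raise ConnectionError("transient network error")
    return "ok"

print(with_backoff(flaky))  # ok
```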
Reference: Error Retries and Exponential Backoff in AWS
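As a rough sketch of this idea (this is illustrative code, not the AWS SDK's implementation; the function names are hypothetical), a retry loop with capped exponential backoff and full jitter might look like:

```python
import random
import time

def backoff_delays(max_retries=5, base=0.1, cap=5.0):
    """Yield capped exponential-backoff delays with full jitter."""
    for attempt in range(max_retries):
        # The delay grows as base * 2^attempt, capped at `cap` seconds.
        delay = min(cap, base * (2 ** attempt))
        # Full jitter: wait a random fraction of the computed delay.
        yield random.uniform(0, delay)

def call_with_retries(operation, max_retries=5):
    """Retry `operation` on failure, backing off between attempts."""
    last_error = None
    for delay in backoff_delays(max_retries):
        try:
            return operation()
        except Exception as exc:  # in real code, catch only retryable errors
            last_error = exc
            time.sleep(delay)
    raise last_error
```

Note the two controls the passage calls out: a maximum delay interval (`cap`) and a maximum number of retries (`max_retries`), both tunable per operation.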

Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q39: An application is being developed that is going to write data to a DynamoDB table. You have to setup the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds. Each item is of size 6KB. The reads can be eventual consistent reads. What should be the read capacity that needs to be set on the table?

  • A. 10
  • B. 20
  • C. 6
  • D. 30


Answer – A

Since 300 items are read every 30 seconds, (300/30) = 10 items are read every second.
Since each item is 6 KB in size, and one read unit covers up to 4 KB, 2 read units are required per item.
That gives a total of 2 * 10 = 20 strongly consistent reads per second.
Since eventual consistency is sufficient, we can divide the number of reads (20) by 2, giving a read capacity of 10.

Reference: Read/Write Capacity Mode
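The arithmetic above can be checked with a small helper (the function name is illustrative, not part of any AWS SDK):

```python
import math

def read_capacity_units(items_per_second, item_size_kb, eventually_consistent=True):
    """Estimate provisioned read capacity: one strongly consistent read of up
    to 4 KB per second costs 1 unit; eventually consistent reads cost half."""
    units_per_item = math.ceil(item_size_kb / 4)          # 6 KB -> 2 units
    strongly_consistent = items_per_second * units_per_item
    return math.ceil(strongly_consistent / 2) if eventually_consistent else strongly_consistent

# 300 items / 30 s = 10 items/s, 6 KB items, eventually consistent reads:
print(read_capacity_units(10, 6))  # -> 10
```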


Top

Q40: You are in charge of deploying an application that will be hosted on an EC2 instance and sit behind an Elastic Load Balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the below options can satisfy this requirement?

  • A. Use AWS CloudTrail with your load balancer
  • B. Enable access logs on the load balancer
  • C. Use a CloudWatch Logs Agent
  • D. Create a custom metric CloudWatch filter on your load balancer


Answer – B
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.
Reference: Access Logs for Your Application Load Balancer

Top

Q41: A static website has been hosted in an S3 bucket and is now being accessed by users. The JavaScript on one of the web pages has been changed to access data hosted in another S3 bucket. Now that same web page is no longer loading in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket


Answer – C

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

Cross-Origin Resource Sharing: Use-case Scenarios. The following are example scenarios for using CORS:

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.

Reference: Cross-Origin Resource Sharing (CORS)
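As a hedged illustration of Scenario 1, the bucket's CORS configuration could look like the following. The origin is the hypothetical website endpoint from the scenario above; the boto3 call in the comment is only a pointer and is not executed here:

```python
import json

# Hypothetical website-endpoint origin from Scenario 1 above.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
            "AllowedMethods": ["GET", "PUT"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }
    ]
}

# With boto3 this could be applied roughly as (not executed here):
#   boto3.client("s3").put_bucket_cors(
#       Bucket="website", CORSConfiguration=cors_configuration)
print(json.dumps(cors_configuration, indent=2))
```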


Top

 

Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.


Answer- C
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on the key values generated via the user ID.

Reference: The AWS Security Token Service (STS)


Top

Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.

  • A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
  • B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
  • C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
  • D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.

Answer – C
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data.
Reference: Amazon Kinesis

Top

 

Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?

  • A. AWS Simple Storage Service
  • B. AWS DynamoDB
  • C. AWS RDS
  • D. AWS Redshift

Answer – B
DynamoDB is an alternative solution that can be used for the storage of session state. The latency of access to the data is low, hence it can be used as a data store for session management.
Reference: Scalable Session Handling in PHP Using Amazon DynamoDB

Top

Q45: Your application currently interacts with a DynamoDB table. Records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the below features should be used when developing such a solution?

  • A. AWS DynamoDB Encryption
  • B. AWS DynamoDB Streams
  • C. AWS DynamoDB Accelerator
  • D. AWS Table Accelerator


Answer – B
DynamoDB Streams Use Cases and Design Patterns This post describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. We will consider how to manage the following scenarios:

  • How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
  • How do you trigger an event based on a particular transaction?
  • How do you audit or archive transactions?
  • How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?

Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states: success or failure. In other words, there is no partial completion.

As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it's consistent and can be rolled back before commit.

You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.

AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region.

All of the other options are incorrect since none of these would meet the core requirement.
Reference: DynamoDB Streams Use Cases and Design Patterns


Top

 

Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and the scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?

  • A. Large Page size
  • B. Reduced page size
  • C. Parallel Scans
  • D. Sequential scans

Answer – B
When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table's provisioned throughput.

Reduce page size: Because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a "pause" between each request. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling.
Reference1: Rate-Limited Scans in Amazon DynamoDB

Reference2: Best Practices for Querying and Scanning Data
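The page-size arithmetic in that example can be verified with a short calculation (the helper name is illustrative):

```python
import math

def scan_page_read_units(page_items, item_size_kb, eventually_consistent=True):
    """Read units consumed by one Scan/Query page: usage is metered on the
    total data read, in 4 KB units, halved for eventually consistent reads."""
    units = math.ceil(page_items * item_size_kb / 4)
    return math.ceil(units / 2) if eventually_consistent else units

# 40 items of 4 KB each per page, as in the example above:
print(scan_page_read_units(40, 4))                               # -> 20
print(scan_page_read_units(40, 4, eventually_consistent=False))  # -> 40
```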


Top

 

Q47: Which of the following is the correct way of passing a stage variable to an HTTP URL? (Select TWO.)

  • A. http://example.com/${}/prod
  • B. http://example.com/${stageVariables.}/prod
  • C. http://${stageVariables.}.example.com/dev/operation
  • D. http://${stageVariables}.example.com/dev/operation
  • E. http://${}.example.com/dev/operation
  • F. http://example.com/${stageVariables}/prod


Answer – B. and C.
A stage variable can be used as part of an HTTP integration URL in the following cases:

  • A full URI without protocol
  • A full domain
  • A subdomain
  • A path
  • A query string

In the above case, options B and C use a stage variable as a path and a subdomain, respectively.
Reference: Amazon API Gateway Stage Variables Reference
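A toy illustration of how such placeholders expand (this mimics API Gateway's substitution for demonstration only; the variable names and values are hypothetical):

```python
import re

def expand_stage_variables(template, stage_variables):
    """Replace ${stageVariables.<name>} placeholders in an integration URL."""
    return re.sub(
        r"\$\{stageVariables\.(\w+)\}",
        lambda m: stage_variables[m.group(1)],
        template,
    )

variables = {"version": "v2", "subdomain": "beta"}  # hypothetical values
# A stage variable used as a path segment (as in option B):
print(expand_stage_variables(
    "http://example.com/${stageVariables.version}/prod", variables))
# A stage variable used as a subdomain (as in option C):
print(expand_stage_variables(
    "http://${stageVariables.subdomain}.example.com/dev/operation", variables))
```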

Top

Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes which they use for their on-premise configuration for servers in AWS. Which of the following service would be ideal to use in this regard?

  • A. AWS Elastic Beanstalk
  • B. AWS OpsWorks
  • C. AWS Cloudformation
  • D. AWS SQS


Answer – B
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management.
Reference: AWS OpsWorks

Top

 

Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?

  • A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
  • B. Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access.
  • C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
  • D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.


Answer – C
With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since the AWS access keys should not be used.
Reference: About Web Identity Federation

Top

Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?

  • A. Cognito Data
  • B. Cognito Events
  • C. Cognito Streams
  • D. Cognito Callbacks


Answer – C
Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams.
Reference: Amazon Cognito Streams

Top

 

Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 Instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC. Choose 2 answers from the options given below

  • A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
  • B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
  • C. Ensure that the security group IDs are mentioned when configuring the Lambda function
  • D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function


Answer: A and C.
AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC.
Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC
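A minimal sketch of the two required settings. The subnet and security group IDs are hypothetical, and the boto3 call in the comment is only a pointer, not executed here:

```python
def validate_vpc_config(vpc_config):
    """Check that both settings Lambda needs for VPC access are present."""
    required = {"SubnetIds", "SecurityGroupIds"}
    missing = required - set(vpc_config)
    if missing:
        raise ValueError(f"missing VPC settings: {sorted(missing)}")
    return True

# Hypothetical subnet and security group IDs:
vpc_config = {
    "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
    "SecurityGroupIds": ["sg-0123abcd"],
}
validate_vpc_config(vpc_config)

# With boto3, this could be attached roughly as (not executed here):
#   boto3.client("lambda").update_function_configuration(
#       FunctionName="my-function", VpcConfig=vpc_config)
```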

Top

 

Q52: You’ve currently been tasked to migrate an existing on-premise environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant environments in the beanstalk service that would be suitable to host your application. What should you consider doing in this case?

  • A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
  • B. Consider using Cloudformation to deploy your environment to Elastic Beanstalk
  • C. Consider using Packer to create a custom platform
  • D. Consider deploying your application using the Elastic Container Service


Answer – C
Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a custom image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software for which Elastic Beanstalk doesn't provide a platform out of the box.

Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform's software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance.

To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems (Ubuntu, RHEL, or Amazon Linux; see the flavor entry in Platform.yaml File Format for the exact version numbers) and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings.
Reference: AWS Elastic Beanstalk Custom Platforms

Top

Q53: Company B is writing 10 items to the DynamoDB table every second. Each item is 15.5 KB in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.

  • A. 10
  • B. 160
  • C. 155
  • D. 16


Answer – B.
Writing 10 items per second, each 15.5 KB in size, requires 10 * 16 = 160 write capacity units: one write capacity unit covers a 1 KB write per second, and the 15.5 KB item size is rounded up to 16 KB.
Reference: Read/Write Capacity Mode
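That calculation as a small helper (the function name is illustrative):

```python
import math

def write_capacity_units(items_per_second, item_size_kb):
    """One write capacity unit covers a single write of up to 1 KB per
    second; larger items round up to the next whole KB."""
    return items_per_second * math.ceil(item_size_kb)

# 10 items/s, each 15.5 KB (rounded up to 16 KB):
print(write_capacity_units(10, 15.5))  # -> 160
```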

Top

Q54: Which AWS Service can be used to automatically install your application code onto EC2, on premises systems and Lambda?

  • A. CodeCommit
  • B. X-Ray
  • C. CodeBuild
  • D. CodeDeploy


Answer: D

Reference: AWS CodeDeploy


Top

 

Q55: Which AWS service can be used to compile source code, run tests and package code?

  • A. CodePipeline
  • B. CodeCommit
  • C. CodeBuild
  • D. CodeDeploy


Answer: C.

Reference: AWS CodeBuild


Top

Q56: How can you prevent CloudFormation from deleting your entire stack on failure? (Choose 2)

  • A. Set the Rollback on failure radio button to No in the CloudFormation console
  • B. Set Termination Protection to Enabled in the CloudFormation console
  • C. Use the --disable-rollback flag with the AWS CLI
  • D. Use the --enable-termination-protection flag with the AWS CLI

Answer: A. and C.

Reference: Protecting a Stack From Being Deleted

Top

Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?

  • A. Continuous Integration
  • B. Continuous Deployment
  • C. Continuous Delivery
  • D. Continuous Development

Answer: A.
Continuous integration allows developers to merge code changes frequently into a shared repository, with automated builds and tests identifying bugs early in the release process.

Top

Q58: When deploying application code to EC2, the AppSpec file can be written in which language?

  • A. JSON
  • B. JSON or YAML
  • C. XML
  • D. YAML

Answer: D.
For EC2/On-Premises deployments, the AppSpec file must be written in YAML. (Only for Lambda deployments can it be written in JSON or YAML.)

Top

 

Q59: Part of your CloudFormation deployment fails due to a misconfiguration. By default, what will happen?

  • A. CloudFormation will rollback only the failed components
  • B. CloudFormation will rollback the entire stack
  • C. Failed component will remain available for debugging purposes
  • D. CloudFormation will ask you if you want to continue with the deployment


Answer: B.
By default, CloudFormation rolls back the entire stack if any part of the deployment fails.

Top

Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?

  • A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
  • B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
  • C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
  • D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.

Answer: C

Reference: Getting Started with Amazon SNS


Top

Q61: Which AWS service can be used to centrally store and version control your application source code, binaries and libraries

  • A. CodeCommit
  • B. CodeBuild
  • C. CodePipeline
  • D. ElasticFileSystem

Answer: A

Reference: AWS CodeCommit


Top

 

Q62: You are using CloudFormation to create a new S3 bucket, which of the following sections would you use to define the properties of your bucket?

  • A. Conditions
  • B. Parameters
  • C. Outputs
  • D. Resources

Answer: D

Reference: Resources


Top

Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?

  • A. Transforms
  • B. Outputs
  • C. Resources
  • D. Instances

Answer: C.
The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3.
Reference: Resources

Top

Q64: Which AWS service can be used to fully automate your entire release process?

  • A. CodeDeploy
  • B. CodePipeline
  • C. CodeCommit
  • D. CodeBuild

Answer: B.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates

Reference: AWS CodePipeline


Top

 

Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?

  • A. Outputs
  • B. Transforms
  • C. Resources
  • D. Exports

Answer: A.
Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack.
Reference: CloudFormation Outputs

Top

 

Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?

  • A. Inputs
  • B. Resources
  • C. Transforms
  • D. Files

Answer: C.
Transforms is used to reference code located in S3 and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments.
Reference: Transforms

Top

Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?

  • A. buildspec.yml
  • B. appspec.json
  • C. appspec.yml
  • D. buildspec.json

Answer: C.
CodeDeploy uses an appspec.yml file to specify source files and lifecycle hooks; buildspec.yml is used by CodeBuild.

Top

 

Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?

  • A. Share the code using an EBS volume
  • B. Copy and paste the code into the template each time you need to use it
  • C. Use a cloudformation nested stack
  • D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.

Answer: C.

Reference: Working with Nested Stacks

Top

Q69: In the CodeDeploy AppSpec file, what are hooks used for?

  • A. To reference AWS resources that will be used during the deployment
  • B. Hooks are reserved for future use
  • C. To specify files you want to copy during the deployment.
  • D. To specify, scripts or function that you want to run at set points in the deployment lifecycle

Answer: D.
The ‘hooks’ section for an EC2/On-Premises deployment contains mappings that link deployment lifecycle event hooks to one or more scripts.

Reference: AppSpec ‘hooks’ Section

Top

 

Q70: Which command can you use to encrypt a plain text file using CMK?

  • A. aws kms-encrypt
  • B. aws iam encrypt
  • C. aws kms encrypt
  • D. aws encrypt

Answer: C.
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext fileb://ExamplePlaintextFile --output text --query CiphertextBlob > C:\Temp\ExampleEncryptedFile.base64

Reference: AWS CLI Encrypt

Top

Q72: Which of the following is an encrypted key used by KMS to encrypt your data

  • A. Customer Managed Key
  • B. Encryption Key
  • C. Envelope Key
  • D. Customer Master Key

Answer: C.
Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption.
Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.

Reference: Envelope Encryption
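A toy model of envelope encryption, using repeating-key XOR as a stand-in cipher purely for illustration (real KMS uses strong cryptography and never exposes the master key):

```python
import os

def xor_bytes(data, key):
    """Toy cipher (repeating-key XOR) standing in for real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The master key: in KMS this never leaves the service; here it is just bytes.
master_key = os.urandom(32)

# 1. Generate a fresh data (envelope) key and encrypt the payload with it.
data_key = os.urandom(32)
ciphertext = xor_bytes(b"sensitive payload", data_key)

# 2. Encrypt the data key under the master key; store it with the ciphertext
#    and discard the plaintext data key.
encrypted_data_key = xor_bytes(data_key, master_key)

# To decrypt: recover the data key via the master key, then the payload.
recovered_key = xor_bytes(encrypted_data_key, master_key)
assert xor_bytes(ciphertext, recovered_key) == b"sensitive payload"
```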

Top

 

Q73: Which of the following statements are correct? (Choose 2)

  • A. The Customer Master Key is used to encrypt and decrypt the Envelope Key or Data Key
  • B. The Envelope Key or Data Key is used to encrypt and decrypt plain text files.
  • C. The envelope Key or Data Key is used to encrypt and decrypt the Customer Master Key.
  • D. The Customer MasterKey is used to encrypt and decrypt plain text files.

Answer: A. and B.

Reference: AWS Key Management Service Concepts

Top

 
 

Q74: Which of the following statements is correct in relation to KMS? (Choose 2)

  • A. KMS Encryption keys are regional
  • B. You cannot export your customer master key
  • C. You can export your customer master key.
  • D. KMS encryption Keys are global

Answer: A. and B.

Reference: AWS Key Management Service FAQs

Q75:  A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.)
A. Compiled application code
B. Java runtime environment
C. References to the event sources
D. Lambda execution role
E. Application dependencies


Answer: A. E.
Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your code and any dependencies.
Reference: Lambda deployment packages.

Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package?
A. A launch template for the Amazon EC2 Auto Scaling group
B. A CodeDeploy AppSpec file
C. An EC2 role that grants the application access to AWS services
D. An IAM policy that grants the application access to AWS services


Answer: B.
Notes: The CodeDeploy AppSpec (application specific) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file.
Reference: CodeDeploy application specification (AppSpec) files.
Category: Deployment

Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)

A. Create a new Lambda version every time a new code release needs testing.
B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version.
C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT.
D. Create a new Lambda layer every time a new code release needs testing.
E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.


Answer: A. B.
Notes: Lambda function versions are designed to manage deployment of functions. New code can be published as a version without affecting the stable production version. By creating separate Production and Development aliases, clients can invoke the appropriate alias as needed. A Lambda alias points to a specific function version; by updating the version an alias is linked to, the development team can promote a new version when it is ready. $LATEST always refers to the latest, unpublished version of the function code.
Reference: Lambda function versions.

For more information about Lambda layers, see Creating and sharing Lambda layers.

For more information about Lambda function aliases, see Lambda function aliases.

Category: Deployment
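The alias-to-version mapping described above can be sketched in plain Python. The ARN, function name, and version numbers below are hypothetical, and the dict stands in for Lambda's alias table (in practice you would call the Lambda API, e.g. via boto3's `create_alias`/`update_alias`).

```python
# Sketch of how Lambda aliases map stable names to published versions.
# All ARNs and version numbers here are placeholders, not real resources.

def qualified_arn(function_arn: str, qualifier: str) -> str:
    """Append a version number or alias name to an unqualified function ARN."""
    return f"{function_arn}:{qualifier}"

# Production pins a published version; Development tracks $LATEST.
aliases = {"Production": "3", "Development": "$LATEST"}

arn = "arn:aws:lambda:us-east-1:123456789012:function:orders"
prod_arn = qualified_arn(arn, aliases["Production"])
dev_arn = qualified_arn(arn, aliases["Development"])

# Promoting a release is a single alias update; callers keep the same ARN.
aliases["Production"] = "4"
```

Because callers reference the alias ARN rather than a version ARN, promoting or rolling back a release never requires touching the callers.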

Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.)
A. Update event source mappings with the ARN of the Lambda layer.
B. Point a Lambda alias to a new version of the Lambda function.
C. Create a Lambda alias for each published version of the Lambda function.
D. Point a Lambda alias to a new Lambda function alias.
E. Update the event source mappings with the Lambda alias ARN.


Answer: B. E.
Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version.
Reference: Lambda function aliases.
Category: Deployment

Q78:  A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements?
A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C).
B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket.
C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket.
D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.


Answer: D.
Notes: When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and removes the encryption key from memory.
Reference: Protecting data using server-side encryption with customer-provided encryption keys (SSE-C).

Category: Security
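With SSE-C, the client supplies the key on every request as a set of HTTP headers; S3 encrypts with it and then discards it. The sketch below builds those headers in plain Python (when using boto3, the SDK does this for you via the `SSECustomerKey` parameter, so constructing the headers by hand is only needed for raw HTTPS requests).

```python
import base64
import hashlib
import os

# 256-bit customer-provided key; S3 never stores it, so the client must
# keep it and resend it on every GET of the object.
key = os.urandom(32)

# Headers required by S3 for an SSE-C upload.
sse_c_headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
    "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
        hashlib.md5(key).digest()
    ).decode(),
}
```

The MD5 header lets S3 verify the key was not corrupted in transit; losing the key means losing access to the object.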

Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)

A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS).
B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS).
C. Use generated keys with the DynamoDB Encryption Client.
D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs).
E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).


Answer: A. C.
Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK.
Reference: Direct KMS Materials Provider.
Category: Security

Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.)
A. Create an AWS Lambda authorizer for the API.
B. Create an Amazon Cognito authorizer for the API.
C. Configure the authorizer for the API resource.
D. Configure the API methods to use the authorizer.
E. Configure the authorizer for the API stage.


Answer: B. D.
Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API.
Reference: Control access to a REST API using Amazon Cognito user pools as authorizer.
Category: Security

Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.)
A. Authenticate to the Amazon Cognito identity pool directly.
B. Authenticate to AWS Identity and Access Management (IAM) directly.
C. Authenticate to the Amazon Cognito user pool directly.
D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS).
E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.


Answer: C. E.
Notes: The Amazon Cognito user pool provides direct user authentication. The Amazon Cognito user pool provides a federated authentication option with third-party identity provider (IdP), including amazon.com.
Reference: Adding User Pool Sign-in Through a Third Party.
Category: Security

 
 

Q82: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.)
A. Define an AWS Step Functions task for each Lambda function.
B. Define an AWS Step Functions task for each workflow.
C. Write code that polls the AWS Step Functions invocation to coordinate each workflow.
D. Define an AWS Step Functions state machine for each workflow.
E. Define an AWS Step Functions state machine for each Lambda function.


Answer: A. D.
Notes: Step Functions is based on state machines and tasks. A state machine is a workflow: it expresses a number of states, their relationships, and their input and output. Tasks perform work by coordinating with other AWS services, such as Lambda. You can coordinate individual tasks with Step Functions by expressing each workflow as a finite state machine written in the Amazon States Language.
Reference: Getting Started with AWS Step Functions.

Category: Development
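A workflow of this kind is defined as a state machine in the Amazon States Language. The sketch below chains three hypothetical Lambda tasks for one order-processing workflow; the function names and ARNs are placeholders.

```json
{
  "Comment": "Hypothetical order-processing workflow; function ARNs are placeholders",
  "StartAt": "ValidateOrder",
  "States": {
    "ValidateOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
      "Next": "ChargePayment"
    },
    "ChargePayment": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-payment",
      "Next": "ShipOrder"
    },
    "ShipOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ship-order",
      "End": true
    }
  }
}
```

Each Task state invokes one Lambda function and passes its output as input to the next state, so no polling code is needed to coordinate the workflow.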

Q83: A company is migrating a web service to the AWS Cloud. The web service accepts requests by using HTTP (port 80). The company wants to use an AWS Lambda function to process HTTP requests. Which application design will satisfy these requirements?
A. Create an Amazon API Gateway API. Configure proxy integration with the Lambda function.
B. Create an Amazon API Gateway API. Configure non-proxy integration with the Lambda function.
C. Configure the Lambda function to listen to inbound network connections on port 80.
D. Configure the Lambda function as a target in the Application Load Balancer target group.


Answer: D.
Notes: Elastic Load Balancing supports Lambda functions as a target for an Application Load Balancer. You can use load balancer rules to route HTTP requests to a function based on the path or the header values; the function then processes the request and returns an HTTP response.
Reference: Using AWS Lambda with an Application Load Balancer.
Category: Development
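When an ALB invokes a Lambda target, it delivers the HTTP request as an event dict and expects an HTTP-shaped response. A minimal handler sketch (the routing logic and paths are hypothetical):

```python
import json

def lambda_handler(event, context):
    """Handle an HTTP request forwarded by an Application Load Balancer.

    The ALB target-group integration supplies the request as an event
    (path, httpMethod, headers, body) and expects statusCode/headers/body
    back. Sketch only; the /health route is a made-up example.
    """
    if event.get("path") == "/health":
        body = {"status": "ok"}
    else:
        body = {"method": event.get("httpMethod"), "path": event.get("path")}
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }

# Simulate the ALB invoking the function with a health-check request.
response = lambda_handler({"path": "/health", "httpMethod": "GET"}, None)
```

The ALB turns the returned dict directly into the HTTP response sent to the client, so no web server or open port is involved.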

Q84: A company is developing an image processing application. When an image is uploaded to an Amazon S3 bucket, a number of independent and separate services must be invoked to process the image. The services do not have to be available immediately, but they must process every image. Which application design satisfies these requirements?
A. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Each service pulls the message from the same queue.
B. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Each service subscribes to the same topic.
C. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe a separate Amazon Simple Notification Service (Amazon SNS) topic for each service to an Amazon SQS queue.
D. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe a separate Simple Queue Service (Amazon SQS) queue for each service to the Amazon SNS topic.


Answer: D.
Notes: Each service can subscribe to an individual Amazon SQS queue, which receives an event notification from the Amazon SNS topic. This is a fanout architectural implementation.
Reference: Common Amazon SNS scenarios.
Category: Development

Q85: A developer wants to implement Amazon EC2 Auto Scaling for a Multi-AZ web application. However, the developer is concerned that user sessions will be lost during scale-in events. How can the developer store the session state and share it across the EC2 instances?
A. Write the sessions to an Amazon Kinesis data stream. Configure the application to poll the stream.
B. Publish the sessions to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe each instance in the group to the topic.
C. Store the sessions in an Amazon ElastiCache for Memcached cluster. Configure the application to use the Memcached API.
D. Write the sessions to an Amazon Elastic Block Store (Amazon EBS) volume. Mount the volume to each instance in the group.


Answer: C.
Notes: ElastiCache for Memcached is a distributed in-memory data store or cache environment in the cloud. It will meet the developer’s requirement of persistent storage and is fast to access.
Reference: What is Amazon ElastiCache for Memcached?

Category: Development
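The pattern behind answer C is to externalize session state so any instance can read it. The sketch below shows that interface; a plain dict stands in for the Memcached cluster so the example is runnable anywhere, and the function names are hypothetical (in production you would use a Memcached client such as pymemcache against the ElastiCache endpoint).

```python
import json
import time

# Stand-in for an ElastiCache for Memcached cluster: a shared key-value
# store with TTL-based expiry. Every EC2 instance sees the same view.
cache = {}

def save_session(session_id: str, data: dict, ttl_seconds: int = 1800) -> None:
    """Serialize the session and store it with an expiry time."""
    cache[session_id] = (json.dumps(data), time.time() + ttl_seconds)

def load_session(session_id: str):
    """Return the session dict, or None if missing or expired."""
    entry = cache.get(session_id)
    if entry is None or entry[1] < time.time():
        return None
    return json.loads(entry[0])

save_session("sess-123", {"user": "alice", "cart": ["sku-1"]})
restored = load_session("sess-123")
```

Because no session data lives on the instance itself, a scale-in event that terminates an instance loses nothing.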

 
 
 

Q86: A developer is integrating a legacy web application that runs on a fleet of Amazon EC2 instances with an Amazon DynamoDB table. There is no AWS SDK for the programming language that was used to implement the web application. Which combination of steps should the developer perform to make an API call to Amazon DynamoDB from the instances? (Select TWO.)
A. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include an XML document that contains the request attributes.
B. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include a JSON document that contains the request attributes.
C. Sign the requests by using AWS access keys and Signature Version 4.
D. Use an EC2 SSH key to calculate Signature Version 4 of the request.
E. Provide the signature value through the HTTP X-API-Key header.


Answer: B. C.
Notes: The HTTPS-based low-level AWS API for DynamoDB uses JSON as a wire protocol format. When you send HTTP requests to AWS, you sign the requests so that AWS can identify who sent them. Requests are signed with your AWS access key, which consists of an access key ID and secret access key. AWS supports two signature versions: Signature Version 4 and Signature Version 2. AWS recommends the use of Signature Version 4.
Reference: Signing AWS API requests.
Category: Development

Q87: A developer has written several custom applications that read and write to the same Amazon DynamoDB table. Each time the data in the DynamoDB table is modified, this change should be sent to an external API. Which combination of steps should the developer perform to accomplish this task? (Select TWO.)
A. Configure an AWS Lambda function to poll the stream and call the external API.
B. Configure an event in Amazon EventBridge (Amazon CloudWatch Events) that publishes the change to an Amazon Managed Streaming for Apache Kafka (Amazon MSK) data stream.
C. Create a trigger in the DynamoDB table to publish the change to an Amazon Kinesis data stream.
D. Deliver the stream to an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the API to the topic.
E. Enable DynamoDB Streams on the table.


Answer: A. E.
Notes: If you enable DynamoDB Streams on a table, you can associate the stream's Amazon Resource Name (ARN) with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. Lambda polls the stream and invokes your function synchronously when it detects new stream records.
Reference: Tutorial: Process New Items with DynamoDB Streams and Lambda.
Category: Monitoring
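A handler for this pattern iterates over the stream records Lambda delivers and forwards each change. The event shape below matches what DynamoDB Streams sends; `post_to_api` is a hypothetical stand-in for the real HTTP call to the external API.

```python
# Sketch of a Lambda handler that forwards DynamoDB Streams records to an
# external API. post_to_api is injected so the example is testable; in
# production it would perform the actual HTTP request.
def make_handler(post_to_api):
    def handler(event, context):
        forwarded = 0
        for record in event.get("Records", []):
            change = {
                "action": record["eventName"],          # INSERT | MODIFY | REMOVE
                "keys": record["dynamodb"]["Keys"],
                "new_image": record["dynamodb"].get("NewImage"),
            }
            post_to_api(change)
            forwarded += 1
        return forwarded
    return handler

# Simulate Lambda invoking the handler with one INSERT record.
sent = []
handler = make_handler(sent.append)
count = handler(
    {"Records": [{"eventName": "INSERT",
                  "dynamodb": {"Keys": {"pk": {"S": "user#1"}},
                               "NewImage": {"pk": {"S": "user#1"}}}}]},
    None,
)
```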

 
 
 

Q88: A company is migrating the create, read, update, and delete (CRUD) functionality of an existing Java web application to AWS Lambda. Which minimal code refactoring is necessary for the CRUD operations to run in the Lambda function?
A. Implement a Lambda handler function.
B. Import an AWS X-Ray package.
C. Rewrite the application code in Python.
D. Add a reference to the Lambda execution role.


Answer: A.
Notes: Every Lambda function needs a Lambda-specific handler. Specifics of authoring vary between runtimes, but all runtimes share a common programming model that defines the interface between your code and the runtime code. You tell the runtime which method to run by defining a handler in the function configuration. The runtime runs that method. Next, the runtime passes in objects to the handler that contain the invocation event and context, such as the function name and request ID.
Reference: Getting started with Lambda.
Category: Refactoring


Q89: A company plans to use AWS log monitoring services to monitor an application that runs on premises. Currently, the application runs on a recent version of Ubuntu Server and outputs the logs to a local file. Which combination of steps should a developer perform to accomplish this goal? (Select TWO.)
A. Update the application code to include calls to the agent API for log collection.
B. Install the Amazon Elastic Container Service (Amazon ECS) container agent on the server.
C. Install the unified Amazon CloudWatch agent on the server.
D. Configure the long-term AWS credentials on the server to enable log collection by the agent.
E. Attach an IAM role to the server to enable log collection by the agent.


Answer: C. D.
Notes: The unified CloudWatch agent needs to be installed on the server. Ubuntu Server 18.04 is one of the many supported operating systems. When you install the unified CloudWatch agent on an on-premises server, you will specify a named profile that contains the credentials of the IAM user.
Reference: Collecting metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent.
Category: Monitoring
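The unified CloudWatch agent reads which files to ship from a JSON configuration file. A minimal sketch of the logs section (the file path, log group, and stream name are hypothetical):

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/myapp/app.log",
            "log_group_name": "onprem-myapp",
            "log_stream_name": "{hostname}"
          }
        ]
      }
    }
  }
}
```

On an on-premises server the agent authenticates with the named credentials profile mentioned in the notes, since there is no instance role to assume.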

Q90: A developer wants to monitor invocations of an AWS Lambda function by using Amazon CloudWatch Logs. The developer added a number of print statements to the function code that write the logging information to the stdout stream. After running the function, the developer does not see any log data being generated. Why does the log data NOT appear in the CloudWatch logs?
A. The log data is not written to the stderr stream.
B. Lambda function logging is not automatically enabled.
C. The execution role for the Lambda function did not grant permissions to write log data to CloudWatch Logs.
D. The Lambda function outputs the logs to an Amazon S3 bucket.


Answer: C.
Notes: The function needs permission to call CloudWatch Logs. Update the execution role to grant this permission; attaching the AWSLambdaBasicExecutionRole managed policy is sufficient.
Reference: Troubleshoot execution issues in Lambda.
Category: Monitoring
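The permissions the execution role needs amount to the following policy document (a sketch of what the AWSLambdaBasicExecutionRole managed policy grants):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```

With these actions allowed, anything the function writes to stdout or stderr is captured automatically in the function's log group.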

Q91: Which of the following are best practices you should implement into ongoing deployments of your application? (Select THREE.)

A. Use stage variables to manage secrets across environments
B. Create account-specific AWS SAM templates for each environment
C. Use an AutoPublish alias
D. Use traffic shifting with pre- and post-deployment hooks
E. Test throughout the pipeline


Answer: C. D. E.
Notes: Publishing through an AutoPublish alias, shifting traffic gradually with pre- and post-deployment hooks, and testing throughout the pipeline are all recommended practices for ongoing serverless deployments.
Reference: https://enoumen.com/2019/06/23/aws-solution-architect-associate-exam-prep-facts-and-summaries-questions-and-answers-dump/

Q92: You are handing off maintenance of your new serverless application to an incoming team lead. Which recommendations would you make? (Select THREE.)

A. Keep up to date with the quotas and payload sizes for each AWS service you are using

B. Analyze production access patterns to identify potential improvements

C. Design your services to extend their life as long as possible

D. Minimize changes to your production application

E. Compare the value of using the latest first-class integrations versus using Lambda between AWS services


Answer: A. B. D.

Notes: Keep up to date with the quotas and payload sizes for each AWS service you are using, analyze production access patterns to identify potential improvements, and minimize changes to your production application.

2022 AWS Certified Developer Associate Exam Preparation: Questions and Answers Dump.

Welcome to AWS Certified Developer Associate Exam Preparation:

Definition and Objectives, Top 100 Questions and Answers dump, White papers, Courses, Labs and Training Materials, Exam info and details, References, Jobs, Others AWS Certificates


What is the AWS Certified Developer Associate Exam?

This AWS Certified Developer-Associate Examination is intended for individuals who perform a Developer role. It validates an examinee’s ability to:

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications by using AWS

Recommended general IT knowledge
The target candidate should have the following:
– In-depth knowledge of at least one high-level programming language
– Understanding of application lifecycle management
– The ability to write code for serverless applications
– Understanding of the use of containers in the development process

Recommended AWS knowledge
The target candidate should be able to do the following:

  • Use the AWS service APIs, CLI, and software development kits (SDKs) to write applications
  • Identify key features of AWS services
  • Understand the AWS shared responsibility model
  • Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy applications on AWS
  • Use and interact with AWS services
  • Apply basic understanding of cloud-native applications to write code
  • Write code by using AWS security best practices (for example, use IAM roles instead of secret and access keys in the code)
  • Author, maintain, and debug code modules on AWS

What is considered out of scope for the target candidate?
The following is a non-exhaustive list of related job tasks that the target candidate is not expected to be able to perform. These items are considered out of scope for the exam:
– Design architectures (for example, distributed system, microservices)
– Design and implement CI/CD pipelines

– Administer IAM users and groups
– Administer Amazon Elastic Container Service (Amazon ECS)
– Design AWS networking infrastructure (for example, Amazon VPC, AWS Direct Connect)
– Understand compliance and licensing

Exam content
Response types
There are two types of questions on the exam:
– Multiple choice: Has one correct response and three incorrect responses (distractors)
– Multiple response: Has two or more correct responses out of five or more response options
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose.
Distractors are generally plausible responses that match the content area.
Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.

Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context.
The percentage in each domain represents only scored content.

Domain 1: Deployment 22%
Domain 2: Security 26%
Domain 3: Development with AWS Services 30%
Domain 4: Refactoring 10%
Domain 5: Monitoring and Troubleshooting 12%

Domain 1: Deployment
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
–  Commit code to a repository and invoke build, test and/or deployment actions
–  Use labels and branches for version and release management
–  Use AWS CodePipeline to orchestrate workflows against different environments
–  Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS CodeDeploy for CI/CD purposes
–  Perform a rollback based on the application deployment policy

1.2 Deploy applications using AWS Elastic Beanstalk.
–  Utilize existing supported environments to define a new application stack
–  Package the application
–  Introduce a new application version into the Elastic Beanstalk environment
–  Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable)
–  Validate application health using Elastic Beanstalk dashboard
–  Use Amazon CloudWatch Logs to instrument application logging

1.3 Prepare the application deployment package to be deployed to AWS.
–  Manage the dependencies of the code module (like environment variables, config files and static image files) within the package
–  Outline the package/container directory structure and organize files appropriately
–  Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)

1.4 Deploy serverless applications.
–  Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template
–  Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)

Domain 2: Security
2.1 Make authenticated calls to AWS services.
–  Communicate required policy based on least privileges required by application.
–  Assume an IAM role to access a service
–  Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)

2.2 Implement encryption using AWS services.
– Encrypt data at rest (client side; server side; envelope encryption) using AWS services
–  Encrypt data in transit

2.3 Implement application authentication and authorization.
– Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools
–  Use Amazon Cognito-provided credentials to write code that accesses AWS services
–  Use Amazon Cognito sync to synchronize user profiles and data
–  Use developer-authenticated identities to interact between end user devices, backend
authentication, and Amazon Cognito

Domain 3: Development with AWS Services
3.1 Write code for serverless applications.
– Compare and contrast server-based vs. serverless model (e.g., micro services, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications)
– Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler)
– Create an API endpoint using Amazon API Gateway
–  Create and test appropriate API actions like GET, POST using the API endpoint
–  Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes)
–  Compute read/write capacity units for Amazon DynamoDB based on application requirements
–  Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis)
–  Invoke an AWS Lambda function synchronously and asynchronously
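The capacity-unit computation listed above follows from DynamoDB's provisioned-throughput rules: one RCU covers one strongly consistent read per second of an item up to 4 KB (or two eventually consistent reads), and one WCU covers one write per second of an item up to 1 KB. A sketch of the arithmetic:

```python
import math

def read_capacity_units(item_size_bytes: int, reads_per_second: int,
                        eventually_consistent: bool = False) -> int:
    """RCUs needed: ceil(item size / 4 KB) per read, halved if eventually consistent."""
    units_per_read = math.ceil(item_size_bytes / 4096)
    total = units_per_read * reads_per_second
    return math.ceil(total / 2) if eventually_consistent else total

def write_capacity_units(item_size_bytes: int, writes_per_second: int) -> int:
    """WCUs needed: ceil(item size / 1 KB) per write."""
    return math.ceil(item_size_bytes / 1024) * writes_per_second

# 6 KB items read 10 times/sec, strongly consistent: 2 units/read * 10 = 20 RCUs
rcu = read_capacity_units(6 * 1024, 10)
# 1.5 KB items written 5 times/sec: 2 units/write * 5 = 10 WCUs
wcu = write_capacity_units(1536, 5)
```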

3.2 Translate functional requirements into application design.
– Determine real-time vs. batch processing for a given use case
– Determine use of synchronous vs. asynchronous for a given use case
– Determine use of event vs. schedule/poll for a given use case
– Account for tradeoffs for consistency models in an application design

Domain 4: Refactoring
4.1 Optimize applications to best use AWS services and features.
– Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache)
– Apply an Amazon S3 naming scheme for optimal read performance

4.2 Migrate existing application code to run on AWS.
– Isolate dependencies
– Run the application as one or more stateless processes
– Develop in order to enable horizontal scalability
– Externalize state

Domain 5: Monitoring and Troubleshooting

5.1 Write code that can be monitored.
– Create custom Amazon CloudWatch metrics
– Perform logging in a manner available to systems operators
– Instrument application source code to enable tracing in AWS X-Ray

5.2 Perform root cause analysis on faults found in testing or production.
– Interpret the outputs from the logging mechanism in AWS to identify errors in logs
– Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues
– Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component

Which key tools, technologies, and concepts might be covered on the exam?

The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
– Analytics
– Application Integration
– Containers
– Cost and Capacity Management
– Data Movement
– Developer Tools
– Instances (virtual machines)
– Management and Governance
– Networking and Content Delivery
– Security
– Serverless

AWS services and features

Analytics:
– Amazon Elasticsearch Service (Amazon ES)
– Amazon Kinesis
Application Integration:
– Amazon EventBridge (Amazon CloudWatch Events)
– Amazon Simple Notification Service (Amazon SNS)
– Amazon Simple Queue Service (Amazon SQS)
– AWS Step Functions

Compute:
– Amazon EC2
– AWS Elastic Beanstalk
– AWS Lambda

Containers:
– Amazon Elastic Container Registry (Amazon ECR)
– Amazon Elastic Container Service (Amazon ECS)
– Amazon Elastic Kubernetes Services (Amazon EKS)

Database:
– Amazon DynamoDB
– Amazon ElastiCache
– Amazon RDS

Developer Tools:
– AWS CodeArtifact
– AWS CodeBuild
– AWS CodeCommit
– AWS CodeDeploy
– Amazon CodeGuru
– AWS CodePipeline
– AWS CodeStar
– AWS Fault Injection Simulator
– AWS X-Ray

Management and Governance:
– AWS CloudFormation
– Amazon CloudWatch

Networking and Content Delivery:
– Amazon API Gateway
– Amazon CloudFront
– Elastic Load Balancing

Security, Identity, and Compliance:
– Amazon Cognito
– AWS Identity and Access Management (IAM)
– AWS Key Management Service (AWS KMS)

Storage:
– Amazon S3

Out-of-scope AWS services and features

The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.
Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant.
Out-of-scope AWS services and features include the following:
– AWS Application Discovery Service
– Amazon AppStream 2.0
– Amazon Chime
– Amazon Connect
– AWS Database Migration Service (AWS DMS)
– AWS Device Farm
– Amazon Elastic Transcoder
– Amazon GameLift
– Amazon Lex
– Amazon Machine Learning (Amazon ML)
– AWS Managed Services
– Amazon Mobile Analytics
– Amazon Polly

– Amazon QuickSight
– Amazon Rekognition
– AWS Server Migration Service (AWS SMS)
– AWS Service Catalog
– AWS Shield Advanced
– AWS Shield Standard
– AWS Snow Family
– AWS Storage Gateway
– AWS WAF
– Amazon WorkMail
– Amazon WorkSpaces

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.


AWS Certified Developer – Associate Practice Questions And Answers Dump

Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your
partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost.
How can you accommodate the partners’ broken web services without wasting your resources?

  • A. Create a delay queue and set DelaySeconds to 30 seconds
  • B. Requeue the message with a VisibilityTimeout of 30 seconds.
  • C. Create a dead letter queue and set the Maximum Receives to 3.
  • D. Requeue the message with a DelaySeconds of 30 seconds.


C. After a message has been received from the queue and returned unprocessed the maximum number of times (the Maximum Receives setting), it is
automatically moved to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.

Reference: Amazon SQS Dead-Letter Queues
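Answer C comes down to a single queue attribute. A minimal sketch in Python of the redrive policy you would attach to the source queue — the DLQ ARN and queue URL are hypothetical placeholders, and the actual boto3 call is shown only as a comment:

```python
import json

# Redrive policy for the source queue: after a message has been received
# 3 times (Maximum Receives) without being deleted, SQS moves it to the
# dead-letter queue instead of returning it to the source queue again.
# The ARN below is a hypothetical placeholder.
redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:partner-commands-dlq",
    "maxReceiveCount": 3,
}

# The Attributes payload for SQS SetQueueAttributes; note the policy
# itself must be passed as a JSON string, not a nested object.
attributes = {"RedrivePolicy": json.dumps(redrive_policy)}

# With boto3 this would be applied as (requires AWS credentials):
# boto3.client("sqs").set_queue_attributes(
#     QueueUrl=source_queue_url, Attributes=attributes)
```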


Top

Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently.
What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. The AWS Documentation mentions the following:

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

Q2: You are creating a DynamoDB table with the following attributes:

  • PurchaseOrderNumber (partition key)
  • CustomerID
  • PurchaseDate
  • TotalPurchaseValue

One of your applications must retrieve items from the table to calculate the total value of purchases for a
particular customer over a date range. What secondary index do you need to add to the table?

  • A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute
  • C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute


C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition
key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the
TotalPurchaseValue into the index provides all the data needed to satisfy the use case.

Reference: AWS DynamoDB Global Secondary Indexes

Difference between local and global indexes in DynamoDB

    • Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
    • Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
    • Local Secondary Indexes still rely on the original Hash Key. When you supply a table with hash+range, think about the LSI as hash+range1, hash+range2.. hash+range6. You get 5 more range attributes to query on. Also, there is only one provisioned throughput.
    • Global Secondary Indexes define a new paradigm – different hash/range keys per index.
      This breaks the original usage of one hash key per table, which is also why, when defining a GSI, you are required to add provisioned throughput per index and pay for it.
    • Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
    • Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.

Throughput :

  • Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table and another for the index. Both operations will consume write capacity units from the table.
  • Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation consumes read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table and another for the index.
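The GSI from Q2's answer could be declared as follows. This is a sketch of the GlobalSecondaryIndexes argument to DynamoDB's CreateTable; the index name and throughput numbers are hypothetical:

```python
# Hypothetical index name; attribute names follow the question. The dict
# mirrors one entry of the GlobalSecondaryIndexes argument of CreateTable.
gsi = {
    "IndexName": "CustomerDateIndex",
    "KeySchema": [
        {"AttributeName": "CustomerID", "KeyType": "HASH"},     # partition key
        {"AttributeName": "PurchaseDate", "KeyType": "RANGE"},  # sort key
    ],
    # Project only the attribute the query needs, keeping the index small.
    "Projection": {
        "ProjectionType": "INCLUDE",
        "NonKeyAttributes": ["TotalPurchaseValue"],
    },
    # A GSI carries its own provisioned throughput, separate from the table.
    "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 1},
}
```

Queries against this index by CustomerID and a PurchaseDate range can then be satisfied entirely from the index, without touching the base table.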


Top

The Cloud is the future: Get Certified now.
The AWS Certified Solution Architect Average Salary is: US $149,446/year. Get Certified with the App below:

AWS Developer Associate DVA-C01 Exam Prep
 

Q3: When referencing the remaining time left for a Lambda function to run within the function’s code you would use:

  • A. The event object
  • B. The timeLeft object
  • C. The remains object
  • D. The context object


D. The context object.

Reference: AWS Lambda
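A minimal sketch of answer D. The stand-in context class below is needed only because the real context object is supplied by the Lambda runtime; get_remaining_time_in_millis is the method used to read the time left:

```python
class FakeContext:
    """Stand-in for the context object the Lambda runtime passes to a handler."""
    def get_remaining_time_in_millis(self):
        return 25_000  # hypothetical: 25 s left before the configured timeout

def handler(event, context):
    # The context object (answer D) exposes the remaining execution time;
    # a common pattern is to stop starting new work when time runs low.
    remaining_ms = context.get_remaining_time_in_millis()
    return {"remaining_ms": remaining_ms, "low_on_time": remaining_ms < 10_000}

result = handler({}, FakeContext())
```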


Top

Q4: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context
D. event, context

def handler_name(event, context):
    return some_value

Reference: AWS Lambda Function Handler in Python

Top

Q5: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere

D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda Deployment Package

Top

Q6: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime

C. Function code and libraries not included within the runtime

Reference: AWS Lambda Deployment Package in PowerShell

Top

Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?

  • A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
  • B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
  • C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
  • D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.


D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return traffic. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return traffic.

Reference: Comparison of Security Groups and Network ACLs

AWS Security Groups and NACL


Top

Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.

Reference: AWS Network Address Translation Gateway


Top

Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.

Reference: AWS Autoscalling


Top

Q10: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere

D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda

Top

Q11: You’re writing a script with an AWS SDK that uses AWS API actions to create AMIs for non-EBS backed instances. Which API call occurs in the final step of creating the AMI?

  • A. RegisterImage
  • B. CreateImage
  • C. ami-register-image
  • D. ami-create-image

A. It is actually RegisterImage. All AWS API actions follow this PascalCase capitalization and do not contain hyphens.

Reference: API RegisterImage

Top

Q12: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?

  • A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
  • B. Permanently assigning users to specific instances and always routing their traffic to those instances
  • C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
  • D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance

Top

Q13: Which API call would best be used to describe an Amazon Machine Image?

  • A. ami-describe-image
  • B. ami-describe-images
  • C. DescribeImage
  • D. DescribeImages

D. In general, API actions stick to the PascalCase style with the first letter of every word capitalized.

Reference: API DescribeImages

Top

Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

  • A. Autoscaling requires using Amazon EBS-backed instances
  • B. Virtual Private Cloud requires EBS backed instances
  • C. Amazon EBS-backed instances can be stopped and restarted without losing data
  • D. Instance-store backed instances can be stopped and restarted without losing data

C. Instance-store backed images use “ephemeral” (temporary) storage that is only available during the life of an instance. Rebooting an instance allows ephemeral data to persist; however, stopping and starting an instance will remove all ephemeral storage.

Reference: What is the difference between EBS and Instance Store?

Top

Q15: After having created a new Linux instance on Amazon EC2 and downloaded the .pem file (called Toto.pem), you try to SSH into your instance’s IP address (54.1.132.33) using the following command.
ssh -i Toto.pem ec2-user@54.1.132.33
However you receive the following error.
@@@@@@@@ WARNING: UNPROTECTED PRIVATE KEY FILE! @ @@@@@@@@@@@@@@@@@@@
What is the most probable reason for this and how can you fix it?

  • A. You do not have root access on your terminal and need to use the sudo option for this to work.
  • B. You do not have enough permissions to perform the operation.
  • C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
  • D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.

D. You need to restrict the key file’s permissions by running something like: chmod 400 Toto.pem

Reference:

Top

Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?

  • A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
  • B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
  • C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
  • D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.

D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.

Reference: AWS Root Device Storage

Top

Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:

  • A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
  • B. Can only be used to launch EC2 instances in the same country as the AMI is stored
  • C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
  • D. Can be used to launch EC2 instances in any AWS region

C. AMIs are only available in the region they are created in. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another.

Reference: https://aws.amazon.com/amazon-linux-ami/

Top

Q18: Which of the following statements is true about the Elastic File System (EFS)?

  • A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
  • B. EFS can be used by multiple EC2 instances simultaneously
  • C. EFS cannot be used by an instance using EBS
  • D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes

A. and B.

Reference: https://aws.amazon.com/efs/

Top

Q19: IAM Policies, at a minimum, contain what elements?

  • A. ID
  • B. Effects
  • C. Resources
  • D. Sid
  • E. Principal
  • F. Actions

B. C. and F.

Effect – Use Allow or Deny to indicate whether the policy allows or denies access.

Resource – Specify a list of resources to which the actions apply.

Action – Include a list of actions that the policy allows or denies.

Id and Sid are optional fields; they are not required in IAM policies.

Reference: AWS IAM Access Policies
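A minimal policy containing just the required statement elements (Effect, Action, Resource) might look like this; the bucket ARN is a hypothetical placeholder:

```python
import json

# A minimal identity-based policy: Effect, Action, and Resource are the
# required elements of each statement; Sid and Id are optional.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::example-bucket/*"],  # hypothetical bucket
        }
    ],
}
policy_json = json.dumps(policy, indent=2)
```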

Top

Q20: What are the main benefits of IAM groups?

  • A. The ability to create custom permission policies.
  • B. Assigning IAM permission policies to more than one user at a time.
  • C. Easier user/policy management.
  • D. Allowing EC2 instances to gain access to S3.

B. and C.

A. is incorrect: This is a benefit of IAM generally or a benefit of IAM policies. But IAM groups don’t create policies, they have policies attached to them.

Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

 

Top

Q21: What are benefits of using AWS STS?

  • A. Grant access to AWS resources without having to create an IAM identity for them
  • B. Since credentials are temporary, you don’t have to rotate or revoke them
  • C. Temporary security credentials can be extended indefinitely
  • D. Temporary security credentials can be restricted to a specific region

Top

Q22: What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

 

Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1 micro instance. How can the Developer change the EC2 instance type to m4.large?

  • A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1 micro to m4.large.
  • B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
  • C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
  • D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.

B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence.
Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation.
During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:

  • Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options at this level unless overridden.
  • Saved configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
  • Configuration files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle. Configuration files are executed in alphabetical order; for example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
  • Default values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.

If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or directly to the environment, it is stored as part of the environment’s configuration and can be removed with the AWS CLI or the EB CLI.
Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version.
If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.

Reference: Managing ec2 features – Elastic beanstalk
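A saved configuration that pins the instance type, as answer B describes, could look roughly like this. This is only a sketch of the option-settings portion, using the Auto Scaling launch configuration namespace; the surrounding saved-configuration metadata is omitted:

```yaml
# Sketch of a saved configuration fragment that pins the instance type;
# real saved configurations include additional metadata around this block.
OptionSettings:
  aws:autoscaling:launchconfiguration:
    InstanceType: m4.large
```

Uploading this to the saved-configurations location in S3 and selecting it during environment creation applies m4.large without touching the console's configuration pages.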

Q24: What statements are true about Availability Zones (AZs) and Regions?

  • A. There is only one AZ in each AWS Region
  • B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
  • C. AZs can be moved between AWS Regions based on your needs
  • D. There are (almost always) two or more AZs in each AWS Region

B and D.

Reference: AWS global infrastructure/

Top

Q25: An AWS Region contains:

  • A. Edge Locations
  • B. Data Centers
  • C. AWS Services
  • D. Availability Zones


B. C. D. Edge locations are actually distinct locations that don’t explicitly fall within AWS regions.

Reference: AWS Global Infrastructure


Top

Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?

  • A. Eventual Consistent Reads
  • B. Conditional reads for Consistency
  • C. Strongly Consistent Reads
  • D. Not possible


C. This is stated very clearly in the AWS documentation on read consistency for DynamoDB. Only with strongly consistent reads are you guaranteed to get the most up-to-date value, reflecting all prior successful writes.

Reference: https://aws.amazon.com/dynamodb/faqs/
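The choice in answer C maps to a single request parameter on GetItem. A sketch of the request parameters, with hypothetical table and key names; the boto3 call itself is shown only as a comment:

```python
# GetItem request parameters (table and key names are hypothetical);
# ConsistentRead=False (the default) would be an eventually consistent
# read, which costs half the read capacity of a strongly consistent one.
params = {
    "TableName": "Orders",
    "Key": {"PurchaseOrderNumber": {"S": "PO-1001"}},
    "ConsistentRead": True,
}

# With boto3 (requires AWS credentials):
# item = boto3.client("dynamodb").get_item(**params)
```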


Top

Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is required during the migration process. Which of the following steps would you consider for this requirement?

  • A. Create an Opswork stack and deploy the Docker containers
  • B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
  • C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
  • D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.


B. The Elastic Beanstalk service is the ideal service to quickly provision development environments. You can also create environments which can be used to host Docker based containers.

Reference: Create and Deploy Docker in AWS


Top

Q28: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes a longer than expected time to upload the object. You want to improve the performance of the application. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket


C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
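A rough sketch of the three-step multipart flow, with the part-count arithmetic made explicit; the object size, part size, and the bucket/key in the comments are hypothetical:

```python
import math

# The three multipart steps (with boto3, using a hypothetical bucket/key):
#   1. boto3.client("s3").create_multipart_upload(Bucket=..., Key=...)
#   2. upload_part(...) once per part, in parallel if desired
#   3. complete_multipart_upload(...) with the collected part ETags
object_size_mb = 400   # within the 200-500 MB range from the question
part_size_mb = 100     # every part except the last must be at least 5 MB
part_count = math.ceil(object_size_mb / part_size_mb)
```

Uploading the parts in parallel is what recovers the lost throughput; a failed part can be retried individually instead of restarting the whole upload.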


Top

Q29: A security system monitors 600 cameras, saving image metadata every minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?

  • A. 6000
  • B. 10
  • C. 3600
  • D. 600

B. Write capacity for a DynamoDB table is specified as the number of 1 KB writes per second. Since each camera writes once per minute, divide the 600 writes by 60 to get the number of 1 KB writes per second, which gives a value of 10.

You can specify the Write capacity in the Capacity tab of the DynamoDB table.

Reference: AWS working with tables
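The capacity arithmetic above can be written out in a few lines; the numbers come straight from the question:

```python
import math

cameras = 600        # one write per camera per minute
item_size_kb = 1     # each write is at most 1 KB, i.e. one write capacity unit
writes_per_second = cameras / 60                  # evenly distributed over a minute
wcu = math.ceil(writes_per_second * item_size_kb)  # required write capacity units
```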


Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?

  • A. Lazy loading
  • B. Write-through
  • C. Error retries
  • D. Exponential backoff

Answer:


Answer – A
Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. All other options are incorrect.
Reference: Caching Strategies
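The lazy-loading (cache-aside) strategy in answer A can be sketched in a few lines; plain dicts stand in here for ElastiCache and RDS, where real code would use a Redis/Memcached client and SQL queries:

```python
# Cache-aside (lazy loading) sketch: `cache` stands in for ElastiCache
# and `database` for RDS.
cache = {}
database = {"user:1": "alice"}

def get(key):
    if key in cache:             # cache hit: no database call
        return cache[key]
    value = database.get(key)    # cache miss: read from the data store
    cache[key] = value           # populate the cache for the next reader
    return value
```

Only data that is actually requested ever enters the cache, which is the defining property of lazy loading (versus write-through, where every write updates the cache).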

Top

Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15-60 seconds. How should the Developer efficiently query the queue for new messages?

  • A. Use long polling
  • B. Set a custom visibility timeout
  • C. Use short polling
  • D. Implement exponential backoff


Answer – A. Long polling helps ensure that the application makes fewer requests for messages over a given period, which is more cost effective. Since the messages will only be available after 15 seconds and we don’t know exactly when they will arrive, it is better to use long polling.
Reference: Amazon SQS Long Polling
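Enabling long polling is a single ReceiveMessage parameter. A sketch of the request, with a hypothetical queue URL; the boto3 call is shown only as a comment:

```python
# Long polling is enabled by a non-zero WaitTimeSeconds (up to 20);
# ReceiveMessage then waits for a message to arrive instead of returning
# immediately. The queue URL is a hypothetical placeholder.
params = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/jobs",
    "MaxNumberOfMessages": 10,
    "WaitTimeSeconds": 20,  # 0 would mean short polling
}

# With boto3 (requires AWS credentials):
# messages = boto3.client("sqs").receive_message(**params)
```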

Top

Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With the new Lambda function working as expected, which of the following will shift traffic from the original Lambda function to the new Lambda function in the shortest time frame?

  • A. Canary10Percent5Minutes
  • B. Linear10PercentEvery10Minutes
  • C. Canary10Percent15Minutes
  • D. Linear10PercentEvery1Minute


Answer – A
With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment and the remaining 90 percent is shifted after 5 minutes.
Reference: Gradual Code Deployment
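For illustration, the deployment preference type is declared on the function resource in the SAM template. The fragment below is a hypothetical sketch; the function name, handler, and runtime are placeholders:

```yaml
Resources:
  MyFunction:                          # placeholder function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler             # placeholder handler
      Runtime: python3.9
      AutoPublishAlias: live           # CodeDeploy shifts traffic between alias versions
      DeploymentPreference:
        Type: Canary10Percent5Minutes  # 10% first, the remaining 90% after 5 minutes
```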

Top

Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::Layerversion
  • D. AWS::Serverless::Function


Answer – B
The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket.
Reference: Declaring Serverless Resources

Top

Q35: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the following is true with regard to Envelope Encryption?

  • A. Data is encrypted by encrypted Data key which is further encrypted using encrypted Master Key.
  • B. Data is encrypted by plaintext Data key which is further encrypted using encrypted Master Key.
  • C. Data is encrypted by encrypted Data key which is further encrypted using plaintext Master Key.
  • D. Data is encrypted by plaintext Data key which is further encrypted using plaintext Master Key.


Answer – D
With Envelope Encryption, unencrypted data is encrypted using a plaintext Data key. This Data key is further encrypted using a plaintext Master key. The plaintext Master key is securely stored in AWS KMS and is known as a Customer Master Key.
Reference: AWS Key Management Service Concepts

Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q36: You are developing an application that will be comprised of the following architecture –

  1. A set of EC2 instances to process the videos.
  2. These EC2 instances will be spun up by an Auto Scaling group.
  3. SQS Queues to maintain the processing messages.
  4. There will be 2 pricing tiers.

How will you ensure that the premium customers’ videos are given more preference?

  • A. Create 2 Auto Scaling groups, one for normal and one for premium customers
  • B. Create 2 sets of EC2 instances, one for normal and one for premium customers
  • C. Create 2 SQS queues, one for normal and one for premium customers
  • D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.


Answer – C
The ideal option would be to create 2 SQS queues. Messages can then be processed by the application from the high-priority queue first. The other options are not ideal, as they would lead to extra costs and extra maintenance.
Reference: SQS

Top

Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?

  • A. CustomerID
  • B. CustomerName
  • C. Location
  • D. Age


Answer- A
Use high-cardinality attributes. These are attributes that have distinct values for each item, like email ID, employee number, customer ID, session ID, order ID, and so on.
Use composite attributes. Try to combine more than one attribute to form a unique key.
Reference: Choosing the right DynamoDB Partition Key

Top

Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for any network delays. Which of the following two mechanisms should he implement in the application?

  • A. Multiple SQS queues
  • B. Exponential backoff algorithm
  • C. Retries in your application code
  • D. Consider using the Java SDK.


Answer- B. and C.
In addition to simple retries, each AWS SDK implements exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values, and should be set based on the operation being performed, as well as other local factors, such as network latency.
Reference: Error Retries and Exponential Backoff in AWS
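The retry-with-backoff behavior described above can be sketched as a small helper. This is an illustrative sketch in the spirit of what the SDKs do internally, not the SDK's actual retry code; the delay values are arbitrary defaults.

```python
import random
import time

def call_with_backoff(operation, max_retries=5, base_delay=0.1, max_delay=5.0):
    """Retry an operation with exponential backoff and jitter: each failed
    attempt doubles the maximum wait, capped at max_delay."""
    for attempt in range(max_retries + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_retries:
                raise                                    # retries exhausted
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))         # jitter spreads retries out
```

The random jitter prevents many clients from retrying in lockstep after a shared throttling event.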

Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q39: An application is being developed that is going to write data to a DynamoDB table. You have to set up the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds. Each item is 6 KB in size. The reads can be eventually consistent reads. What should be the read capacity that needs to be set on the table?

  • A. 10
  • B. 20
  • C. 6
  • D. 30


Answer – A

Since there are 300 items read every 30 seconds, there are (300/30) = 10 items read every second.
Since each item is 6 KB in size, 2 read capacity units are required for each item (one unit covers up to 4 KB).
So we have a total of 2 × 10 = 20 reads per second.
Since eventual consistency is sufficient, we can divide the number of reads (20) by 2, which gives a read capacity of 10.

Reference: Read/Write Capacity Mode
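The calculation above generalizes to a small formula. A sketch (function name is mine, not an AWS API):

```python
import math

def eventual_read_capacity(items_per_interval, interval_seconds, item_size_kb):
    """Read capacity units for eventually consistent reads: one RCU covers
    two eventually consistent reads per second of an item up to 4 KB."""
    reads_per_second = items_per_interval / interval_seconds
    units_per_item = math.ceil(item_size_kb / 4)          # round up to 4 KB units
    strongly_consistent_units = reads_per_second * units_per_item
    return math.ceil(strongly_consistent_units / 2)       # eventual consistency halves it

# eventual_read_capacity(300, 30, 6) -> 10, matching the working above
```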


Top

Q40: You are in charge of deploying an application that will be hosted on an EC2 Instance and sit behind an Elastic Load balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the below options can suffice this requirement?

  • A. Use AWS CloudTrail with your load balancer
  • B. Enable access logs on the load balancer
  • C. Use a CloudWatch Logs Agent
  • D. Create a custom metric CloudWatch filter on your load balancer


Answer – B
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.
Reference: Access Logs for Your Application Load Balancer

Top

Q41: A static web site has been hosted on a bucket and is now being accessed by users. One of the web pages’ JavaScript sections has been changed to access data that is hosted in another S3 bucket. Now that same web page is no longer loading in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket


Answer – C

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

Cross-Origin Resource Sharing: Use-case Scenarios. The following are example scenarios for using CORS:

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.

Reference: Cross-Origin Resource Sharing (CORS)
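A CORS rule like the one in Scenario 1 can be attached with the boto3 PutBucketCors call. This is a hedged sketch: the bucket name and origin are placeholders, and s3_client would normally come from boto3.client("s3").

```python
def apply_cors(s3_client, bucket_name, allowed_origin):
    """Attach a CORS rule allowing GET/PUT requests from a single origin."""
    cors_config = {
        "CORSRules": [
            {
                "AllowedOrigins": [allowed_origin],
                "AllowedMethods": ["GET", "PUT"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,   # how long browsers may cache the preflight
            }
        ]
    }
    s3_client.put_bucket_cors(Bucket=bucket_name, CORSConfiguration=cors_config)
    return cors_config
```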


Top

 

Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.


Answer- C
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on the key values generated via the user ID.

Reference: The AWS Security Token Service (STS)


Top

Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.

  • A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
  • B. Publish your log data to an Amazon S3 bucket.  Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3
  • C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
  • D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.


Answer – C
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data.
Reference: Amazon Kinesis

Top

 

Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?

  • A. AWS Simple Storage Service
  • B. AWS DynamoDB
  • C. AWS RDS
  • D. AWS Redshift


Answer – B
DynamoDB is an alternative solution that can be used for the storage of session state. The latency of access to data is low, hence it can be used as a data store for session management.
Reference: Scalable Session Handling in PHP Using Amazon DynamoDB

Top

Q45: Your application currently interacts with a DynamoDB table. Records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the below features should be used when developing such a solution?

  • A. AWS DynamoDB Encryption
  • B. AWS DynamoDB Streams
  • C. AWS DynamoDB Accelerator
  • D. AWS Table Accelerator


Answer – B
DynamoDB Streams Use Cases and Design Patterns: This post describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. We will consider how to manage the following scenarios:

  • How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
  • How do you trigger an event based on a particular transaction?
  • How do you audit or archive transactions?
  • How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?

Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states: success or failure. In other words, there is no partial completion.

As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it’s consistent and can be rolled back before commit.

You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.

AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region. All of the other options are incorrect since none of these would meet the core requirement.
Reference: DynamoDB Streams Use Cases and Design Patterns
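A common way to consume the stream is a Lambda function that mirrors updates into the secondary table. The sketch below is illustrative: the event shape follows the DynamoDB Streams record format, and secondary_table is assumed to behave like a boto3 Table resource with a put_item method.

```python
def handle_stream_batch(event, secondary_table):
    """For each update captured by DynamoDB Streams, write the new item
    image into a secondary table. Returns the number of records mirrored."""
    written = 0
    for record in event.get("Records", []):
        if record.get("eventName") != "MODIFY":
            continue                                   # only mirror updates
        new_image = record["dynamodb"]["NewImage"]     # item state after the update
        secondary_table.put_item(Item=new_image)
        written += 1
    return written
```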


Top

 

Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and the scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?

  • A. Large Page size
  • B. Reduced page size
  • C. Parallel Scans
  • D. Sequential scans

Answer – B
When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table’s provisioned throughput.

Reduce page size: Because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling.
Reference1: Rate-Limited Scans in Amazon DynamoDB

Reference2: Best Practices for Querying and Scanning Data
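The reduced-page-size technique can be sketched with the Limit parameter and the scan pagination keys. This is a hedged sketch; table is assumed to behave like a boto3 Table resource.

```python
def scan_in_small_pages(table, page_size=40):
    """Scan a table in small pages (Limit) so each request consumes fewer
    read capacity units, leaving a pause between requests."""
    start_key = None
    while True:
        kwargs = {"Limit": page_size}
        if start_key:
            kwargs["ExclusiveStartKey"] = start_key   # resume after the last page
        page = table.scan(**kwargs)
        yield from page.get("Items", [])
        start_key = page.get("LastEvaluatedKey")
        if not start_key:
            break                                     # no more pages
```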


Top

 

Q47: Which of the following is correct way of passing a stage variable to an HTTP URL ? (Select TWO.)

  • A. http://example.com/${}/prod
  • B. http://example.com/${stageVariables.}/prod
  • C. http://${stageVariables.}.example.com/dev/operation
  • D. http://${stageVariables}.example.com/dev/operation
  • E. http://${}.example.com/dev/operation
  • F. http://example.com/${stageVariables}/prod


Answer – B. and C.
A stage variable can be used as part of an HTTP integration URL in the following cases:

  • A full URI without protocol
  • A full domain
  • A subdomain
  • A path
  • A query string

In the above case, options B and C use the stage variable as a path and a subdomain respectively.
Reference: Amazon API Gateway Stage Variables Reference

Top

Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes which they use for their on-premise configuration for servers in AWS. Which of the following service would be ideal to use in this regard?

  • A. AWS Elastic Beanstalk
  • B. AWS OpsWorks
  • C. AWS CloudFormation
  • D. AWS SQS


Answer – B
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments All other options are invalid since they cannot be used to work with Chef recipes for configuration management.
Reference: AWS OpsWorks

Top

 

Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?

  • A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
  • B. Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access.
  • C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
  • D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.


Answer – C
With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) —such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application. Option A is invalid since Roles cannot be assigned to S3 buckets Options B and D are invalid since the AWS Access keys should not be used
Reference: About Web Identity Federation
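Once the IdP has issued a token, the exchange itself is the STS AssumeRoleWithWebIdentity call. A hedged sketch: the role ARN, session name, and token are placeholders, and sts_client would normally come from boto3.client("sts").

```python
def temporary_credentials(sts_client, role_arn, id_token):
    """Exchange an identity-provider token for temporary AWS credentials
    via AssumeRoleWithWebIdentity."""
    response = sts_client.assume_role_with_web_identity(
        RoleArn=role_arn,
        RoleSessionName="web-app-session",   # placeholder session name
        WebIdentityToken=id_token,
        DurationSeconds=3600,                # credentials expire after an hour
    )
    return response["Credentials"]           # AccessKeyId, SecretAccessKey, SessionToken
```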

Top

Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?

  • A. Cognito Data
  • B. Cognito Events
  • C. Cognito Streams
  • D. Cognito Callbacks


Answer – C
Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can now configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams
Reference:

Top

 

Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? Choose 2 answers from the options given below

  • A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
  • B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
  • C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
  • D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function


Answer: A and C.
AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC.
Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC

Top

 

Q52: You’ve currently been tasked to migrate an existing on-premises environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant environments in the Beanstalk service that would be suitable to host your application. What should you consider doing in this case?

  • A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
  • B. Consider using CloudFormation to deploy your environment to Elastic Beanstalk
  • C. Consider using Packer to create a custom platform
  • D. Consider deploying your application using the Elastic Container Service


Answer – C
Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a Custom Image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software for which Elastic Beanstalk doesn’t provide a platform out of the box.

Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance.

To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems (Ubuntu, RHEL, or Amazon Linux; see the flavor entry in Platform.yaml File Format for the exact version numbers) and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings.
Reference: AWS Elastic Beanstalk Custom Platforms

Top

Q53: Company B is writing 10 items to the DynamoDB table every second. Each item is 15.5 KB in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.

  • A. 10
  • B. 160
  • C. 155
  • D. 16


Answer – B.
One write capacity unit covers one write per second of an item up to 1 KB, so each 15.5 KB item requires 16 write capacity units (rounded up). With 10 items written per second, the required provisioned write throughput is 10 × 16 = 160.
Reference: Read/Write Capacity Mode
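As with the read-capacity question, this can be written as a short formula. A sketch (function name is mine, not an AWS API):

```python
import math

def write_capacity(items_per_second, item_size_kb):
    """Write capacity units: one WCU covers one write per second of an item
    up to 1 KB, so the item size is rounded up to whole KB."""
    units_per_item = math.ceil(item_size_kb)   # 15.5 KB -> 16 units
    return items_per_second * units_per_item

# write_capacity(10, 15.5) -> 160, matching answer B
```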

Top

Q54: Which AWS Service can be used to automatically install your application code onto EC2, on premises systems and Lambda?

  • A. CodeCommit
  • B. X-Ray
  • C. CodeBuild
  • D. CodeDeploy


Answer: D

Reference: AWS CodeDeploy


Top

 

Q55: Which AWS service can be used to compile source code, run tests and package code?

  • A. CodePipeline
  • B. CodeCommit
  • C. CodeBuild
  • D. CodeDeploy


Answer: C.

Reference: AWS CodeBuild


Top

Q56: How can you prevent CloudFormation from deleting your entire stack on failure? (Choose 2)

  • A. Set the Rollback on failure radio button to No in the CloudFormation console
  • B. Set Termination Protection to Enabled in the CloudFormation console
  • C. Use the –disable-rollback flag with the AWS CLI
  • D. Use the –enable-termination-protection protection flag with the AWS CLI

Answer: A. and C.

Reference: Protecting a Stack From Being Deleted

Top

Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?

  • A. Continuous Integration
  • B. Continuous Deployment
  • C. Continuous Delivery
  • D. Continuous Development

Answer: A.
Continuous Integration is the practice of merging code changes into a shared repository frequently; automated builds and tests run on each merge, so bugs are identified early in the release process.

Top

Q58: When deploying application code to EC2, the AppSpec file can be written in which language?

  • A. JSON
  • B. JSON or YAML
  • C. XML
  • D. YAML

Top

 

Q59: Part of your CloudFormation deployment fails due to a misconfiguration. By default, what will happen?

  • A. CloudFormation will rollback only the failed components
  • B. CloudFormation will rollback the entire stack
  • C. Failed component will remain available for debugging purposes
  • D. CloudFormation will ask you if you want to continue with the deployment

Answer: B.
By default, CloudFormation rolls back the entire stack if any part of the deployment fails.


Top

Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?

  • A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
  • B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
  • C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
  • D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.

Answer: C

Reference: Getting Started with Amazon SNS


Top

Q61: Which AWS service can be used to centrally store and version control your application source code, binaries and libraries

  • A. CodeCommit
  • B. CodeBuild
  • C. CodePipeline
  • D. ElasticFileSystem

Answer: A

Reference: AWS CodeCommit


Top

 

Q62: You are using CloudFormation to create a new S3 bucket, which of the following sections would you use to define the properties of your bucket?

  • A. Conditions
  • B. Parameters
  • C. Outputs
  • D. Resources

Answer: D

Reference: Resources


Top

Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?

  • A. Transforms
  • B. Outputs
  • C. Resources
  • D. Instances

Answer: C.
The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3.
Reference: Resources

Top

Q64: Which AWS service can be used to fully automate your entire release process?

  • A. CodeDeploy
  • B. CodePipeline
  • C. CodeCommit
  • D. CodeBuild

Answer: B.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.

Reference: AWS CodePipeline


Top

 

Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?

  • A. Outputs
  • B. Transforms
  • C. Resources
  • D. Exports

Answer: A.
Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack.
Reference: CloudFormation Outputs
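For illustration, cross-stack wiring uses an Export on the producer stack's output and Fn::ImportValue in the consumer stack. The two fragments below come from two separate hypothetical templates; the resource names and export name are placeholders:

```yaml
# Producer stack: export a value
Outputs:
  VpcId:
    Value: !Ref MyVpc            # placeholder resource
    Export:
      Name: shared-vpc-id        # the name other stacks import by
```

```yaml
# Consumer stack: import the exported value
Resources:
  AppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: App SG
      VpcId: !ImportValue shared-vpc-id
```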

Top

 

Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?

  • A. Inputs
  • B. Resources
  • C. Transforms
  • D. Files

Answer: C.
Transforms is used to reference code located in S3 and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments.
Reference: Transforms

Top

Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?

  • A. buildspec.yml
  • B. appspec.json
  • C. appspec.yml
  • D. buildspec.json

Answer: C.
CodeDeploy uses an appspec.yml file to specify the source files to copy and the lifecycle hook scripts to run.

Top

 

Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?

  • A. Share the code using an EBS volume
  • B. Copy and paste the code into the template each time you need to use it
  • C. Use a CloudFormation nested stack
  • D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.

Answer: C.

Reference: Working with Nested Stacks

Top

Q69: In the CodeDeploy AppSpec file, what are hooks used for?

  • A. To reference AWS resources that will be used during the deployment
  • B. Hooks are reserved for future use
  • C. To specify files you want to copy during the deployment.
  • D. To specify, scripts or function that you want to run at set points in the deployment lifecycle

Answer: D.
The ‘hooks’ section for an EC2/On-Premises deployment contains mappings that link deployment lifecycle event hooks to one or more scripts.

Reference: AppSpec ‘hooks’ Section
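For illustration, a minimal appspec.yml for an EC2/On-Premises deployment might look like the sketch below; the file paths and script names are placeholders:

```yaml
version: 0.0
os: linux
files:
  - source: /app                         # files to copy from the revision...
    destination: /var/www/app            # ...to this location on the instance
hooks:
  BeforeInstall:                         # lifecycle event mapped to scripts
    - location: scripts/stop_server.sh
      timeout: 60
  ApplicationStart:
    - location: scripts/start_server.sh
```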

Top

 

Q70: Which command can you use to encrypt a plain text file using a CMK?

  • A. aws kms-encrypt
  • B. aws iam encrypt
  • C. aws kms encrypt
  • D. aws encrypt

Answer: C.
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext fileb://ExamplePlaintextFile --output text --query CiphertextBlob > C:\Temp\ExampleEncryptedFile.base64

Reference: AWS CLI Encrypt

Top

Q72: Which of the following is an encrypted key used by KMS to encrypt your data?

  • A. Customer Managed Key
  • B. Encryption Key
  • C. Envelope Key
  • D. Customer Master Key

Answer: C.
Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption.
Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.

Reference: Envelope Encryption
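
The key-wrapping pattern above can be sketched in a few lines of Python; a toy XOR cipher stands in for AES here (it is not secure, the point is only to show how the data key and master key relate).

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy reversible cipher: XOR each byte with the repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The Customer Master Key: in real KMS it never leaves the service.
master_key = os.urandom(32)

# Generate a data (envelope) key and wrap it under the master key.
data_key = os.urandom(32)
encrypted_data_key = xor_cipher(data_key, master_key)

# Encrypt the plaintext with the data key, then store only the
# ciphertext and the wrapped key together; discard the plaintext key.
ciphertext = xor_cipher(b"sensitive payload", data_key)

# Decrypt: unwrap the data key with the master key, then the data.
recovered_key = xor_cipher(encrypted_data_key, master_key)
plaintext = xor_cipher(ciphertext, recovered_key)
```

Only the master key can unwrap the data key, so the wrapped key can safely travel alongside the ciphertext.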

Top

 

Q73: Which of the following statements are correct? (Choose 2)

  • A. The Customer Master Key is used to encrypt and decrypt the Envelope Key or Data Key
  • B. The Envelope Key or Data Key is used to encrypt and decrypt plain text files.
  • C. The Envelope Key or Data Key is used to encrypt and decrypt the Customer Master Key.
  • D. The Customer Master Key is used to encrypt and decrypt plain text files.

Answer: A. and B.

Reference: AWS Key Management Service Concepts

Top

Q74: Which of the following statements are correct in relation to KMS? (Choose 2)

  • A. KMS Encryption keys are regional
  • B. You cannot export your customer master key
  • C. You can export your customer master key.
  • D. KMS encryption Keys are global

Answer: A. and B.

Reference: AWS Key Management Service FAQs

Q75:  A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.)
A. Compiled application code
B. Java runtime environment
C. References to the event sources
D. Lambda execution role
E. Application dependencies


Answer: A. E.
Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your code and any dependencies.
Reference: Lambda deployment packages.

Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package?
A. A launch template for the Amazon EC2 Auto Scaling group
B. A CodeDeploy AppSpec file
C. An EC2 role that grants the application access to AWS services
D. An IAM policy that grants the application access to AWS services


Answer: B.
Notes: The CodeDeploy AppSpec (application specific) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file.
Reference: CodeDeploy application specification (AppSpec) files.
Category: Deployment

Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)

A. Create a new Lambda version every time a new code release needs testing.
B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version.
C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT.
D. Create a new Lambda layer every time a new code release needs testing.
E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.


Answer: A. B.
Notes: Lambda function versions let you manage deployments and test code changes without affecting the stable production version. By creating separate Production and Development aliases, callers can invoke the appropriate version as needed. A Lambda function alias points to a specific function version, so the development team only needs to repoint the alias when promoting a release. The $LATEST version always refers to the newest, unpublished code.
Reference: Lambda function versions.

For more information about Lambda layers, see Creating and sharing Lambda layers.

For more information about Lambda function aliases, see Lambda function aliases.

Category: Deployment

Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.)
A. Update event source mappings with the ARN of the Lambda layer.
B. Point a Lambda alias to a new version of the Lambda function.
C. Create a Lambda alias for each published version of the Lambda function.
D. Point a Lambda alias to a new Lambda function alias.
E. Update the event source mappings with the Lambda alias ARN.


Answer: B. E.
Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version.
Reference: Lambda function aliases.
Category: Deployment
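
The alias pattern from Q76 and Q77 can be sketched with a plain mapping; the registry dict and the function ARN below are hypothetical stand-ins for Lambda's own alias bookkeeping, not real API calls.

```python
# Hypothetical registry standing in for Lambda's alias -> version mapping.
aliases = {"Production": "3", "Development": "$LATEST"}

def invoke_arn(base_arn: str, alias_name: str) -> str:
    # An alias ARN resolves to whichever version the alias points at, so
    # event source mappings can reference the alias and never change.
    return f"{base_arn}:{aliases[alias_name]}"

def promote(alias_name: str, version: str) -> None:
    # Releasing new code is just repointing the alias to a new version.
    aliases[alias_name] = version

base = "arn:aws:lambda:us-east-1:123456789012:function:orders"  # hypothetical
dev_arn = invoke_arn(base, "Development")
promote("Production", "4")
prod_arn = invoke_arn(base, "Production")
```

Because callers hold the alias ARN rather than a version ARN, promoting version 4 required no change to any event source mapping.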

Q78:  A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements?
A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C).
B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket.
C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket.
D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.


Answer: D.
Notes: When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and removes the encryption key from memory.
Reference: Protecting data using server-side encryption with customer-provided encryption keys (SSE-C).

Category: Security
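
For illustration, these are the request headers S3 expects for SSE-C; the 256-bit key is generated and kept by the customer. This sketch only builds the headers and does not call S3.

```python
import base64
import hashlib
import os

# The customer generates and keeps the 256-bit key; S3 uses it to
# encrypt the object and then removes it from memory.
key = os.urandom(32)

sse_c_headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
    # MD5 digest of the key lets S3 verify the key arrived intact.
    "x-amz-server-side-encryption-customer-key-MD5":
        base64.b64encode(hashlib.md5(key).digest()).decode(),
}
```

The same key (and headers) must be supplied again on every GET, since S3 does not store it.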

Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)

A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS).
B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS).
C. Use generated keys with the DynamoDB Encryption Client.
D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs).
E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).


Answer: A. C.
Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK.
Reference: Direct KMS Materials Provider.
Category: Deployment

Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.)
A. Create an AWS Lambda authorizer for the API.
B. Create an Amazon Cognito authorizer for the API.
C. Configure the authorizer for the API resource.
D. Configure the API methods to use the authorizer.
E. Configure the authorizer for the API stage.


Answer: B. D.
Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API.
Reference: Control access to a REST API using Amazon Cognito user pools as authorizer.
Category: Security

Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.)
A. Authenticate to the Amazon Cognito identity pool directly.
B. Authenticate to AWS Identity and Access Management (IAM) directly.
C. Authenticate to the Amazon Cognito user pool directly.
D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS).
E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.


Answer: C. E.
Notes: The Amazon Cognito user pool provides direct user authentication. The Amazon Cognito user pool also provides a federated authentication option with third-party identity providers (IdPs), including amazon.com.
Reference: Adding User Pool Sign-in Through a Third Party.
Category: Security

Question: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.)
A. Define an AWS Step Functions task for each Lambda function.
B. Define an AWS Step Functions task for each workflow.
C. Write code that polls the AWS Step Functions invocation to coordinate each workflow.
D. Define an AWS Step Functions state machine for each workflow.
E. Define an AWS Step Functions state machine for each Lambda function.
Answer: A. D.
Notes: Step Functions is based on state machines and tasks. A state machine is a workflow: it expresses a number of states, their relationships, and their input and output. Tasks perform work by coordinating with other AWS services, such as Lambda. You coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine, written in the Amazon States Language.
Reference: Getting Started with AWS Step Functions (https://aws.amazon.com/step-functions/getting-started/)
Category: Development
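
A minimal Amazon States Language definition for such a workflow might look like the sketch below; the state names and Lambda function ARNs are hypothetical examples.

```python
import json

# Each Task state invokes one Lambda function; the state machine as a
# whole represents one order-processing workflow.
state_machine = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Next": "ChargePayment",
        },
        "ChargePayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge",
            "End": True,
        },
    },
}

# The JSON definition is what you supply when creating the state machine.
definition = json.dumps(state_machine)
```
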

Welcome to AWS Certified Developer Associate Exam Preparation: Definition and Objectives, Top 100 Questions and Answers dump, White papers, Courses, Labs and Training Materials, Exam info and details, References, Jobs, Others AWS Certificates

AWS Developer Associate DVA-C01 Exam Prep

What is the AWS Certified Developer Associate Exam?

This AWS Certified Developer-Associate Examination is intended for individuals who perform a Developer role. It validates an examinee’s ability to:

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications by using AWS

Recommended general IT knowledge
The target candidate should have the following:
– In-depth knowledge of at least one high-level programming language
– Understanding of application lifecycle management
– The ability to write code for serverless applications
– Understanding of the use of containers in the development process

Recommended AWS knowledge
The target candidate should be able to do the following:

  • Use the AWS service APIs, CLI, and software development kits (SDKs) to write applications
  • Identify key features of AWS services
  • Understand the AWS shared responsibility model
  • Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy applications on AWS
  • Use and interact with AWS services
  • Apply basic understanding of cloud-native applications to write code
  • Write code by using AWS security best practices (for example, use IAM roles instead of secret and access keys in the code)
  • Author, maintain, and debug code modules on AWS

What is considered out of scope for the target candidate?
The following is a non-exhaustive list of related job tasks that the target candidate is not expected to be able to perform. These items are considered out of scope for the exam:
– Design architectures (for example, distributed system, microservices)
– Design and implement CI/CD pipelines
– Administer IAM users and groups
– Administer Amazon Elastic Container Service (Amazon ECS)
– Design AWS networking infrastructure (for example, Amazon VPC, AWS Direct Connect)
– Understand compliance and licensing

Exam content
Response types
There are two types of questions on the exam:
– Multiple choice: Has one correct response and three incorrect responses (distractors)
– Multiple response: Has two or more correct responses out of five or more response options
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose.
Distractors are generally plausible responses that match the content area.
Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.

Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context.
The percentage in each domain represents only scored content.

Domain 1: Deployment 22%
Domain 2: Security 26%
Domain 3: Development with AWS Services 30%
Domain 4: Refactoring 10%
Domain 5: Monitoring and Troubleshooting 12%

Domain 1: Deployment
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
–  Commit code to a repository and invoke build, test and/or deployment actions
–  Use labels and branches for version and release management
–  Use AWS CodePipeline to orchestrate workflows against different environments
–  Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS
CodeDeploy for CI/CD purposes
–  Perform a roll back plan based on application deployment policy

1.2 Deploy applications using AWS Elastic Beanstalk.
–  Utilize existing supported environments to define a new application stack
–  Package the application
–  Introduce a new application version into the Elastic Beanstalk environment
–  Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable)
–  Validate application health using Elastic Beanstalk dashboard
–  Use Amazon CloudWatch Logs to instrument application logging

1.3 Prepare the application deployment package to be deployed to AWS.
–  Manage the dependencies of the code module (like environment variables, config files and static image files) within the package
–  Outline the package/container directory structure and organize files appropriately
–  Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)

1.4 Deploy serverless applications.
–  Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template
–  Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)

Domain 2: Security
2.1 Make authenticated calls to AWS services.
–  Communicate required policy based on least privileges required by application.
–  Assume an IAM role to access a service
–  Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)

2.2 Implement encryption using AWS services.
– Encrypt data at rest (client side; server side; envelope encryption) using AWS services
–  Encrypt data in transit

2.3 Implement application authentication and authorization.
– Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools
–  Use Amazon Cognito-provided credentials to write code that accesses AWS services.
–  Use Amazon Cognito sync to synchronize user profiles and data
–  Use developer-authenticated identities to interact between end user devices, backend
authentication, and Amazon Cognito

Domain 3: Development with AWS Services
3.1 Write code for serverless applications.
– Compare and contrast server-based vs. serverless model (e.g., micro services, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications)
– Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler)
– Create an API endpoint using Amazon API Gateway
–  Create and test appropriate API actions like GET, POST using the API endpoint
–  Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes)
–  Compute read/write capacity units for Amazon DynamoDB based on application requirements
–  Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis)
–  Invoke an AWS Lambda function synchronously and asynchronously
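
The capacity-unit arithmetic called for in objective 3.1 can be sketched as follows; the item sizes and request rates are example inputs, and the rounding follows the 4 KB read / 1 KB write unit sizes.

```python
import math

def read_capacity_units(item_size_bytes: int, reads_per_second: int,
                        strongly_consistent: bool = True) -> int:
    # One RCU covers one strongly consistent read per second of an item
    # up to 4 KB (or two eventually consistent reads per second).
    units = math.ceil(item_size_bytes / 4096) * reads_per_second
    return units if strongly_consistent else math.ceil(units / 2)

def write_capacity_units(item_size_bytes: int, writes_per_second: int) -> int:
    # One WCU covers one write per second of an item up to 1 KB.
    return math.ceil(item_size_bytes / 1024) * writes_per_second

# Example: 6,000-byte items read 10 times/sec; 1,500-byte items
# written 5 times/sec.
rcu = read_capacity_units(6000, 10)   # 2 units per read * 10 reads
wcu = write_capacity_units(1500, 5)   # 2 units per write * 5 writes
```
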

3.2 Translate functional requirements into application design.
– Determine real-time vs. batch processing for a given use case
– Determine use of synchronous vs. asynchronous for a given use case
– Determine use of event vs. schedule/poll for a given use case
– Account for tradeoffs for consistency models in an application design

Domain 4: Refactoring
4.1 Optimize applications to best use AWS services and features.
– Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache)
– Apply an Amazon S3 naming scheme for optimal read performance

4.2 Migrate existing application code to run on AWS.
– Isolate dependencies
– Run the application as one or more stateless processes
– Develop in order to enable horizontal scalability
– Externalize state

Domain 5: Monitoring and Troubleshooting

5.1 Write code that can be monitored.
– Create custom Amazon CloudWatch metrics
– Perform logging in a manner available to systems operators
– Instrument application source code to enable tracing in AWS X-Ray

5.2 Perform root cause analysis on faults found in testing or production.
– Interpret the outputs from the logging mechanism in AWS to identify errors in logs
– Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues
– Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component

Which key tools, technologies, and concepts might be covered on the exam?

The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
– Analytics
– Application Integration
– Containers
– Cost and Capacity Management
– Data Movement
– Developer Tools
– Instances (virtual machines)
– Management and Governance
– Networking and Content Delivery
– Security
– Serverless

AWS services and features

Analytics:
– Amazon Elasticsearch Service (Amazon ES)
– Amazon Kinesis
Application Integration:
– Amazon EventBridge (Amazon CloudWatch Events)
– Amazon Simple Notification Service (Amazon SNS)
– Amazon Simple Queue Service (Amazon SQS)
– AWS Step Functions

Compute:
– Amazon EC2
– AWS Elastic Beanstalk
– AWS Lambda

Containers:
– Amazon Elastic Container Registry (Amazon ECR)
– Amazon Elastic Container Service (Amazon ECS)
– Amazon Elastic Kubernetes Services (Amazon EKS)

Database:
– Amazon DynamoDB
– Amazon ElastiCache
– Amazon RDS

Developer Tools:
– AWS CodeArtifact
– AWS CodeBuild
– AWS CodeCommit
– AWS CodeDeploy
– Amazon CodeGuru
– AWS CodePipeline
– AWS CodeStar
– AWS Fault Injection Simulator
– AWS X-Ray

Management and Governance:
– AWS CloudFormation
– Amazon CloudWatch

Networking and Content Delivery:
– Amazon API Gateway
– Amazon CloudFront
– Elastic Load Balancing

Security, Identity, and Compliance:
– Amazon Cognito
– AWS Identity and Access Management (IAM)
– AWS Key Management Service (AWS KMS)

Storage:
– Amazon S3

Out-of-scope AWS services and features

The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.
Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant.
Out-of-scope AWS services and features include the following:
– AWS Application Discovery Service
– Amazon AppStream 2.0
– Amazon Chime
– Amazon Connect
– AWS Database Migration Service (AWS DMS)
– AWS Device Farm
– Amazon Elastic Transcoder
– Amazon GameLift
– Amazon Lex
– Amazon Machine Learning (Amazon ML)
– AWS Managed Services
– Amazon Mobile Analytics
– Amazon Polly

– Amazon QuickSight
– Amazon Rekognition
– AWS Server Migration Service (AWS SMS)
– AWS Service Catalog
– AWS Shield Advanced
– AWS Shield Standard
– AWS Snow Family
– AWS Storage Gateway
– AWS WAF
– Amazon WorkMail
– Amazon WorkSpaces

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

Top

AWS Certified Developer – Associate Practice Questions And Answers Dump

Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your
partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost.
How can you accommodate the partners’ broken web services without wasting your resources?

  • A. Create a delay queue and set DelaySeconds to 30 seconds
  • B. Requeue the message with a VisibilityTimeout of 30 seconds.
  • C. Create a dead letter queue and set the Maximum Receives to 3.
  • D. Requeue the message with a DelaySeconds of 30 seconds.
AWS Developer Associates DVA-C01 PRO

C. After a message is taken from the queue and returned for the maximum number of retries, it is
automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.

Reference: Amazon SQS Dead-Letter Queues
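
The dead-letter queue from answer C is configured by attaching a redrive policy to the source queue. A minimal sketch of that attribute follows; the DLQ ARN is hypothetical.

```python
import json

# After maxReceiveCount failed receives, SQS moves the message to the
# dead-letter queue instead of returning it to the source queue again.
redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:commands-dlq",
    "maxReceiveCount": 3,
}

# SQS takes the policy as a JSON string in the queue's attributes.
queue_attributes = {"RedrivePolicy": json.dumps(redrive_policy)}
```

Commands that repeatedly fail delivery stop consuming resources but are preserved in the DLQ for later inspection.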


Top

Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently.
What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. The AWS Documentation mentions the following:

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

Q2: You are creating a DynamoDB table with the following attributes:

  • PurchaseOrderNumber (partition key)
  • CustomerID
  • PurchaseDate
  • TotalPurchaseValue

One of your applications must retrieve items from the table to calculate the total value of purchases for a
particular customer over a date range. What secondary index do you need to add to the table?

  • A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute
  • C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute


C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition
key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the
TotalPurchaseValue into the index provides all the data needed to satisfy the use case.

Reference: AWS DynamoDB Global Secondary Indexes

Difference between local and global indexes in DynamoDB

    • Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
    • Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
    • Local Secondary Indexes still rely on the original Hash Key. When you supply a table with hash+range, think about the LSI as hash+range1, hash+range2.. hash+range6. You get 5 more range attributes to query on. Also, there is only one provisioned throughput.
    • Global Secondary Indexes define a new paradigm – different hash/range keys per index.
      This breaks the original usage of one hash key per table. This is also why, when defining a GSI, you are required to add a provisioned throughput per index and pay for it.
    • Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
    • Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.

Throughput :

  • Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table another for the index. Both operations will consume write capacity units from the table.
  • Global Secondary Indexes have their own provisioned throughput, when you query the index the operation will consume read capacity from the index, when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table another for the index*.


Top

The Cloud is the future: Get Certified now.
The AWS Certified Solution Architect Average Salary is: US $149,446/year. Get Certified with the App below:

AWS Developer Associate DVA-C01 Exam Prep

Q3: When referencing the remaining time left for a Lambda function to run within the function’s code you would use:

  • A. The event object
  • B. The timeLeft object
  • C. The remains object
  • D. The context object


D. The context object.

Reference: AWS Lambda
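
A short sketch of how the context object is used: the MockContext class below is a hypothetical stand-in for the object Lambda supplies as the second handler argument, but the real Python context exposes the same get_remaining_time_in_millis() method.

```python
import time

class MockContext:
    # Hypothetical stand-in for the Lambda context object.
    def __init__(self, timeout_seconds: float):
        self._deadline = time.monotonic() + timeout_seconds

    def get_remaining_time_in_millis(self) -> int:
        return max(0, int((self._deadline - time.monotonic()) * 1000))

def handler(event, context):
    # Bail out gracefully when fewer than 5 seconds remain.
    if context.get_remaining_time_in_millis() < 5000:
        return {"status": "skipped"}
    return {"status": "processed"}
```

Checking the remaining time this way lets long-running work stop cleanly before Lambda forcibly terminates the invocation.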


Top

Q4: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context
D. event, context
def handler_name(event, context):
    return some_value

Reference: AWS Lambda Function Handler in Python

Top

Q5: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda Deployment Package

Top

Q6: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime

C. Function code and libraries not included within the runtime

Reference: AWS Lambda Deployment Package in PowerShell

Top

Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?

  • A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
  • B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
  • C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
  • D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.


D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return traffic. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return traffic.

Reference: Comparison of Security Groups and Network ACLs

AWS Security Groups and NACL


Top

Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.

Reference: AWS Network Address Translation Gateway


Top

Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.

Reference: AWS Auto Scaling


Top


Q11: You’re writing a script with an AWS SDK that uses AWS API actions to create AMIs from non-EBS backed instances. Which API call occurs in the final step of creating the AMI?

  • A. RegisterImage
  • B. CreateImage
  • C. ami-register-image
  • D. ami-create-image

A. It is actually RegisterImage. All AWS API actions follow this capitalization style and don’t contain hyphens.

Reference: API RegisterImage

Top

Q12: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?

  • A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
  • B. Permanently assigning users to specific instances and always routing their traffic to those instances
  • C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
  • D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance

A. The generally recommended practice is to keep instances stateless by storing session data in a caching layer such as ElastiCache running Redis or Memcached; any instance behind the ELB can then serve any user. Sticky sessions (options C and D) tie users to specific instances, and sessions are lost when those instances are scaled in or fail.

Top

Q13: Which API call would best be used to describe an Amazon Machine Image?

  • A. ami-describe-image
  • B. ami-describe-images
  • C. DescribeImage
  • D. DescribeImages

D. In general, API actions stick to the PascalCase style with the first letter of every word capitalized.

Reference: API DescribeImages

Top

Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

  • A. Autoscaling requires using Amazon EBS-backed instances
  • B. Virtual Private Cloud requires EBS backed instances
  • C. Amazon EBS-backed instances can be stopped and restarted without losing data
  • D. Instance-store backed instances can be stopped and restarted without losing data


C. Instance store-backed images use “ephemeral” (temporary) storage that is only available during the life of the instance. Rebooting an instance preserves ephemeral data, but stopping and starting an instance removes all ephemeral storage.

Reference: What is the difference between EBS and Instance Store?

Top

Q15: After having created a new Linux instance on Amazon EC2, and downloaded the .pem file (called Toto.pem) you try and SSH into your IP address (54.1.132.33) using the following command.
ssh -i Toto.pem ec2-user@54.1.132.33
However you receive the following error.
@@@@@@@@ WARNING: UNPROTECTED PRIVATE KEY FILE! @ @@@@@@@@@@@@@@@@@@@
What is the most probable reason for this and how can you fix it?

  • A. You do not have root access on your terminal and need to use the sudo option for this to work.
  • B. You do not have enough permissions to perform the operation.
  • C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
  • D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.

D. Your key file must not be publicly viewable for SSH to work. You need to run something like: chmod 400 Toto.pem


Top

Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?

  • A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
  • B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
  • C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
  • D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.

D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.

Reference: AWS Root Device Storage

Top

Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:

  • A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
  • B. Can only be used to launch EC2 instances in the same country as the AMI is stored
  • C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
  • D. Can be used to launch EC2 instances in any AWS region

C. AMIs are only available in the region in which they are created. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs to different regions for you. You cannot access an AMI from one region in another region; however, you can copy an AMI from one region to another.

Reference: https://aws.amazon.com/amazon-linux-ami/

Top

Q18: Which of the following statements is true about the Elastic File System (EFS)?

  • A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
  • B. EFS can be used by multiple EC2 instances simultaneously
  • C. EFS cannot be used by an instance using EBS
  • D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes


A. and B.

Reference: https://aws.amazon.com/efs/

Top

Q19: IAM Policies, at a minimum, contain what elements?

  • A. ID
  • B. Effects
  • C. Resources
  • D. Sid
  • E. Principal
  • F. Actions

B. C. and F.

Effect – Use Allow or Deny to indicate whether the policy allows or denies access.

Resource – Specify a list of resources to which the actions apply.

Action – Include a list of actions that the policy allows or denies.

Id and Sid are optional fields in IAM policies; they are not required.
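A minimal policy document shows the three required elements together. The bucket ARN below is hypothetical; the `Version` field is not one of the three but is strongly recommended in practice:

```python
# A minimal IAM policy document with the required Effect, Action, and
# Resource elements (Id and Sid are optional and omitted here).
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::example-bucket/*"],  # hypothetical bucket
        }
    ],
}
print(json.dumps(policy, indent=2))
```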

Reference: AWS IAM Access Policies

Top


Q20: What are the main benefits of IAM groups?

  • A. The ability to create custom permission policies.
  • B. Assigning IAM permission policies to more than one user at a time.
  • C. Easier user/policy management.
  • D. Allowing EC2 instances to gain access to S3.

B. and C.

A. is incorrect: creating custom permission policies is a benefit of IAM policies generally. IAM groups don’t create policies; they have policies attached to them.

Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

 

Top

Q21: What are benefits of using AWS STS?

  • A. Grant access to AWS resources without having to create an IAM identity for them
  • B. Since credentials are temporary, you don’t have to rotate or revoke them
  • C. Temporary security credentials can be extended indefinitely
  • D. Temporary security credentials can be restricted to a specific region

A. and B. AWS STS lets you grant trusted users temporary, limited-privilege credentials for AWS resources without creating an IAM identity for each of them, and because the credentials expire automatically there is nothing to rotate or revoke. Temporary credentials cannot be extended indefinitely, and they are not restricted to a specific region.

Top

Q22: What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

 

Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?

  • A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1.micro to m4.large.
  • B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
  • C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
  • D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.

B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence.
Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation.
During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:

  • Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options at this level unless overridden.
  • Saved configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
  • Configuration files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle. Configuration files are executed in alphabetical order; for example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
  • Default values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.

If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. Settings applied from a saved configuration or directly to the environment are stored as part of the environment’s configuration and can be removed with the AWS CLI or the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.

Reference: Managing ec2 features – Elastic beanstalk
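As a sketch, the same instance type could be set in a configuration file. The namespace `aws:autoscaling:launchconfiguration` and option `InstanceType` are real Elastic Beanstalk option settings; the filename is hypothetical:

```yaml
# .ebextensions/01-instance-type.config (hypothetical filename)
option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: m4.large
```

A saved configuration with the same option would take precedence over this file, per the ordering described above.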

Q24: What statements are true about Availability Zones (AZs) and Regions?

  • A. There is only one AZ in each AWS Region
  • B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
  • C. AZs can be moved between AWS Regions based on your needs
  • D. There are (almost always) two or more AZs in each AWS Region


B and D.

Reference: AWS global infrastructure/

Top

Q25: An AWS Region contains:

  • A. Edge Locations
  • B. Data Centers
  • C. AWS Services
  • D. Availability Zones


B. C. D. Edge locations are actually distinct locations that don’t explicitly fall within AWS regions.

Reference: AWS Global Infrastructure


Top

Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?

  • A. Eventual Consistent Reads
  • B. Conditional reads for Consistency
  • C. Strongly Consistent Reads
  • D. Not possible


C. This is stated very clearly in the AWS documentation on read consistency for DynamoDB: only with strongly consistent reads are you guaranteed to get the latest value after all prior writes have completed.

Reference: https://aws.amazon.com/dynamodb/faqs/


Top

Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is required for the migration. Which of the following steps would you consider for this requirement?

  • A. Create an OpsWorks stack and deploy the Docker containers
  • B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
  • C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
  • D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.


B. The Elastic Beanstalk service is the ideal service to quickly provision development environments. You can also create environments which can be used to host Docker based containers.

Reference: Create and Deploy Docker in AWS


Top

Q28: You’ve written an application that uploads objects to an S3 bucket. The size of the objects varies between 200 and 500 MB. You’ve noticed that the application sometimes takes longer than expected to upload an object, and you want to improve its performance. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket



C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
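The three-step process above splits the object into parts that can be uploaded in parallel. The part-count arithmetic can be sketched as follows; the 100 MB part size is an arbitrary choice for illustration (S3 requires every part except the last to be at least 5 MB):

```python
# Illustrative part-count arithmetic for a multipart upload.
import math

object_size_mb = 500
part_size_mb = 100          # chosen part size (must be >= 5 MB)

num_parts = math.ceil(object_size_mb / part_size_mb)
print(num_parts)  # 5 parts, uploaded independently, then completed in one request
```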

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html


Top

Q29: A security system monitors 600 cameras, saving image metadata every minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?

  • A. 6000
  • B. 10
  • C. 3600
  • D. 600


B. Write capacity for a DynamoDB table is specified as the number of 1 KB writes per second. Since each camera writes once per minute, divide 600 by 60 to get the number of 1 KB writes per second, which gives 10.

You can specify the Write capacity in the Capacity tab of the DynamoDB table.
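The arithmetic above can be written out step by step:

```python
# 600 cameras, each writing one 1 KB item per minute.
import math

cameras = 600
writes_per_camera_per_sec = 1 / 60           # one write per minute
item_size_kb = 1

writes_per_sec = cameras * writes_per_camera_per_sec   # 10.0
wcus = math.ceil(writes_per_sec) * math.ceil(item_size_kb)
print(wcus)  # 10
```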

Reference: AWS working with tables


Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?

  • A. Lazy loading
  • B. Write-through
  • C. Error retries
  • D. Exponential backoff

Answer:


Answer – A
Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. All other options are incorrect.
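The flow above is the cache-aside (lazy loading) pattern. A minimal sketch, with a plain dict standing in for ElastiCache and a function standing in for the RDS query:

```python
# Cache-aside (lazy loading) sketch.
cache = {}
db_hits = 0  # counts how often we fall through to the "database"

def query_database(key):
    global db_hits
    db_hits += 1
    return f"row-for-{key}"  # stand-in for a real SQL query

def get(key):
    if key in cache:             # cache hit: return straight from the cache
        return cache[key]
    value = query_database(key)  # cache miss: read from the data store...
    cache[key] = value           # ...and populate the cache for next time
    return value

get("user:1"); get("user:1")
print(db_hits)  # 1 -> the second read was served from the cache
```

A real implementation would also set a TTL on cached entries so stale data eventually expires.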
Reference: Caching Strategies

Top

Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15-60 seconds. How should the Developer efficiently query the queue for new messages?

  • A. Use long polling
  • B. Set a custom visibility timeout
  • C. Use short polling
  • D. Implement exponential backoff


Answer – A
Long polling helps ensure that the application makes fewer requests for messages over a given period, which is more cost effective. Since the messages only become available after 15 seconds and we don’t know exactly when they will arrive, it is better to use long polling.
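Long polling is enabled per request with the `WaitTimeSeconds` parameter. The dict below shows illustrative parameters only; in a real application they would be passed to boto3’s `sqs.receive_message()`, and the queue URL is hypothetical:

```python
# Illustrative parameters for an SQS ReceiveMessage long poll.
receive_params = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/video-jobs",
    "MaxNumberOfMessages": 10,
    "WaitTimeSeconds": 20,  # long polling: hold the request open up to 20 s (the maximum)
}
print(receive_params["WaitTimeSeconds"])
```

Setting `WaitTimeSeconds` to 0 would be short polling: each request returns immediately, even when the queue is empty.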
Reference: Amazon SQS Long Polling

Top

Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With the new Lambda function working as expected, which of the following will shift traffic from the original Lambda function to the new Lambda function in the shortest time frame?

  • A. Canary10Percent5Minutes
  • B. Linear10PercentEvery10Minutes
  • C. Canary10Percent15Minutes
  • D. Linear10PercentEvery1Minute


Answer – A
With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment and the remaining 90 percent is shifted after 5 minutes.
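In a SAM template this is configured on the function resource; `AutoPublishAlias` and `DeploymentPreference` are real `AWS::Serverless::Function` properties, and the fragment below is a sketch with the other properties elided:

```yaml
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    # ... Handler, Runtime, CodeUri, etc. ...
    AutoPublishAlias: live
    DeploymentPreference:
      Type: Canary10Percent5Minutes
```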
Reference: Gradual Code Deployment

Top

Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::LayerVersion
  • D. AWS::Serverless::Function


Answer – B
The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket.
Reference: Declaring Serverless Resources

Top

Q35: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the followings is True with regards to Envelope Encryption?

  • A. Data is encrypted by an encrypted Data key which is further encrypted using an encrypted Master Key.
  • B. Data is encrypted by plaintext Data key which is further encrypted using encrypted Master Key.
  • C. Data is encrypted by encrypted Data key which is further encrypted using plaintext Master Key.
  • D. Data is encrypted by plaintext Data key which is further encrypted using plaintext Master Key.


Answer – D
With envelope encryption, unencrypted data is encrypted using a plaintext data key. That data key is then encrypted using a plaintext master key. The plaintext master key is securely stored in AWS KMS and is known as a Customer Master Key.
Reference: AWS Key Management Service Concepts

Top

 

Q36: You are developing an application that will be comprised of the following architecture –

  1. A set of Ec2 instances to process the videos.
  2. These (Ec2 instances) will be spun up by an autoscaling group.
  3. SQS Queues to maintain the processing messages.
  4. There will be 2 pricing tiers.

How will you ensure that the premium customers’ videos are given more preference?

  • A. Create 2 Autoscaling Groups, one for normal and one for premium customers
  • B. Create 2 set of Ec2 Instances, one for normal and one for premium customers
  • C. Create 2 SQS queues, one for normal and one for premium customers
  • D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.


Answer – C
The ideal option would be to create 2 SQS queues. Messages can then be processed by the application from the high-priority queue first. The other options are not ideal: they would lead to extra costs and extra maintenance.
Reference: SQS

Top

Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?

  • A. CustomerID
  • B. CustomerName
  • C. Location
  • D. Age


Answer – A
Use high-cardinality attributes, i.e. attributes that have distinct values for each item, such as email ID, employee number, customer ID, session ID, or order ID.
You can also use composite attributes: combine more than one attribute to form a unique key.
Reference: Choosing the right DynamoDB Partition Key

Top

Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for network delays. Which of the following two mechanisms should he implement in the application?

  • A. Multiple SQS queues
  • B. Exponential backoff algorithm
  • C. Retries in your application code
  • D. Consider using the Java SDK.


Answer- B. and C.
In addition to simple retries, each AWS SDK implements an exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval as well as a maximum number of retries. These are not necessarily fixed values and should be set based on the operation being performed, as well as other local factors such as network latency.
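A minimal sketch of the idea, with a capped doubling delay and a stand-in `flaky_call` for any AWS API call that can throttle (the sleep is left as a comment so the sketch runs instantly):

```python
# Exponential backoff sketch: retries with progressively longer, capped waits.
def backoff_delays(base=0.1, cap=5.0, max_retries=6):
    """Delay (seconds) before each retry, doubling up to a maximum."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(max_retries)]

def call_with_backoff(operation, max_retries=6):
    for delay in backoff_delays(max_retries=max_retries):
        try:
            return operation()
        except RuntimeError:
            pass  # real code would: time.sleep(delay + random.uniform(0, delay))  # jitter
    raise RuntimeError("operation failed after retries")

attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("throttled")  # simulate two throttled responses
    return "ok"

print(call_with_backoff(flaky_call))  # ok (succeeds on the third attempt)
```

Adding random jitter to each wait, as in the comment, spreads retries out so many clients do not retry in lockstep.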
Reference: Error Retries and Exponential Backoff in AWS

Top

 

Q39: An application is being developed that is going to write data to a DynamoDB table. You have to setup the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds. Each item is of size 6KB. The reads can be eventual consistent reads. What should be the read capacity that needs to be set on the table?

  • A. 10
  • B. 20
  • C. 6
  • D. 30


Answer – A

Since 300 items are read every 30 seconds, that is 300/30 = 10 items read per second.
Each item is 6 KB, and one read capacity unit covers up to 4 KB, so each item requires 2 reads.
That gives a total of 2 x 10 = 20 reads per second.
Since eventually consistent reads are sufficient, divide 20 by 2 to get a read capacity of 10.
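The same arithmetic, written out:

```python
# 300 items per 30 s, 6 KB items, eventually consistent reads.
import math

items_per_sec = 300 / 30                       # 10 items read per second
reads_per_item = math.ceil(6 / 4)              # one strong RCU covers 4 KB -> 2 reads
strong_rcus = items_per_sec * reads_per_item   # 20 strongly consistent reads/sec
rcus = math.ceil(strong_rcus / 2)              # eventually consistent: half -> 10
print(rcus)  # 10
```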

Reference: Read/Write Capacity Mode


Top

Q40: You are in charge of deploying an application that will be hosted on an EC2 instance and sit behind an Elastic Load Balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the below options can satisfy this requirement?

  • A. Use AWS CloudTrail with your load balancer
  • B. Enable access logs on the load balancer
  • C. Use a CloudWatch Logs Agent
  • D. Create a custom metric CloudWatch filter on your load balancer


Answer – B
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.
Reference: Access Logs for Your Application Load Balancer

Top

Q41: A static website has been hosted in a bucket and is now being accessed by users. The JavaScript section of one of the web pages has been changed to access data hosted in another S3 bucket. Now that web page no longer loads in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket


Answer – C

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.
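For Scenario 1, the bucket's CORS configuration might look like the following sketch. The origin is the example value from the scenario, and the boto3 call is shown only as a comment:

```python
# Illustrative CORS rules for the scenario above; values are examples only.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
            "AllowedMethods": ["GET", "PUT"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }
    ]
}

# With boto3 this would be applied as:
# boto3.client("s3").put_bucket_cors(Bucket="website",
#                                    CORSConfiguration=cors_configuration)
print(cors_configuration["CORSRules"][0]["AllowedMethods"])
```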

Reference: Cross-Origin Resource Sharing (CORS)


Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below.

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.


Answer – C
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on key values generated from the user ID.

Reference: The AWS Security Token Service (STS)


Top

Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.

  • A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
  • B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
  • C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
  • D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.

Answer – C
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data.
Reference: Amazon Kinesis

Top

 

Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?

  • A. AWS Simple Storage Service
  • B. AWS DynamoDB
  • C. AWS RDS
  • D. AWS Redshift

Answer – B
DynamoDB is an ideal solution for storing session data. It provides low-latency access to data, which makes it well suited as a data store for session management.
Reference: Scalable Session Handling in PHP Using Amazon DynamoDB

Top

Q45: Your application currently interacts with a DynamoDB table. Records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the below features should be used when developing such a solution?

  • A. AWS DynamoDB Encryption
  • B. AWS DynamoDB Streams
  • C. AWS DynamoDB Accelerator
  • D. AWS Table Accelerator


Answer – B
DynamoDB Streams Use Cases and Design Patterns This post describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. We will consider how to manage the following scenarios:

  • How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
  • How do you trigger an event based on a particular transaction?
  • How do you audit or archive transactions?
  • How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?

Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states: success or failure. In other words, there is no partial completion. As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it's consistent and can be rolled back before commit.

You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.

AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region.

All of the other options are incorrect since none of them would meet the core requirement.
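A minimal sketch of the pattern the answer describes: a Lambda handler fed by the primary table's stream that writes each inserted or modified item to a secondary table. The event shape follows DynamoDB Streams records; the injectable `put_item` callable is an assumption standing in for a real boto3 `Table.put_item` so the sketch stays self-contained:

```python
def handler(event, context=None, put_item=print):
    """Process DynamoDB Streams records. `put_item` would wrap the
    secondary table's boto3 put_item in production; here it is
    injectable so the sketch can be exercised without AWS."""
    written = 0
    for record in event.get("Records", []):
        if record.get("eventName") in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"]["NewImage"]
            put_item(new_image)   # copy the changed item to the secondary table
            written += 1
    return written
```

Wiring the primary table's stream to this handler (stream ARN as the event source mapping) completes the solution.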
Reference: DynamoDB Streams Use Cases and Design Patterns


Top

 

Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and the scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?

  • A. Large Page size
  • B. Reduced page size
  • C. Parallel Scans
  • D. Sequential scans

Answer – B
When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table's provisioned throughput.

Reduce page size: because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a "pause" between each request. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling.
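The reduced-page-size technique can be sketched as a paginator. Here `scan_page` is a stand-in for a real DynamoDB Scan call (e.g. boto3's `table.scan`, which would reject a `None` start key; that simplification is for the sketch only). The `Limit` and `ExclusiveStartKey` parameters mirror the real API:

```python
import time

def rate_limited_scan(scan_page, limit=40, pause_seconds=0.0):
    """Scan a table in small pages, pausing between requests to
    smooth out the read load. `scan_page` must return a dict with
    'Items' and, while more pages remain, 'LastEvaluatedKey'."""
    items, start_key = [], None
    while True:
        page = scan_page(Limit=limit, ExclusiveStartKey=start_key)
        items.extend(page.get("Items", []))
        start_key = page.get("LastEvaluatedKey")
        if start_key is None:
            return items
        time.sleep(pause_seconds)   # the "pause" between page requests
```

Smaller `limit` values mean more, cheaper requests, leaving throughput headroom for other callers.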
Reference1: Rate-Limited Scans in Amazon DynamoDB

Reference2: Best Practices for Querying and Scanning Data


Top

 

Q47: Which of the following are correct ways of passing a stage variable to an HTTP URL? (Select TWO.)

  • A. http://example.com/${}/prod
  • B. http://example.com/${stageVariables.}/prod
  • C. http://${stageVariables.}.example.com/dev/operation
  • D. http://${stageVariables}.example.com/dev/operation
  • E. http://${}.example.com/dev/operation
  • F. http://example.com/${stageVariables}/prod


Answer – B. and C.
A stage variable can be used as part of an HTTP integration URL in the following cases:

  • A full URI without protocol
  • A full domain
  • A subdomain
  • A path
  • A query string

In the above case, option B uses the stage variable as a path and option C uses it as a subdomain.
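To see why options B and C are well-formed, here is a toy substitution mimicking what API Gateway does with `${stageVariables.name}` placeholders. The variable names `version` and `env` are made up for illustration, since the options elide the variable name:

```python
import re

def resolve_stage_variables(url_template, stage_variables):
    """Replace ${stageVariables.name} placeholders in an HTTP
    integration URI with their configured values (illustrative only)."""
    return re.sub(
        r"\$\{stageVariables\.(\w+)\}",
        lambda m: stage_variables[m.group(1)],
        url_template,
    )

# Option B style (path) and option C style (subdomain):
print(resolve_stage_variables(
    "http://example.com/${stageVariables.version}/prod", {"version": "v2"}))
# -> http://example.com/v2/prod
```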
Reference: Amazon API Gateway Stage Variables Reference

Top

Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes which they use for their on-premise configuration for servers in AWS. Which of the following service would be ideal to use in this regard?

  • A. AWS Elastic Beanstalk
  • B. AWS OpsWorks
  • C. AWS Cloudformation
  • D. AWS SQS


Answer – B
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management.
Reference: AWS OpsWorks

Top

 

Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?

  • A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
  • B. Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access.
  • C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
  • D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.


Answer – C
With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application.

Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be exposed in the application or bucket configuration.
Reference: About Web Identity Federation

Top

Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?

  • A. Cognito Data
  • B. Cognito Events
  • C. Cognito Streams
  • D. Cognito Callbacks


Answer – C
Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can now configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams
Reference:

Top

 

Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? Choose 2 answers from the options given below.

  • A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
  • B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
  • C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
  • D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function


Answer: A and C.
AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC.
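Concretely, answers A and C correspond to the `VpcConfig` block of Lambda's configuration APIs. The subnet and security group IDs below are placeholders:

```python
# Hedged sketch: the VpcConfig structure passed to Lambda's
# CreateFunction / UpdateFunctionConfiguration APIs. IDs are placeholders.
vpc_config = {
    "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],   # answer A
    "SecurityGroupIds": ["sg-0123456789abcdef0"],          # answer C
}

# With boto3 this would be supplied as:
# boto3.client("lambda").update_function_configuration(
#     FunctionName="my-function", VpcConfig=vpc_config)
print(sorted(vpc_config))
```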
Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC

Top

 

Q52: You’ve currently been tasked to migrate an existing on-premises environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant environments in the Beanstalk service that would be suitable to host your application. What should you consider doing in this case?

  • A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
  • B. Consider using Cloudformation to deploy your environment to Elastic Beanstalk
  • C. Consider using Packer to create a custom platform
  • D. Consider deploying your application using the Elastic Container Service


Answer – C
Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a Custom Image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software, for which Elastic Beanstalk doesn’t provide a platform out of the box.

Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance.

To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems—Ubuntu, RHEL, or Amazon Linux (see the flavor entry in Platform.yaml File Format for the exact version numbers)—and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings.
Reference: AWS Elastic Beanstalk Custom Platforms

Top

Q53: Company B is writing 10 items to the DynamoDB table every second. Each item is 15.5 KB in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.

  • A. 10
  • B. 160
  • C. 155
  • D. 16


Answer – B.
One write capacity unit represents one write per second for an item up to 1 KB in size; larger items are rounded up to the next whole KB. Each 15.5 KB item therefore consumes 16 write capacity units, and 10 items per second require 16 × 10 = 160 units.
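The write-capacity arithmetic can be sketched as a helper mirroring the read-capacity one. This is an illustrative sizing rule, not an AWS API:

```python
import math

def write_capacity_units(items_per_second: int, item_size_kb: float) -> int:
    """One WCU = one 1 KB write per second; item size rounds up
    to the next whole KB."""
    return items_per_second * math.ceil(item_size_kb)

# Q53: 10 items per second, 15.5 KB each -> 16 KB rounded -> 160 WCUs
print(write_capacity_units(10, 15.5))  # -> 160
```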
Reference: Read/Write Capacity Mode

Top


Q54: Which AWS Service can be used to automatically install your application code onto EC2, on premises systems and Lambda?

  • A. CodeCommit
  • B. X-Ray
  • C. CodeBuild
  • D. CodeDeploy


Answer: D

Reference: AWS CodeDeploy


Top

 

Q55: Which AWS service can be used to compile source code, run tests and package code?

  • A. CodePipeline
  • B. CodeCommit
  • C. CodeBuild
  • D. CodeDeploy


Answer: C.

Reference: AWS CodeBuild


Top

Q56: How can you prevent CloudFormation from deleting your entire stack on failure? (Choose 2)

  • A. Set the Rollback on failure radio button to No in the CloudFormation console
  • B. Set Termination Protection to Enabled in the CloudFormation console
  • C. Use the –disable-rollback flag with the AWS CLI
  • D. Use the –enable-termination-protection protection flag with the AWS CLI

Answer: A. and C.

Reference: Protecting a Stack From Being Deleted

Top

Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?

  • A. Continuous Integration
  • B. Continuous Deployment
  • C. Continuous Delivery
  • D. Continuous Development

Answer: A.
Continuous Integration is the practice of frequently merging code changes into a shared repository, with automated builds and tests identifying bugs early in the release process.

Top

Q58: When deploying application code to EC2, the AppSpec file can be written in which language?

  • A. JSON
  • B. JSON or YAML
  • C. XML
  • D. YAML

Answer: D.
For EC2/On-Premises deployments, the AppSpec file must be written in YAML. (JSON-formatted AppSpec files are supported only for AWS Lambda and Amazon ECS deployments.)

Top

 

Q59: Part of your CloudFormation deployment fails due to a misconfiguration. By default, what will happen?

  • A. CloudFormation will rollback only the failed components
  • B. CloudFormation will rollback the entire stack
  • C. Failed component will remain available for debugging purposes
  • D. CloudFormation will ask you if you want to continue with the deployment

Answer: B.
By default, CloudFormation rolls back the entire stack when any resource fails to create.


Top

Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?

  • A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
  • B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
  • C. Configure Notifications in the console; this will create a CloudWatch Events rule that sends a notification to an SNS topic, which will trigger an email to the user.
  • D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.

Answer: C

Reference: Getting Started with Amazon SNS


Top

Q61: Which AWS service can be used to centrally store and version control your application source code, binaries and libraries

  • A. CodeCommit
  • B. CodeBuild
  • C. CodePipeline
  • D. ElasticFileSystem

Answer: A

Reference: AWS CodeCommit


Top

 

Q62: You are using CloudFormation to create a new S3 bucket, which of the following sections would you use to define the properties of your bucket?

  • A. Conditions
  • B. Parameters
  • C. Outputs
  • D. Resources

Answer: D

Reference: Resources


Top

Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?

  • A. Transforms
  • B. Outputs
  • C. Resources
  • D. Instances

Answer: C.
The Resources section defines the resources you are provisioning. Outputs is used to return user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3.
Reference: Resources

Top

Q64: Which AWS service can be used to fully automate your entire release process?

  • A. CodeDeploy
  • B. CodePipeline
  • C. CodeCommit
  • D. CodeBuild

Answer: B.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.

Reference: AWS CodePipeline


Top

 

Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?

  • A. Outputs
  • B. Transforms
  • C. Resources
  • D. Exports

Answer: A.
Outputs is used to return user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack.
Reference: CloudFormation Outputs

Top

 

Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?

  • A. Inputs
  • B. Resources
  • C. Transforms
  • D. Files

Answer: C.
Transforms is used to reference code located in S3 and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments.
Reference: Transforms

Top

Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?

  • A. buildspec.yml
  • B. appspec.json
  • C. appspec.yml
  • D. buildspec.json

Answer: C.
For EC2/On-Premises deployments with CodeDeploy, the file is named appspec.yml. (buildspec.yml is the file used by CodeBuild.)

Top

 

Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?

  • A. Share the code using an EBS volume
  • B. Copy and paste the code into the template each time you need to use it
  • C. Use a cloudformation nested stack
  • D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.

Answer: C.

Reference: Working with Nested Stacks

Top

Q69: In the CodeDeploy AppSpec file, what are hooks used for?

  • A. To reference AWS resources that will be used during the deployment
  • B. Hooks are reserved for future use
  • C. To specify files you want to copy during the deployment.
  • D. To specify, scripts or function that you want to run at set points in the deployment lifecycle

Answer: D.
The ‘hooks’ section for an EC2/On-Premises deployment contains mappings that link deployment lifecycle event hooks to one or more scripts.

Reference: AppSpec ‘hooks’ Section

Top

 

Q70: Which command can you use to encrypt a plain text file using a CMK?

  • A. aws kms-encrypt
  • B. aws iam encrypt
  • C. aws kms encrypt
  • D. aws encrypt

Answer: C.
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext fileb://ExamplePlaintextFile --output text --query CiphertextBlob > C:\Temp\ExampleEncryptedFile.base64

Reference: AWS CLI Encrypt

Top

Q72: Which of the following is an encrypted key used by KMS to encrypt your data

  • A. Customer Managed Key
  • B. Encryption Key
  • C. Envelope Key
  • D. Customer Master Key

Answer: C.
Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption.
Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.
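The envelope pattern can be illustrated with a deliberately toy cipher. XOR is NOT real cryptography; it only makes the key relationships (data key encrypts the data, master key encrypts the data key) visible with the standard library alone:

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a cipher -- do not use for real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

master_key = os.urandom(32)      # in real life this never leaves KMS
data_key = os.urandom(32)        # plaintext data key

# Encrypt the data with the data key, then the data key with the master key.
ciphertext = xor(b"sensitive record", data_key)
encrypted_data_key = xor(data_key, master_key)

# To decrypt, first recover the data key, then the data.
recovered_key = xor(encrypted_data_key, master_key)
plaintext = xor(ciphertext, recovered_key)
print(plaintext)  # b'sensitive record'
```

You store `ciphertext` and `encrypted_data_key` together; only the master key holder can unwrap the data key.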

Reference: Envelope Encryption

Top

 

Q73: Which of the following statements are correct? (Choose 2)

  • A. The Customer Master Key is used to encrypt and decrypt the Envelope Key or Data Key
  • B. The Envelope Key or Data Key is used to encrypt and decrypt plain text files.
  • C. The envelope Key or Data Key is used to encrypt and decrypt the Customer Master Key.
  • D. The Customer MasterKey is used to encrypt and decrypt plain text files.

Answer: A. and B.

Reference: AWS Key Management Service Concepts

Top

 
 

Q74: Which of the following statements are correct in relation to KMS? (Choose 2)

  • A. KMS Encryption keys are regional
  • B. You cannot export your customer master key
  • C. You can export your customer master key.
  • D. KMS encryption Keys are global

Answer: A. and B.

Reference: AWS Key Management Service FAQs

Q75:  A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.)
A. Compiled application code
B. Java runtime environment
C. References to the event sources
D. Lambda execution role
E. Application dependencies


Answer: A. E.
Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your code and any dependencies.
Reference: Lambda deployment packages.

Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package?
A. A launch template for the Amazon EC2 Auto Scaling group
B. A CodeDeploy AppSpec file
C. An EC2 role that grants the application access to AWS services
D. An IAM policy that grants the application access to AWS services


Answer: B.
Notes: The CodeDeploy AppSpec (application specific) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file.
Reference: CodeDeploy application specification (AppSpec) files.
Category: Deployment

Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)

A. Create a new Lambda version every time a new code release needs testing.
B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version.
C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT.
D. Create a new Lambda layer every time a new code release needs testing.
E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.


Answer: A. B.
Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes, without affecting the stable production version of the code. By creating separate aliases for Production and Development, callers can invoke the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. $LATEST refers to the latest, unpublished version of the function code.
Reference: Lambda function versions.

For more information about Lambda layers, see Creating and sharing Lambda layers.

For more information about Lambda function aliases, see Lambda function aliases.

Category: Deployment

Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.)
A. Update event source mappings with the ARN of the Lambda layer.
B. Point a Lambda alias to a new version of the Lambda function.
C. Create a Lambda alias for each published version of the Lambda function.
D. Point a Lambda alias to a new Lambda function alias.
E. Update the event source mappings with the Lambda alias ARN.


Answer: B. E.
Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version.
Reference: Lambda function aliases.
Category: Deployment

Q78:  A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements?
A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C).
B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket.
C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket.
D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.


Answer: D.
Notes: When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and removes the encryption key from memory.
Reference: Protecting data using server-side encryption with customer-provided encryption keys (SSE-C).

Category: Security
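As a sketch of what SSE-C requires on each request: the client supplies a 256-bit key, base64-encoded, along with its MD5 checksum; S3 uses the key for AES-256 encryption and does not store it. The header names below are the real `x-amz-server-side-encryption-customer-*` headers S3 expects; the key itself is freshly generated here for illustration:

```python
import base64
import hashlib
import secrets

# Customer-managed 256-bit key (32 bytes) that S3 will use but never store.
key = secrets.token_bytes(32)

# Headers an SSE-C upload request must carry.
headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
    "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
        hashlib.md5(key).digest()
    ).decode(),
}
```

The same key (and headers) must be supplied again on every GET, since S3 retains only a salted hash of the key for validation.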

Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)

A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS).
B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS).
C. Use generated keys with the DynamoDB Encryption Client.
D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs).
E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).


Answer: A. C.
Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, its Direct KMS Materials Provider uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS, and it returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK.
Reference: Direct KMS Materials Provider.
Category: Security

Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.)
A. Create an AWS Lambda authorizer for the API.
B. Create an Amazon Cognito authorizer for the API.
C. Configure the authorizer for the API resource.
D. Configure the API methods to use the authorizer.
E. Configure the authorizer for the API stage.


Answer: B. D.
Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API.
Reference: Control access to a REST API using Amazon Cognito user pools as authorizer.
Category: Security

Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.)
A. Authenticate to the Amazon Cognito identity pool directly.
B. Authenticate to AWS Identity and Access Management (IAM) directly.
C. Authenticate to the Amazon Cognito user pool directly.
D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS).
E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.


Answer: C. E.
Notes: The Amazon Cognito user pool provides direct user authentication. It also supports federated sign-in through third-party identity providers (IdPs), including Login with Amazon.
Reference: Adding User Pool Sign-in Through a Third Party.
Category: Security

 
 

Q82: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.)
A. Define an AWS Step Functions task for each Lambda function.
B. Define an AWS Step Functions task for each workflow.
C. Write code that polls the AWS Step Functions invocation to coordinate each workflow.
D. Define an AWS Step Functions state machine for each workflow.
E. Define an AWS Step Functions state machine for each Lambda function.


Answer: A. D.
Notes: Step Functions is based on state machines and tasks. A state machine is a workflow: it expresses a number of states, their relationships, and their input and output. Tasks perform work by coordinating with other AWS services, such as Lambda. You coordinate individual tasks by expressing each workflow as a finite state machine, written in the Amazon States Language.
Reference: Getting Started with AWS Step Functions.

Category: Development
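A minimal Amazon States Language definition illustrates the mapping: one state machine per workflow, one Task state per Lambda function. The workflow name, state names, and function ARNs below are hypothetical placeholders:

```python
import json

# One state machine = one workflow; each Task state wraps one Lambda function.
state_machine = {
    "Comment": "Order processing workflow (illustrative)",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
            "Next": "ChargePayment",
        },
        "ChargePayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-payment",
            "End": True,
        },
    },
}

# This JSON document is what you would pass as the state machine definition.
definition = json.dumps(state_machine)
```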

Q83: A company is migrating a web service to the AWS Cloud. The web service accepts requests by using HTTP (port 80). The company wants to use an AWS Lambda function to process HTTP requests. Which application design will satisfy these requirements?
A. Create an Amazon API Gateway API. Configure proxy integration with the Lambda function.
B. Create an Amazon API Gateway API. Configure non-proxy integration with the Lambda function.
C. Configure the Lambda function to listen to inbound network connections on port 80.
D. Configure the Lambda function as a target in the Application Load Balancer target group.


Answer: D.
Notes: Elastic Load Balancing supports Lambda functions as a target for an Application Load Balancer. You can use load balancer rules to route HTTP requests to a function, based on the path or the header values. Then, process the request and return an HTTP response from your Lambda function.
Reference: Using AWS Lambda with an Application Load Balancer.
Category: Development

Q84: A company is developing an image processing application. When an image is uploaded to an Amazon S3 bucket, a number of independent and separate services must be invoked to process the image. The services do not have to be available immediately, but they must process every image. Which application design satisfies these requirements?
A. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Each service pulls the message from the same queue.
B. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Each service subscribes to the same topic.
C. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe a separate Amazon Simple Notification Service (Amazon SNS) topic for each service to an Amazon SQS queue.
D. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe a separate Simple Queue Service (Amazon SQS) queue for each service to the Amazon SNS topic.


Answer: D.
Notes: Each service can subscribe to an individual Amazon SQS queue, which receives an event notification from the Amazon SNS topic. This is a fanout architectural implementation.
Reference: Common Amazon SNS scenarios.
Category: Development
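In the fanout design, each service's SQS queue needs an access policy that lets the SNS topic deliver to it. A sketch of that policy as a plain document follows; the topic and queue ARNs are hypothetical placeholders:

```python
import json

# Hypothetical ARNs for one service's queue and the shared topic.
topic_arn = "arn:aws:sns:us-east-1:123456789012:image-uploaded"
queue_arn = "arn:aws:sqs:us-east-1:123456789012:thumbnail-service"

# Queue access policy: allow SNS to send messages, but only from this topic.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }
    ],
}
policy_json = json.dumps(policy)
```

Because each service owns its queue, a slow or unavailable service simply accumulates messages until it catches up, which satisfies the "not immediately available, but must process every image" requirement.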

Q85: A developer wants to implement Amazon EC2 Auto Scaling for a Multi-AZ web application. However, the developer is concerned that user sessions will be lost during scale-in events. How can the developer store the session state and share it across the EC2 instances?
A. Write the sessions to an Amazon Kinesis data stream. Configure the application to poll the stream.
B. Publish the sessions to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe each instance in the group to the topic.
C. Store the sessions in an Amazon ElastiCache for Memcached cluster. Configure the application to use the Memcached API.
D. Write the sessions to an Amazon Elastic Block Store (Amazon EBS) volume. Mount the volume to each instance in the group.


Answer: C.
Notes: ElastiCache for Memcached is a distributed in-memory data store or cache environment in the cloud. It gives every instance in the group fast, shared access to session state that does not depend on any single EC2 instance, so sessions survive scale-in events.
Reference: What is Amazon ElastiCache for Memcached?

Category: Development

 
 
 

Q86: A developer is integrating a legacy web application that runs on a fleet of Amazon EC2 instances with an Amazon DynamoDB table. There is no AWS SDK for the programming language that was used to implement the web application. Which combination of steps should the developer perform to make an API call to Amazon DynamoDB from the instances? (Select TWO.)
A. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include an XML document that contains the request attributes.
B. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include a JSON document that contains the request attributes.
C. Sign the requests by using AWS access keys and Signature Version 4.
D. Use an EC2 SSH key to calculate Signature Version 4 of the request.
E. Provide the signature value through the HTTP X-API-Key header.


Answer: B. C.
Notes: The HTTPS-based low-level AWS API for DynamoDB uses JSON as a wire protocol format. When you send HTTP requests to AWS, you sign the requests so that AWS can identify who sent them. Requests are signed with your AWS access key, which consists of an access key ID and secret access key. AWS supports two signature versions: Signature Version 4 and Signature Version 2. AWS recommends the use of Signature Version 4.
Reference: Signing AWS API requests.
Category: Development
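The heart of Signature Version 4 is the signing-key derivation, which can be done with the standard library alone: the secret key is chained through the date, Region, and service via HMAC-SHA256 before signing the string to sign. This sketch follows the documented SigV4 derivation; the credential values are illustrative placeholders:

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    """One HMAC-SHA256 step in the SigV4 key-derivation chain."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def signature_v4_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: AWS4+secret -> date -> region -> service."""
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Placeholder credentials; the real signing key would use your secret access key.
signing_key = signature_v4_key("EXAMPLESECRETKEY", "20240115", "us-east-1", "dynamodb")
```

The resulting key then signs the canonical "string to sign", and the hex signature is carried in the `Authorization` header, not in an `X-API-Key` header (which is why options D and E are wrong).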

Q87: A developer has written several custom applications that read and write to the same Amazon DynamoDB table. Each time the data in the DynamoDB table is modified, this change should be sent to an external API. Which combination of steps should the developer perform to accomplish this task? (Select TWO.)
A. Configure an AWS Lambda function to poll the stream and call the external API.
B. Configure an event in Amazon EventBridge (Amazon CloudWatch Events) that publishes the change to an Amazon Managed Streaming for Apache Kafka (Amazon MSK) data stream.
C. Create a trigger in the DynamoDB table to publish the change to an Amazon Kinesis data stream.
D. Deliver the stream to an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the API to the topic.
E. Enable DynamoDB Streams on the table.


Answer: A. E.
Notes: If you enable DynamoDB Streams on a table, you can associate the stream's Amazon Resource Name (ARN) with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. Lambda polls the stream and invokes your function synchronously when it detects new stream records.
Reference: Tutorial: Process New Items with DynamoDB Streams and Lambda.
Category: Monitoring
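A minimal sketch of the stream-processing handler: the event payload shape (`Records`, `eventName`, `dynamodb.Keys`/`NewImage`) is the documented DynamoDB Streams format, while `forward_to_api` stands in for whatever client calls the external API:

```python
def handler(event, context, forward_to_api=print):
    """Lambda handler invoked with batches of DynamoDB stream records."""
    for record in event.get("Records", []):
        if record.get("eventSource") != "aws:dynamodb":
            continue
        change = {
            "event": record["eventName"],            # INSERT / MODIFY / REMOVE
            "keys": record["dynamodb"].get("Keys"),
            "new_image": record["dynamodb"].get("NewImage"),
        }
        forward_to_api(change)  # placeholder for the external API call
```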

 
 
 

Q88: A company is migrating the create, read, update, and delete (CRUD) functionality of an existing Java web application to AWS Lambda. Which minimal code refactoring is necessary for the CRUD operations to run in the Lambda function?
A. Implement a Lambda handler function.
B. Import an AWS X-Ray package.
C. Rewrite the application code in Python.
D. Add a reference to the Lambda execution role.


Answer: A.
Notes: Every Lambda function needs a Lambda-specific handler. Specifics of authoring vary between runtimes, but all runtimes share a common programming model that defines the interface between your code and the runtime code. You tell the runtime which method to run by defining a handler in the function configuration. The runtime runs that method. Next, the runtime passes in objects to the handler that contain the invocation event and context, such as the function name and request ID.
Reference: Getting started with Lambda.
Category: Refactoring
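The refactoring really is that small: expose an entry point with the `(event, context)` handler signature and route to the existing CRUD logic. The routing below is an illustrative sketch, not the company's actual code:

```python
def lambda_handler(event, context):
    """Entry point Lambda invokes; event carries the invocation payload."""
    operation = event.get("operation")  # e.g. "create", "read", "update", "delete"
    if operation == "read":
        # Placeholder for the existing read logic.
        return {"statusCode": 200, "body": {"id": event.get("id")}}
    return {"statusCode": 400, "body": f"unsupported operation: {operation}"}
```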

Top

Q89: A company plans to use AWS log monitoring services to monitor an application that runs on premises. Currently, the application runs on a recent version of Ubuntu Server and outputs the logs to a local file. Which combination of steps should a developer perform to accomplish this goal? (Select TWO.)
A. Update the application code to include calls to the agent API for log collection.
B. Install the Amazon Elastic Container Service (Amazon ECS) container agent on the server.
C. Install the unified Amazon CloudWatch agent on the server.
D. Configure the long-term AWS credentials on the server to enable log collection by the agent.
E. Attach an IAM role to the server to enable log collection by the agent.


Answer: C. D.
Notes: The unified CloudWatch agent must be installed on the server; recent Ubuntu Server releases are among the supported operating systems. Because an IAM role cannot be attached to an on-premises server, you instead configure the agent with a named credentials profile containing the long-term credentials of an IAM user.
Reference: Collecting metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent.
Category: Monitoring

Q90: A developer wants to monitor invocations of an AWS Lambda function by using Amazon CloudWatch Logs. The developer added a number of print statements to the function code that write the logging information to the stdout stream. After running the function, the developer does not see any log data being generated. Why does the log data NOT appear in the CloudWatch logs?
A. The log data is not written to the stderr stream.
B. Lambda function logging is not automatically enabled.
C. The execution role for the Lambda function did not grant permissions to write log data to CloudWatch Logs.
D. The Lambda function outputs the logs to an Amazon S3 bucket.


Answer: C.
Notes: The function needs permission to call CloudWatch Logs. Update the execution role to grant the permission. You can use the managed policy of AWSLambdaBasicExecutionRole.
Reference: Troubleshoot execution issues in Lambda.
Category: Monitoring
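The permissions in question are the ones the AWSLambdaBasicExecutionRole managed policy grants; attaching an equivalent inline policy to the execution role also fixes the missing logs. The actions below are the real CloudWatch Logs actions that policy contains:

```python
import json

# Equivalent of the AWSLambdaBasicExecutionRole managed policy.
log_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "*",
        }
    ],
}
policy_json = json.dumps(log_policy)
```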

Q91: Which of the following are best practices you should implement into ongoing deployments of your application? (Select THREE.)

A. Use stage variables to manage secrets across environments
B. Create account-specific AWS SAM templates for each environment
C. Use an AutoPublish alias
D. Use traffic shifting with pre- and post-deployment hooks
E. Test throughout the pipeline


Answer: C. D. E.
Notes: Use an AutoPublish alias, use traffic shifting with pre- and post-deployment hooks, and test throughout the pipeline.
Reference: https://enoumen.com/2019/06/23/aws-solution-architect-associate-exam-prep-facts-and-summaries-questions-and-answers-dump/

Q92: You are handing off maintenance of your new serverless application to an incoming team lead. Which recommendations would you make? (Select THREE.)

A. Keep up to date with the quotas and payload sizes for each AWS service you are using

B. Analyze production access patterns to identify potential improvements

C. Design your services to extend their life as long as possible

D. Minimize changes to your production application

E. Compare the value of using the latest first-class integrations versus using Lambda between AWS services


Answer: A. B. D.
Notes: Keep up to date with the quotas and payload sizes for each AWS service you are using. Analyze production access patterns to identify potential improvements. Minimize changes to your production application.

Reference: https://enoumen.com/2019/06/23/aws-solution-architect-associate-exam-prep-facts-and-summaries-questions-and-answers-dump/

Q94: Your application needs to connect to an Amazon RDS instance on the backend. What is the best recommendation to the developer whose function must read from and write to the Amazon RDS instance?

A. Initialize the number of connections you want outside of the handler

B. Use the database TTL setting to clean up connections

C. Use reserved concurrency to limit the number of concurrent functions that would try to write to the database

D. Use the database proxy feature to provide connection pooling for the functions


Answer: D.
Notes: Use the database proxy feature to provide connection pooling for the functions

Reference: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/automating-updates-to-serverless-apps.html

 

Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 
 

Q95: A developer reports that a third-party library they need cannot be shared in the Lambda invocation environment. Which suggestion would you make?

A. Decrease the deployment package size

B. Set a provisioned concurrency of one so that the library doesn’t need to be shared across environments

C. Use reserved concurrency for the function that needs to use the library

D. Load the third-party library onto an Amazon EFS volume


Answer: D
Notes: Load the third-party library onto an Amazon EFS volume

Reference: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/automating-updates-to-serverless-apps.html

AWS Certified Developer Associate exam: Whitepapers

AWS has provided whitepapers to help you understand the technical concepts. Below are the recommended whitepapers for the AWS Certified Developer – Associate Exam.

Top

 

Online Training and Labs for AWS Certified Developer Associates Exam

The Cloud is the future: Get Certified now.
The AWS Certified Solution Architect Average Salary is: US $149,446/year. Get Certified with the App below:

 

Top

AWS Developer Associates Jobs

Top

AWS Certified Developer-Associate Exam info and details, How To:


 

The AWS Certified Developer Associate exam is a multiple choice, multiple answer exam. Here is the Exam Overview:

  • Certification Name: AWS Certified Developer Associate.
  • Prerequisites for the Exam: None.
  • Exam Pattern: Multiple Choice Questions
  • The AWS Certified Developer-Associate Examination (DVA-C01) is a pass or fail exam. The examination is scored against a minimum standard established by AWS professionals guided by certification industry best practices and guidelines.
  • Your results for the examination are reported as a score from 100 – 1000, with a minimum passing score of 720.
  • Exam fees: US $150
  • Exam Guide on AWS Website
  • Available languages for tests: English, Japanese, Korean, Simplified Chinese
  • Read AWS whitepapers
  • Register for certification account here.
  • Prepare for Certification Here
  • Exam Content Outline

    Domain — % of Examination
    Domain 1: Deployment (22%)
    1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
    1.2 Deploy applications using Elastic Beanstalk.
    1.3 Prepare the application deployment package to be deployed to AWS.
    1.4 Deploy serverless applications
    22%
    Domain 2: Security (26%)
    2.1 Make authenticated calls to AWS services.
    2.2 Implement encryption using AWS services.
    2.3 Implement application authentication and authorization.
    26%
    Domain 3: Development with AWS Services (30%)
    3.1 Write code for serverless applications.
    3.2 Translate functional requirements into application design.
    3.3 Implement application design into application code.
    3.4 Write code that interacts with AWS services by using APIs, SDKs, and AWS CLI.
    30%
    Domain 4: Refactoring (10%)
    4.1 Optimize application to best use AWS services and features.
    4.2 Migrate existing application code to run on AWS.
    10%
    Domain 5: Monitoring and Troubleshooting (10%)
    5.1 Write code that can be monitored.
    5.2 Perform root cause analysis on faults found in testing or production.
    10%
    TOTAL: 100%

Top

AWS Certified Developer Associate exam: Additional Information for reference

Below are some useful reference links that would help you to learn about AWS Certified Developer Associate Exam.

Top


 

Other Relevant and Recommended AWS Certifications

AWS Certification Exams Roadmap

Top


 

Top

Other AWS Facts and Summaries and Questions/Answers Dump

Top

 


 

In this AWS tutorial, we are going to discuss how we can make the best use of AWS services to build a highly scalable, and fault tolerant configuration of EC2 instances. The use of Load Balancers and Auto Scaling Groups falls under a number of best practices in AWS, including Performance Efficiency, Reliability and high availability.

Before we dive into this hands-on tutorial on how exactly we can build this solution, let’s have a brief recap on what an Auto Scaling group is, and what a Load balancer is.

Auto Scaling group (ASG)

An Auto Scaling group (ASG) is a logical grouping of EC2 instances that can scale out and scale in depending on pre-configured settings. By setting scaling policies on your ASG, you control how many EC2 instances are launched and terminated based on your application's load. You can do this with manual, dynamic, scheduled, or predictive scaling.

Elastic Load Balancer (ELB)

An Elastic Load Balancer (ELB) is the umbrella name for several AWS services designed to distribute traffic across multiple EC2 instances, providing enhanced scalability, availability, security, and more. The particular type we will use today is an Application Load Balancer (ALB): a Layer 7 load balancer that distributes HTTP/HTTPS traffic across multiple targets, with added features such as TLS termination, sticky sessions, and complex routing configurations.

Getting Started

First of all, we open our AWS management console and head to the EC2 management console.

We scroll down on the left-hand side and select ‘Launch Templates’. A Launch Template is a configuration template which defines the settings for EC2 instances launched by the ASG.

Under Launch Templates, we will select “Create launch template”.

We specify the name ‘MyTestTemplate’ and use the same text in the description.

Under the ‘Auto Scaling guidance’ box, tick the box which says ‘Provide guidance to help me set up a template that I can use with EC2 Auto Scaling’ and scroll down to launch template contents.

When it comes to choosing our AMI (Amazon Machine Image) we can choose the Amazon Linux 2 under ‘Quick Start’.

The Amazon Linux 2 AMI is free tier eligible, and easy to use for our demonstration purposes.

Next, we select the ‘t2.micro’ under instance types, as this is also free tier eligible.

Under Network Settings, we create a new Security Group called ExampleSG in our default VPC, allowing HTTP access from anywhere.


AWS Developer Associate DVA-C01 Exam Prep

 
 
 

AWS Certifications Breaking News and Top Stories

  • Which are the best mock tests for AWS Solutions Architect sure pass?
    by /u/f2ka07

    Hey everyone! I'm gearing up to take the AWS Solutions Architect exam and honestly feeling a bit overwhelmed with all the practice test options out there. I've seen so many different platforms - Whizlabs, Tutorials Dojo, Udemy courses, and a bunch of others - but I'm not sure which ones are actually worth the money and time. I want practice tests that really feel like the real deal, not just random questions that don't prepare you properly. What I'm looking for: Tests that match the actual exam difficulty (not too easy, not impossibly hard) Good explanations for answers so I actually learn from my mistakes Questions that cover all the important topics For those of you who've already passed the exam - which mock tests did you use? Did they actually help you feel ready on exam day? And are there any I should straight-up avoid because they're a waste of money? Also, how many practice tests did you go through before you felt confident enough to book your exam? Any advice would be seriously appreciated. Thanks in advance! submitted by /u/f2ka07 [link] [comments]

  • AWS EKS via terraform - cni plugin not initialized
    by /u/Meganig

    Ok, I am about to rip my hair out over this...I have been trying to create this eks cluster for a while and I have been stuck on this. TF node group takes 30+ minutes than fails. I go into the console and the nodes are showing errors. I use k9s to connect to the cluster, there are no pods created. The node description shows this: ``` │ Ready False Sun, 18 Jan 2026 18:10:45 -0500 Sun, 18 Jan 2026 18:10:33 -0500 KubeletNotReady │ │ container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin │ │ returns error: cni plugin not initialized ``` Here is my latest TF: https://github.com/sPrime28/eks-test What could I be missing? edit: no addons showing in the cluster: aws eks list-addons --cluster-name <cluster-name> --region us-east-1 { "addons": [] } submitted by /u/Meganig [link] [comments]

  • Took ML Assoxiate Friday - No score yet?
    by /u/fxbuttermilk

    I’ve taken 8 AWS certifications with passes and typically have gotten score within a few hours. Still haven’t received a score. Does that mean I likely didn’t pass? I know their SLA says 5 business days, it’s just that it has never taken that long for me. Wondering if anyone has a similar anecdote? submitted by /u/fxbuttermilk [link] [comments]

  • Migrating scheduled jobs to ECS
    by /u/Character_Status8351

    Background: Moving about 8 C# apps from Windows Task Scheduler to AWS Most of these apps fetch data from the same db(sql server), preform some business logic and update data. Some questions I have: Should each scheduled task handle everything start to finish, or do people break it up? Like having one ECS task fetch work items and queue them, then separate tasks to actually process them? One repo per job or throw them all in a monorepo? Does everyone just use CloudWatch and the ECS console to manage jobs or a third party tool(preferably open source)? What's the standard approach for retries? CloudWatch alarms + SNS? submitted by /u/Character_Status8351 [link] [comments]

  • When does my 50% off voucher actually expire ?
    by /u/sufferingSoftwaredev

    I have a 50% off voucher from taking the DVA in feb 2023, it says on my benefits page that the voucher expires on the 27 of feb 2026, does this mean I have till then to take the exam, or till then to book it, i.e, can i book the exam for April with the voucher as long as I book it before it expires ? submitted by /u/sufferingSoftwaredev [link] [comments]

  • AI-Practitioner worth getting?
    by /u/lukamillie

    Hello, I am an aspiring Cloud Engineer, as of the moment I am studying SAA-C03 and looking to take exam and pass it soon. Is the AI-Practitioner exam worth getting? I mean, I am using AI but not on the projects I am trying to make. Does studying and passing this certification help me to gain more skills in being a Cloud Engineer? submitted by /u/lukamillie [link] [comments]

  • Should I do AWS developer associate or solution architect associate
    by /u/Niki_me8863

    Guys, I have been confused on a topic for a few days. Can someone help me decide whether I should go ahead preparing for the Developer Associate or the Solutions Architect Associate certificate? I'm currently a software engineer with 2 years of experience, working on AWS, mainly focused on a few services like EC2, ECS, S3, API Gateway, etc. I want to get into a DevOps role as a next step. This will be my first AWS certificate, and I really need your suggestions based on all factors: difficulty of the exam, value, topics, preparation, etc. Thank you very much in advance. submitted by /u/Niki_me8863 [link] [comments]

  • Failed SAA-C03, Any advice?
    by /u/dmvtea

    Failed last week. Was averaging 86% on TD exams…was studying for 2 months. Any advice? submitted by /u/dmvtea [link] [comments]

  • Searching new job opportunities
    by /u/Terrible_Dog9609

    It's very hard to find a job currently in Sri Lanka, so I am going to take the SAA-C03 exam. I also need a freelance project or support task to improve my knowledge as a DevOps engineer. How do I find opportunities to gain some experience on freelance projects? submitted by /u/Terrible_Dog9609 [link] [comments]

  • Using Amazon Bedrock AgentCore via REST API Tutorial
    by /u/nurulmac11

    I’ve been experimenting with Amazon Bedrock AgentCore and couldn’t find many clear examples of using it directly via REST API, so I documented what I learned while setting it up. The post covers: setting up an AgentCore agent that can use your REST API endpoints as tools, things that weren’t obvious from the docs at first, and small implementation details that might save time. Sharing in case it helps others working with the Amazon Bedrock AgentCore service in real projects. Article: https://medium.com/p/c4f50839fb4d Message me if you can't read the article for any reason. Happy to hear feedback or alternative approaches from folks who’ve used it in production. Since this is a very new service, I am not sure if the infra I established is the best way. submitted by /u/nurulmac11 [link] [comments]

  • aws sa exam upcoming
    by /u/meteor-zeth

    So I am getting scores of around 60% on totally new practice tests, and I have my exam very soon. Do you guys think I would pass, or should I postpone? submitted by /u/meteor-zeth [link] [comments]

  • I built a CLI tool to find "zombie" AWS resources (stopped instances, unused volumes) because I didn't want to check manually anymore.
    by /u/compacompila

    Hello everyone. As a Cloud Architect, I used to do the same repetitive tasks in the AWS Console, which is why I created this CLI. It initially solved a pretty specific need related to Cost Explorer: I like to check the current month's cost behavior and compare it to the previous month over the same period. For example, if today is the 15th, I compare the first 15 days of this month with the first 15 days of last month. That was the initial problem I solved with this CLI. After that I wanted to expand its functionality and added a waste-detection feature. It currently covers many of the checks done by AWS Trusted Advisor, but without requiring a Business Support plan; it’s basically a free, local alternative to some Trusted Advisor checks. Tech stack: Go, AWS SDK v2. I’d love to hear what other "waste checks" you think I should add. Repo: https://github.com/elC0mpa/aws-doctor Thank you guys!!! submitted by /u/compacompila [link] [comments]
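The "same elapsed days" comparison described above reduces to a bit of date arithmetic before calling the Cost Explorer API. The tool itself is written in Go; this is just a Python sketch of the period calculation (the exclusive-end convention matches Cost Explorer's `TimePeriod`, but the function name is invented):

```python
from datetime import date, timedelta

def comparable_periods(today):
    """Month-to-date vs. the same number of days in the previous month.
    Returns ((start, end), (start, end)); ends are exclusive, matching
    the Cost Explorer API's TimePeriod convention."""
    cur_start = today.replace(day=1)
    prev_start = (cur_start - timedelta(days=1)).replace(day=1)
    cur_end = today + timedelta(days=1)            # include today
    prev_end = prev_start + timedelta(days=today.day)
    # Caveat: if last month had fewer days than today's day-of-month,
    # prev_end spills into the current month; clamp if that matters.
    return (cur_start, cur_end), (prev_start, prev_end)
```

The two `(start, end)` pairs can then feed two `GetCostAndUsage` calls whose totals are compared side by side.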

  • Need learning/career path Suggestions
    by /u/Nitesh_071

    submitted by /u/Nitesh_071 [link] [comments]

  • Options to run user submitted code with node.js express as backend on AWS ecosystem?
    by /u/PrestigiousZombie531

    Options to run user-submitted code in various languages with a Node.js Express backend? You've seen those "live code online" type websites that let you submit code in Bash, Python, Rust, Ruby, Swift, Scala, Java, Node, Kotlin, etc. and run it in the browser with a live terminal of sorts. I am trying to build one of those in Node.js and could definitely use some suggestions. Option 1: Run directly, just on the EC2 instance along with everything else (absolutely horrible idea, I suppose). Option 2: Run inside a Docker container. How long do you think each container should run before timing out? What size of EC2 instance would you need to support, say, 10 languages? Pros/cons? Option 3: Run inside an AWS Elastic Container Service task. Timeout per task? Pros/cons? Questions: Any other, better methods? Does this kind of application run on queuing, where a user submits code and it is immediately put inside BullMQ, which spins up one of the above options? How does data get returned to the user? What about terminal commands that users type and the stream they see (downloading packages, installing libraries, etc.)? submitted by /u/PrestigiousZombie531 [link] [comments]
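For Option 2, the usual shape is a locked-down container per submission with hard resource and wall-clock limits. A minimal sketch (in Python rather than Node, for brevity); the flag set is a starting point, not a vetted security boundary, and gVisor or Firecracker are the stronger isolation options:

```python
import subprocess

def docker_cmd(image, host_dir, cmd, memory="256m", cpus="0.5"):
    """Build a locked-down `docker run` argv: no network, read-only root
    filesystem, memory/CPU/process caps, container auto-removed on exit."""
    return [
        "docker", "run", "--rm",
        "--network", "none",
        "--read-only",
        "--memory", memory,
        "--cpus", cpus,
        "--pids-limit", "64",
        "-v", f"{host_dir}:/code:ro",
        image,
    ] + list(cmd)

def run_sandboxed(argv, timeout_s=10):
    """Run the container, killing it if it exceeds the wall-clock timeout."""
    try:
        r = subprocess.run(argv, capture_output=True, text=True, timeout=timeout_s)
        return r.returncode, r.stdout, r.stderr
    except subprocess.TimeoutExpired:
        return 124, "", "timed out"
```

A queue (BullMQ, as the post suggests) in front of this keeps concurrent containers bounded, and streaming stdout/stderr back over a WebSocket gives the live-terminal feel.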

  • Principals, tags, SCPs, and ABAC
    by /u/bobaduk

    Hello friends. I have a reasonably complex AWS account structure with a bunch of workloads and sandboxes in an AWS Organization. I'm thinking about applying ABAC to simplify IAM setup in certain cases. For example, imagine that we have an account sandbox-bobaduk, where I have broad access for playing around. We also have an account secret-data where we store some dataset in an S3 bucket. We use Google Workspace as our IDP, and I can apply tags to my role session based on attributes. For example, I authenticate as arn:aws:sts::$sandbox-bobaduk:assumed-role/AWSReservedSSO_MyRole_08759cec7ee3fdc9/bobaduk@org.org. Because I used SSO to authenticate, I have the tag team=data-guy on my role session. I can write a resource policy for my S3 bucket that allows GetObject if the OrgId=myorg and the team tag has the value "data-guy". So far so good. My question, which I'm struggling a little to answer, is "can I trust the provenance of that tag?". My thinking is that I can use an SCP that denies tagging a session with the "team" tag unless the user is adopting a role matching "AWSReservedSSO_*". I should also have an SCP that prevents a user from creating a new role or user with that tag. The AWSReservedSSO_* roles can only be created by Identity Center, and the trust policy restricts their use to Identity Center, so with those SCPs in place, am I missing anything? I don't need transitive tagging for role chaining, because these tags are only used for this kind of cross-account access based on a resource policy. If I assume another role, I should only have the permissions granted explicitly to that role. submitted by /u/bobaduk [link] [comments]
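The deny-tagging guardrail described in the post might look something like the sketch below. Caveats up front: the reserved-role path pattern, the condition keys, and the statement shape are my assumptions and should be checked against the SCP and IAM condition-key docs before use; this is a starting point, not a verified policy.

```python
import json

# Sketch of an SCP (assumption-laden: condition keys and the reserved-role
# path pattern) that denies tagging a session with "team" unless the caller
# is an Identity Center (AWSReservedSSO_*) role.
DENY_TEAM_TAG_SCP = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyTeamSessionTagOutsideSSO",
        "Effect": "Deny",
        "Action": "sts:TagSession",
        "Resource": "*",
        "Condition": {
            # Only fires when a "team" tag is actually being requested...
            "Null": {"aws:RequestTag/team": "false"},
            # ...and the caller is NOT an Identity Center reserved role.
            "ArnNotLike": {
                "aws:PrincipalArn":
                    "arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*"
            },
        },
    }],
}

print(json.dumps(DENY_TEAM_TAG_SCP, indent=2))
```

A companion statement denying `iam:CreateRole`/`iam:TagRole` when `aws:RequestTag/team` is present would cover the "no one mints a tagged role" half of the question.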

  • Moving to CloudFormation with Terraform/Terragrunt background, having difficulties
    by /u/hardvochtig

    Hi all, I'm used to Terraform/Terragrunt when setting up infra and got used to its DRY principles and all. However, my new company requires me to use CloudFormation for setting up a whole infra from scratch, due to audit/compliance reasons. Any tips? Because upon research it seems like everybody hates it and no one actually uses it in this great year of 2026. I've encountered it before, but that's when I was playing around with AWS, not production. I've heard of CDK; I might lean into that over SAM. submitted by /u/hardvochtig [link] [comments]

  • Failed AWS Solutions Architect Associate today – surprised by new questions 😔 Any advice?
    by /u/Aware-Kick-5445

    Hey everyone, I took the AWS Certified Solutions Architect – Associate exam today and unfortunately didn’t pass. What really surprised me is that even after doing a lot of practice exams, I faced many questions I had never seen before. Some scenarios felt completely new, and that threw me off. I wanted to ask: Is this normal for this exam? For those who passed on the second attempt, did you see similar questions again, or was it a totally different set? What would you recommend I focus on now: more hands-on labs, AWS docs, exam guide, or different practice exams? Any tips, resources, or motivation would really help right now. Feeling a bit discouraged but I don’t want to give up. Thanks in advance 🙏 submitted by /u/Aware-Kick-5445 [link] [comments]

  • Software developer to Cloud Engineer
    by /u/QuickPenalty7829

    Hey! I would like some insight/suggestions on a career switch from a software developer to a cloud engineer role. I currently work as a software developer with 3+ years of experience, mainly involved in building and maintaining backend systems for large-scale business applications. I’m planning to do the AWS Cloud Practitioner certification and try switching my career path from there. But I don’t know if that’ll be worth it, or if the role will have better scope than my current role. Could someone please help me understand the pros and cons of this switch, and suggest a roadmap to guide me down the right path? If you have any insights, please share. submitted by /u/QuickPenalty7829 [link] [comments]

  • Passed AWS Certified Data Engineer(DEA-C01) thanks to this community 🙏
    by /u/gopi_pandit

    Hey everyone, I’m happy to share that I’ve cleared the AWS Certified Data Engineer Associate exam today. I mainly wanted to post here to say thank you. This subreddit helped me a lot during my preparation. I connected with people who had already earned the certification and tried to understand the types of questions, their preparation approaches, and the materials they used. One thing that helped me the most was hands on practice. I made it a point to try many common exam scenarios myself, from SCT and DMS migrations to analytics with QuickSight, Kinesis streaming into Redshift, and working with Redshift DDM, RDS, DynamoDB, DataBrew, the Glue Catalog, and EMR. Even though I work mostly with AWS Glue in my day to day role, deliberately practicing other services helped me understand why and when each service is used. I started my preparation about six months ago, but due to health issues I could not stay consistent. I initially planned to earn the certification by December 2025, but I did not feel confident enough and decided to postpone it a little. Thanks again to everyone who shares knowledge and takes the time to answer questions here. It genuinely makes a difference. Happy to answer questions or share preparation insights if it helps others. submitted by /u/gopi_pandit [link] [comments]

  • S3 Bucket Live Replication: does the `Empty` source bucket action delete objects from the destination bucket?
    by /u/IceAdministrative711

    I configured a Live Replication for my source Bucket, and it works (when I create/delete objects in the Source bucket, the same applies to the Destination bucket). I was curious what happens if I `Empty` the source Bucket. I did that, and this did NOT propagate to the Destination Bucket. Objects in the Destination Bucket are still there, although the Source Bucket is empty. Is it expected? Could somebody explain why? submitted by /u/IceAdministrative711 [link] [comments]
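For anyone hitting the same surprise: as I understand S3 replication, only delete markers are ever replicated (and only when delete marker replication is enabled on the rule); version-specific, permanent deletes are never replicated, and "Empty bucket" performs permanent deletes, which would explain the destination being untouched. A hedged sketch of the relevant rule, with placeholder ARNs:

```python
# Sketch of an S3 replication configuration (bucket names and role ARN are
# placeholders). Delete markers replicate only if DeleteMarkerReplication
# is Enabled; permanent (version-specific) deletes are never replicated,
# which is why "Empty bucket" leaves the destination untouched.
replication_config = {
    "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
    "Rules": [{
        "ID": "replicate-everything",
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {},
        "DeleteMarkerReplication": {"Status": "Enabled"},
        "Destination": {"Bucket": "arn:aws:s3:::example-destination-bucket"},
    }],
}
```

This dict has the shape accepted by `put_bucket_replication` in boto3, but verify the exact behavior against the S3 replication documentation for your rule version.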

  • Solution Architect maybe?
    by /u/Ok-Willingness-9942

    So I'm going to take a few AWS certs this year: Cloud Practitioner, AI Practitioner, and Machine Learning developer. I'm kinda debating taking the Solutions Architect too! What do you think? Is it worth it? I wanna have a firm cloud foundation on top of AI. submitted by /u/Ok-Willingness-9942 [link] [comments]

  • Centralized CI/CD security scanning for 30+ repos. Best practices?
    by /u/_1noob_

    Hi everyone, We are currently working on integrating CI/CD security tools across our platform and wanted to sanity-check our approach with the community. We have 30+ repositories in Bitbucket and are using AWS for CI/CD. What we are trying to achieve: a centralized or shared pipeline for security scanning (SAST, SCA, container scanning, DAST); reuse of the same scanning logic for all the repos; pipelines that stay scalable and maintainable as the number of repos grows. The main challenge we are facing: each repository has different variables for SAST (e.g. SonarQube). Questions: Is it good practice to have one shared security pipeline/template used by all repos for scanning? How do teams typically manage repo-specific variables and Sonar tokens when using shared pipelines? Any real-world patterns or pitfalls to watch out for at this scale (30+ pipelines)? Again, the goal is to keep security enforcement consistent without over-coupling pipelines. Would really appreciate hearing how others have solved this in production. Thanks in advance. submitted by /u/_1noob_ [link] [comments]
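One common answer to the repo-specific-variables problem is a shared pipeline template that merges org-wide defaults with a small per-repo config file, keeping secrets as named references (resolved from a secrets manager at runtime) rather than values. A minimal sketch; the key names are invented for illustration:

```python
def resolve_scan_config(defaults, repo_config):
    """Merge org-wide scan defaults with per-repo overrides.
    Secrets stay as *names* (e.g. a SonarQube token reference resolved
    from a secrets store inside the pipeline), never committed values."""
    cfg = {**defaults, **repo_config}  # repo values win over defaults
    required = ("sonar_project_key", "sonar_token_ref")
    missing = [k for k in required if not cfg.get(k)]
    if missing:
        raise ValueError(f"repo config missing required keys: {missing}")
    return cfg
```

The shared template then enforces which scanners run, while each repo only supplies the handful of keys that genuinely differ, which keeps 30+ pipelines consistent without 30+ forks of the scanning logic.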

  • Passed AWS SAA - 03!! What I followed and the reality that I faced
    by /u/anuragdoshi

    •Started AWS SAA preparation around mid-October 2025 using Stéphane Maarek’s course to build strong conceptual clarity. •Completed the preparation with multiple Tutorial Dojo practice exams, scoring around 70% initially. These exams are absolutely worth it and closely reflect the real exam difficulty. •Before the actual exam, attempted the official AWS SAA practice test from the AWS Skill Builder website. •Used a comprehensive mind map for revision. This mind map ties all services and their attributes together and works perfectly for last-minute revision. Huge kudos to the creator. Link: https://www.mindmeister.com/app/map/3471885158 The actual exam was much tougher than expected. The questions were not straightforward, and at one point I genuinely thought I might have to reappear. However, all the effort, practice, and conceptual understanding paid off in the end. I strongly recommend aiming for 80%+ in Tutorial Dojo exams and using a good mind map for revision. Hope this helps someone preparing for the exam. All the best, mate. submitted by /u/anuragdoshi [link] [comments]

  • iOS (Swift) + AWS Lambda Backend: For user auth is AWS Cognito/Amplify stable enough, or should I just use Firebase?
    by /u/Purple_Secret_8388

    Hi everyone, I’m building a native iOS app (SwiftUI). My backend is AWS Lambda and MongoDB. I need to handle User Auth (Sign-up/Sign-in) with support for Google and Apple Sign-in. I’m stuck between Amazon Cognito and Firebase Auth. Why I want Cognito: Since my backend is already on Lambda, I want to use the API Gateway Cognito Authorizer. This would make my backend much cleaner because the authentication is handled at the 'front door' before the Lambda even runs. My Concern: I’ve heard mixed reviews about the Amplify SDK for iOS. I don't want to fight with a buggy or overly complex SDK on the client side just to save a few lines of code on the backend. Questions: How is the developer experience for the Amplify Swift library lately? Is it smooth for Google/Apple sign-in, or is it a nightmare of configuration compared to Firebase? If you’ve used Cognito for an iOS app was the authentication worth it? Would you recommend just using Firebase Auth for the better iOS SDK and manually verifying the tokens in my Lambdas instead? I'm looking for stability and speed of development. Thanks! submitted by /u/Purple_Secret_8388 [link] [comments]

  • I've just made a new site using Antigravity to calculate the best cloud region for hosting based on where your users are located. Still needs more google regions and Oracle Cloud to complete.
    by /u/antyg

    submitted by /u/antyg [link] [comments]

  • Offer individual file storage under my own AWS account
    by /u/East_Sentence_4245

    Let’s say my company (MyClients.com) has 20 customers. I want to offer these customers some space to store their stuff (documents, images, files, etc). Does AWS offer a version of storage where I can offer some space to these customers from my own account? For example, I have customer Joe Smith. Is there a way I can offer Joe Smith some space, but from the AWS I’m paying for? In the case of Joe Smith, I’d tell him that he can access his own “cloud” storage by going to MyClients.com/JSmith or maybe visiting my domain and entering his credentials under MyClients.com (which is actually his own partition under AWS)? It would be my AWS account that’s divided into several smaller storage accounts, with each account being a personal store for the customer. submitted by /u/East_Sentence_4245 [link] [comments]
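What the post describes is usually done with one bucket and a key prefix per customer, with each customer's session scoped to their own prefix (via STS federation or presigned URLs) rather than separate AWS accounts. A minimal sketch; the bucket name, prefix layout, and function names are all invented:

```python
def tenant_prefix(customer_id):
    """Each customer gets an isolated key prefix in one shared bucket."""
    return f"customers/{customer_id}/"

def tenant_key(customer_id, filename):
    """Map a customer's file to its location in the shared bucket."""
    return tenant_prefix(customer_id) + filename

def tenant_policy(bucket, customer_id):
    """IAM-style policy sketch scoping a session to one customer's prefix,
    e.g. attached via STS federation when JSmith logs in to MyClients.com."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket}/{tenant_prefix(customer_id)}*",
        }],
    }
```

The customer never sees S3 directly: MyClients.com authenticates Joe Smith, then hands his browser presigned URLs (or scoped temporary credentials) valid only under `customers/jsmith/`.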

  • Best written study material for AWS certifications?
    by /u/420rav

    Hi everyone! I’m currently studying for an AWS certification (starting with Solutions Architect Associate) and I’m looking for the best written study material out there. I already know about video courses, but I learn much better with high-quality written resources (books, notes, guides, github repo etc.) Thanks in advance! submitted by /u/420rav [link] [comments]

  • USB redirection in Workspace
    by /u/OkArt331

    Not even sure if this is the best place to post this, but here goes: I'm using an Amazon Workspace, Windows 10 desktop, from an Android phone, and I need to plug a USB device and have it recognized by the remote desktop. It's not a security key...it's actually a Ledger hardware wallet (long story...). How does one do this? I'm having trouble figuring this one out. If I can't get this to work, an alternative for what I'm trying to do is to take a picture of a QR code with my phone, but I also don't know if it's possible to give Workspace access to my camera. In audio/video settings it seems to detect my front and back cameras, but to actually get the action of snapping the QR code to register from the desktop seems unlikely...? Sorry for being so naive with this stuff. submitted by /u/OkArt331 [link] [comments]

  • Passed SAA-C03!
    by /u/askalik

    Yes!!! I passed SAA-C03 today!! submitted by /u/askalik [link] [comments]

  • Kiro - can't get a good web UI
    by /u/GodAtum

    Compared to Claude Code, I can't get Kiro to make a decent website UI. I'm trying to create a web app from scratch, and it has done so, but the UI is terrible. Any advice? submitted by /u/GodAtum [link] [comments]

  • AWS Cloud, how to get there?
    by /u/depechecooper

    I know very little about IT, but I'm very interested and want to learn and get certified for AWS Cloud. What classes/certifications should I get to learn the basics and other helpful things before going for AWS Cloud? submitted by /u/depechecooper [link] [comments]

  • Passed AWS Certified Developer Associate exam!
    by /u/Holy_Shifter

    I wanted to start this year with a new AWS certification, and thankfully I passed the exam today. I finished the exam within 1.5 hrs and got the results within 5 hours of taking the exam, so it was pretty fast. For my studies, I used Stephane Maarek's (dude rocks 😉) Developer Associate Practice Exams. I honestly didn't read any notes besides the explanations of why the answers are correct or incorrect. In total it took me 2 weeks to prepare and take the exam. Tbf, I already have years of AWS experience by now and already got the SAP certification, so that helped a lot. If you have the SAA or SAP, then studying for the DVA exam will be a lot less difficult, as they have quite a lot of overlap. Overall the exam isn't that difficult if you have experience building with AWS. submitted by /u/Holy_Shifter [link] [comments]

  • I have my cloud practitioner exam in 3 hours (CLF-C02), please give me tips and any last minute revision topics, this is my first exam
    by /u/Successful-Cold8415

    submitted by /u/Successful-Cold8415 [link] [comments]

  • SAA-C03 results: do they get posted on weekends?
    by /u/Reasonable-Light1809

    Hi everyone, I took the AWS Solutions Architect Associate exam (SAA-C03) yesterday (Friday, 16.01.2026) and finished around 11:00 GMT. It’s been ~24 hours and I still don’t see my result/score report in my AWS Certification account. AWS says final results are posted within five business days, so I’m trying to understand what’s normal.​ Do AWS exam results get published on weekends as well, or is it only on business days (so likely next week in my case)?​ submitted by /u/Reasonable-Light1809 [link] [comments]

  • Passed SAA 780
    by /u/eta_tauri

    A belated congrats to me! I passed a few days ago with a 780. I was running out of time and started to rush with 8 questions remaining. Submitted with seconds left on the clock. I was depressed the whole day thinking I failed, and then came the email with my shiny badge! I did all of Stephane Maarek's SAA course, including any hands-on I could do with the AWS free tier. I did an additional 5 out of 6 practice tests he has on a separate course. My first 4 practice scores were around 58-68%, until the last 2, on which I scored 70 and 72. I have no experience with AWS, but have been a full-stack dev for 4 years. Studied for about 3 weeks, 3-4 hours a day. The real test felt more difficult for me than the practice, but I'm so happy I passed! submitted by /u/eta_tauri [link] [comments]

  • Locked out of my account help
    by /u/curious-af-9550

    I changed phones recently; my old device had MFA and is now wiped. I can't even open a support ticket because I am locked out of my account. Idk what to do, can someone help me? submitted by /u/curious-af-9550 [link] [comments]

  • Passed AWS Generative AI Professional Certification!
    by /u/iCHAIT

    Scored 765. Needs improvement in all domains except one lol. Background: I have 8 AWS certifications (including all professional and all AI/ML related) Resources Used: Udemy (Frank Kane + Stephane Maarek): Very high level, the course is definitely not enough on its own. It covers the breadth and touches on all topics in the exam blueprint, but doesn't go into the details. Scored 86% on their practice exam. AWS Skill Builder Practice Test: 40% first attempt → 95% second attempt AWS Skill Builder Full Test: 61% I'd recommend doing Skill Builder and the tests and reviewing them religiously. Good luck to anyone taking it! submitted by /u/iCHAIT [link] [comments]

  • Passed AWS SAA - Don’t let Tutorial Dojo scores stop you
    by /u/Akhil_305

    Hey everyone, I passed the AWS Solutions Architect – Associate exam 2 days back with a score of 812, and I wanted to share this for anyone who’s stuck thinking “Should I give now or wait?” I finished my prep about 3 weeks before the exam and then took all 6 Tutorials Dojo (Jon Bonso) practice tests. My scores (first attempts): • Test 1: 67% • Test 2: 70% • Test 3: 72% • Test 4: 73% • Test 5: 70% • Test 6: 56% After that 56%, I seriously thought I wasn’t ready. But here’s the key point: The real exam questions are nowhere near as confusing as Tutorials Dojo. TD questions are often long (5–6 lines) and intentionally tricky. The real exam questions were mostly short (2–3 lines) and straight to the point, simply testing which AWS service fits the scenario. While reviewing TD wrong answers, I realized I knew the concepts - I was just getting confused by how the questions were framed. If your TD scores are around 65–75% and you’ve finished your prep, don’t overthink it. Go ahead and write the exam. Hope this helps someone who’s on the fence. All the best. submitted by /u/Akhil_305 [link] [comments]

  • AWS SAM attach child template lambda to parent template s3 event
    by /u/post_hazanko

    So I have a master stack template and a bunch of child template lambdas: a master stack with an S3 bucket; child lambda template 1 (triggered by an S3 object-created event); child lambda template 2 (triggered by an S3 object-deleted event); and a child lambda with an SNS topic tied to the S3 bucket above. I ran into this problem of "S3 events must reference an S3 Bucket in the same template", which led me to an AWS re:Post thread. I'm really trying to avoid doing extra work; unfortunately we are working backwards (deployed resources via the AWS console and are now turning prod into IaC). The S3 bucket already has an SNS topic tied to it, and it's in the parent stack so another lambda can get that SNS topic. If I really had to, I could do that again for these lambdas. From what I've read it doesn't seem possible without using code, e.g. the SDK, EventBridge, SNS... I tried EventSourceArn with EventSourceMapping but I don't think that's working; I mean, the SAM deploy is failing. Just want to know if this can be done or not. There's even a request from 2019 to add this feature. Maybe it is simple with EventSource and I'm just using it wrong; still looking around. Oh, I guess EventSource is the way, but it doesn't work if the S3 bucket is outside of the lambda's template. It is pretty easy to use SNS; I just gotta ask the team if they're cool with me switching that up, if I have to choose between SNS or EventBridge. I'm trying NotificationConfiguration on the S3 bucket itself right now. Damn circular dependency problems, hmm. "To avoid this dependency, you can create all resources without specifying the notification configuration. Then, update the stack with a notification configuration." Might do that; I was hoping you'd just deploy everything at once, one time. Yeah, so it does work if you comment out NotificationConfiguration on the first deploy to set up the S3 bucket/lambda, but then you have to add it back in with the lambda's ARN to get it attached; it doesn't seem right/clean. Will keep an eye on this for other thoughts.
submitted by /u/post_hazanko [link] [comments]

  • Efficient storage and filtering of millions of products from multiple users – which NoSQL database to use?
    by /u/Notoa34

    Hi everyone, I have a use case and need advice on the right database: ~1,000 users, each with their own warehouses. Some warehouses have up to 1 million products. Data comes from suppliers every 2–4 hours, and I need to update the database quickly. Each product has fields like warehouse ID, type (e.g., car parts, screws), price, quantity, last update, tags, labels, etc. Users need to filter dynamically across most fields (~80%), including tags and labels. Requirements: Very fast insert/update, both in bulk (1000+ records) and single records. Fast filtering across many fields. No need for transactions – data can be overwritten. Question: Which database would work best for this? How would you efficiently handle millions of records every few hours while keeping fast filtering? OpenSearch? MongoDB? Thanks! submitted by /u/Notoa34 [link] [comments]
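If OpenSearch ends up being the pick (it handles arbitrary multi-field filtering well), the periodic refresh is usually done through the `_bulk` endpoint with deterministic document IDs, so each re-ingest overwrites in place rather than duplicating. A sketch of building that payload; the field names (`warehouse_id`, `sku`) are invented for illustration:

```python
import json

def bulk_upsert_lines(index, products):
    """Build an OpenSearch/Elasticsearch _bulk body: one action line
    followed by one document line per product. A deterministic _id
    (warehouse + SKU here) makes repeated supplier feeds idempotent."""
    lines = []
    for p in products:
        doc_id = f'{p["warehouse_id"]}:{p["sku"]}'
        lines.append(json.dumps({"index": {"_index": index, "_id": doc_id}}))
        lines.append(json.dumps(p))
    return "\n".join(lines) + "\n"  # _bulk requires a trailing newline
```

POSTing chunks of a few thousand lines of this body to `/_bulk` keeps the 2-4 hour refresh fast, while every indexed field remains filterable.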

  • How I'd enter the AWS 10,000 AIdeas Competition: A step-by-step guide to crafting a winning pitch (deadline Jan 21)
    by /u/vogejona

    The competition closes in a week, and honestly, the submission form is trickier than it looks. I wrote a guide walking through exactly how I'd approach it, from picking a track to filling out each field with my actual pitch draft: my architecture for a mentorship-matching app, a Free Tier survival guide (what actually costs money vs. what's free), and the cost-optimization mistakes I've already made (left an EC2 instance running, $12 gone). I see a lot of people overthinking this. You don't need to build anything yet. Just a clear pitch. If you're entering, this article might save you some time. submitted by /u/vogejona [link] [comments]

  • CodeBreach: Supply Chain Vuln & AWS CodeBuild Misconfig
    by /u/shadowsyntax

    submitted by /u/shadowsyntax [link] [comments]

  • Passed MLA
    by /u/nedenburdayimlan

    I passed the exam ✅ The exam was harder than I expected. Although many people prefer Maarek’s Udemy course, I personally found Nikolai Schuler stronger, especially in the AI/ML domain. For my preparation: • I completed all exam-related tests from Nikolai Schuler • I solved around two tests from Maarek • I completed all Dojo tests, starting with review mode Some questions were very similar to the Dojo exams, but they changed 1–2 answer choices to make them misleading—so be careful. The study guide is excellent in my opinion; it gathers scattered information into one place. It’s only $3, and I highly recommend it. I already hold AI Practitioner and Cloud Practitioner certifications, and I also completed a minor specialization in AI, which helped overall—but I still struggled more than expected. Most of my mistakes were in security and metrics, which turned out to be more challenging than I anticipated. I’m currently preparing for DEA. If you have any questions, feel free to ask. submitted by /u/nedenburdayimlan [link] [comments]

  • Account suspended during active DDoS billing review — seeking guidance on escalation paths
    by /u/Plane-Management-176

    Looking for guidance from others who have dealt with AWS account suspensions during active billing or security reviews. Our production workload was hit by a large DDoS attack, which caused a sudden spike in AWS WAF, CloudFront, and CloudWatch usage and a very large, unexpected bill. We opened support cases immediately, shared ARNs, detailed timelines, WAF analytics, request counts in the millions per day, and attacker IP samples. AWS acknowledged the issue and escalated it for service-team review and possible billing adjustment. While this review was still ongoing, and despite requesting temporary billing hold during the investigation, the account was suspended for non-payment. We’re now unable to log in to the console, which has taken production applications offline and blocked access to CloudWatch and infrastructure management. At this point, we’re trying to understand the correct escalation path. For those who’ve experienced something similar: Is there a recommended way to get an account reinstated while a billing dispute is under review? Are there escalation channels beyond the standard account support form once console access is blocked? Appreciate any guidance or experiences from the community. submitted by /u/Plane-Management-176 [link] [comments]

  • Development environment monitoring?
    by /u/alangibson

    We keep having problems where development, testing, and acceptance environments are left running long after they're needed. We also lose track of what, and what version, is deployed to each environment. Sometimes it's not even clear which team owns what. Does anyone know of a tool that can keep track of such a mess? At a minimum I'd like a dashboard that shows me: basic environment stats like age and average utilization (i.e., is anyone using this?); deployed commits, application versions, etc.; and the team that owns it. I'd really prefer a standalone solution, since managers, marketing, and sales people are also interested in this information. They're easily alarmed by the complexity of the AWS interface. "Deployed commits, application versions" is there mainly for marketing and management, so they can look for themselves at where the features they requested have progressed to. Edit: clarity. submitted by /u/alangibson [link] [comments]

  • CodeBreach: Infiltrating the AWS Console Supply Chain and Hijacking AWS GitHub Repositories via CodeBuild
    by /u/Kralizek82

    https://www.wiz.io/blog/wiz-research-codebreach-vulnerability-aws-codebuild submitted by /u/Kralizek82 [link] [comments]

  • AWS flips switch on Euro cloud as sovereignty fears mount
    by /u/NISMO1968

    submitted by /u/NISMO1968 [link] [comments]

  • Update to AWS Certified Data Engineer - Associate (DEA-C01) Exam Guide
    by /u/madrasi2021

    The DEA Exam Guide was versioned up recently with some additional changes and includes additional services in scope. Fortunately the move of the exam guide from PDF to Docs page also includes a list of revisions. https://docs.aws.amazon.com/aws-certification/latest/examguides/dea-01-revisions.html Please see the page above but just to give a gist of changes - this is a copy / paste of the new skills added. Basically more AI related services in DEA. This makes sense if you are studying MLA, DEA and then aiming for AIP. New skills added Skill 1.2.10: Integrate Large Language Models (LLM) for data processing. Skill 2.1.7: Manage open table formats (for example Apache Iceberg). Skill 2.1.8: Describe vector index types (for example, HNSW, IVF). Skill 2.2.6: Create and manage business data catalogs (for example Amazon SageMaker Catalog). Skill 2.4.6: Describe vectorization concepts (for example, Amazon Bedrock knowledge base). Skill 4.1.7: Use domain, domain units, and projects for SageMaker Unified Studio. Skill 4.5.6: Manage data access through Amazon SageMaker Catalog projects. Skill 4.5.7: Describe governance data framework and data sharing patterns. I will be revamping all my resources guides for 2026 soon to cover these changes and more. submitted by /u/madrasi2021 [link] [comments]

  • Thanks Werner
    by /u/m0t0rbr3th

    I've enjoyed and been inspired by your keynotes over the past 14 years. Context: Dr. Werner Vogels announced that his closing keynote at the 2025 re:Invent will be his last. submitted by /u/m0t0rbr3th [link] [comments]

  • Frequently Asked Questions on this subreddit.
    by /u/madrasi2021

    Before posting a question, please see if it is already answered below (especially if you are new to this subreddit). It saves us a lot of work repeatedly answering the same questions. If you are looking for resources to study for Certifications, please make sure you have reviewed the official AWS Certification page first and then use the exam code for resources guides below. Vouchers / Discounts for 2026 AWS Certification Exams Recommended study resources for Foundational level Exams Cloud Practitioner CCP/CLF AI Practitioner AIF Recommended study resources for Associate Level Exams Solutions Architect SAA Developer DVA Data Engineer DEA Machine Learning MLA CloudOps (prev. SysOps) SOA Recommended study resources for Professional Level Exams SA Professional SAP DevOps Professional DOP Gen AI Developer Professional AIP Recommended study resources for Specialty Level Exams Security (old version) SCS / New SCS-C03 exam Advanced Networking ANS Machine Learning is being deprecated 31-March-2026 - I don't have a guide for this. How long do results take and why did I not get a Pass/Fail on completing exam? Absolute Beginners guide to skilling up for FREE (not certifications) Free Learning / Digital Badges : Beginner levelIntermediate Level (not certifications) -if you cannot afford the exams and want something to boost your resume - start here What happened to Emerging Talent Community (ETC) rewards? Should I buy Tutorialsdojo via Udemy or their website? 50% off any other AWS exam if you pass any AWS Exam - All your Exam Benefit questions answered How much % pass do I need on practice exams? leaving blank Projects and Hands on practice New Certifications, Certification Retirements New Rule - No resale / transfer of 50% exam benefit vouchers in this subreddit submitted by /u/madrasi2021 [link] [comments]

 

Reference: https://enoumen.com/2019/06/23/aws-solution-architect-associate-exam-prep-facts-and-summaries-questions-and-answers-dump/


 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

AWS Certified Developer Associate exam: Whitepapers

AWS has provided whitepapers to help you understand the technical concepts. Below are the recommended whitepapers for the AWS Certified Developer – Associate Exam.


Online Training and Labs for AWS Certified Developer Associates Exam

The Cloud is the future: Get Certified now.
The AWS Certified Solution Architect Average Salary is: US $149,446/year. Get Certified with the App below:

 


AWS Developer Associates Jobs


AWS Certified Developer-Associate Exam info and details, How To:


The AWS Certified Developer Associate exam is a multiple choice, multiple answer exam. Here is the Exam Overview:

  • Certification Name: AWS Certified Developer Associate.
  • Prerequisites for the Exam: None.
  • Exam Pattern: Multiple Choice Questions
  • The AWS Certified Developer-Associate Examination (DVA-C01) is a pass or fail exam. The examination is scored against a minimum standard established by AWS professionals guided by certification industry best practices and guidelines.
  • Your results for the examination are reported as a score from 100 – 1000, with a minimum passing score of 720.
  • Exam fees: US $150
  • Exam Guide on AWS Website
  • Available languages for tests: English, Japanese, Korean, Simplified Chinese
  • Read AWS whitepapers
  • Register for certification account here.
  • Prepare for Certification Here
  • Exam Content Outline

    Domain 1: Deployment (22%)
    1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
    1.2 Deploy applications using Elastic Beanstalk.
    1.3 Prepare the application deployment package to be deployed to AWS.
    1.4 Deploy serverless applications.
    Domain 2: Security (26%)
    2.1 Make authenticated calls to AWS services.
    2.2 Implement encryption using AWS services.
    2.3 Implement application authentication and authorization.
    Domain 3: Development with AWS Services (30%)
    3.1 Write code for serverless applications.
    3.2 Translate functional requirements into application design.
    3.3 Implement application design into application code.
    3.4 Write code that interacts with AWS services by using APIs, SDKs, and the AWS CLI.
    Domain 4: Refactoring (10%)
    4.1 Optimize applications to best use AWS services and features.
    4.2 Migrate existing application code to run on AWS.
    Domain 5: Monitoring and Troubleshooting (12%)
    5.1 Write code that can be monitored.
    5.2 Perform root cause analysis on faults found in testing or production.
    Total: 100%


AWS Certified Developer Associate exam: Additional Information for reference

Below are some useful reference links that would help you to learn about AWS Certified Developer Associate Exam.


Other Relevant and Recommended AWS Certifications

AWS Certification Exams Roadmap



Other AWS Facts and Summaries and Questions/Answers Dump



 

In this AWS tutorial, we are going to discuss how we can make the best use of AWS services to build a highly scalable and fault-tolerant configuration of EC2 instances. The use of load balancers and Auto Scaling groups supports a number of AWS best practices, including performance efficiency, reliability, and high availability.

Before we dive into this hands-on tutorial on how exactly we can build this solution, let’s have a brief recap on what an Auto Scaling group is, and what a Load balancer is.

Auto Scaling group (ASG)

An Auto Scaling group (ASG) is a logical grouping of EC2 instances that can scale out and scale in depending on pre-configured settings. By setting the scaling policies of your ASG, you can choose how many EC2 instances are launched and terminated based on your application's load. You can do this with manual, dynamic, scheduled, or predictive scaling.
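As a rough mental model of dynamic scaling, the sketch below shows the proportional idea behind a target-tracking policy: grow or shrink capacity so the per-instance metric moves toward its target, clamped to the group's bounds. This is an illustrative simplification, not the actual service algorithm; the real service also factors in CloudWatch alarm evaluation, cooldowns, and instance warm-up.

```python
import math

def desired_capacity(current: int, metric: float, target: float,
                     min_size: int, max_size: int) -> int:
    """Simplified target-tracking calculation: scale capacity in
    proportion to how far the observed metric is from the target,
    then clamp the result to the group's min/max bounds."""
    proposed = math.ceil(current * metric / target)
    return max(min_size, min(max_size, proposed))

# 4 instances at 90% average CPU against a 50% target -> scale out to 8
print(desired_capacity(current=4, metric=90.0, target=50.0, min_size=2, max_size=10))
```

Running the example: 4 × 90 / 50 rounds up to 8, which is within the 2–10 bounds, so the group would scale out to 8 instances.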

Elastic Load Balancer (ELB)

An Elastic Load Balancer (ELB) is the umbrella name for a number of services within AWS designed to distribute traffic across multiple EC2 instances in order to provide enhanced scalability, availability, security, and more. The particular type of load balancer we will be using today is an Application Load Balancer (ALB). The ALB is a Layer 7 load balancer designed to distribute HTTP/HTTPS traffic across multiple targets, with added features such as TLS termination, sticky sessions, and complex routing configurations.
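To make the sticky-session idea concrete, here is a toy Python model (purely illustrative; a real ALB implements stickiness with a load-balancer-generated cookie and a configurable duration): requests rotate round-robin by default, but a client that has been seen before stays pinned to its first target.

```python
from itertools import cycle

class MiniBalancer:
    """Toy model of ALB routing: round-robin by default, with
    'sticky sessions' pinning a known client to its first target."""

    def __init__(self, targets):
        self._ring = cycle(targets)   # round-robin rotation over targets
        self._sticky = {}             # client id -> pinned target

    def route(self, client_id=None):
        # A returning client with a "session cookie" keeps its target.
        if client_id is not None and client_id in self._sticky:
            return self._sticky[client_id]
        target = next(self._ring)
        if client_id is not None:
            self._sticky[client_id] = target
        return target

lb = MiniBalancer(["i-aaa", "i-bbb"])
print(lb.route("alice"))  # i-aaa (first assignment)
print(lb.route())         # i-bbb (anonymous request, round robin)
print(lb.route("alice"))  # i-aaa again (sticky)
```

The design point this illustrates: stickiness trades even load distribution for session affinity, which matters when application state lives on the instance rather than in a shared store.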

Getting Started

First of all, we open our AWS management console and head to the EC2 management console.

We scroll down on the left-hand side and select ‘Launch Templates’. A Launch Template is a configuration template which defines the settings for EC2 instances launched by the ASG.

Under Launch Templates, we will select “Create launch template”.

We specify the name ‘MyTestTemplate’ and use the same text in the description.

Under the ‘Auto Scaling guidance’ box, tick the box which says ‘Provide guidance to help me set up a template that I can use with EC2 Auto Scaling’ and scroll down to launch template contents.

When it comes to choosing our AMI (Amazon Machine Image) we can choose the Amazon Linux 2 under ‘Quick Start’.

The Amazon Linux 2 AMI is free tier eligible, and easy to use for our demonstration purposes.

Next, we select the ‘t2.micro’ under instance types, as this is also free tier eligible.

Under Network Settings, we create a new Security Group called ExampleSG in our default VPC, allowing HTTP access to everyone. It should look like this.
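For readers who prefer the API over the console, the steps above could also be expressed as parameters for boto3's ec2.create_launch_template call. The AMI ID and security group ID below are placeholders you would look up yourself (for example with `aws ec2 describe-images`), and the actual API call is left commented out since it requires AWS credentials; this is a sketch, not a verified deployment script.

```python
# Parameters mirroring the console walkthrough: template name
# 'MyTestTemplate', Amazon Linux 2 AMI, t2.micro, and the ExampleSG
# security group. The ImageId and SecurityGroupIds values are
# placeholders, not real resource IDs.
launch_template_request = {
    "LaunchTemplateName": "MyTestTemplate",
    "VersionDescription": "MyTestTemplate",
    "LaunchTemplateData": {
        "ImageId": "ami-0123456789abcdef0",            # placeholder Amazon Linux 2 AMI
        "InstanceType": "t2.micro",                    # free tier eligible
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder for ExampleSG
    },
}

# With credentials configured, it would be submitted like this:
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.create_launch_template(**launch_template_request)

print(launch_template_request["LaunchTemplateData"]["InstanceType"])
```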


AWS Developer Associate DVA-C01 Exam Prep

 
 
 

AWS Certifications Breaking News and Top Stories

  • Which are the best mock tests for AWS Solutions Architect sure pass?
    by /u/f2ka07

    Hey everyone! I'm gearing up to take the AWS Solutions Architect exam and honestly feeling a bit overwhelmed with all the practice test options out there. I've seen so many different platforms - Whizlabs, Tutorials Dojo, Udemy courses, and a bunch of others - but I'm not sure which ones are actually worth the money and time. I want practice tests that really feel like the real deal, not just random questions that don't prepare you properly. What I'm looking for: Tests that match the actual exam difficulty (not too easy, not impossibly hard) Good explanations for answers so I actually learn from my mistakes Questions that cover all the important topics For those of you who've already passed the exam - which mock tests did you use? Did they actually help you feel ready on exam day? And are there any I should straight-up avoid because they're a waste of money? Also, how many practice tests did you go through before you felt confident enough to book your exam? Any advice would be seriously appreciated. Thanks in advance! submitted by /u/f2ka07 [link] [comments]

  • AWS EKS via terraform - cni plugin not initialized
    by /u/Meganig

    Ok, I am about to rip my hair out over this...I have been trying to create this eks cluster for a while and I have been stuck on this. TF node group takes 30+ minutes than fails. I go into the console and the nodes are showing errors. I use k9s to connect to the cluster, there are no pods created. The node description shows this: ``` │ Ready False Sun, 18 Jan 2026 18:10:45 -0500 Sun, 18 Jan 2026 18:10:33 -0500 KubeletNotReady │ │ container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin │ │ returns error: cni plugin not initialized ``` Here is my latest TF: https://github.com/sPrime28/eks-test What could I be missing? edit: no addons showing in the cluster: aws eks list-addons --cluster-name <cluster-name> --region us-east-1 { "addons": [] } submitted by /u/Meganig [link] [comments]

  • Took ML Associate Friday - No score yet?
    by /u/fxbuttermilk

    I’ve taken 8 AWS certifications with passes and typically have gotten my score within a few hours. Still haven’t received a score. Does that mean I likely didn’t pass? I know their SLA says 5 business days, it’s just that it has never taken that long for me. Wondering if anyone has a similar anecdote? submitted by /u/fxbuttermilk [link] [comments]

  • Migrating scheduled jobs to ECS
    by /u/Character_Status8351

    Background: Moving about 8 C# apps from Windows Task Scheduler to AWS Most of these apps fetch data from the same db(sql server), preform some business logic and update data. Some questions I have: Should each scheduled task handle everything start to finish, or do people break it up? Like having one ECS task fetch work items and queue them, then separate tasks to actually process them? One repo per job or throw them all in a monorepo? Does everyone just use CloudWatch and the ECS console to manage jobs or a third party tool(preferably open source)? What's the standard approach for retries? CloudWatch alarms + SNS? submitted by /u/Character_Status8351 [link] [comments]

  • When does my 50% off voucher actually expire ?
    by /u/sufferingSoftwaredev

    I have a 50% off voucher from taking the DVA in feb 2023, it says on my benefits page that the voucher expires on the 27 of feb 2026, does this mean I have till then to take the exam, or till then to book it, i.e, can i book the exam for April with the voucher as long as I book it before it expires ? submitted by /u/sufferingSoftwaredev [link] [comments]

  • AI-Practitioner worth getting?
    by /u/lukamillie

    Hello, I am an aspiring Cloud Engineer, as of the moment I am studying SAA-C03 and looking to take exam and pass it soon. Is the AI-Practitioner exam worth getting? I mean, I am using AI but not on the projects I am trying to make. Does studying and passing this certification help me to gain more skills in being a Cloud Engineer? submitted by /u/lukamillie [link] [comments]

  • Should I do AWS developer associate or solution architect associate
    by /u/Niki_me8863

    Guys, I have been confused on a topic since few days. Can someone help me in deciding whether I should go ahead preparing for a developer associate or solution architect certificate? I'm currently a 2 years experienced software engineer, working on AWS mainly focused on a few services like EC2, ECS, S3, API Gateway etc., I want to get into devops role mainly as a next step. This will be my first AWS certificate, and I really need your suggestion on helping me out on this based on all factors like difficulty of exam, value, topics, preparation etc., Thankyou very much in advance. submitted by /u/Niki_me8863 [link] [comments]

  • Failed SAA-C03, Any advice?
    by /u/dmvtea

    Failed last week. Was averaging 86% on TD exams…was studying for 2 months. Any advice? submitted by /u/dmvtea [link] [comments]

  • Searching new job opportunities
    by /u/Terrible_Dog9609

    It's very hard to find a job currently in Sri Lanka, so I am going to take the SAA-C03 exam. I also need freelance projects or support tasks to improve my knowledge as a DevOps engineer. How do I find those opportunities to gain some experience on freelance projects? submitted by /u/Terrible_Dog9609 [link] [comments]

  • Using Amazon Bedrock AgentCore via REST API Tutorial
    by /u/nurulmac11

    I’ve been experimenting with Amazon Bedrock AgentCore and couldn’t find many clear examples of using it directly via REST API, so I documented what I learned while setting it up. The post covers: Setting up Agentcore Agent that can use your rest api endpoints as tools Things that weren’t obvious from the docs at first Small implementation details that might save time Sharing in case it helps others working with Amazon Bedrock Agentcore service in real projects. Article: https://medium.com/p/c4f50839fb4d Text me if you can't read article for any reason. Happy to hear feedback or alternative approaches from folks who’ve used it in production. Since this is a very new service, I am not sure if the infra I established is the best way. submitted by /u/nurulmac11 [link] [comments]

 


I Passed AWS Developer Associate Certification DVA-C01 Testimonials

Passed DVA-C01

Passed the certified developer associate this week.

Primary study was Stephane Maarek’s course on Udemy.

I also used the Practice Exams by Stephane Maarek and Abhishek Singh.

I used Stephane’s course and practice exams for the Solutions Architect Associate as well, and find his course does a good job preparing you to pass the exams.

The practice exams were more challenging than the actual exam, so they are a good gauge to see if you are ready for the exam.

Haven’t decided if I’ll do another associate level certification next or try for the solutions architect professional.

Cleared AWS Certified Developer – Associate (DVA-C01)

 

I cleared the Developer Associate exam yesterday. I scored 873.
Actual exam experience: most questions focused mainly on Lambda, API Gateway, DynamoDB, CloudFront, and Cognito (you must know the proper difference between user pools and identity pools).
I found 3 questions just on Redis vs Memcached (so maybe focus more here as well to learn the exact use cases and differences). Other topics were CloudFormation, Beanstalk, STS, and EC2. The exam was a mix of too easy and too tough for me. Some questions were one-liners and some were too long.

Resources: The main resource I used was Udemy: the course by Stéphane Maarek and the practice exams by Neal Davis and Stéphane Maarek. These exams proved really good and even helped me focus on the areas where I was lacking. They are on par with the actual exam; I found 3-4 of the exact same questions in the actual exam (this might be just luck!). So I feel Stéphane's course is more than sufficient and you can trust it. I had achieved the Solutions Architect Associate previously, so I knew the basics and took around 2 weeks for preparation, revising Stéphane's course as much as possible. In parallel I took the practice exams mentioned above, which guided me on where to focus more.

Thanks to all of you and feel free to comment/DM me, if you think I can help you in anyway for achieving the same.

Passed the Developer Associate. My Notes.

  1. There was more SNS “fan out” options. None of the Udemy or Tutorial Dojo tests had that.

  2. They aren’t joking about the 30-minute check-in and expiring your exam if you don’t check in at least 15 minutes beforehand.

  3. When you finish do a pass over review for multi select questions and check the number required.

  4. Review again and look for gotcha phrases like “least operational cost”, “fastest solution”, “most secure”, and “least expensive”. Change your answers if you put what “you” would do.

  5. Watch out for questions that mention services that don’t really have anything to do with the problem.

  6. Look at every service mentioned in the question. You can probably think of a better stack for the solution, but just adhere to what they present.

  7. If you are clueless of an answer start by ruling out the ones you KNOW are wrong and then guess.

  8. Take as many practice exams (like Tutorials Dojo's) as you can. On review, filter for the “incorrect” answers. Open another tab on the subject and read up, or bookmark it.

  9. I would get 50-60% on the first pass at each exam, then 85-95% after reading the answers and the open tabs and bookmarks.

  10. If taking the Pearson proctored online test, download the OnVue app as soon as you can. Installing that thing made me miss the 15-minute window and I had to rebook and pay. Their check-in is confusing between all the cert portals and their own site. Just use “Manage Pearson Exams” from the AWS cert portal to force auth to theirs.

 

New versions of the Developer (Associate) and DevOps Engineer (Professional) exams in Feb/March 2023

AWS Certified Developer – Associate

  • Current version: DVA-C01

  • New version: DVA-C02

  • Last day to take the current exam: 2023-02-27

  • Registration open for the updated exam: 2023-01-31

  • First day to take the updated exam: 2023-02-28

AWS Certified DevOps Engineer – Professional

  • Current version: DOP-C01

  • New version: DOP-C02

  • Last day to take the current exam: 2023-03-06

  • Registration open for the updated exam: 2023-01-31

  • First day to take the updated exam: 2023-03-07

Both updates were posted on 2022-10-04 on https://aws.amazon.com/certification/coming-soon/. The exam guides and practice questions are also available there.

Passed DVA-C01 scored 910

Last January, as usual, I chose some goals to achieve during 2022, including obtaining an AWS certification. In February I started studying using the Pluralsight platform. The course for developers is a good introduction but is messy sometimes. At the end of the course I had a big picture of the main AWS services but I was really confused by all the different topics. Pluralsight also offers some labs that allowed me to practice with AWS services, because my work wasn’t related to the cloud.

Then a senior engineer suggested that I study using the ACloudGuru platform, and I loved their content. They focus only on the details necessary for the exam. During the videos they show useful lists that summarize services and tables that compare them. Moreover, they offer a fairly unrestricted playground to get hands-on. So I could try their labs but also try my first CloudFormation scripts and Lambdas. There are also 4 practice exams that are very similar to the real questions. Even though their platform is buggy sometimes, the subscription was worth it.

At the same time I purchased the pack of practice questions from Jon Bonso on Udemy, and indeed the answer explanations are very detailed and very useful.

For more practice, I added the questions I got wrong to the Anki desktop application and revised them on my phone whenever I could. I ended up with 150 different questions in Anki.

In the meantime, in August I left my software engineer job because my boss told me that I was not able to work with cloud technologies. In September I started a new job as a cloud engineer at a new company. Finally, in October I passed the Developer Associate exam with a score of 910.

I put a lot of effort into preparing for the exam and eventually I got it! However, I believe this is just my first step and I want to keep studying. My next objective is the Solutions Architect Associate certification.

AWS Certified Developer Associate Exam Prep - DVA-C01 DVA-C02

Top 100 AWS Solutions Architect Associate Certification Exam Questions and Answers Dump SAA-C03

Top 60 AWS Solution Architect Associate Exam Tips

What is Google Workspace?
Google Workspace is a cloud-based productivity suite that helps teams communicate, collaborate and get things done from anywhere and on any device. It's simple to set up, use and manage, so your business can focus on what really matters.

Watch a video or find out more here.

Here are some highlights:
Business email for your domain
Look professional and communicate as you@yourcompany.com. Gmail's simple features help you build your brand while getting more done.

Access from any location or device
Check emails, share files, edit documents, hold video meetings and more, whether you're at work, at home or on the move. You can pick up where you left off from a computer, tablet or phone.

Enterprise-level management tools
Robust admin settings give you total command over users, devices, security and more.

Sign up using my link https://referworkspace.app.goo.gl/Q371 and get a 14-day trial, and message me to get an exclusive discount when you try Google Workspace for your business.

Google Workspace Business Standard Promotion code for the Americas 63F733CLLY7R7MM 63F7D7CPD9XXUVT 63FLKQHWV3AEEE6 63JGLWWK36CP7WM
Email me for more promo codes

