Definition 1: Amazon DynamoDB is a fully managed proprietary NoSQL database service that supports key-value and document data structures and is offered by Amazon.com as part of the Amazon Web Services portfolio. DynamoDB exposes a similar data model to, and derives its name from, Dynamo, but has a different underlying implementation: Dynamo had a multi-master design requiring the client to resolve version conflicts, whereas DynamoDB uses synchronous replication across multiple datacenters for high durability and availability.
Definition 2: DynamoDB is a fast and flexible non-relational database service for any scale. DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS so that they don’t have to worry about hardware provisioning, setup and configuration, throughput capacity planning, replication, software patching, or cluster scaling.
Amazon DynamoDB explained
Fully Managed
Fast, consistent Performance
Fine-grained access control
Flexible
AWS DynamoDB Facts and Summaries
Amazon DynamoDB is a low-latency NoSQL database.
DynamoDB consists of Tables, Items, and Attributes
DynamoDB supports both document and key-value data models
DynamoDB supported document formats are JSON, HTML, and XML
DynamoDB has 2 types of Primary Keys: a Partition Key alone, or a combination of Partition Key + Sort Key (Composite Key)
DynamoDB has 2 consistency models: Strongly Consistent / Eventually Consistent
DynamoDB Access is controlled using IAM policies.
DynamoDB offers fine-grained access control using the IAM condition key dynamodb:LeadingKeys to allow users to access only the items where the partition key value matches their user ID (see the policy sketch below).
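The snippet below is a minimal sketch of such a policy attached via boto3. The table name (UserOrders), account ID, and the Cognito identity variable are assumptions for illustration, not values from this guide.

```python
# Minimal sketch of a fine-grained access policy using dynamodb:LeadingKeys.
# Assumptions: a table named "UserOrders" and web identity federation via
# Amazon Cognito (the ${cognito-identity.amazonaws.com:sub} variable).
import json

import boto3

leading_keys_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserOrders",
            "Condition": {
                # Only allow access to items whose partition key equals the caller's identity ID.
                "ForAllValues:StringEquals": {
                    "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
                }
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="UserOrdersLeadingKeys",
    PolicyDocument=json.dumps(leading_keys_policy),
)
```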
DynamoDB Indexes enable fast queries on specific data columns
DynamoDB indexes give you a different view of your data based on alternative Partition / Sort Keys.
DynamoDB Local Secondary Indexes must be created when you create your table; they have the same partition key as your table, but a different sort key.
DynamoDB Global Secondary Indexes can be created at any time: at table creation or afterwards. They can have a different partition key and a different sort key from your table.
A DynamoDB Query operation finds items in a table using only the primary key attributes: you provide the Primary Key attribute name and a distinct value to search for.
A DynamoDB Scan operation examines every item in the table. By default, it returns all data attributes.
DynamoDB Query operation is generally more efficient than a Scan.
With DynamoDB, you can reduce the impact of a query or scan by setting a smaller page size which uses fewer read operations.
To optimize DynamoDB performance, isolate scan operations to specific tables and segregate them from your mission-critical traffic.
To optimize DynamoDB performance, try Parallel scans rather than the default sequential scan.
To optimize DynamoDB performance: Avoid using scan operations if you can: design tables in a way that you can use Query, Get, or BatchGetItems APIs.
When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity.
DynamoDB Provisioned Throughput is measured in Capacity Units.
1 Write Capacity Unit = 1 write of up to 1 KB per second.
1 Read Capacity Unit = 1 strongly consistent read of up to 4 KB per second, or 2 eventually consistent reads of up to 4 KB per second. Eventually consistent reads give you the maximum read throughput (see the worked example below).
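As a worked example of the capacity-unit arithmetic above, here is a small Python sketch; the item sizes and request rates are made up for illustration.

```python
import math

def write_capacity_units(item_size_kb: float, writes_per_second: float) -> float:
    """1 WCU = one write of up to 1 KB per second (size rounded up to the next 1 KB)."""
    return math.ceil(item_size_kb) * writes_per_second

def read_capacity_units(item_size_kb: float, reads_per_second: float,
                        strongly_consistent: bool = True) -> float:
    """1 RCU = one strongly consistent 4 KB read/s, or two eventually consistent 4 KB reads/s."""
    rcu = math.ceil(item_size_kb / 4) * reads_per_second
    return rcu if strongly_consistent else rcu / 2

# Example: 3 KB items, 10 writes/s and 20 eventually consistent reads/s.
print(write_capacity_units(3, 10))                             # 30 WCU
print(read_capacity_units(3, 20, strongly_consistent=False))   # 10 RCU
```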
What is the maximum throughput that can be provisioned for a single DynamoDB table? DynamoDB is designed to scale without limits. However, if you want to exceed throughput rates of 10,000 write capacity units or 10,000 read capacity units for an individual table, you must contact AWS to increase it. If you want to provision more than 20,000 write capacity units or 20,000 read capacity units from a single subscriber account, you must first contact AWS to request a limit increase.
DynamoDB Performance: DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications.
As an in-memory cache, DAX reduces the response times of eventually consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
DAX improves response times for Eventually Consistent reads only.
With DAX, you point your API calls to the DAX cluster instead of your table.
If the item you are querying is in the cache, DAX will return it; otherwise, it will perform an Eventually Consistent GetItem operation against your DynamoDB table.
DAX reduces operational and application complexity by providing a managed service that is API compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
DAX is not suitable for write-intensive applications or applications that require Strongly Consistent reads.
For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
DynamoDB Performance: ElastiCache
An in-memory cache sits between your application and the database.
There are 2 different caching strategies: Lazy Loading and Write-Through (sketched after this list). Lazy Loading only caches the data when it is requested.
ElastiCache node failures are not fatal; they just result in lots of cache misses.
Avoid stale data by implementing a TTL.
The Write-Through strategy writes data into the cache whenever there is a change to the database. Data is never stale.
Write-Through penalty: each write involves a write to the cache. An ElastiCache node failure means that data is missing until it is added or updated in the database again.
ElastiCache resources are wasted if most of the data is never read.
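The sketch below illustrates both caching strategies with the redis-py client against a hypothetical ElastiCache (Redis) endpoint; the load_from_db/save_to_db callables stand in for whatever data-access layer your application uses.

```python
# A minimal sketch of Lazy Loading and Write-Through with redis-py.
# Assumptions: a reachable Redis/ElastiCache endpoint at CACHE_HOST and
# JSON-serializable records; load_from_db/save_to_db are hypothetical helpers.
import json

import redis

CACHE_HOST = "my-cluster.xxxxxx.cache.amazonaws.com"  # hypothetical endpoint
cache = redis.Redis(host=CACHE_HOST, port=6379)

def get_user_lazy(user_id, load_from_db, ttl_seconds=300):
    """Lazy Loading: only cache data when it is requested; a TTL limits staleness."""
    cached = cache.get(f"user:{user_id}")
    if cached is not None:
        return json.loads(cached)            # cache hit
    record = load_from_db(user_id)            # cache miss: read from the database
    cache.setex(f"user:{user_id}", ttl_seconds, json.dumps(record))
    return record

def update_user_write_through(user_id, record, save_to_db):
    """Write-Through: update the cache on every database change, so data is never stale."""
    save_to_db(user_id, record)
    cache.set(f"user:{user_id}", json.dumps(record))
```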
Time To Live (TTL) for DynamoDB allows you to define when items in a table expire so that they can be automatically deleted from the database. TTL is provided at no extra cost as a way to reduce storage usage and reduce the cost of storing irrelevant data without using provisioned throughput. With TTL enabled on a table, you can set a timestamp for deletion on a per-item basis, allowing you to limit storage usage to only those records that are relevant.
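A minimal boto3 sketch of enabling TTL and writing an item with an expiry timestamp follows; the table name, key, and TTL attribute name are assumptions for illustration.

```python
# Enable DynamoDB TTL and write an item that expires.
# Assumptions: a table named "Sessions" with partition key "session_id" and a
# TTL attribute named "expires_at" holding epoch seconds.
import time

import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on the table, pointing it at the expires_at attribute.
dynamodb.update_time_to_live(
    TableName="Sessions",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Write an item that DynamoDB may delete roughly one hour after this timestamp.
dynamodb.put_item(
    TableName="Sessions",
    Item={
        "session_id": {"S": "abc-123"},
        "expires_at": {"N": str(int(time.time()) + 3600)},
    },
)
```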
DynamoDB Security: DynamoDB uses the CMK to generate and encrypt a unique data key for the table, known as the table key. With DynamoDB, an AWS owned or AWS managed CMK can be used to generate and encrypt keys. The AWS owned CMK is free of charge, while the AWS managed CMK is chargeable. Customer managed CMKs are not supported with encryption at rest.
Amazon DynamoDB offers fully managed encryption at rest. DynamoDB encryption at rest provides enhanced security by encrypting your data at rest using an AWS Key Management Service (AWS KMS) managed encryption key for DynamoDB. This functionality eliminates the operational burden and complexity involved in protecting sensitive data.
DynamoDB is an alternative solution that can be used for session management storage. Because the latency of access to data is low, it works well as a data store for session management.
DynamoDB Streams Use Cases and Design Patterns: How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table? How do you trigger an event based on a particular transaction? How do you audit or archive transactions? How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)? As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it’s consistent and can be rolled back before commit.
You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.
AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region.
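As an illustration of the replication/audit use cases above, here is a minimal sketch of a Lambda handler consuming a DynamoDB Streams event, assuming the stream is configured with a view type that includes new images; the target table name is hypothetical.

```python
# A minimal sketch of a Lambda handler processing DynamoDB Streams records,
# e.g. to replicate changes into a second (audit/replica) table.
import boto3

TARGET_TABLE = "AuditTrail"  # hypothetical
dynamodb = boto3.client("dynamodb")

def handler(event, context):
    for record in event.get("Records", []):
        event_name = record["eventName"]          # INSERT, MODIFY, or REMOVE
        keys = record["dynamodb"]["Keys"]
        if event_name in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"].get("NewImage", {})
            # Copy the changed item into the audit/replica table.
            dynamodb.put_item(TableName=TARGET_TABLE, Item={**keys, **new_image})
        elif event_name == "REMOVE":
            dynamodb.delete_item(TableName=TARGET_TABLE, Key=keys)
    return {"processed": len(event.get("Records", []))}
```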
How many global secondary indexes are allowed per table (by default)? 20.
What is one key difference between a global secondary index and a local secondary index? A local secondary index must have the same partition key as the main table
How many tables can an AWS account have per region? 256
How many secondary indexes (global and local combined) are allowed per table (by default)? 25. You can define up to 5 local secondary indexes and 20 global secondary indexes per table (by default), for a total of 25.
How can you increase your DynamoDB table limit in a region? By contacting AWS and requesting a limit increase
For any AWS account, there is an initial limit of 256 tables per region.
The minimum length of a partition key value is 1 byte. The maximum length is 2048 bytes.
The minimum length of a sort key value is 1 byte. The maximum length is 1024 bytes.
For tables with local secondary indexes, there is a 10 GB size limit per partition key value. A table with local secondary indexes can store any number of items, as long as the total size for any one partition key value does not exceed 10 GB.
The following diagram shows a local secondary index named LastPostIndex. Note that the partition key is the same as that of the Thread table, but the sort key is LastPostDateTime.
AWS DynamoDB secondary indexes example
Relational vs Non Relational (SQL vs NoSQL)
Relational vs Non Relational / SQL vs NoSQL / SQL vs NoSQL in AWS
Q0: A Developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently. What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?
A. Amazon DynamoDB auto scaling
B. Amazon DynamoDB cross-region replication
C. Amazon DynamoDB Streams
D. Amazon DynamoDB Accelerator
D. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:
As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
Q2: A security system monitors 600 cameras, saving image metadata every minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?
A. 6000
B. 10
C. 3600
D. 600
B. Write capacity for a DynamoDB table is specified as the number of 1 KB writes per second. In this question, each of the 600 cameras performs one 1 KB write per minute, so we divide 600 by 60 to get the number of 1 KB writes per second. This gives a value of 10.
You can specify the Write capacity in the Capacity tab of the DynamoDB table.
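The same arithmetic, spelled out as a quick sanity check:

```python
# Reproducing the calculation from the explanation above.
cameras = 600
writes_per_camera_per_second = 1 / 60   # one write per minute per camera
wcu_per_write = 1                        # each write is <= 1 KB, so 1 WCU per write

required_wcu = cameras * writes_per_camera_per_second * wcu_per_write
print(required_wcu)                      # 10.0
```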
Q3: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?
A. CustomerID
B. CustomerName
C. Location
D. Age
Answer: A. Use high-cardinality attributes: attributes that have distinct values for each item, such as email_id, employee_no, customer_id, session_id, order_id, and so on. You can also use composite attributes: try to combine more than one attribute to form a unique key. Reference: Choosing the right DynamoDB Partition Key
Q4: A DynamoDB table is set with a Read Throughput capacity of 5 RCU. Which of the following read configuration will provide us the maximum read throughput?
A. Read capacity set to 5 for 4KB reads of data at strong consistency
B. Read capacity set to 5 for 4KB reads of data at eventual consistency
C. Read capacity set to 15 for 1KB reads of data at strong consistency
D. Read capacity set to 5 for 1KB reads of data at eventual consistency
Answer: B. The throughput for option B would be: Read capacity (5) x read size (4 KB) = 20 KB per second at strong consistency. Since eventual consistency is sufficient, the read throughput doubles to 20 x 2 = 40 KB per second, which is the maximum of the four options (see the comparison below).
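A quick sketch that reproduces the explanation's arithmetic for all four options; this follows the question's simplified model, not official AWS capacity sizing guidance.

```python
# Maximum read throughput (KB/s) per the explanation's model:
# throughput = read capacity x read size (KB), doubled for eventually consistent reads.
def throughput_kb_per_s(read_capacity, read_size_kb, eventually_consistent):
    value = read_capacity * read_size_kb
    return value * 2 if eventually_consistent else value

options = {
    "A": throughput_kb_per_s(5, 4, False),   # 20
    "B": throughput_kb_per_s(5, 4, True),    # 40  <- maximum
    "C": throughput_kb_per_s(15, 1, False),  # 15
    "D": throughput_kb_per_s(5, 1, True),    # 10
}
print(max(options, key=options.get))          # B
```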
Q5: Your team is developing a solution that will make use of DynamoDB tables. Due to the nature of the application, the data is needed across a couple of regions across the world. Which of the following would help reduce the latency of requests to DynamoDB from different regions?
A. Enable Multi-AZ for the DynamoDB table
B. Enable global tables for DynamoDB
C. Enable Indexes for the table
D. Increase the read and write throughput for the table
Answer: B. Amazon DynamoDB global tables provide a fully managed solution for deploying a multi-region, multi-master database, without having to build and maintain your own replication solution. When you create a global table, you specify the AWS Regions where you want the table to be available. DynamoDB performs all of the necessary tasks to create identical tables in these Regions, and propagates ongoing data changes to all of them. Reference: Global Tables
Q6: An application is currently accessing a DynamoDB table. Currently the table's queries are performing well. Changes have been made to the application, and now the performance of the application is starting to degrade. After looking at the changes, you see that the queries are making use of an attribute which is not the partition key. Which of the following would be the adequate change to make to resolve the issue?
A. Add an index for the DynamoDB table
B. Change all the queries to ensure they use the partition key
C. Enable global tables for DynamoDB
D. Change the read capacity on the table
Answer: A. Amazon DynamoDB provides fast access to items in a table by specifying primary key values. However, many applications might benefit from having one or more secondary (or alternate) keys available, to allow efficient access to data with attributes other than the primary key. To address this, you can create one or more secondary indexes on a table, and issue Query or Scan requests against these indexes.
A secondary index is a data structure that contains a subset of attributes from a table, along with an alternate key to support Query operations. You can retrieve data from the index using a Query, in much the same way as you use Query with a table. A table can have multiple secondary indexes, which gives your applications access to many different query patterns.
Q7: Company B has created an e-commerce site using DynamoDB and is designing a products table that includes items purchased and the users who purchased the item. When creating a primary key on a table which of the following would be the best attribute for the partition key? Select the BEST possible answer.
A. None of these are correct.
B. user_id where there are many users to few products
C. category_id where there are few categories to many products
D. product_id where there are few products to many users
Answer: B. When designing tables, it is important for the data to be distributed evenly across the entire table. It is best practice for performance to set your primary key where there are many primary keys to few rows. An example would be many users to few products. An example of bad design would be a primary key of product_id where there are few products but many users. Reference: Partition Keys and Sort Keys
Q8: Which API call can be used to retrieve up to 100 items at a time or 16 MB of data from a DynamoDB table?
A. BatchItem
B. GetItem
C. BatchGetItem
D. ChunkGetItem
Answer: C. BatchGetItem
The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.
A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem will return a partial result if the response size limit is exceeded, the table’s provisioned throughput is exceeded, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get. Reference: API-Specific Limits
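A minimal boto3 sketch of BatchGetItem, including the retry loop for UnprocessedKeys; the table and key names are assumptions for illustration.

```python
# Retrieve multiple items in one call and retry anything left unprocessed.
import boto3

dynamodb = boto3.client("dynamodb")

request = {
    "Products": {
        "Keys": [{"product_id": {"S": "p-001"}}, {"product_id": {"S": "p-002"}}],
        "ConsistentRead": False,
    }
}

while request:
    response = dynamodb.batch_get_item(RequestItems=request)
    for item in response["Responses"].get("Products", []):
        print(item)
    # Retry anything DynamoDB could not process (throttling, size limits, etc.).
    request = response.get("UnprocessedKeys") or None
```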
Q9: Which DynamoDB limits can be raised by contacting AWS support?
A. The number of hash keys per account
B. The maximum storage used per account
C. The number of tables per account
D. The number of local secondary indexes per account
E. The number of provisioned throughput units per account
Answer: C. and E.
For any AWS account, there is an initial limit of 256 tables per region. AWS places some default limits on the throughput you can provision. These are the limits unless you request a higher amount. To request a service limit increase see https://aws.amazon.com/support.
Q10: Which approach below provides the least impact to provisioned throughput on the “Product” table?
A. Create an “Images” DynamoDB table to store the Image with a foreign key constraint to the “Product” table
B. Add an image data type to the “Product” table to store the images in binary format
C. Serialize the image and store it in multiple DynamoDB tables
D. Store the images in Amazon S3 and add an S3 URL pointer to the “Product” table item for each image
Answer: D.
Amazon DynamoDB currently limits the size of each item that you store in a table (see Limits in DynamoDB). If your application needs to store more data in an item than the DynamoDB size limit permits, you can try compressing one or more large attributes, or you can store them as an object in Amazon Simple Storage Service (Amazon S3) and store the Amazon S3 object identifier in your DynamoDB item. Compressing large attribute values can let them fit within item limits in DynamoDB and reduce your storage costs. Compression algorithms such as GZIP or LZO produce binary output that you can then store in a Binary attribute type. Reference: Best Practices for Storing Large Items and Attributes
Q11: You’re creating a DynamoDB database for hosting forums. Your “thread” table contains the forum name, and each “forum name” can have one or more “subjects”. What primary key type would you give the thread table in order to allow more than one subject to be tied to the forum primary key name?
A. Hash
B. Range and Hash
C. Primary and Range
D. Hash and Range
Answer: D. Each forum name can have one or more subjects. In this case, ForumName is the hash attribute and Subject is the range attribute.
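A minimal boto3 sketch of creating such a table with a composite (hash + range) primary key and querying all subjects for one forum; the capacity values are arbitrary.

```python
# Create the thread table from Q11 with ForumName (hash) + Subject (range),
# then query every subject under one forum name.
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Thread",
    AttributeDefinitions=[
        {"AttributeName": "ForumName", "AttributeType": "S"},
        {"AttributeName": "Subject", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "ForumName", "KeyType": "HASH"},   # partition key
        {"AttributeName": "Subject", "KeyType": "RANGE"},    # sort key
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)

response = dynamodb.query(
    TableName="Thread",
    KeyConditionExpression="ForumName = :f",
    ExpressionAttributeValues={":f": {"S": "Amazon DynamoDB"}},
)
print(response["Items"])
```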
Definition 1: Amazon S3 or Amazon Simple Storage Service is a “simple storage service” offered by Amazon Web Services that provides object storage through a web service interface. Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to run its global e-commerce network.
Definition 2: Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
AWS S3 Explained graphically:
AWS S3 Facts and summaries
S3 is a universal namespace, meaning each S3 bucket you create must have a unique name that is not being used by anyone else in the world.
S3 is object based: i.e. it allows you to upload files.
Files can be from 0 Bytes to 5 TB
What is the maximum length, in bytes, of a DynamoDB range (sort) primary key attribute value? The maximum length of a range key attribute value is 1024 bytes (the partition key maximum is 2048 bytes, NOT 256 bytes).
S3 has unlimited storage.
Files are stored in Buckets.
Read after write consistency for PUTS of new Objects
Eventual Consistency for overwrite PUTS and DELETES (can take some time to propagate)
S3 Standard (durable, immediately available, frequently accessed)
Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering): It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access.
S3 – One Zone-Infrequent Access (S3 One Zone-IA): Same as IA; however, data is stored in a single Availability Zone only
S3 – Reduced Redundancy Storage (data that is easily reproducible, such as thumbnails, etc.)
Glacier – Archived data, where you can wait 3-5 hours before accessing
You can have a bucket that has different objects stored in S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA.
The default URL for S3 hosted websites lists the bucket name first followed by s3-website-region.amazonaws.com . Example: enoumen.com.s3-website-us-east-1.amazonaws.com
Core fundamentals of an S3 object
Key (name)
Value (data)
Version (ID)
Metadata
Sub-resources (used to manage bucket-specific configuration)
Bucket Policies, ACLs,
CORS
Transfer Acceleration
Object-based storage only for files
Not suitable to install OS on.
Successful uploads will generate an HTTP 200 status code.
S3 Security – Summary
By default, all newly created buckets are PRIVATE.
You can set up access control to your buckets using:
Bucket Policies – Applied at the bucket level
Access Control Lists – Applied at an object level.
S3 buckets can be configured to create access logs, which log all requests made to the S3 bucket. These logs can be written to another bucket.
S3 Encryption
Encryption In-Transit (SSL/TLS)
Encryption At Rest:
Server side Encryption (SSE-S3, SSE-KMS, SSE-C)
Client Side Encryption
Remember that we can use a Bucket Policy to prevent unencrypted files from being uploaded, by creating a policy that only allows requests that include the x-amz-server-side-encryption parameter in the request header (see the policy sketch below).
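A minimal sketch of such a bucket policy applied with boto3, using the Null condition on s3:x-amz-server-side-encryption to deny uploads that omit the header; the bucket name is hypothetical.

```python
# Deny PutObject requests that do not carry the x-amz-server-side-encryption header.
import json

import boto3

BUCKET = "my-encrypted-bucket"  # hypothetical

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            # Deny any upload where the SSE header is absent.
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```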
S3 CORS (Cross Origin Resource Sharing): CORS defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.
Used to enable cross-origin access for your AWS resources, e.g. an S3-hosted website accessing JavaScript or image files located in another bucket. By default, resources in one bucket cannot access resources located in another. To allow this, we need to configure CORS on the bucket being accessed and enable access for the origin (bucket) attempting to access it (see the configuration sketch below).
Always use the S3 website URL, not the regular bucket URL (e.g. https://s3-eu-west-2.amazonaws.com/acloudguru is the regular bucket URL form).
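A minimal boto3 sketch of a CORS configuration on the bucket being accessed; the origin and bucket names are hypothetical.

```python
# Allow a static site hosted in another bucket to fetch assets from this bucket.
import boto3

boto3.client("s3").put_bucket_cors(
    Bucket="my-asset-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["http://mysite.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```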
S3 CloudFront:
Edge locations are not just READ only – you can WRITE to them too (i.e. put an object onto them).
Objects are cached for the life of the TTL (Time to Live)
You can clear cached objects, but you will be charged. (Invalidation)
S3 Performance optimization – 2 main approaches to Performance Optimization for S3:
GET-Intensive Workloads – Use CloudFront
Mixed Workload – Avoid sequential key names for your S3 objects. Instead, add a random prefix like a hex hash to the key name to prevent multiple objects from being stored on the same partition.
The best way to handle large objects uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts.
You can enable versioning on a bucket, even if that bucket already has objects in it. The already existing objects, though, will show their versions as null. All new objects will have version IDs.
Bucket names cannot start with a . or - character. S3 bucket names can contain both the . and - characters, but there can only be one . or one - between labels. E.g. mybucket-com and mybucket.com are valid names, but mybucket--com and mybucket..com are not valid bucket names.
What is the maximum number of S3 buckets allowed per AWS account (by default)? 100
You successfully upload an item to the us-east-1 region. You then immediately make another API call and attempt to read the object. What will happen? All AWS regions now have read-after-write consistency for PUT operations of new objects. Read-after-write consistency allows you to retrieve objects immediately after creation in Amazon S3. Other actions still follow the eventual consistency model (where you will sometimes get stale results if you have recently made changes)
S3 bucket policies require a Principal be defined. Review the access policy elements here
What checksums does Amazon S3 employ to detect data corruption? Amazon S3 uses a combination of Content-MD5 checksums and cyclic redundancy checks (CRCs) to detect data corruption. Amazon S3 performs these checksums on data at rest and repairs any corruption using redundant data. In addition, the service calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.
Q0: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes a longer than expected time to upload the object. You want to improve the performance of the application. Which of the following would you consider?
A. Create multiple threads and upload the objects in the multiple threads
B. Write the items in batches for better performance
C. Use the Multipart upload API
D. Enable versioning on the Bucket
C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
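A minimal sketch using boto3's managed transfer layer, which performs the multipart upload steps (initiate, upload parts, complete) automatically above the configured threshold; bucket, key, file path, and sizes are assumptions for illustration.

```python
# Upload a large object using multipart uploads via boto3's transfer manager.
import boto3
from boto3.s3.transfer import TransferConfig

config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,   # use multipart for objects > 8 MB
    multipart_chunksize=8 * 1024 * 1024,   # 8 MB parts
    max_concurrency=10,                    # upload parts in parallel threads
)

boto3.client("s3").upload_file(
    "/tmp/report-500mb.bin", "my-upload-bucket", "reports/report-500mb.bin",
    Config=config,
)
```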
Q2: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from Amazon S3 buckets?
A. AWS::Serverless::Api
B. AWS::Serverless::Application
C. AWS::Serverless::Layerversion
D. AWS::Serverless::Function
Answer: B. The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from Amazon S3 buckets. Reference: Declaring Serverless Resources
Q3: A static website has been hosted on a bucket and is now being accessed by users. One of the web pages' JavaScript sections has been changed to access data which is hosted in another S3 bucket. Now that same web page is no longer loading in the browser. Which of the following can help alleviate the error?
A. Enable versioning for the underlying S3 bucket.
B. Enable Replication so that the objects get replicated to the other bucket
C. Enable CORS for the bucket
D. Change the Bucket policy for the bucket to allow access from the other bucket
Answer: C. Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:
Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.
Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.
Q4: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user Images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below
A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.
Answer: C. The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3. You can then provide access to the objects based on the key values generated via the user ID.
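A minimal sketch of this pattern with boto3: assume a role through STS after the application authenticates the user, build an S3 client from the temporary credentials, and hand back a short-lived presigned URL for that user's object. The role ARN, bucket, and key layout are hypothetical.

```python
# Temporary, token-based access to a single user's S3 object via STS.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/PhotoAppUserRole",  # hypothetical
    RoleSessionName="user-42",
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Short-lived URL scoped to the object keyed by the user's ID.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "photo-app-images", "Key": "users/42/profile.jpg"},
    ExpiresIn=900,
)
print(url)
```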
Q5: Both ACLs and Bucket Policies can be used to grant access to S3 buckets. Which of the following statements is true about ACLs and Bucket policies?
A. Bucket Policies are Written in JSON and ACLs are written in XML
B. ACLs can be attached to S3 objects or S3 Buckets
C. Bucket Policies and ACLs are written in JSON
D. Bucket policies are only attached to s3 buckets, ACLs are only attached to s3 objects
Answer: A. and B. Only Bucket Policies are written in JSON; ACLs are written in XML. While Bucket Policies are indeed only attached to S3 buckets, ACLs can be attached to S3 Buckets OR S3 Objects.
Q6: What are good options to improve S3 performance when you have significantly high numbers of GET requests?
A. Introduce random prefixes to S3 objects
B. Introduce random suffixes to S3 objects
C. Setup CloudFront for S3 objects
D. Migrate commonly used objects to Amazon Glacier
Answer: C. CloudFront caching is an excellent way to avoid putting extra strain on the S3 service and to improve the response times of requests by caching data closer to users at CloudFront locations. S3 Transfer Acceleration optimizes the TCP protocol and adds additional intelligence between the client and the S3 bucket, making S3 Transfer Acceleration a better choice if a higher throughput is desired. If you have objects that are smaller than 1 GB or if the data set is less than 1 GB in size, you should consider using Amazon CloudFront’s PUT/POST commands for optimal performance. Reference: Amazon S3 Transfer Acceleration
Q7: If an application is storing hourly log files from thousands of instances from a high traffic web site, which naming scheme would give optimal performance on S3?
A. Sequential
B. HH-DD-MM-YYYY-log_instanceID
C. YYYY-MM-DD-HH-log_instanceID
D. instanceID_log-HH-DD-MM-YYYY
E. instanceID_log-YYYY-MM-DD-HH
Answer: A. B. C. D. and E. Amazon S3 now provides increased performance to support at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data, which can save significant processing time for no additional charge. Each S3 prefix can support these request rates, making it simple to increase performance significantly. This S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance. That means you can now use logical or sequential naming patterns in S3 object naming without any performance implications.
Q9: You created three S3 buckets – “mywebsite.com”, “downloads.mywebsite.com”, and “www.mywebsite.com”. You uploaded your files and enabled static website hosting. You specified both of the default documents under the “enable static website hosting” header. You also set the “Make Public” permission for the objects in each of the three buckets. You created the Route 53 aliases for the three buckets. You are going to have your end users test your websites by browsing to http://mywebsite.com/error.html, http://downloads.mywebsite.com/index.html, and http://www.mywebsite.com. What problems will your testers encounter?
A. http://mywebsite.com/error.html will not work because you did not set a value for the error.html file
B. There will be no problems, all three sites should work.
C. http://www.mywebsite.com will not work because the URL does not include a file name at the end of it.
D. http://downloads.mywebsite.com/index.html will not work because the “downloads” prefix is not a supported prefix for S3 websites using Route 53 aliases
Answer: B. It used to be that the only allowed domain prefix when creating Route 53 aliases for S3 static websites was the “www” prefix. However, this is no longer the case; you can now use other subdomains.
What is the AWS Certified Developer Associate Exam?
This AWS Certified Developer-Associate Examination is intended for individuals who perform a Developer role. It validates an examinee’s ability to:
Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
Demonstrate proficiency in developing, deploying, and debugging cloud-based applications by using AWS
Recommended general IT knowledge: The target candidate should have the following:
In-depth knowledge of at least one high-level programming language
Understanding of application lifecycle management
The ability to write code for serverless applications
Understanding of the use of containers in the development process
Recommended AWS knowledge The target candidate should be able to do the following:
Use the AWS service APIs, CLI, and software development kits (SDKs) to write applications
Identify key features of AWS services
Understand the AWS shared responsibility model
Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy applications on AWS
Use and interact with AWS services
Apply basic understanding of cloud-native applications to write code
Write code by using AWS security best practices (for example, use IAM roles instead of secret and access keys in the code)
Author, maintain, and debug code modules on AWS
What is considered out of scope for the target candidate? The following is a non-exhaustive list of related job tasks that the target candidate is not expected to be able to perform. These items are considered out of scope for the exam:
Design architectures (for example, distributed system, microservices)
Design and implement CI/CD pipelines
Administer IAM users and groups
Administer Amazon Elastic Container Service (Amazon ECS)
Design AWS networking infrastructure (for example, Amazon VPC, AWS Direct Connect)
Understand compliance and licensing
Exam content – Response types: There are two types of questions on the exam:
Multiple choice: Has one correct response and three incorrect responses (distractors)
Multiple response: Has two or more correct responses out of five or more response options
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose. Distractors are generally plausible responses that match the content area. Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.
Unscored content The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.
Exam results The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines. Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720. Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels. Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam. Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.
Content outline This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context. The percentage in each domain represents only scored content.
Domain 1: Deployment – 22%
Domain 2: Security – 26%
Domain 3: Development with AWS Services – 30%
Domain 4: Refactoring – 10%
Domain 5: Monitoring and Troubleshooting – 12%
Domain 1: Deployment 1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns. – Commit code to a repository and invoke build, test and/or deployment actions – Use labels and branches for version and release management – Use AWS CodePipeline to orchestrate workflows against different environments – Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS CodeDeploy for CI/CD purposes – Perform a roll back plan based on application deployment policy
1.2 Deploy applications using AWS Elastic Beanstalk. – Utilize existing supported environments to define a new application stack – Package the application – Introduce a new application version into the Elastic Beanstalk environment – Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable) – Validate application health using Elastic Beanstalk dashboard – Use Amazon CloudWatch Logs to instrument application logging
1.3 Prepare the application deployment package to be deployed to AWS. – Manage the dependencies of the code module (like environment variables, config files and static image files) within the package – Outline the package/container directory structure and organize files appropriately – Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)
1.4 Deploy serverless applications. – Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template – Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)
Domain 2: Security 2.1 Make authenticated calls to AWS services. – Communicate required policy based on least privileges required by application. – Assume an IAM role to access a service – Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)
2.2 Implement encryption using AWS services. – Encrypt data at rest (client side; server side; envelope encryption) using AWS services – Encrypt data in transit
2.3 Implement application authentication and authorization. – Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools – Use Amazon Cognito-provided credentials to write code that accesses AWS services – Use Amazon Cognito Sync to synchronize user profiles and data – Use developer-authenticated identities to interact between end user devices, backend authentication, and Amazon Cognito
Domain 3: Development with AWS Services 3.1 Write code for serverless applications. – Compare and contrast server-based vs. serverless model (e.g., micro services, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications) – Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler) – Create an API endpoint using Amazon API Gateway – Create and test appropriate API actions like GET, POST using the API endpoint – Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes) – Compute read/write capacity units for Amazon DynamoDB based on application requirements – Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis) – Invoke an AWS Lambda function synchronously and asynchronously
3.2 Translate functional requirements into application design. – Determine real-time vs. batch processing for a given use case – Determine use of synchronous vs. asynchronous for a given use case – Determine use of event vs. schedule/poll for a given use case – Account for tradeoffs for consistency models in an application design
Domain 4: Refactoring 4.1 Optimize applications to best use AWS services and features. Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache) Apply an Amazon S3 naming scheme for optimal read performance
4.2 Migrate existing application code to run on AWS. – Isolate dependencies – Run the application as one or more stateless processes – Develop in order to enable horizontal scalability – Externalize state
Domain 5: Monitoring and Troubleshooting
5.1 Write code that can be monitored. – Create custom Amazon CloudWatch metrics – Perform logging in a manner available to systems operators – Instrument application source code to enable tracing in AWS X-Ray
5.2 Perform root cause analysis on faults found in testing or production. – Interpret the outputs from the logging mechanism in AWS to identify errors in logs – Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues – Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component
Which key tools, technologies, and concepts might be covered on the exam?
The following is a non-exhaustive list of the tools and technologies that could appear on the exam. This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam. The general tools and technologies in this list appear in no particular order. AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance: – Analytics – Application Integration – Containers – Cost and Capacity Management – Data Movement – Developer Tools – Instances (virtual machines) – Management and Governance – Networking and Content Delivery – Security – Serverless
The following is a non-exhaustive list of AWS services and features that are not covered on the exam. These services and features do not represent every AWS offering that is excluded from the exam content. Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant. Out-of-scope AWS services and features include the following: – AWS Application Discovery Service – Amazon AppStream 2.0 – Amazon Chime – Amazon Connect – AWS Database Migration Service (AWS DMS) – AWS Device Farm – Amazon Elastic Transcoder – Amazon GameLift – Amazon Lex – Amazon Machine Learning (Amazon ML) – AWS Managed Services – Amazon Mobile Analytics – Amazon Polly
– Amazon QuickSight – Amazon Rekognition – AWS Server Migration Service (AWS SMS) – AWS Service Catalog – AWS Shield Advanced – AWS Shield Standard – AWS Snow Family – AWS Storage Gateway – AWS WAF – Amazon WorkMail – Amazon WorkSpaces
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
AWS Certified Developer – Associate Practice Questions And Answers Dump
Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost. How can you accommodate the partners’ broken web services without wasting your resources?
A. Create a delay queue and set DelaySeconds to 30 seconds
B. Requeue the message with a VisibilityTimeout of 30 seconds.
C. Create a dead letter queue and set the Maximum Receives to 3.
D. Requeue the message with a DelaySeconds of 30 seconds.
C. After a message is taken from the queue and returned for the maximum number of retries, it is automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.
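A minimal boto3 sketch of configuring such a dead letter queue with a maxReceiveCount of 3 via a redrive policy; the queue URLs are hypothetical.

```python
# Attach a dead letter queue to the source queue so undeliverable commands
# are preserved instead of being redelivered forever.
import json

import boto3

sqs = boto3.client("sqs")

dlq_arn = sqs.get_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/commands-dlq",
    AttributeNames=["QueueArn"],
)["Attributes"]["QueueArn"]

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/commands",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
        )
    },
)
```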
Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of reads operations to write operations will be 1000 to 1, with the same data being accessed frequently. What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?
A. Amazon DynamoDB auto scaling
B. Amazon DynamoDB cross-region replication
C. Amazon DynamoDB Streams
D. Amazon DynamoDB Accelerator
D. The AWS Documentation mentions the following:
DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios
As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
Q2: You are creating a DynamoDB table with the following attributes:
PurchaseOrderNumber (partition key)
CustomerID
PurchaseDate
TotalPurchaseValue
One of your applications must retrieve items from the table to calculate the total value of purchases for a particular customer over a date range. What secondary index do you need to add to the table?
A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the TotalPurchaseValue into the index provides all the data needed to satisfy the use case.
Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
Local Secondary Indexes still rely on the original Hash Key. When you create a table with hash+range, think of the LSIs as hash+range1, hash+range2, ... hash+range6: you get up to 5 more range attributes to query on, and the table and its LSIs share a single provisioned throughput.
Global Secondary Indexes define a new paradigm: different hash/range keys per index. This breaks the original model of one hash key per table, which is also why, when defining a GSI, you are required to add provisioned throughput per index and pay for it.
Local Secondary Indexes can only be created when you create the table; there is no way to add a Local Secondary Index to an existing table, and once you create the index you cannot delete it.
Global Secondary Indexes can be created when you create the table or added to an existing table later; deleting an existing Global Secondary Index is also allowed.
Throughput:
Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table and another for the index. Both operations will consume write capacity units from the table.
Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation consumes read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table and another for the index.
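As a rough illustration of the setup described in Q2's answer, here is a minimal boto3 sketch (table name, index name, and capacity values are hypothetical, and it assumes provisioned capacity mode) that creates the table with a Global Secondary Index carrying its own throughput and an INCLUDE projection:

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="PurchaseOrders",  # hypothetical table name
    AttributeDefinitions=[
        {"AttributeName": "PurchaseOrderNumber", "AttributeType": "S"},
        {"AttributeName": "CustomerID", "AttributeType": "S"},
        {"AttributeName": "PurchaseDate", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "PurchaseOrderNumber", "KeyType": "HASH"}],
    GlobalSecondaryIndexes=[
        {
            "IndexName": "CustomerID-PurchaseDate-index",
            "KeySchema": [
                {"AttributeName": "CustomerID", "KeyType": "HASH"},
                {"AttributeName": "PurchaseDate", "KeyType": "RANGE"},
            ],
            # Project only the attribute the query needs.
            "Projection": {
                "ProjectionType": "INCLUDE",
                "NonKeyAttributes": ["TotalPurchaseValue"],
            },
            # A GSI is billed for its own capacity, separate from the table.
            "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        }
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)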
Q5: Lambda allows you to upload code and dependencies for function packages:
A. Only from a directly uploaded zip file
B. Only via SFTP
C. Only from a zip file in AWS S3
D. From a zip file in AWS S3 or uploaded directly from elsewhere
Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?
A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.
D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return requests. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return requests.
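A minimal boto3 sketch of the kind of fix the answer implies (the security group ID and CIDR range are placeholders); because security groups are stateful, this single inbound rule is enough for SSH to work:

import boto3

ec2 = boto3.client("ec2")
# One inbound rule; the return traffic is allowed automatically by the stateful security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpProtocol="tcp",
    FromPort=22,
    ToPort=22,
    CidrIp="203.0.113.0/24",         # restrict SSH to your own address range
)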
Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?
A. Create and assign EIP to each instance
B. Create and attach a second IGW to the VPC.
C. Create and utilize a NAT Gateway
D. Connect to a VPN
C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.
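For illustration, a hedged boto3 sketch of the NAT gateway setup (all resource IDs are placeholders): allocate an Elastic IP, create the NAT gateway in the public subnet, and point the private subnet's route table at it for internet-bound traffic:

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a *public* subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111",          # placeholder public subnet ID
    AllocationId=eip["AllocationId"],
)

# Route the private subnet's internet-bound traffic through the NAT gateway.
# (The route becomes effective once the NAT gateway reaches the "available" state.)
ec2.create_route(
    RouteTableId="rtb-0bbb2222",         # placeholder private route table ID
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)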
Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?
A. Security Groups
B. Route Tables
C. Elastic Load Balancer
D. Auto Scaling
D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.
Q11: You’re writing a script with an AWS SDK that uses the AWS API actions to create AMIs from non-EBS-backed (instance store-backed) instances. Which API call occurs in the final step of creating such an AMI?
A. RegisterImage
B. CreateImage
C. ami-register-image
D. ami-create-image
A. It is RegisterImage. AWS API actions follow this CamelCase capitalization and do not contain hyphens.
Q12: When dealing with session state in EC2-based applications behind an Elastic Load Balancer, which option is generally considered the best practice for managing user sessions?
A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
B. Permanently assigning users to specific instances and always routing their traffic to those instances
C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance
Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?
A. Autoscaling requires using Amazon EBS-backed instances
B. Virtual Private Cloud requires EBS backed instances
C. Amazon EBS-backed instances can be stopped and restarted without losing data
D. Instance-store backed instances can be stopped and restarted without losing data
C. Instance-store backed images use “ephemeral” (temporary) storage. The storage is only available during the life of an instance: rebooting an instance allows the ephemeral data to persist, but stopping and starting an instance will remove all ephemeral storage.
Q15: After creating a new Linux instance on Amazon EC2 and downloading the private key file (my_key.pem), you try to SSH into the instance using the following command: ssh -i my_key.pem ec2-user@52.2.222.22. However, you receive the error "WARNING: UNPROTECTED PRIVATE KEY FILE!". What is the most probable reason for this, and how can you fix it?
A. You do not have root access on your terminal and need to use the sudo option for this to work.
B. You do not have enough permissions to perform the operation.
C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.
D. You need to run something like: chmod 400 my_key.pem
Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?
A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.
D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.
Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:
A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
B. Can only be used to launch EC2 instances in the same country as the AMI is stored
C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
D. Can be used to launch EC2 instances in any AWS region
C. AMIs are only available in the region in which they are created. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region; however, you can copy an AMI from one region to another.
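A small boto3 sketch of that copy operation (region names and the AMI ID are placeholders); note that the call is made against the destination region:

import boto3

# Copy an AMI from us-east-1 into eu-west-1.
ec2 = boto3.client("ec2", region_name="eu-west-1")   # client in the *destination* region
ec2.copy_image(
    Name="my-app-ami-copy",
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion="us-east-1",
)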
Q18: Which of the following statements is true about the Elastic File System (EFS)?
A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
B. EFS can be used by multiple EC2 instances simultaneously
C. EFS cannot be used by an instance using EBS
D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes
Q: Which of the following are benefits of using IAM groups? (Select TWO.)
A. The ability to create custom permission policies.
B. Assigning IAM permission policies to more than one user at a time.
C. Easier user/policy management.
D. Allowing EC2 instances to gain access to S3.
B. and C.
A. is incorrect: This is a benefit of IAM generally or a benefit of IAM policies. But IAM groups don’t create policies, they have policies attached to them.
Q22: What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?
A. Amazon DynamoDB auto scaling
B. Amazon DynamoDB cross-region replication
C. Amazon DynamoDB Streams
D. Amazon DynamoDB Accelerator
D. DAX is a DynamoDB-compatible, in-memory caching service. For a read-heavy workload where the same data is accessed frequently, it reduces read latency and the need to over-provision read capacity (see the fuller explanation under Q1 above).
Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1 micro instance. How can the Developer change the EC2 instance type to m4.large?
A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1 micro to m4.large.
B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.
B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence. Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation. During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:
Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options that apply at this level unless overridden.
Saved Configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
Configuration Files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle.
Configuration files are executed in alphabetical order. For example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
Default Values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.
If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or settings applied directly to the environment, the setting is stored as part of the environment’s configuration. These settings can be removed with the AWS CLI or with the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.
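For illustration only (the environment name is hypothetical), the highest-precedence source, settings applied directly to the environment, could be applied with a boto3 call roughly like this:

import boto3

eb = boto3.client("elasticbeanstalk")
# Settings applied directly to the environment take the highest precedence.
eb.update_environment(
    EnvironmentName="my-prod-env",  # hypothetical environment name
    OptionSettings=[
        {
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "InstanceType",
            "Value": "m4.large",
        }
    ],
)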
Q24: What statements are true about Availability Zones (AZs) and Regions?
A. There is only one AZ in each AWS Region
B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
C. AZs can be moved between AWS Regions based on your needs
D. There are (almost always) two or more AZs in each AWS Region
Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?
A. Eventual Consistent Reads
B. Conditional reads for Consistency
C. Strongly Consistent Reads
D. Not possible
C. This is stated clearly in the AWS documentation on read consistency for DynamoDB: only with strongly consistent reads are you guaranteed to get the most up-to-date value, reflecting all prior successful writes.
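A short boto3 sketch of the difference (table and key names are hypothetical):

import boto3

table = boto3.resource("dynamodb").Table("Orders")   # hypothetical table name
# Strongly consistent read: reflects all writes acknowledged before the request.
item = table.get_item(Key={"OrderId": "1234"}, ConsistentRead=True)
# Eventually consistent read (the default): costs half as much but may lag slightly.
stale_ok = table.get_item(Key={"OrderId": "1234"})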
Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is taken during the migration process. Which of the following steps would you consider for this requirement?
A. Create an Opswork stack and deploy the Docker containers
B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.
B. The Elastic Beanstalk service is the ideal service to quickly provision development environments. You can also create environments which can be used to host Docker based containers.
Q28: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes a longer than expected time to upload the object. You want to improve the performance of the application. Which of the following would you consider?
A. Create multiple threads and upload the objects in the multiple threads
B. Write the items in batches for better performance
C. Use the Multipart upload API
D. Enable versioning on the Bucket
C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
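A minimal sketch using boto3's managed transfer, which performs the multipart upload for you once the object crosses the configured threshold (bucket, key, file name, and sizes are illustrative):

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")
# Objects above the threshold are split into parts and uploaded in parallel threads.
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,
    multipart_chunksize=8 * 1024 * 1024,
    max_concurrency=10,
)
s3.upload_file("video.mp4", "my-bucket", "uploads/video.mp4", Config=config)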
Q29: A security system monitors 600 cameras, saving image metadata every 1 minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?
A. 6000
B. 10
C. 3600
D. 600
B. Write capacity for a DynamoDB table is specified as the number of 1 KB writes per second. Since each camera writes 1 KB once per minute, divide the 600 writes per minute by 60 to get the number of 1 KB writes per second, which gives a value of 10.
You can specify the Write capacity in the Capacity tab of the DynamoDB table.
Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?
A. Lazy loading
B. Write-through
C. Error retries
D. Exponential backoff
Answer – A Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. All other options are incorrect. Reference: Caching Strategies
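A minimal lazy-loading sketch, assuming a hypothetical ElastiCache for Redis endpoint and a hypothetical db.fetch_customer helper that reads from RDS:

import json
import redis   # redis-py client; endpoint below is a placeholder

cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_customer(customer_id, db):
    cached = cache.get(f"customer:{customer_id}")
    if cached is not None:                       # cache hit: no call to RDS
        return json.loads(cached)
    record = db.fetch_customer(customer_id)      # cache miss: read from RDS (hypothetical helper)
    cache.setex(f"customer:{customer_id}", 300, json.dumps(record))  # populate with a TTL
    return record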
Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15-60 seconds. How should the Developer efficiently query the queue for new messages?
A. Use long polling
B. Set a custom visibility timeout
C. Use short polling
D. Implement exponential backoff
Answer – A Long polling will help ensure that the application makes fewer requests for messages in a given period of time, which is more cost-effective. Since the messages will only be available after at least 15 seconds and we don’t know exactly when they will arrive, it is better to use long polling. Reference: Amazon SQS Long Polling
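A short boto3 sketch of long polling (the queue URL is a placeholder); a WaitTimeSeconds of up to 20 makes the call block until a message arrives instead of returning empty responses:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder URL

resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    print(msg["Body"])   # process the message here
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])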
Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With the new Lambda function working as expected, which of the following will shift traffic from the original Lambda function to the new Lambda function in the shortest time frame?
A. Canary10Percent5Minutes
B. Linear10PercentEvery10Minutes
C. Canary10Percent15Minutes
D. Linear10PercentEvery1Minute
Answer – A With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment and all remaining traffic is shifted after 5 minutes. Reference: Gradual Code Deployment
Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?
A. AWS::Serverless::Api
B. AWS::Serverless::Application
C. AWS::Serverless::Layerversion
D. AWS::Serverless::Function
Answer – B The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket. Reference: Declaring Serverless Resources
Q35: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the following is true with regards to Envelope Encryption?
A. Data is encrypted by an encrypted Data key which is further encrypted using an encrypted Master Key.
B. Data is encrypted by plaintext Data key which is further encrypted using encrypted Master Key.
C. Data is encrypted by encrypted Data key which is further encrypted using plaintext Master Key.
D. Data is encrypted by plaintext Data key which is further encrypted using plaintext Master Key.
Answer – D With Envelope Encryption, unencrypted data is encrypted using a plaintext Data key. That Data key is then encrypted under the plaintext Master key, which is securely stored in AWS KMS and known as a Customer Master Key (CMK). Reference: AWS Key Management Service Concepts
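A minimal boto3 sketch of envelope encryption with KMS (the key alias is a placeholder): request a data key, encrypt locally with the plaintext copy, and store only the encrypted copy alongside the ciphertext:

import boto3

kms = boto3.client("kms")
# Ask KMS for a data key generated under a CMK.
resp = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")  # placeholder alias
plaintext_key = resp["Plaintext"]        # use this locally to encrypt the data, then discard it
encrypted_key = resp["CiphertextBlob"]   # store this with the ciphertext
# Later, kms.decrypt(CiphertextBlob=encrypted_key) returns the plaintext data key again.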
Q36: You are developing an application that will be comprised of the following architecture –
A set of Ec2 instances to process the videos.
These (Ec2 instances) will be spun up by an autoscaling group.
SQS Queues to maintain the processing messages.
There will be 2 pricing tiers.
How will you ensure that the premium customers’ videos are given more preference?
A. Create 2 Autoscaling Groups, one for normal and one for premium customers
B. Create 2 set of Ec2 Instances, one for normal and one for premium customers
C. Create 2 SQS queues, one for normal and one for premium customers
D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.
Answer – C The ideal option is to create 2 SQS queues; messages can then be processed by the application from the high-priority queue first. The other options are not ideal: they would lead to extra cost and extra maintenance. Reference: SQS
Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?
A. CustomerID
B. CustomerName
C. Location
D. Age
Answer- A Use high-cardinality attributes, i.e. attributes that have distinct values for each item, such as email ID, employee number, customer ID, session ID, order ID, and so on. You can also use composite attributes: try to combine more than one attribute to form a unique key. Reference: Choosing the right DynamoDB Partition Key
Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for any network delays. Which of the following two mechanisms should he implement in the application?
A. Multiple SQS queues
B. Exponential backoff algorithm
C. Retries in your application code
D. Consider using the Java sdk.
Answer- B. and C. In addition to simple retries, each AWS SDK implements exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values, and should be set based on the operation being performed, as well as other local factors, such as network latency. Reference: Error Retries and Exponential Backoff in AWS
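A generic backoff-with-jitter sketch in Python (parameter values are illustrative, and this is separate from the retries the AWS SDKs already perform on your behalf):

import random
import time

def call_with_backoff(operation, max_retries=5, base_delay=0.2, max_delay=5.0):
    """Retry `operation` with exponential backoff and full jitter (a generic sketch)."""
    for attempt in range(max_retries):
        try:
            return operation()
        except Exception:                  # in real code, catch only throttling/5xx errors
            if attempt == max_retries - 1:
                raise
            # Progressively longer waits, capped at max_delay, with random jitter.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))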
Q39: An application is being developed that is going to write data to a DynamoDB table. You have to set up the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds. Each item is 6 KB in size. The reads can be eventually consistent reads. What read capacity should be set on the table?
A. 10
B. 20
C. 6
D. 30
Answer – A
Since 300 items are read every 30 seconds, that means 300/30 = 10 items are read every second. Each item is 6 KB in size and a strongly consistent read unit covers 4 KB, so 2 read units are required per item, giving a total of 2 × 10 = 20 read units per second. Since eventual consistency is acceptable, we can divide the 20 reads by 2, which gives a read capacity of 10.
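The same arithmetic as a tiny Python check:

import math

items_per_second = 300 / 30                                   # 10 items read per second
rcu_per_item = math.ceil(6 / 4)                               # 6 KB items, 4 KB read units -> 2
strongly_consistent_rcu = items_per_second * rcu_per_item     # 20
eventually_consistent_rcu = strongly_consistent_rcu / 2       # 10
print(eventually_consistent_rcu)                              # 10.0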
Q40: You are in charge of deploying an application that will be hosted on an EC2 instance and sit behind an Elastic Load Balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the below options satisfies this requirement?
A. Use AWS CloudTrail with your load balancer
B. Enable access logs on the load balancer
C. Use a CloudWatch Logs Agent
D. Create a custom metric CloudWatch filter on your load balancer
Answer – B Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues. Reference: Access Logs for Your Application Load Balancer
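A hedged boto3 sketch of enabling access logs on an Application Load Balancer (the load balancer ARN and bucket name are placeholders, and the bucket needs a policy allowing log delivery):

import boto3

elbv2 = boto3.client("elbv2")
# Access logs are a load balancer attribute delivered to an S3 bucket you own.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/50dc6c495c0c9188",  # placeholder
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-alb-logs-bucket"},
        {"Key": "access_logs.s3.prefix", "Value": "prod"},
    ],
)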
Q41: A static website has been hosted on a bucket and is now being accessed by users. The JavaScript section of one of the web pages has been changed to access data hosted in another S3 bucket. Now that same web page is no longer loading in the browser. Which of the following can help alleviate the error?
A. Enable versioning for the underlying S3 bucket.
B. Enable Replication so that the objects get replicated to the other bucket
C. Enable CORS for the bucket
D. Change the Bucket policy for the bucket to allow access from the other bucket
Answer – C
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:
Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.
Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.
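A minimal boto3 sketch of a CORS configuration for the second data bucket (the bucket name and allowed origin are illustrative):

import boto3

s3 = boto3.client("s3")
# Allow GET requests from the website origin to the bucket holding the shared data.
s3.put_bucket_cors(
    Bucket="data-bucket",   # hypothetical bucket name
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)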
Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user Images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below
A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.
Answer- C The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3. You can then provide access to the objects based on key values generated for the user ID.
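A hedged boto3 sketch of the pattern (role ARN, bucket, and key are placeholders): exchange the application-level login for temporary STS credentials, then hand the client a short-lived presigned URL for its own object:

import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/PhotoAppS3Access",   # hypothetical role
    RoleSessionName="user-42",
)["Credentials"]

# S3 client built from the temporary credentials only.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "user-images", "Key": "users/42/photo.jpg"},  # placeholders
    ExpiresIn=300,
)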
Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.
A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.
Answer – C Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data. Reference: Amazon Kinesis
Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?
A. AWS Simple Storage Service
B. AWS DynamoDB
C. AWS RDS
D. AWS Redshift
Answer – B DynamoDB is an alternative solution that can be used for session storage. Its data-access latency is low, so it works well as a data store for session management. Reference: Scalable Session Handling in PHP Using Amazon DynamoDB
Q45: Your application currently interacts with a DynamoDB table. Records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the below features should be used when developing such a solution?
A. AWS DynamoDB Encryption
B. AWS DynamoDB Streams
C. AWS DynamoDB Accelerator
D. AWS Table Accelerator
Answer – B DynamoDB Streams Use Cases and Design Patterns This post describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. We will consider how to manage the following scenarios:
How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
How do you trigger an event based on a particular transaction?
How do you audit or archive transactions?
How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?
Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states—success or failure. In other words, there is no partial completion. As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it’s consistent and can be rolled back before commit.
You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.
AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region. All of the other options are incorrect since none of these would meet the core requirement. Reference: DynamoDB Streams Use Cases and Design Patterns
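A minimal boto3 sketch of the first step, enabling the stream on the primary table (the table name is a placeholder); a Lambda trigger or other stream consumer would then write the corresponding records into the secondary table:

import boto3

dynamodb = boto3.client("dynamodb")
# Turn on the stream so item-level changes are captured for downstream processing.
dynamodb.update_table(
    TableName="PrimaryTable",   # placeholder table name
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)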
Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB , and the scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?
A. Large Page size
B. Reduced page size
C. Parallel Scans
D. Sequential scans
Answer – B When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table’s provisioned throughput. Reduce page size Because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling. Reference1: Rate-Limited Scans in Amazon DynamoDB
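A small boto3 sketch of a rate-limited scan using the Limit parameter and pagination (the table name and page size are illustrative):

import boto3

table = boto3.resource("dynamodb").Table("LargeTable")   # hypothetical table name
scanned = 0
kwargs = {"Limit": 40}          # small page size -> fewer read units consumed per request
while True:
    page = table.scan(**kwargs)
    scanned += page["Count"]
    if "LastEvaluatedKey" not in page:
        break
    kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
    # optionally sleep here to create a pause between pages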
Q47: Which of the following is a correct way of passing a stage variable to an HTTP URL? (Select TWO.)
A. http://example.com/${}/prod
B. http://example.com/${stageVariables.}/prod
C. http://${stageVariables.}.example.com/dev/operation
D. http://${stageVariables}.example.com/dev/operation
E. http://${}.example.com/dev/operation
F. http://example.com/${stageVariables}/prod
Answer – B. and C. A stage variable can be used as part of an HTTP integration URL in the following cases: a full URI without protocol, a full domain, a subdomain, a path, or a query string. In the above case, options B and C use the stage variable as a path and as a subdomain respectively. Reference: Amazon API Gateway Stage Variables Reference
Q48: Your company is planning on creating new development environments in AWS. They want to make use of the existing Chef recipes that they use for their on-premises server configuration. Which of the following services would be ideal to use in this regard?
A. AWS Elastic Beanstalk
B. AWS OpsWorks
C. AWS Cloudformation
D. AWS SQS
Answer – B AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management. Reference: AWS OpsWorks
Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?
A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
B. Configure S3 bucket tags with your AWS access keys for the bucket hosting your website so that the application can query them for access.
C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.
Answer – C With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be used this way. Reference: About Web Identity Federation
Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?
A. Cognito Data
B. Cognito Events
C. Cognito Streams
D. Cognito Callbacks
Answer – C Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can now configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams.
Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? Choose 2 answers from the options given below.
A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function
Answer: A and C. AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC. Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC
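A hedged boto3 sketch of attaching the function to the VPC (function name, subnet IDs, and security group IDs are placeholders):

import boto3

lambda_client = boto3.client("lambda")
# Lambda uses these IDs to create the elastic network interfaces in your VPC.
lambda_client.update_function_configuration(
    FunctionName="my-script",   # placeholder function name
    VpcConfig={
        "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],
        "SecurityGroupIds": ["sg-0ccc3333"],
    },
)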
Q52: You’ve currently been tasked to migrate an existing on-premises environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant environments in the Elastic Beanstalk service that would be suitable to host your application. What should you consider doing in this case?
A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
B. Consider using Cloudformation to deploy your environment to Elastic Beanstalk
C. Consider using Packer to create a custom platform
D. Consider deploying your application using the Elastic Container Service
Answer – C Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a Custom Image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software, for which Elastic Beanstalk doesn’t provide a platform out of the box.
Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance.
To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems—Ubuntu, RHEL, or Amazon Linux (see the flavor entry in Platform.yaml File Format for the exact version numbers)—and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings. Reference: AWS Elastic Beanstalk Custom Platforms
Q53: Company B is writing 10 items to a DynamoDB table every second. Each item is 15.5 KB in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.
A. 10
B. 160
C. 155
D. 16
Answer – B. Write capacity units are billed in 1 KB increments, so a 15.5 KB item rounds up to 16 write capacity units per write. Writing 10 such items every second therefore requires 10 × 16 = 160 provisioned write capacity units. Reference: Read/Write Capacity Mode
Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?
Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?
A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.
Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?
A. Transforms
B. Outputs
C. Resources
D. Instances
Answer: C. The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3. Reference: Resources
Q64: Which AWS service can be used to fully automate your entire release process?
A. CodeDeploy
B. CodePipeline
C. CodeCommit
D. CodeBuild
Answer: B. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?
A. Outputs
B. Transforms
C. Resources
D. Exports
Answer: A. Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack. Reference: CloudFormation Outputs
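Purely as a sketch tying Q63 and Q65 together (resource names, the AMI ID, and the export name are hypothetical), a minimal template with a Resources section and an Outputs section that exports a value might look roughly like this when submitted with boto3:

import json
import boto3

template = {
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "ami-0123456789abcdef0", "InstanceType": "t3.micro"},
        }
    },
    "Outputs": {
        "WebServerId": {
            "Value": {"Ref": "WebServer"},
            # Export makes the value importable by another stack.
            "Export": {"Name": "shared-webserver-id"},
        }
    },
}

boto3.client("cloudformation").create_stack(
    StackName="demo-stack", TemplateBody=json.dumps(template)
)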
Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?
A. Inputs
B. Resources
C. Transforms
D. Files
Answer: C. Transforms is used to reference code located in S3 and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments. Reference: Transforms
Q67: You are deploying an application to a number of Ec2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?
Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?
A. Share the code using an EBS volume
B. Copy and paste the code into the template each time you need to use it
C. Use a cloudformation nested stack
D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.
Q72: Which of the following is an encrypted key used by KMS to encrypt your data?
A. Customer Managed Key
B. Encryption Key
C. Envelope Key
D. Customer Master Key
Answer: C. Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption: the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.
Q75: A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.)
A. Compiled application code
B. Java runtime environment
C. References to the event sources
D. Lambda execution role
E. Application dependencies
Answer: A. E. Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your code and any dependencies; the runtime and the execution role are not part of the package. Reference: Lambda deployment packages.
Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package?
A. A launch template for the Amazon EC2 Auto Scaling group
B. A CodeDeploy AppSpec file
C. An EC2 role that grants the application access to AWS services
D. An IAM policy that grants the application access to AWS services
Answer: B. Notes: The CodeDeploy AppSpec (application specific) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file. Reference: CodeDeploy application specification (AppSpec) files. Category: Deployment
Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)
A. Create a new Lambda version every time a new code release needs testing.
B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version.
C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT.
D. Create a new Lambda layer every time a new code release needs testing.
E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.
Answer: A. B. Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes, without affecting the stable production version of the code. By creating separate aliases for Production and Development, systems can initiate the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. The $LATEST version is the newest published version. Reference: Lambda function versions.
Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.)
A. Update event source mappings with the ARN of the Lambda layer.
B. Point a Lambda alias to a new version of the Lambda function.
C. Create a Lambda alias for each published version of the Lambda function.
D. Point a Lambda alias to a new Lambda function alias.
E. Update the event source mappings with the Lambda alias ARN.
Answer: B. E. Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version. Reference: Lambda function aliases. Category: Deployment
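A short boto3 sketch of the version-and-alias pattern from Q76 and Q77 (the function name and version number are hypothetical):

import boto3

lam = boto3.client("lambda")

# Publish the code currently in $LATEST as an immutable version...
version = lam.publish_version(FunctionName="order-processor")["Version"]

# ...and point a stable alias at it. Event source mappings can reference the
# alias ARN, so promoting a new version later is just an update_alias call.
lam.create_alias(FunctionName="order-processor", Name="Production", FunctionVersion=version)
# Later, to promote a newer version (version number is illustrative):
# lam.update_alias(FunctionName="order-processor", Name="Production", FunctionVersion="7")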
Q78: A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements?
A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C).
B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket.
C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket.
D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.
Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)
A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS).
B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS).
C. Use generated keys with the DynamoDB Encryption Client.
D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs).
E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).
Answer: A. C. Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK. Reference: Direct KMS Materials Provider. Category: Deployment
Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.)
A. Create an AWS Lambda authorizer for the API.
B. Create an Amazon Cognito authorizer for the API.
C. Configure the authorizer for the API resource.
D. Configure the API methods to use the authorizer.
E. Configure the authorizer for the API stage.
Answer: B. D. Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API. Reference: Control access to a REST API using Amazon Cognito user pools as authorizer. Category: Security
Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.)
A. Authenticate to the Amazon Cognito identity pool directly.
B. Authenticate to AWS Identity and Access Management (IAM) directly.
C. Authenticate to the Amazon Cognito user pool directly.
D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS).
E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.
Answer: C. E. Notes: The Amazon Cognito user pool provides direct user authentication. The Amazon Cognito user pool provides a federated authentication option with third-party identity provider (IdP), including amazon.com. Reference: Adding User Pool Sign-in Through a Third Party. Category: Security
Question: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.)
A. Define an AWS Step Functions task for each Lambda function.
B. Define an AWS Step Functions task for each workflow.
C. Write code that polls the AWS Step Functions invocation to coordinate each workflow.
D. Define an AWS Step Functions state machine for each workflow.
E. Define an AWS Step Functions state machine for each Lambda function.
Answer: A. D. Notes: Step Functions is based on state machines and tasks. A state machine is a workflow; tasks perform work by coordinating with other AWS services, such as Lambda. A workflow can be expressed as a number of states, their relationships, and their input and output. You can coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine, written in the Amazon States Language. Reference: Getting Started with AWS Step Functions (https://aws.amazon.com/step-functions/getting-started/). Category: Development
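A minimal boto3 sketch of one such workflow (all ARNs and names are placeholders): a state machine whose Task states invoke the Lambda functions in sequence:

import json
import boto3

# One state machine per workflow; each state is a Task that invokes a Lambda function.
definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ValidateOrder",
            "Next": "ChargeCustomer",
        },
        "ChargeCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ChargeCustomer",
            "End": True,
        },
    },
}

boto3.client("stepfunctions").create_state_machine(
    name="OrderProcessing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",  # placeholder role
)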
What is the AWS Certified Developer Associate Exam?
This AWS Certified Developer-Associate Examination is intended for individuals who perform a Developer role. It validates an examinee’s ability to:
Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
Demonstrate proficiency in developing, deploying, and debugging cloud-based applications by using AWS
Recommended general IT knowledge The target candidate should have the following: – In-depth knowledge of at least one high-level programming language – Understanding of application lifecycle management – The ability to write code for serverless applications – Understanding of the use of containers in the development process
Recommended AWS knowledge The target candidate should be able to do the following:
Use the AWS service APIs, CLI, and software development kits (SDKs) to write applications
Identify key features of AWS services
Understand the AWS shared responsibility model
Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy applications on AWS
Use and interact with AWS services
Apply basic understanding of cloud-native applications to write code
Write code by using AWS security best practices (for example, use IAM roles instead of secret and access keys in the code)
Author, maintain, and debug code modules on AWS
What is considered out of scope for the target candidate? The following is a non-exhaustive list of related job tasks that the target candidate is not expected to be able to perform. These items are considered out of scope for the exam: – Design architectures (for example, distributed system, microservices) – Design and implement CI/CD pipelines
Administer IAM users and groups
Administer Amazon Elastic Container Service (Amazon ECS)
Design AWS networking infrastructure (for example, Amazon VPC, AWS Direct Connect)
Understand compliance and licensing
Exam content Response types There are two types of questions on the exam: – Multiple choice: Has one correct response and three incorrect responses (distractors) – Multiple response: Has two or more correct responses out of five or more response options Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose. Distractors are generally plausible responses that match the content area. Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.
Unscored content The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.
Exam results The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines. Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720. Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels. Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam. Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.
Content outline This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context. The percentage in each domain represents only scored content.
Domain 1: Deployment 22%
Domain 2: Security 26%
Domain 3: Development with AWS Services 30%
Domain 4: Refactoring 10%
Domain 5: Monitoring and Troubleshooting 12%
Domain 1: Deployment 1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns. – Commit code to a repository and invoke build, test and/or deployment actions – Use labels and branches for version and release management – Use AWS CodePipeline to orchestrate workflows against different environments – Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS CodeDeploy for CI/CD purposes – Perform a roll back plan based on application deployment policy
1.2 Deploy applications using AWS Elastic Beanstalk. – Utilize existing supported environments to define a new application stack – Package the application – Introduce a new application version into the Elastic Beanstalk environment – Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable) – Validate application health using Elastic Beanstalk dashboard – Use Amazon CloudWatch Logs to instrument application logging
1.3 Prepare the application deployment package to be deployed to AWS. – Manage the dependencies of the code module (like environment variables, config files and static image files) within the package – Outline the package/container directory structure and organize files appropriately – Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)
1.4 Deploy serverless applications. – Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template – Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)
Domain 2: Security 2.1 Make authenticated calls to AWS services. – Communicate required policy based on least privileges required by application. – Assume an IAM role to access a service – Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)
2.2 Implement encryption using AWS services. – Encrypt data at rest (client side; server side; envelope encryption) using AWS services – Encrypt data in transit
2.3 Implement application authentication and authorization. – Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools – Use Amazon Cognito-provided credentials to write code that accesses AWS services. – Use Amazon Cognito sync to synchronize user profiles and data – Use developer-authenticated identities to interact between end user devices, backend authentication, and Amazon Cognito
Domain 3: Development with AWS Services 3.1 Write code for serverless applications. – Compare and contrast server-based vs. serverless model (e.g., micro services, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications) – Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler) – Create an API endpoint using Amazon API Gateway – Create and test appropriate API actions like GET, POST using the API endpoint – Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes) – Compute read/write capacity units for Amazon DynamoDB based on application requirements – Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis) – Invoke an AWS Lambda function synchronously and asynchronously
3.2 Translate functional requirements into application design. – Determine real-time vs. batch processing for a given use case – Determine use of synchronous vs. asynchronous for a given use case – Determine use of event vs. schedule/poll for a given use case – Account for tradeoffs for consistency models in an application design
Domain 4: Refactoring 4.1 Optimize applications to best use AWS services and features. – Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache) – Apply an Amazon S3 naming scheme for optimal read performance
4.2 Migrate existing application code to run on AWS. – Isolate dependencies – Run the application as one or more stateless processes – Develop in order to enable horizontal scalability – Externalize state
Domain 5: Monitoring and Troubleshooting
5.1 Write code that can be monitored. – Create custom Amazon CloudWatch metrics – Perform logging in a manner available to systems operators – Instrument application source code to enable tracing in AWS X-Ray
5.2 Perform root cause analysis on faults found in testing or production. – Interpret the outputs from the logging mechanism in AWS to identify errors in logs – Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues – Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component
Which key tools, technologies, and concepts might be covered on the exam?
The following is a non-exhaustive list of the tools and technologies that could appear on the exam. This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam. The general tools and technologies in this list appear in no particular order. AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance: – Analytics – Application Integration – Containers – Cost and Capacity Management – Data Movement – Developer Tools – Instances (virtual machines) – Management and Governance – Networking and Content Delivery – Security – Serverless
Management and Governance: – AWS CloudFormation – Amazon CloudWatch
Networking and Content Delivery: – Amazon API Gateway – Amazon CloudFront – Elastic Load Balancing
Security, Identity, and Compliance: – Amazon Cognito – AWS Identity and Access Management (IAM) – AWS Key Management Service (AWS KMS)
Storage: – Amazon S3
Out-of-scope AWS services and features
The following is a non-exhaustive list of AWS services and features that are not covered on the exam. These services and features do not represent every AWS offering that is excluded from the exam content. Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant. Out-of-scope AWS services and features include the following: – AWS Application Discovery Service – Amazon AppStream 2.0 – Amazon Chime – Amazon Connect – AWS Database Migration Service (AWS DMS) – AWS Device Farm – Amazon Elastic Transcoder – Amazon GameLift – Amazon Lex – Amazon Machine Learning (Amazon ML) – AWS Managed Services – Amazon Mobile Analytics – Amazon Polly
– Amazon QuickSight – Amazon Rekognition – AWS Server Migration Service (AWS SMS) – AWS Service Catalog – AWS Shield Advanced – AWS Shield Standard – AWS Snow Family – AWS Storage Gateway – AWS WAF – Amazon WorkMail – Amazon WorkSpaces
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
AWS Certified Developer – Associate Practice Questions And Answers Dump
Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost. How can you accommodate the partners’ broken web services without wasting your resources?
A. Create a delay queue and set DelaySeconds to 30 seconds
B. Requeue the message with a VisibilityTimeout of 30 seconds.
C. Create a dead letter queue and set the Maximum Receives to 3.
D. Requeue the message with a DelaySeconds of 30 seconds.
C. After a message is taken from the queue and returned for the maximum number of retries, it is automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.
Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently. What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?
A. Amazon DynamoDB auto scaling
B. Amazon DynamoDB cross-region replication
C. Amazon DynamoDB Streams
D. Amazon DynamoDB Accelerator
D. The AWS Documentation mentions the following:
DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios
As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
Q2: You are creating a DynamoDB table with the following attributes:
PurchaseOrderNumber (partition key)
CustomerID
PurchaseDate
TotalPurchaseValue
One of your applications must retrieve items from the table to calculate the total value of purchases for a particular customer over a date range. What secondary index do you need to add to the table?
A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the TotalPurchaseValue into the index provides all the data needed to satisfy the use case.
Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
Local Secondary Indexes still rely on the original Hash Key. When you create a table with hash+range, think of the table itself as hash+range1 and the LSIs as hash+range2 … hash+range6: you get up to 5 more range attributes to query on. Also, LSIs share the table’s provisioned throughput rather than having their own.
Global Secondary Indexes define a new paradigm – different hash/range keys per index. This breaks the original usage of one hash key per table. This is also why, when defining a GSI, you are required to add provisioned throughput per index and pay for it.
Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.
Throughput :
Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table and another for the index. Both operations consume write capacity units from the table.
Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation consumes read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table and another for the index.
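As a rough sketch of the Q2 answer (not part of the original explanation), the boto3 Query below reads a global secondary index keyed on CustomerID and PurchaseDate and projects only TotalPurchaseValue. The table name, index name, and key values are hypothetical.

```python
# Illustrative sketch (boto3): query a GSI keyed on CustomerID/PurchaseDate
# that projects TotalPurchaseValue, then total the purchases for a customer.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("PurchaseOrders")  # hypothetical table name

response = table.query(
    IndexName="CustomerID-PurchaseDate-index",  # hypothetical GSI name
    KeyConditionExpression=(
        Key("CustomerID").eq("C-1001")
        & Key("PurchaseDate").between("2023-01-01", "2023-03-31")
    ),
    ProjectionExpression="TotalPurchaseValue",
)
total = sum(item["TotalPurchaseValue"] for item in response["Items"])
print(total)
```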
Q5: Lambda allows you to upload code and dependencies for function packages:
A. Only from a directly uploaded zip file
B. Only via SFTP
C. Only from a zip file in AWS S3
D. From a zip file in AWS S3 or uploaded directly from elsewhere
D. You can provide a Lambda deployment package either by uploading a .zip file directly or by referencing a .zip file stored in an Amazon S3 bucket.
Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?
A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.
D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return requests. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return requests.
Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?
A. Create and assign EIP to each instance
B. Create and attach a second IGW to the VPC.
C. Create and utilize a NAT Gateway
D. Connect to a VPN
C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.
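To make the NAT gateway answer concrete, here is a hedged boto3 sketch (not part of the original explanation) that allocates an Elastic IP, creates the NAT gateway in a public subnet, and adds a default route for the private subnet's route table. All resource IDs are placeholders.

```python
# Sketch (boto3): create a NAT gateway in a public subnet and point the
# private subnet's route table at it. All IDs below are placeholders.
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in the public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",      # public subnet
    AllocationId=eip["AllocationId"],
)
nat_gw_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available, then add a default route
# from the private subnet's route table to it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw_id])
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",     # private subnet's route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gw_id,
)
```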
Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?
A. Security Groups
B. Route Tables
C. Elastic Load Balancer
D. Auto Scaling
D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.
Q11: You’re writing a script with an AWS SDK that uses the AWS API actions to create AMIs from non-EBS backed instances. Which API call occurs in the final step of creating such an AMI?
A. RegisterImage
B. CreateImage
C. ami-register-image
D. ami-create-image
A. It is actually RegisterImage. All AWS API actions follow this capitalization convention and do not contain hyphens.
Q12: When dealing with session state in EC2-based applications behind Elastic Load Balancers, which option is generally considered the best practice for managing user sessions?
A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
B. Permanently assigning users to specific instances and always routing their traffic to those instances
C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance
Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?
A. Autoscaling requires using Amazon EBS-backed instances
B. Virtual Private Cloud requires EBS backed instances
C. Amazon EBS-backed instances can be stopped and restarted without losing data
D. Instance-store backed instances can be stopped and restarted without losing data
C. Instance-store backed images use “ephemeral” (temporary) storage that is only available during the life of an instance. Rebooting an instance lets ephemeral data persist; however, stopping and starting an instance removes all ephemeral storage.
Q15: After having created a new Linux instance on Amazon EC2 and downloaded the private key file (my_key.pem), you try to SSH into the instance’s public IP address (52.2.222.22) using the following command: ssh -i my_key.pem ec2-user@52.2.222.22. However, you receive the following error: WARNING: UNPROTECTED PRIVATE KEY FILE! What is the most probable reason for this, and how can you fix it?
A. You do not have root access on your terminal and need to use the sudo option for this to work.
B. You do not have enough permissions to perform the operation.
C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.
D. You need to run something like: chmod 400 my_key.pem
Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?
A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.
D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.
Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:
A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
B. Can only be used to launch EC2 instances in the same country as the AMI is stored
C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
D. Can be used to launch EC2 instances in any AWS region
C. AMIs are only available in the region they are created. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another
Q18: Which of the following statements is true about the Elastic File System (EFS)?
A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
B. EFS can be used by multiple EC2 instances simultaneously
C. EFS cannot be used by an instance using EBS
D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes
Q21: Which of the following are benefits of using IAM groups? (Select TWO.)
A. The ability to create custom permission policies.
B. Assigning IAM permission policies to more than one user at a time.
C. Easier user/policy management.
D. Allowing EC2 instances to gain access to S3.
B. and C.
A. is incorrect: This is a benefit of IAM generally or a benefit of IAM policies. But IAM groups don’t create policies, they have policies attached to them.
Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?
A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1.micro to m4.large.
B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.
B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence. Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation. During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:
Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options that apply at this level unless overridden.
Saved Configurations– Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
Configuration Files (.ebextensions)– Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle.
Configuration files are executed in alphabetical order. For example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
Default Values– If a configuration option has a default value, it only applies when the option is not set at any of the above levels.
If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or directly to the environment, the setting is stored as part of the environment’s configuration. These settings can be removed with the AWS CLI or with the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.
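As an illustration of the option-setting mechanism described above (not the exam's required path), the boto3 sketch below applies the InstanceType option in the aws:autoscaling:launchconfiguration namespace, which is the same setting a saved configuration would carry. The environment name is a placeholder.

```python
# Sketch (boto3): apply the InstanceType option setting that a saved
# configuration would carry. The environment name is a placeholder.
import boto3

eb = boto3.client("elasticbeanstalk")

eb.update_environment(
    EnvironmentName="my-app-prod",  # hypothetical environment
    OptionSettings=[
        {
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "InstanceType",
            "Value": "m4.large",
        }
    ],
)
```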
Q24: What statements are true about Availability Zones (AZs) and Regions?
A. There is only one AZ in each AWS Region
B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
C. AZs can be moved between AWS Regions based on your needs
D. There are (almost always) two or more AZs in each AWS Region
B. and D. Availability Zones are physically separate locations within a Region, engineered to be isolated from failures in the other AZs, and each Region contains multiple (almost always two or more) AZs. AZs cannot be moved between Regions.
Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?
A. Eventual Consistent Reads
B. Conditional reads for Consistency
C. Strongly Consistent Reads
D. Not possible
C. This is stated clearly in the AWS documentation on read consistency for DynamoDB: only with strongly consistent reads are you guaranteed to get the most up-to-date value, reflecting all prior successful writes.
Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is required during the migration process. Which of the following steps would you consider for this requirement?
A. Create an OpsWorks stack and deploy the Docker containers
B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.
B. The Elastic Beanstalk service is the ideal service to quickly provision development environments. You can also create environments which can be used to host Docker based containers.
Q28: You’ve written an application that uploads objects to an S3 bucket. The size of the objects varies between 200 and 500 MB. You’ve noticed that the application sometimes takes longer than expected to upload an object, and you want to improve its performance. Which of the following would you consider?
A. Create multiple threads and upload the objects in the multiple threads
B. Write the items in batches for better performance
C. Use the Multipart upload API
D. Enable versioning on the Bucket
C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
Q29: A security system monitors 600 cameras, saving image metadata every 1 minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?
A. 6000
B. 10
C. 3600
D. 600
B. Write capacity for a DynamoDB table is specified as the number of 1 KB writes per second. Since each camera writes once every minute, divide the 600 writes per minute by 60 to get the number of 1 KB writes per second, which gives a value of 10.
You can specify the Write capacity in the Capacity tab of the DynamoDB table.
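The same arithmetic, written out as a small Python sketch:

```python
import math

cameras = 600          # one 1 KB item written per camera per minute
item_size_kb = 1
wcu_unit_kb = 1        # one WCU = one 1 KB write per second

writes_per_second = cameras / 60                       # 10 writes/second
wcu_per_write = math.ceil(item_size_kb / wcu_unit_kb)  # 1 WCU per item
required_wcu = writes_per_second * wcu_per_write
print(required_wcu)  # 10.0
```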
Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?
A. Lazy loading
B. Write-through
C. Error retries
D. Exponential backoff
Answer – A Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. All other options are incorrect. Reference: Caching Strategies
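A minimal cache-aside (lazy loading) sketch, assuming a redis-py client for the ElastiCache endpoint and a stand-in query_rds() helper for the database read; it is only meant to show the hit/miss flow, not the blog's implementation.

```python
# Cache-aside (lazy loading) sketch. The Redis endpoint is a placeholder and
# query_rds() is a stub standing in for the real RDS lookup.
import json
import redis

cache = redis.Redis(host="my-cluster.cache.amazonaws.com", port=6379)

def query_rds(customer_id):
    """Placeholder for the real database read (e.g. via pymysql/psycopg2)."""
    return {"id": customer_id, "name": "example"}

def get_customer(customer_id, ttl_seconds=300):
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached is not None:                  # cache hit: serve from ElastiCache
        return json.loads(cached)

    record = query_rds(customer_id)         # cache miss: fall back to RDS
    cache.setex(key, ttl_seconds, json.dumps(record))  # populate for next time
    return record
```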
Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15-60 seconds. How should the Developer efficiently query the queue for new messages?
A. Use long polling
B. Set a custom visibility timeout
C. Use short polling
D. Implement exponential backoff
Answer – A Long polling helps ensure that the application makes fewer requests for messages over a given period of time, which is more cost effective. Since the messages will only become available after 15 seconds and we don’t know exactly when they will arrive, it is better to use long polling. Reference: Amazon SQS Long Polling
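A short boto3 sketch of long polling (for illustration only): WaitTimeSeconds tells ReceiveMessage to wait up to 20 seconds for a message instead of returning immediately. The queue URL is a placeholder.

```python
# Sketch (boto3): long polling with WaitTimeSeconds. Queue URL is a placeholder.
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,   # long polling: 1-20 seconds; 0 means short polling
)
for message in response.get("Messages", []):
    print(message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```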
Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With the new Lambda function working as expected, which of the following deployment preference types will shift traffic from the original Lambda function to the new Lambda function in the shortest time frame?
A. Canary10Percent5Minutes
B. Linear10PercentEvery10Minutes
C. Canary10Percent15Minutes
D. Linear10PercentEvery1Minute
Answer – A With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment and the remaining 90 percent is shifted after 5 minutes. Reference: Gradual Code Deployment
Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?
A. AWS::Serverless::Api
B. AWS::Serverless::Application
C. AWS::Serverless::LayerVersion
D. AWS::Serverless::Function
Answer – B The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket. Reference: Declaring Serverless Resources
Q35: You are using AWS envelope encryption for encrypting all sensitive data. Which of the following is true with regard to envelope encryption?
A. Data is encrypted by an encrypted Data key which is further encrypted using an encrypted Master Key.
B. Data is encrypted by plaintext Data key which is further encrypted using encrypted Master Key.
C. Data is encrypted by encrypted Data key which is further encrypted using plaintext Master Key.
D. Data is encrypted by plaintext Data key which is further encrypted using plaintext Master Key.
Answer – D With envelope encryption, unencrypted data is encrypted using a plaintext Data key. That Data key is in turn encrypted under the Master key, which is securely stored in AWS KMS and is known as a Customer Master Key (CMK). Reference: AWS Key Management Service Concepts
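A hedged boto3 sketch of the envelope-encryption flow: GenerateDataKey returns both a plaintext data key (used once, locally) and the same key encrypted under the CMK (stored with the ciphertext). The key alias is a placeholder, and the local encryption step uses the third-party cryptography package purely for illustration.

```python
# Sketch (boto3): envelope encryption with AWS KMS. The KMS key alias is a
# placeholder; local encryption uses the `cryptography` package for illustration.
import base64
import boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms")

# 1. Ask KMS for a data key under the customer master key (CMK).
data_key = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")

# 2. Encrypt the data locally with the plaintext data key, then discard it.
fernet_key = base64.urlsafe_b64encode(data_key["Plaintext"])
ciphertext = Fernet(fernet_key).encrypt(b"sensitive payload")

# 3. Store the ciphertext together with the *encrypted* data key;
#    KMS can decrypt the data key again later via kms.decrypt().
encrypted_data_key = data_key["CiphertextBlob"]
```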
Q36: You are developing an application that will be comprised of the following architecture –
A set of Ec2 instances to process the videos.
These (Ec2 instances) will be spun up by an autoscaling group.
SQS Queues to maintain the processing messages.
There will be 2 pricing tiers.
How will you ensure that the premium customers’ videos are given more preference?
A. Create 2 Autoscaling Groups, one for normal and one for premium customers
B. Create 2 set of Ec2 Instances, one for normal and one for premium customers
C. Create 2 SQS queues, one for normal and one for premium customers
D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.
Answer – C The ideal option is to create 2 SQS queues; the application can then process messages from the high-priority queue first. The other options are not ideal, as they would lead to extra costs and extra maintenance. Reference: SQS
Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?
A. CustomerID
B. CustomerName
C. Location
D. Age
Answer – A Use high-cardinality attributes. These are attributes that have distinct values for each item, such as email ID, employee number, customer ID, session ID, order ID, and so on. You can also use composite attributes: try to combine more than one attribute to form a unique key. Reference: Choosing the right DynamoDB Partition Key
Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for any network delays. Which two of the following mechanisms should he implement in the application?
A. Multiple SQS queues
B. Exponential backoff algorithm
C. Retries in your application code
D. Consider using the Java SDK.
Answer – B. and C. In addition to simple retries, each AWS SDK implements an exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values, and should be set based on the operation being performed, as well as other local factors, such as network latency. Reference: Error Retries and Exponential Backoff in AWS
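A generic sketch of retries with exponential backoff and jitter, the pattern the AWS SDKs implement internally; call_aws stands in for any API call and is not a real SDK function.

```python
# Sketch: retries with exponential backoff and full jitter.
# `call_aws` is any callable that makes the API request.
import random
import time

def with_backoff(call_aws, max_retries=5, base_delay=0.2, max_delay=10.0):
    for attempt in range(max_retries + 1):
        try:
            return call_aws()
        except Exception:
            if attempt == max_retries:
                raise                                    # give up after the last retry
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))         # full jitter
```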
Q39: An application is being developed that is going to write data to a DynamoDB table. You have to set up the read and write throughput for the table. Data will be read at a rate of 300 items every 30 seconds, and each item is 6 KB in size. The reads can be eventually consistent reads. What read capacity needs to be set on the table?
A. 10
B. 20
C. 6
D. 30
Answer – A
Since 300 items are read every 30 seconds, that means (300/30) = 10 items are read every second. Since each item is 6 KB in size, and reads are metered in 4 KB units, 2 read capacity units are required for each item. So we have a total of 2 * 10 = 20 strongly consistent reads per second. Since eventual consistency is acceptable, we can divide the number of reads (20) by 2, which gives a read capacity of 10.
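The same calculation as a small Python sketch:

```python
import math

items_per_second = 300 / 30          # 10 items read per second
item_size_kb = 6
rcu_unit_kb = 4                      # one RCU = one strongly consistent 4 KB read/second

rcus_per_item = math.ceil(item_size_kb / rcu_unit_kb)  # 6 KB -> 2 units
strong_rcu = items_per_second * rcus_per_item          # 20
eventual_rcu = strong_rcu / 2                          # eventually consistent reads cost half
print(eventual_rcu)                                    # 10.0
```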
Q40: You are in charge of deploying an application that will be hosted on an EC2 instance and sit behind an Elastic Load Balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the below options can satisfy this requirement?
A. Use AWS CloudTrail with your load balancer
B. Enable access logs on the load balancer
C. Use a CloudWatch Logs Agent
D. Create a custom metric CloudWatch filter on your load balancer
Answer – B Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues. Reference: Access Logs for Your Application Load Balancer
Q41: A static website has been hosted in a bucket and is now being accessed by users. The JavaScript section of one of the web pages has been changed to access data that is hosted in another S3 bucket. Now that web page no longer loads in the browser. Which of the following can help alleviate the error?
A. Enable versioning for the underlying S3 bucket.
B. Enable Replication so that the objects get replicated to the other bucket
C. Enable CORS for the bucket
D. Change the Bucket policy for the bucket to allow access from the other bucket
Answer – C
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:
Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.
Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.
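A hedged boto3 sketch of such a CORS rule on the data bucket, allowing GET requests from the static website's origin; the bucket name and origin are placeholders.

```python
# Sketch (boto3): CORS rule allowing GET requests from the website's origin.
# Bucket name and origin are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="my-data-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```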
Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below
A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.
Answer – C The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3. You can then provide access to the objects based on key values generated from the user ID.
Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.
A. Publish your data to CloudWatch Logs, and configure your application to Autoscale to handle the load on demand.
B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.
Answer – C Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data. Reference: Amazon Kinesis
Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?
A. AWS Simple Storage Service
B. AWS DynamoDB
C. AWS RDS
D. AWS Redshift
Answer – B DynamoDB is an alternative solution that can be used for session-state storage. The latency of access to data is low, hence it can be used as a data store for session management. Reference: Scalable Session Handling in PHP Using Amazon DynamoDB
Q45: Your application currently interacts with a DynamoDB table. Records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the below features should be used when developing such a solution?
A. AWS DynamoDB Encryption
B. AWS DynamoDB Streams
C. AWS DynamoDB Accelerator
D. AWS Table Accelerator
Answer – B DynamoDB Streams Use Cases and Design Patterns This post describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. We will consider how to manage the following scenarios:
How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
How do you trigger an event based on a particular transaction?
How do you audit or archive transactions?
How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?
Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states—success or failure. In other words, there is no partial completion. As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it’s consistent and can be rolled back before commit. You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time. AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region. All of the other options are incorrect since none of these would meet the core requirement. Reference: DynamoDB Streams Use Cases and Design Patterns
Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and the scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?
A. Large Page size
B. Reduced page size
C. Parallel Scans
D. Sequential scans
Answer – B When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table’s provisioned throughput: reduce the page size. Because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling. Reference: Rate-Limited Scans in Amazon DynamoDB
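A small boto3 sketch of a rate-limited scan using the Limit parameter and pagination; the table name is hypothetical and the print call stands in for real per-item processing.

```python
# Sketch (boto3): paginate a Scan with a small Limit so each request consumes
# fewer read capacity units and leaves headroom for other traffic.
import time
import boto3

table = boto3.resource("dynamodb").Table("MyLargeTable")  # placeholder table name

scan_kwargs = {"Limit": 40}            # small page size
while True:
    page = table.scan(**scan_kwargs)
    for item in page["Items"]:
        print(item)                    # stand-in for real per-item processing
    if "LastEvaluatedKey" not in page:
        break
    scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
    time.sleep(0.1)                    # brief pause between pages
```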
Q47: Which of the following are correct ways of passing a stage variable to an HTTP URL? (Select TWO.)
A. http://example.com/${}/prod
B. http://example.com/${stageVariables.}/prod
C. http://${stageVariables.}.example.com/dev/operation
D. http://${stageVariables}.example.com/dev/operation
E. http://${}.example.com/dev/operation
F. http://example.com/${stageVariables}/prod
Answer – B. and C. A stage variable can be used as part of an HTTP integration URL in the following cases: a full URI without protocol, a full domain, a subdomain, a path, or a query string. In the above case, options B and C show a stage variable used as a path and as a subdomain. Reference: Amazon API Gateway Stage Variables Reference
Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes, which they use for configuring their on-premises servers. Which of the following services would be ideal to use in this regard?
A. AWS Elastic Beanstalk
B. AWS OpsWorks
C. AWS Cloudformation
D. AWS SQS
Answer – B AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management. Reference: AWS OpsWorks
Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?
A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
B. Configure S3 bucket tags with your AWS access keys for the bucket hosting your website so that the application can query them for access.
C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.
Answer – C With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) —such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application. Option A is invalid since Roles cannot be assigned to S3 buckets Options B and D are invalid since the AWS Access keys should not be used Reference: About Web Identity Federation
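A hedged boto3 sketch of the web identity federation flow: the app exchanges the IdP token for temporary credentials with AssumeRoleWithWebIdentity and uses those to call DynamoDB. The role ARN and token are placeholders.

```python
# Sketch (boto3): exchange an identity provider token for temporary AWS
# credentials. Role ARN and token are placeholders.
import boto3

sts = boto3.client("sts")

creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/WebAppDynamoDBRole",
    RoleSessionName="web-user-session",
    WebIdentityToken="<token returned by Google/Facebook/Login with Amazon>",
)["Credentials"]

# Use the temporary credentials to talk to DynamoDB; no long-term keys in the app.
dynamodb = boto3.resource(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```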
Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?
A. Cognito Data
B. Cognito Events
C. Cognito Streams
D. Cognito Callbacks
Answer – C Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can now configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since Cognito Streams is the feature designed for this purpose. Reference: Amazon Cognito Streams
Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? Choose 2 answers from the options given below.
A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function
Answer: A and C. AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC. Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC
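A minimal boto3 sketch of that configuration follows; the function name, subnet IDs, and security group ID are placeholders.

```python
# Sketch: attach subnet and security group IDs to a Lambda function so it can
# reach resources (e.g. EC2 instances) inside the VPC.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="my-vpc-function",                      # placeholder
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```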
Q52: You’ve currently been tasked to migrate an existing on-premise environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant environments in the beanstalk service that would be suitable to host your application. What should you consider doing in this case?
A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
B. Consider using CloudFormation to deploy your environment to Elastic Beanstalk
C. Consider using Packer to create a custom platform
D. Consider deploying your application using the Elastic Container Service
Answer – C Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a Custom Image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software, for which Elastic Beanstalk doesn’t provide a platform out of the box. Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance. To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems—Ubuntu, RHEL, or Amazon Linux (see the flavor entry in Platform.yaml File Format for the exact version numbers)—and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings. Reference: AWS Elastic Beanstalk Custom Platforms
Q53: Company B is writing 10 items to a DynamoDB table every second. Each item is 15.5 KB in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.
A. 10
B. 160
C. 155
D. 16
Answer – B. One write capacity unit represents one write per second for an item up to 1 KB in size, so a 15.5 KB item rounds up to 16 write capacity units. Writing 10 such items per second therefore requires 10 x 16 = 160 write capacity units. Reference: Read/Write Capacity Mode
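The same arithmetic as a tiny worked example:

```python
# Worked calculation for Q53: one WCU covers an item up to 1 KB, so a 15.5 KB
# item consumes ceil(15.5) = 16 WCUs; 10 such writes per second need 160 WCUs.
import math

item_size_kb = 15.5
writes_per_second = 10

wcu_per_item = math.ceil(item_size_kb / 1)   # 16
required_wcu = wcu_per_item * writes_per_second
print(required_wcu)                          # 160
```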
Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?
Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?
A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.
Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?
A. Transforms
B. Outputs
C. Resources
D. Instances
Answer: C. The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3. Reference: Resources
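For illustration only, here is a minimal sketch of a template with EC2 and RDS entries in the Resources section plus an Outputs entry, expressed as a Python dict and launched with boto3. All IDs, names, and property values are placeholders, not a complete production template.

```python
# Sketch: declare resources in the Resources section and expose a value via
# Outputs, then create the stack with CloudFormation.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "ami-0123456789abcdef0", "InstanceType": "t3.micro"},
        },
        "AppDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.micro",
                "AllocatedStorage": "20",
                "MasterUsername": "admin",
                "MasterUserPassword": "ChangeMe123!",   # placeholder only
            },
        },
    },
    "Outputs": {
        "WebServerId": {"Value": {"Ref": "WebServer"}}
    },
}

boto3.client("cloudformation").create_stack(
    StackName="demo-stack",
    TemplateBody=json.dumps(template),
)
```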
Q64: Which AWS service can be used to fully automate your entire release process?
A. CodeDeploy
B. CodePipeline
C. CodeCommit
D. CodeBuild
Answer: B. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates
Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?
A. Outputs
B. Transforms
C. Resources
D. Exports
Answer: A. Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack. Reference: CloudFormation Outputs
Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?
A. Inputs
B. Resources
C. Transforms
D. Files
Answer: C. Transforms is used to reference code located in S3 and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments. Reference: Transforms
Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?
Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?
A. Share the code using an EBS volume
B. Copy and paste the code into the template each time you need to use it
C. Use a CloudFormation nested stack
D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.
Q72: Which of the following is an encrypted key used by KMS to encrypt your data?
A. Customer Managed Key
B. Encryption Key
C. Envelope Key
D. Customer Master Key
Answer: C. Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption. Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.
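The sketch below shows envelope encryption in practice, assuming a KMS key alias that is a placeholder and using the third-party cryptography package for the local encryption step.

```python
# Sketch of envelope encryption: ask KMS for a data key under a master key,
# encrypt locally with the plaintext data key, and keep only the encrypted
# copy of the data key alongside the ciphertext.
import base64
import boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms")

data_key = kms.generate_data_key(
    KeyId="alias/my-app-key",        # placeholder CMK alias
    KeySpec="AES_256",
)

# Encrypt the payload with the plaintext data key.
fernet = Fernet(base64.urlsafe_b64encode(data_key["Plaintext"]))
ciphertext = fernet.encrypt(b"sensitive payload")

# Persist the ciphertext together with the encrypted data key; discard the
# plaintext key. To decrypt later, call kms.decrypt() on CiphertextBlob first.
encrypted_data_key = data_key["CiphertextBlob"]
```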
Q75: A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.) A. Compiled application code B. Java runtime environment C. References to the event sources D. Lambda execution role E. Application dependencies
Answer: A. E. Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your code and any dependencies. Reference: Lambda deployment packages.
Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package? A. A launch template for the Amazon EC2 Auto Scaling group B. A CodeDeploy AppSpec file C. An EC2 role that grants the application access to AWS services D. An IAM policy that grants the application access to AWS services
Answer: B. Notes: The CodeDeploy AppSpec (application specific) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file. Reference: CodeDeploy application specification (AppSpec) files. Category: Deployment
Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)
A. Create a new Lambda version every time a new code release needs testing. B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version. C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT. D. Create a new Lambda layer every time a new code release needs testing. E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.
Answer: A. B. Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes, without affecting the stable production version of the code. By creating separate aliases for Production and Development, systems can initiate the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. The $LATEST version is the newest published version. Reference: Lambda function versions.
Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.) A. Update event source mappings with the ARN of the Lambda layer. B. Point a Lambda alias to a new version of the Lambda function. C. Create a Lambda alias for each published version of the Lambda function. D. Point a Lambda alias to a new Lambda function alias. E. Update the event source mappings with the Lambda alias ARN.
Answer: B. E. Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version. Reference: Lambda function aliases. Category: Deployment
Q78: A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements? A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C). B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket. C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket. D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.
Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)
A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS). B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS). C. Use generated keys with the DynamoDB Encryption Client. D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs). E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).
Answer: A. C. Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK. Reference: Direct KMS Materials Provider. Category: Deployment
Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.) A. Create an AWS Lambda authorizer for the API. B. Create an Amazon Cognito authorizer for the API. C. Configure the authorizer for the API resource. D. Configure the API methods to use the authorizer. E. Configure the authorizer for the API stage.
Answer: B. D. Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API. Reference: Control access to a REST API using Amazon Cognito user pools as authorizer. Category: Security
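A hedged boto3 sketch of both steps follows; the REST API ID, resource ID, and user pool ARN are placeholders.

```python
# Sketch: create a Cognito user pool authorizer for a REST API and attach it
# to a method so callers must present a valid user pool token.
import boto3

apigw = boto3.client("apigateway")

authorizer = apigw.create_authorizer(
    restApiId="a1b2c3d4e5",                       # placeholder
    name="CognitoUserPoolAuthorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"],
    identitySource="method.request.header.Authorization",
)

apigw.update_method(
    restApiId="a1b2c3d4e5",
    resourceId="abc123",                          # placeholder
    httpMethod="GET",
    patchOperations=[
        {"op": "replace", "path": "/authorizationType", "value": "COGNITO_USER_POOLS"},
        {"op": "replace", "path": "/authorizerId", "value": authorizer["id"]},
    ],
)
```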
Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.) A. Authenticate to the Amazon Cognito identity pool directly. B. Authenticate to AWS Identity and Access Management (IAM) directly. C. Authenticate to the Amazon Cognito user pool directly. D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS). E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.
Answer: C. E. Notes: The Amazon Cognito user pool provides direct user authentication. The Amazon Cognito user pool provides a federated authentication option with third-party identity provider (IdP), including amazon.com. Reference: Adding User Pool Sign-in Through a Third Party. Category: Security
Q82: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.) A. Define a AWS Step Functions task for each Lambda function. B. Define a AWS Step Functions task for each workflow. C. Write code that polls the AWS Step Functions invocation to coordinate each workflow. D. Define an AWS Step Functions state machine for each workflow. E. Define an AWS Step Functions state machine for each Lambda function.
Answer: A. D. Notes: Step Functions is based on state machines and tasks. A state machine is a workflow that expresses a number of states, their relationships, and their inputs and outputs. Tasks perform work by coordinating with other AWS services, such as Lambda. You can coordinate the individual tasks of each workflow with Step Functions by expressing the workflow as a finite state machine, written in the Amazon States Language. Reference: Getting Started with AWS Step Functions. Category: Development
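To make that concrete, here is a minimal sketch of one state machine per workflow, with each task state invoking a Lambda function. The function ARNs, role ARN, and names are placeholders.

```python
# Sketch: define a two-task order-processing workflow in the Amazon States
# Language and register it as a Step Functions state machine.
import json
import boto3

definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
            "Next": "ChargeCustomer",
        },
        "ChargeCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-customer",
            "End": True,
        },
    },
}

boto3.client("stepfunctions").create_state_machine(
    name="order-processing-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",  # placeholder
)
```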
Q83: A company is migrating a web service to the AWS Cloud. The web service accepts requests by using HTTP (port 80). The company wants to use an AWS Lambda function to process HTTP requests. Which application design will satisfy these requirements? A. Create an Amazon API Gateway API. Configure proxy integration with the Lambda function. B. Create an Amazon API Gateway API. Configure non-proxy integration with the Lambda function. C. Configure the Lambda function to listen to inbound network connections on port 80. D. Configure the Lambda function as a target in the Application Load Balancer target group.
Answer: D. Notes: Elastic Load Balancing supports Lambda functions as a target for an Application Load Balancer. You can use load balancer rules to route HTTP requests to a function, based on the path or the header values. Then, process the request and return an HTTP response from your Lambda function. Reference: Using AWS Lambda with an Application Load Balancer. Category: Development
Q84: A company is developing an image processing application. When an image is uploaded to an Amazon S3 bucket, a number of independent and separate services must be invoked to process the image. The services do not have to be available immediately, but they must process every image. Which application design satisfies these requirements? A. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Each service pulls the message from the same queue. B. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Each service subscribes to the same topic. C. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe a separate Amazon Simple Notification Service (Amazon SNS) topic for each service to an Amazon SQS queue. D. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe a separate Simple Queue Service (Amazon SQS) queue for each service to the Amazon SNS topic.
Answer: D. Notes: Each service can subscribe to an individual Amazon SQS queue, which receives an event notification from the Amazon SNS topic. This is a fanout architectural implementation. Reference: Common Amazon SNS scenarios. Category: Development
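A hedged sketch of the fan-out wiring follows: one SNS topic receives the S3 event notification, and each downstream service gets its own SQS queue subscribed to that topic. Names are placeholders, and the SQS access policy that allows SNS to deliver messages is omitted for brevity.

```python
# Sketch: subscribe one SQS queue per service to a shared SNS topic (fan-out).
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="image-uploaded")["TopicArn"]

for service in ["thumbnailer", "metadata-extractor", "moderation"]:
    queue_url = sqs.create_queue(QueueName=f"{service}-queue")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
```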
Q85: A developer wants to implement Amazon EC2 Auto Scaling for a Multi-AZ web application. However, the developer is concerned that user sessions will be lost during scale-in events. How can the developer store the session state and share it across the EC2 instances? A. Write the sessions to an Amazon Kinesis data stream. Configure the application to poll the stream. B. Publish the sessions to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe each instance in the group to the topic. C. Store the sessions in an Amazon ElastiCache for Memcached cluster. Configure the application to use the Memcached API. D. Write the sessions to an Amazon Elastic Block Store (Amazon EBS) volume. Mount the volume to each instance in the group.
Answer: C. Notes: ElastiCache for Memcached is a distributed in-memory data store or cache environment in the cloud. It will meet the developer’s requirement of persistent storage and is fast to access. Reference: What is Amazon ElastiCache for Memcached?
Q86: A developer is integrating a legacy web application that runs on a fleet of Amazon EC2 instances with an Amazon DynamoDB table. There is no AWS SDK for the programming language that was used to implement the web application. Which combination of steps should the developer perform to make an API call to Amazon DynamoDB from the instances? (Select TWO.) A. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include an XML document that contains the request attributes. B. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include a JSON document that contains the request attributes. C. Sign the requests by using AWS access keys and Signature Version 4. D. Use an EC2 SSH key to calculate Signature Version 4 of the request. E. Provide the signature value through the HTTP X-API-Key header.
Answer: B. C. Notes: The HTTPS-based low-level AWS API for DynamoDB uses JSON as a wire protocol format. When you send HTTP requests to AWS, you sign the requests so that AWS can identify who sent them. Requests are signed with your AWS access key, which consists of an access key ID and secret access key. AWS supports two signature versions: Signature Version 4 and Signature Version 2. AWS recommends the use of Signature Version 4. Reference: Signing AWS API requests. Category: Development
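The sketch below illustrates the shape of such a call in Python, using botocore only for the Signature Version 4 math; a language without an AWS SDK would implement the same signing steps itself. Table name, key, and region are placeholders, and the requests package is a third-party dependency.

```python
# Sketch: send a SigV4-signed JSON POST to the low-level DynamoDB HTTP API.
import json
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.session import Session

region = "us-east-1"
endpoint = f"https://dynamodb.{region}.amazonaws.com/"
body = json.dumps({"TableName": "Users", "Key": {"UserId": {"S": "u-123"}}})

request = AWSRequest(
    method="POST",
    url=endpoint,
    data=body,
    headers={
        "Content-Type": "application/x-amz-json-1.0",
        "X-Amz-Target": "DynamoDB_20120810.GetItem",
    },
)
credentials = Session().get_credentials()
SigV4Auth(credentials, "dynamodb", region).add_auth(request)

response = requests.post(endpoint, headers=dict(request.headers), data=body)
print(response.json())
```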
Q87: A developer has written several custom applications that read and write to the same Amazon DynamoDB table. Each time the data in the DynamoDB table is modified, this change should be sent to an external API. Which combination of steps should the developer perform to accomplish this task? (Select TWO.) A. Configure an AWS Lambda function to poll the stream and call the external API. B. Configure an event in Amazon EventBridge (Amazon CloudWatch Events) that publishes the change to an Amazon Managed Streaming for Apache Kafka (Amazon MSK) data stream. C. Create a trigger in the DynamoDB table to publish the change to an Amazon Kinesis data stream. D. Deliver the stream to an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the API to the topic. E. Enable DynamoDB Streams on the table.
Answer: A. E. Notes: If you enable DynamoDB Streams on a table, you can associate the stream Amazon Resource Name (ARN) with an Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table’s stream. Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. You can enable DynamoDB Streams on a table to create an event that invokes an AWS Lambda function. Reference: Tutorial: Process New Items with DynamoDB Streams and Lambda. Category: Monitoring
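As a rough illustration, the Lambda function behind the event source mapping might look like the sketch below; the external API URL is a placeholder and urllib is used to avoid extra dependencies.

```python
# Sketch: process DynamoDB Streams records delivered to Lambda and forward
# each item-level change (INSERT, MODIFY, REMOVE) to an external API.
import json
import urllib.request

EXTERNAL_API_URL = "https://api.example.com/changes"   # placeholder

def handler(event, context):
    for record in event["Records"]:
        payload = {
            "eventName": record["eventName"],
            "keys": record["dynamodb"].get("Keys", {}),
            "newImage": record["dynamodb"].get("NewImage", {}),
        }
        req = urllib.request.Request(
            EXTERNAL_API_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        urllib.request.urlopen(req)
    return {"processed": len(event["Records"])}
```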
Q88: A company is migrating the create, read, update, and delete (CRUD) functionality of an existing Java web application to AWS Lambda. Which minimal code refactoring is necessary for the CRUD operations to run in the Lambda function? A. Implement a Lambda handler function. B. Import an AWS X-Ray package. C. Rewrite the application code in Python. D. Add a reference to the Lambda execution role.
Answer: A. Notes: Every Lambda function needs a Lambda-specific handler. Specifics of authoring vary between runtimes, but all runtimes share a common programming model that defines the interface between your code and the runtime code. You tell the runtime which method to run by defining a handler in the function configuration. The runtime runs that method. Next, the runtime passes in objects to the handler that contain the invocation event and context, such as the function name and request ID. Reference: Getting started with Lambda. Category: Refactoring
Q89: A company plans to use AWS log monitoring services to monitor an application that runs on premises. Currently, the application runs on a recent version of Ubuntu Server and outputs the logs to a local file. Which combination of steps should a developer perform to accomplish this goal? (Select TWO.) A. Update the application code to include calls to the agent API for log collection. B. Install the Amazon Elastic Container Service (Amazon ECS) container agent on the server. C. Install the unified Amazon CloudWatch agent on the server. D. Configure the long-term AWS credentials on the server to enable log collection by the agent. E. Attach an IAM role to the server to enable log collection by the agent.
Answer: C. D. Notes: The unified CloudWatch agent needs to be installed on the server. Ubuntu Server 18.04 is one of the many supported operating systems. When you install the unified CloudWatch agent on an on-premises server, you will specify a named profile that contains the credentials of the IAM user. Reference: Collecting metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent. Category: Monitoring
Q90: A developer wants to monitor invocations of an AWS Lambda function by using Amazon CloudWatch Logs. The developer added a number of print statements to the function code that write the logging information to the stdout stream. After running the function, the developer does not see any log data being generated. Why does the log data NOT appear in the CloudWatch logs? A. The log data is not written to the stderr stream. B. Lambda function logging is not automatically enabled. C. The execution role for the Lambda function did not grant permissions to write log data to CloudWatch Logs. D. The Lambda function outputs the logs to an Amazon S3 bucket.
Answer: C. Notes: The function needs permission to call CloudWatch Logs. Update the execution role to grant this permission, for example by attaching the managed policy AWSLambdaBasicExecutionRole. Reference: Troubleshoot execution issues in Lambda. Category: Monitoring
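A minimal sketch of that fix (the role name is a placeholder):

```python
# Sketch: grant the function's execution role the managed policy that allows
# writing to CloudWatch Logs.
import boto3

iam = boto3.client("iam")

iam.attach_role_policy(
    RoleName="my-lambda-execution-role",   # placeholder
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)
```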
Q91: Which of the following are best practices you should implement into ongoing deployments of your application? (Select THREE.)
A. Use stage variables to manage secrets across environments B. Create account-specific AWS SAM templates for each environment C. Use an AutoPublish alias D. Use traffic shifting with pre- and post-deployment hooks E. Test throughout the pipeline
Q92: You are handing off maintenance of your new serverless application to an incoming team lead. Which recommendations would you make? (Select THREE.)
A. Keep up to date with the quotas and payload sizes for each AWS service you are using
B. Analyze production access patterns to identify potential improvements
C. Design your services to extend their life as long as possible
D. Minimize changes to your production application
E. Compare the value of using the latest first-class integrations versus using Lambda between AWS services
Answer: A. B. D.
Notes: Keep up to date with the quotas and payload sizes for each AWS service you are using, analyze production access patterns to identify potential improvements, and minimize changes to your production application.
What is the AWS Certified Developer Associate Exam?
This AWS Certified Developer-Associate Examination is intended for individuals who perform a Developer role. It validates an examinee’s ability to:
Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
Demonstrate proficiency in developing, deploying, and debugging cloud-based applications by using AWS
Recommended general IT knowledge The target candidate should have the following: – In-depth knowledge of at least one high-level programming language – Understanding of application lifecycle management – The ability to write code for serverless applications – Understanding of the use of containers in the development process
Recommended AWS knowledge The target candidate should be able to do the following:
Use the AWS service APIs, CLI, and software development kits (SDKs) to write applications
Identify key features of AWS services
Understand the AWS shared responsibility model
Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy applications on AWS
Use and interact with AWS services
Apply basic understanding of cloud-native applications to write code
Write code by using AWS security best practices (for example, use IAM roles instead of secret and access keys in the code)
Author, maintain, and debug code modules on AWS
What is considered out of scope for the target candidate? The following is a non-exhaustive list of related job tasks that the target candidate is not expected to be able to perform. These items are considered out of scope for the exam: – Design architectures (for example, distributed system, microservices) – Design and implement CI/CD pipelines
Administer IAM users and groups
Administer Amazon Elastic Container Service (Amazon ECS)
Design AWS networking infrastructure (for example, Amazon VPC, AWS Direct Connect)
Understand compliance and licensing
Exam content Response types There are two types of questions on the exam: – Multiple choice: Has one correct response and three incorrect responses (distractors) – Multiple response: Has two or more correct responses out of five or more response options Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose. Distractors are generally plausible responses that match the content area. Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.
Unscored content The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.
Exam results The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines. Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720. Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels. Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam. Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.
Content outline This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context. The percentage in each domain represents only scored content.
Domain 1: Deployment 22% Domain 2: Security 26% Domain 3: Development with AWS Services 30% Domain 4: Refactoring 10% Domain 5: Monitoring and Troubleshooting 12%
Domain 1: Deployment 1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns. – Commit code to a repository and invoke build, test and/or deployment actions – Use labels and branches for version and release management – Use AWS CodePipeline to orchestrate workflows against different environments – Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS CodeDeploy for CI/CD purposes – Perform a roll back plan based on application deployment policy
1.2 Deploy applications using AWS Elastic Beanstalk. – Utilize existing supported environments to define a new application stack – Package the application – Introduce a new application version into the Elastic Beanstalk environment – Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable) – Validate application health using Elastic Beanstalk dashboard – Use Amazon CloudWatch Logs to instrument application logging
1.3 Prepare the application deployment package to be deployed to AWS. – Manage the dependencies of the code module (like environment variables, config files and static image files) within the package – Outline the package/container directory structure and organize files appropriately – Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)
1.4 Deploy serverless applications. – Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template – Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)
Domain 2: Security 2.1 Make authenticated calls to AWS services. – Communicate required policy based on least privileges required by application. – Assume an IAM role to access a service – Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)
2.2 Implement encryption using AWS services. – Encrypt data at rest (client side; server side; envelope encryption) using AWS services – Encrypt data in transit
2.3 Implement application authentication and authorization. – Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools – Use Amazon Cognito-provided credentials to write code that access AWS services. – Use Amazon Cognito sync to synchronize user profiles and data – Use developer-authenticated identities to interact between end user devices, backend authentication, and Amazon Cognito
Domain 3: Development with AWS Services 3.1 Write code for serverless applications. – Compare and contrast server-based vs. serverless model (e.g., micro services, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications) – Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler) – Create an API endpoint using Amazon API Gateway – Create and test appropriate API actions like GET, POST using the API endpoint – Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes) – Compute read/write capacity units for Amazon DynamoDB based on application requirements – Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis) – Invoke an AWS Lambda function synchronously and asynchronously
3.2 Translate functional requirements into application design. – Determine real-time vs. batch processing for a given use case – Determine use of synchronous vs. asynchronous for a given use case – Determine use of event vs. schedule/poll for a given use case – Account for tradeoffs for consistency models in an application design
Domain 4: Refactoring 4.1 Optimize applications to best use AWS services and features. Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache) Apply an Amazon S3 naming scheme for optimal read performance
4.2 Migrate existing application code to run on AWS. – Isolate dependencies – Run the application as one or more stateless processes – Develop in order to enable horizontal scalability – Externalize state
Domain 5: Monitoring and Troubleshooting
5.1 Write code that can be monitored. – Create custom Amazon CloudWatch metrics – Perform logging in a manner available to systems operators – Instrument application source code to enable tracing in AWS X-Ray
5.2 Perform root cause analysis on faults found in testing or production. – Interpret the outputs from the logging mechanism in AWS to identify errors in logs – Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues – Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component
Which key tools, technologies, and concepts might be covered on the exam?
The following is a non-exhaustive list of the tools and technologies that could appear on the exam. This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam. The general tools and technologies in this list appear in no particular order. AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance: – Analytics – Application Integration – Containers – Cost and Capacity Management – Data Movement – Developer Tools – Instances (virtual machines) – Management and Governance – Networking and Content Delivery – Security – Serverless
Management and Governance: – AWS CloudFormation – Amazon CloudWatch
Networking and Content Delivery: – Amazon API Gateway – Amazon CloudFront – Elastic Load Balancing
Security, Identity, and Compliance: – Amazon Cognito – AWS Identity and Access Management (IAM) – AWS Key Management Service (AWS KMS)
Storage: – Amazon S3
Out-of-scope AWS services and features
The following is a non-exhaustive list of AWS services and features that are not covered on the exam. These services and features do not represent every AWS offering that is excluded from the exam content. Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant. Out-of-scope AWS services and features include the following: – AWS Application Discovery Service – Amazon AppStream 2.0 – Amazon Chime – Amazon Connect – AWS Database Migration Service (AWS DMS) – AWS Device Farm – Amazon Elastic Transcoder – Amazon GameLift – Amazon Lex – Amazon Machine Learning (Amazon ML) – AWS Managed Services – Amazon Mobile Analytics – Amazon Polly
– Amazon QuickSight – Amazon Rekognition – AWS Server Migration Service (AWS SMS) – AWS Service Catalog – AWS Shield Advanced – AWS Shield Standard – AWS Snow Family – AWS Storage Gateway – AWS WAF – Amazon WorkMail – Amazon WorkSpaces
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
AWS Certified Developer – Associate Practice Questions And Answers Dump
Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost. How can you accommodate the partners’ broken web services without wasting your resources?
A. Create a delay queue and set DelaySeconds to 30 seconds
B. Requeue the message with a VisibilityTimeout of 30 seconds.
C. Create a dead letter queue and set the Maximum Receives to 3.
D. Requeue the message with a DelaySeconds of 30 seconds.
C. After a message is taken from the queue and returned for the maximum number of retries, it is automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.
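A hedged sketch of attaching such a redrive policy with boto3 follows; the queue URL and dead letter queue ARN are placeholders.

```python
# Sketch: configure a dead letter queue so a message received 3 times without
# being deleted moves to the DLQ instead of cycling through the queue forever.
import json
import boto3

sqs = boto3.client("sqs")

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/partner-commands",
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:partner-commands-dlq",
            "maxReceiveCount": "3",
        })
    },
)
```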
Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of reads operations to write operations will be 1000 to 1, with the same data being accessed frequently. What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?
A. Amazon DynamoDB auto scaling
B. Amazon DynamoDB cross-region replication
C. Amazon DynamoDB Streams
D. Amazon DynamoDB Accelerator
D. The AWS Documentation mentions the following:
DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios
As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
Q2: You are creating a DynamoDB table with the following attributes:
PurchaseOrderNumber (partition key)
CustomerID
PurchaseDate
TotalPurchaseValue
One of your applications must retrieve items from the table to calculate the total value of purchases for a particular customer over a date range. What secondary index do you need to add to the table?
A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the TotalPurchaseValue into the index provides all the data needed to satisfy the use case.
Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
Local Secondary Indexes still rely on the original Hash Key. When you supply a table with hash+range, think about the LSI as hash+range1, hash+range2.. hash+range6. You get 5 more range attributes to query on. Also, there is only one provisioned throughput.
Global Secondary Indexes defines a new paradigm – different hash/range keys per index. This breaks the original usage of one hash key per table. This is also why when defining GSI you are required to add a provisioned throughput per index and pay for it.
Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.
Throughput :
Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table and another for the index. Both operations will consume write capacity units from the table.
Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation consumes read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table and another for the index.
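To tie this back to Q2, here is a hedged sketch of creating the table with a global secondary index keyed on CustomerID and PurchaseDate and projecting TotalPurchaseValue; the table name, index name, and throughput values are placeholders.

```python
# Sketch: create a table whose GSI serves "purchases for a customer over a
# date range" queries entirely from the index.
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="PurchaseOrders",
    AttributeDefinitions=[
        {"AttributeName": "PurchaseOrderNumber", "AttributeType": "S"},
        {"AttributeName": "CustomerID", "AttributeType": "S"},
        {"AttributeName": "PurchaseDate", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "PurchaseOrderNumber", "KeyType": "HASH"}],
    GlobalSecondaryIndexes=[
        {
            "IndexName": "CustomerID-PurchaseDate-index",
            "KeySchema": [
                {"AttributeName": "CustomerID", "KeyType": "HASH"},
                {"AttributeName": "PurchaseDate", "KeyType": "RANGE"},
            ],
            "Projection": {
                "ProjectionType": "INCLUDE",
                "NonKeyAttributes": ["TotalPurchaseValue"],
            },
            "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        }
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```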
Q5: Lambda allows you to upload code and dependencies for function packages:
A. Only from a directly uploaded zip file
B. Only via SFTP
C. Only from a zip file in AWS S3
D. From a zip file in AWS S3 or uploaded directly from elsewhere
Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?
A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.
D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return requests. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return requests.
Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?
A. Create and assign EIP to each instance
B. Create and attach a second IGW to the VPC.
C. Create and utilize a NAT Gateway
D. Connect to a VPN
C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.
Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?
A. Security Groups
B. Route Tables
C. Elastic Load Balancer
D. Auto Scaling
D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.
Q11: You’re writing a script with an AWS SDK that uses the AWS API actions to create AMIs for non-EBS backed instances. Which API call occurs in the final step of creating an AMI?
A. RegisterImage
B. CreateImage
C. ami-register-image
D. ami-create-image
A. The final call is RegisterImage. All AWS API actions follow this capitalization style and do not contain hyphens.
Q12: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?
A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
B. Permanently assigning users to specific instances and always routing their traffic to those instances
C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance
Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?
A. Autoscaling requires using Amazon EBS-backed instances
B. Virtual Private Cloud requires EBS backed instances
C. Amazon EBS-backed instances can be stopped and restarted without losing data
D. Instance-store backed instances can be stopped and restarted without losing data
C. Instance-store backed images use “ephemeral” (temporary) storage. The storage is only available during the life of an instance. Rebooting an instance allows ephemeral data to persist. However, stopping and starting an instance will remove all ephemeral storage.
Q15: After creating a new Linux instance on Amazon EC2 and downloading the .pem key file (my_key.pem), you try to SSH into the instance using the following command: ssh -i my_key.pem ec2-user@52.2.222.22. However, you receive the following error: “WARNING: UNPROTECTED PRIVATE KEY FILE!” What is the most probable reason for this and how can you fix it?
A. You do not have root access on your terminal and need to use the sudo option for this to work.
B. You do not have enough permissions to perform the operation.
C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.
D. You need to run something like: chmod 400 my_key.pem
Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?
A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.
D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.
Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:
A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
B. Can only be used to launch EC2 instances in the same country as the AMI is stored
C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
D. Can be used to launch EC2 instances in any AWS region
C. AMIs are only available in the region they are created. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another
Q18: Which of the following statements is true about the Elastic File System (EFS)?
A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
B. EFS can be used by multiple EC2 instances simultaneously
C. EFS cannot be used by an instance using EBS
D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes
Q19: What are the benefits of using IAM groups? (Select TWO.)
A. The ability to create custom permission policies.
B. Assigning IAM permission policies to more than one user at a time.
C. Easier user/policy management.
D. Allowing EC2 instances to gain access to S3.
B. and C.
A. is incorrect: This is a benefit of IAM generally or a benefit of IAM policies. But IAM groups don’t create policies, they have policies attached to them.
Q22: What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?
A. Amazon DynamoDB auto scaling
B. Amazon DynamoDB cross-region replication
C. Amazon DynamoDB Streams
D. Amazon DynamoDB Accelerator
D. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:
As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?
A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1.micro to m4.large.
B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.
B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence. Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation. During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:
Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also applyrecommended values for some options that apply at this level unless overridden.
Saved Configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
Configuration Files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle.
Configuration files are executed in alphabetical order. For example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
Default Values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.
If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or settings applied directly to the environment, the setting is stored as part of the environment’s configuration. These settings can be removed with the AWS CLI or with the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.
Q24: What statements are true about Availability Zones (AZs) and Regions?
A. There is only one AZ in each AWS Region
B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
C. AZs can be moved between AWS Regions based on your needs
D. There are (almost always) two or more AZs in each AWS Region
B. and D. Availability Zones are physically separated locations within a Region, and each Region (almost always) contains two or more of them. AZs cannot be moved between Regions.
Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?
A. Eventual Consistent Reads
B. Conditional reads for Consistency
C. Strongly Consistent Reads
D. Not possible
C. This is stated clearly in the AWS documentation on read consistency for DynamoDB: only with strongly consistent reads are you guaranteed to receive the most up-to-date value, reflecting all prior successful writes.
Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is taken during the migration process. Which of the following steps would you consider for this requirement?
A. Create an Opswork stack and deploy the Docker containers
B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.
B. The Elastic Beanstalk service is the ideal service to quickly provision development environments. You can also create environments which can be used to host Docker based containers.
Q28: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes a longer than expected time to upload the object. You want to improve the performance of the application. Which of the following would you consider?
A. Create multiple threads and upload the objects in the multiple threads
B. Write the items in batches for better performance
C. Use the Multipart upload API
D. Enable versioning on the Bucket
C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
Q29: A security system monitors 600 cameras, saving image metadata every 1 minute to an Amazon DynamoDB table. Each sample is 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?
A. 6000
B. 10
C. 3600
D. 600
B. Write capacity for a DynamoDB table is specified as the number of 1 KB writes per second. Each camera writes once per minute, so 600 writes per minute divided by 60 gives 10 writes per second; at 1 KB per item, this requires a write throughput of 10.
You can specify the Write capacity in the Capacity tab of the DynamoDB table.
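As a sanity check, the same arithmetic can be expressed in a few lines of Python. This is only an illustrative sketch: the helper name is made up, and the camera count, interval, and item size are taken from the question.

```python
import math

def required_wcu(writes_per_period: int, period_seconds: int, item_size_kb: float) -> int:
    """Estimate DynamoDB write capacity units (1 WCU = one write of up to 1 KB per second)."""
    writes_per_second = writes_per_period / period_seconds
    wcu_per_item = math.ceil(item_size_kb)        # item size rounds up to the next 1 KB
    return math.ceil(writes_per_second * wcu_per_item)

# 600 cameras, one 1 KB write each per minute -> 10 WCUs
print(required_wcu(writes_per_period=600, period_seconds=60, item_size_kb=1))  # 10
```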
Q33: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?
A. Create and assign EIP to each instance
B. Create and attach a second IGW to the VPC.
C. Create and utilize a NAT Gateway
D. Connect to a VPN
C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet. Reference: AWS Network Address Translation Gateway
Q34: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?
A. Security Groups
B. Route Tables
C. Elastic Load Balancer
D. Auto Scaling
D. Auto Scaling is designed specifically with elasticity in mind. It allows compute capacity to increase and decrease based on demand, thus creating elasticity in the architecture. Reference: AWS Auto Scaling
Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?
A. Lazy loading
B. Write-through
C. Error retries
D. Exponential backoff
Answer – A Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. All other options are incorrect. Reference: Caching Strategies
Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15-60 seconds. How should the Developer efficiently query the queue for new messages?
A. Use long polling
B. Set a custom visibility timeout
C. Use short polling
D. Implement exponential backoff
Answer – A Long polling helps ensure that the application makes fewer requests for messages over a given period of time, which is more cost effective. Since the messages only become available after 15-60 seconds and we don’t know exactly when they will arrive, it is better to use long polling. Reference: Amazon SQS Long Polling
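A minimal sketch of long polling with boto3 is shown below; the queue URL is a placeholder. Setting WaitTimeSeconds greater than zero on the ReceiveMessage call is what turns on long polling for that request.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

# Long polling: the call waits up to 20 seconds for a message to arrive,
# which reduces the number of empty responses compared with short polling.
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,   # > 0 enables long polling for this request
)

for message in response.get("Messages", []):
    print(message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```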
Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With new Lambda function working as per expectation which of the following will shift traffic from original Lambda function to new Lambda function in the shortest time frame?
A. Canary10Percent5Minutes
B. Linear10PercentEvery10Minutes
C. Canary10Percent15Minutes
D. Linear10PercentEvery1Minute
Answer – A With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment and the remaining 90 percent is shifted 5 minutes later, so the shift completes faster than any of the other listed options. Reference: Gradual Code Deployment
Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?
A. AWS::Serverless::Api
B. AWS::Serverless::Application
C. AWS::Serverless::Layerversion
D. AWS::Serverless::Function
Answer – B The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket. Reference: Declaring Serverless Resources
Q35: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the followings is True with regards to Envelope Encryption?
A. Data is encrypted by an encrypted Data key which is further encrypted using an encrypted Master Key.
B. Data is encrypted by plaintext Data key which is further encrypted using encrypted Master Key.
C. Data is encrypted by encrypted Data key which is further encrypted using plaintext Master Key.
D. Data is encrypted by plaintext Data key which is further encrypted using plaintext Master Key.
Answer – D With envelope encryption, unencrypted data is encrypted using a plaintext data key. This data key is then further encrypted using the plaintext master key. The plaintext master key is stored securely in AWS KMS and is known as a Customer Master Key (CMK). Reference: AWS Key Management Service Concepts
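The flow can be sketched with boto3 and the third-party cryptography package; this is only an illustration of the envelope pattern, not the exam's required workflow. The CMK alias is a placeholder, and AES-GCM is an assumed choice of local cipher.

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")
key_id = "alias/my-master-key"  # placeholder CMK alias

# 1. Ask KMS for a data key: we receive the plaintext key and an encrypted copy.
data_key = kms.generate_data_key(KeyId=key_id, KeySpec="AES_256")
plaintext_key = data_key["Plaintext"]
encrypted_key = data_key["CiphertextBlob"]   # store this alongside the ciphertext

# 2. Encrypt the data locally with the plaintext data key, then discard that key.
nonce = os.urandom(12)
ciphertext = AESGCM(plaintext_key).encrypt(nonce, b"sensitive data", None)

# 3. To decrypt later: kms.decrypt(CiphertextBlob=encrypted_key) returns the
#    plaintext data key again, which can decrypt the ciphertext with the nonce.
```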
Q36: You are developing an application that will be comprised of the following architecture –
A set of EC2 instances to process the videos.
These EC2 instances will be spun up by an Auto Scaling group.
SQS Queues to maintain the processing messages.
There will be 2 pricing tiers.
How will you ensure that the premium customers’ videos are given more preference?
A. Create 2 Autoscaling Groups, one for normal and one for premium customers
B. Create 2 set of Ec2 Instances, one for normal and one for premium customers
C. Create 2 SQS queues, one for normal and one for premium customers
D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.
Answer – C The ideal option is to create 2 SQS queues. The application can then process messages from the premium (high-priority) queue first. The other options are not ideal; they would lead to extra costs and extra maintenance. Reference: SQS
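A minimal sketch of the two-queue pattern is shown below: the worker drains the premium queue first and only falls back to the standard queue when it is empty. Both queue URLs and the helper name are hypothetical.

```python
import boto3

sqs = boto3.client("sqs")
PREMIUM_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/videos-premium"    # placeholder
STANDARD_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/videos-standard"  # placeholder

def next_batch():
    """Return messages from the premium queue first; fall back to standard when it is empty."""
    for queue_url in (PREMIUM_QUEUE, STANDARD_QUEUE):
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=2)
        messages = resp.get("Messages", [])
        if messages:
            return queue_url, messages
    return None, []

queue_url, messages = next_batch()
for msg in messages:
    # process the video job here ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```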
Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?
A. CustomerID
B. CustomerName
C. Location
D. Age
Answer- A Use high-cardinality attributes. These are attributes that have distinct values for each item, such as e-mail ID, employee number, customer ID, session ID, order ID, and so on. You can also use composite attributes, combining more than one attribute to form a unique key. Reference: Choosing the right DynamoDB Partition Key
Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for any network delays. Which of the following two mechanisms should he implement in the application?
A. Multiple SQS queues
B. Exponential backoff algorithm
C. Retries in your application code
D. Consider using the Java SDK.
Answer- B. and C. In addition to simple retries, each AWS SDK implements exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values, and should be set based on the operation being performed, as well as other local factors, such as network latency. Reference: Error Retries and Exponential Backoff in AWS
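The SDKs already implement this internally, but the idea can be illustrated with a small, hypothetical retry helper that uses exponential backoff with jitter and a capped number of retries:

```python
import random
import time

def call_with_backoff(operation, max_retries=5, base_delay=0.1, max_delay=5.0):
    """Retry a flaky call with exponential backoff plus jitter."""
    for attempt in range(max_retries + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_retries:
                raise                                   # give up after the last retry
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))        # jitter avoids synchronized retries

# usage: call_with_backoff(lambda: client.some_api_call(...))
```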
Q39: An application is being developed that is going to write data to a DynamoDB table. You have to setup the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds. Each item is of size 6KB. The reads can be eventual consistent reads. What should be the read capacity that needs to be set on the table?
A. 10
B. 20
C. 6
D. 30
Answer – A
Since 300 items are read every 30 seconds, that is 300/30 = 10 items read per second. Each item is 6 KB, and a strongly consistent read capacity unit covers up to 4 KB, so each item requires 2 reads, giving 2 × 10 = 20 reads per second. Since eventual consistency is acceptable, divide the 20 reads by 2, which gives a read capacity of 10.
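The same reasoning in code, with the 4 KB-per-RCU rule made explicit; the helper name is illustrative only:

```python
import math

def required_rcu(items_per_period, period_seconds, item_size_kb, eventually_consistent=True):
    """1 RCU = one strongly consistent 4 KB read/sec, or two eventually consistent reads."""
    items_per_second = items_per_period / period_seconds
    rcu_per_item = math.ceil(item_size_kb / 4)          # round item size up to 4 KB units
    rcu = items_per_second * rcu_per_item
    if eventually_consistent:
        rcu = rcu / 2
    return math.ceil(rcu)

print(required_rcu(300, 30, 6))  # -> 10
```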
Q40: You are in charge of deploying an application that will be hosted on an EC2 Instance and sit behind an Elastic Load Balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the options below can satisfy this requirement?
A. Use AWS CloudTrail with your load balancer
B. Enable access logs on the load balancer
C. Use a CloudWatch Logs Agent
D. Create a custom CloudWatch metric filter on your load balancer
Answer – B Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues. Reference: Access Logs for Your Application Load Balancer
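For an Application Load Balancer, access logging can be switched on through load balancer attributes; a hedged boto3 sketch follows. The load balancer ARN and bucket name are placeholders, and the bucket must already have a policy that allows Elastic Load Balancing to deliver logs (not shown here).

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholders: your ALB's ARN and an S3 bucket that already permits ELB log delivery.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-alb-access-logs"},
        {"Key": "access_logs.s3.prefix", "Value": "prod"},
    ],
)
```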
Q41: A static web site has been hosted on a bucket and is now being accessed by users. The JavaScript section of one of the web pages has been changed to access data hosted in another S3 bucket. That web page now no longer loads in the browser. Which of the following can help alleviate the error?
A. Enable versioning for the underlying S3 bucket.
B. Enable Replication so that the objects get replicated to the other bucket
C. Enable CORS for the bucket
D. Change the Bucket policy for the bucket to allow access from the other bucket
Answer – C
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:
Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.
Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.
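A CORS rule like the one in Scenario 1 can be applied with boto3's PutBucketCors call; this is a minimal sketch in which the bucket name and allowed origin are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Allow the static-site origin (placeholder) to make GET/PUT requests against this bucket.
s3.put_bucket_cors(
    Bucket="my-data-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET", "PUT"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```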
Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user Images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below
A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.
Answer- C The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3. You can then provide access to the objects based on key values generated from the user ID.
Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.
A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.
Answer – C Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data. Reference: Amazon Kinesis
Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?
A. AWS Simple Storage Service
B. AWS DynamoDB
C. AWS RDS
D. AWS Redshift
Answer – B DynamoDB is an alternative solution that can be used for session management. Its low data-access latency makes it a good data store for session state. Reference: Scalable Session Handling in PHP Using Amazon DynamoDB
Q45: Your application currently interacts with a DynamoDB table. Records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the below features should be used when developing such a solution?
A. AWS DynamoDB Encryption
B. AWS DynamoDB Streams
C. AWS DynamoDB Accelerator
D. AWS Table Accelerator
Answer – B DynamoDB Streams Use Cases and Design Patterns This post describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. We will consider how to manage the following scenarios:
How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
How do you trigger an event based on a particular transaction?
How do you audit or archive transactions?
How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?
Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states—success or failure. In other words, there is no partial completion. As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it’s consistent and can be rolled back before commit. You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time. AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region. All of the other options are incorrect since none of these would meet the core requirement. Reference: DynamoDB Streams Use Cases and Design Patterns
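A minimal sketch of the pattern is a Lambda function triggered by the table's stream that copies changed items into a second table. The secondary table name is a placeholder, and the sketch assumes the stream view type includes NEW_IMAGE.

```python
import boto3

# Low-level client: stream records already use DynamoDB's attribute-value JSON,
# so the NewImage can be written to the secondary table (placeholder name) as-is.
dynamodb = boto3.client("dynamodb")
SECONDARY_TABLE = "OrdersAudit"  # placeholder

def handler(event, context):
    """Invoked by a DynamoDB Streams event source mapping."""
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"]["NewImage"]
            dynamodb.put_item(TableName=SECONDARY_TABLE, Item=new_image)
    return {"processed": len(event["Records"])}
```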
Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and the scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?
A. Large Page size
B. Reduced page size
C. Parallel Scans
D. Sequential scans
Answer – B When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table’s provisioned throughput. Reduce page size Because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling. Reference1: Rate-Limited Scans in Amazon DynamoDB
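A hedged sketch of a rate-friendly scan follows: it uses the Limit parameter to keep pages small, follows LastEvaluatedKey to page through the table, and pauses briefly between pages. The table name and page size are illustrative.

```python
import time
import boto3

table = boto3.resource("dynamodb").Table("MyLargeTable")  # placeholder table name

scan_kwargs = {"Limit": 40}            # small page size -> fewer read units per request
while True:
    page = table.scan(**scan_kwargs)
    for item in page.get("Items", []):
        pass                           # process the item here
    last_key = page.get("LastEvaluatedKey")
    if not last_key:
        break
    scan_kwargs["ExclusiveStartKey"] = last_key
    time.sleep(0.1)                    # brief pause between pages to smooth out read load
```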
Q47: Which of the following is the correct way of passing a stage variable to an HTTP URL? (Select TWO.)
A. http://example.com/${}/prod
B. http://example.com/${stageVariables.}/prod
C. http://${stageVariables.}.example.com/dev/operation
D. http://${stageVariables}.example.com/dev/operation
E. http://${}.example.com/dev/operation
F. http://example.com/${stageVariables}/prod
Answer – B. and C. A stage variable can be used as part of an HTTP integration URL in the following ways: as a full URI without protocol, a full domain, a subdomain, a path, or a query string. In the above case, options B and C use the stage variable as a path and as a subdomain, respectively. Reference: Amazon API Gateway Stage Variables Reference
Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes which they use for their on-premise configuration for servers in AWS. Which of the following service would be ideal to use in this regard?
A. AWS Elastic Beanstalk
B. AWS OpsWorks
C. AWS Cloudformation
D. AWS SQS
Answer – B AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management. Reference: AWS OpsWorks
Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?
A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
B. Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access.
C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.
Answer – C With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) — such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP — receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be used this way. Reference: About Web Identity Federation
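The credential exchange can be sketched with STS's AssumeRoleWithWebIdentity call. In a real browser app, Amazon Cognito or the AWS SDK for JavaScript typically handles this; the role ARN and IdP token below are placeholders.

```python
import boto3

sts = boto3.client("sts")

# Placeholders: the IAM role trusted for web identity federation and the
# OIDC token returned by Google/Facebook after the user signs in.
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/WebAppDynamoDBAccess",
    RoleSessionName="app-user-session",
    WebIdentityToken="<token-from-identity-provider>",
)["Credentials"]

# Use the temporary credentials to call DynamoDB on the user's behalf.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```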
Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?
A. Cognito Data
B. Cognito Events
C. Cognito Streams
D. Cognito Callbacks
Answer – C Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since Cognito Streams is the feature designed for this purpose.
Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 Instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? Choose 2 answers from the options given below
A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function
Answer: A and C. AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC. Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC
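The same configuration can be applied programmatically through UpdateFunctionConfiguration; a minimal sketch follows, with the function name, subnet IDs, and security group ID as placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Placeholders: private subnet IDs and a security group that allows the
# function to reach the EC2 instances it needs.
lambda_client.update_function_configuration(
    FunctionName="my-vpc-function",
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```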
Q52: You’ve currently been tasked to migrate an existing on-premise environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant environments in the beanstalk service that would be suitable to host your application. What should you consider doing in this case?
A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
B. Consider using CloudFormation to deploy your environment to Elastic Beanstalk
C. Consider using Packer to create a custom platform
D. Consider deploying your application using the Elastic Container Service
Answer – C Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a Custom Image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software, for which Elastic Beanstalk doesn’t provide a platform out of the box. Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance. To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems—Ubuntu, RHEL, or Amazon Linux (see the flavor entry in Platform.yaml File Format for the exact version numbers)—and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings. Reference: AWS Elastic Beanstalk Custom Platforms
Q53: Company B is writing 10 items to a DynamoDB table every second. Each item is 15.5 KB in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.
A. 10
B. 160
C. 155
D. 16
Answer – B. Each write capacity unit covers one write of up to 1 KB per second, and item sizes are rounded up to the next 1 KB. A 15.5 KB item therefore consumes 16 write capacity units, and writing 10 such items per second requires 10 × 16 = 160 write capacity units. Reference: Read/Write Capacity Mode
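The rounding step is what makes the answer 160 rather than 155; a two-line check in Python, using the numbers from the question:

```python
import math

item_size_kb, items_per_second = 15.5, 10
print(math.ceil(item_size_kb) * items_per_second)  # 16 WCUs per item * 10 items/sec = 160
```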
Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?
Answer: Continuous Integration.
Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?
A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
C. Configure Notifications in the console; this will create a CloudWatch Events rule to send a notification to an SNS topic which will trigger an email to be sent to the user.
D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.
Answer: C. Configuring notifications for the repository in the console creates a CloudWatch Events rule that publishes to an SNS topic; users subscribed to the topic receive an email when code is pushed.
Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?
A. Transforms
B. Outputs
C. Resources
D. Instances
Answer: C. The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3. Reference: Resources
Q64: Which AWS service can be used to fully automate your entire release process?
A. CodeDeploy
B. CodePipeline
C. CodeCommit
D. CodeBuild
Answer: B. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?
A. Outputs
B. Transforms
C. Resources
D. Exports
Answer: A. Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack. Reference: CloudFormation Outputs
Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?
A. Inputs
B. Resources
C. Transforms
D. Files
Answer: C. Transforms is used to reference code located in S3 and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments. Reference: Transforms
Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?
Answer: The AppSpec file (appspec.yml).
Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?
A. Share the code using an EBS volume
B. Copy and paste the code into the template each time you need to use it
C. Use a cloudformation nested stack
D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.
Answer: C. A CloudFormation nested stack lets you reference a common template stored in S3 from multiple parent templates, so the same piece of code can be re-used for common use cases like a load balancer or web server.
Q72: Which of the following is an encrypted key used by KMS to encrypt your data
A. Customer Managed Key
B. Encryption Key
C. Envelope Key
D. Customer Master Key
Answer: C. Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption: the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.
Q75: A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.) A. Compiled application code B. Java runtime environment C. References to the event sources D. Lambda execution role E. Application dependencies
Answer: A. E. Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your compiled application code and any dependencies. Reference: Lambda deployment packages.
Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package? A. A launch template for the Amazon EC2 Auto Scaling group B. A CodeDeploy AppSpec file C. An EC2 role that grants the application access to AWS services D. An IAM policy that grants the application access to AWS services
Answer: B. Notes: The CodeDeploy AppSpec (application specific) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file. Reference: CodeDeploy application specification (AppSpec) files. Category: Deployment
Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)
A. Create a new Lambda version every time a new code release needs testing. B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version. C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT. D. Create a new Lambda layer every time a new code release needs testing. E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.
Answer: A. B. Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes, without affecting the stable production version of the code. By creating separate aliases for Production and Development, systems can initiate the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. The $LATEST version is the newest published version. Reference: Lambda function versions.
Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.) A. Update event source mappings with the ARN of the Lambda layer. B. Point a Lambda alias to a new version of the Lambda function. C. Create a Lambda alias for each published version of the Lambda function. D. Point a Lambda alias to a new Lambda function alias. E. Update the event source mappings with the Lambda alias ARN.
Answer: B. E. Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version. Reference: Lambda function aliases. Category: Deployment
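A minimal boto3 sketch of the alias workflow follows; the function and alias names are placeholders. The event source mappings keep pointing at the alias ARN, so only the alias needs updating when a new version is published.

```python
import boto3

lambda_client = boto3.client("lambda")
FUNCTION = "order-processor"   # placeholder function name

# Publish the current $LATEST code as an immutable, numbered version...
version = lambda_client.publish_version(FunctionName=FUNCTION)["Version"]

# ...then repoint the existing Production alias at it. Event source mappings that
# reference the alias ARN continue to work without any change.
lambda_client.update_alias(FunctionName=FUNCTION, Name="Production", FunctionVersion=version)
```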
Q78: A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements? A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C). B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket. C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket. D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.
Answer: D. Notes: With SSE-C, the company supplies and manages the encryption keys while Amazon S3 performs the encryption and decryption as objects are uploaded and downloaded. Category: Security
Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)
A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS). B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS). C. Use generated keys with the DynamoDB Encryption Client. D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs). E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).
Answer: A. C. Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK. Reference: Direct KMS Materials Provider. Category: Deployment
Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.) A. Create an AWS Lambda authorizer for the API. B. Create an Amazon Cognito authorizer for the API. C. Configure the authorizer for the API resource. D. Configure the API methods to use the authorizer. E. Configure the authorizer for the API stage.
Answer: B. D. Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API. Reference: Control access to a REST API using Amazon Cognito user pools as authorizer. Category: Security
Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.) A. Authenticate to the Amazon Cognito identity pool directly. B. Authenticate to AWS Identity and Access Management (IAM) directly. C. Authenticate to the Amazon Cognito user pool directly. D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS). E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.
Answer: C. E. Notes: The Amazon Cognito user pool provides direct user authentication. The Amazon Cognito user pool provides a federated authentication option with third-party identity provider (IdP), including amazon.com. Reference: Adding User Pool Sign-in Through a Third Party. Category: Security
Question: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.) A. Define an AWS Step Functions task for each Lambda function. B. Define an AWS Step Functions task for each workflow. C. Write code that polls the AWS Step Functions invocation to coordinate each workflow. D. Define an AWS Step Functions state machine for each workflow. E. Define an AWS Step Functions state machine for each Lambda function. Answer: A. D. Notes: Step Functions is based on state machines and tasks. A state machine is a workflow: it can be used to express a workflow as a number of states, their relationships, and their input and output. Tasks perform work by coordinating with other AWS services, such as Lambda. You can coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine, written in the Amazon States Language. Reference: Getting Started with AWS Step Functions (https://aws.amazon.com/step-functions/getting-started/). Category: Development
What is the AWS Certified Developer Associate Exam?
This AWS Certified Developer-Associate Examination is intended for individuals who perform a Developer role. It validates an examinee’s ability to:
Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
Demonstrate proficiency in developing, deploying, and debugging cloud-based applications by using AWS
Recommended general IT knowledge The target candidate should have the following: – In-depth knowledge of at least one high-level programming language – Understanding of application lifecycle management – The ability to write code for serverless applications – Understanding of the use of containers in the development process
Recommended AWS knowledge The target candidate should be able to do the following:
Use the AWS service APIs, CLI, and software development kits (SDKs) to write applications
Identify key features of AWS services
Understand the AWS shared responsibility model
Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy applications on AWS
Use and interact with AWS services
Apply basic understanding of cloud-native applications to write code
Write code by using AWS security best practices (for example, use IAM roles instead of secret and access keys in the code)
Author, maintain, and debug code modules on AWS
What is considered out of scope for the target candidate? The following is a non-exhaustive list of related job tasks that the target candidate is not expected to be able to perform. These items are considered out of scope for the exam: – Design architectures (for example, distributed system, microservices) – Design and implement CI/CD pipelines
Administer IAM users and groups
Administer Amazon Elastic Container Service (Amazon ECS)
Design AWS networking infrastructure (for example, Amazon VPC, AWS Direct Connect)
Understand compliance and licensing
Exam content Response types There are two types of questions on the exam: – Multiple choice: Has one correct response and three incorrect responses (distractors) – Multiple response: Has two or more correct responses out of five or more response options Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose. Distractors are generally plausible responses that match the content area. Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.
Unscored content The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.
Exam results The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines. Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720. Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels. Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam. Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.
Content outline This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context. The percentage in each domain represents only scored content.
Domain 1: Deployment 22% Domain 2: Security 26% Domain 3: Development with AWS Services 30% Domain 4: Refactoring 10% Domain 5: Monitoring and Troubleshooting 12%
Domain 1: Deployment 1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns. – Commit code to a repository and invoke build, test and/or deployment actions – Use labels and branches for version and release management – Use AWS CodePipeline to orchestrate workflows against different environments – Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS CodeDeploy for CI/CD purposes – Perform a roll back plan based on application deployment policy
1.2 Deploy applications using AWS Elastic Beanstalk. – Utilize existing supported environments to define a new application stack – Package the application – Introduce a new application version into the Elastic Beanstalk environment – Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable) – Validate application health using Elastic Beanstalk dashboard – Use Amazon CloudWatch Logs to instrument application logging
1.3 Prepare the application deployment package to be deployed to AWS. – Manage the dependencies of the code module (like environment variables, config files and static image files) within the package – Outline the package/container directory structure and organize files appropriately – Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)
1.4 Deploy serverless applications. – Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template – Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)
Domain 2: Security 2.1 Make authenticated calls to AWS services. – Communicate required policy based on least privileges required by application. – Assume an IAM role to access a service – Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)
2.2 Implement encryption using AWS services. – Encrypt data at rest (client side; server side; envelope encryption) using AWS services – Encrypt data in transit
2.3 Implement application authentication and authorization. – Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools – Use Amazon Cognito-provided credentials to write code that access AWS services. – Use Amazon Cognito sync to synchronize user profiles and data – Use developer-authenticated identities to interact between end user devices, backend authentication, and Amazon Cognito
Domain 3: Development with AWS Services 3.1 Write code for serverless applications. – Compare and contrast server-based vs. serverless model (e.g., micro services, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications) – Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler) – Create an API endpoint using Amazon API Gateway – Create and test appropriate API actions like GET, POST using the API endpoint – Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes) – Compute read/write capacity units for Amazon DynamoDB based on application requirements – Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis) – Invoke an AWS Lambda function synchronously and asynchronously
3.2 Translate functional requirements into application design. – Determine real-time vs. batch processing for a given use case – Determine use of synchronous vs. asynchronous for a given use case – Determine use of event vs. schedule/poll for a given use case – Account for tradeoffs for consistency models in an application design
Domain 4: Refactoring 4.1 Optimize applications to best use AWS services and features. Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache) Apply an Amazon S3 naming scheme for optimal read performance
4.2 Migrate existing application code to run on AWS. – Isolate dependencies – Run the application as one or more stateless processes – Develop in order to enable horizontal scalability – Externalize state
Domain 5: Monitoring and Troubleshooting
5.1 Write code that can be monitored. – Create custom Amazon CloudWatch metrics – Perform logging in a manner available to systems operators – Instrument application source code to enable tracing in AWS X-Ray
5.2 Perform root cause analysis on faults found in testing or production. – Interpret the outputs from the logging mechanism in AWS to identify errors in logs – Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues – Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component
Which key tools, technologies, and concepts might be covered on the exam?
The following is a non-exhaustive list of the tools and technologies that could appear on the exam. This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam. The general tools and technologies in this list appear in no particular order. AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance: – Analytics – Application Integration – Containers – Cost and Capacity Management – Data Movement – Developer Tools – Instances (virtual machines) – Management and Governance – Networking and Content Delivery – Security – Serverless
Management and Governance: – AWS CloudFormation – Amazon CloudWatch
Networking and Content Delivery: – Amazon API Gateway – Amazon CloudFront – Elastic Load Balancing
Security, Identity, and Compliance: – Amazon Cognito – AWS Identity and Access Management (IAM) – AWS Key Management Service (AWS KMS)
Storage: – Amazon S3
Out-of-scope AWS services and features
The following is a non-exhaustive list of AWS services and features that are not covered on the exam. These services and features do not represent every AWS offering that is excluded from the exam content. Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant. Out-of-scope AWS services and features include the following: – AWS Application Discovery Service – Amazon AppStream 2.0 – Amazon Chime – Amazon Connect – AWS Database Migration Service (AWS DMS) – AWS Device Farm – Amazon Elastic Transcoder – Amazon GameLift – Amazon Lex – Amazon Machine Learning (Amazon ML) – AWS Managed Services – Amazon Mobile Analytics – Amazon Polly
– Amazon QuickSight – Amazon Rekognition – AWS Server Migration Service (AWS SMS) – AWS Service Catalog – AWS Shield Advanced – AWS Shield Standard – AWS Snow Family – AWS Storage Gateway – AWS WAF – Amazon WorkMail – Amazon WorkSpaces
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
AWS Certified Developer – Associate Practice Questions And Answers Dump
Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost. How can you accommodate the partners’ broken web services without wasting your resources?
A. Create a delay queue and set DelaySeconds to 30 seconds
B. Requeue the message with a VisibilityTimeout of 30 seconds.
C. Create a dead letter queue and set the Maximum Receives to 3.
D. Requeue the message with a DelaySeconds of 30 seconds.
C. After a message is taken from the queue and returned for the maximum number of retries, it is automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.
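For reference, here is a minimal boto3 sketch of wiring a dead-letter queue to a source queue with a RedrivePolicy. The queue names and URL are hypothetical placeholders, not from the question.

```python
import json
import boto3

sqs = boto3.client("sqs")  # assumes credentials/region are already configured

# Create the dead-letter queue and look up its ARN
dlq = sqs.create_queue(QueueName="partner-commands-dlq")
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Attach a redrive policy to the source queue: after 3 failed receives,
# a message is moved to the DLQ instead of being retried forever.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/partner-commands",  # hypothetical
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
        )
    },
)
```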
Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently. What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?
A. Amazon DynamoDB auto scaling
B. Amazon DynamoDB cross-region replication
C. Amazon DynamoDB Streams
D. Amazon DynamoDB Accelerator
D. The AWS Documentation mentions the following:
DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios
As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
Q2: You are creating a DynamoDB table with the following attributes:
PurchaseOrderNumber (partition key)
CustomerID
PurchaseDate
TotalPurchaseValue
One of your applications must retrieve items from the table to calculate the total value of purchases for a particular customer over a date range. What secondary index do you need to add to the table?
A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute
D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute
C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the TotalPurchaseValue into the index provides all the data needed to satisfy the use case.
Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
Local Secondary Indexes still rely on the original Hash Key. When you supply a table with hash+range, think about the LSI as hash+range1, hash+range2, …, hash+range6. You get 5 more range attributes to query on. Also, there is only one provisioned throughput.
Global Secondary Indexes defines a new paradigm – different hash/range keys per index. This breaks the original usage of one hash key per table. This is also why when defining GSI you are required to add a provisioned throughput per index and pay for it.
Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.
Throughput:
Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table another for the index. Both operations will consume write capacity units from the table.
Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation consumes read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table and another for the index.
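To make the Q2 answer concrete, here is a minimal boto3 sketch of adding that GSI to an existing table and querying it; the table name, index name, and sample values are hypothetical.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.client("dynamodb")

# Add a GSI with CustomerID as the partition key and PurchaseDate as the sort key,
# projecting only the TotalPurchaseValue attribute into the index.
dynamodb.update_table(
    TableName="PurchaseOrders",  # hypothetical table name
    AttributeDefinitions=[
        {"AttributeName": "CustomerID", "AttributeType": "S"},
        {"AttributeName": "PurchaseDate", "AttributeType": "S"},
    ],
    GlobalSecondaryIndexUpdates=[
        {
            "Create": {
                "IndexName": "CustomerID-PurchaseDate-index",
                "KeySchema": [
                    {"AttributeName": "CustomerID", "KeyType": "HASH"},
                    {"AttributeName": "PurchaseDate", "KeyType": "RANGE"},
                ],
                "Projection": {
                    "ProjectionType": "INCLUDE",
                    "NonKeyAttributes": ["TotalPurchaseValue"],
                },
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 5,
                    "WriteCapacityUnits": 5,
                },
            }
        }
    ],
)

# Query the index for one customer over a date range and sum the projected attribute
table = boto3.resource("dynamodb").Table("PurchaseOrders")
resp = table.query(
    IndexName="CustomerID-PurchaseDate-index",
    KeyConditionExpression=Key("CustomerID").eq("C-1001")
    & Key("PurchaseDate").between("2023-01-01", "2023-03-31"),
)
total = sum(item["TotalPurchaseValue"] for item in resp["Items"])
```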
Q5: Lambda allows you to upload code and dependencies for function packages:
A. Only from a directly uploaded zip file
B. Only via SFTP
C. Only from a zip file in AWS S3
D. From a zip file in AWS S3 or uploaded directly from elsewhere
Answer: D. You can provide a Lambda deployment package (a zip file containing your code and dependencies) either by uploading it directly or by referencing a zip file stored in Amazon S3.
Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?
A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.
D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return requests. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return requests.
Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?
A. Create and assign EIP to each instance
B. Create and attach a second IGW to the VPC.
C. Create and utilize a NAT Gateway
D. Connect to a VPN
C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.
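A minimal boto3 sketch of setting this up: allocate an Elastic IP, create the NAT gateway in a public subnet, then route the private subnet's internet-bound traffic through it. The subnet and route table IDs are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a PUBLIC subnet
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111",            # hypothetical public subnet
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Send the private subnet's 0.0.0.0/0 traffic through the NAT gateway
ec2.create_route(
    RouteTableId="rtb-0bbb2222",           # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```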
Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?
A. Security Groups
B. Route Tables
C. Elastic Load Balancer
D. Auto Scaling
D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.
Q11: You’re writing a script with an AWS SDK that uses the AWS API actions to create AMIs for instances that are not EBS-backed. Which API call occurs in the final step of creating an AMI?
A. RegisterImage
B. CreateImage
C. ami-register-image
D. ami-create-image
A. It is actually – RegisterImage. All AWS API Actions will follow the capitalization like this and don’t have hyphens in them.
Q12: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?
A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
B. Permanently assigning users to specific instances and always routing their traffic to those instances
C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance
Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?
A. Autoscaling requires using Amazon EBS-backed instances
B. Virtual Private Cloud requires EBS backed instances
C. Amazon EBS-backed instances can be stopped and restarted without losing data
D. Instance-store backed instances can be stopped and restarted without losing data
C. Instance-store backed images use “ephemeral” storage (temporary). The storage is only available during the life of an instance. Rebooting an instance allows ephemeral data to persist. However, stopping and starting an instance will remove all ephemeral storage.
Q15: After having created a new Linux instance on Amazon EC2, and downloaded the .pem file (called Toto.pem) you try and SSH into your IP address (54.1.132.33) using the following command. ssh -i my_key.pem ec2-user@52.2.222.22 However you receive the following error. @@@@@@@@ WARNING: UNPROTECTED PRIVATE KEY FILE! @ @@@@@@@@@@@@@@@@@@@ What is the most probable reason for this and how can you fix it?
A. You do not have root access on your terminal and need to use the sudo option for this to work.
B. You do not have enough permissions to perform the operation.
C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.
D. You need to run something like: chmod 400 my_key.pem
Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?
A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.
D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.
Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:
A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
B. Can only be used to launch EC2 instances in the same country as the AMI is stored
C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
D. Can be used to launch EC2 instances in any AWS region
C. AMIs are only available in the region they are created. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another
Q18: Which of the following statements is true about the Elastic File System (EFS)?
A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
B. EFS can be used by multiple EC2 instances simultaneously
C. EFS cannot be used by an instance using EBS
D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes
Q21: Which of the following are benefits of using IAM groups? (Select TWO.)
A. The ability to create custom permission policies.
B. Assigning IAM permission policies to more than one user at a time.
C. Easier user/policy management.
D. Allowing EC2 instances to gain access to S3.
B. and C.
A. is incorrect: This is a benefit of IAM generally or a benefit of IAM policies. But IAM groups don’t create policies, they have policies attached to them.
Q22: What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?
A. Amazon DynamoDB auto scaling
B. Amazon DynamoDB cross-region replication
C. Amazon DynamoDB Streams
D. Amazon DynamoDB Accelerator
D. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:
As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?
A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1.micro to m4.large.
B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.
B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence. Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation. During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:
Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options that apply at this level unless overridden.
Saved Configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
Configuration Files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle.
Configuration files are executed in alphabetical order. For example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
Default Values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.
If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or settings applied directly to the environment, the setting is stored as part of the environment’s configuration. These settings can be removed with the AWS CLI or with the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.
Q24: What statements are true about Availability Zones (AZs) and Regions?
A. There is only one AZ in each AWS Region
B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
C. AZs can be moved between AWS Regions based on your needs
D. There are (almost always) two or more AZs in each AWS Region
Answer: B. and D. AZs are physically separated within a region, and every AWS Region has at least two (and usually three or more) AZs.
Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?
A. Eventual Consistent Reads
B. Conditional reads for Consistency
C. Strongly Consistent Reads
D. Not possible
C. This is provided very clearly in the AWS documentation with regards to read consistency in DynamoDB. Only with strongly consistent reads are you guaranteed to get the most up-to-date value, reflecting all prior successful writes.
Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is taken during the migration process. Which of the following steps would you consider for this requirement?
A. Create an Opswork stack and deploy the Docker containers
B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.
B. The Elastic Beanstalk service is the ideal service to quickly provision development environments. You can also create environments which can be used to host Docker based containers.
Q28: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes a longer than expected time to upload the object. You want to improve the performance of the application. Which of the following would you consider?
A. Create multiple threads and upload the objects in the multiple threads
B. Write the items in batches for better performance
C. Use the Multipart upload API
D. Enable versioning on the Bucket
C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
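In practice the AWS SDKs handle the three multipart steps for you. Here is a minimal boto3 sketch using TransferConfig to force multipart behaviour for objects in the 200–500 MB range; the file and bucket names are hypothetical.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Split anything above ~100 MB into 25 MB parts uploaded by up to 10 threads
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=25 * 1024 * 1024,
    max_concurrency=10,
)

s3.upload_file(
    Filename="report-archive.bin",      # hypothetical local file
    Bucket="my-upload-bucket",          # hypothetical bucket name
    Key="uploads/report-archive.bin",
    Config=config,
)
```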
Q29: A security system monitors 600 cameras, saving image metadata every 1 minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?
A. 6000
B. 10
C. 3600
D. 600
B. Write capacity for a DynamoDB table is specified as the number of 1 KB writes per second. Since each camera writes once every minute, divide 600 by 60 to get the number of 1 KB writes per second, which gives a value of 10.
You can specify the Write capacity in the Capacity tab of the DynamoDB table.
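The same arithmetic as a tiny Python sketch, with the assumptions from the question spelled out:

```python
import math

cameras = 600
item_size_kb = 1           # each metadata sample is ~1 KB
interval_seconds = 60      # one write per camera per minute

writes_per_second = cameras / interval_seconds    # 600 / 60 = 10 writes per second
wcu_per_item = math.ceil(item_size_kb / 1)        # write capacity is billed in 1 KB units
required_wcu = writes_per_second * wcu_per_item
print(required_wcu)                               # 10.0
```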
Q33: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?
A. Create and assign EIP to each instance
B. Create and attach a second IGW to the VPC.
C. Create and utilize a NAT Gateway
D. Connect to a VPN
C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet. Reference: AWS Network Address Translation Gateway
Q34: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?
A. Security Groups
B. Route Tables
C. Elastic Load Balancer
D. Auto Scaling
D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture. Reference: AWS Autoscalling
Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?
A. Lazy loading
B. Write-through
C. Error retries
D. Exponential backoff
Answer – A Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. All other options are incorrect. Reference: Caching Strategies
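A minimal lazy-loading sketch in Python: check the cache first, fall back to the database only on a miss, then populate the cache with a TTL. The endpoints, credentials, table, and schema are hypothetical, and any database client would work the same way.

```python
import json
import redis
import pymysql  # assumed driver for the RDS MySQL instance

cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)  # hypothetical
db = pymysql.connect(host="my-rds-endpoint", user="app", password="app-password", db="shop")

def get_product(product_id, ttl_seconds=300):
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:                    # cache hit: skip RDS entirely
        return json.loads(cached)

    with db.cursor(pymysql.cursors.DictCursor) as cur:   # cache miss: query RDS
        cur.execute("SELECT * FROM products WHERE id = %s", (product_id,))
        row = cur.fetchone()

    if row is not None:                       # write the result back with a TTL
        cache.setex(key, ttl_seconds, json.dumps(row, default=str))
    return row
```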
Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15-60 seconds. How should the Developer efficiently query the queue for new messages?
A. Use long polling
B. Set a custom visibility timeout
C. Use short polling
D. Implement exponential backoff
Answer – A. Long polling helps ensure that the application makes fewer requests for messages over a given period of time, which is more cost-effective. Since the messages will only become available after 15 seconds and we don’t know exactly when they will arrive, it is better to use long polling. Reference: Amazon SQS Long Polling
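Long polling is enabled simply by setting WaitTimeSeconds on the receive call (or ReceiveMessageWaitTimeSeconds on the queue). A minimal boto3 sketch, with a hypothetical queue URL and a placeholder message handler:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # hypothetical

def process(body):
    print("processing:", body)   # placeholder for real message handling

while True:
    # WaitTimeSeconds=20 makes the call block until a message arrives or
    # 20 seconds pass, instead of returning (and billing) immediately.
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
    )
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```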
Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With the new Lambda function working as expected, which of the following will shift traffic from the original Lambda function to the new Lambda function in the shortest time frame?
A. Canary10Percent5Minutes
B. Linear10PercentEvery10Minutes
C. Canary10Percent15Minutes
D. Linear10PercentEvery1Minute
Answer – A. With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment and the remaining traffic is shifted after 5 minutes. Reference: Gradual Code Deployment
Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?
A. AWS::Serverless::Api
B. AWS::Serverless::Application
C. AWS::Serverless::LayerVersion
D. AWS::Serverless::Function
Answer – B. The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket. Reference: Declaring Serverless Resources
Q35: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the following is true with regards to Envelope Encryption?
A. Data is encrypted by an encrypted Data key which is further encrypted using an encrypted Master Key.
B. Data is encrypted by plaintext Data key which is further encrypted using encrypted Master Key.
C. Data is encrypted by encrypted Data key which is further encrypted using plaintext Master Key.
D. Data is encrypted by plaintext Data key which is further encrypted using plaintext Master Key.
Answer – D. With Envelope Encryption, data is encrypted using a plaintext Data key. That Data key is then itself encrypted using the plaintext Master key. The plaintext Master key is securely stored in AWS KMS and is known as the Customer Master Key (CMK). Reference: AWS Key Management Service Concepts
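A minimal envelope-encryption sketch with boto3 and the cryptography package; the key alias and payload are hypothetical. KMS hands back both the plaintext data key (used locally, then discarded) and the same key encrypted under the CMK (stored alongside the ciphertext).

```python
import base64
import boto3
from cryptography.fernet import Fernet  # assumes the 'cryptography' package is installed

kms = boto3.client("kms")

# 1. Ask KMS for a data key under the customer master key (CMK)
resp = kms.generate_data_key(KeyId="alias/app-master-key", KeySpec="AES_256")  # hypothetical alias

# 2. Encrypt the data locally with the plaintext data key, then discard that key
fernet_key = base64.urlsafe_b64encode(resp["Plaintext"])
ciphertext = Fernet(fernet_key).encrypt(b"sensitive payload")

# 3. Persist the ciphertext together with the ENCRYPTED data key only
stored = {"data": ciphertext, "encrypted_key": resp["CiphertextBlob"]}

# To decrypt later, call kms.decrypt(CiphertextBlob=stored["encrypted_key"])
# to recover the plaintext data key, then decrypt the payload with it.
```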
Q36: You are developing an application that will be comprised of the following architecture –
A set of Ec2 instances to process the videos.
These (Ec2 instances) will be spun up by an autoscaling group.
SQS Queues to maintain the processing messages.
There will be 2 pricing tiers.
How will you ensure that the premium customers’ videos are given more preference?
A. Create 2 Auto Scaling groups, one for normal and one for premium customers
B. Create 2 sets of EC2 instances, one for normal and one for premium customers
C. Create 2 SQS queues, one for normal and one for premium customers
D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.
Answer – C. The ideal option is to create 2 SQS queues. Messages can then be processed by the application from the high-priority queue first. The other options are not ideal; they would lead to extra cost and extra maintenance. Reference: SQS
Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?
A. CustomerID
B. CustomerName
C. Location
D. Age
Answer – A. Use high-cardinality attributes: attributes that have distinct values for each item, such as email ID, employee number, customer ID, session ID, order ID, and so on. You can also use composite attributes: try to combine more than one attribute to form a unique key. Reference: Choosing the right DynamoDB Partition Key
Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for any network delays. Which of the following two mechanisms should he implement in the application?
A. Multiple SQS queues
B. Exponential backoff algorithm
C. Retries in your application code
D. Consider using the Java sdk.
Answer- B. and C. In addition to simple retries, each AWS SDK implements exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values, and should be set based on the operation being performed, as well as other local factors, such as network latency. Reference: Error Retries and Exponential Backoff in AWS
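A minimal sketch of retries with exponential backoff and jitter, independent of any particular SDK; the parameters are illustrative defaults, not prescribed values:

```python
import random
import time

def call_with_backoff(operation, max_retries=5, base_delay=0.1, max_delay=5.0):
    """Retry a flaky call, waiting progressively longer between attempts."""
    for attempt in range(max_retries):
        try:
            return operation()
        except Exception:                 # in practice, catch only retryable/throttling errors
            if attempt == max_retries - 1:
                raise                     # give up after the final attempt
            # Wait 2^attempt * base, capped, with random jitter to avoid thundering herds
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```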
Q39: An application is being developed that is going to write data to a DynamoDB table. You have to setup the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds. Each item is of size 6KB. The reads can be eventual consistent reads. What should be the read capacity that needs to be set on the table?
A. 10
B. 20
C. 6
D. 30
Answer – A
Since 300 items are read every 30 seconds, that means (300/30) = 10 items are read every second. Each item is 6 KB in size, and read capacity units are measured in 4 KB blocks, so 2 reads are required for each item, for a total of 2 * 10 = 20 reads per second. Since eventual consistency is acceptable, we can divide the number of reads (20) by 2, giving a Read Capacity of 10.
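The same calculation as a short Python sketch, using the figures from the question:

```python
import math

items_per_second = 300 / 30                          # 10 items read per second
item_size_kb = 6
rcu_per_item_strong = math.ceil(item_size_kb / 4)    # strongly consistent reads: 4 KB units -> 2
strong_rcu = items_per_second * rcu_per_item_strong  # 20
eventual_rcu = strong_rcu / 2                        # eventually consistent reads cost half
print(eventual_rcu)                                  # 10.0
```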
Q40: You are in charge of deploying an application that will be hosted on an EC2 instance and sit behind an Elastic Load Balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the below options can satisfy this requirement?
A. Use AWS CloudTrail with your load balancer
B. Enable access logs on the load balancer
C. Use a CloudWatch Logs Agent
D. Create a custom CloudWatch metric filter on your load balancer
Answer – B Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues. Reference: Access Logs for Your Application Load Balancer
Q41: A static web site has been hosted on a bucket and is now being accessed by users. The JavaScript section of one of the web pages has been changed to access data that is hosted in another S3 bucket. Now that same web page is no longer loading in the browser. Which of the following can help alleviate the error?
A. Enable versioning for the underlying S3 bucket.
B. Enable Replication so that the objects get replicated to the other bucket
C. Enable CORS for the bucket
D. Change the Bucket policy for the bucket to allow access from the other bucket
Answer – C
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:
Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.
Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.
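A minimal boto3 sketch of applying a CORS configuration to the data bucket so the static-website origin can call it from JavaScript; the bucket name is hypothetical and the origin matches the Scenario 1 example above.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="my-data-bucket",  # hypothetical second bucket holding the data
    CORSConfiguration={
        "CORSRules": [
            {
                # Allow the website origin to make cross-origin GET/PUT requests
                "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET", "PUT"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```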
Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user Images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below
A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
C. Authenticate your users at the application level, and use AWS Security Token Service (STS) to grant token-based authorization to S3 objects.
D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.
Answer – C. The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3. You can then provide access to the objects based on key values generated per user ID.
Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.
A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application which is configured to pull down your log files stored in Amazon S3.
C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.
Answer – C Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data. Reference: Amazon Kinesis
Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?
A. AWS Simple Storage Service
B. AWS DynamoDB
C. AWS RDS
D. AWS Redshift
Answer – B. DynamoDB is an ideal solution for storing session state. The latency of data access is low, so it works well as a data store for session management. Reference: Scalable Session Handling in PHP Using Amazon DynamoDB
Q45: Your application currently interacts with a DynamoDB table. Records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the below features should be used when developing such a solution?
A. AWS DynamoDB Encryption
B. AWS DynamoDB Streams
C. AWS DynamoDB Accelerator
D. AWSTable Accelerator
Answer – B DynamoDB Streams Use Cases and Design Patterns This post describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. We will consider how to manage the following scenarios:
How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
How do you trigger an event based on a particular transaction?
How do you audit or archive transactions?
How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?
Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states—success or failure. In other words, there is no partial completion. As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it’s consistent and can be rolled back before commit. You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time. AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region. All of the other options are incorrect since none of these would meet the core requirement. Reference: DynamoDB Streams Use Cases and Design Patterns
Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and the scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?
A. Large Page size
B. Reduced page size
C. Parallel Scans
D. Sequential scans
Answer – B. When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table’s provisioned throughput. Reduce page size: because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling. Reference: Rate-Limited Scans in Amazon DynamoDB
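A minimal boto3 sketch of scanning in small pages with the Limit parameter and pausing between pages; the table name and page size are hypothetical.

```python
import time
import boto3

table = boto3.resource("dynamodb").Table("LargeTable")  # hypothetical table name

start_key = None
while True:
    kwargs = {"Limit": 100}              # small page size keeps each request cheap
    if start_key:
        kwargs["ExclusiveStartKey"] = start_key
    page = table.scan(**kwargs)
    print("items in page:", len(page["Items"]))  # placeholder for real processing
    start_key = page.get("LastEvaluatedKey")
    if not start_key:
        break
    time.sleep(0.2)                      # brief pause between pages to smooth read load
```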
Q47: Which of the following are correct ways of passing a stage variable to an HTTP URL? (Select TWO.)
A. http://example.com/${}/prod
B. http://example.com/${stageVariables.}/prod
C. http://${stageVariables.}.example.com/dev/operation
D. http://${stageVariables}.example.com/dev/operation
E. http://${}.example.com/dev/operation
F. http://example.com/${stageVariables}/prod
Answer – B. and C. A stage variable can be used as part of an HTTP integration URL in the following cases: as a full URI without protocol, a full domain, a subdomain, a path, or a query string. In the above case, options B and C use the stage variable as a path and as a subdomain respectively. Reference: Amazon API Gateway Stage Variables Reference
Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes, which they currently use for on-premises server configuration, in AWS. Which of the following services would be ideal to use in this regard?
A. AWS Elastic Beanstalk
B. AWS OpsWorks
C. AWS Cloudformation
D. AWS SQS
Answer – B. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management. Reference: AWS OpsWorks
Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?
A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
B. Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access.
C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.
Answer – C With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) —such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application. Option A is invalid since Roles cannot be assigned to S3 buckets Options B and D are invalid since the AWS Access keys should not be used Reference: About Web Identity Federation
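A minimal sketch of exchanging the IdP token for temporary AWS credentials with STS; the role ARN is hypothetical and id_token_from_provider stands in for the token your app receives after the Google/Facebook login.

```python
import boto3

sts = boto3.client("sts")

id_token_from_provider = "<token returned by the identity provider>"  # placeholder

# Exchange the identity provider's token for temporary, role-scoped credentials
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/AppDynamoDBReadRole",  # hypothetical role
    RoleSessionName="web-app-user",
    WebIdentityToken=id_token_from_provider,
)["Credentials"]

# Use the temporary credentials to talk to DynamoDB; no long-term keys are embedded
dynamodb = boto3.resource(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```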
Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?
A. Cognito Data
B. Cognito Events
C. Cognito Streams
D. Cognito Callbacks
Answer – C. Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams.
Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? Choose 2 answers from the options given below.
A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function
Answer: A. and C. AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC. Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC
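A minimal boto3 sketch of attaching an existing function to the VPC by supplying subnet and security group IDs; the function name and IDs are hypothetical.

```python
import boto3

lambda_client = boto3.client("lambda")

# Lambda creates ENIs in these subnets so the function can reach the EC2 instances
lambda_client.update_function_configuration(
    FunctionName="ec2-maintenance-script",              # hypothetical function name
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],   # private subnets
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```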
Q52: You’ve currently been tasked to migrate an existing on-premise environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant environments in the beanstalk service that would be suitable to host your application. What should you consider doing in this case?
A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
B. Consider using Cloudformation to deploy your environment to Elastic Beanstalk
C. Consider using Packer to create a custom platform
D. Consider deploying your application using the Elastic Container Service
Answer – C Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a Custom Image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software, for which Elastic Beanstalk doesn’t provide a platform out of the box. Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance. To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems—Ubuntu, RHEL, or Amazon Linux (see the flavor entry in Platform.yaml File Format for the exact version numbers)—and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings. Reference: AWS Elastic Beanstalk Custom Platforms
Q53: Company B is writing 10 items to a DynamoDB table every second. Each item is 15.5 KB in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.
A. 10
B. 160
C. 155
D. 16
Answer – B. Write capacity units are measured in 1 KB blocks, so each 15.5 KB item requires 16 write capacity units (rounded up). With 10 items written per second, the required provisioned write throughput is 16 * 10 = 160. Reference: Read/Write Capacity Mode
Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other, and enables the identification of bugs early on in the release process?
Answer: Continuous Integration.
Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?
A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.
Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?
A. Transforms
B. Outputs
C. Resources
D. Instances
Answer: C. The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3. Reference: Resources
Q64: Which AWS service can be used to fully automate your entire release process?
A. CodeDeploy
B. CodePipeline
C. CodeCommit
D. CodeBuild
Answer: B. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?
A. Outputs
B. Transforms
C. Resources
D. Exports
Answer: A. Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack. Reference: CloudFormation Outputs
Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?
A. Inputs
B. Resources
C. Transforms
D. Files
Answer: C. Transforms is used to reference code located in S3 and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments. Reference: Transforms
Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?
Answer: The AppSpec file (appspec.yml).
Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?
A. Share the code using an EBS volume
B. Copy and paste the code into the template each time you need to use it
C. Use a cloudformation nested stack
D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.
Q72: Which of the following is an encrypted key used by KMS to encrypt your data
A. Customer Managed Key
B. Encryption Key
C. Envelope Key
D. Customer Master Key
Answer: C. Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption. Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.
Q75: A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.) A. Compiled application code B. Java runtime environment C. References to the event sources D. Lambda execution role E. Application dependencies
Answer: A. E. Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your code and any dependencies. Reference: Lambda deployment packages.
Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package? A. A launch template for the Amazon EC2 Auto Scaling group B. A CodeDeploy AppSpec file C. An EC2 role that grants the application access to AWS services D. An IAM policy that grants the application access to AWS services
Answer: B. Notes: The CodeDeploy AppSpec (application specific) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file. Reference: CodeDeploy application specification (AppSpec) files. Category: Deployment
Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)
A. Create a new Lambda version every time a new code release needs testing. B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version. C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT. D. Create a new Lambda layer every time a new code release needs testing. E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.
Answer: A. B. Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes, without affecting the stable production version of the code. By creating separate aliases for Production and Development, systems can initiate the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. The $LATEST version is the newest published version. Reference: Lambda function versions.
Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.) A. Update event source mappings with the ARN of the Lambda layer. B. Point a Lambda alias to a new version of the Lambda function. C. Create a Lambda alias for each published version of the Lambda function. D. Point a Lambda alias to a new Lambda function alias. E. Update the event source mappings with the Lambda alias ARN.
Answer: B. E. Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version. Reference: Lambda function aliases. Category: Deployment
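A hedged boto3 sketch of the corresponding call is below; the queue and alias ARNs are placeholders, and the point is simply that the event source mapping references the alias rather than a numbered version.

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical ARNs for illustration only.
ALIAS_ARN = "arn:aws:lambda:us-east-1:123456789012:function:order-processor:Production"
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:orders-queue"

# Map the event source to the alias ARN (not a version ARN), so publishing a
# new version only requires repointing the alias; the mapping never changes.
lambda_client.create_event_source_mapping(
    EventSourceArn=QUEUE_ARN,
    FunctionName=ALIAS_ARN,
    BatchSize=10,
)
```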
Q78: A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements? A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C). B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket. C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket. D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.
Answer: D. Notes: With SSE-C, the customer supplies and manages the encryption keys while Amazon S3 performs the encryption and decryption as objects are uploaded and retrieved; S3 does not store the key. Default bucket encryption does not support SSE-C, so the key must be provided with each object request. Category: Security
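As a rough illustration of SSE-C (not an official snippet), the boto3 sketch below uploads and reads back an object with a customer-provided key; the bucket name, object key, and data are placeholders.

```python
import os
import boto3

s3 = boto3.client("s3")

# The company supplies (and must safeguard) its own 256-bit key; S3 uses it to
# encrypt the object server side and never stores the key.
customer_key = os.urandom(32)

s3.put_object(
    Bucket="sensitive-user-data",
    Key="users/u-123.json",
    Body=b'{"name": "example"}',
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# The same key must be presented again to read the object back.
obj = s3.get_object(
    Bucket="sensitive-user-data",
    Key="users/u-123.json",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```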
Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)
A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS). B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS). C. Use generated keys with the DynamoDB Encryption Client. D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs). E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).
Answer: A. C. Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK. Reference: Direct KMS Materials Provider. Category: Deployment
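One possible shape of this pattern, using the DynamoDB Encryption Client for Python (the dynamodb-encryption-sdk package) with a KMS-backed materials provider, is sketched below; the table name, KMS key alias, and item attributes are assumptions for illustration.

```python
import boto3
from dynamodb_encryption_sdk.encrypted.table import EncryptedTable
from dynamodb_encryption_sdk.material_providers.aws_kms import (
    AwsKmsCryptographicMaterialsProvider,
)

# Plain DynamoDB table resource; the table name is hypothetical.
table = boto3.resource("dynamodb").Table("SensitiveItems")

# A symmetric KMS key generates the per-item encryption and signing keys.
kms_provider = AwsKmsCryptographicMaterialsProvider(key_id="alias/ddb-client-key")

encrypted_table = EncryptedTable(table=table, materials_provider=kms_provider)

# Attributes are encrypted client side before the item ever leaves the app.
encrypted_table.put_item(Item={"user_id": "u-123", "ssn": "000-00-0000"})
```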
Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.) A. Create an AWS Lambda authorizer for the API. B. Create an Amazon Cognito authorizer for the API. C. Configure the authorizer for the API resource. D. Configure the API methods to use the authorizer. E. Configure the authorizer for the API stage.
Answer: B. D. Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API. Reference: Control access to a REST API using Amazon Cognito user pools as authorizer. Category: Security
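A hedged boto3 sketch of these two steps (create the Cognito authorizer, then attach it to a method) is shown below; the API ID, resource ID, and user pool ARN are placeholders.

```python
import boto3

apigw = boto3.client("apigateway")

REST_API_ID = "a1b2c3d4e5"  # hypothetical REST API ID
USER_POOL_ARN = "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"

# Create a Cognito user pool authorizer for the REST API.
authorizer = apigw.create_authorizer(
    restApiId=REST_API_ID,
    name="CognitoAuthorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=[USER_POOL_ARN],
    identitySource="method.request.header.Authorization",
)

# Attach the authorizer to an individual method (resource + HTTP verb).
apigw.update_method(
    restApiId=REST_API_ID,
    resourceId="abc123",  # hypothetical resource ID
    httpMethod="GET",
    patchOperations=[
        {"op": "replace", "path": "/authorizationType", "value": "COGNITO_USER_POOLS"},
        {"op": "replace", "path": "/authorizerId", "value": authorizer["id"]},
    ],
)
```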
Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.) A. Authenticate to the Amazon Cognito identity pool directly. B. Authenticate to AWS Identity and Access Management (IAM) directly. C. Authenticate to the Amazon Cognito user pool directly. D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS). E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.
Answer: C. E. Notes: The Amazon Cognito user pool provides direct user authentication. It also provides a federated authentication option with third-party identity providers (IdPs), including Login with Amazon. Reference: Adding User Pool Sign-in Through a Third Party. Category: Security
Q82: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.) A. Define an AWS Step Functions task for each Lambda function. B. Define an AWS Step Functions task for each workflow. C. Write code that polls the AWS Step Functions invocation to coordinate each workflow. D. Define an AWS Step Functions state machine for each workflow. E. Define an AWS Step Functions state machine for each Lambda function.
Answer: A. D. Notes: Step Functions is based on state machines and tasks. A state machine is a workflow: it expresses the workflow as a number of states, their relationships, and their input and output. Tasks perform work by coordinating with other AWS services, such as Lambda. You coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine written in the Amazon States Language. Reference: Getting Started with AWS Step Functions.
Category: Development
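As a rough sketch, the boto3 call below creates one state machine whose Task states wrap two hypothetical Lambda functions; the function ARNs and IAM role are placeholders.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Each Lambda function becomes one Task state; the state machine is the workflow.
definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
            "Next": "ChargePayment",
        },
        "ChargePayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-payment",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="OrderProcessingWorkflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)
```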
Q83: A company is migrating a web service to the AWS Cloud. The web service accepts requests by using HTTP (port 80). The company wants to use an AWS Lambda function to process HTTP requests. Which application design will satisfy these requirements? A. Create an Amazon API Gateway API. Configure proxy integration with the Lambda function. B. Create an Amazon API Gateway API. Configure non-proxy integration with the Lambda function. C. Configure the Lambda function to listen to inbound network connections on port 80. D. Configure the Lambda function as a target in the Application Load Balancer target group.
Answer: D. Notes: Elastic Load Balancing supports Lambda functions as a target for an Application Load Balancer. You can use load balancer rules to route HTTP requests to a function, based on the path or the header values. Then, process the request and return an HTTP response from your Lambda function. Reference: Using AWS Lambda with an Application Load Balancer. Category: Development
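For reference, a minimal Python handler shaped for an Application Load Balancer target might look like the sketch below; the response fields follow the ALB target-group response format.

```python
# Minimal handler for a Lambda function registered in an ALB target group.
# The load balancer delivers the HTTP request as the event and expects this
# response structure back.
def lambda_handler(event, context):
    path = event.get("path", "/")  # request path forwarded by the ALB
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "text/plain"},
        "body": f"Hello from Lambda, you requested {path}",
    }
```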
Q84: A company is developing an image processing application. When an image is uploaded to an Amazon S3 bucket, a number of independent and separate services must be invoked to process the image. The services do not have to be available immediately, but they must process every image. Which application design satisfies these requirements? A. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Each service pulls the message from the same queue. B. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Each service subscribes to the same topic. C. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe a separate Amazon Simple Notification Service (Amazon SNS) topic for each service to an Amazon SQS queue. D. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe a separate Simple Queue Service (Amazon SQS) queue for each service to the Amazon SNS topic.
Answer: D. Notes: Each service can subscribe to an individual Amazon SQS queue, which receives an event notification from the Amazon SNS topic. This is a fanout architectural implementation. Reference: Common Amazon SNS scenarios. Category: Development
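A simplified boto3 sketch of the fan-out wiring is below; the topic and queue names are hypothetical, and the SQS access policies and the S3 event notification configuration are omitted for brevity.

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# One topic fed by S3 event notifications, one queue per downstream service.
topic_arn = sns.create_topic(Name="image-uploaded")["TopicArn"]

for service in ("thumbnail-service", "metadata-service", "moderation-service"):
    queue_url = sqs.create_queue(QueueName=f"{service}-queue")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Fan out: every queue subscribed to the topic receives a copy of the event.
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
```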
Q85: A developer wants to implement Amazon EC2 Auto Scaling for a Multi-AZ web application. However, the developer is concerned that user sessions will be lost during scale-in events. How can the developer store the session state and share it across the EC2 instances? A. Write the sessions to an Amazon Kinesis data stream. Configure the application to poll the stream. B. Publish the sessions to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe each instance in the group to the topic. C. Store the sessions in an Amazon ElastiCache for Memcached cluster. Configure the application to use the Memcached API. D. Write the sessions to an Amazon Elastic Block Store (Amazon EBS) volume. Mount the volume to each instance in the group.
Answer: C. Notes: ElastiCache for Memcached is a distributed in-memory data store or cache environment in the cloud. It gives the application a fast, shared session store that lives outside the individual EC2 instances, so sessions survive scale-in events. Reference: What is Amazon ElastiCache for Memcached?
Q86: A developer is integrating a legacy web application that runs on a fleet of Amazon EC2 instances with an Amazon DynamoDB table. There is no AWS SDK for the programming language that was used to implement the web application. Which combination of steps should the developer perform to make an API call to Amazon DynamoDB from the instances? (Select TWO.) A. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include an XML document that contains the request attributes. B. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include a JSON document that contains the request attributes. C. Sign the requests by using AWS access keys and Signature Version 4. D. Use an EC2 SSH key to calculate Signature Version 4 of the request. E. Provide the signature value through the HTTP X-API-Key header.
Answer: B. C. Notes: The HTTPS-based low-level AWS API for DynamoDB uses JSON as a wire protocol format. When you send HTTP requests to AWS, you sign the requests so that AWS can identify who sent them. Requests are signed with your AWS access key, which consists of an access key ID and secret access key. AWS supports two signature versions: Signature Version 4 and Signature Version 2. AWS recommends the use of Signature Version 4. Reference: Signing AWS API requests. Category: Development
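A hedged Python sketch of such a signed low-level call is below; it leans on the botocore signing helpers and the third-party requests library purely for illustration, and the table and key names are placeholders.

```python
import json
import boto3
import requests  # third-party HTTP client, standing in for the app's own
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

REGION = "us-east-1"
ENDPOINT = f"https://dynamodb.{REGION}.amazonaws.com/"

# JSON body for the low-level GetItem action; table and key are placeholders.
payload = json.dumps({"TableName": "Users", "Key": {"user_id": {"S": "u-123"}}})

request = AWSRequest(
    method="POST",
    url=ENDPOINT,
    data=payload,
    headers={
        "Content-Type": "application/x-amz-json-1.0",
        "X-Amz-Target": "DynamoDB_20120810.GetItem",
    },
)

# Sign with Signature Version 4 using credentials from the default chain.
credentials = boto3.Session().get_credentials()
SigV4Auth(credentials, "dynamodb", REGION).add_auth(request)

response = requests.post(ENDPOINT, data=payload, headers=dict(request.headers))
print(response.json())
```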
Q87: A developer has written several custom applications that read and write to the same Amazon DynamoDB table. Each time the data in the DynamoDB table is modified, this change should be sent to an external API. Which combination of steps should the developer perform to accomplish this task? (Select TWO.) A. Configure an AWS Lambda function to poll the stream and call the external API. B. Configure an event in Amazon EventBridge (Amazon CloudWatch Events) that publishes the change to an Amazon Managed Streaming for Apache Kafka (Amazon MSK) data stream. C. Create a trigger in the DynamoDB table to publish the change to an Amazon Kinesis data stream. D. Deliver the stream to an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the API to the topic. E. Enable DynamoDB Streams on the table.
Answer: A. E. Notes: If you enable DynamoDB Streams on a table, you can associate the stream Amazon Resource Name (ARN) with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table’s stream. Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records, and the function can then call the external API. Reference: Tutorial: Process New Items with DynamoDB Streams and Lambda. Category: Monitoring
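A minimal handler for such a stream-triggered function might look like the sketch below; the external endpoint is hypothetical and error handling is omitted.

```python
import json
import urllib.request

# Hypothetical external endpoint that should receive every table change.
EXTERNAL_API_URL = "https://api.example.com/dynamodb-changes"

def lambda_handler(event, context):
    # Lambda polls the DynamoDB stream and invokes this handler with batches
    # of records; each record describes one INSERT, MODIFY, or REMOVE.
    for record in event["Records"]:
        change = {
            "event": record["eventName"],
            "keys": record["dynamodb"].get("Keys"),
            "new_image": record["dynamodb"].get("NewImage"),
        }
        req = urllib.request.Request(
            EXTERNAL_API_URL,
            data=json.dumps(change).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        urllib.request.urlopen(req)
```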
Q88: A company is migrating the create, read, update, and delete (CRUD) functionality of an existing Java web application to AWS Lambda. Which minimal code refactoring is necessary for the CRUD operations to run in the Lambda function? A. Implement a Lambda handler function. B. Import an AWS X-Ray package. C. Rewrite the application code in Python. D. Add a reference to the Lambda execution role.
Answer: A. Notes: Every Lambda function needs a Lambda-specific handler. Specifics of authoring vary between runtimes, but all runtimes share a common programming model that defines the interface between your code and the runtime code. You tell the runtime which method to run by defining a handler in the function configuration. The runtime runs that method. Next, the runtime passes in objects to the handler that contain the invocation event and context, such as the function name and request ID. Reference: Getting started with Lambda. Category: Refactoring
Q89: A company plans to use AWS log monitoring services to monitor an application that runs on premises. Currently, the application runs on a recent version of Ubuntu Server and outputs the logs to a local file. Which combination of steps should a developer perform to accomplish this goal? (Select TWO.) A. Update the application code to include calls to the agent API for log collection. B. Install the Amazon Elastic Container Service (Amazon ECS) container agent on the server. C. Install the unified Amazon CloudWatch agent on the server. D. Configure the long-term AWS credentials on the server to enable log collection by the agent. E. Attach an IAM role to the server to enable log collection by the agent.
Answer: C. D. Notes: The unified CloudWatch agent needs to be installed on the server. Ubuntu Server 18.04 is one of the many supported operating systems. When you install the unified CloudWatch agent on an on-premises server, you will specify a named profile that contains the credentials of the IAM user. Reference: Collecting metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent. Category: Monitoring
Q90: A developer wants to monitor invocations of an AWS Lambda function by using Amazon CloudWatch Logs. The developer added a number of print statements to the function code that write the logging information to the stdout stream. After running the function, the developer does not see any log data being generated. Why does the log data NOT appear in the CloudWatch logs? A. The log data is not written to the stderr stream. B. Lambda function logging is not automatically enabled. C. The execution role for the Lambda function did not grant permissions to write log data to CloudWatch Logs. D. The Lambda function outputs the logs to an Amazon S3 bucket.
Answer: C. Notes: The function needs permission to call CloudWatch Logs. Update the execution role to grant the permission. You can use the managed policy AWSLambdaBasicExecutionRole. Reference: Troubleshoot execution issues in Lambda. Category: Monitoring
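For illustration, the boto3 call below attaches that managed policy to a hypothetical execution role.

```python
import boto3

iam = boto3.client("iam")

# Grant the function's execution role permission to create log groups/streams
# and put log events; the role name below is a placeholder.
iam.attach_role_policy(
    RoleName="my-function-execution-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)
```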
Q91: Which of the following are best practices you should implement into ongoing deployments of your application? (Select THREE.)
A. Use stage variables to manage secrets across environments B. Create account-specific AWS SAM templates for each environment C. Use an AutoPublish alias D. Use traffic shifting with pre- and post-deployment hooks E. Test throughout the pipeline
Q92: You are handing off maintenance of your new serverless application to an incoming team lead. Which recommendations would you make? (Select THREE.)
A. Keep up to date with the quotas and payload sizes for each AWS service you are using
B. Analyze production access patterns to identify potential improvements
C. Design your services to extend their life as long as possible
D. Minimize changes to your production application
E. Compare the value of using the latest first-class integrations versus using Lambda between AWS services
Answer: A. B. D. Notes: Keep up to date with the quotas and payload sizes for each AWS service you are using. Analyze production access patterns to identify potential improvements. Minimize changes to your production application
Q94: Your application needs to connect to an Amazon RDS instance on the backend. What is the best recommendation to the developer whose function must read from and write to the Amazon RDS instance?
A. Initialize the number of connections you want outside of the handler
B. Use the database TTL setting to clean up connections
C. Use reserved concurrency to limit the number of concurrent functions that would try to write to the database
D. Use the database proxy feature to provide connection pooling for the functions
Answer: D. Notes: Use the database proxy feature (Amazon RDS Proxy) to provide connection pooling for the functions. The proxy pools and shares database connections, so many concurrent Lambda invocations do not exhaust the database's connection limit.
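A rough sketch of a function using a proxy endpoint is below, assuming the PyMySQL driver and hypothetical environment variables; note that the connection is created outside the handler so it can be reused across invocations.

```python
import os
import pymysql  # assumed to be bundled with the deployment package or a layer

# Connections are opened against the RDS Proxy endpoint, not the DB instance,
# so the proxy can pool and reuse them across concurrent Lambda invocations.
connection = pymysql.connect(
    host=os.environ["DB_PROXY_ENDPOINT"],  # hypothetical environment variables
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database="orders",
    connect_timeout=5,
)

def lambda_handler(event, context):
    with connection.cursor() as cursor:
        cursor.execute("SELECT COUNT(*) FROM orders")
        return {"order_count": cursor.fetchone()[0]}
```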
Q95: A developer reports that a third-party library they need cannot be shared in the Lambda invocation environment. Which suggestion would you make?
A. Decrease the deployment package size
B. Set a provisioned concurrency of one so that the library doesn’t need to be shared across environments
C. Use reserved concurrency for the function that needs to use the library
D. Load the third-party library onto an Amazon EFS volume
Answer: D. Notes: Load the third-party library onto an Amazon EFS volume and mount the file system to the Lambda function, so the library can be loaded at runtime instead of being packaged with the function.
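One possible shape of this approach is sketched below, assuming the function's EFS access point is mounted at /mnt/libs and the packages were installed there in advance (for example with pip --target); the library name is hypothetical.

```python
import sys

# Make the EFS mount path importable; /mnt/libs is an assumed mount point.
sys.path.append("/mnt/libs")

import heavy_third_party_lib  # hypothetical library loaded from the EFS mount

def lambda_handler(event, context):
    return heavy_third_party_lib.process(event)
```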
AWS has provided whitepapers to help you understand the technical concepts. Below are the recommended whitepapers for the AWS Certified Developer – Associate Exam.
The AWS Certified Developer-Associate Examination (DVA-C01) is a pass or fail exam. The examination is scored against a minimum standard established by AWS professionals guided by certification industry best practices and guidelines.
Your results for the examination are reported as a score from 100 – 1000, with a minimum passing score of 720.
Domain 1: Deployment (22%) 1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns. 1.2 Deploy applications using Elastic Beanstalk. 1.3 Prepare the application deployment package to be deployed to AWS. 1.4 Deploy serverless applications.
Domain 2: Security (26%) 2.1 Make authenticated calls to AWS services. 2.2 Implement encryption using AWS services. 2.3 Implement application authentication and authorization.
Domain 3: Development with AWS Services (30%) 3.1 Write code for serverless applications. 3.2 Translate functional requirements into application design. 3.3 Implement application design into application code. 3.4 Write code that interacts with AWS services by using APIs, SDKs, and the AWS CLI.
Domain 4: Refactoring (10%) 4.1 Optimize applications to best use AWS services and features. 4.2 Migrate existing application code to run on AWS.
Domain 5: Monitoring and Troubleshooting (10%) 5.1 Write code that can be monitored. 5.2 Perform root cause analysis on faults found in testing or production.
In this AWS tutorial, we are going to discuss how we can make the best use of AWS services to build a highly scalable and fault-tolerant configuration of EC2 instances. The use of load balancers and Auto Scaling groups supports a number of AWS best practices, including performance efficiency, reliability, and high availability.
Before we dive into this hands-on tutorial on how exactly we can build this solution, let’s have a brief recap on what an Auto Scaling group is, and what a Load balancer is.
Auto Scaling group (ASG)
An Auto Scaling group (ASG) is a logical grouping of EC2 instances that can scale out and scale in depending on pre-configured settings. By setting scaling policies for your ASG, you choose how many EC2 instances are launched and terminated based on your application’s load; this can be done with manual, dynamic, scheduled, or predictive scaling.
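As an aside, a target tracking scaling policy of the kind described above could be attached with boto3 roughly as follows; the group and policy names are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking keeps average CPU near 50% by launching or terminating
# instances automatically.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="MyTestASG",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```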
Elastic Load Balancer (ELB)
Elastic Load Balancing (ELB) is the umbrella name for a family of AWS services that distribute traffic across multiple EC2 instances (and other targets) to provide better scalability, availability, and security. The particular type of load balancer we will be using today is an Application Load Balancer (ALB). The ALB is a Layer 7 load balancer designed to distribute HTTP/HTTPS traffic across multiple nodes, with added features such as TLS termination, sticky sessions, and complex routing configurations.
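For readers who prefer the API view, a hedged boto3 sketch of creating an ALB, a target group, and an HTTP listener is below; all IDs are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Subnet, security group, and VPC IDs are placeholders; an ALB needs subnets
# in at least two Availability Zones.
alb = elbv2.create_load_balancer(
    Name="my-demo-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)

target_group = elbv2.create_target_group(
    Name="my-demo-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

# Forward HTTP :80 traffic from the ALB to the target group.
elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": target_group["TargetGroups"][0]["TargetGroupArn"],
    }],
)
```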
Getting Started
First of all, we open our AWS management console and head to the EC2 management console.
We scroll down on the left-hand side and select ‘Launch Templates’. A Launch Template is a configuration template which defines the settings for EC2 instances launched by the ASG.
Under Launch Templates, we will select “Create launch template”.
We specify the name ‘MyTestTemplate’ and use the same text in the description.
Under the ‘Auto Scaling guidance’ box, tick the box which says ‘Provide guidance to help me set up a template that I can use with EC2 Auto Scaling’ and scroll down to launch template contents.
When it comes to choosing our AMI (Amazon Machine Image) we can choose the Amazon Linux 2 under ‘Quick Start’.
The Amazon Linux 2 AMI is free tier eligible, and easy to use for our demonstration purposes.
Next, we select the ‘t2.micro’ under instance types, as this is also free tier eligible.
Under Network Settings, we create a new Security Group called ExampleSG in our default VPC, allowing HTTP (port 80) access from anywhere.
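For completeness, the boto3 sketch below is a rough programmatic equivalent of the launch-template steps above; the AMI ID and security group ID are placeholders for your region's Amazon Linux 2 AMI and the ExampleSG group.

```python
import boto3

ec2 = boto3.client("ec2")

# Programmatic equivalent of the console steps above.
ec2.create_launch_template(
    LaunchTemplateName="MyTestTemplate",
    VersionDescription="MyTestTemplate",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",   # Amazon Linux 2 AMI for your region
        "InstanceType": "t2.micro",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # ExampleSG
    },
)
```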
AWS Certified Developer Associate exam: Additional Information for reference
Below are some useful reference links that would help you to learn about AWS Certified Developer Associate Exam.
Hey everyone, I’m currently preparing for the AWS CloudOps certification. I have some experience with AWS, but not a lot. My manager recommended this certification because the work I’ve been doing aligns closely with its topics. If you have any tips, study recommendations, or resources that helped you, I’d really appreciate it. I’ve already picked up Stéphane Maarek’s course on Udemy. FYI: I haven’t taken any other AWS certification exams before; this will be my first one. Thanks in advance! submitted by /u/Thick-Mud-1037
I just took the Security Specialty exam yesterday and received the results this morning! Thank you very much, everyone, for the tips and pieces of advice. I studied for exactly a month with Adrian Cantrill's course, Tutorials Dojo, and the AWS official question sets. The exam definitely had questions I was not expecting, but I would say they were variations of the questions in TD and the AWS official sets; I still had 3 that were exactly like TD. Really understanding the concepts is key here, I'd say. The pressure was big, not so much because of question length (although that played a part) but because of time concerns. I still finished with over an hour left on the clock. submitted by /u/Popular-Indication20
I’m currently in the AWS re/Start program, and I have to say it’s been a really eye-opening journey into the world of cloud computing. The exposure to AWS concepts, Python, Linux, and employability skills is incredible, especially for people like me who didn’t have a strong tech background before joining. That said, the pace of the program feels extremely intense. We cover so many topics — Python, Linux, networking, security, databases etc. — all within a few short weeks. It sometimes feels impossible to properly grasp one topic before moving on to the next. For learners who prefer to understand things deeply rather than just rushing through, this can get overwhelming and stressful. I really think the program would benefit from a slightly longer schedule or more built-in review periods, especially around the Python and AWS service modules. Slowing things down just a bit would help learners absorb the material more confidently and enjoy the process rather than feeling constantly behind. Overall, I’m grateful for the opportunity, it’s a great program with solid content and supportive trainers. I just hope the pace can be adjusted so that more learners can thrive instead of feeling overloaded. submitted by /u/Previous_Gene_254 [link] [comments]
hi, I see that OpenSSL in amazonlinux repository is 3.2.2. $ dnf info openssl Installed Packages Name : openssl Epoch : 1 Version : 3.2.2 Release : 1.amzn2023.0.2 Architecture : aarch64 Size : 2.0 M Source : openssl-3.2.2-1.amzn2023.0.2.src.rpm Repository : @System From repo : amazonlinux Summary : Utilities from the general purpose cryptography library with TLS implementation URL : http://www.openssl.org/ License : ASL 2.0 Description : The OpenSSL toolkit provides support for secure communications between : machines. OpenSSL includes a certificate management tool and shared : libraries which provide various cryptographic algorithms and : protocols. I also notice that OpenSSL EOL is at 2025-11-23; it's about 2 weeks from now. Is there any plan from AWS to upgrade from 3.2 to 3.6 or 3.5 (LTS)? With regards to current and future releases the OpenSSL project has adopted the following policy: Version 3.5 will be supported until 2030-04-08 (LTS) Version 3.4 will be supported until 2026-10-22 Version 3.3 will be supported until 2026-04-09 Version 3.2 will be supported until 2025-11-23 Version 3.0 will be supported until 2026-09-07 (LTS). Versions 1.1.1 and 1.0.2 are no longer supported. Extended support for 1.1.1 and 1.0.2 to gain access to security fixes for those versions is available. Versions 1.1.0, 1.0.1, 1.0.0 and 0.9.8 are no longer supported. Ref: https://endoflife.date/openssl https://openssl-library.org/policies/releasestrat/index.html submitted by /u/Suitable-Mail-1989 [link] [comments]
I am not receiving the account verification SMS with the code. This is the claim number I opened: 176240002500002. submitted by /u/MortensenCristian
I have 3+ years of help desk experience. For those who have been in cloud computing for a long time: what are your tips on getting started, and which certs do you recommend? I'm curious about cloud security or network engineering in the cloud. Any tips and advice help. Also, should I pursue cloud, or go the regular CCNA network engineering route and stay away from cloud? Thanks. submitted by /u/MrRandome23
Passed AWS CCP a few days ago! 🙌 Huge thanks to everyone here who shared tips and study resources; it truly helped me pass. Now I'm wondering, what should be my next step? I'm currently a Project Manager/Scrum Master with no technical background, just aiming to grow my cloud knowledge so I can better support my teams. Any advice on what cert or learning path to take next if I'm on more of a project/delivery management track? submitted by /u/Awkward_Guide_9270
Got my results back today for the CloudOps exam and failed with a score of 70%. I sat the SysOps Admin exam a month before with a failure score of 70%. I used CloudGuru for the content and paid Udemy and SkillsCertPro mocks to prepare for the second attempt. I studied for two solid weeks and learnt a tonne of new stuff and felt confident, I was sure I passed the second time today. But I failed… with the exact same score?! Feeling very deflated. Any advice is welcome 🙁 submitted by /u/External-Eye7025 [link] [comments]
I have a fairly basic use case where users via a web app (written in Elixir/Phoenix) will upload .docx files and a Lambda will do some processing on it and save the result in S3, which is then fetched by the same web app on demand. Considering that the AWS resources are only accessed by a web app on a VPS, I'm wondering if the simplest setup (considering cost and security as well) for this is to use Lambdas with AuthType IAM, and use CloudFront + WAF with an IP policy as well as enabling OAC targetting the Lambda and S3 bucket. I'm wondering if there's anything I've overlooked or if there are potentially better solutions. I guess IP allowlists feel a bit antiquated but probably work fine in this scenario. submitted by /u/StraightPlane [link] [comments]
The grade isn’t amazing, but I didn’t cram, revise, or take any practice tests to really assess my level, so I’m actually quite proud of myself. The Solutions Architect Pro exam I’ll be taking in six months will be a different story though; I think I’ll definitely need to study for that, lol! By the way, I have a question: I haven’t received the badge on Credly.com. Do you know why? submitted by /u/Go0ners14
Hey ya'll, just took and passed the Solutions Architect Associate exam yesterday, giving me my third AWS cert in a month! Certified AI Practitioner October 3rd, Developer Associate October 21st, and SAA November 4th (technically a day more than a month so I lied). Background: 2 years working hands-on as developer with AWS, obtained Certified Cloud Practitioner last year. First off, shoutout the GOAT Stephane Maarek, I have used his courses for every AWS cert I've studied for - his lectures pretty much covered everything, and the slides are a great reference. Also shoutout TutorialDojo (🇵🇭), it has a TON of practice questions with an excellent interface to practice. My study plan for each cert was to finish the Stephane course on Udemy, take notes, and then from that point, grind the Tutorial Dojos exams. I would start with the Review Mode until I was pretty much acing each one, then would spam the Randomized tests the days leading to the actual test. I took a LOT of practice tests, so I won't post each result. But for each cert, I basically got to the point where I could skim through the Randomized test and score at least a 90%. I'd say that the actual AI Practitioner and SAA were pretty much on par with the Tutorial Dojo tests in terms of difficulty, but the Developer one was a bit harder. If I were to do studying over again, I would have spent more time just reviewing and studying notes than spamming the TD tests, as it got to a point where I was memorizing the answers based on specific keywords in the question/answers, which wasn't really reinforcing any knowledge. I was "overfitting" a bit too much to the TD tests. I'd say I had around 2-4 hours to study each day for the certs. Studying for the AI Practitioner test took around 1-2 weeks total, I have a data science minor so a lot of the AI terms were familiar. As expected, definitely the easiest of the three. The Developer and SAA both took around three weeks each, though at one point I was studying for both at the same time. The content of the Developer and SAA exams have a lot of overlap, so it was definitely nice to take them both around the same time. The Developer exam is much more specific (and a lot harder) - needing to know some specific API commands or configurations, and a lot more about deployment. SAA is much more high-level, focusing more on things like cost-savings, disaster recovery, migrations, high availability, scaling, etc. But for the most part, the actual services covered by both are the same - it's just a matter of looking at them from a different lens. If you finished one of Stephane's courses for Dev, you probably have about 10 more hours of new content for the SAA (and vice versa). I don't remember specifics from the AI or Dev exams, but the SAA exam had a lot of tricky questions about networking (VPC, hybrid cloud networking w Storage Gateway, etc) and storage solutions (EBS vs EFS, FSx types, etc). It's useful to know the different protocols available for the different FSx types, and use cases for ALB vs NLB. One major recommendation in my opinion - take the exams in person. I took the CCP and AI exams online, and the whole time I was worried about if I was fidgeting too much, looking around too much, or if my internet would cut out (I've heard you can't even open your mouth to read the questions aloud). I had to clear literally everything from the cubbies under my desk, resulting in 2 minutes of me filming myself chucking everything on my desk over my shoulder to the corner of the room. 
Taking the exams at a test center allowed me to chill out a bit on all that and just focus on the test, which was definitely helpful when taking the more complicated tests like Dev and SAA. I was able to lean back, stretch, and was given a whiteboard for notes - just make sure you bring TWO forms of ID, some guy before me forgot that rule. Also the test center lady was very sweet 🙂 submitted by /u/kittykat87654321 [link] [comments]
What’s everyone’s experience working with either AWS partners or AWS Enterprise Support? Any general red flags or green flags to expect from using either service? I’ve had my fair share of discussions so far, with mixed feelings. submitted by /u/WhitebeardJr
We are a startup and AWS was our first choice for cloud infrastructure hosting. But everything fell apart when a CloudFront and ALB restriction was applied to our account out of nowhere. We can't do anything without CloudFront and have had to move our code to EC2; without ECS and S3, our CI/CD is a nightmare to manage. The worst part is that our support case has been ignored for almost a month, since 20 Oct until today. Could that be because our support plan is still on the Free tier? Is anyone else having this issue, or does anyone know a way to lift this restriction? Our team is considering another cloud provider as an alternative because this has heavily affected our business. Update: I'm sharing more details so we have a better idea of the case. My business account is registered with a valid business email domain (not a common one like gmail or outlook). I have already added my credit card and filled in everything in my company's profile. However, when I create a new CloudFront distribution, both with the CLI and the Console, I get this error message: "Your account must be verified before you can add new CloudFront resources. To verify your account, please contact AWS Support (https://console.aws.amazon.com/support/home#/) and include this error message." submitted by /u/My_name_is_random
I gave my test yesterday and I'm relieved to say that I passed. My TD scores were between 75-85%, I took 4 timed mode tests. I also did the topic-based questions on TD which is a part of the practice test bundle. I took the practice test from Stephan's course and I scored 66%. Prep and Background: My background, experienced solutions architect but AWS usage is minimal in our company. This was my recertification; I had passed my SAAC03 exam in Nov 2022. I started prepping with Cantril's course back in August, I was 80% done when I realized that the course may be outdated. I switched to Stephan's course since I had already bought it. I started switching back and forth on these courses on certain topics which I found hard to grasp. Sometimes different teaching styles help. The biggest help with Stephan's course was the slides that you can download and study. I studied whenever I got a chance at work, time between meetings etc. The one thing about Stephan's course is there is not a lot of hands-on, but he has scenario-based sections which are so practical and provide a way to understand solutions. His material is crisp and concise which helps you revise quickly. Exam: The exam is hard. I found it on par with Stephan's practice test. I was surprised to see that there were hardly any questions on CloudFront, Route 53, EC2 and VPC. Most of the questions were focused on S3, ECS/EKS/Kubernetes, Microservices architecture, S3 data querying (3-4 questions), datawarehouse migration from on-prem to cloud, ALB/NLB. There were maybe 10-15 questions which I would categorize as easy. Rest of them had two options that you can eliminate and the other two options that were left were tricky. I had to really read the options 2-3 times and go back to question to pick up the key words like: What is the most COST-EFFECTIVE solution? What will minimize the OPERATIONAL overhead? etc. Focus on the key words in the last sentence of the question. Some of the questions were verbose, that is where I think you can get lost in the question and options. Next time I'm going to get an extra 30 mins accommodation. I hardly had any time to review “Flag for review" questions which were quite a lot. When I got out of test center, I thought I was going to fail. I would also give my credit to this reddit community for posts on exam experience and the mind map that I discovered here: https://www.mindmeister.com/app/map/3471885158?t=lE6MXlXHYC Here is the list of things I remember: Direct connect/VPN site to site-Data encryption at transit and rest Instance classes for EC2 with RDS-Memory Optimized or Storage Optimized S3 ECS SES Reserved constancy Aurora global db EBS accidental delete Backup and retention Security groups and Acl Microserices architecture- ecs Cloud hsm File gateway Fsx smb 50 tb of data transfer from on prem to cloud. File gateway S3 gateway Lambda concurrency question Typed on my phone and in a hurry. Pardon any typos!!! submitted by /u/Character-Eagle-214 [link] [comments]
When a pod is launched for our GitLab runner, there is about 1 failure out of 20. Here's the error; what is the solution to this? ERROR: Job failed (system failure): prepare environment: error dialing backend: remote error: tls: internal error. submitted by /u/Oxffff0000
Hey guys, I recently got an internship at Amazon and will be part of AWS, specifically working on DynamoDB. To be honest, I don't know anything about this. How should I prepare, and are there any project ideas that would help? Anyone who has worked with AWS, or specifically DynamoDB, have any tips? Any input is welcome. submitted by /u/EmbarrassedBorder615
I passed the Machine Learning Associate certification with very few days of preparation. My first relevant certification, I don't have anyone to talk to to share the news with, so I came here to share my joy and say that this community helped a lot. I didn't know, I joined recently but it was essential. (I'm still ecstatic, I'll update later with more information and links that helped me, but basically it was 4 insane days, day and night marathoning a Udemy course that they recommended here, until 10 minutes before the test I was studying). If you could leave any tips for upcoming certifications (I've been a data analyst for 6 years and I entered the world of Data Science 1.5 years ago but I still work as an analyst) Note: I'm not going to stray too far from the other testimonials that have already helped me, and I'll give credit later. But the test was at a good level, I didn't find it easy, I passed with 737, but study is necessary mainly due to the details covered (whether in the services or their respective functionalities, parameters...). Lots of trade-offs, architecture design as needed. Last detail: VERY SageMaker and its integrations, is the heart of certification submitted by /u/Puzzled_Surprise_383 [link] [comments]
I am redirected back to the Skill Builder main page instead of BenchPrep after clicking Practice Exam. Just yesterday I could access it normally. Is anybody else having the same issue? submitted by /u/Popular-Indication20
Sorry for the long title but this is exactly what's happening: 1) My admin sent a reset link 2) I click on the link to change my password 3) I sign in with the changed password successfully 4) I sign out, or the session has expired 5) When I come back and use the new password to sign in, I can't get in At first, I thought it was just human error, and I let my admin know to send me a new password link. This issue happened again. This is the third time, and I made sure to place my password in a document (yes, I know it's unsafe) and copied it from the document into the fields. Back to it today, I'm using the password, and it's not letting me in again submitted by /u/Environmental_Ad2855 [link] [comments]
Ever since we switched to AWS phones, my headphones won't work for both the phone and my personal device at the same time. I would really love to go back to listening to podcasts while working. Any suggestions? submitted by /u/Frannirox
I studied for SAP for 2 months, two hours every day. I have four years of backend experience and almost no AWS experience. I used TD and took the exam at 8am, then got the result message at 8pm. I ran into a network error (I am from China) and was delayed 40 minutes, which was a big challenge for my nerves; I finally used a Japan VPN to connect to OnVUE. The real exam descriptions are shorter than TD's, and there were many new concepts and services you cannot learn from TD alone; maybe you really need to learn them from work experience. submitted by /u/Diligent_Ambition_14
I am not able to ssh to my ec2. We were in the middle of a deployment when the ec2 stopped responding, so as usual we try to reconnect right? But it keeps timing out. We have tried everything we know of in the book. Support team is asking us to get the technical support plan, but why should we when the issue seems to be clearly on their side. We noticed that when we try to connect to the ec2, it makes an api call to us-east-1 and fails. Our server is in ap-south-1. We have tried fixing the .pem file permissions, tried ec2instanceconnect directly from the console, rebooted the instance, checked inbound rules port 22. Is there anything else we can try? submitted by /u/This-Commission5238 [link] [comments]
Good day, everyone. I'm writing this to inquire on the way forward. I'm to sit for AWS Certified Cloud Practitioner exam but Pearson OnVue has proven to be really unreliable. I'm studying really far from home and also quite far from the available physical testing centers which influenced my decision to try the OnVue option. My testing history has been: Attempt 1 (Oct 28/29): I had an exam scheduled for Oct 28, but I had to reschedule it myself to Oct 29 due to a mismatch in name order between the portal and my national ID. This one was on me; I got the changes effected and was ready. Attempt 2 (Oct 29): On the rescheduled day, Oct 29, I was unable to check in. The check-in portal and the Certmetrics portal were both down. Customer support confirmed this was all due to the "us-east-1" incident. I was forced to reschedule. Attempt 3 (Nov 3): My next exam was scheduled for Nov 3. On the test day, I went through the check-in, but my Starlink network failed their network tests, so I was not allowed to proceed. Another forced reschedule. Attempt 4 (Nov 5 - Today): ...and lastly, today. My exam was scheduled for Nov 5, at 10:30 AM. On checking in today at 10:00 AM on the dot, I just kept getting "OnVUE check-in can not be started at this time." submitted by /u/PieRat-2534 [link] [comments]
I have a question regarding VPCFLOW logging. According to the documentation, there are only two action states “accept” and “reject”. Scenario: I have a tcp session with 30 packets, for whatever reason only 15 were accept the other 15 were rejected (could be due to NACL, etc). How will this reflect in the logs? Would it be two lines with the same 5 tuple src,dst ip port and protocol? with the same time? One with action “reject” one with action “accept”? Are there any official documentation that talks about this behavior? There was a article about VPC public access feature but it seems that feature is evaluated after SG and NACLs. Please, any help is appreciated. submitted by /u/potatoes25 [link] [comments]
Can anyone suggest the right study resources for the AWS ML Associate exam? A dedicated WhatsApp group or subreddit would be really helpful. submitted by /u/rukikazuru
Apart from the general syllabus & prep for exam. I also wanted to clear AWS Cloud Practitioner exam without spending any money. So I collected those 4500 ETC points and now while I applied, it says: I have to complete AWS Cloud Practitioner learning path & 1 mock test on skill builder website I want to know if someone got the 100% voucher with this method, can you please tell on how to do things and how to clear the exam??!! submitted by /u/No-Construction-9928 [link] [comments]
Hey everyone, I wanted to share that I have successfully cleared the AWS Certified Machine Learning Engineer – Associate (MLA-C01) exam with a score of 843! I had scheduled the exam (with the 50% off voucher many have used here) and started preparing about 2 weeks before. I used Stephane Maarek and Frank Kane's course to get familiar with the exam contents. Due to office work (and a bit of procrastination), I got delayed in completing the coursework and had to reschedule the exam once. I had planned to do practice tests from TD, but due to lack of time I just completed and revised the free AWS Skill Builder test and 3 tests from Stephane Maarek and Abhishek Singh's practice tests on Udemy. They are actually quite good at covering topics that might be missed during coursework or that require deeper understanding. My learnings from the whole experience: take more time to prepare thoroughly and practice as many times as needed to be confident before the exam. I couldn't reschedule it after 4 Nov due to voucher constraints and was somewhat stressed by the lack of practice. Believe in your preparation and your aptitude. Thanks to the community for all the advice and experiences that made the process a lot easier! submitted by /u/SmartConnection4230
Hello, I’ve been working on automating AWS credit balance monitoring and found that AWS Cost Explorer API can show credit usage, but there doesn’t seem to be an API that directly returns the current remaining promotional credits balance for an account. I have to manually update total credits in my CloudFormation parameters and subtract usage from Cost Explorer results. Before I continue down this path, I wanted to ask: • Does anyone know if AWS provides or plans to provide an official API or SDK call that gives you the exact remaining credits available in your AWS account in real-time? • Or is the Cost Explorer usage query still the best / only practical way to estimate remaining credits at the moment? • Are there any undocumented or third-party APIs people use for this? Any pointers, official docs, personal experience, or open-source projects that simplify this would be much appreciated! Thanks in advance. submitted by /u/XxThatWeirdGuyxX [link] [comments]
I got turned away by the exam center because apparently my name doesn't exactly fit the first name / last name field. The email receipt seemed to show my name correctly (although there isn't a breakdown of first name / last name). They only informed me a few hours before I arrived when I was already on the road. Has anyone been in the same situation? Were you allowed to reschedule without shelling out another hundred bucks? I've already reached out via email and they said they'd respond within 2 days but I'm anxious and a bit disheartened. submitted by /u/contemplating-all [link] [comments]
I’m trying to register an SMS use case in Amazon Pinpoint, but my application keeps getting rejected with the reason: “Opt-in Consent Bundling Issue. Consent to receive messages must be obtained separately and cannot be bundled with other agreements.” Here’s my current flow: Users must check a box to agree to the Terms of Service and Privacy Policy before they can click “Verify and Login.” At the bottom of the login screen, I added this text: “By entering your phone number and clicking ‘Verify and Login’, you agree to receive a one-time SMS verification code for login purposes only.” Users cannot proceed without checking the Terms/Privacy checkbox. My questions: Is this flow acceptable, or do I need to add a separate standalone checkbox specifically for SMS consent? If a standalone checkbox is required, what wording/placement has worked for others to pass AWS review? Also, side note: AWS Support has been really slow to respond on this issue, and the experience has been pretty frustrating. I feel like I’m stuck waiting without clear guidance, which makes it hard to move forward. Has anyone else run into the same support delays? Thanks in advance for any advice! submitted by /u/YuanShui233 [link] [comments]
I was testing ways to process 5TB of data using Lambda, Step Functions, S3, and DynamoDB on my personal AWS account. During the tests, I found issues when over 400 Lambdas were invoked in parallel, Step Functions would crash after about 500GB processed. Limiting it to 250 parallel invocations solved the problem, though I'm not sure why. However, the failure runs left around 1.3TB of “hidden” data in S3. These incomplete objects can’t be listed directly from the bucket, you can only see information about initiated multipart upload processes, but you can't actually see the parts that have already been uploaded. I only discovered it when I noticed, through my cost monitoring, that it was accounting for +$15 in that bucket, even though it was literally empty. Looking at the bucket's monitoring dashboard, I immediately figured out what was happening. This lack of transparency is dangerous. I imagine how many companies are paying for incomplete multipart uploads without even realizing they're unnecessarily paying more. AWS needs to somehow make this type of information more transparent: Create an internal policy to abort multipart uploads that have more than X days (what kind of file takes more than 2 days to upload and build?). Create a box that is checked by default to create a lifecycle policy to clean up these incomplete files. Or simply put a warning message in the console informing that there are +1GB data of incomplete uploads in this bucket. But simply guessing that there's hidden data, which we can't even access through the console or boto3, is really crazy. submitted by /u/LordWitness [link] [comments]
I’ve been working as a full-stack developer for about 6 years, recently leaning more toward cloud architecture. My team’s now moving more workloads into AWS (ECS, Lambda, RDS, the usual suspects), and I’m trying to level up from “I can deploy” to “I can design this whole thing well.” I still love writing code. I don’t want to just diagram boxes in Lucidchart all day, but lately most of my time is spent reviewing IaC, chasing IAM edge cases, and debugging pipelines instead of actually building features. To prep for an upcoming internal architecture interview, I’ve been running small design sessions with Claude and Beyz coding assistant. It turned my side project into a mock system design. I use it to talk through trade-offs like “ECS vs. Fargate,” or simulate explaining cost optimization choices to a non-technical manager. But I’m struggling to find the right balance between staying deep in code (so I don’t go rusty) and learning to think more strategically about distributed design. So how did you keep your technical edge while growing into more architecture-heavy roles? Do you set time aside for side projects, certifications to stay close to the work? Would love to hear what worked for you. submitted by /u/CreditOk5063 [link] [comments]
Been working in this field for some time now and had been canceling and rescheduling this exam a lot. I finally decided to go hail mary and took the exam yesterday, and woke up to this result today. Nearly two weeks of dedicated preparation with Stéphane's course and Tutorials Dojo. Thanks a lot to this community for guiding me the right way, and also for the promo code (guess why I took it yesterday!). submitted by /u/Salt_in_Stress
I created an HTTP API endpoint in APGW which uses JWT Authorizer I went into Cognito and set up a user pool and with the client id/secret I'm able to create a JWT although the scope is just <name>/read I don't get how the scopes work, I go into Cognito > Domain, create a resource (which I don't even know if it's appropriate regarding being REST vs. HTTP). I add it to the scope in APGW But yeah I make my request against the HTTP API APGW URL with an Authorization header with the key and get 401. I need to enable logging on the APGW to see what's happening. One thing when I try to setup a resource server scope and matching it in APGW I get invalid grant when requesting a token so not sure still working on it. Alright the scope thing when dealing with the console UI have to go into login pages tab and add it in custom scopes Still 401 when doing a request with my token Alright I got it thank the stars, the issuer had a trailing slash, hint came from the error I luckily found in postman headers response where it said "issuer in OIDC discovery endpoint metadata does not match the configured issuer" submitted by /u/post_hazanko [link] [comments]
Spent the last 2 weeks squeezing in 1–2 hours a night after work with Stephane Maarek’s Udemy course. It's not easy switching into study mode after a long day, but I had a 50% voucher expiring today, so there was no turning back. I was still reviewing notes an hour before the exam! The score came in tonight: 702. Limped across the line, battered and bruised, but I’m here. Big thanks to this sub and everyone who shares tips & resources! You guys rock!! Seeing other people grind through really helped keep me motivated, even when I didn’t feel fully ready. That’s 2 AWS certs down this year, onward to SAA in early 2026 💪 submitted by /u/RTM179
Just passed SAA a few months ago and SOA recently. I want to get more comfortable with automated resource deployments because I see most Cloud Engineer jobs are looking for the following: - Cloudformation or Terraform - Container Orchestration (Ecs/Docker/K8) Please help me understand: 1) Is it better to Learn CF or TF? 2) Whats the best material to master this? Is there a book, video course or guide that helped you? 3) K8, I want to learn it but have no idea on how to approach. Thank you. submitted by /u/S4LTYSgt [link] [comments]
Both EKS and RDS have deletion protection for cluster and RDS instances. Sources: Amazon EKS adds safety control to prevent accidental cluster deletion Amazon RDS Now Provides Database Deletion Protection Will this prevent deletion of AWS Account or Organization? Put another way, if I delete my Account/Organization, do I need to delete all resources manually myself or AWS would do it (thus overriding any deletion protection config)? submitted by /u/ekapitu [link] [comments]
We've a number of disks on DB servers that have become way too big and, mostly thanks to colleagues not understanding computers, they're mostly empty. They're in production though, with SLAs and all, and I need to shrink them down by doing file copies. So, to leave them alone as much as possible, I've an Ansible playbook that uses a recent snapshot to create a volume, fires up a new EC2 instance, copies the data to a suitably sized disk, then destroys the new instance and switches the new volume to the original instance. Testing with multi-TB disks though, even when only copying 10 GB, it took 20 minutes! Locally copying on the original disk this is more like 20 seconds. So there are plenty of different options to create volumes from snapshots, potentially using FSR, and also now cloning volumes directly. These all boast being fast, but it seems nothing is actually "fast" or "instant" when it comes to being able to copy a big chunk of data from an even chunkier disk, as they all want to slowly copy the source volume blocks, mostly even if they are empty at the filesystem level. I'm surprised that this new "volume copy" functionality isn't just copy-on-write or such. No doubt it's more complicated than I want it to be, but why not just keep reading the actual same blocks as the source volume until you write to them, at which point you duplicate that block to a new space? So anyway, what would be a good approach to get the quickest result away from the production instance? I expect it'd be acceptable to prep a volume a day early or such like, so when we come to do the main automation the data will be able to be copied fast, but I still have this utopian view that I should be able to copy a terabyte in about 20 minutes and toddle off to lunch. Once we have done this main copy, I'm then moving that volume back to the original instance and rsyncing the volumes to pick up the absent data from the time we did the main copy, and I think that's all going to be OK, but it's this seemingly huge time delay to read all the data from a newly created volume, however it's created. Any suggestions appreciated! submitted by /u/BarryTownCouncil [link] [comments]
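One option worth testing under these constraints is enabling Fast Snapshot Restore on the snapshot before creating the volume, so the new volume does not lazily hydrate blocks from S3 on first read. A rough boto3 sketch, with placeholder snapshot ID, region, and Availability Zone (FSR has per-AZ costs and limits, so check those before adopting it):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # region is an example

SNAPSHOT_ID = "snap-0123456789abcdef0"   # placeholder snapshot ID
AZ = "eu-west-1a"                        # placeholder Availability Zone

# Enable Fast Snapshot Restore so volumes created from this snapshot are
# fully initialized and don't lazily pull blocks from S3 on first read.
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=[AZ],
    SourceSnapshotIds=[SNAPSHOT_ID],
)

# Wait until FSR reports "enabled" for the snapshot before creating the volume
# (describe_fast_snapshot_restores can be polled); then create the volume.
volume = ec2.create_volume(
    SnapshotId=SNAPSHOT_ID,
    AvailabilityZone=AZ,
    VolumeType="gp3",
)
print("Created volume:", volume["VolumeId"])
```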
Did the 50% deal as so many others have. Took Stephane Maarek's course and paired it with Tutorials Dojo questions. Short amount of time to pull it together as I just passed the AWS Certified Cloud Practitioner exam. Couldn't pass up the deal though! Onward and upward to Solutions Architect Associate. Let's go! submitted by /u/Real-Theory8840 [link] [comments]
Honestly, through the entire exam I was sure I would not pass, but I did. I studied for about 4 months, but not very intensively. I have no experience with AWS services. Here is how I prepared: As a first step I took Stephane Maarek's Udemy course. I watched it passively for about 6 hours a week, by sections. The course was detailed enough in my opinion and it gave me a good foundation for the learning journey. Tutorials Dojo: I purchased the practice exam pack and the study guide (a PDF with 200 pages). I first read the book and then started practicing the questions. At the beginning I used the sectioned mode, then took full exams but without timing them, and at the end I did the timed mode. The exams have very good explanations for the answers, and I tried to document my mistakes in a separate file and note the points that were missing from my knowledge. The questions in the Tutorials Dojo exams sometimes repeat themselves, but that's ok! I took the online test with a WiFi connection and had no trouble at all. Good luck!! submitted by /u/MassiveCarpenter3611 [link] [comments]
Looks like AWS has finally reverted the insane courageous separate pricing tier for M2M clients introduced last year: https://aws.amazon.com/about-aws/whats-new/2025/11/amazon-cognito-removes-machine-machine-app-client-price-dimension/ submitted by /u/notospez [link] [comments]
Hey everyone, just wanted to say a huge thank you to this community. I passed the AWS AIF-C01 exam, barely above the passing mark, but a pass is a pass! 😅 I actually don't have any background in AI/ML, but I decided to take advantage of the discounted exam promo from the AWS AI/ML Certification Challenge 2025, which ends today. With work, family, and volunteer commitments, I only had about two weeks to prepare. Ended up cramming Stephane Maarek's Udemy course and going through the 4 practice question review sections from Tutorials Dojo (Jon Bonso). Still found myself studying two hours before the exam and decided to just wing it, and somehow managed to scrape through. Big thanks to everyone who shares tips, breakdowns, and motivation here. You guys really helped keep me on track even when I thought I was way behind. On to the next cert! What are your suggestions? Looking at maybe shifting my career to more of a Cloud or AI/ML Project Management role. submitted by /u/Plane_Tadpole_1175 [link] [comments]
The lack of AWS credential hygiene, and the fact that it was ignored even after security researchers demonstrated proof of the leak, is worrisome. submitted by /u/heldsteel7 [link] [comments]
Hello all, I have passed the AI Practitioner exam today, scored 776/1000. The exam was really easy tbh. I took the exam because of the 50% offer that is going on. I used Stephane Maarek's course and practice tests, which were really useful. I never crossed 70% in the mock exams. I found the practice tests harder than the real exam. I also used the AWS Skill Builder free version domain reviews and sample test, which were very helpful. Planning to do some small projects after this to secure a cloud job somehow. Thank you! submitted by /u/lfcclf [link] [comments]
AWS has provided whitepapers to help you understand the technical concepts. Below are the recommended whitepapers for the AWS Certified Developer – Associate Exam.
The AWS Certified Developer-Associate Examination (DVA-C01) is a pass or fail exam. The examination is scored against a minimum standard established by AWS professionals guided by certification industry best practices and guidelines.
Your results for the examination are reported as a score from 100 – 1000, with a minimum passing score of 720.
Domain 1: Deployment (22%) 1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns. 1.2 Deploy applications using Elastic Beanstalk. 1.3 Prepare the application deployment package to be deployed to AWS. 1.4 Deploy serverless applications
22%
Domain 2: Security (26%) 2.1 Make authenticated calls to AWS services. 2.2 Implement encryption using AWS services. 2.3 Implement application authentication and authorization.
26%
Domain 3: Development with AWS Services (30%) 3.1 Write code for serverless applications. 3.2 Translate functional requirements into application design. 3.3 Implement application design into application code. 3.4 Write code that interacts with AWS services by using APIs, SDKs, and AWS CLI.
30%
Domain 4: Refactoring (10%) 4.1 Optimize application to best use AWS services and features. 4.2 Migrate existing application code to run on AWS.
10%
Domain 5: Monitoring and Troubleshooting (10%) 5.1 Write code that can be monitored. 5.2 Perform root cause analysis on faults found in testing or production.
In this AWS tutorial, we are going to discuss how we can make the best use of AWS services to build a highly scalable and fault-tolerant configuration of EC2 instances. The use of Load Balancers and Auto Scaling Groups falls under a number of AWS best practices, including performance efficiency, reliability and high availability.
Before we dive into this hands-on tutorial on how exactly we can build this solution, let’s have a brief recap on what an Auto Scaling group is, and what a Load balancer is.
Autoscaling group (ASG)
An Autoscaling group (ASG) is a logical grouping of instances which can scale up and scale down depending on pre-configured settings. By setting Scaling policies of your ASG, you can choose how many EC2 instances are launched and terminated based on your application’s load. You can do this based on manual, dynamic, scheduled or predictive scaling.
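As an illustration of a dynamic scaling policy, the boto3 sketch below attaches a target tracking policy that keeps average CPU around 50% to a hypothetical ASG named MyTestASG (the group name and target value are examples, not part of this tutorial's console steps):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Attach a dynamic (target tracking) scaling policy to an existing ASG.
# "MyTestASG" is a placeholder name used for illustration.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="MyTestASG",
    PolicyName="keep-cpu-around-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # add/remove instances to hold average CPU near 50%
    },
)
```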
Elastic Load Balancer (ELB)
An Elastic Load Balancer (ELB) is a name describing a number of services within AWS designed to distribute traffic across multiple EC2 instances in order to provide enhanced scalability, availability, security and more. The particular type of Load Balancer we will be using today is an Application Load Balancer (ALB). The ALB is a Layer 7 Load Balancer designed to distribute HTTP/HTTPS traffic across multiple nodes – with added features such as TLS termination, Sticky Sessions and Complex routing configurations.
Getting Started
First of all, we open our AWS management console and head to the EC2 management console.
We scroll down on the left-hand side and select ‘Launch Templates’. A Launch Template is a configuration template which defines the settings for EC2 instances launched by the ASG.
Under Launch Templates, we will select “Create launch template”.
We specify the name ‘MyTestTemplate’ and use the same text in the description.
Under the ‘Auto Scaling guidance’ box, tick the box which says ‘Provide guidance to help me set up a template that I can use with EC2 Auto Scaling’ and scroll down to launch template contents.
When it comes to choosing our AMI (Amazon Machine Image) we can choose the Amazon Linux 2 under ‘Quick Start’.
The Amazon Linux 2 AMI is free tier eligible, and easy to use for our demonstration purposes.
Next, we select the ‘t2.micro’ under instance types, as this is also free tier eligible.
Under Network Settings, we create a new Security Group called ExampleSG in our default VPC, allowing HTTP access to everyone. A rough boto3 equivalent of these launch template steps is shown below.
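For reference, a sketch of the same launch template created through the API might look like the following; the AMI ID and security group ID are placeholders you would look up in your own account and region:

```python
import boto3

ec2 = boto3.client("ec2")

# Equivalent of the console steps above; the AMI ID and security group ID
# are placeholders - look up the current Amazon Linux 2 AMI for your region.
response = ec2.create_launch_template(
    LaunchTemplateName="MyTestTemplate",
    VersionDescription="MyTestTemplate",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",            # placeholder Amazon Linux 2 AMI
        "InstanceType": "t2.micro",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder ExampleSG ID
    },
)
print(response["LaunchTemplate"]["LaunchTemplateId"])
```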
AWS Certified Developer Associate exam: Additional Information for reference
Below are some useful reference links that would help you to learn about AWS Certified Developer Associate Exam.
Hey everyone, I’m currently preparing for the AWS CloudOps certification. I have some experience with AWS, but not a lot. My manager recommended this certification because the work I’ve been doing aligns closely with its topics. If you have any tips, study recommendations, or resources that helped you, I’d really appreciate it. I’ve already picked up Stéphane Maarek’s course on Udemy. FYI: I haven’t taken any other AWS certification exams before—this will be my first one. Thanks in advance! submitted by /u/Thick-Mud-1037 [link] [comments]
https://preview.redd.it/916uz310ylzf1.png?width=1912&format=png&auto=webp&s=81df21324cf68c2c94ad7e4633913b9384d204b6 I just took the security specialty exam yesterday, and received the results this morning! Thank you very much y'all for the tips and pieces of advice. Studied for it for exactly a month with Adrian Cantrill's course, Tutorials Dojo, and AWS official questions sets. The exam definitely had questions I was not expecting but I would say they were variations of the questions out there in TD and AWS official questions, I had 3 that were exactly like TD still. And really understanding the concepts is key here I'd say. The pressure was big (not because of questions length, although that played a part), but because time concerns. Finished with over 1h left on the clock though. submitted by /u/Popular-Indication20 [link] [comments]
I’m currently in the AWS re/Start program, and I have to say it’s been a really eye-opening journey into the world of cloud computing. The exposure to AWS concepts, Python, Linux, and employability skills is incredible, especially for people like me who didn’t have a strong tech background before joining. That said, the pace of the program feels extremely intense. We cover so many topics — Python, Linux, networking, security, databases etc. — all within a few short weeks. It sometimes feels impossible to properly grasp one topic before moving on to the next. For learners who prefer to understand things deeply rather than just rushing through, this can get overwhelming and stressful. I really think the program would benefit from a slightly longer schedule or more built-in review periods, especially around the Python and AWS service modules. Slowing things down just a bit would help learners absorb the material more confidently and enjoy the process rather than feeling constantly behind. Overall, I’m grateful for the opportunity, it’s a great program with solid content and supportive trainers. I just hope the pace can be adjusted so that more learners can thrive instead of feeling overloaded. submitted by /u/Previous_Gene_254 [link] [comments]
hi, I see that OpenSSL in amazonlinux repository is 3.2.2. $ dnf info openssl Installed Packages Name : openssl Epoch : 1 Version : 3.2.2 Release : 1.amzn2023.0.2 Architecture : aarch64 Size : 2.0 M Source : openssl-3.2.2-1.amzn2023.0.2.src.rpm Repository : @System From repo : amazonlinux Summary : Utilities from the general purpose cryptography library with TLS implementation URL : http://www.openssl.org/ License : ASL 2.0 Description : The OpenSSL toolkit provides support for secure communications between : machines. OpenSSL includes a certificate management tool and shared : libraries which provide various cryptographic algorithms and : protocols. I also notice that OpenSSL EOL is at 2025-11-23; it's about 2 weeks from now. Is there any plan from AWS to upgrade from 3.2 to 3.6 or 3.5 (LTS)? With regards to current and future releases the OpenSSL project has adopted the following policy: Version 3.5 will be supported until 2030-04-08 (LTS) Version 3.4 will be supported until 2026-10-22 Version 3.3 will be supported until 2026-04-09 Version 3.2 will be supported until 2025-11-23 Version 3.0 will be supported until 2026-09-07 (LTS). Versions 1.1.1 and 1.0.2 are no longer supported. Extended support for 1.1.1 and 1.0.2 to gain access to security fixes for those versions is available. Versions 1.1.0, 1.0.1, 1.0.0 and 0.9.8 are no longer supported. Ref: https://endoflife.date/openssl https://openssl-library.org/policies/releasestrat/index.html submitted by /u/Suitable-Mail-1989 [link] [comments]
I am not receiving the account verification SMS with the code. This is the case number I opened: 176240002500002 submitted by /u/MortensenCristian [link] [comments]
I have 3+ years of help desk experience. For those who have been in cloud computing for a long time, what are your tips on starting, and which certs do you recommend? I'm curious about cloud security or network engineering in the cloud. Any tips and advice help. Also, should I pursue cloud or go the regular CCNA etc. network engineering route and stay away from cloud? Thanks. submitted by /u/MrRandome23 [link] [comments]
Passed AWS CCP few days ago! 🙌 Huge thanks to everyone here who shared tips and study resources — it truly helped me pass. Now I’m wondering, what should be my next step? I’m currently a Project Manager/Scrum Master with no technical background, just aiming to grow my cloud knowledge so I can better support my teams. Any advice on what cert or learning path to take next if I’m on more of a project/delivery management track? submitted by /u/Awkward_Guide_9270 [link] [comments]
Got my results back today for the CloudOps exam and failed with a score of 70%. I sat the SysOps Admin exam a month before with a failure score of 70%. I used CloudGuru for the content and paid Udemy and SkillsCertPro mocks to prepare for the second attempt. I studied for two solid weeks and learnt a tonne of new stuff and felt confident, I was sure I passed the second time today. But I failed… with the exact same score?! Feeling very deflated. Any advice is welcome 🙁 submitted by /u/External-Eye7025 [link] [comments]
Passed the certified developer associate this week.
Primary study was Stephane Maarek’s course on Udemy.
I also used the Practice Exams by Stephane Maarek and Abhishek Singh.
I used Stephane’s course and practice exams for the Solutions Architect Associate as well, and find his course does a good job preparing you to pass the exams.
The practice exams were more challenging than the actual exam, so they are a good gauge to see if you are ready for the exam.
Haven’t decided if I’ll do another associate level certification next or try for the solutions architect professional.
I cleared the Developer Associate exam yesterday. I scored 873. Actual exam experience: questions were focused mainly on Lambda, API, DynamoDB, CloudFront, and Cognito (must know the proper difference between user pools and identity pools). 3 questions I found were just on Redis vs. Memcached (so maybe you can focus more here as well to know the exact use cases and differences). Other topics were CloudFormation, Beanstalk, STS, and EC2. The exam was a mix of too easy and too tough for me. Some questions were one-liners and some were too long.
Resources: The main resource I used was Udemy: the course by Stéphane Maarek and the practice exams by Neal Davis and Stéphane Maarek. These exams proved really good and they even helped me focus on the areas where I was lacking. And they are up to the level of the actual exam; I found 3-4 exact same questions in the actual exam (this might be just luck!). So I feel the course by Stéphane is more than sufficient and you can trust it. I had achieved Solutions Architect Associate previously so I knew the basics, so I took around 2 weeks for preparation and revised Stéphane's course as much as possible. In parallel I took the mentioned practice exams as well, which guided me on where to focus more.
Thanks to all of you, and feel free to comment/DM me if you think I can help you in any way with achieving the same.
Passed the Developer Associate. My Notes.
There was more SNS “fan out” options. None of the Udemy or Tutorial Dojo tests had that.
They aren't joking about the 30 min check-in and expiring your exam if you don't check in at least 15 mins beforehand.
When you finish do a pass over review for multi select questions and check the number required.
Review again and look for gotcha phrases like "least operational cost", "fastest solution", "most secure", and "least expensive". Change your answer if you put what "you" would do.
Watch out for questions that mention services that don’t really have anything to do with the problem.
Look at every service mentioned in the question. You can probably think of a better stack for the solution, but just adhere to what they present.
If you are clueless of an answer start by ruling out the ones you KNOW are wrong and then guess.
Take as many practice exams like Tutorials Dojo as you can. On review, filter the results for "incorrect". Open another tab on the subject and read up or bookmark it.
I would get 50-60% on the first pass at each exam, then 85-95% after reading the answers and the open tabs and bookmarks.
If taking the Pearson proctored online test, download the OnVue app as soon as you can. Installing that thing made me miss the 15 min window and I had to rebook and pay. Their check-in is confusing between all the cert portals and their own site. Just use "Manage Pearson Exams" from the AWS cert portal to force auth to theirs.
New versions of the Developer (Associate) and DevOps Engineer (Professional) exams in Feb/March 2023
Last January, as usual, I chose some goals to achieve during 2022, including obtaining an AWS certification. In February I started studying using the Pluralsight platform. The course for developers is a good introduction but is messy sometimes. At the end of the course I had a big picture of the main AWS services but I was really confused by all the different topics. Pluralsight also offers some labs that allowed me to practice with AWS services, because my work wasn't related to the cloud.
Then a senior engineer suggested that I study using the ACloudGuru platform, and I loved their content. They focus only on the details necessary for the exam. During the videos they show useful lists to summarize services and tables to compare them. Moreover, they offer a fairly unrestricted playground to get hands-on. So I could try their labs but also try my first CloudFormation scripts and Lambdas. There are also 4 practice exams that are very similar to the real questions. Even though their platform is buggy sometimes, the subscription was worth it.
At the same time I purchased the pack of practice questions from Jon Bonso on Udemy, and indeed the answer explanations are very detailed and very useful.
To practice more, I added the questions I got wrong to the Anki desktop application and revised them on my phone whenever I could. I ended up with 150 different questions in Anki.
In the meantime, in August I left my software engineer job because my boss told me that I was not able to work with cloud technologies. In September I started a new job as a cloud engineer at a new company. Finally, in October I passed the Developer Associate exam and scored 910.
I put a lot of effort into preparing for the exam and eventually I got it! However, I believe this is just my first step and I want to keep studying. My next objective is the Solutions Architect Associate certification.
The AWS Certified Cloud Practitioner Exam (CLF-C02) is an introduction to AWS services, and the intention is to examine the candidate's ability to define what the AWS cloud is and its global infrastructure. It provides an overview of AWS core services, security aspects, pricing and support services. The main objective is to provide an overall understanding of the Amazon Web Services Cloud platform. The course helps you get a conceptual understanding of AWS and can help you learn the basics of AWS and cloud computing, including the services, use cases and benefits. [Get AWS CCP Practice Exam PDF Dumps here]
Prepare and Ace the AWS Cloud Practitioner Certification CCP CLF-C02: Practice Exam, Quizzes for each Exam Category, Detailed Answers, FAQs, I Passed AWS CCP Testimonials, Top 10 Tips and Tricks to help you ace the AWS CCP exam
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
aws cloud practitioner practice questions and answers
aws cloud practitioner practice exam questions and references
Q1:For auditing purposes, your company now wants to monitor all API activity for all regions in your AWS environment. What can you use to fulfill this new requirement?
A. For each region, enable CloudTrail and send all logs to a bucket in each region.
B. Enable CloudTrail for all regions.
C. Ensure one CloudTrail is enabled for all regions.
D. Use AWS Config to enable the trail for all regions.
Ensure one CloudTrail is enabled for all regions. Turn on CloudTrail for all regions in your environment and CloudTrail will deliver log files from all regions to one S3 bucket. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.
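As a quick illustration, a multi-region trail can be created with a couple of API calls; the trail and bucket names below are placeholders, and the bucket must already have a policy that allows CloudTrail to write to it:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Create one trail that applies to all regions and delivers logs to a single
# S3 bucket, then start logging. Names are placeholders for illustration.
cloudtrail.create_trail(
    Name="org-wide-audit-trail",
    S3BucketName="my-cloudtrail-logs-bucket",
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-wide-audit-trail")
```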
Use a VPC Endpoint to access S3. A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
AWS PrivateLink simplifies the security of data shared with cloud-based applications by eliminating the exposure of data to the public Internet.
[Get AWS CCP Practice Exam PDF Dumps here] It is AWS responsibility to secure Edge locations and decommission the data. AWS responsibility “Security of the Cloud” – AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.
Q4:You have EC2 instances running at 90% utilization and you expect this to continue for at least a year. What type of EC2 instance would you choose to ensure your costs stay at a minimum?
[Get AWS CCP Practice Exam PDF Dumps here] Reserved instances are the best choice for instances with continuous usage and offer a reduced cost because you purchase the instance for the entire year. Amazon EC2 Reserved Instances (RI) provide a significant discount (up to 75%) compared to On-Demand pricing and provide a capacity reservation when used in a specific Availability Zone.
The AWS Simple Monthly Calculator helps customers and prospects estimate their monthly AWS bill more efficiently. Using this tool, they can add, modify and remove services from their ‘bill’ and it will recalculate their estimated monthly charges automatically.
A. Sign up for the free alert under filing preferences in the AWS Management Console.
B. Set a schedule to regularly review the Billing and Cost Management dashboard each month.
C. Create an email alert in AWS Budgets
D. In CloudWatch, create an alarm that triggers each time the limit is exceeded.
Answer: C. AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Reservation alerts are supported for Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache, and Amazon Elasticsearch reservations.
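To make the idea concrete, here is a hedged boto3 sketch that creates a monthly cost budget with an email alert at 80% of the limit; the account ID, amount, and email address are placeholders:

```python
import boto3

budgets = boto3.client("budgets")

# Create a monthly cost budget with an email alert when actual spend passes
# 80% of the limit. Account ID, amount, and address are placeholders.
budgets.create_budget(
    AccountId="111122223333",
    Budget={
        "BudgetName": "monthly-cost-budget",
        "BudgetLimit": {"Amount": "100", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "billing-alerts@example.com"}
            ],
        }
    ],
)
```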
Q7: An Edge Location is a specialized AWS data centre that works with which services?
A. Lambda
B. CloudWatch
C. CloudFront
D. Route 53
Answer: C. Lambda@Edge lets you run Lambda functions to customize the content that CloudFront delivers, executing the functions in AWS locations closer to the viewer. Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.
CloudFront speeds up the distribution of your content by routing each user request through the AWS backbone network to the edge location that can best serve your content. Typically, this is a CloudFront edge server that provides the fastest delivery to the viewer. Using the AWS network dramatically reduces the number of networks that your users’ requests must pass through, which improves performance. Users get lower latency—the time it takes to load the first byte of the file—and higher data transfer rates.
You also get increased reliability and availability because copies of your files (also known as objects) are now held (or cached) in multiple edge locations around the world.
Answer: A. Route 53 is a domain name system service by AWS. When a disaster does occur, it can be easy to switch to secondary sites using the Route 53 service. Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.
Answer: D. The AWS documentation shows the spectrum of disaster recovery methods. The options at the far end of the spectrum give users the least downtime.
Q11:Your company is planning to host resources in the AWS Cloud. They want to use services which can be used to decouple resources hosted on the cloud. Which of the following services can help fulfil this requirement?
A. AWS EBS Volumes
B. AWS EBS Snapshots
C. AWS Glacier
D. AWS SQS
Answer:
D. AWS SQS: Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.
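A minimal sketch of that decoupling with boto3: the producer and consumer only share a queue URL, so either side can scale or fail independently. The queue name and message body below are made up for illustration.

```python
import boto3

sqs = boto3.client("sqs")

# Producer and consumer only share the queue URL, not each other's endpoints,
# which is what decouples the two components. Queue name is a placeholder.
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]

# Producer side: enqueue a message and move on.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"orderId": 42}')

# Consumer side: poll the queue, process, then delete the message.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in messages.get("Messages", []):
    print("processing:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```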
A. 99.999999999% durability and 99.99% availability. The S3 Standard storage class has a rating of 99.999999999% durability (referred to as 11 nines) and 99.99% availability.
A. Redshift is a database offering that is fully-managed and used for data warehousing and analytics, including compatibility with existing business intelligence tools.
B. and C. Centrally manage policies across multiple AWS accounts, automate AWS account creation and management, control access to AWS services, and consolidate billing across multiple AWS accounts.
Q17: There is a requirement for hosting a set of servers in the Cloud for a short period of 3 months. Which of the following types of instances should be chosen to be cost-effective?
A. Spot Instances
B. On-Demand
C. No Upfront costs Reserved
D. Partial Upfront costs Reserved
Answer:
B. Since the requirement is just for 3 months, the most cost-effective option is to use On-Demand Instances.
You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Log.
Q22:A company is deploying a new two-tier web application in AWS. The company wants to store their most frequently used data so that the response time for the application is improved. Which AWS service provides the solution for the company’s requirements?
A. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone
Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.
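As an example of the usual cache-aside pattern against an ElastiCache for Redis endpoint (using the redis-py client; the endpoint, key format, TTL, and load_from_database callable are all placeholders, not part of the question above):

```python
import json
import redis  # redis-py client; the ElastiCache endpoint below is a placeholder

cache = redis.Redis(host="my-cluster.xxxxxx.0001.use1.cache.amazonaws.com", port=6379)

def get_product(product_id, load_from_database):
    """Cache-aside read: try the in-memory store first, fall back to the database."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit: fast, no database round trip
    product = load_from_database(product_id)   # cache miss: read the slower database
    cache.setex(key, 300, json.dumps(product)) # store the result for 5 minutes
    return product
```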
Q23: You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 Instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost-effective way. Which of the following will meet your requirements?
When you think of cost effectiveness, you have to choose either Spot or Reserved instances. When you have a regular processing job, the best option is to use Spot instances, and since your application is designed to recover gracefully from Amazon EC2 instance failures, even if you lose the Spot instance there is no issue because your application can recover.
A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.
Q25:A company is deploying a two-tier, highly available web application to AWS. Which service provides durable storage for static content while utilizing Overall CPU resources for the web tier?
A. Amazon EBS volume.
B. Amazon S3
C. Amazon EC2 instance store
D. Amazon RDS instance
Answer:
B. Amazon S3 is the default storage service that should be considered for companies. It provides durable storage for all static content.
Q26: When working on the costing for On-Demand EC2 instances, which of the following are attributes that determine the cost of the EC2 Instance? Choose 3 answers from the options given below
Q27: You have a mission-critical application which must be globally available at all times. If this is the case, which of the below deployment mechanisms would you employ?
Always build components which are loosely coupled. This is so that even if one component does fail, the entire system does not fail. Also if you build with the assumption that everything will fail, then you will ensure that the right measures are taken to build a highly available and fault tolerant system.
Q29: You have 2 accounts in your AWS organization: one for Dev and the other for QA. All are part of consolidated billing. The master account has purchased 3 reserved instances. The Dev department is currently using 2 reserved instances. The QA team is planning on using 3 instances of the same instance type. What is the pricing tier of the instances that can be used by the QA team?
Since all accounts are part of consolidated billing, the pricing of reserved instances can be shared by all. And since 2 are already used by the Dev team, another one can be used by the QA team. The rest of the instances can be On-Demand instances.
Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.
Q32:You are exploring what services AWS has off-hand. You have a large number of data sets that need to be processed. Which of the following services can help fulfil this requirement.
A. EMR
B. S3
C. Glacier
D. Storage Gateway
Answer:
A. Amazon EMR helps you analyze and process vast amounts of data by distributing the computational work across a cluster of virtual servers running in the AWS Cloud. The cluster is managed using an open-source framework called Hadoop. Amazon EMR lets you focus on crunching or analyzing your data without having to worry about time-consuming setup, management, and tuning of Hadoop clusters or the compute capacity they rely on.
Amazon Inspector enables you to analyze the behaviour of your AWS resources and helps you to identify potential security issues. Using Amazon Inspector, you can define a collection of AWS resources that you want to include in an assessment target. You can then create an assessment template and launch a security assessment run of this target.
Q34: Your company is planning to offload some of the batch processing workloads onto AWS. These jobs can be interrupted and resumed at any time. Which of the following instance types would be the most cost-effective to use for this purpose?
A. On-Demand
B. Spot
C. Full Upfront Reserved
D. Partial Upfront Reserved
Answer:
B. Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks
Note that the AWS Console cannot be used to upload data onto Glacier. The console can only be used to create a Glacier vault which can be used to upload the data.
Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, secure, and can be as little as one-fifth the cost of high-speed Internet.
Amazon Inspector enables you to analyze the behavior of your AWS resources and helps you to identify potential security issues. Using Amazon Inspector, you can define a collection of AWS resources that you want to include in an assessment target. You can then create an assessment template and launch a security assessment run of this target.
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open source databases.
You can reduce the load on your source DB Instance by routing read queries from your applications to the read replica. Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to failure of any single hardware component
Q42: Your company is planning to host a large e-commerce application on the AWS Cloud. One of their major concerns is Internet attacks such as DDoS attacks.
Which of the following services can help mitigate this concern? Choose 2 answers from the options given below
One of the first techniques to mitigate DDoS attacks is to minimize the surface area that can be attacked thereby limiting the options for attackers and allowing you to build protections in a single place. We want to ensure that we do not expose our application or resources to ports, protocols or applications from where they do not expect any communication. Thus, minimizing the possible points of attack and letting us concentrate our mitigation efforts. In some cases, you can do this by placing your computation resources behind Content Distribution Networks (CDNs), Load Balancers and restricting direct Internet traffic to certain parts of your infrastructure like your database servers. In other cases, you can use firewalls or Access Control Lists (ACLs) to control what traffic reaches your applications.
You can use the consolidated billing feature in AWS Organizations to consolidate payment for multiple AWS accounts or multiple AISPL accounts. With consolidated billing, you can see a combined view of AWS charges incurred by all of your accounts. You also can get a cost report for each member account that is associated with your master account. Consolidated billing is offered at no additional charge.
If you want a self-managed database, that means you want complete control over the database engine and the underlying infrastructure. In such a case you need to host the database on an EC2 Instance
If the database is going to be used for a minimum of one year, then it is better to get Reserved Instances. You can save on costs, and if you use a partial upfront option, you can get a better discount.
The AWS Console cannot be used to upload data onto Glacier. The console can only be used to create a Glacier vault which can be used to upload the data.
Security groups act as a virtual firewall for your instance to control inbound and outbound traffic. A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets.
Q52:You plan to deploy an application on AWS. This application needs to be PCI Compliant. Which of the below steps are needed to ensure the compliance? Choose 2 answers from the below list:
A. Choose AWS services which are PCI Compliant
B. Ensure the right steps are taken during application development for PCI Compliance
C. Ensure the AWS Services are made PCI Compliant
D. Do an audit after the deployment of the application for PCI Compliance.
Q57:Which of the following is a factor when calculating Total Cost of Ownership (TCO) for the AWS Cloud?
A. The number of servers migrated to AWS
B. The number of users migrated to AWS
C. The number of passwords migrated to AWS
D. The number of keys migrated to AWS
Answer:
A. Running servers will incur costs. The number of running servers is one factor of Server Costs; a key component of AWS’s Total Cost of Ownership (TCO). Reference: AWS cost calculator
Q58:Which AWS Services can be used to store files? Choose 2 answers from the options given below:
A. Amazon CloudWatch
B. Amazon Simple Storage Service (Amazon S3)
C. Amazon Elastic Block Store (Amazon EBS)
D. AWS Config
E. Amazon Athena
B. and C. Amazon S3 is an object storage service built to store and retrieve any amount of data from anywhere. Amazon Elastic Block Store is persistent block storage for Amazon EC2.
C: AWS is defined as a cloud services provider. They provide hundreds of services, of which compute and storage are included (but not limited to). Reference: AWS
Q60: Which AWS service can be used as a global content delivery network (CDN) service?
A. Amazon SES
B. Amazon CloudTrail
C. Amazon CloudFront
D. Amazon S3
Answer:
C: Amazon CloudFront is a web service that gives businesses and web application developers an easy and cost effective way to distribute content with low latency and high data transfer speeds. Like other AWS services, Amazon CloudFront is a self-service, pay-per-use offering, requiring no long term commitments or minimum fees. With CloudFront, your files are delivered to end-users using a global network of edge locations. Reference: AWS CloudFront
Q61:What best describes the concept of fault tolerance?
Choose the correct answer:
A. The ability for a system to withstand a certain amount of failure and still remain functional.
B. The ability for a system to grow in size, capacity, and/or scope.
C. The ability for a system to be accessible when you attempt to access it.
D. The ability for a system to grow and shrink based on demand.
Answer:
A: Fault tolerance describes the concept of a system (in our case a web application) to have failure in some of its components and still remain accessible (highly available). Fault tolerant web applications will have at least two web servers (in case one fails).
Q62: The firm you work for is considering migrating to AWS. They are concerned about cost and the initial investment needed. Which of the following features of AWS pricing helps lower the initial investment amount needed?
Choose 2 answers from the options given below:
A. The ability to choose the lowest cost vendor.
B. The ability to pay as you go
C. No upfront costs
D. Discounts for upfront payments
Answer:
B and C: The best features of moving to the AWS Cloud are no upfront costs and the ability to pay as you go, where the customer only pays for the resources needed. Reference: AWS pricing
Q64: Your company has started using AWS. Your IT Security team is concerned with the security of hosting resources in the Cloud. Which AWS service provides security optimization recommendations that could help the IT Security team secure resources using AWS?
An online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment, Trusted Advisor provides real time guidance to help you provision your resources following AWS best practices. Reference: AWS trusted advisor
Q65:What is the relationship between AWS global infrastructure and the concept of high availability?
Choose the correct answer:
A. AWS is centrally located in one location and is subject to widespread outages if something happens at that one location.
B. AWS regions and Availability Zones allow for redundant architecture to be placed in isolated parts of the world.
C. Each AWS region handles different AWS services, and you must use all regions to fully use AWS.
Answer: B. As an AWS user, you can create your application's infrastructure and duplicate it. By placing duplicate infrastructure in multiple regions, high availability is created because if one region fails you have a backup (in another region) to use.
Q66: You are hosting a number of EC2 Instances on AWS. You are looking to monitor CPU Utilization on the Instance. Which service would you use to collect and track performance metrics for AWS services?
Answer: C. Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Reference: AWS CloudWatch
Q67: Which of the following support plans give access to all the checks in the Trusted Advisor service?
Q68: Which of the following in AWS maps to a separate geographic location?
A. AWS Region
B. AWS Data Centers
C. AWS Availability Zone
Answer: A. Amazon cloud computing resources are hosted in multiple locations world-wide. These locations are composed of AWS Regions and Availability Zones. Each AWS Region is a separate geographic area. Reference: AWS Regions and Availability Zones
Q69:What best describes the concept of scalability?
Choose the correct answer:
A. The ability for a system to grow and shrink based on demand.
B. The ability for a system to grow in size, capacity, and/or scope.
C. The ability for a system to be accessible when you attempt to access it.
D. The ability for a system to withstand a certain amount of failure and still remain functional.
Answer: B. Scalability refers to the concept of a system being able to easily (and cost-effectively) scale up. For web applications, this means the ability to easily add server capacity when demand requires.
Q70: If you wanted to monitor all events in your AWS account, which of the below services would you use?
A. AWS CloudWatch
B. AWS CloudWatch logs
C. AWS Config
D. AWS CloudTrail
Answer:
D: AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. Reference: Cloudtrail
Q71:What are the four primary benefits of using the cloud/AWS?
Choose the correct answer:
A. Fault tolerance, scalability, elasticity, and high availability.
B. Elasticity, scalability, easy access, limited storage.
C. Fault tolerance, scalability, sometimes available, unlimited storage
D. Unlimited storage, limited compute capacity, fault tolerance, and high availability.
Answer: A. Fault tolerance, scalability, elasticity, and high availability are the four primary benefits of AWS/the cloud.
Q72:What best describes a simplified definition of the “cloud”?
Choose the correct answer:
A. All the computers in your local home network.
B. Your internet service provider
C. A computer located somewhere else that you are utilizing in some capacity.
D. An on-premise data center that your company owns.
Answer: C. The simplest definition of the cloud is a computer that is located somewhere else that you are utilizing in some capacity. AWS is a cloud services provider, as they provide access to computers they own (located at AWS data centers) that you use for various purposes.
Q73: Your development team is planning to host a development environment on the cloud. This consists of EC2 and RDS instances. This environment will probably only be required for 2 months.
Which types of instances would you use for this purpose?
A. On-Demand
B. Spot
C. Reserved
D. Dedicated
Answer: A. The best and most cost-effective option would be to use On-Demand Instances. The AWS documentation gives the following additional information on On-Demand EC2 Instances: with On-Demand instances you only pay for the EC2 instances you use. The use of On-Demand instances frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs. Reference: AWS EC2 pricing on-demand
Q74: Which of the following can be used to secure EC2 Instances?
Answer: Security groups act as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don't specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC. Reference: VPC Security Groups
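A minimal boto3 sketch of the same idea: create a security group in a VPC and open inbound HTTP, while outbound traffic stays allowed by default. The VPC ID and group name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a security group in a VPC (IDs are placeholders) and allow inbound
# HTTP from anywhere; all outbound traffic is allowed by default.
sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow HTTP to web tier",
    VpcId="vpc-0123456789abcdef0",
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTP from anywhere"}],
        }
    ],
)
```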
Q75: What is the purpose of a DNS server?
Choose the correct answer:
A. To act as an internet search engine.
B. To protect you from hacking attacks.
C. To convert common language domain names to IP addresses.
Domain name system servers act as a “third party” that provides the service of converting common language domain names to IP addresses (which are required for a web browser to properly make a request for web content).
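A tiny Python illustration of that lookup, resolving a hostname to IP addresses the same way a browser's resolver would before opening a connection (example.com is just an example domain):

```python
import socket

# Resolve a domain name to IP addresses - the same DNS lookup a browser
# performs before it can open a connection to a web server.
infos = socket.getaddrinfo("example.com", 443, proto=socket.IPPROTO_TCP)
for family, _, _, _, sockaddr in infos:
    print(family.name, sockaddr[0])
```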
High availability refers to the concept that something will be accessible when you try to access it. An object or web application is “highly available” when it is accessible a vast majority of the time.
RDS is a SQL database service (that offers several database engine options), and DynamoDB is a NoSQL database option that only offers one NoSQL engine.
Q78: What are two open source in-memory engines supported by ElastiCache? Answer: Redis and Memcached.
Q85:If you want to have SMS or email notifications sent to various members of your department with status updates on resources in your AWS account, what service should you choose?
Choose the correct answer:
A. SNS
B. GetSMS
C. RDS
D. STS
Answer: A. Simple Notification Service (SNS) is what publishes messages to SMS and/or email endpoints.
Amazon WorkSpaces is a managed, secure Desktop-as-a-Service (DaaS) solution. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe
Q87: Your company has recently migrated large amounts of data to the AWS cloud in S3 buckets. But it is necessary to discover and protect the sensitive data in these buckets. Which AWS service can do that?
Notes: Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.
Q88: Your Finance Department has instructed you to save costs wherever possible when using the AWS Cloud. You notice that using reserved EC2 instances on a 1year contract will save money. What payment method will save the most money?
A: Deferred
B: Partial Upfront
C: All Upfront
D: No Upfront
Answer: C
Notes: With the All Upfront option, you pay for the entire Reserved Instance term with one upfront payment. This option provides you with the largest discount compared to On Demand Instance pricing.
Q89: A fantasy sports company needs to run an application for the length of a football season (5 months). They will run the application on an EC2 instance and there can be no interruption. Which purchasing option best suits this use case?
Answer: On-Demand Instances.
Notes: Five months is not a long enough term to make Reserved Instances the better option, and the application can’t be interrupted, which rules out Spot Instances. Dedicated Instances provide the option to bring along existing software licenses, but the scenario does not indicate a need to do this.
Q90:Your company is considering migrating its data center to the cloud. What are the advantages of the AWS cloud over an on-premises data center?
A. Replace upfront operational expenses with low variable operational expenses.
B. Maintain physical access to the new data center, but share responsibility with AWS.
C. Replace low variable costs with upfront capital expenses.
D. Replace upfront capital expenses with low variable costs.
Answer: D
Q91:You are leading a pilot program to try the AWS Cloud for one of your applications. You have been instructed to provide an estimate of your AWS bill. Which service will allow you to do this by manually entering your planned resources by service?
Notes: With the AWS Pricing Calculator, you can input the services you will use, and the configuration of those services, and get an estimate of the costs these services will accrue. AWS Pricing Calculator lets you explore AWS services, and create an estimate for the cost of your use cases on AWS.
Q92:Which AWS service would enable you to view the spending distribution in one of your AWS accounts?
Notes: AWS Cost Explorer is a free tool that you can use to view your costs and usage. You can view data up to the last 13 months, forecast how much you are likely to spend for the next three months, and get recommendations for what Reserved Instances to purchase. You can use AWS Cost Explorer to see patterns in how much you spend on AWS resources over time, identify areas that need further inquiry, and see trends that you can use to understand your costs. You can also specify time ranges for the data, and view time data by day or by month.
Q93:You are managing the company’s AWS account. The current support plan is Basic, but you would like to begin using Infrastructure Event Management. What support plan (that already includes Infrastructure Event Management without an additional fee) should you upgrade to?
A. Upgrade to Enterprise plan.
B. Do nothing. It is included in the Basic plan.
C. Upgrade to Developer plan.
D. Upgrade to the Business plan. No other steps are necessary.
Answer: A
Notes: AWS Infrastructure Event Management is a structured program available to Enterprise Support customers (and Business Support customers for an additional fee) that helps you plan for large-scale events, such as product or application launches, infrastructure migrations, and marketing events.
With Infrastructure Event Management, you get strategic planning assistance before your event, as well as real-time support during these moments that matter most for your business.
Q94:You have decided to use the AWS Cost and Usage Report to track your EC2 Reserved Instance costs. To where can these reports be published?
A. Trusted Advisor
B. An S3 Bucket that you own.
C. CloudWatch
D. An AWS owned S3 Bucket.
Answer: B
Notes: The AWS Cost and Usage Reports (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use Cost and Usage Reports to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour or day, by product or product resource, or by tags that you define yourself. AWS updates the report in your bucket once a day in comma-separated value (CSV) format. You can view the reports using spreadsheet software such as Microsoft Excel or Apache OpenOffice Calc, or access them from an application using the Amazon S3 API.
Q95:What can we do in AWS to receive the benefits of volume pricing for your multiple AWS accounts?
A. Use consolidated billing in AWS Organizations.
B. Purchase services in bulk from AWS Marketplace.
Answer: A
Notes: You can use the consolidated billing feature in AWS Organizations to consolidate billing and payment for multiple AWS accounts or multiple Amazon Internet Services Pvt. Ltd (AISPL) accounts. You can combine the usage across all accounts in the organization to share the volume pricing discounts, Reserved Instance discounts, and Savings Plans. This can result in a lower charge for your project, department, or company than with individual standalone accounts.
Q96:A gaming company is using the AWS Developer Tool Suite to develop, build, and deploy their applications. Which AWS service can be used to trace user requests from end-to-end through the application?
Notes: AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.
Q97:A company needs to use a Load Balancer which can serve traffic at the TCP, and UDP layers. Additionally, it needs to handle millions of requests per second at very low latencies. Which Load Balancer should they use?
Notes:Network Load Balancer is best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Transport Layer Security (TLS) traffic where extreme performance is required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) and is capable of handling millions of requests per second while maintaining ultra-low latencies.
Q98:Your company is migrating its services to the AWS cloud. The DevOps team has heard about infrastructure as code, and wants to investigate this concept. Which AWS service would they investigate?
Notes: AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS.
Q99:You have a MySQL database that you want to migrate to the cloud, and you need it to be significantly faster there. You are looking for a speed increase up to 5 times the current performance. Which AWS offering could you use?
Notes: Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases.
Q100: A developer is trying to programmatically retrieve information from an EC2 instance such as public keys, IP address, and instance ID. From where can this information be retrieved?
Notes: This type of data is stored in instance metadata. Instance user data does not provide the information mentioned, but can be used to help configure a new instance.
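As a rough sketch of what that retrieval looks like, the Python snippet below (standard library only) reads a few metadata paths using the IMDSv2 token flow; it only works when run on the EC2 instance itself, since the metadata endpoint is link-local.

```python
import urllib.request

METADATA = "http://169.254.169.254/latest"

# IMDSv2: request a short-lived session token first.
token_req = urllib.request.Request(
    f"{METADATA}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

def get_metadata(path: str) -> str:
    """Read one metadata path (e.g. 'instance-id') using the session token."""
    req = urllib.request.Request(
        f"{METADATA}/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req).read().decode()

print(get_metadata("instance-id"))
print(get_metadata("local-ipv4"))
print(get_metadata("public-keys/"))
```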
Q101: Why is AWS more economical than traditional data centers for applications with varying compute workloads?
A) Amazon EC2 costs are billed on a monthly basis.
B) Users retain full administrative access to their Amazon EC2 instances.
C) Amazon EC2 instances can be launched on demand when needed.
D) Users can permanently run enough instances to handle peak workloads.
Answer: C
Notes: The ability to launch instances on demand when needed allows users to launch and terminate instances in response to a varying workload. This is a more economical practice than purchasing enough on-premises servers to handle the peak load.
Reference: Advantage of cloud computing
Q102: Which AWS service would simplify the migration of a database to AWS?
A) AWS Storage Gateway
B) AWS Database Migration Service (AWS DMS)
C) Amazon EC2
D) Amazon AppStream 2.0
Answer: B
Notes: AWS DMS helps users migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. AWS DMS can migrate data to and from most widely used commercial and open-source databases.
Reference: AWS DMS
Q103: Which AWS offering enables users to find, buy, and immediately start using software solutions in their AWS environment?
A) AWS Config
B) AWS OpsWorks
C) AWS SDK
D) AWS Marketplace
Answer: D
Notes: AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that makes it easy to find, test, buy, and deploy software that runs on AWS.
Reference: AWS Marketplace
Q104: Which AWS networking service enables a company to create a virtual network within AWS?
A) AWS Config
B) Amazon Route 53
C) AWS Direct Connect
D) Amazon Virtual Private Cloud (Amazon VPC)
Answer: D
Notes: Amazon VPC lets users provision a logically isolated section of the AWS Cloud where users can launch AWS resources in a virtual network that they define.
Reference: VPC https://aws.amazon.com/vpc/
Q105: Which component of the AWS global infrastructure does Amazon CloudFront use to ensure low-latency delivery?
A) AWS Regions
B) Edge locations
C) Availability Zones
D) Virtual Private Cloud (VPC)
Answer: B
Notes: To deliver content to users with lower latency, Amazon CloudFront uses a global network of points of presence (edge locations and regional edge caches) worldwide.
Reference: CloudFront – https://aws.amazon.com/cloudfront/
Q106: How would a system administrator add an additional layer of login security to a user’s AWS Management Console?
A) Use Amazon Cloud Directory
B) Audit AWS Identity and Access Management (IAM) roles
C) Enable multi-factor authentication
D) Enable AWS CloudTrail
Answer: C
Notes: Multi-factor authentication (MFA) is a simple best practice that adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to the AWS Management Console, they will be prompted for their username and password (the first factor—what they know), as well as for an authentication code from their MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for AWS account settings and resources.
Reference: MFA – https://aws.amazon.com/iam/features/mfa/
Q107: Which service can identify the user that made the API call when an Amazon EC2 instance is terminated?
A) AWS Trusted Advisor
B) AWS CloudTrail
C) AWS X-Ray
D) AWS Identity and Access Management (AWS IAM)
Answer: B
Notes: AWS CloudTrail helps users enable governance, compliance, and operational and risk auditing of their AWS accounts. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs and APIs.
Reference: AWS CloudTrail https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html
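To illustrate how you might find who terminated an instance, here is a hedged boto3 sketch using CloudTrail's 90-day event history; it simply lists recent TerminateInstances calls.

```python
import json
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent TerminateInstances API calls from CloudTrail event history.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "TerminateInstances"}],
    MaxResults=10,
)

for event in events["Events"]:
    detail = json.loads(event["CloudTrailEvent"])
    # 'Username' identifies the IAM principal that made the call.
    print(event.get("Username"), event["EventTime"], detail.get("sourceIPAddress"))
```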
Q108: Which service would be used to send alerts based on Amazon CloudWatch alarms?
A) Amazon Simple Notification Service (Amazon SNS)
B) AWS CloudTrail
C) AWS Trusted Advisor
D) Amazon Route 53
Answer: A
Notes: Amazon SNS and Amazon CloudWatch are integrated so users can collect, view, and analyze metrics for every active SNS. Once users have configured CloudWatch for Amazon SNS, they can gain better insight into the performance of their Amazon SNS topics, push notifications, and SMS deliveries.
Reference: CloudWatch for Amazon SNS https://docs.aws.amazon.com/sns/latest/dg/sns-monitoring-using-cloudwatch.html
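A minimal sketch of wiring an alarm to SNS is shown below; the instance ID and topic ARN are hypothetical placeholders, and the alarm simply fires when average CPU stays above 80% for two five-minute periods.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# When this alarm fires, CloudWatch publishes to the SNS topic, which in turn
# notifies its email/SMS subscribers.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-demo",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0abc1234def567890"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:department-status-updates"],
)
```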
Q109: Where can a user find information about prohibited actions on the AWS infrastructure?
A) AWS Trusted Advisor
B) AWS Identity and Access Management (IAM)
C) AWS Billing Console
D) AWS Acceptable Use Policy
Answer: D
Notes: The AWS Acceptable Use Policy provides information regarding prohibited actions on the AWS infrastructure.
Reference: AWS Acceptable Use Policy – https://aws.amazon.com/aup/
Q110: Which of the following is an AWS responsibility under the AWS shared responsibility model?
A) Configuring third-party applications
B) Maintaining physical hardware
C) Securing application access and data
D) Managing guest operating systems
Answer: B
Notes: Maintaining physical hardware is an AWS responsibility under the AWS shared responsibility model.
Reference: AWS shared responsibility model https://aws.amazon.com/compliance/shared-responsibility-model/
Q111: Which recommendations are included in the AWS Trusted Advisor checks? (Select TWO.)
A) Amazon S3 bucket permissions
B) AWS service outages for services
C) Multi-factor authentication (MFA) use on the AWS account root user
D) Available software patches for Amazon EC2 instances
Answer: A and C
Notes: Trusted Advisor checks buckets in Amazon S3 that have open access permissions. Bucket permissions that grant list access to everyone can result in higher than expected charges if objects in the bucket are listed by unintended users at a high frequency. Bucket permissions that grant upload and delete access to all users create potential security vulnerabilities by allowing anyone to add, modify, or remove items in a bucket. This Trusted Advisor check examines explicit bucket permissions and associated bucket policies that might override the bucket permissions.
Trusted Advisor does not provide notifications for service outages. You can use the AWS Personal Health Dashboard to learn about AWS Health events that can affect your AWS services or account.
Trusted Advisor checks the root account and warns if MFA is not enabled.
Trusted Advisor does not provide information about the number of users in an AWS account.
What is the difference between Amazon EC2 Savings Plans and Spot Instances?
Amazon EC2 Savings Plans are ideal for workloads that involve a consistent amount of compute usage over a 1-year or 3-year term. With Amazon EC2 Savings Plans, you can reduce your compute costs by up to 72% over On-Demand costs.
Spot Instances are ideal for workloads with flexible start and end times, or that can withstand interruptions. With Spot Instances, you can reduce your compute costs by up to 90% over On-Demand costs. Unlike Amazon EC2 Savings Plans, Spot Instances do not require contracts or a commitment to a consistent amount of compute usage.
Amazon EBS vs Amazon EFS
An Amazon EBS volume stores data in a single Availability Zone. To attach an Amazon EC2 instance to an EBS volume, both the Amazon EC2 instance and the EBS volume must reside within the same Availability Zone.
Amazon EFS is a regional service. It stores data in and across multiple Availability Zones. The duplicate storage enables you to access data concurrently from all the Availability Zones in the Region where a file system is located. Additionally, on-premises servers can access Amazon EFS using AWS Direct Connect.
Which cloud deployment model allows you to connect public cloud resources to on-premises infrastructure?
Applications made available through hybrid deployments connect cloud resources to on-premises infrastructure and applications. For example, you might have an application that runs in the cloud but accesses data stored in your on-premises data center.
Which benefit of cloud computing helps you innovate and build faster?
Agility: The cloud gives you quick access to resources and services that help you build and deploy your applications faster.
Which developer tool allows you to write code within your web browser?
Cloud9 is an integrated development environment (IDE) that allows you to write code within your web browser.
Which method of accessing an EC2 instance requires both a private key and a public key?
SSH allows you to access an EC2 instance from your local laptop using a key pair, which consists of a private key and a public key.
Which service allows you to track the name of the user making changes in your AWS account?
CloudTrail tracks user activity and API calls in your account, which includes identity information (the user’s name, source IP address, etc.) about the API caller.
Which analytics service allows you to query data in Amazon S3 using Structured Query Language (SQL)?
Athena is a query service that makes it easy to analyze data in Amazon S3 using SQL.
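For a rough sense of how that works programmatically, here is a hedged boto3 sketch; the database, table, and results bucket names are hypothetical placeholders.

```python
import time
import boto3

athena = boto3.client("athena")

# Start a SQL query against a table registered in the Athena/Glue catalog;
# results are written to an S3 bucket you own.
query = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
execution_id = query["QueryExecutionId"]

# Poll until the query finishes, then print the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=execution_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=execution_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```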
Which machine learning service helps you build, train, and deploy models quickly?
SageMaker helps you build, train, and deploy machine learning models quickly.
Which EC2 storage mechanism is recommended when running a database on an EC2 instance?
EBS is a storage device you can attach to your instances and is a recommended storage option when you run databases on an instance.
Which storage service is a scalable file system that only works with Linux-based workloads?
EFS is an elastic file system for Linux-based workloads.
Which AWS service provides a secure and resizable compute platform with choice of processor, storage, networking, operating system, and purchase model?
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. Amazon EC2 offers the broadest and deepest compute platform with choice of processor, storage, networking, operating system, and purchase model. Amazon EC2.
Which services allow you to build hybrid environments by connecting on-premises infrastructure to AWS?
Site-to-site VPN allows you to establish a secure connection between your on-premises equipment and the VPCs in your AWS account.
Direct Connect allows you to establish a dedicated network connection between your on-premises network and AWS.
What service could you recommend to a developer to automate the software release process?
CodePipeline is a developer tool that allows you to continuously automate the software release process.
Which service allows you to practice infrastructure as code by provisioning your AWS resources via scripted templates?
CloudFormation allows you to provision your AWS resources via scripted templates.
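As a minimal sketch of the idea, the snippet below creates a stack from a deliberately tiny inline template (one S3 bucket); the stack name is a hypothetical placeholder, and real templates usually live in version-controlled YAML or JSON files.

```python
import json
import boto3

cloudformation = boto3.client("cloudformation")

# A deliberately tiny template: one S3 bucket with default settings.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DemoBucket": {"Type": "AWS::S3::Bucket"},
    },
}

# CloudFormation provisions (and can later update or delete) everything the template declares.
cloudformation.create_stack(
    StackName="iac-demo-stack",
    TemplateBody=json.dumps(template),
)
```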
Which machine learning service allows you to add image analysis to your applications?
Rekognition is a service that makes it easy to add image analysis to your applications.
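A short, hedged example of image label detection is below; the bucket and object key are hypothetical placeholders.

```python
import boto3

rekognition = boto3.client("rekognition")

# Detect labels (objects, scenes) in an image stored in S3.
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-photo-bucket", "Name": "uploads/cat.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```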
Which services allow you to run containerized applications without having to manage servers or clusters?
Fargate removes the need for you to interact with servers or clusters as it provisions, configures, and scales clusters of virtual machines to run containers for you.
ECS lets you run your containerized Docker applications on both Amazon EC2 and AWS Fargate.
EKS lets you run your containerized Kubernetes applications on both Amazon EC2 and AWS Fargate.
Amazon S3 offers multiple storage classes. Which storage class is best for archiving data when you want the cheapest cost and don’t mind long retrieval times?
S3 Glacier Deep Archive offers the lowest cost and is used to archive data. You can retrieve objects within 12 hours.
In the shared responsibility model, what is the customer responsible for?
You are responsible for patching the guest OS, including updates and security patches.
You are responsible for firewall configuration and securing your application.
A company needs phone, email, and chat access 24 hours a day, 7 days a week. The response time must be less than 1 hour if a production system has a service interruption. Which AWS Support plan meets these requirements at the LOWEST cost?
The Business Support plan provides phone, email, and chat access 24 hours a day, 7 days a week. The Business Support plan has a response time of less than 1 hour if a production system has a service interruption.
Which of the following is an advantage of consolidated billing on AWS?
Consolidated billing is a feature of AWS Organizations. You can combine the usage across all accounts in your organization to share volume pricing discounts, Reserved Instance discounts, and Savings Plans. This solution can result in a lower charge compared to the use of individual standalone accounts.
A company requires physical isolation of its Amazon EC2 instances from the instances of other customers. Which instance purchasing option meets this requirement?
With Dedicated Hosts, a physical server is dedicated for your use. Dedicated Hosts provide visibility and the option to control how you place your instances on an isolated, physical server. For more information about Dedicated Hosts, see Amazon EC2 Dedicated Hosts.
A company is hosting a static website from a single Amazon S3 bucket. Which AWS service will achieve lower latency and high transfer speeds?
CloudFront is a web service that speeds up the distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. Content is cached in edge locations. Content that is repeatedly accessed can be served from the edge locations instead of the source S3 bucket. For more information about CloudFront, see Accelerate static website content delivery.
Which AWS service provides a simple and scalable shared file storage solution for use with Linux-based Amazon EC2 instances and on-premises servers?
Amazon EFS provides an elastic file system that lets you share file data without the need to provision and manage storage. It can be used with AWS Cloud services and on-premises resources, and is built to scale on demand to petabytes without disrupting applications. With Amazon EFS, you can grow and shrink your file systems automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
Which service allows you to generate encryption keys managed by AWS?
KMS allows you to generate and manage encryption keys. The keys generated by KMS are managed by AWS.
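To sketch what that looks like in practice (key description is a placeholder), here is a minimal envelope-encryption flow with boto3:

```python
import boto3

kms = boto3.client("kms")

# Create a customer managed KMS key (AWS manages the underlying key material).
key_id = kms.create_key(Description="Demo key for envelope encryption")["KeyMetadata"]["KeyId"]

# Generate a data key: encrypt data locally with the plaintext key, then keep
# only the encrypted copy (CiphertextBlob) alongside the data.
data_key = kms.generate_data_key(KeyId=key_id, KeySpec="AES_256")
plaintext_key = data_key["Plaintext"]        # use for local encryption, then discard
encrypted_key = data_key["CiphertextBlob"]   # safe to persist; KMS can decrypt it later
```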
Which service can integrate with a Lambda function to automatically take remediation steps when it uncovers suspicious network activity when monitoring logs in your AWS account?
GuardDuty can perform automated remediation actions by leveraging Amazon CloudWatch Events and AWS Lambda. GuardDuty continuously monitors for threats and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes multiple AWS data sources, such as AWS CloudTrail event logs, Amazon VPC Flow Logs, and DNS logs.
Which service allows you to create access keys for someone needing to access AWS via the command line interface (CLI)?
IAM allows you to create users and generate access keys for users needing to access AWS via the CLI.
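A minimal sketch, assuming an existing IAM user with a hypothetical name, is shown below; note the secret access key is returned only once.

```python
import boto3

iam = boto3.client("iam")

# Create an access key pair for an existing IAM user.
resp = iam.create_access_key(UserName="cli-developer")
print("Access key ID:", resp["AccessKey"]["AccessKeyId"])
# The secret is shown only at creation time, so deliver it to the user securely right away.
print("Secret access key:", resp["AccessKey"]["SecretAccessKey"])
```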
Which service allows you to record software configuration changes within your Amazon EC2 instances over time?
Config helps with recording compliance and configuration changes over time for your AWS resources.
Which service assists with compliance and auditing by offering a downloadable report that provides the status of passwords and MFA devices in your account?
IAM provides a downloadable credential report that lists all users in your account and the status of their various credentials, including passwords, access keys, and MFA devices.
Which service allows you to locate credit card numbers stored in Amazon S3?
Macie is a data privacy service that helps you uncover and protect your sensitive data, such as personally identifiable information (PII) like credit card numbers, passport numbers, social security numbers, and more.
How do you manage permissions for multiple users at once using AWS Identity and Access Management (IAM)?
An IAM group is a collection of IAM users. When you assign an IAM policy to a group, all users in the group are granted permissions specified by the policy.
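As a small illustration (group and user names are hypothetical), the sketch below creates a group, attaches the AWS managed ReadOnlyAccess policy, and adds users so they all inherit the same permissions:

```python
import boto3

iam = boto3.client("iam")

# Create a group and attach a managed policy; every member inherits its permissions.
iam.create_group(GroupName="read-only-analysts")
iam.attach_group_policy(
    GroupName="read-only-analysts",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

# Add existing IAM users to the group in one place.
for user in ("alice", "bob"):
    iam.add_user_to_group(GroupName="read-only-analysts", UserName=user)
```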
Which service protects your web application from cross-site scripting attacks?
WAF helps protect your web applications from common web attacks, like SQL injection or cross-site scripting.
Which AWS Trusted Advisor real-time guidance recommendations are available for AWS Basic Support and AWS Developer Support customers?
Basic and Developer Support customers get 50 service limit checks.
Basic and Developer Support customers get security checks for “Specific Ports Unrestricted” on Security Groups.
Basic and Developer Support customers get security checks on S3 Bucket Permissions.
Which service allows you to simplify billing by using a single payment method for all your accounts?
Organizations offers consolidated billing that provides 1 bill for all your AWS accounts. This also gives you access to volume discounts.
Which AWS service usage will always be free even after the 12-month free tier plan has expired?
One million Lambda requests are always free each month.
What is the easiest way for a customer on the AWS Basic Support plan to increase service limits?
The Basic Support plan allows 24/7 access to Customer Service via email and the ability to open service limit increase support cases.
Which types of issues are covered by AWS Support?
“How to” questions about AWS service and features
Problems detected by health checks
Which features of AWS reduce your total cost of ownership (TCO)?
Sharing servers with others allows you to save money.
Elastic computing allows you to trade capital expense for variable expense.
You pay only for the computing resources you use with no long-term commitments.
Which service allows you to select and deploy operating system and software patches automatically across large groups of Amazon EC2 instances?
Systems Manager allows you to automate operational tasks across your AWS resources.
Which service provides the easiest way to set up and govern a secure, multi-account AWS environment?
Control Tower allows you to centrally govern and enforce the best use of AWS services across your accounts.
Which cost management tool gives you the ability to be alerted when the actual or forecasted cost and usage exceed your desired threshold?
Budgets allow you to improve planning and cost control with flexible budgeting and forecasting. You can choose to be alerted when your budget threshold is exceeded.
Which tool allows you to compare your estimated service costs per Region?
The Pricing Calculator allows you to get an estimate for the cost of AWS services. Comparing service costs per Region is a common use case.
Who can assist with accelerating the migration of legacy contact center infrastructure to AWS?
Professional Services is a global team of experts that can help you realize your desired business outcomes with AWS.
The AWS Partner Network (APN) is a global community of partners that helps companies build successful solutions with AWS.
Which cost management tool allows you to view costs from the past 12 months, current detailed costs, and forecasts costs for up to 3 months?
Cost Explorer allows you to visualize, understand, and manage your AWS costs and usage over time.
Which service reduces the operational overhead of your IT organization?
Managed Services implements best practices to maintain your infrastructure and helps reduce your operational overhead and risk.
I assume the VPCs are located in your own AWS account, otherwise you can’t really discover the information you are looking for. On the EC2 server you could use AWS CLI or PowerShell-based scripts that query the IP information. Based on the IP you can find out which instance uses the network interface, what security groups are tied to it, and in which VPC the instance is hosted. Read more here…
When using AWS Lambda inside your VPC, your Lambda function will be allocated private IP addresses, and only private IP addresses, from your specified subnets. This means that you must ensure that your specified subnets have enough free IP addresses for your Lambda function to scale into. Each simultaneous invocation needs its own IP. Read more here…
When a Lambda “is in a VPC”, it really means that its attached Elastic Network Interface is in the customer’s VPC and not in the hidden VPC that AWS manages for Lambda.
The ENI is not related to the AWS Lambda management system that does the invocation (the data plane mentioned here). The AWS Step Function system can go ahead and invoke the Lambda through the API, and the network request for that can pass through the underlying VPC and host infrastructure.
Those Lambdas in turn can invoke other Lambda directly through the API, or more commonly by decoupling them, such as through Amazon SQS used as a trigger. Read more ….
How do I invoke an AWS Lambda function programmatically?
Invokes a Lambda function. You can invoke a function synchronously (and wait for the response), or asynchronously. To invoke a function asynchronously, set InvocationType to Event.
For synchronous invocation, details about the function response, including errors, are included in the response body and headers. For either invocation type, you can find more information in the execution log and trace.
When an error occurs, your function may be invoked multiple times. Retry behavior varies by error type, client, event source, and invocation type. For example, if you invoke a function asynchronously and it returns an error, Lambda executes the function up to two more times. For more information, see Retry Behavior.
For asynchronous invocation, Lambda adds events to a queue before sending them to your function. If your function does not have enough capacity to keep up with the queue, events may be lost. Occasionally, your function may receive the same event multiple times, even if no error occurs. To retain events that were not processed, configure your function with a dead-letter queue.
The status code in the API response doesn’t reflect function errors. Error codes are reserved for errors that prevent your function from executing, such as permissions errors, limit errors, or issues with your function’s code and configuration. For example, Lambda returns TooManyRequestsException if executing the function would cause you to exceed a concurrency limit at either the account level (ConcurrentInvocationLimitExceeded) or function level (ReservedFunctionConcurrentInvocationLimitExceeded).
For functions with a long timeout, your client might be disconnected during synchronous invocation while it waits for a response. Configure your HTTP client, SDK, firewall, proxy, or operating system to allow for long connections with timeout or keep-alive settings.
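Putting the two invocation types side by side, here is a minimal boto3 sketch; the function name and payload are hypothetical placeholders.

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Synchronous invocation: wait for the function to finish and read its return value.
sync = lambda_client.invoke(
    FunctionName="my-demo-function",
    InvocationType="RequestResponse",
    Payload=json.dumps({"orderId": 42}),
)
print(json.loads(sync["Payload"].read()))

# Asynchronous invocation: Lambda queues the event and returns immediately (HTTP 202).
lambda_client.invoke(
    FunctionName="my-demo-function",
    InvocationType="Event",
    Payload=json.dumps({"orderId": 42}),
)
```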
The subnet mask determines how many bits of the network address are relevant (and thus indirectly the size of the network block in terms of how many host addresses are available) –
192.0.2.0, subnet mask 255.255.255.0 means that 192.0.2 is the significant portion of the network number, and that there are 8 bits left for host addresses (i.e. 192.0.2.0 thru 192.0.2.255)
192.0.2.0, subnet mask 255.255.255.128 means that the first three octets plus the most significant bit of the last octet are the significant portion of the network number, and that there are 7 bits left for host addresses (i.e. 192.0.2.0 thru 192.0.2.127)
When in doubt, envision the network number and subnet mask in base 2 (i.e. binary) and it will become much clearer. Read more here…
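The same arithmetic can be checked quickly with Python's standard ipaddress module, as in this small worked example:

```python
import ipaddress

# /24 leaves 8 host bits: 256 addresses (192.0.2.0 through 192.0.2.255)
net24 = ipaddress.ip_network("192.0.2.0/24")
print(net24.netmask, net24.num_addresses)      # 255.255.255.0 256

# /25 leaves 7 host bits: 128 addresses (192.0.2.0 through 192.0.2.127)
net25 = ipaddress.ip_network("192.0.2.0/25")
print(net25.netmask, net25.num_addresses)      # 255.255.255.128 128

# Viewing the mask in binary makes the split between network and host bits obvious.
print(bin(int(net25.netmask)))                 # 0b11111111111111111111111110000000
```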
Separate out the roles needed to do each job. (Assuming this is a corporate environment)
Have a role for EC2, another for Networking, another for IAM.
Everyone should not be admin. Everyone should not be able to add/remove IGWs, NAT gateways, alter security groups and NACLs, or set up peering connections.
Also, another thing… lock down full internet access. Limit to what is needed and that’s it. Read more here….
How can we setup AWS public-private subnet in VPC without NAT server?
Within a single VPC, the subnets’ route tables need to point to each other. This will already work without additional routes because VPC sets up the local target to point to the VPC subnet.
Security groups are not used here since they are attached to instances, and not networks.
The NAT EC2 instance (server), or AWS-provided NAT gateway is necessary only if the private subnet internal addresses need to make outbound connections. The NAT will translate the private subnet internal addresses to the public subnet internal addresses, and the AWS VPC Internet Gateway will translate these to external IP addresses, which can then go out to the Internet. Read more here ….
What are the applications (or workloads) that cannot be migrated on to cloud (AWS or Azure or GCP)?
A good example of workloads that currently are not in public clouds are mobile and fixed core telecom networks for tier 1 service providers. This is despite the fact that these core networks are increasingly software based and have largely been decoupled from the hardware. There are a number of reasons for this, such as the fact that public cloud providers like Azure and AWS do not offer the guaranteed availability required by telecom networks. These networks require 99.999% availability, which is typically referred to as telecom grade.
The regulatory environment frequently restricts hosting of subscriber data outside of the operator’s data centers or in another country, and key network functions such as lawful interception cannot contractually be hosted off-prem. Read more here….
How many CIDRs can we add to my own created VPC?
You can add up to 5 IPv4 CIDR blocks, or 1 IPv6 block per VPC. You can further segment the network by utilizing up to 200 subnets per VPC. Amazon VPC Limits. Read more …
Why can’t a subnet’s CIDR be changed once it has been assigned?
Sure it can, but you’ll need to coordinate with the neighbors. You can merge two /25’s into a single /24 quite effortlessly if you control the entire range it covers. In practice you’ll see many tiny allocations in public IPv4 space, like /29’s and even smaller. Those are all assigned to different people. If you want to do a big shuffle there, you have a lot of coordinating to do.. or accept the fallout from the breakage you cause. Read more…
Can one VPC talk to another VPC?
Yes, but a Virtual Private Cloud is usually built for the express purpose of being isolated from unwanted external traffic. I can think of several good reasons to encourage that sort of communication, so the idea is not without merit. Read more..
You need good knowledge of the AWS services, and of how to leverage them to solve simple to complex problems.
As your question is related to the deployment Pod, you will probably be asked about deployment methods (A/B testing like blue-green deployment) as well as pipelining strategies. You might be asked during this interview to reason about a simple task and to code it (like parsing a log file). Also review the TCP/IP stack in-depth as well as the tools to troubleshoot it for the networking round. You will eventually have some Linux questions, the range of questions can vary from common CLI tools to Linux internals like signals / syscalls / file descriptors and so on.
Last but not least, the Leadership Principles: I can only suggest that you prepare a story for each of them. You will quickly find which LPs they are looking for and will be able to give the right signal to your interviewer.
Finally, remember that there’s a debrief after the (usually 5) stages of your on-site interview, and more senior and convincing interviewers tend to defend their vote, so don’t screw up with them.
Be natural, focus on the question details and ask for confirmation, be cool but not too much. At the end of the day, remember that your job will be to understand customer issues and provide a solution, so treat your interviewers as if they were customers; they will see a successful CSE in you, be reassured, and give you the job.
Expect questions on CloudFormation, Terraform, AWS EC2/RDS, and stack-related topics.
It also depends on the support team you are being hired for. Networking or compute teams (EC2) have different interview patterns vs database or big data support.
In any case, basics of OS and networking are critical to the interview. If you have a phone screen, we will be looking for basic to semi-advanced skills in these and your specialty. For example, if you mention Oracle in your resume and you are interviewing for the database team, expect a flurry of those questions.
The other important aspect is the Amazon Leadership Principles. Half of your interview is based on LPs. If you do not have scenarios where you demonstrate the LPs, you cannot expect to work here even if your technical skills are above average (having extraordinary skills is a different thing).
The overall interview itself will have 1 phone screen if you are interviewing in the US and 1–2 if outside the US. The onsite loop will be 4 rounds, 2 of which are technical (again divided into OS and networking and the specific specialty of the team you are interviewing for) and 2 of which are Leadership Principles, where we test your soft skills and management skills, as they are very important in this job. You need to have a strong viewpoint, disagree if it seems valid to do so, show empathy, and be a team player while showing the ability to pull off things individually as well. These skills will be critical for cracking the LP interviews.
You will NOT be asked to code or write queries as it’s not part of the job, so you can concentrate on the theoretical part of the subject and also your resume. We will grill you on topics mentioned on your resume to start with.
“Monolithic architecture describes something built from a single piece of material, historically rock; the term monolith is normally used for an object made from a single large piece of material.” – Non-technical definition. “A monolithic application has a single code base with multiple modules.”
A large monolithic code base (often spaghetti code) puts immense cognitive complexity on the developer’s head. As a result, the development velocity is poor. Granular scaling (i.e., scaling only part of the application) is not possible, and polyglot programming or polyglot databases are challenging.
Drawbacks of Monolithic Architecture
This simple approach has a limitation in size and complexity. The application becomes too large and complex to fully understand and to change quickly and correctly. The size of the application can slow down the start-up time. You must redeploy the entire application on each update.
Sticky sessions, also known as session affinity, allow you to route a site user to the particular web server that is managing that individual user’s session. The session’s validity can be determined by a number of methods, including client-side cookies or configurable duration parameters that can be set at the load balancer which routes requests to the web servers.
Some advantages of utilizing sticky sessions are that it’s cost-effective, because you are storing sessions on the same web servers running your applications, and retrieval of those sessions is generally fast because it eliminates network latency. A drawback of storing sessions on an individual node is that in the event of a failure, you are likely to lose the sessions that were resident on the failed node. In addition, in the event the number of your web servers changes, for example in a scale-up scenario, it’s possible that the traffic may be unequally spread across the web servers as active sessions may exist on particular servers. If not mitigated properly, this can hinder the scalability of your applications. Read more here …
After you terminate an instance, it remains visible in the console for a short while, and then the entry is automatically deleted. You cannot delete the terminated instance entry yourself. After an instance is terminated, resources such as tags and volumes are gradually disassociated from the instance and therefore may no longer be visible on the terminated instance after a short while.
When an instance terminates, the data on any instance store volumes associated with that instance is deleted.
By default, Amazon EBS root device volumes are automatically deleted when the instance terminates. However, by default, any additional EBS volumes that you attach at launch, or any EBS volumes that you attach to an existing instance persist even after the instance terminates. This behavior is controlled by the volume’s DeleteOnTermination attribute, which you can modify
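As a minimal sketch of modifying that attribute (instance ID and device name are hypothetical placeholders; /dev/xvda is a typical root device name), you could do something like:

```python
import boto3

ec2 = boto3.client("ec2")

# Keep the root volume after the instance terminates by turning DeleteOnTermination off.
ec2.modify_instance_attribute(
    InstanceId="i-0abc1234def567890",
    BlockDeviceMappings=[
        {"DeviceName": "/dev/xvda", "Ebs": {"DeleteOnTermination": False}},
    ],
)
```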
When you first launch an instance with gp2 volumes attached, you get an initial credit balance allowing for up to 30 minutes of burst at 3,000 IOPS.
After the first 30 minutes, your volume will accrue credits as follows (taken directly from AWS documentation):
Within the General Purpose (SSD) implementation is a Token Bucket model that works as follows
Each token represents an “I/O credit” that pays for one read or one write.
A bucket is associated with each General Purpose (SSD) volume, and can hold up to 5.4 million tokens.
Tokens accumulate at a rate of 3 per configured GB per second, up to the capacity of the bucket.
Tokens can be spent at up to 3000 per second per volume.
The baseline performance of the volume is equal to the rate at which tokens are accumulated — 3 IOPS per GB.
In addition to this, gp2 volumes provide a baseline performance of 3 IOPS per GB, up to roughly 1 TB (3,000 IOPS). Volumes larger than that no longer work on the credit system, as they already provide a baseline of at least 3,000 IOPS. gp2 volumes have a cap of 10,000 IOPS regardless of the volume size (so the IOPS max out for volumes larger than about 3.3 TB).
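A quick back-of-the-envelope check of those figures, for a hypothetical 100 GB gp2 volume:

```python
# gp2 burst math for a hypothetical 100 GB volume, using the figures quoted above.
volume_gb = 100
baseline_iops = 3 * volume_gb                  # 3 IOPS per GB -> 300 IOPS
burst_iops = 3000                              # burst ceiling for smaller volumes
bucket_credits = 5_400_000                     # full credit bucket

# While bursting at 3,000 IOPS the bucket drains at (burst - baseline) credits per second.
drain_rate = burst_iops - baseline_iops        # 2,700 credits/second
burst_seconds = bucket_credits / drain_rate
print(f"Baseline: {baseline_iops} IOPS, full burst lasts ~{burst_seconds / 60:.0f} minutes")
# -> Baseline: 300 IOPS, full burst lasts ~33 minutes
```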
Elastic IP addresses are free while they are associated with a running instance, so feel free to use one! You will be charged for an Elastic IP that is allocated but not in use on a running instance (for example, while the instance is stopped). The benefit is that you get to keep that IP allocated to your account instead of losing it like any other public IP. Once you start the instance you just re-associate it if needed and you have your old IP again.
Here are the changes associated with the use of Elastic IP addresses
No cost for Elastic IP addresses while in use
* $0.01 per non-attached Elastic IP address per complete hour
* $0.00 per Elastic IP address remap – first 100 remaps / month
* $0.10 per Elastic IP address remap – additional remap / month over 100
If you require any additional information about pricing, please refer to the AWS EC2 pricing page.
The short answer to reducing your AWS EC2 costs – turn off your instances when you don’t need them.
Your AWS bill is just like any other utility bill, you get charged for however much you used that month. Don’t make the mistake of leaving your instances on 24/7 if you’re only using them during certain days and times (ex. Monday – Friday, 9 to 5).
To automatically start and stop your instances, AWS offers an “EC2 scheduler” solution. A better option would be a cloud cost management tool that not only stops and starts your instances automatically, but also tracks your usage and makes sizing recommendations to optimize your cloud costs and maximize your time and savings.
You could potentially save money using Reserved Instances. But, in non-production environments such as dev, test, QA, and training, Reserved Instances are not your best bet. Why is this the case? These environments are less predictable; you may not know how many instances you need and when you will need them, so it’s better to not waste spend on these usage charges. Instead, schedule such instances (preferably using ParkMyCloud). Scheduling instances to be only up 12 hours per day on weekdays will save you 65% – better than all but the most restrictive 3-year RIs!
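As a rough sketch of the stop/start approach (instance IDs are placeholders), the two functions below could be wired to any scheduler, for example cron or an EventBridge rule invoking a Lambda function:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical dev/test fleet that only needs to run during working hours.
OFFICE_HOURS_INSTANCES = ["i-0abc1234def567890", "i-0123456789abcdef0"]

def stop_for_the_night():
    """Call from a scheduled job at the end of the working day."""
    ec2.stop_instances(InstanceIds=OFFICE_HOURS_INSTANCES)

def start_for_the_day():
    """Call from the same scheduler each weekday morning."""
    ec2.start_instances(InstanceIds=OFFICE_HOURS_INSTANCES)
```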
AWS is a web service provider which offers a set of services related to compute, storage, database, networking and more to help businesses scale and grow.
All your concerns are related to the AWS EC2 instance, so let me start with an instance.
Instance:
An EC2 instance is similar to a server where you can host your websites or applications to make it available Globally
It is highly scalable and works on the pay-as-you-go model
You can increase or decrease the capacity of these instances as per the requirement
AMI:
AMI provides the information required to launch the EC2 instance
AMI includes the pre-configured templates of the operating system that runs on the AWS
Users can launch multiple instances with the same configuration from a single AMI
Snapshot:
Snapshots are incremental backups of Amazon EBS volumes
Data in EBS volumes is backed up to S3 by taking point-in-time snapshots
Only the data unique to a snapshot is removed when that snapshot is deleted
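Taking a snapshot is a one-line call in boto3; here is a minimal sketch with a hypothetical volume ID.

```python
import boto3

ec2 = boto3.client("ec2")

# Take a point-in-time, incremental snapshot of an EBS volume.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0abc1234def567890",
    Description="Nightly backup of the data volume",
)
print(snapshot["SnapshotId"], snapshot["State"])
```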
They are definitely all chalk and cheese to one another.
A VPN (Virtual Private Network) is essentially an encrypted “channel” connecting two networks, or a machine to a network, generally over the public internet.
A VPS (Virtual Private Server) is a rented virtual machine running on someone else’s hardware. AWS EC2 can be thought of as a VPS, but the term is usually used to describe low-cost products offered by lots of other hosting companies.
A VPC (Virtual Private Cloud) is a virtual network in AWS (Amazon Web Services). It can be divided into private and public subnets, have custom routing rules, have internal connections to other VPCs, etc. EC2 instances and other resources are placed in VPCs similarly to how physical data centers have operated for a very long time.
An Elastic IP address is basically a static IP (IPv4) address that you can allocate to your account and associate with your resources.
Now, in the case that you associate the IP with a resource (and the resource is running), you are not charged anything. On the other hand, if you create an Elastic IP but do not associate it with a running resource, then you are charged a small amount (should be around $0.005 per hour, if I remember correctly).
Additional info about these:
You are limited to 5 Elastic IP addresses per region. If you require more than that, you can contact AWS support with a request for additional addresses. You need to have a good reason in order to be approved because IPv4 addresses are becoming a scarce resource.
In general, you should be good without Elastic IPs for most of the use-cases (as every EC2 instance has its own public IP, and you can use load balancers, as well as map most of the resources via Route 53).
One of the use-cases that I’ve seen where my client is using an Elastic IP is to make it easier for him to access a specific EC2 instance via RDP, as well as to do deployments through Visual Studio, as he targets the Elastic IP and thus does not have to watch for any changes in the public IP (in case of stopping or rebooting).
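A minimal sketch of allocating and attaching an Elastic IP (the instance ID is a hypothetical placeholder) looks like this:

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and associate it with an instance so its public
# address survives stop/start cycles.
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0abc1234def567890",
    AllocationId=allocation["AllocationId"],
)
print("Stable public IP:", allocation["PublicIp"])
```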
At this time, AWS Transit Gateway does not support inter region attachments. The transit gateway and the attached VPCs must be in the same region. VPC peering supports inter region peering.
An EC2 instance is a server instance, whilst a WorkSpace is a desktop instance.
Both Windows Server and Windows workstation editions have desktops. Windows Server Core does not (and AWS doesn’t have an AMI for Windows Server Core that I could find).
It is possible to SSH into a Windows instance – this is done on port 22. You would not see a desktop when using SSH if you had enabled it. It is not enabled by default.
If you are seeing a desktop, I believe you’re “RDPing” to the Windows instance. This is done with the RDP protocol on port 3389.
Two different protocols and two different ports.
WorkSpaces doesn’t allow terminal or SSH services by default. You need to use the WorkSpaces client. You can still enable RDP and/or SSH, but this is not recommended.
WorkSpaces is a managed desktop service. AWS takes care of pre-built AMIs, software licenses, joining to a domain, scaling, etc.
What is Amazon EC2?
Scalable, pay-as-you-go compute capacity in the cloud. Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.
What is Amazon WorkSpaces?
Easily provision cloud-based desktops that allow end-users to access applications and resources. With a few clicks in the AWS Management Console, customers can provision a high-quality desktop experience for any number of users at a cost that is highly competitive with traditional desktops and half the cost of most virtual desktop infrastructure (VDI) solutions. End-users can access the documents, applications and resources they need with the device of their choice, including laptops, iPad, Kindle Fire, or Android tablets.
Elastic – Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds or even thousands of server instances simultaneously.
Completely Controlled – You have complete control of your instances. You have root access to each one, and you can interact with them as you would any machine.
Flexible – You have the choice of multiple instance types, operating systems, and software packages. Amazon EC2 allows you to select a configuration of memory, CPU, instance storage, and the boot partition size that is optimal for your choice of operating system and application.
On the other hand, Amazon WorkSpaces provides the following key features:
Support Multiple Devices- Users can access their Amazon WorkSpaces using their choice of device, such as a laptop computer (Mac OS or Windows), iPad, Kindle Fire, or Android tablet.
Keep Your Data Secure and Available- Amazon WorkSpaces provides each user with access to persistent storage in the AWS cloud. When users access their desktops using Amazon WorkSpaces, you control whether your corporate data is stored on multiple client devices, helping you keep your data secure.
Choose the Hardware and Software you need- Amazon WorkSpaces offers a choice of bundles providing different amounts of CPU, memory, and storage so you can match your Amazon WorkSpaces to your requirements. Amazon WorkSpaces offers preinstalled applications (including Microsoft Office) or you can bring your own licensed software.
Provides secure, resizable compute capacity in the cloud. It makes web-scale cloud computing easier for developers. EC2
EC2 Spot
Run fault-tolerant workloads for up to 90% off. EC2Spot
EC2 Autoscaling
Automatically add or remove compute capacity to meet changes in demand. EC2_AutoScaling
Lightsail
Designed to be the easiest way to launch & manage a virtual private server with AWS. An easy-to-use cloud platform that offers everything you need to build an application or website. Lightsail
Batch
Enables developers, scientists, & engineers to easily & efficiently run hundreds of thousands of batch computing jobs on AWS. Fully managed batch processing at any scale. Batch
Containers
Elastic Container Service (ECS)
Highly secure, reliable, & scalable way to run containers. ECS
Lambda
Run code without thinking about servers. Pay only for the compute time you consume. Lambda
Edge and hybrid
Outposts
Run AWS infrastructure & services on premises for a truly consistent hybrid experience. Outposts
Snow Family
Collect and process data in rugged or disconnected edge environments. SnowFamily
Wavelength
Deliver ultra-low latency applications for 5G devices. Wavelength
VMware Cloud on AWS
Innovate faster, rapidly transition to the cloud, & work securely from any location. VMware_On_AWS
Local Zones
Run latency sensitive applications closer to end-users. LocalZones
Networking and Content Delivery
Use cases
Functionality
Service
Description
Build a cloud network
Define and provision a logically isolated network for your AWS resources
VPC
VPC lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. VPC
Connect VPCs and on-premises networks through a central hub
Transit Gateway
Transit Gateway connects VPCs & on-premises networks through a central hub. This simplifies your network & puts an end to complex peering relationships. TransitGateway
Provide private connectivity between VPCs, services, and on-premises applications
PrivateLink
PrivateLink provides private connectivity between VPCs & services hosted on AWS or on-premises, securely on the Amazon network. PrivateLink
Route users to Internet applications with a managed DNS service
Route 53
Route 53 is a highly available & scalable cloud DNS web service. Route53
Scale your network design
Automatically distribute traffic across a pool of resources, such as instances, containers, IP addresses, and Lambda functions
Elastic Load Balancing
Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as EC2’s, containers, IP addresses, & Lambda functions. ElasticLoadBalancing
Direct traffic through the AWS Global network to improve global application performance
Global Accelerator
Global Accelerator is a networking service that sends user’s traffic through AWS’s global network infrastructure, improving internet user performance by up to 60%. GlobalAccelerator
Secure your network traffic
Safeguard applications running on AWS against DDoS attacks
Shield
Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. Shield
Protect your web applications from common web exploits
WAF
WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. WAF
Centrally configure and manage firewall rules
Firewall Manager
Firewall Manager is a security management service which allows you to centrally configure & manage firewall rules across accounts & applications in your AWS Organization.
Build a hybrid IT network
Connect your users to AWS or on-premises resources using a Virtual Private Network
(VPN) – Client
VPN solutions establish secure connections between on-premises networks, remote offices, client devices, & the AWS global network. VPN
Create an encrypted connection between your network and your Amazon VPCs or AWS Transit Gateways
(VPN) – Site to Site
Site-to-Site VPN creates a secure connection between data center or branch office & AWS cloud resources. site_to_site
Establish a private, dedicated connection between AWS and your datacenter, office, or colocation environment
Direct Connect
Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. DirectConnect
Content delivery networks
Securely deliver data, videos, applications, and APIs to customers globally with low latency, and high transfer speeds
CloudFront
CloudFront expedites distribution of static & dynamic web content. CloudFront
Build a network for microservices architectures
Provide application-level networking for containers and microservices
App Mesh
App Mesh makes it easy to monitor & control microservices running on AWS. AppMesh
Create, maintain, and secure APIs at any scale
API Gateway
API Gateway allows the user to create, publish, & maintain their own REST and WebSocket APIs at any scale. APIGateway
Discover AWS services connected to your applications
Cloud Map
Cloud Map is a cloud resource discovery service: it lets you register your cloud resources under custom names & discover them by those names. CloudMap
Storage
Amazon S3
S3 is the storehouse for the internet, i.e. object storage built to store & retrieve any amount of data from anywhere. S3
AWS Backup
AWS Backup is a fully managed backup service that makes it easy to centralize & automate the backup of data across AWS services in the cloud. AWS_Backup
Amazon EBS
Amazon Elastic Block Store is a web service that provides block-level storage volumes. EBS
Amazon EFS Storage
EFS offers simple, scalable, elastic file storage for use with EC2 instances; it is network file storage (NFS), not object/blob storage. EFS
Amazon FSx
FSx provides fully managed third-party file systems with native compatibility & feature sets for your workloads. It is available as FSx for Windows File Server (fully managed file storage built on Windows Server) & FSx for Lustre (fully managed high-performance file system integrated with S3). FSx_Windows FSx_Lustre
AWS Storage Gateway
Storage Gateway is a service which connects an on-premises software appliance with cloud-based storage. Storage_Gateway
AWS DataSync
DataSync makes it simple & fast to move large amounts of data online between on-premises storage & S3, EFS, or FSx for Windows File Server. DataSync
AWS Transfer Family
The Transfer Family provides fully managed support for file transfers directly into & out of S3. Transfer_Family
AWS Snow Family
Highly-secure, portable devices to collect & process data at the edge, and migrate data into and out of AWS. Snow_Family
Classification:
Object storage: S3
File storage services: Elastic File System (EFS), FSx for Windows File Server, FSx for Lustre
Block storage: EBS
Backup: AWS Backup
Data transfer: Storage Gateway (3 types: Tape, File, Volume); Transfer Family (SFTP, FTPS, FTP)
Edge computing and storage: Snow Family (Snowcone, Snowball, Snowmobile)
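As a small illustration of the object-storage and archive classes above, the following boto3 sketch uploads one object to the default S3 storage class and another directly to the Glacier storage class; the bucket and key names are assumptions.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and keys; the bucket must already exist in your account.
bucket = "example-reports-bucket"

# Standard upload: frequently accessed object data.
s3.put_object(Bucket=bucket, Key="reports/latest.csv", Body=b"id,total\n1,42\n")

# Archive upload: write directly into the Glacier storage class for long-term backup.
s3.put_object(
    Bucket=bucket,
    Key="archive/2020/report.csv",
    Body=b"id,total\n1,42\n",
    StorageClass="GLACIER",
)

# Retrieve the standard object.
obj = s3.get_object(Bucket=bucket, Key="reports/latest.csv")
print(obj["Body"].read().decode())
```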
Databases
Database type
Use cases
Service
Description
Relational
Traditional applications, ERP, CRM, e-commerce
Aurora, RDS, Redshift
RDS is a web service that makes it easier to set up, control, and scale a relational database in the cloud. Aurora RDS Redshift
Key-value
High-traffic web apps, e-commerce systems, gaming applications
DynamoDB
DynamoDB is a fully administered NoSQL database service that offers quick and reliable performance with integrated scalability. DynamoDB
In-memory
Caching, session management, gaming leaderboards, geospatial applications
ElastiCache
ElastiCache helps in setting up, managing, and scaling in-memory cache environments. Memcached Redis
Document
Content management, catalogs, user profiles
DocumentDB
DocumentDB (with MongoDB compatibility) is a quick, dependable, and fully-managed database service that makes it easy for you to set up, operate, and scale MongoDB-compatible databases.DocumentDB
Wide column
High scale industrial apps for equipment maintenance, fleet management, and route optimization
Keyspaces (for Apache Cassandra)
Keyspaces is a scalable, highly available, and managed Apache Cassandra–compatible database service. Keyspaces
Graph
Fraud detection, social networking, recommendation engines
Neptune
Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. Neptune
Time series
IoT applications, DevOps, industrial telemetry
Timestream
Timestream is a fast, scalable, and serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day. Timestream
Ledger
Systems of record, supply chain, registrations, banking transactions
Quantum Ledger Database (QLDB)
QLDB is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log owned by a central trusted authority. QLDB
Developer Tools
Service
Description
Cloud9
Cloud9 is a cloud-based IDE that enables the user to write, run, and debug code. Cloud9
CodeArtifact
CodeArtifact is a fully managed artifact repository service that makes it easy for organizations of any size to securely store, publish, & share software packages used in their software development process. CodeArtifact
CodeBuild
CodeBuild is a fully managed service that assembles source code, runs unit tests, & also generates artefacts ready to deploy. CodeBuild
CodeGuru
CodeGuru is a developer tool powered by machine learning that provides intelligent recommendations for improving code quality & identifying an application’s most expensive lines of code. CodeGuru
Cloud Development Kit
Cloud Development Kit (AWS CDK) is an open source software development framework to define cloud application resources using familiar programming languages. CDK
CodeCommit
CodeCommit is a version control service that enables the user to personally store & manage Git archives in the AWS cloud. CodeCommit
CodeDeploy
CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as EC2, Fargate, Lambda, & on-premises servers. CodeDeploy
CodePipeline
CodePipeline is a fully managed continuous delivery service that helps automate release pipelines for fast & reliable app & infra updates. CodePipeline
CodeStar
CodeStar enables you to quickly develop, build, & deploy applications on AWS. CodeStar
CLI
AWS CLI is a unified tool to manage AWS services & control multiple services from the command line & automate them through scripts. CLI
X-Ray
X-Ray helps developers analyze & debug production, distributed applications, such as those built using a microservices architecture. X-Ray
Corretto
Corretto is a no-cost, multiplatform, production-ready distribution of the OpenJDK. Corretto
Crypto Tools
Cryptography is hard to do safely & correctly. The AWS Crypto Tools libraries are designed to help everyone do cryptography right, even without special expertise. Crypto Tools
Serverless Application Model (SAM)
SAM is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, & event source mappings. SAM
Tools for developing and managing applications on AWS
Security, Identity, & Compliance
Category
Use cases
Service
Description
Identity & access management
Securely manage access to services and resources
Identity & Access Management (IAM)
IAM is a web service for securely controlling access to AWS services & resources. IAM
Securely manage access to services and resources
Single Sign-On
SSO helps you centrally manage SSO access to multiple AWS accounts & business applications. SSO
Identity management for apps
Cognito
Cognito lets you add user sign-up, sign-in, & access control to web & mobile apps quickly and easily. Cognito
Managed Microsoft Active Directory
Directory Service
AWS Managed Microsoft Active Directory (AD) enables your directory-aware workloads & AWS resources to use managed Active Directory (AD) in AWS. DirectoryService
Simple, secure service to share AWS resources
Resource Access Manager
Resource Access Manager (RAM) is a service that enables you to easily & securely share AWS resources with any AWS account or within your AWS Organization. RAM
Central governance and management across AWS accounts
Organizations
Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS. Orgs
Detection
Unified security and compliance center
Security Hub
Security Hub gives a comprehensive view of security alerts & security posture across AWS accounts. SecurityHub
Managed threat detection service
GuardDuty
GuardDuty is a threat detection service that continuously monitors for malicious activity & unauthorized behavior to protect AWS accounts, workloads, & data stored in S3. GuardDuty
Analyze application security
Inspector
Inspector is an automated security vulnerability assessment service that helps improve the security & compliance of your AWS resources. Inspector
Record and evaluate configurations of your AWS resources
Config
Config is a service that enables you to assess, audit, & evaluate the configurations of your AWS resources. Config
Track user activity and API usage
CloudTrail
CloudTrail is a service that enables governance, compliance, operational auditing, & risk auditing of your AWS account. CloudTrail
Security management for IoT devices
IoT Device Defender
IoT Device Defender is a fully managed service that helps you secure your fleet of IoT devices. IoTDD
Infrastructure protection
DDoS protection
Shield
Shield is a managed DDoS protection service that safeguards applications running on AWS. It provides always-on detection & automatic inline mitigations that minimize application downtime & latency. Shield
Filter malicious web traffic
Web Application Firewall (WAF)
WAF is a web application firewall that helps protect web apps or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. WAF
Central management of firewall rules
Firewall Manager
Firewall Manager simplifies AWS WAF administration & maintenance tasks across multiple accounts & resources. FirewallManager
Data protection
Discover and protect your sensitive data at scale
Macie
Macie is a fully managed data (security & privacy) service that uses ML & pattern matching to discover & protect sensitive data. Macie
Key storage and management
Key Management Service (KMS)
KMS makes it easy to create & manage cryptographic keys & control their use across a wide range of AWS services & in your applications. KMS
Hardware based key storage for regulatory compliance
CloudHSM
CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate & use your own encryption keys. CloudHSM
Provision, manage, and deploy public and private SSL/TLS certificates
Certificate Manager
Certificate Manager is a service that lets you easily provision, manage, & deploy public & private SSL/TLS certificates for use with AWS services & internal connected resources. ACM
Rotate, manage, and retrieve secrets
Secrets Manager
Secrets Manager helps you securely encrypt, store, & retrieve credentials for your databases & other services (see the sketch at the end of this section). SecretsManager
Incident response
Investigate potential security issues
Detective
Detective makes it easy to analyze, investigate, & quickly identify the root cause of potential security issues or suspicious activities. Detective
Scalable, cost-effective business continuity for physical, virtual, & cloud servers
CloudEndure Disaster Recovery
CloudEndure Disaster Recovery provides scalable, cost-effective business continuity for physical, virtual, & cloud servers. CloudEndure
Compliance
No cost, self-service portal for on-demand access to AWS’ compliance reports
Artifact
Artifact is a web service that enables the user to download AWS security & compliance records. Artifact
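The sketch below (referenced from the Secrets Manager entry above) shows roughly how an application might pull a credential from Secrets Manager and encrypt a small payload with a KMS key using boto3; the secret name and key alias are assumptions.

```python
import boto3

secrets = boto3.client("secretsmanager")
kms = boto3.client("kms")

# Fetch database credentials stored under a hypothetical secret name.
secret = secrets.get_secret_value(SecretId="prod/app/db-credentials")
credentials = secret["SecretString"]  # the JSON string as stored, e.g. {"username": ..., "password": ...}

# Encrypt and decrypt a small payload with a KMS key (the alias is an assumption;
# create your own key and alias first).
ciphertext = kms.encrypt(KeyId="alias/app-data", Plaintext=b"sensitive value")["CiphertextBlob"]
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
print(plaintext)
```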
Data Lakes & Analytics
Category
Use cases
Service
Description
Analytics
Interactive analytics
Athena
Athena is an interactive query service that makes it easy to analyze data in S3 using standard SQL. Athena
Big data processing
EMR
EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Apache Spark, Hive, HBase, Flink, Hudi, & Presto. EMR
Data warehousing
Redshift
Redshift is a fast, fully managed, petabyte-scale cloud data warehouse. Redshift
Real-time analytics
Kinesis
Kinesis makes it easy to collect, process, & analyze real-time, streaming data so one can get timely insights. Kinesis
Operational analytics
Elasticsearch Service
Elasticsearch Service is a fully managed service that makes it easy to deploy, secure, & run Elasticsearch cost effectively at scale. ES
Dashboards & visualizations
Quicksight
QuickSight is a fast, cloud-powered business intelligence service that makes it easy to deliver insights to everyone in your organization. QuickSight
Data movement
Real-time data movement
1) Amazon Managed Streaming for Apache Kafka (MSK) 2) Kinesis Data Streams 3) Kinesis Data Firehose 4) Kinesis Data Analytics 5) Kinesis Video Streams 6) Glue
MSK is a fully managed service that makes it easy to build & run applications that use Apache Kafka to process streaming data (a Kinesis Data Streams sketch follows at the end of this section). MSK KDS KDF KDA KVS Glue
Data lake
Object storage
1) S3 2) Lake Formation
Lake Formation is a service that makes it easy to set up a secure data lake in days. A data lake is a centralized, curated, & secured repository that stores all data, both in its original form & prepared for analysis. S3LakeFormation
Backup & archive
1) S3 Glacier 2) Backup
S3 Glacier & S3 Glacier Deep Archive are secure, durable, & extremely low-cost S3 cloud storage classes for data archiving & long-term backup. S3Glacier
Data catalog
1) Glue 2) Lake Formation
Refer to the Glue & Lake Formation descriptions above.
Third-party data
Data Exchange
Data Exchange makes it easy to find, subscribe to, & use third-party data in the cloud. DataExchange
Predictive analytics & machine learning
Frameworks & interfaces
Deep Learning AMIs
Deep Learning AMIs provide machine learning practitioners & researchers with the infrastructure & tools to accelerate deep learning in the cloud, at any scale. DeepLearningAMIs
Platform services
SageMaker
SageMaker is a fully managed service that provides every developer & data scientist with the ability to build, train, & deploy machine learning (ML) models quickly. SageMaker
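To illustrate the Kinesis entries above (real-time analytics and data movement), here is a minimal boto3 producer sketch for Kinesis Data Streams; the stream name is an assumption and must exist before records are sent.

```python
import boto3
import datetime
import json

kinesis = boto3.client("kinesis")

# Hypothetical telemetry event to send into a hypothetical stream.
event = {
    "device_id": "sensor-17",
    "temperature": 21.4,
    "at": datetime.datetime.utcnow().isoformat(),
}

kinesis.put_record(
    StreamName="telemetry-stream",              # assumed, pre-created stream
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["device_id"],             # records with the same key land on the same shard
)
```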
Containers
Use cases
Service
Description
Store, encrypt, and manage container images
ECR
Refer to the compute section.
Run containerized applications or build microservices
ECS
Refer to the compute section.
Manage containers with Kubernetes
EKS
Refer to the compute section.
Run containers without managing servers
Fargate
Fargate is a serverless compute engine for containers that works with both ECS & EKS (see the sketch after this table). Fargate
Run containers with server-level control
EC2
Refer to the compute section.
Containerize and migrate existing applications
App2Container
App2Container (A2C) is a command-line tool for modernizing .NET & Java applications into containerized applications. App2Container
Quickly launch and manage containerized applications
Copilot
Copilot is a command line interface (CLI) that enables customers to quickly launch & easily manage containerized applications on AWS. Copilot
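As referenced in the Fargate entry above, the sketch below launches a single Fargate task with boto3. The cluster name, task definition, subnet, and security group IDs are all assumptions; the task definition must use the awsvpc network mode.

```python
import boto3

ecs = boto3.client("ecs")

# All names below are assumptions: the cluster, task definition, subnet, and
# security group must already exist in your account.
response = ecs.run_task(
    cluster="demo-cluster",
    launchType="FARGATE",
    taskDefinition="web-app:1",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])
```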
Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance & reduces latency.
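A minimal sketch of what a Python Lambda@Edge handler might look like for a CloudFront viewer-response trigger, adding a security header at the edge; the header choice is purely illustrative.

```python
# Viewer-response handler: CloudFront invokes it at the edge and the function
# adds a security header before the response is returned to the viewer.
def handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    headers = response["headers"]

    # Header names are lower-cased keys; each value is a list of {"key", "value"} pairs.
    headers["strict-transport-security"] = [
        {"key": "Strict-Transport-Security", "value": "max-age=63072000; includeSubDomains"}
    ]
    return response
```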
Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL & PostgreSQL-compatible editions), where the database will automatically start up, shut down, & scale capacity up or down based on your application’s needs.
RDS Proxy is a fully managed, highly available database proxy for RDS that makes applications more scalable, resilient to database failures, & more secure.
AppSync is a fully managed service that makes it easy to develop GraphQL APIs by handling the heavy lifting of securely connecting to data sources such as DynamoDB & Lambda.
EventBridge is a serverless event bus that makes it easy to connect applications together using data from apps, integrated SaaS apps, & AWS services.
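A minimal boto3 sketch of publishing a custom event to EventBridge; the source, detail-type, and payload are assumptions that a rule on the bus would be configured to match.

```python
import boto3
import json

events = boto3.client("events")

# Publish a custom application event onto the default bus; rules on the bus
# route matching events to targets such as Lambda functions or queues.
events.put_events(
    Entries=[
        {
            "EventBusName": "default",
            "Source": "app.orders",              # assumed custom source
            "DetailType": "OrderPlaced",         # assumed detail type
            "Detail": json.dumps({"order_id": "1234", "total": 42.50}),
        }
    ]
)
```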
Step Functions is a serverless function orchestrator that makes it easy to sequence Lambda functions & multiple AWS services into business-critical applications.
Control Tower
Control Tower is the easiest way to set up and govern a new, secure multi-account AWS environment. ControlTower
Organizations
Organizations helps you centrally govern your environment as you grow & scale your workloads on AWS. Organizations
Well-Architected Tool
Well-Architected Tool helps review the state of workloads & compares them to the latest AWS architectural best practices. WATool
Budgets
Budgets allows you to set custom budgets to track cost & usage, from the simplest to the most complex use cases. Budgets
License Manager
License Manager makes it easier to manage software licenses from software vendors such as Microsoft, SAP, Oracle, & IBM across AWS & on-premises environments. LicenseManager
Provision
CloudFormation
CloudFormation enables the user to design & provision AWS infrastructure deployments predictably & repeatedly. CloudFormation
Service Catalog
Service Catalog allows organizations to create & manage catalogs of IT services that are approved for use on AWS. ServiceCatalog
OpsWorks
OpsWorks presents a simple and flexible way to create and maintain stacks and applications. OpsWorks
Marketplace
Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, & deploy software that runs on AWS. Marketplace
Operate
CloudWatch
CloudWatch offers a reliable, scalable, & flexible monitoring solution that is easy to get started with. CloudWatch
CloudTrail
CloudTrail is a service that enables governance, compliance, operational auditing, & risk auditing of your AWS account. CloudTrail
Read For Me launched at the 2021 AWS re:Invent Builders’ Fair in Las Vegas. It is a web application that helps the visually impaired “hear” documents. With the help of AI services such as Amazon Textract, Amazon Comprehend, Amazon Translate, and Amazon Polly, and using an event-driven, serverless architecture, users upload a picture of a document, or anything with text, and within a few seconds “hear” that document in their chosen language.
AWS read for me
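The Read For Me source isn’t reproduced here; the sketch below simply chains the same managed AI services in the order the description suggests (Textract to extract text, Translate to change the language, Polly to synthesize speech), skipping the Comprehend step and using hypothetical file names.

```python
import boto3

textract = boto3.client("textract")
translate = boto3.client("translate")
polly = boto3.client("polly")

# 1) Extract text from an uploaded image of a document (hypothetical local file).
with open("document.jpg", "rb") as f:
    blocks = textract.detect_document_text(Document={"Bytes": f.read()})["Blocks"]
text = " ".join(b["Text"] for b in blocks if b["BlockType"] == "LINE")

# 2) Translate the extracted text into the listener's chosen language (Spanish here).
translated = translate.translate_text(
    Text=text, SourceLanguageCode="auto", TargetLanguageCode="es"
)["TranslatedText"]

# 3) Synthesize speech and save the audio so it can be played back.
audio = polly.synthesize_speech(Text=translated, OutputFormat="mp3", VoiceId="Lupe")
with open("document.mp3", "wb") as f:
    f.write(audio["AudioStream"].read())
```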
2- Delivering code and architectures through AWS Proton and Git
Infrastructure operators are looking for ways to centrally define and manage the architecture of their services, while developers need to find a way to quickly and safely deploy their code. In this session, learn how to use AWS Proton to define architectural templates and make them available to development teams in a collaborative manner. Also, learn how to enable development teams to customize their templates so that they fit the needs of their services.
3- Accelerate front-end web and mobile development with AWS Amplify
User-facing web and mobile applications are the primary touchpoint between organizations and their customers. To meet the ever-rising bar for customer experience, developers must deliver high-quality apps with both foundational and differentiating features. AWS Amplify helps front-end web and mobile developers build faster front to back. In this session, review Amplify’s core capabilities like authentication, data, and file storage and explore new capabilities, such as Amplify Geo and extensibility features for easier app customization with AWS services and better integration with existing deployment pipelines. Also learn how customers have been successful using Amplify to innovate in their businesses.
3- Train ML models at scale with Amazon SageMaker, featuring Aurora
Today, AWS customers use Amazon SageMaker to train and tune millions of machine learning (ML) models with billions of parameters. In this session, learn about advanced SageMaker capabilities that can help you manage large-scale model training and tuning, such as distributed training, automatic model tuning, optimizations for deep learning algorithms, debugging, profiling, and model checkpointing, so that even the largest ML models can be trained in record time for the lowest cost. Then, hear from Aurora, a self-driving vehicle technology company, on how they use SageMaker training capabilities to train large perception models for autonomous driving using massive amounts of images, video, and 3D point cloud data.
AWS RE:INVENT 2020 – LATEST PRODUCTS AND SERVICES ANNOUNCED:
Amazon Elasticsearch Service is uniquely positioned to handle log analytics workloads. With a multitude of open-source and AWS-native service options, users can assemble effective log data ingestion pipelines and couple these with Amazon Elasticsearch Service to build a robust, cost-effective log analytics solution. This session reviews patterns and frameworks leveraged by companies such as Capital One to build an end-to-end log analytics solution using Amazon Elasticsearch Service.
Many companies in regulated industries have achieved compliance requirements using AWS Config. They also need a record of the incidents generated by AWS Config in tools such as ServiceNow for audits and remediation. In this session, learn how you can achieve compliance as code using AWS Config. Through the creation of a noncompliant Amazon EC2 machine, this demo shows how AWS Config triggers an incident into a governance, risk, and compliance system for audit recording and remediation. The session also covers best practices for how to automate the setup process with AWS CloudFormation to support many teams.
3- Cost-optimize your enterprise workloads with Amazon EBS – Compute
Recent times have underscored the need to enable agility while maintaining the lowest total cost of ownership (TCO). In this session, learn about the latest volume types that further optimize your performance and cost, while enabling you to run newer applications on AWS with high availability. Dive deep into the latest AWS volume launches and cost-optimization strategies for workloads such as databases, virtual desktop infrastructure, and low-latency interactive applications.
Location data is a vital ingredient in today’s applications, enabling use cases from asset tracking to geomarketing. Now, developers can use the new Amazon Location Service to add maps, tracking, places, geocoding, and geofences to applications, easily, securely, and affordably. Join this session to see how to get started with the service and integrate high-quality location data from geospatial data providers Esri and HERE. Learn how to move from experimentation to production quickly with location capabilities. This session can help developers who require simple location data and those building sophisticated asset tracking, customer engagement, fleet management, and delivery applications.
In this session, learn how Amazon Connect Tasks makes it easy for you to prioritize, assign, and track all the tasks that agents need to complete, including work in external applications needed to resolve customer issues (such as emails, cases, and social posts). Tasks provides a single place for agents to be assigned calls, chats, and tasks, ensuring agents are focused on the highest-priority work. Also, learn how you can also use Tasks with Amazon Connect’s workflow capabilities to automate task-related actions that don’t require agent interaction. Come see how you can use Amazon Connect Tasks to increase customer satisfaction while improving agent productivity.
New agent-assist capabilities from Amazon Connect Wisdom make it easier and faster for agents to find the information they need to solve customer issues in real time. In this session, see how agents can use simple ML-powered search to find information stored across knowledge bases, wikis, and FAQs, like Salesforce and ServiceNow. Join the session to hear Traeger Pellet Grills discuss how it’s using these new features, along with Contact Lens for Amazon Connect, to deliver real-time recommendations to agents based on issues automatically detected during calls.
Grafana is a popular, open-source data visualization tool that enables you to centrally query and analyze observability data across multiple data sources. Learn how the new Amazon Managed Service for Grafana, announced with Grafana’s parent company Grafana Labs, solves common observability challenges. With the new fully managed service, you can monitor, analyze, and alarm on metrics, logs, and traces while offloading the operational management of security patching, upgrading, and resource scaling to AWS. This session also covers new Grafana capabilities such as advanced security features and native AWS service integrations to simplify configuration and onboarding of data sources.
Prometheus is a popular open-source monitoring and alerting solution optimized for container environments. Customers love Prometheus for its active open-source community and flexible query language, using it to monitor containers across AWS and on-premises environments. Amazon Managed Service for Prometheus is a fully managed Prometheus-compatible monitoring service. In this session, learn how you can use the same open-source Prometheus data model, existing instrumentation, and query language to monitor performance with improved scalability, availability, and security without having to manage the underlying infrastructure.
Today, enterprises use low-power, long-range wide-area network (LoRaWAN) connectivity to transmit data over long ranges, through walls and floors of buildings, and in commercial and industrial use cases. However, this requires companies to operate their own LoRa network server (LNS). In this session, learn how you can use LoRaWAN for AWS IoT Core to avoid time-consuming and undifferentiated development work, operational overhead of managing infrastructure, or commitment to costly subscription-based pricing from third-party service providers.
10-AWS CloudShell: The fastest way to get started with AWS CLI
AWS CloudShell is a free, browser-based shell available from the AWS console that provides a simple way to interact with AWS resources through the AWS command-line interface (CLI). In this session, see an overview of both AWS CloudShell and the AWS CLI, which when used together are the fastest and easiest ways to automate tasks, write scripts, and explore new AWS services. Also, see a demo of both services and how to quickly and easily get started with each.
Industrial organizations use AWS IoT SiteWise to liberate their industrial equipment data in order to make data-driven decisions. Now with AWS IoT SiteWise Edge, you can collect, organize, process, and monitor your equipment data on premises before sending it to local or AWS Cloud destinations—all while using the same asset models, APIs, and functionality. Learn how you can extend the capabilities of AWS IoT SiteWise to the edge with AWS IoT SiteWise Edge.
AWS Fault Injection Simulator is a fully managed chaos engineering service that helps you improve application resiliency by making it easy and safe to perform controlled chaos engineering experiments on AWS. In this session, see an overview of chaos engineering and AWS Fault Injection Simulator, and then see a demo of how to use AWS Fault Injection Simulator to make applications more resilient to failure.
Organizations are breaking down data silos and building petabyte-scale data lakes on AWS to democratize access to thousands of end users. Since its launch, AWS Lake Formation has accelerated data lake adoption by making it easy to build and secure data lakes. In this session, AWS Lake Formation GM Mehul A. Shah showcases recent innovations enabling modern data lake use cases. He also introduces a new capability of AWS Lake Formation that enables fine-grained, row-level security and near-real-time analytics in data lakes.
Machine learning (ML) models may generate predictions that are not fair, whether because of biased data, a model that contains bias, or bias that emerges over time as real-world conditions change. Likewise, closed-box ML models are opaque, making it difficult to explain to internal stakeholders, auditors, external regulators, and customers alike why models make predictions both overall and for individual inferences. In this session, learn how Amazon SageMaker Clarify is providing built-in tools to detect bias across the ML workflow including during data prep, after training, and over time in your deployed model.
Amazon EMR on Amazon EKS introduces a new deployment option in Amazon EMR that allows you to run open-source big data frameworks on Amazon EKS. This session digs into the technical details of Amazon EMR on Amazon EKS, helps you understand benefits for customers using Amazon EMR or running open-source Spark on Amazon EKS, and discusses performance considerations.
Finding unexpected anomalies in metrics can be challenging. Some organizations look for data that falls outside of arbitrary ranges; if the range is too narrow, they miss important alerts, and if it is too broad, they receive too many false alerts. In this session, learn about Amazon Lookout for Metrics, a fully managed anomaly detection service that is powered by machine learning and over 20 years of anomaly detection expertise at Amazon to quickly help organizations detect anomalies and understand what caused them. This session guides you through setting up your own solution to monitor for anomalies and showcases how to deliver notifications via various integrations with the service.
17- Improve application availability with ML-powered insights using Amazon DevOps Guru
As applications become increasingly distributed and complex, developers and IT operations teams need more automated practices to maintain application availability and reduce the time and effort spent detecting, debugging, and resolving operational issues manually. In this session, discover Amazon DevOps Guru, an ML-powered cloud operations service, informed by years of Amazon.com and AWS operational excellence, that provides an easy and automated way to improve an application’s operational performance and availability. See how you can transform your IT operations and reduce mean time to recovery (MTTR) with contextual insights.
Amazon Connect Voice ID provides real-time caller authentication that makes voice interactions in contact centers more secure and efficient. Voice ID uses machine learning to verify the identity of genuine customers by analyzing a caller’s unique voice characteristics. This allows contact centers to use an additional security layer that doesn’t rely on the caller answering multiple security questions, and it makes it easy to enroll and verify customers without disrupting the natural flow of the conversation. Join this session to see how fast and secure ML-based voice authentication can power your contact center.
G4ad instances feature the latest AMD Radeon Pro V520 GPUs and second-generation AMD EPYC processors. These new instances deliver the best price performance in Amazon EC2 for graphics-intensive applications such as virtual workstations, game streaming, and graphics rendering. This session dives deep into these instances, ideal use cases, and performance benchmarks, and it provides a demo.
Amazon ECS Anywhere is a new capability that enables deployment of Amazon ECS tasks on customer-managed infrastructure. This session covers the evolution of Amazon ECS over time, including new on-premises capabilities to manage your hybrid footprint using a common fully managed control plane and API. You learn some foundational technical details and important tenets that AWS is using to design these capabilities, and the session ends with a short demo of Amazon ECS Anywhere.
Amazon Aurora Serverless is an on-demand, auto scaling configuration of Amazon Aurora that automatically adjusts database capacity based on application demand. With Amazon Aurora Serverless v2, you can now scale database workloads instantly from hundreds to hundreds of thousands of transactions per second and adjust capacity in fine-grained increments to provide just the right amount of database resources. This session dives deep into Aurora Serverless v2 and shows how it can help you operate even the most demanding database workloads worry-free.
Apple delights its customers with stunning devices like iPhones, iPads, MacBooks, Apple Watches, and Apple TVs, and developers want to create applications that run on iOS, macOS, iPadOS, tvOS, watchOS, and Safari. In this session, learn how Amazon is innovating to improve the development experience for Apple applications. Come learn how AWS now enables you to develop, build, test, and sign Apple applications with the flexibility, scalability, reliability, and cost benefits of Amazon EC2.
When industrial equipment breaks down, this means costly downtime. To avoid this, you perform maintenance at regular intervals, which is inefficient and increases your maintenance costs. Predictive maintenance allows you to plan the required repair at an optimal time before a breakdown occurs. However, predictive maintenance solutions can be challenging and costly to implement given the high costs and complexity of sensors and infrastructure. You also have to deal with the challenges of interpreting sensor data and accurately detecting faults in order to send alerts. Come learn how Amazon Monitron helps you solve these challenges by offering an out-of-the-box, end-to-end, cost-effective system.
As data grows, we need innovative approaches to get insight from all the information at scale and speed. AQUA is a new hardware-accelerated cache that uses purpose-built analytics processors to deliver up to 10 times better query performance than other cloud data warehouses by automatically boosting certain types of queries. It’s available in preview on Amazon Redshift RA3 nodes in select regions at no extra cost and without any code changes. Attend this session to understand how AQUA works and which analytic workloads will benefit the most from AQUA.
Figuring out if a part has been manufactured correctly, or if a machine part is damaged, is vitally important. Making this determination usually requires people to inspect objects, which can be slow and error-prone. Some companies have applied automated image analysis (machine vision) to detect anomalies. While useful, these systems can be very difficult and expensive to maintain. In this session, learn how Amazon Lookout for Vision can automate visual inspection across your production lines in a few days. Get started in minutes, and perform visual inspection and identify product defects using as few as 30 images, with no machine learning (ML) expertise required.
AWS Proton is a new service that enables infrastructure operators to create and manage common container-based and serverless application stacks and automate provisioning and code deployments through a self-service interface for their developers. Learn how infrastructure teams can empower their developers to use serverless and container technologies without them first having to learn, configure, and maintain the underlying resources.
Migrating applications from SQL Server to an open-source compatible database can be time-consuming and resource-intensive. Solutions such as the AWS Database Migration Service (AWS DMS) automate data and database schema migration, but there is often more work to do to migrate application code. This session introduces Babelfish for Aurora PostgreSQL, a new translation layer for Amazon Aurora PostgreSQL that enables Amazon Aurora to understand commands from applications designed to run on Microsoft SQL Server. Learn how Babelfish for Aurora PostgreSQL works to reduce the time, risk, and effort of migrating Microsoft SQL Server-based applications to Aurora, and see some of the capabilities that make this possible.
Over the past decade, we’ve witnessed a digital transformation in healthcare, with organizations capturing huge volumes of patient information. But this data is often unstructured and difficult to extract, with information trapped in clinical notes, insurance claims, recorded conversations, and more. In this session, explore how the new Amazon HealthLake service removes the heavy lifting of organizing, indexing, and structuring patient information to provide a complete view of each patient’s health record in the FHIR standard format. Come learn how to use prebuilt machine learning models to analyze and understand relationships in the data, identify trends, and make predictions, ultimately delivering better care for patients.
When business users want to ask new data questions that are not answered by existing business intelligence (BI) dashboards, they rely on BI teams to create or update data models and dashboards, which can take several weeks to complete. In this session, learn how Merlin lets users simply enter their questions on the Merlin search bar and get answers in seconds. Merlin uses natural language processing and semantic data understanding to make sense of the data. It extracts business terminologies and intent from users’ questions, retrieves the corresponding data from the source, and returns the answer in the form of a number, chart, or table in Amazon QuickSight.
When developers publish images publicly for anyone to find and use—whether for free or under license—they must make copies of common images and upload them to public websites and registries that do not offer the same availability commitment as Amazon ECR. This session explores a new Amazon public registry, Amazon ECR Public, built with AWS experience operating Amazon ECR. Here, developers can share georeplicated container software worldwide for anyone to discover and download. Developers can quickly publish public container images with a single command. Learn how anyone can browse and pull container software for use in their own applications.
Industrial companies are constantly working to avoid unplanned downtime due to equipment failure and to improve operational efficiency. Over the years, they have invested in physical sensors, data connectivity, data storage, and dashboarding to monitor equipment and get real-time alerts. Current data analytics methods include single-variable thresholds and physics-based modeling approaches, which are not effective at detecting certain failure types and operating conditions. In this session, learn how Amazon Lookout for Equipment uses data from your sensors to detect abnormal equipment behavior so that you can take action before machine failures occur and avoid unplanned downtime.
In this session, learn how Contact Lens for Amazon Connect enables your contact center supervisors to understand the sentiment of customer conversations, identify call drivers, evaluate compliance with company guidelines, and analyze trends. This can help supervisors train agents, replicate successful interactions, and identify crucial company and product feedback. Your supervisors can conduct fast full-text search on all transcripts to quickly troubleshoot customer issues. With real-time capabilities, you can get alerted to issues during live customer calls and deliver proactive assistance to agents while calls are in progress, improving customer satisfaction. Join this session to see how real-time ML-powered analytics can power your contact center.
AWS Local Zones places compute, storage, database, and other select services closer to locations where no AWS Region exists today. Last year, AWS launched the first two Local Zones in Los Angeles, and organizations are using Local Zones to deliver applications requiring ultra-low-latency compute. AWS is launching Local Zones in 15 metro areas to extend access across the contiguous US. In this session, learn how you can run latency-sensitive portions of applications local to end users and resources in a specific geography, delivering single-digit millisecond latency for use cases such as media and entertainment content creation, real-time gaming, reservoir simulations, electronic design automation, and machine learning.
Your customers expect a fast, frictionless, and personalized customer service experience. In this session, learn about Amazon Connect Customer Profiles—a new unified customer profile capability to allow agents to provide more personalized service during a call. Customer Profiles automatically brings together customer information from multiple applications, such as Salesforce, Marketo, Zendesk, ServiceNow, and Amazon Connect contact history, into a unified customer profile. With Customer Profiles, agents have the information they need, when they need it, directly in their agent application, resulting in improved customer satisfaction and reduced call resolution times (by up to 15%).
Preparing training data can be tedious. Amazon SageMaker Data Wrangler provides a faster, visual way to aggregate and prepare data for machine learning. In this session, learn how to use SageMaker Data Wrangler to connect to data sources and use prebuilt visualization templates and built-in data transforms to streamline the process of cleaning, verifying, and exploring data without having to write a single line of code. See a demonstration of how SageMaker Data Wrangler can be used to perform simple tasks as well as more advanced use cases. Finally, see how you can take your data preparation workflows into production with a single click.
To provide access to critical resources when needed and also limit the potential financial impact of an application outage, a highly available application design is critical. In this session, learn how you can use Amazon CloudWatch and AWS X-Ray to increase the availability of your applications. Join this session to learn how AWS observability solutions can help you proactively detect, efficiently investigate, and quickly resolve operational issues. All of which help you manage and improve your application’s availability.
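One hedged example of the kind of plumbing this session describes: publishing a custom availability metric to CloudWatch and alarming on it with boto3. The namespace, dimension, and thresholds are assumptions, and the X-Ray side is not shown.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a custom health-check metric (namespace and dimension values are assumptions).
cloudwatch.put_metric_data(
    Namespace="MyApp/Availability",
    MetricData=[
        {
            "MetricName": "HealthCheckFailures",
            "Dimensions": [{"Name": "Service", "Value": "checkout"}],
            "Value": 1,
            "Unit": "Count",
        }
    ],
)

# Alarm when failures keep occurring, so an operator (or automation) can respond.
cloudwatch.put_metric_alarm(
    AlarmName="checkout-health-check-failures",
    Namespace="MyApp/Availability",
    MetricName="HealthCheckFailures",
    Dimensions=[{"Name": "Service", "Value": "checkout"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=5,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
)
```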
Security is critical for your Kubernetes-based applications. Join this session to learn about the security features and best practices for Amazon EKS. This session covers encryption and other configurations and policies to keep your containers safe.
Don’t miss the AWS Partner Keynote with Doug Yeum, head of Global Partner Organization; Sandy Carter, vice president, Global Public Sector Partners and Programs; and Dave McCann, vice president, AWS Migration, Marketplace, and Control Services, to learn how AWS is helping partners modernize their businesses to help their customers transform.
Join Swami Sivasubramanian for the first-ever Machine Learning Keynote, live at re:Invent. Hear how AWS is freeing builders to innovate on machine learning with the latest developments in AWS machine learning, demos of new technology, and insights from customers.
Join Peter DeSantis, senior vice president of Global Infrastructure and Customer Support, to learn how AWS has optimized its cloud infrastructure to run some of the world’s most demanding workloads and give your business a competitive edge.
Join Dr. Werner Vogels at 8:00AM (PST) as he goes behind the scenes to show how Amazon is solving today’s hardest technology problems. Based on his experience working with some of the largest and most successful applications in the world, Dr. Vogels shares his insights on building truly resilient architectures and what that means for the future of software development.
Cloud architecture has evolved over the years as the nature of adoption has changed and the level of maturity in our thinking continues to develop. In this session, Rudy Valdez, VP of Solutions Architecture and Training & Certification, walks through this evolution.
Organizations around the world are minimizing operations and maximizing agility by developing with serverless building blocks. Join David Richardson, VP of Serverless, for a closer look at the serverless programming model, including event-driven architectures.
AWS edge computing solutions provide infrastructure and software that move data processing and analysis as close to the endpoint where data is generated as required by customers. In this session, learn about new edge computing capabilities announced at re:Invent and how customers are using purpose-built edge solutions to extend the cloud to the edge.
Topics on simplifying container deployment, legacy workload migration using containers, optimizing costs for containerized applications, container architectural choices, and more.
Do you need to know what’s happening with your applications that run on Amazon EKS? In this session, learn how you can combine open-source tools, such as Prometheus and Grafana, with Amazon CloudWatch using CloudWatch Container Insights. Come to this session for a demo of Prometheus metrics with Container Insights.
The hard part is done. You and your team have spent weeks poring over pull requests, building microservices and containerizing them. Congrats! But what do you do now? How do you get those services on AWS? How do you manage multiple environments? How do you automate deployments? AWS Copilot is a new command line tool that makes building, developing, and operating containerized applications on AWS a breeze. In this session, learn how AWS Copilot can help you and your team manage your services and deploy them to production, safely and delightfully.
Five years ago, if you talked about containers, the assumption was that you were running them on a Linux VM. Fast forward to today, and now that assumption is challenged—in a good way. Come to this session to explore the best data plane option to meet your needs. This session covers the advantages of different abstraction models (Amazon EC2 or AWS Fargate), the operating system (Linux or Windows), the CPU architecture (x86 or Arm), and the commercial model (Spot or On-Demand Instances.)
In this session, learn how the Commonwealth Bank of Australia (CommBank) built a platform to run containerized applications in a regulated environment and then replicated it across multiple departments using Amazon EKS, AWS CDK, and GitOps. This session covers how to manage multiple multi-team Amazon EKS clusters across multiple AWS accounts while ensuring compliance and observability requirements and integrating Amazon EKS with AWS Identity and Access Management, Amazon CloudWatch, AWS Secrets Manager, Application Load Balancer, Amazon Route 53, and AWS Certificate Manager.
Amazon EKS is a fully managed service that makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Join this session to learn about how Verizon runs its core applications on Amazon EKS at scale. Verizon also discusses how it worked with AWS to overcome several post-Amazon EKS migration challenges and ensured that the platform was robust.
Containers have helped revolutionize modern application architecture. While managed container services have enabled greater agility in application development, coordinating safe deployments and maintainable infrastructure has become more important than ever. This session outlines how to integrate CI/CD best practices into deployments of your Amazon ECS and AWS Fargate services using pipelines and the latest in AWS developer tooling.
With Amazon ECS, you can run your containerized workloads securely and with ease. In this session, learn how to utilize the full spectrum of Amazon ECS security features and its tight integrations with AWS security features to help you build highly secure applications.
Do you have to budget your spend for container workloads? Do you need to be able to optimize your spend in multiple services to reduce waste? If so, this session is for you. It walks you through how you can use AWS services and configurations to improve your cost visibility. You learn how you can select the best compute options for your containers to maximize utilization and reduce duplication. This combined with various AWS purchase options helps you ensure that you’re using the best options for your services and your budget.
You have a choice of approach when it comes to provisioning compute for your containers. Some users prefer to have more direct control of their instances, while others could do away with the operational heavy lifting. AWS Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. This session explores the benefits and considerations of running on Fargate or directly on Amazon EC2 instances. You hear about new and upcoming features and learn how Amenity Analytics benefits from the serverless operational model.
Are you confused by the many choices of containers services that you can run on AWS? This session explores all your options and the advantages of each. Whether you are just beginning to learn Docker or are an expert with Kubernetes, join this session to learn how to pick the right services that would work best for you.
Leading containers migration and modernization initiatives can be daunting, but AWS is making it easier. This session explores architectural choices and common patterns, and it provides real-world customer examples. Learn about core technologies to help you build and operate container environments at scale. Discover how abstractions can reduce the pain for infrastructure teams, operators, and developers. Finally, hear the AWS vision for how to bring it all together with improved usability for more business agility.
As the number of services grow within an application, it becomes difficult to pinpoint the exact location of errors, reroute traffic after failures, and safely deploy code changes. In this session, learn how to integrate AWS App Mesh with Amazon ECS to export monitoring data and implement consistent communications control logic across your application. This makes it easy to quickly pinpoint the exact locations of errors and automatically reroute network traffic, keeping your container applications highly available and performing well.
Enterprises are continually looking to develop new applications using container technologies and leveraging modern CI/CD tools to automate their software delivery lifecycles. This session highlights the types of applications and associated factors that make a candidate suitable to be containerized. It also covers best practices that can be considered as you embark on your modernization journey.
Because of its security, reliability, and scalability capabilities, Amazon Elastic Kubernetes Service (Amazon EKS) is used by organizations in their most sensitive and mission-critical applications. This session focuses on how Amazon EKS networking works with an Amazon VPC and how to expose your Kubernetes application using Elastic Load Balancing load balancers. It also looks at options for more efficient IP address utilization.
Network design is a critical component in your large-scale migration journey. This session covers some of the real-world networking challenges faced when migrating to the cloud. You learn how to overcome these challenges by diving deep into topics such as establishing private connectivity to your on-premises data center and accelerating data migrations using AWS Direct Connect/Direct Connect gateway, centralizing and simplifying your networking with AWS Transit Gateway, and extending your private DNS into the cloud. The session also includes a discussion of related best practices.
5G will be the catalyst for the next industrial revolution. In this session, come learn about key technical use cases for different industry segments that will be enabled by 5G and related technologies, and hear about the architectural patterns that will support these use cases. You also learn about AWS-enabled 5G reference architectures that incorporate AWS services.
AWS offers a breadth and depth of machine learning (ML) infrastructure you can use through either a do-it-yourself approach or a fully managed approach with Amazon SageMaker. In this session, explore how to choose the proper instance for ML inference based on latency and throughput requirements, model size and complexity, framework choice, and portability. Join this session to compare and contrast compute-optimized CPU-only instances, such as Amazon EC2 C4 and C5; high-performance GPU instances, such as Amazon EC2 G4 and P3; cost-effective variable-size GPU acceleration with Amazon Elastic Inference; and highest performance/cost with Amazon EC2 Inf1 instances powered by custom-designed AWS Inferentia chips.
When it comes to architecting your workloads on VMware Cloud on AWS, it is important to understand design patterns and best practices. Come join this session to learn how you can build well-architected cloud-based solutions for your VMware workloads. This session covers infrastructure designs with native AWS service integrations across compute, networking, storage, security, and operations. It also covers the latest announcements for VMware Cloud on AWS and how you can use these new features in your current architecture.
One of the most critical phases of executing a migration is moving traffic from your existing endpoints to your newly deployed resources in the cloud. This session discusses practices and patterns that can be leveraged to ensure a successful cutover to the cloud. The session covers preparation, tools and services, cutover techniques, rollback strategies, and engagement mechanisms to ensure a successful cutover.
AWS DeepRacer is the fastest way to get rolling with machine learning. Developers of all skill levels can get hands-on, learning how to train reinforcement learning models in a cloud based 3D racing simulator. Attend a session to get started, and then test your skills by competing for prizes and glory in an exciting autonomous car racing experience throughout re:Invent!
AWS DeepRacer gives you an interesting and fun way to get started with reinforcement learning (RL). RL is an advanced machine learning (ML) technique that takes a very different approach to training models than other ML methods. Its super power is that it learns very complex behaviors without requiring any labeled training data, and it can make short-term decisions while optimizing for a longer-term goal. AWS DeepRacer makes it fast and easy to build models in Amazon SageMaker and train, test, and iterate quickly and easily on the track in the AWS DeepRacer 3D racing simulator.
As more organizations are looking to migrate to the cloud, Red Hat OpenShift Service offers a proven, reliable, and consistent platform across the hybrid cloud. Red Hat and AWS recently announced a fully managed joint service that can be deployed directly from the AWS Management Console and can integrate with other AWS Cloud-native services. In this session, you learn about this new service, which delivers production-ready Kubernetes that many enterprises use on premises today, enhancing your ability to shift workloads to the AWS Cloud and making it easier to adopt containers and deploy applications faster. This presentation is brought to you by Red Hat, an AWS Partner.
Event-driven architecture can help you decouple services and simplify dependencies as your applications grow. In this session, you learn how Amazon EventBridge provides new options for developers who are looking to gain the benefits of this approach.
Amazon Timestream is a fast, scalable, and serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day at as little as one-tenth the cost of relational databases. In this session, dive deep on Amazon Timestream features and capabilities, including its serverless automatic scaling architecture, its storage tiering that simplifies your data lifecycle management, its purpose-built query engine that lets you access and analyze recent and historical data together, and its built-in time series analytics functions that help you identify trends and patterns in your data in near-real time.
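A minimal boto3 sketch of writing one time series record to Timestream; the database and table names are assumptions and must be created beforehand in a Region where Timestream is available.

```python
import boto3
import time

timestream = boto3.client("timestream-write")

# Database and table are assumptions; create them first (e.g., in the console).
timestream.write_records(
    DatabaseName="iot",
    TableName="sensor_readings",
    Records=[
        {
            "Dimensions": [{"Name": "device_id", "Value": "sensor-17"}],
            "MeasureName": "temperature",
            "MeasureValue": "21.4",
            "MeasureValueType": "DOUBLE",
            "Time": str(int(time.time() * 1000)),  # milliseconds since the epoch
        }
    ],
)
```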
Savings Plans is a flexible pricing model that allows you to save up to 72 percent on Amazon EC2, AWS Fargate, and AWS Lambda. Many AWS users have adopted Savings Plans since its launch in November 2019 for the simplicity, savings, ease of use, and flexibility. In this session, learn how many organizations use Savings Plans to drive more migrations and business outcomes. Hear from Comcast on their compute transformation journey to the cloud and how it started with RIs. As their cloud usage evolved, they adopted Savings Plans to drive business outcomes such as new architecture patterns.
The ability to deploy only configuration changes, separate from code, means you do not have to restart the applications or services that use the configuration and changes take effect immediately. In this session, learn best practices used by teams within Amazon to rapidly release features at scale. Learn about a pattern that uses AWS CodePipeline and AWS AppConfig that will allow you to roll out application configurations without taking applications out of service. This will help you ship features faster across complex environments or regions.
I watched (binged) the A Cloud Guru course in two days and did the 6 practice exams over a week. I originally was only getting 70%’s on the exams, but continued doing them on my free time (to the point where I’d have 15 minutes and knock one out on my phone lol) and started getting 90%’s. – A mix of knowledge vs memorization tbh. Just make sure you read why your answers are wrong.
I don’t really have a huge IT background, although will note I work in a DevOps (1 1/2 years) environment; so I do use AWS to host our infrastructure. However, the exam is very high level compared to what I do/services I use. I’m fairly certain with zero knowledge/experience, someone could pass this within two weeks. AWS is also currently promoting a “get certified” challenge and is offering 50% off.
Went through the entire CloudAcademy course. Most of the info went out the other ear. Got a 67% on their final exam. Took the ExamPro free exam, got 69%.
Was going to take it last Saturday, but I bought TutorialDojo’s exams on Udemy. Did one Friday night, got a 50% and rescheduled it a week later to today Sunday.
Took 4 total TD exams. Got a 50%, 54%, 67%, and 64%. Even up until last night I hated the TD exams with a passion, I thought they were covering way too much stuff that didn’t even pop up in study guides I read. Their wording for some problems were also atrocious. But looking back, the bulk of my “studying” was going through their pretty well written explanations, and their links to the white papers allowed me to know what and where to read.
Not sure what score I got yet on the exam. As someone who always hated testing, I’m pretty proud of myself. I also had to take a dump really bad starting at around question 25. Thanks to TutorialsDojo Jon Bonso for completely destroying my confidence before the exam, forcing me to up my game. It’s better to walk in way over prepared than underprepared.
I would like to thank this community for recommendations about exam preparation. It was wayyyy easier than I expected (also way easier than TD practice exams scenario-based questions-a lot less wordy on real exam). I felt so unready before the exam that I rescheduled the exam twice. Quick tip: if you have limited time to prepare for this exam, I would recommend scheduling the exam beforehand so that you don’t procrastinate fully.
Resources:
-Stephane’s course on Udemy (I have seen people saying to skip hands-on videos but I found them extremely helpful to understand most of the concepts-so try to not skip those hands-on)
-Tutorials Dojo practice exams (I did only 3.5 practice tests out of 5 and already got 8-10 EXACTLY worded questions on my real exam)
Previous Aws knowledge:
-Very little to no experience (deployed my group’s app to cloud via Elastic beanstalk in college-had 0 clue at the time about what I was doing-had clear guidelines)
Preparation duration: 2 weeks (honestly watched videos for 12 days and then went over summary and practice tests on the last two days)
I used Stephane Maarek on Udemy. Purchased his course and the 6 Practice Exams. Also got Neal Davis’ 500 practice questions on Udemy. I took Stephane’s class over 2 days, then spent the next 2 weeks going over the tests (3-4 per day) until I was consistently getting over 80% – passed my exam with an 882.
What an adventure. I’d never really given thought to getting a cert until one day it just dawned on me that it’s one of the few credentials that are globally accepted. So you can approach any company and basically prove you know what’s up on AWS 😀
Passed with two weeks of prep (after work and weekends)
This was just a nice, structured presentation that also gives you the PowerPoint slides plus cheat sheets and a nice overview of what is said in each video lecture.
Udemy – AWS Certified Cloud Practitioner Practice Exams, created by Jon Bonso, Tutorials Dojo
These are some good prep exams, they ask the questions in a way that actually make you think about the related AWS Service. With only a few “Bullshit! That was asked in a confusing way” questions that popped up.
I took the CCP 2 days ago and got the pass notification right after submitting the answers. Within about 3 hours I got an email from Credly for the badge. This morning I got an official email from AWS congratulating me on passing; the score is much higher than I expected. I took Stephane Maarek’s CCP course and his 6 demo exams, then Neal Davis’ 500 questions as well. On the demo exams I failed one and passed the rest with about 700-800, but on the real exam I got an 860. The questions in the real exam are kind of less verbose IMO, but I don’t fully agree with some people I see on this sub saying that they are easier. Just a little bit of sharing; now I’ll find something to continue ^^
Passed the exam! Spent 25 minutes answering all the questions. Another 10 to review. I might come back and update this post with my actual score.
Background
– A year of experience working with AWS (e.g., EC2, Elastic Beanstalk, Route 53, and Amplify).
– Cloud development on AWS is not my strong suit. I just Google everything, so my knowledge is very spotty. Less so now since I studied for this exam.
Study stats
– Spent three weeks studying for the exam.
– Studied an hour to two every day.
– Solved 800-1000 practice questions.
– Took 450 screenshots of practice questions and technology/service descriptions as reference notes to quickly sift through on my phone and computer for review. Screenshots were of questions that I either didn’t know, knew but was iffy on, or believed I’d easily forget.
– Made 15-20 pages of notes. Chill. Nothing crazy. This is on A4 paper. Free-form note taking. With big diagrams. Around 60-80 words per page.
– I was getting low-to-mid 70%s on Neal Davis’s and Stephane Maarek’s practice exams. Highest score I got was an 80%.
– I got a 67(?)% on one of Stephane Maarek’s exams. The only sub-70% I ever got on any practice test. I got slightly anxious. But given how much harder Maarek’s exams are compared to the actual exam, the anxiety was undue.
– Finishing the practice exams on time was never a problem for me. I would finish all of them comfortably within 35 minutes.
Resources used
– AWS Cloud Practitioner Essentials on the AWS Training and Certification Portal
– AWS Certified Cloud Practitioner Practice Tests (Book) by Neal Davis
– 6 Practice Exams | AWS Certified Cloud Practitioner CLF-C01 by Stephane Maarek**
– Certified Cloud Practitioner Course by Exam Pro (Paid Version)*
– One or two free practice exams found by a quick Google search
*Regarding Exam Pro: I went through about 40% of the video lectures. I went through all the videos in the first few sections but felt that watching the lectures was too slow and laborious even at 1.5-2x speed. (The creator, for the most part, reads off of the slides, adding brief comments here and there.) So, I decided to only watch the video lectures for sections I didn’t have a good grasp on. (I believe the video lectures provided in the course are just split versions of the full length course available for free on YouTube under the freeCodeCamp channel, here.) The online course provides five practice exams. I did not take any of them.
**Regarding Stephane Maarek: I only took his practice exams. I did not take his study guide course.
Notes
– My study regimen (i.e., an hour to two every day for three weeks) was overkill.
– The questions on the practice exams created by Neal Davis and Stephane Maarek were significantly harder than those on the actual exam. I believe I could’ve passed without touching any of these resources.
– I retook one or two practice exams out of the 10+ I’ve taken. I don’t think there’s a need to retake the exams as long as you are diligent about studying the questions and underlying concepts you got wrong. I reviewed all the questions I missed on every practice exam the day before.
What would I do differently?
– Focus on practice tests only. No video lectures.
– Focus on the technologies domain. You can intuit your way through questions in the other domains.
I thank you all for helping me through this process! Couldn’t have done it without all of the recommendations and guidance on this page.
Background: I am a back-end developer who works 12 hours a day for corporate America, so there’s no time to study (or do anything), but I made it work.
Could I have probably gone for the SAA first? Yeah, but I wanted to prove to myself that I could do it. I studied for about a month. I used Maarek’s Udemy course at 1.5x speed and I couldn’t recommend it more. I also used his practice exams. I’ll be honest, I took 5 practice exams and somehow managed to fail every single one in the mid 60s lol. Cleared the exam with an 800. Practice exams are WAY harder.
My 2 cents on must knows:
AWS Shared Security Model (who owns what)
Everything Billing (EC2 instance, S3, different support plans)
I had a few ML questions that caught me off guard
VPC concepts – i.e. subnets, NACL, Transit Gateway
I studied solidly for two weeks, starting with Tutorials Dojo (which was recommended somewhere on here). I turned all of their vocabulary words and end of module questions into note cards. I did the same with their final assessment and one free exam.
During my second week, I studied the cards for anywhere from one to two hours a day, and I’d randomly watch videos on common exam questions.
The last thing I did was watch a 3 hr long video this morning that walks you through setting up AWS Instances. The visual of setting things up filled in a lot of holes.
I had some PSI software problems, and ended up getting started late. I was pretty dejected towards the end of the exam, and was honestly (and pleasantly) surprised to see that I passed.
Hopefully this helps someone. Keep studying and pushing through – if you know it, you know it. Even if you have a bad start. Cheers 🍻
Hey everyone, I’m currently preparing for the AWS CloudOps certification. I have some experience with AWS, but not a lot. My manager recommended this certification because the work I’ve been doing aligns closely with its topics. If you have any tips, study recommendations, or resources that helped you, I’d really appreciate it. I’ve already picked up Stéphane Maarek’s course on Udemy. FYI: I haven’t taken any other AWS certification exams before—this will be my first one. Thanks in advance! submitted by /u/Thick-Mud-1037 [link] [comments]
I just took the security specialty exam yesterday, and received the results this morning! Thank you very much y'all for the tips and pieces of advice. Studied for it for exactly a month with Adrian Cantrill's course, Tutorials Dojo, and the AWS official question sets. The exam definitely had questions I was not expecting, but I would say they were variations of the questions out there in TD and the AWS official question sets; I still had 3 that were exactly like TD. And really understanding the concepts is key here, I'd say. The pressure was big, not because of question length (although that played a part) but because of time concerns. Finished with over 1h left on the clock though. submitted by /u/Popular-Indication20 [link] [comments]
I’m currently in the AWS re/Start program, and I have to say it’s been a really eye-opening journey into the world of cloud computing. The exposure to AWS concepts, Python, Linux, and employability skills is incredible, especially for people like me who didn’t have a strong tech background before joining. That said, the pace of the program feels extremely intense. We cover so many topics — Python, Linux, networking, security, databases etc. — all within a few short weeks. It sometimes feels impossible to properly grasp one topic before moving on to the next. For learners who prefer to understand things deeply rather than just rushing through, this can get overwhelming and stressful. I really think the program would benefit from a slightly longer schedule or more built-in review periods, especially around the Python and AWS service modules. Slowing things down just a bit would help learners absorb the material more confidently and enjoy the process rather than feeling constantly behind. Overall, I’m grateful for the opportunity, it’s a great program with solid content and supportive trainers. I just hope the pace can be adjusted so that more learners can thrive instead of feeling overloaded. submitted by /u/Previous_Gene_254 [link] [comments]
I have 3+ years of help desk experience. For those who have been in cloud computing for a long time, what are your tips on starting, and which certs do you recommend? I'm curious about cloud security or network engineering in the cloud. Any tips and advice help. Also, should I pursue cloud, or go the regular CCNA/network engineering route and stay away from cloud? Thanks. submitted by /u/MrRandome23 [link] [comments]
Passed AWS CCP few days ago! 🙌 Huge thanks to everyone here who shared tips and study resources — it truly helped me pass. Now I’m wondering, what should be my next step? I’m currently a Project Manager/Scrum Master with no technical background, just aiming to grow my cloud knowledge so I can better support my teams. Any advice on what cert or learning path to take next if I’m on more of a project/delivery management track? submitted by /u/Awkward_Guide_9270 [link] [comments]
Got my results back today for the CloudOps exam and failed with a score of 70%. I sat the SysOps Admin exam a month before with a failure score of 70%. I used CloudGuru for the content and paid Udemy and SkillsCertPro mocks to prepare for the second attempt. I studied for two solid weeks and learnt a tonne of new stuff and felt confident, I was sure I passed the second time today. But I failed… with the exact same score?! Feeling very deflated. Any advice is welcome 🙁 submitted by /u/External-Eye7025 [link] [comments]
Amazon CloudWatch Database Insights now detects anomalies on additional metrics through its on-demand analysis experience. Database Insights is a monitoring and diagnostics solution that helps database administrators and application developers optimize database performance by providing comprehensive visibility into database metrics, query performance, and resource utilization patterns. The on-demand analysis feature utilizes machine learning to help identify anomalies and performance bottlenecks during the selected time period, and gives advice on what to do next. The Database Insights on-demand analysis feature now offers enhanced anomaly detection capabilities. Previously, database administrators could analyze database performance and correlate metrics based on database load. Now, the on-demand analysis report also identifies anomalies in database-level and operating system-level counter metrics for the database instance, as well as per-SQL metrics for the top SQL statements contributing to database load. The feature automatically compares your selected time period against normal baseline performance, identifies anomalies, and provides specific remediation advice while reducing mean time to diagnosis. Through intuitive visualizations and clear explanations, you can quickly identify performance issues and receive step-by-step guidance for resolution. You can get started with on-demand analysis by enabling the Advanced mode of CloudWatch Database Insights on your Amazon Aurora or RDS databases using the AWS management console, AWS APIs, or AWS CloudFormation. Please refer to RDS documentation and Aurora documentation for information regarding the availability of Database Insights across different regions, engines, and instance classes.
Amazon FSx now integrates with AWS Secrets Manager, enabling enhanced protection and management of the Active Directory domain service account credentials for your FSx for Windows File Server file systems and FSx for NetApp ONTAP Storage Virtual Machines (SVMs). Previously, if you wanted to join your FSx for Windows file system or FSx for ONTAP SVM to your Active Directory domain for user authentication and access control, you needed to specify the username and password for your service account in the Amazon FSx Console, Amazon FSx API, AWS CLI, or AWS CloudFormation. With this launch, you can now specify an AWS Secrets Manager secret containing the service account credentials, enabling you to strengthen your security posture by eliminating the need to store plain text credentials in application code or configuration files, and aligning with best practices for credential management. Additionally, you can use AWS Secrets Manager to rotate your Active Directory credentials and consume them when needed in FSx workloads. You can now use AWS Secrets Manager to store your domain join service credentials for all FSx for Windows file systems and FSx for ONTAP Storage Virtual Machines in all AWS Regions where they are available. For more information, see Amazon FSx for Windows File Server documentation and Amazon FSx for NetApp ONTAP documentation.
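As a minimal sketch of the credential-storage side of this, here is how the Active Directory service account credentials might be placed in Secrets Manager with boto3. The secret name and JSON key layout are assumptions for illustration; the exact secret format FSx expects, and the parameter that references the secret when creating or updating a file system or SVM, are described in the FSx documentation linked above.

```python
import json
import boto3

# Minimal sketch: store AD service account credentials in Secrets Manager so
# plain-text passwords never appear in templates, code, or configuration files.
# Secret name and JSON keys are illustrative assumptions; see the FSx docs for
# the exact format the service expects.
secrets = boto3.client("secretsmanager")

secret = secrets.create_secret(
    Name="fsx/ad-service-account",
    SecretString=json.dumps({
        "username": "svc-fsx",        # AD service account name (placeholder)
        "password": "REPLACE_ME",     # rotate this value via Secrets Manager
    }),
)
print(secret["ARN"])  # reference this ARN from your FSx configuration
```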
The grade isn’t amazing, but I didn’t cram, revise, or take any practice tests to really assess my level, so I’m actually quite proud of myself. The Solutions Architect Pro exam I’ll be taking in six months will be a different story though, I think I’ll definitely need to study for that, lol! By the way, I have a question: I haven’t received the badge on Credly.com. Do you know why? submitted by /u/Go0ners14 [link] [comments]
Hey ya'll, just took and passed the Solutions Architect Associate exam yesterday, giving me my third AWS cert in a month! Certified AI Practitioner October 3rd, Developer Associate October 21st, and SAA November 4th (technically a day more than a month so I lied). Background: 2 years working hands-on as developer with AWS, obtained Certified Cloud Practitioner last year. First off, shoutout the GOAT Stephane Maarek, I have used his courses for every AWS cert I've studied for - his lectures pretty much covered everything, and the slides are a great reference. Also shoutout TutorialDojo (🇵🇭), it has a TON of practice questions with an excellent interface to practice. My study plan for each cert was to finish the Stephane course on Udemy, take notes, and then from that point, grind the Tutorial Dojos exams. I would start with the Review Mode until I was pretty much acing each one, then would spam the Randomized tests the days leading to the actual test. I took a LOT of practice tests, so I won't post each result. But for each cert, I basically got to the point where I could skim through the Randomized test and score at least a 90%. I'd say that the actual AI Practitioner and SAA were pretty much on par with the Tutorial Dojo tests in terms of difficulty, but the Developer one was a bit harder. If I were to do studying over again, I would have spent more time just reviewing and studying notes than spamming the TD tests, as it got to a point where I was memorizing the answers based on specific keywords in the question/answers, which wasn't really reinforcing any knowledge. I was "overfitting" a bit too much to the TD tests. I'd say I had around 2-4 hours to study each day for the certs. Studying for the AI Practitioner test took around 1-2 weeks total, I have a data science minor so a lot of the AI terms were familiar. As expected, definitely the easiest of the three. The Developer and SAA both took around three weeks each, though at one point I was studying for both at the same time. The content of the Developer and SAA exams have a lot of overlap, so it was definitely nice to take them both around the same time. The Developer exam is much more specific (and a lot harder) - needing to know some specific API commands or configurations, and a lot more about deployment. SAA is much more high-level, focusing more on things like cost-savings, disaster recovery, migrations, high availability, scaling, etc. But for the most part, the actual services covered by both are the same - it's just a matter of looking at them from a different lens. If you finished one of Stephane's courses for Dev, you probably have about 10 more hours of new content for the SAA (and vice versa). I don't remember specifics from the AI or Dev exams, but the SAA exam had a lot of tricky questions about networking (VPC, hybrid cloud networking w Storage Gateway, etc) and storage solutions (EBS vs EFS, FSx types, etc). It's useful to know the different protocols available for the different FSx types, and use cases for ALB vs NLB. One major recommendation in my opinion - take the exams in person. I took the CCP and AI exams online, and the whole time I was worried about if I was fidgeting too much, looking around too much, or if my internet would cut out (I've heard you can't even open your mouth to read the questions aloud). I had to clear literally everything from the cubbies under my desk, resulting in 2 minutes of me filming myself chucking everything on my desk over my shoulder to the corner of the room. 
Taking the exams at a test center allowed me to chill out a bit on all that and just focus on the test, which was definitely helpful when taking the more complicated tests like Dev and SAA. I was able to lean back, stretch, and was given a whiteboard for notes - just make sure you bring TWO forms of ID, some guy before me forgot that rule. Also the test center lady was very sweet 🙂 submitted by /u/kittykat87654321 [link] [comments]
I gave my test yesterday and I'm relieved to say that I passed. My TD scores were between 75-85%, I took 4 timed mode tests. I also did the topic-based questions on TD which is a part of the practice test bundle. I took the practice test from Stephan's course and I scored 66%. Prep and Background: My background, experienced solutions architect but AWS usage is minimal in our company. This was my recertification; I had passed my SAAC03 exam in Nov 2022. I started prepping with Cantril's course back in August, I was 80% done when I realized that the course may be outdated. I switched to Stephan's course since I had already bought it. I started switching back and forth on these courses on certain topics which I found hard to grasp. Sometimes different teaching styles help. The biggest help with Stephan's course was the slides that you can download and study. I studied whenever I got a chance at work, time between meetings etc. The one thing about Stephan's course is there is not a lot of hands-on, but he has scenario-based sections which are so practical and provide a way to understand solutions. His material is crisp and concise which helps you revise quickly. Exam: The exam is hard. I found it on par with Stephan's practice test. I was surprised to see that there were hardly any questions on CloudFront, Route 53, EC2 and VPC. Most of the questions were focused on S3, ECS/EKS/Kubernetes, Microservices architecture, S3 data querying (3-4 questions), datawarehouse migration from on-prem to cloud, ALB/NLB. There were maybe 10-15 questions which I would categorize as easy. Rest of them had two options that you can eliminate and the other two options that were left were tricky. I had to really read the options 2-3 times and go back to question to pick up the key words like: What is the most COST-EFFECTIVE solution? What will minimize the OPERATIONAL overhead? etc. Focus on the key words in the last sentence of the question. Some of the questions were verbose, that is where I think you can get lost in the question and options. Next time I'm going to get an extra 30 mins accommodation. I hardly had any time to review “Flag for review" questions which were quite a lot. When I got out of test center, I thought I was going to fail. I would also give my credit to this reddit community for posts on exam experience and the mind map that I discovered here: https://www.mindmeister.com/app/map/3471885158?t=lE6MXlXHYC Here is the list of things I remember: Direct connect/VPN site to site-Data encryption at transit and rest Instance classes for EC2 with RDS-Memory Optimized or Storage Optimized S3 ECS SES Reserved constancy Aurora global db EBS accidental delete Backup and retention Security groups and Acl Microserices architecture- ecs Cloud hsm File gateway Fsx smb 50 tb of data transfer from on prem to cloud. File gateway S3 gateway Lambda concurrency question Typed on my phone and in a hurry. Pardon any typos!!! submitted by /u/Character-Eagle-214 [link] [comments]
I passed the Machine Learning Associate certification with very few days of preparation. It's my first relevant certification, and I don't have anyone to share the news with, so I came here to share my joy and say that this community helped a lot. I didn't know it would; I joined recently, but it was essential. (I'm still ecstatic. I'll update later with more information and links that helped me, but basically it was 4 insane days, marathoning day and night through a Udemy course that was recommended here; I was still studying 10 minutes before the test.) If you could leave any tips for upcoming certifications, I'd appreciate it (I've been a data analyst for 6 years and entered the world of Data Science 1.5 years ago, but I still work as an analyst). Note: I'm not going to stray too far from the other testimonials that have already helped me, and I'll give credit later. But the test was at a good level; I didn't find it easy. I passed with 737, but study is necessary, mainly due to the details covered (whether in the services or their respective functionalities, parameters...). Lots of trade-offs, and architecture design as needed. Last detail: it's VERY SageMaker-heavy; SageMaker and its integrations are the heart of the certification. submitted by /u/Puzzled_Surprise_383 [link] [comments]
I am redirected back to the Skill Builder main page instead of BenchPrep after clicking Practice Exam. Just yesterday I could access it normally; is anybody else having the same issue? submitted by /u/Popular-Indication20 [link] [comments]
AWS is announcing the general availability of new memory-optimized Amazon EC2 R8a instances. R8a instances feature 5th Gen AMD EPYC processors (formerly code-named Turin) with a maximum frequency of 4.5 GHz and deliver up to 30% higher performance and up to 19% better price-performance compared to R7a instances. R8a instances also deliver 45% more memory bandwidth than R7a instances, making them ideal for latency-sensitive workloads. Compared to Amazon EC2 R7a instances, R8a instances provide up to 60% faster performance for GroovyJVM, allowing higher request throughput and better response times for business-critical applications. Built on the AWS Nitro System using sixth-generation Nitro Cards, R8a instances are ideal for high-performance, memory-intensive workloads such as SQL and NoSQL databases, distributed web-scale in-memory caches, in-memory databases, real-time big data analytics, and Electronic Design Automation (EDA) applications. R8a instances offer 12 sizes, including 2 bare metal sizes. Amazon EC2 R8a instances are SAP-certified and provide 38% more SAPS compared to R7a instances. R8a instances are available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) Regions. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information, visit the Amazon EC2 R8a instance page.
Amazon GameLift Streams is now integrated with AWS Health and will provide automated notifications about aging stream groups. Customers are sent regular reminders via AWS Health to re-create their stream groups, starting as early as the 45th day and continuing through the 335th day from the stream group creation date. Stream groups older than 180 days are restricted from adding new applications and automatically expire after the 365th day. This feature strengthens customers' security posture by helping them manage the lifecycle of stream groups and prevent the use of outdated resources that might be missing updates. While customers focus on their game development, the service helps maintain the health of their resources. AWS Health will send a reminder to the linked account on the 45th day and on the 150th day from the stream group creation date, informing customers that the stream group will be restricted from adding new applications after the 180th day. A last reminder to re-create the stream group will be sent on the 335th day, informing customers that the stream group will expire on the 365th day. This feature is available at no additional cost in all AWS Regions where Amazon GameLift Streams is offered. Maintenance warnings and the expiration date of a stream group can be viewed on the Stream group details page in the service console, or by using the ExpiresAt field in the GetStreamGroup API response. To learn more about managing your stream groups and configuring notifications, visit the Amazon GameLift documentation on Stream group lifecycle.
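A rough sketch of checking the expiration programmatically is below. The boto3 client name and the Identifier parameter are assumptions based on the GetStreamGroup API mentioned above; verify them against the SDK documentation before use.

```python
from datetime import datetime, timezone

import boto3

# Minimal sketch: read the ExpiresAt field announced above and decide whether a
# stream group should be re-created soon. Client and parameter names are assumed.
client = boto3.client("gameliftstreams")

group = client.get_stream_group(Identifier="sg-1234567890abcdef")  # placeholder ID
expires_at = group["ExpiresAt"]  # timezone-aware datetime returned by the SDK
days_left = (expires_at - datetime.now(timezone.utc)).days
if days_left <= 45:
    print(f"Stream group expires in {days_left} days - re-create it soon")
```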
Amazon Keyspaces (for Apache Cassandra) now supports Multi-Region Replication in the Middle East (Bahrain) and Asia Pacific (Hong Kong) Regions. With this expansion, customers can now replicate their Amazon Keyspaces tables to and from these Regions, enabling lower latency access to data and improved regional resiliency. Amazon Keyspaces Multi-Region Replication automatically replicates data across AWS Regions with typically less than a second of replication lag, allowing applications to read and write data to the same table in multiple Regions. This capability helps customers build globally distributed applications that can serve users with low latency regardless of their location, while also providing business continuity in the event of a regional disruption. The addition of Multi-Region Replication support in Middle East (Bahrain) and Asia Pacific (Hong Kong) enables organizations operating in these regions to build highly available applications that can maintain consistent performance for users across the Middle East and Asia Pacific. Customers can now replicate their Keyspaces tables between these regions and any other supported AWS Region without managing complex replication infrastructure. You pay only for the resources you use, including data storage, read/write capacity, and writes in each Region of your multi-Region keyspace. To learn more about Amazon Keyspaces Multi-Region Replication and its regional availability, visit the Amazon Keyspaces documentation.
Amazon CloudWatch Application Signals Model Context Protocol (MCP) Server for Application Performance Monitoring (APM) now integrates CloudWatch Synthetics canary monitoring directly into its audit framework, enabling automated, AI-powered debugging of synthetic monitoring failures. DevOps teams and developers can now use natural language questions like 'Why is my checkout canary failing?' in compatible AI assistants such as Amazon Q, Claude, or other supported assistants to utilize the new AI-powered debugging capabilities and quickly distinguish between canary infrastructure issues and actual service problems, addressing the significant challenge of extensive manual analysis in maintaining reliable synthetic monitoring. The integration extends Application Signals' existing multi-signal (services, operations, SLOs, golden signals) analysis capabilities to include comprehensive canary diagnostics. The new feature automatically correlates canary failures with service health metrics, traces, and dependencies through an intelligent audit pipeline. Starting from natural language prompts from users, the system performs multi-layered diagnostic analysis across six major areas: Network Issues, Authentication Failures, Performance Problems, Script Errors, Infrastructure Issues, and Service Dependencies. This analysis includes automated comparison of HTTP Archive (HAR) files, CloudWatch logs analysis, S3 artifact examination, and configuration validation, significantly reducing the time needed to identify and resolve synthetic monitoring issues. Customers can then access these insights through natural language interactions with supported AI assistants. This feature is available in all commercial AWS Regions where Amazon CloudWatch Synthetics is offered. Customers will need access to a compatible AI agent such as Amazon Q, Claude, or other supported AI assistants to utilize the AI-powered debugging capabilities. To learn more about implementing AI-based debugging for your synthetic monitoring, visit the CloudWatch Application Signals MCP Server documentation.
Buyers and sellers in India can now transact locally in AWS Marketplace, with invoicing in Indian Rupees (INR), and with simplified tax compliance through AWS India. With this launch, India-based sellers can now register to sell in AWS Marketplace and offer paid subscriptions to buyers in India. India-based sellers will be able to create private offers in US dollars (USD) or INR. Buyers in India purchasing paid offerings in AWS Marketplace from India-based sellers will receive invoices in INR, helping to simplify invoicing with consistency across AWS Cloud and AWS Marketplace purchases. Sellers based in India can begin selling paid offerings in AWS Marketplace and can work with India-based Channel Partners to sell to customers. AWS India will facilitate the issuance of tax-compliant invoices in INR to buyers, with the independent software vendor (ISV) or Channel Partner as the seller of record. AWS India will automate the collection and remittance of Withholding Tax (WHT) and GST-Tax Collected at Source (GST-TCS) to the relevant tax authorities, fulfilling compliance requirements for buyers. During this phase, non-India based sellers can continue to sell directly to buyers in India through AWS Inc., in USD or through AWS India by working through authorized distributors. To learn more and explore solutions available from India-based sellers, visit this page. To get started as a seller, India-based ISVs and Channel Partners can register in the AWS Marketplace Management Portal. For more information about buying or selling using AWS Marketplace in India, visit the India FAQs page and help guide.
I studied for the SAP exam for 2 months, two hours every day. Four years of backend experience, almost no AWS experience. I used TD and took the exam at 8am; I got the result message at 8pm. I ran into a network error (I am from China) and was delayed 40 minutes, which was a big challenge for my nerves. I finally used a Japan VPN to connect to OnVUE. The real exam questions are shorter than TD's, and there were many new concepts and services that you cannot learn from TD; maybe you really need to learn them from work experience. submitted by /u/Diligent_Ambition_14 [link] [comments]
Good day, everyone. I'm writing this to inquire on the way forward. I'm to sit for AWS Certified Cloud Practitioner exam but Pearson OnVue has proven to be really unreliable. I'm studying really far from home and also quite far from the available physical testing centers which influenced my decision to try the OnVue option. My testing history has been: Attempt 1 (Oct 28/29): I had an exam scheduled for Oct 28, but I had to reschedule it myself to Oct 29 due to a mismatch in name order between the portal and my national ID. This one was on me; I got the changes effected and was ready. Attempt 2 (Oct 29): On the rescheduled day, Oct 29, I was unable to check in. The check-in portal and the Certmetrics portal were both down. Customer support confirmed this was all due to the "us-east-1" incident. I was forced to reschedule. Attempt 3 (Nov 3): My next exam was scheduled for Nov 3. On the test day, I went through the check-in, but my Starlink network failed their network tests, so I was not allowed to proceed. Another forced reschedule. Attempt 4 (Nov 5 - Today): ...and lastly, today. My exam was scheduled for Nov 5, at 10:30 AM. On checking in today at 10:00 AM on the dot, I just kept getting "OnVUE check-in can not be started at this time." submitted by /u/PieRat-2534 [link] [comments]
AWS Launch Wizard now offers a guided approach to sizing, configuring, and deploying Windows Server EC2 instances with Microsoft SQL Server Developer Edition installed from your own media. AWS Launch Wizard for SQL Server Developer Edition allows you to simplify launching cost-effective and full-featured SQL Server instances on Amazon EC2, making it ideal for developers building non-production and test database environments. This feature is ideal for customers who also have existing non-production databases running SQL Server Enterprise Edition or SQL Server Standard Edition, as migrating the non-production databases to SQL Server Developer Edition will reduce SQL license costs while maintaining feature parity. This feature is available in all supported commercial AWS Regions and the AWS GovCloud (US) Regions. To learn more, see the AWS Launch Wizard for SQL Server User Guide and blog post here.
Amazon CloudFront now supports both IPv4 and IPv6 addresses for Anycast Static IP configurations. Previously, customers could only use IPv4 addresses when using CloudFront Anycast static IP addresses. With this launch, customers using CloudFront Anycast Static IP addresses receive both IPv4 and IPv6 addresses for their workloads. This dual-stack support allows customers to meet IPv6 compliance requirements, future-proof their infrastructure, and serve end users on IPv6-only networks. Typically, CloudFront uses rotating IP addresses to serve traffic. Customers implementing Anycast Static IPs receive a dedicated list of static IP addresses for their workloads. CloudFront Anycast Static IPs enables customers to provide a dedicated list of IP addresses to partners and their customers for enhancing security and simplifying network management across various use cases. CloudFront supports IPv6 for Anycast Static IPs from all edge locations. This excludes Amazon Web Services China (Beijing) region, operated by Sinnet, and the Amazon Web Services China (Ningxia) region, operated by NWCD. Learn more about Anycast Static IPs here and for more information, please refer to the Amazon CloudFront Developer Guide. For pricing, please see CloudFront Pricing.
AWS Glue Schema Registry (GSR) has now expanded the programming language support for GSR Client library to include C# support along with existing Java support. C# applications integrating with Apache Kafka or Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, and Apache Flink or Amazon Managed Service for Apache Flink can now interact with AWS Glue Schema Registry to maintain data quality and schema compatibility in streaming data applications.
AWS Glue Schema Registry, a serverless feature of AWS Glue, enables you to validate and control the evolution of streaming data using registered schemas at no additional charge. Schemas define the structure and format of data records produced by applications. Using AWS Glue Schema Registry, you can centrally manage and enforce schema definitions across your data ecosystem. This ensures consistency of schemas across applications and enables seamless data integration between producers and consumers. Through centralized schema validation, teams can maintain data quality standards and evolve their schemas in a controlled manner.
C# support is available across all AWS regions where Glue Schema Registry is available. Visit the Glue Schema Registry developer guide, and SDK to get started with C# integration.
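The registry workflow itself is language-agnostic. As a rough illustration of what the client libraries wrap (shown here in Python with boto3 rather than the new C# library, and with placeholder registry, schema, and field names), registering an Avro schema and publishing a compatible new version looks roughly like this:

```python
import boto3

# Minimal sketch of the Glue Schema Registry workflow: create a registry,
# register an Avro schema, then publish a backward-compatible new version.
# Registry/schema names and the Avro definitions are illustrative placeholders.
glue = boto3.client("glue")

glue.create_registry(RegistryName="orders-registry")

schema_v1 = '{"type":"record","name":"Order","fields":[{"name":"id","type":"string"}]}'
glue.create_schema(
    RegistryId={"RegistryName": "orders-registry"},
    SchemaName="order-event",
    DataFormat="AVRO",
    Compatibility="BACKWARD",        # registry rejects incompatible evolutions
    SchemaDefinition=schema_v1,
)

# Adding an optional field with a default is backward compatible, so this succeeds.
schema_v2 = ('{"type":"record","name":"Order","fields":['
             '{"name":"id","type":"string"},'
             '{"name":"total","type":["null","double"],"default":null}]}')
glue.register_schema_version(
    SchemaId={"RegistryName": "orders-registry", "SchemaName": "order-event"},
    SchemaDefinition=schema_v2,
)
```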
Can anyone suggest the right study resources for AWS ML Associate Exam? A dedicated WhatsApp group or subreddit would be really helpful. submitted by /u/rukikazuru [link] [comments]
Apart from the general syllabus and prep for the exam, I also wanted to clear the AWS Cloud Practitioner exam without spending any money. So I collected those 4500 ETC points, and now that I've applied, it says I have to complete the AWS Cloud Practitioner learning path and 1 mock test on the Skill Builder website. I want to know if someone got the 100% voucher with this method. Can you please tell me how to go about it and how to clear the exam??!! submitted by /u/No-Construction-9928 [link] [comments]
Hey everyone, I wanted to share that I have successfully cleared AWS Certified Machine Learning Engineer - Associate (MLA-C01) with a score of 843! I had scheduled the exam (with the 50% off voucher many have used here) and started preparing about 2 weeks before. I used Stephane Maarek and Frank Kane's course to get familiar with the exam contents. Due to office work (and a bit of procrastination), I got delayed in completing the coursework and had to reschedule the exam once. I had planned to do practice tests from TD, but due to lack of time I just completed and revised the AWS Skill Builder free test and 3 tests from Stephane Maarek and Abhishek Singh's practice tests on Udemy. They are actually quite good for covering some of the topics that might be missed during coursework or require deeper understanding. My learnings from the whole experience: take more time to prepare thoroughly and practice as many times as needed to be confident before the exam. I couldn't reschedule it after 4 Nov due to voucher constraints, and was somewhat stressed due to lack of practice. Believe in your preparation and your aptitude. Thanks to the community for all the advice and experiences that made the process a lot easier! submitted by /u/SmartConnection4230 [link] [comments]
I got turned away by the exam center because apparently my name doesn't exactly fit the first name / last name field. The email receipt seemed to show my name correctly (although there isn't a breakdown of first name / last name). They only informed me a few hours before I arrived when I was already on the road. Has anyone been in the same situation? Were you allowed to reschedule without shelling out another hundred bucks? I've already reached out via email and they said they'd respond within 2 days but I'm anxious and a bit disheartened. submitted by /u/contemplating-all [link] [comments]
Been working in this field for some time now and had been canceling and rescheduling this a lot. Finally decided to go hail mary and took the exam yesterday. Woke up to this today morning. Nearly two weeks of dedicated preparation with Stéphane's course and Tutorials Dojo. Thanks a lot to this community for guiding me the right way - and also for the promo code (guess why I took it yesterday!). submitted by /u/Salt_in_Stress [link] [comments]
Spent the last 2 weeks squeezing in 1–2 hours a night after work with Stephane Maarek’s Udemy course. Not easy switching into study mode after a long day, but I had a 50% voucher expiring today so there was no turning back. Was still reviewing notes an hour before the exam! Score came in tonight: 702. Limped across the line, battered and bruised, but I’m here. Big thanks to this sub and everyone who shares tips & resources! You guys rock!! Seeing other people grind through really helped keep me motivated, even when I didn’t feel fully ready. That’s 2 AWS certs down this year, onward to SAA in early 2026 💪 submitted by /u/RTM179 [link] [comments]
Did the 50% deal as so many others have. Took Stephane Maarek's course and paired it with Tutorials Dojo questions. Short amount of time to pull it together, as I just passed the AWS Certified Cloud Practitioner exam. Couldn't pass up the deal though! Onward and upward to Solutions Architect Associate. Let's go! submitted by /u/Real-Theory8840 [link] [comments]
Honestly, for the entire exam I was sure I wouldn't pass, but I did. I studied for about 4 months, but not very intensively. I have no experience with AWS services. Here is how I prepared: As a first step I took Stephane Maarek's Udemy course. I watched it passively for about 6 hours a week, by sections. The course was detailed enough in my opinion and gave me a good foundation for the learning journey. Tutorials Dojo: I purchased the practice exam pack and the study guide (a PDF with 200 pages). I first read the book and then started practicing the questions. To begin with I used the sectioned mode, then took full exams without timing them, and at the end I did the timed mode. The exams have very good explanations for the answers, and I tried to document my mistakes in a separate file and note points that were missing from my knowledge. The questions in the Tutorials Dojo exams sometimes repeat themselves, but that's OK! I took the online test over a WiFi connection and had no trouble at all. Good luck!! submitted by /u/MassiveCarpenter3611 [link] [comments]
Amazon OpenSearch Serverless has added support for Federal Information Processing Standards (FIPS) compliant endpoints for Data Plane APIs in US East (N. Virginia), US East (Ohio), Canada (Central), AWS GovCloud (US-East), and AWS GovCloud (US-West). The service now meets the security requirements for cryptographic modules as outlined in Federal Information Processing Standard (FIPS) 140-3. Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless FIPS, see the documentation.
Hey everyone, just wanted to say a huge thank you to this community. I passed the AWS AIF-C01 exam, barely above the passing mark, but a pass is a pass! 😅 I actually don’t have any background in AI/ML, but I decided to take advantage of the discounted exam promo from the AWS AI/ML Certification Challenge 2025, which ends today. With work, family, and volunteer commitments, I only had about two weeks to prepare. Ended up cramming Stephane Maarek’s Udemy course and going through the 4 practice question review sections from Tutorials Dojo (Jon Bonso). Still found myself studying two hours before the exam and decided to just wing it, and somehow managed to scrape through. Big thanks to everyone who shares tips, breakdowns, and motivation here. You guys really helped keep me on track even when I thought I was way behind. On to the next cert! What are your suggestions? Looking at maybe shifting my career to more of a Cloud or AI/ML Project Management role. submitted by /u/Plane_Tadpole_1175 [link] [comments]
Starting today, AWS Cloud WAN is available in the AWS Asia Pacific (Thailand), AWS Asia Pacific (Taipei) and AWS Asia Pacific (New Zealand) Regions. With AWS Cloud WAN, you can use a central dashboard and network policies to create a global network that spans multiple locations and networks, removing the need to configure and manage different networks using different technologies. You can use network policies to specify the Amazon Virtual Private Clouds, AWS Transit Gateways, and on-premises locations you want to connect to using an AWS Site-to-Site VPN, AWS Direct Connect, or third-party software-defined WAN (SD-WAN) products. The AWS Cloud WAN central dashboard generates a comprehensive view of the network to help you monitor network health, security, and performance. In addition, AWS Cloud WAN automatically creates a global network across AWS Regions by using Border Gateway Protocol (BGP) so that you can easily exchange routes worldwide. To learn more, please visit the AWS Cloud WAN product detail page.
Starting today, you can add warm pools to Auto Scaling groups (ASGs) that have mixed instances policies. With warm pools, customers can improve the elasticity of their applications by creating a pool of pre-initialized EC2 instances that are ready to quickly serve application traffic. By combining warm pools with instance type flexibility, an ASG can rapidly scale out to its maximum size at any time, deploying applications across multiple instance types to enhance availability. Warm pools are particularly beneficial for applications with lengthy initialization processes, such as writing large amounts of data to disk, running complex custom scripts, or other time-consuming setup procedures that can take several minutes or longer to serve traffic. With this new release, the warm pool feature now works seamlessly with ASGs configured for multiple On-Demand instance types, whether specified through manual instance type lists or attribute-based instance type selection. The combination of instance type flexibility and warm pools provides a powerful solution that helps customers scale out efficiently while maximizing availability. The warm pool feature is available through the AWS Management Console, the AWS SDKs, and the AWS Command Line Interface (CLI). It is available in all public AWS Regions and AWS GovCloud (US) Regions. To learn more about warm pools, visit this AWS documentation.
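A minimal boto3 sketch of attaching a warm pool, assuming an existing Auto Scaling group (named here as a placeholder) that already uses a mixed instances policy:

```python
import boto3

# Minimal sketch: attach a warm pool to an existing Auto Scaling group. With
# this launch, the same call works for ASGs configured with a mixed instances
# policy (multiple On-Demand instance types). The group name is a placeholder.
autoscaling = boto3.client("autoscaling")

autoscaling.put_warm_pool(
    AutoScalingGroupName="web-fleet-asg",
    MinSize=4,                                    # keep at least 4 pre-initialized instances
    PoolState="Stopped",                          # Stopped, Running, or Hibernated
    InstanceReusePolicy={"ReuseOnScaleIn": True}  # return instances to the pool on scale-in
)
```

Pre-initialized instances in the pool have already completed the lengthy setup steps described above, so scale-out events only need to start (or resume) them.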
AWS is expanding service reference information to include which operations are supported by AWS services and which IAM permissions are needed to call a given operation. This will help you answer questions such as “I want to call a specific AWS service operation, which IAM permissions do I need?” You can automate the retrieval of service reference information, eliminating manual effort and ensuring your policies align with the latest service updates. You can also incorporate this service reference information directly into your policy management tools and processes for a seamless integration. This feature is offered at no additional cost. To get started, refer to the documentation on programmatic service reference information.
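A rough sketch of automating that retrieval is below. The endpoint URL and the JSON field names are assumptions based on the programmatic service reference documentation; confirm them there before relying on this in tooling.

```python
import json
import urllib.request

# Minimal sketch: fetch the service reference index, then look up the actions
# for one service. The URL and response field names are assumptions; check the
# AWS documentation on programmatic service reference information.
INDEX_URL = "https://servicereference.us-east-1.amazonaws.com/"

with urllib.request.urlopen(INDEX_URL) as resp:
    services = json.load(resp)  # assumed: list of {"service": ..., "url": ...} entries

s3_entry = next(s for s in services if s["service"] == "s3")
with urllib.request.urlopen(s3_entry["url"]) as resp:
    s3_reference = json.load(resp)

for action in s3_reference["Actions"][:5]:  # field name assumed; see the docs
    print(action["Name"])
```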
Amazon Relational Database Service (RDS) for Oracle is now available with R7i memory-optimized preconfigured instances that offer additional memory and storage I/O per vCPU. Powered by custom 4th Gen Intel Xeon Scalable processors with AWS Nitro System and DDR5 memory for high performance, these instances provide up to 64:1 memory-to-vCPU ratio. Many Oracle database workloads require high memory, but can safely reduce the number of vCPUs without impacting application performance. By running such Oracle database workloads on R7i pre-configured instances, customers can lower their Oracle database licensing and support costs while meeting high performance application requirements. Memory optimized R7i pre-configured instances are available for Amazon RDS for Oracle with Bring Your Own License (BYOL) license model supporting both Oracle Database Enterprise Edition and Oracle Database Standard Edition 2. To learn more about Amazon RDS for Oracle R7i memory-optimized preconfigured instances, read RDS for Oracle User Guide and visit Amazon RDS for Oracle Pricing for available instance configurations, pricing details, and region availability.
Amazon Bedrock AgentCore Runtime now supports two deployment methods for AI agents: container-based deployment and direct code upload. Developers can now choose between direct code-zip file upload for rapid prototyping and iteration, or leverage advanced container-based options for complex use cases requiring custom configurations. AgentCore Runtime provides a serverless, framework and model agnostic runtime for running agents and tools at scale. This deployment option streamlines the prototyping workflow while maintaining enterprise security and scaling capabilities for production deployments. Developers can now deploy agents using direct code-zip upload with easy drag-and-drop functionality. This enables faster iteration cycles, empowering developers to prototype quickly and focus on building innovative agent capabilities. This feature is available in all nine AWS Regions where Amazon Bedrock AgentCore Runtime is available: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland). To learn more about AgentCore Runtime deployment options, see the AgentCore documentation and get started with the AgentCore Starter Toolkit. AgentCore offers consumption-based pricing with no upfront costs.
Amazon Connect now lets you configure aliases for email addresses, so customers see trusted identities when sending or receiving messages, helping maintain a consistent brand experience and simplify email management. For example, when forwarding a customer-facing address such as support@company.com to an address in Amazon Connect, you can configure an alias to ensure customers continue to see support@company.com as the sender. Amazon Connect Email is available in the US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London) regions. To learn more and get started, please refer to the help documentation, pricing page, or visit the Amazon Connect website.
AWS Config conformance packs and organization-level management capabilities for conformance packs are now available in additional AWS Regions. Conformance packs allow you to bundle AWS Config rules into a single package, simplifying deployment at scale. You can deploy and manage these conformance packs throughout your AWS environment. Conformance packs provide a general-purpose compliance framework designed to enable you to create security, operational, or cost-optimization governance checks using managed or custom AWS Config rules. This allows you to monitor compliance scores based on your own groupings. With this launch, you can also manage the AWS Config conformance packs and individual AWS Config rules at the organization level which simplifies the compliance management across your AWS Organization. With this expansion, AWS Config Conformance Packs are now also available in the following AWS Regions: Asia Pacific (Malaysia), Asia Pacific (New Zealand), Asia Pacific (Thailand), Asia Pacific (Taipei) and Mexico (Central). To get started, you can either use the provided sample conformance pack templates or craft a custom YAML file from scratch based on a custom conformance pack. Conformance pack deployment can be done through the AWS Config console, AWS CLI, or via AWS CloudFormation. You will be charged per conformance pack evaluation in your AWS account per AWS Region. Visit the AWS Config pricing page for more details. To learn more about AWS Config conformance packs, see our documentation.
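As a minimal sketch of a conformance pack deployment, here is a boto3 call that deploys a placeholder pack bundling a single managed rule; real packs usually bundle many rules and can also be deployed at the organization level.

```python
import boto3

# Minimal sketch: deploy a tiny custom conformance pack containing one managed
# Config rule. Pack name and template are illustrative placeholders.
config = boto3.client("config")

template = """
Resources:
  S3BucketVersioningEnabled:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-versioning-enabled
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_VERSIONING_ENABLED
"""

config.put_conformance_pack(
    ConformancePackName="baseline-governance",
    TemplateBody=template,
)
```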
Amazon Kinesis Data Streams launches On-demand Advantage, so customers can warm on-demand streams to handle instant throughput increases up to 10GB or 10 million events per second, eliminating the need to over-provision or build custom scaling solutions. Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store data streams at any scale. On-demand streams automatically scale capacity based on data usage, and now you can warm write capacity ad hoc. On-demand Advantage also provides a simpler pricing structure that removes the fixed, per-stream charge, so customers only pay for data usage at better rates. On-demand Advantage offers data usage with 60% lower pricing compared to On-demand Standard, with data ingest at $0.032/GB and data retrieval at $0.016/GB in the US East (N. Virginia) region. The price of Enhanced fan-out data retrieval is the same as shared-throughput retrievals, making higher fan-out use cases more cost effective. The mode also decreases the price of extended retention by 77% from $0.10/GB-month to $0.023/GB-month. Once you enable On-demand Advantage mode, the account will be billed for a minimum of 25MB/s of data ingest and 25MB/s of data retrieval at the lower rates across all on-demand streams. The new pricing means On-demand Advantage is the most cost effective way to stream with Kinesis Data Streams when you ingest at least 10MB/s in aggregate, fan out to more than two consumer applications, or have hundreds of streams in a region. You can check directly in the Kinesis console and the pricing page if On-demand Advantage is a good fit for your account. On-demand Advantage is available in all AWS regions where Kinesis Data Streams is available, including AWS GovCloud (US) and China regions. To learn more, see the launch blog and the Kinesis Data Streams User Guide.
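A back-of-the-envelope sketch using the rates quoted above (US East, N. Virginia) can help gauge whether On-demand Advantage fits a workload. Treat it as an illustration only: actual bills also reflect the 25 MB/s ingest and retrieval minimums, extended retention, and any other dimensions.

```python
# Rough monthly cost estimate using the On-demand Advantage rates quoted above.
# Illustration only: real billing also applies the 25 MB/s minimums and any
# extended-retention or fan-out usage beyond this simple model.
INGEST_PER_GB = 0.032            # $/GB ingested
RETRIEVAL_PER_GB = 0.016         # $/GB retrieved
RETENTION_PER_GB_MONTH = 0.023   # $/GB-month extended retention (down from $0.10)

def monthly_estimate(ingest_gb, consumers=1, retained_gb=0):
    """Rough monthly cost for a stream ingesting `ingest_gb` per month and read
    in full by `consumers` applications."""
    return (ingest_gb * INGEST_PER_GB
            + ingest_gb * consumers * RETRIEVAL_PER_GB
            + retained_gb * RETENTION_PER_GB_MONTH)

# Example: ~10 MB/s sustained ingest over a 30-day month, fanned out to 3 consumers.
ingest_gb = 10 / 1000 * 86400 * 30  # ~25,920 GB
print(f"${monthly_estimate(ingest_gb, consumers=3):,.2f}")  # ~$2,074 at these rates
```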
AWS Config announces the launch of an additional 42 managed Config rules for various use cases such as security, cost, durability, and operations. You can now search, discover, enable, and manage these additional rules directly from AWS Config and govern more use cases for your AWS environment.
With this launch, you can enable these controls across your account or across your organization. For example, you can evaluate your tagging strategies across Amazon EKS Fargate profiles, Amazon EC2 Network Insights Analyses, and AWS Glue machine learning transforms, or you can assess your security posture across Amazon Cognito identity pools, Amazon Lightsail buckets, AWS Amplify apps, and more. Additionally, you can leverage Conformance Packs to group these new controls and deploy them across an account or across an organization, streamlining your multi-account governance.
For the full list of recently released rules, visit the AWS Config developer guide. For a description of each rule and the AWS Regions in which it is available, refer to the Config managed rules documentation. To start using Config rules, refer to the documentation; a minimal sketch of enabling one of the rules listed here follows the list below. New Rules Launched:
AMPLIFY_APP_NO_ENVIRONMENT_VARIABLES
AMPLIFY_BRANCH_DESCRIPTION
APIGATEWAY_STAGE_DESCRIPTION
APIGATEWAYV2_STAGE_DESCRIPTION
API_GWV2_STAGE_DEFAULT_ROUTE_DETAILED_METRICS_ENABLED
APIGATEWAY_STAGE_ACCESS_LOGS_ENABLED
APPCONFIG_DEPLOYMENT_STRATEGY_MINIMUM_FINAL_BAKE_TIME
APPCONFIG_DEPLOYMENT_STRATEGY_TAGGED
APPFLOW_FLOW_TRIGGER_TYPE_CHECK
APPMESH_VIRTUAL_NODE_CLOUD_MAP_IP_PREF_CHECK
APPMESH_VIRTUAL_NODE_DNS_IP_PREF_CHECK
APPRUNNER_SERVICE_IP_ADDRESS_TYPE_CHECK
APPRUNNER_SERVICE_MAX_UNHEALTHY_THRESHOLD
APS_RULE_GROUPS_NAMESPACE_TAGGED
AUDITMANAGER_ASSESSMENT_TAGGED
BATCH_MANAGED_COMPUTE_ENV_ALLOCATION_STRATEGY_CHECK
BATCH_MANAGED_SPOT_COMPUTE_ENVIRONMENT_MAX_BID
COGNITO_IDENTITY_POOL_UNAUTHENTICATED_LOGINS
COGNITO_USER_POOL_PASSWORD_POLICY_CHECK
CUSTOMERPROFILES_DOMAIN_TAGGED
DEVICEFARM_PROJECT_TAGGED
DEVICEFARM_TEST_GRID_PROJECT_TAGGED
DMS_REPLICATION_INSTANCE_MULTI_AZ_ENABLED
EC2_LAUNCH_TEMPLATES_EBS_VOLUME_ENCRYPTED
EC2_NETWORK_INSIGHTS_ANALYSIS_TAGGED
EKS_FARGATE_PROFILE_TAGGED
GLUE_ML_TRANSFORM_TAGGED
IOT_SCHEDULED_AUDIT_TAGGED
IOT_PROVISIONING_TEMPLATE_DESCRIPTION
IOT_PROVISIONING_TEMPLATE_JITP
IOT_PROVISIONING_TEMPLATE_TAGGED
KINESIS_VIDEO_STREAM_MINIMUM_DATA_RETENTION
LAMBDA_FUNCTION_DESCRIPTION
LIGHTSAIL_BUCKET_ALLOW_PUBLIC_OVERRIDES_DISABLED
RDS_MYSQL_CLUSTER_COPY_TAGS_TO_SNAPSHOT_CHECK
RDS_PGSQL_CLUSTER_COPY_TAGS_TO_SNAPSHOT_CHECK
ROUTE53_RESOLVER_FIREWALL_DOMAIN_LIST_TAGGED
ROUTE53_RESOLVER_FIREWALL_RULE_GROUP_ASSOCIATION_TAGGED
ROUTE53_RESOLVER_FIREWALL_RULE_GROUP_TAGGED
ROUTE53_RESOLVER_RESOLVER_RULE_TAGGED
RUM_APP_MONITOR_TAGGED
RUM_APP_MONITOR_CLOUDWATCH_LOGS_ENABLED
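As a rough illustration tied to the list above, here is a minimal sketch (assuming credentials and a Config recorder are already in place; the rule name string is arbitrary) that enables one of the newly released managed rules with boto3 and reads back its compliance results.

```python
# Minimal sketch: enable an AWS-managed Config rule from the list above
# and print its evaluation results. The ConfigRuleName is arbitrary; the
# SourceIdentifier must match the managed rule's identifier.
import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ec2-launch-templates-ebs-volume-encrypted",
        "Source": {
            "Owner": "AWS",  # AWS-managed rule
            "SourceIdentifier": "EC2_LAUNCH_TEMPLATES_EBS_VOLUME_ENCRYPTED",
        },
    }
)

# Check evaluation results once Config has evaluated the rule.
results = config.get_compliance_details_by_config_rule(
    ConfigRuleName="ec2-launch-templates-ebs-volume-encrypted"
)
for evaluation in results.get("EvaluationResults", []):
    print(evaluation["EvaluationResultIdentifier"], evaluation["ComplianceType"])
```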
Hello all, I passed the AI Practitioner exam today with a score of 776/1000. The exam was really easy, to be honest. I took it because of the 50% discount offer that is going on. I used Stephane Maarek's course and practice tests, which were really useful; I never crossed 70% in the mock exams and found the practice tests harder than the real exam. I also used the AWS Skill Builder free-tier domain reviews and sample test, which were very helpful. Planning to do some small projects after this to land a cloud job somehow. Thank you! submitted by /u/lfcclf [link] [comments]
Amazon CloudWatch Synthetics multi-browser support is now available in the AWS GovCloud (US-East, US-West) Regions. This expansion enables customers in these two regions to test and monitor their web applications using both Chrome and Firefox browsers. With this launch, you can run the same canary script across Chrome and Firefox when using Playwright-based canaries or Puppeteer-based canaries. CloudWatch Synthetics automatically collects browser-specific performance metrics, success rates, and visual monitoring results while maintaining an aggregate view of overall application health. This helps development and operations teams quickly identify and resolve browser compatibility issues that could affect application reliability. To learn more about configuring multi-browser canaries, see the canary docs in the Amazon CloudWatch Synthetics User Guide.
We're excited to announce a simplified pricing model for Amazon Cognito's machine-to-machine (M2M) authentication. Starting today, we are removing the M2M app client pricing dimension, making it more cost-effective for customers to build and scale their M2M applications. Cognito supports applications that access API data with machine identities. Machine identities in user pools are clients that run on application servers and connect to remote APIs; they operate without user interaction, handling work such as scheduled tasks, data streams, or asset updates. Previously, customers were charged for each M2M app client registered, regardless of usage, plus each successful token request made by the app client to access a resource. With this change, customers pay only for successful M2M token requests per month, making it more cost-effective to build and scale M2M applications using Amazon Cognito. This pricing change is automatic, requires no action from customers, and is effective in all supported Amazon Cognito Regions. To learn more about Amazon Cognito pricing, visit our pricing page.
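Since successful token requests remain the billed dimension, here is a minimal sketch of the standard OAuth 2.0 client_credentials flow an M2M client uses against a user pool's token endpoint; the domain, client credentials, and scope below are placeholders.

```python
# Minimal sketch: an M2M client obtaining an access token from Cognito's
# OAuth 2.0 token endpoint using the client_credentials grant.
# The domain, client ID/secret, and scope are placeholders.
import base64
import requests

TOKEN_URL = "https://example-domain.auth.us-east-1.amazoncognito.com/oauth2/token"  # placeholder
CLIENT_ID = "example-client-id"          # placeholder
CLIENT_SECRET = "example-client-secret"  # placeholder

# Cognito accepts the app client credentials as HTTP Basic auth.
basic_auth = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()

response = requests.post(
    TOKEN_URL,
    headers={
        "Authorization": f"Basic {basic_auth}",
        "Content-Type": "application/x-www-form-urlencoded",
    },
    data={
        "grant_type": "client_credentials",
        "scope": "example-resource-server/read",  # placeholder custom scope
    },
    timeout=10,
)
response.raise_for_status()
# Each successful token request like this one is what Cognito now bills for.
print(response.json()["access_token"])
```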
Amazon CloudWatch agent now supports the collection of detailed performance metrics for NVMe local volumes on Amazon EC2 instances. These metrics give you insights into behavior and performance characteristics of your NVMe local storage. The CloudWatch agent can now be configured to collect and send detailed NVMe metrics to CloudWatch, providing deeper visibility into storage performance. The new metrics include comprehensive performance indicators such as queue depths, I/O sizes, and device utilization. These metrics are similar to the detailed performance statistics available for EBS volumes, providing a consistent monitoring experience across both storage types. You can create CloudWatch dashboards, set alarms, and analyze trends for your NVMe-based instance store volumes. Detailed performance statistics for Amazon EC2 instance store volumes via Amazon CloudWatch agent are available for all local NVMe volumes attached to Nitro-based EC2 instances in all AWS Commercial and AWS GovCloud (US) Regions. See the Amazon CloudWatch pricing page for CloudWatch pricing details. To get started with detailed performance statistics for Amazon EC2 instance store volumes in CloudWatch, see Collect Amazon EC2 instance store volume NVMe driver metrics in the Amazon CloudWatch User Guide. To learn more about detailed performance statistics for Amazon EC2 instance store volumes, see Amazon EC2 instance store volumes in the Amazon EC2 User Guide.
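As a rough sketch of how you might consume these metrics, the snippet below lists what the agent publishes under its default CWAgent namespace and sets an alarm. The queue-depth metric name used in the alarm is a placeholder, not the agent's actual metric name, so substitute the names your agent emits.

```python
# Minimal sketch (assumptions: the CloudWatch agent already publishes to its
# default "CWAgent" namespace; the alarm's metric name is a placeholder --
# check the list_metrics output for the names your agent actually emits).
import boto3

cloudwatch = boto3.client("cloudwatch")

# Discover the metrics the agent is publishing.
paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(Namespace="CWAgent"):
    for metric in page["Metrics"]:
        print(metric["MetricName"], metric["Dimensions"])

# Alarm on a hypothetical queue-depth metric once you know its real name and dimensions.
cloudwatch.put_metric_alarm(
    AlarmName="nvme-instance-store-queue-depth",
    Namespace="CWAgent",
    MetricName="example_nvme_queue_depth",  # placeholder metric name
    Statistic="Average",
    Period=60,
    EvaluationPeriods=5,
    Threshold=32,
    ComparisonOperator="GreaterThanThreshold",
)
```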
AWS Config now supports 49 additional AWS resource types across key services, including Amazon EC2, Amazon Bedrock, and Amazon SageMaker. This expansion provides greater coverage of your AWS environment, enabling you to more effectively discover, assess, audit, and remediate an even broader range of resources. With this launch, if you have enabled recording for all resource types, AWS Config automatically tracks these new additions. The newly supported resource types are also available in Config rules and Config aggregators. You can now use AWS Config to monitor the following newly supported resource types in all AWS Regions where they are available (a minimal query example follows the list):
Resource Types
AWS::ApiGateway::DomainName
AWS::ApiGateway::Method
AWS::ApiGateway::UsagePlan
AWS::AppConfig::Extension
AWS::Bedrock::ApplicationInferenceProfile
AWS::Bedrock::Prompt
AWS::BedrockAgentCore::BrowserCustom
AWS::BedrockAgentCore::CodeInterpreterCustom
AWS::BedrockAgentCore::Runtime
AWS::CloudFormation::LambdaHook
AWS::CloudFormation::StackSet
AWS::Comprehend::Flywheel
AWS::Config::AggregationAuthorization
AWS::DataSync::Agent
AWS::Deadline::Fleet
AWS::Deadline::QueueFleetAssociation
AWS::EC2::IPAMPoolCidr
AWS::EC2::SubnetNetworkAclAssociation
AWS::EC2::VPCGatewayAttachment
AWS::ECR::RepositoryCreationTemplate
AWS::ElasticLoadBalancingV2::TargetGroup
AWS::EMR::Studio
AWS::EMRContainers::VirtualCluster
AWS::EMRServerless::Application
AWS::EntityResolution::MatchingWorkflow
AWS::Glue::Registry
AWS::IoTCoreDeviceAdvisor::SuiteDefinition
AWS::MediaPackageV2::Channel
AWS::MediaPackageV2::ChannelGroup
AWS::MediaTailor::LiveSource
AWS::MSK::ServerlessCluster
AWS::PaymentCryptography::Alias
AWS::PaymentCryptography::Key
AWS::RolesAnywhere::CRL
AWS::RolesAnywhere::Profile
AWS::S3::AccessGrant
AWS::S3::AccessGrantsInstance
AWS::S3::AccessGrantsLocation
AWS::SageMaker::DataQualityJobDefinition
AWS::SageMaker::MlflowTrackingServer
AWS::SageMaker::ModelBiasJobDefinition
AWS::SageMaker::ModelExplainabilityJobDefinition
AWS::SageMaker::ModelQualityJobDefinition
AWS::SageMaker::MonitoringSchedule
AWS::SageMaker::StudioLifecycleConfig
AWS::SecretsManager::RotationSchedule
AWS::SES::DedicatedIpPool
AWS::SES::MailManagerTrafficPolicy
AWS::SSM::ResourceDataSync
To view the complete list of AWS Config supported resource types, see the supported resource types page.
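To illustrate, here is a minimal sketch, assuming recording is already enabled, that uses the Config API via boto3 to discover resources of one of the newly supported types from the list above and read their latest configuration items.

```python
# Minimal sketch (assumption: configuration recording for all resource types is
# already enabled): discover and inspect resources of a newly supported type.
import boto3

config = boto3.client("config")

# List recorded resources of one newly supported type from the list above.
discovered = config.list_discovered_resources(resourceType="AWS::Bedrock::Prompt")

for resource in discovered.get("resourceIdentifiers", []):
    # Fetch the most recent configuration item for each resource.
    history = config.get_resource_config_history(
        resourceType=resource["resourceType"],
        resourceId=resource["resourceId"],
        limit=1,
    )
    for item in history.get("configurationItems", []):
        print(item["resourceType"], item["resourceId"], item["configurationItemStatus"])
```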
You can now monitor Mountpoint operations in observability tools such as Amazon CloudWatch, Prometheus, and Grafana. With this launch, Mountpoint emits near real-time metrics, such as request count and request latency, using the OpenTelemetry Protocol (OTLP), an open-source data transmission protocol. This means you can use applications such as the CloudWatch agent or the OpenTelemetry (OTel) collector to publish the metrics into observability tools and create dashboards for monitoring and troubleshooting. Previously, Mountpoint emitted operational data into log files, and you needed custom tooling to parse the log files for insights. Now, when you mount your Amazon S3 bucket, you can configure Mountpoint to publish metrics to an observability tool and proactively monitor issues that might impact your applications. For example, you can check whether an application is unable to access S3 due to permission issues by analyzing the S3 request error metric, which provides error types at Amazon EC2 instance granularity. Follow the step-by-step instructions to set up the CloudWatch agent or the OTel collector and configure Mountpoint to publish metrics into an observability tool. For more information, visit the Mountpoint for Amazon S3 GitHub repository, the Mountpoint product page, and the Mountpoint for Amazon S3 CSI driver GitHub page.
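As a rough sketch of the troubleshooting workflow described above, the snippet below queries CloudWatch for a request-error metric; the namespace and metric name are placeholders that depend on how your CloudWatch agent or OTel collector publishes Mountpoint's OTLP metrics.

```python
# Minimal sketch (assumptions: the namespace and metric name are placeholders --
# they depend on how you configure the CloudWatch agent/OTel collector to publish
# Mountpoint's OTLP metrics): query the request-error metric to spot permission issues.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.datetime.now(datetime.timezone.utc)

response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "s3_request_errors",
            "MetricStat": {
                "Metric": {
                    "Namespace": "Mountpoint",                  # placeholder namespace
                    "MetricName": "example_s3_request_errors",  # placeholder metric name
                },
                "Period": 60,
                "Stat": "Sum",
            },
        }
    ],
    StartTime=now - datetime.timedelta(hours=1),
    EndTime=now,
)
print(response["MetricDataResults"][0]["Values"])
```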
What is Google Workspace? Google Workspace is a cloud-based productivity suite that helps teams communicate, collaborate and get things done from anywhere and on any device. It's simple to set up, use and manage, so your business can focus on what really matters.
Here are some highlights:
Business email for your domain: Look professional and communicate as you@yourcompany.com. Gmail's simple features help you build your brand while getting more done.
Access from any location or device: Check emails, share files, edit documents, hold video meetings and more, whether you're at work, at home or on the move. You can pick up where you left off from a computer, tablet or phone.
Enterprise-level management tools: Robust admin settings give you total command over users, devices, security and more.
Sign up using my link https://referworkspace.app.goo.gl/Q371 and get a 14-day trial, and message me to get an exclusive discount when you try Google Workspace for your business.
Google Workspace Business Standard promotion codes for the Americas:
63F733CLLY7R7MM
63F7D7CPD9XXUVT
63FLKQHWV3AEEE6
63JGLWWK36CP7WM
Email me for more promo codes.
submitted by /u/Content-Word-7673 [link] [comments]
I agree that Gane was getting the better of Tom in the first round, but the significant-strikes count was 30 to 27 in favor of the Frenchman. Moreover, we will never know what could have been once Tom was poked by Gane at that point. What do you think would have happened? I believe Tom would have recovered in time to finish Gane by the third. Read more: https://mmasucka.com/jon-jones-labels-tom-aspinall-incredibly-overrated/ submitted by /u/One-Faithlessness730 [link] [comments]