AWS Certification Exam Prep: DynamoDB Facts, Summaries and Questions/Answers.

DynamoDB


AWS Certification Exam Prep: DynamoDB facts and summaries, AWS DynamoDB Top 10 Questions and Answers Dump

Definition 1: Amazon DynamoDB is a fully managed proprietary NoSQL database service that supports key-value and document data structures and is offered by Amazon.com as part of the Amazon Web Services portfolio. DynamoDB exposes a similar data model to, and derives its name from, Dynamo, but has a different underlying implementation: Dynamo had a multi-master design requiring the client to resolve version conflicts, whereas DynamoDB uses synchronous replication across multiple data centers for high durability and availability.

Definition 2: DynamoDB is a fast and flexible non-relational database service for any scale. DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS so that they don’t have to worry about hardware provisioning, setup and configuration, throughput capacity planning, replication, software patching, or cluster scaling.

Amazon DynamoDB explained

  • Fully Managed
  • Fast, consistent Performance
  • Fine-grained access control
  • Flexible

AWS DynamoDB Facts and Summaries

  1. Amazon DynamoDB is a low-latency NoSQL database.
  2. DynamoDB consists of Tables, Items, and Attributes
  3. DynamoDb supports both document and key-value data models
  4. Supported document formats in DynamoDB are JSON, HTML, and XML.
  5. DynamoDB has 2 types of Primary Keys: Partition Key and combination of Partition Key + Sort Key (Composite Key)
  6. DynamoDB has 2 consistency models: Strongly Consistent / Eventually Consistent
  7. DynamoDB Access is controlled using IAM policies.
  8. DynamoDB provides fine-grained access control using the IAM condition key dynamodb:LeadingKeys, which lets users access only the items whose partition key value matches their user ID (a policy sketch follows this list).
  9. DynamoDB Indexes enable fast queries on specific data columns
  10. DynamoDB indexes give you a different view of your data based on alternative Partition / Sort Keys.
  11. DynamoDB Local Secondary Indexes must be created when you create your table; they have the same partition key as your table but a different sort key.
  12. DynamoDB Global Secondary Indexes can be created at any time: at table creation or later. They can have a different partition key and a different sort key from your table.
  13. A DynamoDB query operation finds items in a table using only the primary Key attribute: You provide the Primary Key name and a distinct value to search for.
  14. A DynamoDB Scan operation examines every item in the table. By default, it returns all data attributes for every item.
  15. DynamoDB Query operation is generally more efficient than a Scan.
  16. With DynamoDB, you can reduce the impact of a query or scan by setting a smaller page size which uses fewer read operations.
  17. To optimize DynamoDB performance, isolate scan operations to specific tables and segregate them from your mission-critical traffic.
  18. To optimize DynamoDB performance, try Parallel scans rather than the default sequential scan.
  19. To optimize DynamoDB performance: Avoid using scan operations if you can: design tables in a way that you can use Query, Get, or BatchGetItems APIs.
  20. When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity.
  21. DynamoDB provisioned throughput is measured in capacity units.
    • 1 Write Capacity Unit = one 1 KB write per second.
    • 1 Read Capacity Unit = one 4 KB strongly consistent read or two 4 KB eventually consistent reads per second. Eventually consistent reads give the maximum read performance.
  22. What is the maximum throughput that can be provisioned for a single DynamoDB table?
    DynamoDB is designed to scale without limits. However, if you want to exceed throughput rates of 10,000 write capacity units or 10,000 read capacity units for an individual table, you must Contact AWS to increase it.
    If you want to provision more than 20,000 write capacity units or 20,000 read capacity units from a single subscriber account, you must first contact AWS to request a limit increase.
  23. Dynamo Db Performance: DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications.
    • As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds
    • DAX improves response times for Eventually Consistent reads only.
    • With DAX, you point your API calls to the DAX cluster instead of your table.
    • If the item you are querying is in the cache, DAX will return it; otherwise, it will perform an eventually consistent GetItem operation against your DynamoDB table.
    • DAX reduces operational and application complexity by providing a managed service that is API compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
    • DAX is not suitable for write-intensive applications or applications that require Strongly Consistent reads.
    • For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
  24. Dynamo Db Performance: ElastiCache
    • In-memory cache sits between your application and database
    • 2 different caching strategies: Lazy loading and Write Through: Lazy loading only caches the data when it is requested
    • Elasticache Node failures are not fatal, just lots of cache misses
    • Avoid stale data by implementing a TTL.
    • Write-Through strategy writes data into cache whenever there is a change to the database. Data is never stale
    • Write-Through penalty: Each write involves a write to the cache. Elasticache node failure means that data is missing until added or updated in the database.
    • Elasticache is wasted resources if most of the data is never used.
  25. Time To Live (TTL) for DynamoDB allows you to define when items in a table expire so that they can be automatically deleted from the database. TTL is provided at no extra cost as a way to reduce storage usage and reduce the cost of storing irrelevant data without using provisioned throughput. With TTL enabled on a table, you can set a timestamp for deletion on a per-item basis, allowing you to limit storage usage to only those records that are relevant.
  26. DynamoDB Security: DynamoDB uses the CMK to generate and encrypt a unique data key for the table, known as the table key. With DynamoDB, an AWS owned or AWS managed CMK can be used to generate and encrypt keys. The AWS owned CMK is free of charge, while the AWS managed CMK is chargeable. Customer managed CMKs are not supported with encryption at rest.
  27. Amazon DynamoDB offers fully managed encryption at rest. DynamoDB encryption at rest provides enhanced security by encrypting your data at rest using an AWS Key Management Service (AWS KMS) managed encryption key for DynamoDB. This functionality eliminates the operational burden and complexity involved in protecting sensitive data.
  28. DynamoDB is an alternative solution for session-state storage: its low data-access latency makes it well suited as a data store for session management.
  29. DynamoDB Streams Use Cases and Design Patterns:
    How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
    How do you trigger an event based on a particular transaction?
    How do you audit or archive transactions?
    How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?
    As a NoSQL database, DynamoDB was not originally designed to support transactions. Although client-side libraries are available to mimic transaction capabilities, they are neither scalable nor cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation, partly because the library stores metadata to manage transactions so that they remain consistent and can be rolled back before commit.

    You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.

    AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region

  30. How many global secondary indexes are allowed per table (by default)? 20
  31. What is one key difference between a global secondary index and a local secondary index?
    A local secondary index must have the same partition key as the main table
  32. How many tables can an AWS account have per region? 256
  33. How many secondary indexes (global and local combined) are allowed per table? (by default): 25
    You can define up to 5 local secondary indexes and 20 global secondary indexes per table (by default) – for a total of 25.
  34. How can you increase your DynamoDB table limit in a region?
    By contacting AWS and requesting a limit increase
  35. For any AWS account, there is an initial limit of 256 tables per region.
  36. The minimum length of a partition key value is 1 byte. The maximum length is 2048 bytes.
  37. The minimum length of a sort key value is 1 byte. The maximum length is 1024 bytes.
  38. For tables with local secondary indexes, there is a 10 GB size limit per partition key value. A table with local secondary indexes can store any number of items, as long as the total size for any one partition key value does not exceed 10 GB.
  39. The following diagram shows a local secondary index named LastPostIndex. Note that the partition key is the same as that of the Thread table, but the sort key is LastPostDateTime.

    AWS DynamoDB secondary indexes example (diagram)
  40. Relational vs Non Relational (SQL vs NoSQL)
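
Fact 8 above mentions fine-grained access control with the dynamodb:LeadingKeys condition key. Below is a minimal sketch (Python/boto3) of how such a policy could be created; the account ID, table name, policy name, and the Cognito identity variable are placeholders chosen for illustration, not part of the original article.

```python
import json
import boto3

# Hypothetical account ID, table name, and policy name, used purely for illustration.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserSessions",
            "Condition": {
                # Only items whose partition key equals the caller's Cognito identity ID
                "ForAllValues:StringEquals": {
                    "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
                }
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="DynamoDBOwnItemsOnly",
    PolicyDocument=json.dumps(policy_document),
)
```

The policy is then attached to the role that federated users assume, so each user can only read and write items under their own partition key.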

Top
Reference: AWS DynamoDB


AWS DynamoDB Questions and Answers Dumps

Q0: What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX
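
As a rough illustration of answer D, here is a sketch of reading through a DAX cluster from Python. It assumes the amazon-dax-client package and follows the shape of AWS's TryDax sample; the cluster endpoint, table, and key are placeholders, and the exact constructor arguments may differ by client version.

```python
import botocore.session
from amazondax import AmazonDaxClient

# Placeholder DAX cluster discovery endpoint.
session = botocore.session.get_session()
dax = AmazonDaxClient(
    session,
    region_name="us-east-1",
    endpoints=["my-dax-cluster.abc123.dax-clusters.us-east-1.amazonaws.com:8111"],
)

# Same low-level call shape as the boto3 DynamoDB client; reads are served from
# the in-memory cache when possible, with a miss falling through to the table.
item = dax.get_item(
    TableName="ProductCatalog",
    Key={"Id": {"N": "101"}},
)
print(item.get("Item"))
```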


Top



Q2: A security system monitors 600 cameras, saving image metadata every minute to an Amazon DynamoDB table. Each sample is 1 KB of data, and the writes are evenly distributed over time. How much write throughput is required for the target table?

  • A. 6000
  • B. 10
  • C. 3600
  • D. 600

B. When you specify the write capacity of a DynamoDB table, you specify it as the number of 1 KB writes per second. In the question above, since each camera writes once per minute, we divide 600 by 60 to get the number of 1 KB writes per second, which gives 10.

You can specify the Write capacity in the Capacity tab of the DynamoDB table.

Reference: AWS working with tables
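
The arithmetic as a quick worked example (Python), using the numbers from the question:

```python
cameras = 600
writes_per_camera_per_minute = 1
item_size_kb = 1                        # each metadata sample is 1 KB, exactly one write unit

writes_per_second = cameras * writes_per_camera_per_minute / 60   # 10.0
wcu_required = writes_per_second * item_size_kb                   # 10.0 -> provision 10 WCUs
print(wcu_required)
```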

Q3: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?

  • A. CustomerID
  • B. CustomerName
  • C. Location
  • D. Age


Answer- A
Use high-cardinality attributes. These are attributes that have distinct values for each item, like e-mailid, employee_no, customerid, sessionid, orderid, and so on..
Use composite attributes. Try to combine more than one attribute to form a unique key.
Reference: Choosing the right DynamoDB Partition Key

Top


Q4: A DynamoDB table is set with a Read Throughput capacity of 5 RCU. Which of the following read configuration will provide us the maximum read throughput?

  • A. Read capacity set to 5 for 4KB reads of data at strong consistency
  • B. Read capacity set to 5 for 4KB reads of data at eventual consistency
  • C. Read capacity set to 15 for 1KB reads of data at strong consistency
  • D. Read capacity set to 5 for 1KB reads of data at eventual consistency
Answer: B.
The throughput for option B works out as follows:
Read capacity (5) × item size (4 KB) = 20 KB per second.
Because the reads are eventually consistent, the throughput doubles: 20 × 2 = 40 KB per second, the highest of the four options.

Reference: Read/Write Capacity Mode
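
A small worked example (Python) that applies the same rules (reads metered in 4 KB blocks, eventually consistent reads doubled) to all four options; the helper function is just for illustration:

```python
import math

def read_throughput_kb_per_sec(rcu, item_kb, strongly_consistent):
    """Rough per-second read throughput for a given provisioned RCU."""
    units_per_read = math.ceil(item_kb / 4)        # reads are metered in 4 KB blocks
    reads_per_sec = rcu / units_per_read
    if not strongly_consistent:
        reads_per_sec *= 2                         # eventual consistency doubles the rate
    return reads_per_sec * item_kb

print(read_throughput_kb_per_sec(5, 4, True))      # A: 20 KB/s
print(read_throughput_kb_per_sec(5, 4, False))     # B: 40 KB/s  <- maximum
print(read_throughput_kb_per_sec(15, 1, True))     # C: 15 KB/s
print(read_throughput_kb_per_sec(5, 1, False))     # D: 10 KB/s
```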

Top

Q5: Your team is developing a solution that will make use of DynamoDB tables. Due to the nature of the application, the data is needed across a couple of regions across the world. Which of the following would help reduce the latency of requests to DynamoDB from different regions?

  • A. Enable Multi-AZ for the DynamoDB table
  • B. Enable global tables for DynamoDB
  • C. Enable Indexes for the table
  • D. Increase the read and write throughput for the table
Answer: B
Amazon DynamoDB global tables provide a fully managed solution for deploying a multi-region, multi-master database, without having to build and maintain your own replication solution. When you create a global table, you specify the AWS Regions where you want the table to be available. DynamoDB performs all of the necessary tasks to create identical tables in these Regions and propagates ongoing data changes to all of them.
Reference: Global Tables
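
A minimal sketch (Python/boto3) of the original (version 2017.11.29) global tables API. It assumes a table with the placeholder name 'SessionData' already exists in both Regions with DynamoDB Streams enabled (NEW_AND_OLD_IMAGES), which that API requires; newer global tables are configured through UpdateTable replica settings instead.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Link the per-region tables into one global table.
dynamodb.create_global_table(
    GlobalTableName="SessionData",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
    ],
)
```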

Top

Q6: An application is currently accessing a DynamoDB table, and its queries are performing well. Changes have been made to the application, and now its performance is starting to degrade. After reviewing the changes, you see that the queries are using an attribute which is not the partition key. Which of the following would be the adequate change to make to resolve the issue?

  • A. Add an index for the DynamoDB table
  • B. Change all the queries to ensure they use the partition key
  • C. Enable global tables for DynamoDB
  • D. Change the read capacity on the table


Answer: A
Amazon DynamoDB provides fast access to items in a table by specifying primary key values. However, many applications might benefit from having one or more secondary (or alternate) keys available, to allow efficient access to data with attributes other than the primary key. To address this, you can create one or more secondary indexes on a table, and issue Query or Scan requests against these indexes.

A secondary index is a data structure that contains a subset of attributes from a table, along with an alternate key to support Query operations. You can retrieve data from the index using a Query, in much the same way as you use Query with a table. A table can have multiple secondary indexes, which gives your applications access to many different query patterns.

Reference: Improving Data Access with Secondary Indexes
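
A hedged sketch (Python/boto3) of adding a global secondary index to an existing table and then querying it; 'Orders' and 'CustomerEmail' are placeholder names for the table and the non-key attribute the degraded queries filter on.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Create a GSI on the attribute the application now queries by.
dynamodb.update_table(
    TableName="Orders",
    AttributeDefinitions=[{"AttributeName": "CustomerEmail", "AttributeType": "S"}],
    GlobalSecondaryIndexUpdates=[{
        "Create": {
            "IndexName": "CustomerEmail-index",
            "KeySchema": [{"AttributeName": "CustomerEmail", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
            "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        }
    }],
)

# Once the index is ACTIVE, queries can target it instead of scanning the table.
response = dynamodb.query(
    TableName="Orders",
    IndexName="CustomerEmail-index",
    KeyConditionExpression="CustomerEmail = :e",
    ExpressionAttributeValues={":e": {"S": "jane@example.com"}},
)
print(response["Items"])
```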

Top

Q7: Company B has created an e-commerce site using DynamoDB and is designing a products table that includes items purchased and the users who purchased the item.
When creating a primary key on a table which of the following would be the best attribute for the partition key? Select the BEST possible answer.

  • A. None of these are correct.
  • B. user_id where there are many users to few products
  • C. category_id where there are few categories to many products
  • D. product_id where there are few products to many users
Answer: B.
When designing tables, it is important for the data to be distributed evenly across the partition key space. Best practice is to choose a partition key with many distinct values relative to the number of items it maps to, for example many users to few products. A poor choice would be a partition key of product_id where there are few products but many users.
Reference: Partition Keys and Sort Keys

Top

Q8: Which API call can be used to retrieve up to 100 items at a time or 16 MB of data from a DynamoDB table?

  • A. BatchItem
  • B. GetItem
  • C. BatchGetItem
  • D. ChunkGetItem
Answer: C. BatchGetItem

The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.

A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem will return a partial result if the response size limit is exceeded, the table’s provisioned throughput is exceeded, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.

Reference: API-Specific Limits
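
A minimal sketch (Python/boto3) of BatchGetItem with the UnprocessedKeys retry loop described above; 'Music', 'Artist', and 'SongTitle' are placeholder table and attribute names, and a production loop would add exponential backoff.

```python
import boto3

dynamodb = boto3.client("dynamodb")

request_items = {
    "Music": {
        "Keys": [
            {"Artist": {"S": "No One You Know"}, "SongTitle": {"S": "Call Me Today"}},
            {"Artist": {"S": "Acme Band"}, "SongTitle": {"S": "Happy Day"}},
        ]
    }
}

items = []
while request_items:
    response = dynamodb.batch_get_item(RequestItems=request_items)
    items.extend(response["Responses"].get("Music", []))
    # Retry anything DynamoDB could not process this round (size or throughput limits).
    request_items = response.get("UnprocessedKeys") or None

print(len(items))
```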

Top


Q9: Which DynamoDB limits can be raised by contacting AWS support?

  • A. The number of hash keys per account
  • B. The maximum storage used per account
  • C. The number of tables per account
  • D. The number of local secondary indexes per account
  • E. The number of provisioned throughput units per account


Answer: C. and E.

For any AWS account, there is an initial limit of 256 tables per region.
AWS places some default limits on the throughput you can provision.
These are the limits unless you request a higher amount.
To request a service limit increase see https://aws.amazon.com/support.

Reference: Limits in DynamoDB


Top

Q10: Which approach below provides the least impact to provisioned throughput on the “Product” table?

  • A. Create an “Images” DynamoDB table to store the Image with a foreign key constraint to
    the “Product” table
  • B. Add an image data type to the “Product” table to store the images in binary format
  • C. Serialize the image and store it in multiple DynamoDB tables
  • D. Store the images in Amazon S3 and add an S3 URL pointer to the “Product” table item
    for each image


Answer: D.

Amazon DynamoDB currently limits the size of each item that you store in a table (see Limits in DynamoDB). If your application needs to store more data in an item than the DynamoDB size limit permits, you can try compressing one or more large attributes, or you can store them as an object in Amazon Simple Storage Service (Amazon S3) and store the Amazon S3 object identifier in your DynamoDB item.
Compressing large attribute values can let them fit within item limits in DynamoDB and reduce your storage costs. Compression algorithms such as GZIP or LZO produce binary output that you can then store in a Binary attribute type.
Reference: Best Practices for Storing Large Items and Attributes
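
A sketch (Python/boto3) of the pattern in answer D: store the image in S3 and keep only a small pointer in the DynamoDB item. The bucket name, table name, key layout, and attribute names are placeholders.

```python
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("Product")

product_id = "12345"
image_key = f"product-images/{product_id}.jpg"

# Upload the large binary to S3.
s3.upload_file("photo.jpg", "product-images-bucket", image_key)

# Store only the S3 key in the item, keeping it far below DynamoDB's item size limit.
table.update_item(
    Key={"product_id": product_id},
    UpdateExpression="SET image_s3_key = :k",
    ExpressionAttributeValues={":k": image_key},
)
```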


Top

Q11: You’re creating a DynamoDB database for hosting forums. Your “thread” table contains the forum name, and each “forum name” can have one or more “subjects”. What primary key type would you give the thread table in order to allow more than one subject to be tied to the forum primary key name?

  • A. Hash
  • B. Range and Hash
  • C. Primary and Range
  • D. Hash and Range

Answer: D.
Each forum name can have one or more subjects. In this case, ForumName is the hash attribute and Subject is the range attribute.

Reference: DynamoDB keys
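
A minimal table-creation sketch (Python/boto3) for this composite key, with ForumName as the hash (partition) key and Subject as the range (sort) key; on-demand billing is used here just to keep the example short.

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Thread",
    AttributeDefinitions=[
        {"AttributeName": "ForumName", "AttributeType": "S"},
        {"AttributeName": "Subject", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "ForumName", "KeyType": "HASH"},   # partition (hash) key
        {"AttributeName": "Subject", "KeyType": "RANGE"},    # sort (range) key
    ],
    BillingMode="PAY_PER_REQUEST",
)
```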


Top

Amazon Aurora explained:

  • High scalability
  • High availability and durability
  • High Performance
  • Multi Region

Amazon ElastiCache Explained

  • In-Memory data store
  • High availability and reliability
  • Fully managed
  • Supports two popular open-source engines (Redis and Memcached)

Amazon Redshift explained

  • Fast, fully managed, petabyte-scale data warehouse
  • Supports wide range of open data formats
  • Allows you to run SQL queries against large unstructured data in Amazon Simple Storage Service
  • Integrates with popular Business Intelligence (BI) and Extract, Transform, Load (ETL) solutions.

Amazon Neptune Explained

  • Fully managed graph database
  • Supports open graph APIs
  • Used in Social Networking

AWS Certification Exam Prep: S3 Facts, Summaries, Questions and Answers



AWS S3 Facts and summaries, AWS S3 Top 10 Questions and Answers Dump

Definition 1: Amazon S3 or Amazon Simple Storage Service is a “simple storage service” offered by Amazon Web Services that provides object storage through a web service interface. Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to run its global e-commerce network.

Definition 2: Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

AWS S3 Explained graphically:



AWS S3 Facts and summaries

  1. S3 is a universal namespace, meaning each S3 bucket you create must have a unique name that is not being used by anyone else in the world.
  2. S3 is object-based, i.e. it allows you to upload files.
  3. Files can be from 0 Bytes to 5 TB
  4. What is the maximum length, in bytes, of a DynamoDB range primary key attribute value?
    The maximum length of a DynamoDB range (sort) primary key attribute value is 1024 bytes; for a partition key value it is 2048 bytes.
  5. S3 has unlimited storage.
  6. Files are stored in Buckets.
  7. Read after write consistency for PUTS of new Objects
  8. Eventual Consistency for overwrite PUTS and DELETES (can take some time to propagate)
  9. S3 Storage Classes/Tiers:
    • S3 Standard (durable, immediately available, frequently accesses)
    • Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering): It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access.
    • S3 Standard-Infrequent Access – S3 Standard-IA (durable, immediately available, infrequently accessed)
    • S3 One Zone-Infrequent Access – S3 One Zone-IA: same as Standard-IA, but data is stored in a single Availability Zone only
    • S3 – Reduced Redundancy Storage (data that is easily reproducible, such as thumbnails, etc.)
    • Glacier – archived data, where retrievals can take 3-5 hours

    You can have a bucket that has different objects stored in S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA.

  10. The default URL for S3 hosted websites lists the bucket name first followed by s3-website-region.amazonaws.com . Example: enoumen.com.s3-website-us-east-1.amazonaws.com
  11. Core fundamentals of an S3 object
    • Key (name)
    • Value (data)
    • Version (ID)
    • Metadata
    • Sub-resources (used to manage bucket-specific configuration)
      • Bucket Policies, ACLs,
      • CORS
      • Transfer Acceleration
  12. Object-based storage only for files
  13. Not suitable to install OS on.
  14. Successful uploads will generate a HTTP 200 status code.
  15. S3 Security – Summary
    • By default, all newly created buckets are PRIVATE.
    • You can set up access control to your buckets using:
      • Bucket Policies – Applied at the bucket level
      • Access Control Lists – Applied at an object level.
    • S3 buckets can be configured to create access logs, which log all requests made to the S3 bucket. These logs can be written to another bucket.
  16. S3 Encryption
    • Encryption In-Transit (SSL/TLS)
    • Encryption At Rest:
      • Server side Encryption (SSE-S3, SSE-KMS, SSE-C)
      • Client Side Encryption
    • Remember that we can use a bucket policy to prevent unencrypted files from being uploaded, by creating a policy which only allows requests that include the x-amz-server-side-encryption parameter in the request header (a sketch follows this list).
  17. S3 CORS (Cross Origin Resource Sharing):
    CORS defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.

    • Used to enable cross origin access for your AWS resources, e.g. S3 hosted website accessing javascript or image files located in another bucket. By default, resources in one bucket cannot access resources located in another. To allow this we need to configure CORS on the bucket being accessed and enable access for the origin (bucket) attempting to access.
    • Always use the S3 website URL, not the regular bucket URL: e.g. http://acloudguru.s3-website-eu-west-2.amazonaws.com rather than https://s3-eu-west-2.amazonaws.com/acloudguru
  18. S3 CloudFront:
    • Edge locations are not just READ only – you can WRITE to them too (i.e put an object on to them.)
    • Objects are cached for the life of the TTL (Time to Live)
    • You can clear cached objects, but you will be charged. (Invalidation)
  19. S3 Performance optimization – 2 main approaches to Performance Optimization for S3:
    • GET-Intensive Workloads – Use Cloudfront
    • Mixed Workload – Avoid sequential key names for your S3 objects. Instead, add a random prefix like a hex hash to the key name to prevent multiple objects from being stored on the same partition.
      • mybucket/7eh4-2019-03-04-15-00-00/cust1234234/photo1.jpg
      • mybucket/h35d-2019-03-04-15-00-00/cust1234234/photo2.jpg
      • mybucket/o3n6-2019-03-04-15-00-00/cust1234234/photo3.jpg
  20. The best way to handle large objects uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts.
  21. You can enable versioning on a bucket, even if that bucket already has objects in it. The already existing objects, though, will show their versions as null. All new objects will have version IDs.
  22. Bucket names cannot start with a . or - character. S3 bucket names can contain both . and - characters, but there can only be one . or - between labels. E.g. mybucket-com and mybucket.com are valid names, but mybucket--com and mybucket..com are not valid bucket names.
  23. What is the maximum number of S3 buckets allowed per AWS account (by default)? 100
  24. You successfully upload an item to the us-east-1 region. You then immediately make another API call and attempt to read the object. What will happen?
    All AWS regions now have read-after-write consistency for PUT operations of new objects. Read-after-write consistency allows you to retrieve objects immediately after creation in Amazon S3. Other actions still follow the eventual consistency model (where you will sometimes get stale results if you have recently made changes)
  25. S3 bucket policies require a Principal to be defined. Review the access policy elements in the AWS documentation.
  26. What checksums does Amazon S3 employ to detect data corruption?

    Amazon S3 uses a combination of Content-MD5 checksums and cyclic redundancy checks (CRCs) to detect data corruption. Amazon S3 performs these checksums on data at rest and repairs any corruption using redundant data. In addition, the service calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.
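
Fact 16 above mentions using a bucket policy to reject unencrypted uploads. Below is a minimal sketch (Python/boto3) of that idea; the bucket name is a placeholder, and AWS's published pattern typically adds a second, Null-condition statement to also catch requests that omit the header entirely.

```python
import json
import boto3

bucket = "my-secure-bucket"   # placeholder bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # Reject any PUT that does not declare SSE-S3 (AES256) in its headers.
            "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```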

Top
Reference: AWS S3

AWS S3 Top 10 Questions and Answers Dump

Q0: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes a longer than expected time to upload the object. You want to improve the performance of the application. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket


C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
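
In practice, boto3 can drive the Multipart upload API for you through its transfer configuration; this is a sketch with placeholder bucket and file names, and the threshold and concurrency values are arbitrary examples.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Anything above the threshold is automatically split into parallel multipart uploads.
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,   # 8 MB
    multipart_chunksize=8 * 1024 * 1024,
    max_concurrency=10,
)

s3.upload_file("video-500mb.mp4", "my-upload-bucket", "videos/video-500mb.mp4", Config=config)
```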


Top



Q2: You are using AWS SAM templates to deploy a serverless application. Which of the following resource will embed application from Amazon S3 buckets?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::Layerversion
  • D. AWS::Serverless::Function


Answer – B
The AWS::Serverless::Application resource in an AWS SAM template is used to embed applications from Amazon S3 buckets.
Reference: Declaring Serverless Resources

Top

Q3: A static website has been hosted on a bucket and is now being accessed by users. One of the web pages’ JavaScript sections has been changed to access data hosted in another S3 bucket. Now that same web page no longer loads in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket


Answer – C

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.


Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.

Reference: Cross-Origin Resource Sharing (CORS)
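
A minimal sketch (Python/boto3) of applying a CORS rule like the one described in Scenario 1; the bucket name and allowed origin are the placeholder values from that scenario.

```python
import boto3

s3 = boto3.client("s3")

cors_configuration = {
    "CORSRules": [
        {
            # Allow the static-website endpoint's JavaScript to call this bucket.
            "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
            "AllowedMethods": ["GET", "PUT"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }
    ]
}

s3.put_bucket_cors(Bucket="website", CORSConfiguration=cors_configuration)
```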


Top

Q4: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user Images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS) to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.


Answer- C
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on key values generated from the user ID.

Reference: The AWS Security Token Service (STS)
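
A sketch (Python/boto3) of the idea in answer C: exchange an application-level login for temporary STS credentials and hand back a short-lived link to the user's own image. The role ARN, session name, bucket, and key are placeholders, and a real role policy would scope access per user.

```python
import boto3

sts = boto3.client("sts")

creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/PhotoAppS3Access",   # placeholder role
    RoleSessionName="user-42",
    DurationSeconds=900,
)["Credentials"]

# Build an S3 client from the temporary credentials.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Hand the app a short-lived URL for the user's image instead of long-term keys.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "photo-app-images", "Key": "users/42/photo1.jpg"},
    ExpiresIn=900,
)
print(url)
```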


Top

Q5: Both ACLs and Bucket Policies can be used to grant access to S3 buckets. Which of the following statements is true about ACLs and Bucket policies?

  • A. Bucket Policies are Written in JSON and ACLs are written in XML
  • B. ACLs can be attached to S3 objects or S3 Buckets
  • C. Bucket Policies and ACLs are written in JSON
  • D. Bucket policies are only attached to s3 buckets, ACLs are only attached to s3 objects
Answer: A. and B.
Only Bucket Policies are written in JSON, ACLs are written in XML.
While Bucket policies are indeed only attached to S3 buckets, ACLs can be attached to S3 Buckets OR S3 Objects.
Reference:

Top

Q6: What are good options to improve S3 performance when you have significantly high numbers of GET requests?

  • A. Introduce random prefixes to S3 objects
  • B. Introduce random suffixes to S3 objects
  • C. Setup CloudFront for S3 objects
  • D. Migrate commonly used objects to Amazon Glacier
Answer: C
CloudFront caching is an excellent way to avoid putting extra strain on the S3 service and to improve the response times of requests by caching data closer to users at CloudFront edge locations.
S3 Transfer Acceleration optimizes the TCP protocol and adds additional intelligence between the client and the S3 bucket, making S3 Transfer Acceleration a better choice if higher upload throughput is desired. If you have objects that are smaller than 1 GB or if the data set is less than 1 GB in size, you should consider using Amazon CloudFront’s PUT/POST commands for optimal performance.
Reference: Amazon S3 Transfer Acceleration

Top

Q7: If an application is storing hourly log files from thousands of instances from a high traffic
web site, which naming scheme would give optimal performance on S3?

  • A. Sequential
  • B. HH-DD-MM-YYYY-log_instanceID
  • C. YYYY-MM-DD-HH-log_instanceID
  • D. instanceID_log-HH-DD-MM-YYYY
  • E. instanceID_log-YYYY-MM-DD-HH


Answer: A. B. C. D. and E.
Amazon S3 now provides increased performance to support at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data, which can save significant processing time for no additional charge. Each S3 prefix can support these request rates, making it simple to increase performance significantly.
This S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance. That means you can now use logical or sequential naming patterns in S3 object naming without any performance implications.

Reference: Amazon S3 Announces Increased Request Rate Performance



Top

Q8: You are working with the S3 API and receive an error message: 409 Conflict. What is the possible cause of this error?

  • A. You’re attempting to remove a bucket without emptying the contents of the bucket first.
  • B. You’re attempting to upload an object to the bucket that is greater than 5TB in size.
  • C. Your request does not contain the proper metadata.
  • D. Amazon S3 is having internal issues.

Answer:A.

Reference: S3 Error codes
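
A short sketch (Python/boto3) of the fix implied by answer A: empty the bucket before deleting it. The bucket name is a placeholder.

```python
import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("my-old-bucket")   # placeholder bucket name

# A bucket must be empty before DeleteBucket succeeds; otherwise S3 returns 409 Conflict.
bucket.objects.all().delete()
# For versioned buckets, old versions and delete markers must also be removed:
# bucket.object_versions.all().delete()
bucket.delete()
```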

Top

Q9: You created three S3 buckets – “mywebsite.com”, “downloads.mywebsite.com”, and “www.mywebsite.com”. You uploaded your files and enabled static website hosting. You specified both of the default documents under the “enable static website hosting” header. You also set the “Make Public” permission for the objects in each of the three buckets. You create the Route 53 Aliases for the three buckets. You are going to have your end users test your websites by browsing to http://mydomain.com/error.html, http://downloads.mydomain.com/index.html, and http://www.mydomain.com. What problems will your testers encounter?

  • A. http://mydomain.com/error.html will not work because you did not set a value for the error.html file
  • B. There will be no problems, all three sites should work.
  • C. http://www.mywebsite.com will not work because the URL does not include a file name at the end of it.
  • D. http://downloads.mywebsite.com/index.html will not work because the “downloads” prefix is not a supported prefix for S3 websites using Route 53 aliases

Answer: B.
It used to be that the only allowed domain prefix when creating Route 53 aliases for S3 static websites was the “www” prefix. However, this is no longer the case; you can now use other subdomains.

Reference: Hosting a Static Website on Amazon S3

Top

Q10: Which of the following is NOT a common S3 API call?

  • A. UploadPart
  • B. ReadObject
  • C. PutObject
  • D. DownloadBucket

Answer: D.


Reference: s3api

Top



