Pros and Cons of Cloud Computing

Cloud User insurance and Cloud Provider Insurance

Cloud computing is the new big thing in Information Technology. Sooner or later, almost every business will adopt it because of its hosting cost benefits, scalability, and more.

This blog outlines the pros and cons of cloud computing and cloud technology, along with FAQs, facts, and a questions and answers dump about cloud computing.

AWS Cloud Practitioner Exam Prep App – Free


What is cloud computing?

Cloud computing is an information technology paradigm that enables ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a public utility.
Simply put, cloud computing is the delivery of computing services (servers, storage, databases, networking, software, analytics, and intelligence) over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale. You typically pay only for the cloud services you use, helping you lower your operating costs, run your infrastructure more efficiently, and scale as your business needs change.

What are the Pros of using cloud computing? What are the characteristics of cloud computing?


  • Cost effective and time saving: Cloud computing eliminates the capital expense of buying hardware and software and of setting up and running on-site datacenters: the racks of servers, the round-the-clock electricity for power and cooling, and the IT experts needed to manage the infrastructure.
  • Pay-as-you-go pricing: you pay only for the cloud services you use, which helps lower your operating costs.
  • Powerful server capabilities and Performance: The biggest cloud computing services run on a worldwide network of secure datacenters, which are regularly upgraded to the latest generation of fast and efficient computing hardware. This offers several benefits over a single corporate datacenter, including reduced network latency for applications and greater economies of scale.
  • Powerful and scalable server capabilities: The ability to scale elastically; that means delivering the right amount of IT resources (for example, more or less computing power, storage, or bandwidth) right when they are needed, and from the right geographic location.
  • SaaS (Software as a Service): Software as a service is a method for delivering software applications over the Internet, on demand and typically on a subscription basis. With SaaS, cloud providers host and manage the software application and underlying infrastructure, and handle any maintenance, like software upgrades and security patching. Users connect to the application over the Internet, usually with a web browser on their phone, tablet, or PC.
  • PaaS (Platform as a Service): Platform as a service refers to cloud computing services that supply an on-demand environment for developing, testing, delivering, and managing software applications. PaaS is designed to make it easier for developers to quickly create web or mobile apps, without worrying about setting up or managing the underlying infrastructure of servers, storage, network, and databases needed for development.
  • IaaS (Infrastructure as a Service): The most basic category of cloud computing services. With IaaS, you rent IT infrastructure (servers and virtual machines (VMs), storage, networks, operating systems) from a cloud provider on a pay-as-you-go basis.
  • Serverless: Running complex applications without managing a single server. Overlapping with PaaS, serverless computing focuses on building app functionality without spending time continually managing the servers and infrastructure required to do so. The cloud provider handles the setup, capacity planning, and server management for you. Serverless architectures are highly scalable and event-driven, only using resources when a specific function or trigger occurs.
  • Infrastructure provisioning as code: the same infrastructure can be recreated simply by re-running the same provisioning code, in a few clicks (see the sketch after this list).
  • Automatic and reliable data backup and storage: Cloud computing makes data backup, disaster recovery, and business continuity easier and less expensive, because data can be mirrored at multiple redundant sites on the cloud provider’s network.
  • Increased productivity: On-site datacenters typically require a lot of “racking and stacking”: hardware setup, software patching, and other time-consuming IT management chores. Cloud computing removes the need for many of these tasks, so IT teams can spend time on achieving more important business goals.
  • Security: Many cloud providers offer a broad set of policies, technologies, and controls that strengthen your security posture overall, helping protect your data, apps, and infrastructure from potential threats.
  • Speed: Most cloud computing services are provided self-service and on demand, so even vast amounts of computing resources can be provisioned in minutes, typically with just a few mouse clicks, giving businesses a lot of flexibility and taking the pressure off capacity planning.
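
As referenced in the infrastructure-as-code bullet above, here is a minimal sketch of the idea using Python, boto3, and AWS CloudFormation. The stack name, inline template, and region are hypothetical placeholders, not a prescribed setup.

```python
# Minimal sketch: provisioning infrastructure as code with boto3 + CloudFormation.
# Stack name, template contents, and region are hypothetical examples.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        # CloudFormation generates a unique bucket name for this resource.
        "DemoBucket": {"Type": "AWS::S3::Bucket"}
    },
}

cf = boto3.client("cloudformation", region_name="us-east-1")

# Re-running the same template recreates the same infrastructure.
cf.create_stack(StackName="demo-stack", TemplateBody=json.dumps(template))

# Block until the stack is fully provisioned.
cf.get_waiter("stack_create_complete").wait(StackName="demo-stack")
print("Stack created")
```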

What are the Cons of using cloud computing?


  • Privacy: Cloud computing poses privacy concerns because the service provider can access the data in the cloud at any time, and could accidentally or deliberately alter or delete information. Many cloud providers can share information with third parties, if necessary for purposes of law and order, without a warrant; this is permitted in their privacy policies, which users must agree to before they start using cloud services.
  • Security: According to the Cloud Security Alliance, the top three threats in the cloud are Insecure Interfaces and APIs, Data Loss and Leakage, and Hardware Failure, which accounted for 29%, 25%, and 10% of all cloud security outages respectively. Together, these form shared technology vulnerabilities.
  • Ownership of Data: There is the problem of legal ownership of the data (If a user stores some data in the cloud, can the cloud provider profit from it?). Many Terms of Service agreements are silent on the question of ownership.
  • Limited Customization Options: Cloud computing is cheaper because of economies of scale, and, like any outsourced task, you tend to get what you get. A restaurant with a limited menu is cheaper than a personal chef who can cook anything you want.
  • Downtime: Technical outages are inevitable and sometimes occur when cloud service providers (CSPs) become overwhelmed in the process of serving their clients. This may result in temporary business suspension.
  • Insurance: It can be expensive to insure the customer and business data and infrastructure hosted in the cloud. Cyber insurance is necessary when using the cloud.
  • Other concerns of cloud computing:

      • Users with specific records-keeping requirements, such as public agencies that must retain electronic records according to statute, may encounter complications with using cloud computing and storage. For instance, the U.S. Department of Defense designated the Defense Information Systems Agency (DISA) to maintain a list of records management products that meet all of the records retention, personally identifiable information (PII), and security (Information Assurance; IA) requirements.
      • Cloud storage is a rich resource for both hackers and national security agencies. Because the cloud holds data from many different users and organizations, hackers see it as a very valuable target.
      • Piracy and copyright infringement may be enabled by sites that permit file sharing. For example, the CodexCloud ebook storage site has faced litigation from the owners of the intellectual property uploaded and shared there, as have the GrooveShark and YouTube sites it has been compared to.

What are the different types of cloud computing?


  • Public clouds: A cloud is called a “public cloud” when the services are rendered over a network that is open for public use. Public clouds are owned and operated by third-party cloud service providers, which deliver their computing resources, like servers and storage, over the Internet. Microsoft Azure is an example of a public cloud. With a public cloud, all hardware, software, and other supporting infrastructure is owned and managed by the cloud provider. You access these services and manage your account using a web browser. For infrastructure as a service (IaaS) and platform as a service (PaaS), Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) hold a commanding position among the many cloud companies.
  • Private cloud: A private cloud is cloud infrastructure operated solely for a single business or organization, whether managed internally or by a third party, and hosted either internally or externally. It can be physically located on the company’s on-site datacenter, or the company can pay a third-party service provider to host it; either way, the services and infrastructure are maintained on a private network.
  • Hybrid cloud: A hybrid cloud is a composition of a public cloud and a private environment, such as a private cloud or on-premises resources, that remain distinct entities but are bound together by technology that allows data and applications to be shared between them, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect colocation, managed, and/or dedicated services with cloud resources. By allowing data and applications to move between private and public clouds, a hybrid cloud gives your business greater flexibility and more deployment options, and helps optimize your existing infrastructure, security, and compliance.
  • Community Cloud: A community cloud in computing is a collaborative effort in which infrastructure is shared between several organizations from a specific community with common concerns, whether managed internally or by a third party, and hosted internally or externally. It is controlled and used by a group of organizations that have a shared interest. The costs are spread over fewer users than a public cloud, so only some of the cost savings potential of cloud computing is realized.


Other AWS Facts and Summaries and Questions/Answers Dump

Reference


AWS Certification Exam Prep: DynamoDB Facts, Summaries and Questions/Answers.

DynamoDB

AWS DynamoDB facts and summaries, AWS DynamoDB Top 10 Questions and Answers Dump

Definition 1: Amazon DynamoDB is a fully managed proprietary NoSQL database service that supports key-value and document data structures and is offered by Amazon.com as part of the Amazon Web Services portfolio. DynamoDB exposes a similar data model to, and derives its name from, Dynamo, but has a different underlying implementation: Dynamo had a multi-master design requiring the client to resolve version conflicts, whereas DynamoDB uses synchronous replication across multiple datacenters for high durability and availability.

Definition 2: DynamoDB is a fast and flexible non-relational database service for any scale. DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS so that they don’t have to worry about hardware provisioning, setup and configuration, throughput capacity planning, replication, software patching, or cluster scaling.

AWS DynamoDB Facts and Summaries

  1. Amazon DynamoDB is a low-latency NoSQL database.
  2. DynamoDB consists of Tables, Items, and Attributes.
  3. DynamoDB supports both document and key-value data models.
  4. DynamoDB supported document formats are JSON, HTML, and XML.
  5. DynamoDB has 2 types of primary keys: a Partition Key alone, or a combination of Partition Key + Sort Key (Composite Key).
  6. DynamoDB has 2 consistency models: Strongly Consistent / Eventually Consistent
  7. DynamoDB Access is controlled using IAM policies.
  8. DynamoDB has fine-grained access control using the IAM condition parameter dynamodb:LeadingKeys to allow users to access only the items where the partition key value matches their user ID.
  9. DynamoDB Indexes enable fast queries on specific data columns
  10. DynamoDB indexes give you a different view of your data based on alternative Partition / Sort Keys.
  11. DynamoDB Local Secondary Indexes must be created when you create your table; they have the same partition key as your table, and a different sort key.
  12. DynamoDB Global Secondary Indexes can be created at any time: at table creation or after. They have a different partition key and a different sort key from your table.
  13. A DynamoDB Query operation finds items in a table using only the primary key attributes: you provide the primary key name and a distinct value to search for.
  14. A DynamoDB Scan operation examines every item in the table. By default, it returns all data attributes.
  15. A DynamoDB Query operation is generally more efficient than a Scan (see the sketch after this list).
  16. With DynamoDB, you can reduce the impact of a query or scan by setting a smaller page size which uses fewer read operations.
  17. To optimize DynamoDB performance, isolate scan operations to specific tables and segregate them from your mission-critical traffic.
  18. To optimize DynamoDB performance, try Parallel scans rather than the default sequential scan.
  19. To optimize DynamoDB performance: Avoid using scan operations if you can: design tables in a way that you can use Query, Get, or BatchGetItems APIs.
  20. When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity.
  21. DynamoDB provisioned throughput is measured in capacity units.
    • 1 Write Capacity Unit = 1 x 1 KB write per second.
    • 1 Read Capacity Unit = 1 x 4 KB strongly consistent read, or 2 x 4 KB eventually consistent reads, per second. Eventually consistent reads give the maximum read performance.
  22. What is the maximum throughput that can be provisioned for a single DynamoDB table?
    DynamoDB is designed to scale without limits. However, if you want to exceed throughput rates of 10,000 write capacity units or 10,000 read capacity units for an individual table, you must contact AWS to increase it.
    If you want to provision more than 20,000 write capacity units or 20,000 read capacity units from a single subscriber account, you must first contact AWS to request a limit increase.
  23. DynamoDB Performance: DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications.
    • As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds
    • DAX improves response times for Eventually Consistent reads only.
    • With DAX, you point your API calls to the DAX cluster instead of your table.
    • If the item you are querying is in the cache, DAX will return it; otherwise, it will perform an eventually consistent GetItem operation against your DynamoDB table.
    • DAX reduces operational and application complexity by providing a managed service that is API compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
    • DAX is not suitable for write-intensive applications or applications that require Strongly Consistent reads.
    • For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
  24. DynamoDB Performance: ElastiCache
    • An in-memory cache that sits between your application and database.
    • 2 different caching strategies: Lazy Loading and Write-Through. Lazy loading only caches the data when it is requested.
    • ElastiCache node failures are not fatal; they just result in lots of cache misses.
    • Avoid stale data by implementing a TTL.
    • The Write-Through strategy writes data into the cache whenever there is a change to the database, so data is never stale.
    • Write-Through penalty: each write involves a write to the cache. An ElastiCache node failure means that data is missing until it is added or updated in the database.
    • ElastiCache is wasted resources if most of the data is never used.
  25. Time To Live (TTL) for DynamoDB allows you to define when items in a table expire so that they can be automatically deleted from the database. TTL is provided at no extra cost as a way to reduce storage usage and reduce the cost of storing irrelevant data without using provisioned throughput. With TTL enabled on a table, you can set a timestamp for deletion on a per-item basis, allowing you to limit storage usage to only those records that are relevant.
  26. DynamoDB Security: DynamoDB uses the CMK to generate and encrypt a unique data key for the table, known as the table key. With DynamoDB, an AWS Owned or AWS Managed CMK can be used to generate and encrypt keys. The AWS Owned CMK is free of charge, while the AWS Managed CMK is chargeable. Customer managed CMKs are not supported with encryption at rest.
  27. Amazon DynamoDB offers fully managed encryption at rest. DynamoDB encryption at rest provides enhanced security by encrypting your data at rest using an AWS Key Management Service (AWS KMS) managed encryption key for DynamoDB. This functionality eliminates the operational burden and complexity involved in protecting sensitive data.
  28. DynamoDB is an alternative solution for session management storage: its low data-access latency makes it a good data store for session state.
  29. DynamoDB Streams Use Cases and Design Patterns:
    How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
    How do you trigger an event based on a particular transaction?
    How do you audit or archive transactions?
    How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?
    As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that it’s consistent and can be rolled back before commit.

    You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.

    AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region

  30. How many global secondary indexes are allowed per table (by default)? 20.
  31. What is one key difference between a global secondary index and a local secondary index?
    A local secondary index must have the same partition key as the main table
  32. How many tables can an AWS account have per region? 256
  33. How many secondary indexes (global and local combined) are allowed per table? (by default): 25
    You can define up to 5 local secondary indexes and 20 global secondary indexes per table (by default) – for a total of 25.
  34. How can you increase your DynamoDB table limit in a region?
    By contacting AWS and requesting a limit increase
  35. For any AWS account, there is an initial limit of 256 tables per region.
  36. The minimum length of a partition key value is 1 byte. The maximum length is 2048 bytes.
  37. The minimum length of a sort key value is 1 byte. The maximum length is 1024 bytes.
  38. For tables with local secondary indexes, there is a 10 GB size limit per partition key value. A table with local secondary indexes can store any number of items, as long as the total size for any one partition key value does not exceed 10 GB.
  39. The following diagram shows a local secondary index named LastPostIndex. Note that the partition key is the same as that of the Thread table, but the sort key is LastPostDateTime.
    [Diagram: AWS DynamoDB secondary indexes example]
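
To make facts 13-15 concrete, here is a minimal boto3 sketch contrasting a Query (keyed lookup) with a Scan (full-table read). The table and attribute names reuse the Thread/ForumName example above and are hypothetical.

```python
# Minimal sketch: DynamoDB Query vs. Scan with boto3.
# Table name "Thread" and key names are hypothetical examples.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Thread")

# Query: finds items via the primary key, so it only reads matching items.
response = table.query(
    KeyConditionExpression=Key("ForumName").eq("Amazon DynamoDB")
)
print("Query returned", response["Count"], "items")

# Scan: examines every item in the table and returns all attributes by default.
# Generally far less efficient than Query on large tables.
response = table.scan(Limit=50)  # a smaller page size reduces the impact of a scan
print("Scan page returned", response["Count"], "items")
```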

Top
Reference: AWS DynamoDB




AWS DynamoDB Questions and Answers Dump

Q0: What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator

D. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX

Top

Q2: A security system monitors 600 cameras, saving image metadata every 1 minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time. How much write throughput is required for the target table?

  • A. 6000
  • B. 10
  • C. 3600
  • D. 600

B. Write capacity for a DynamoDB table is specified as the number of 1 KB writes per second. Since each of the 600 cameras writes 1 KB once per minute, divide 600 by 60 to get the number of 1 KB writes per second. This gives a value of 10.

You can specify the Write capacity in the Capacity tab of the DynamoDB table.
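
The arithmetic can be sketched in a few lines of Python; the numbers come straight from the question.

```python
# Write capacity units: 1 WCU = one 1 KB write per second.
cameras = 600          # number of writers
writes_per_minute = 1  # each camera writes once per minute
item_size_kb = 1       # each write is 1 KB, i.e. exactly 1 WCU per write

writes_per_second = cameras * writes_per_minute / 60
wcu_required = writes_per_second * item_size_kb
print(wcu_required)  # 10.0
```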

Reference: AWS working with tables

Q3: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?

  • A. CustomerID
  • B. CustomerName
  • C. Location
  • D. Age

Answer- A
Use high-cardinality attributes. These are attributes that have distinct values for each item, like email_id, employee_no, customer_id, session_id, order_id, and so on.
Use composite attributes. Try to combine more than one attribute to form a unique key.
Reference: Choosing the right DynamoDB Partition Key

Top

Q4: A DynamoDB table is set with a Read Throughput capacity of 5 RCU. Which of the following read configurations will provide the maximum read throughput?

  • A. Read capacity set to 5 for 4KB reads of data at strong consistency
  • B. Read capacity set to 5 for 4KB reads of data at eventual consistency
  • C. Read capacity set to 15 for 1KB reads of data at strong consistency
  • D. Read capacity set to 5 for 1KB reads of data at eventual consistency

Answer: B.
The calculation of throughput capacity for option B would be:
Read capacity (5) x amount of data (4 KB) = 20 KB/s at strong consistency.
Since the reads are eventually consistent, the read throughput doubles: 20 x 2 = 40 KB/s, the highest of the four options (see the sketch below).
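
A quick worked comparison of all four options in Python (1 RCU = one strongly consistent 4 KB read per second, or two eventually consistent 4 KB reads per second):

```python
# Throughput in KB/s = RCUs * read size (up to 4 KB per RCU), doubled for eventual consistency.
def throughput_kb_per_s(rcus, read_kb, eventually_consistent):
    per_rcu_kb = min(read_kb, 4)             # one RCU covers up to 4 KB per read
    factor = 2 if eventually_consistent else 1
    return rcus * per_rcu_kb * factor

print(throughput_kb_per_s(5, 4, False))   # A: 20 KB/s
print(throughput_kb_per_s(5, 4, True))    # B: 40 KB/s  <- maximum
print(throughput_kb_per_s(15, 1, False))  # C: 15 KB/s
print(throughput_kb_per_s(5, 1, True))    # D: 10 KB/s
```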

Reference: Read/Write Capacity Mode

Top

Q5: Your team is developing a solution that will make use of DynamoDB tables. Due to the nature of the application, the data is needed across a couple of regions across the world. Which of the following would help reduce the latency of requests to DynamoDB from different regions?

  • A. Enable Multi-AZ for the DynamoDB table
  • B. Enable global tables for DynamoDB
  • C. Enable Indexes for the table
  • D. Increase the read and write throughput for the table

Answer: B
Amazon DynamoDB global tables provide a fully managed solution for deploying a multi-region, multi-master database, without having to build and maintain your own replication solution. When you create a global table, you specify the AWS Regions where you want the table to be available. DynamoDB performs all of the necessary tasks to create identical tables in these Regions and propagate ongoing data changes to all of them.
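
A minimal boto3 sketch of the 2017-version global tables API. It assumes identical tables named "Music" already exist in both regions with DynamoDB Streams (new and old images) enabled; the table name and regions are hypothetical.

```python
# Minimal sketch: creating a DynamoDB global table (2017 version of the API).
# Assumes a table named "Music" already exists in each listed region with
# DynamoDB Streams (NEW_AND_OLD_IMAGES) enabled. Name and regions are hypothetical.
import boto3

client = boto3.client("dynamodb", region_name="us-east-1")

client.create_global_table(
    GlobalTableName="Music",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
    ],
)
```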
Reference: Global Tables

Top

Q6: An application is currently accessing a DynamoDB table. Currently the table's queries are performing well. Changes have been made to the application, and now its performance is starting to degrade. After looking at the changes, you see that the queries are making use of an attribute which is not the partition key. Which of the following would be the adequate change to make to resolve the issue?

  • A. Add an index for the DynamoDB table
  • B. Change all the queries to ensure they use the partition key
  • C. Enable global tables for DynamoDB
  • D. Change the read capacity on the table

Answer: A
Amazon DynamoDB provides fast access to items in a table by specifying primary key values. However, many applications might benefit from having one or more secondary (or alternate) keys available, to allow efficient access to data with attributes other than the primary key. To address this, you can create one or more secondary indexes on a table, and issue Query or Scan requests against these indexes.

A secondary index is a data structure that contains a subset of attributes from a table, along with an alternate key to support Query operations. You can retrieve data from the index using a Query, in much the same way as you use Query with a table. A table can have multiple secondary indexes, which gives your applications access to many different query patterns.

Reference: Improving Data Access with Secondary Indexes

Top

Q7: Company B has created an e-commerce site using DynamoDB and is designing a products table that includes items purchased and the users who purchased the item.
When creating a primary key on a table which of the following would be the best attribute for the partition key? Select the BEST possible answer.

  • A. None of these are correct.
  • B. user_id where there are many users to few products
  • C. category_id where there are few categories to many products
  • D. product_id where there are few products to many users

Answer: B.
When designing tables, it is important for the data to be distributed evenly across the entire table. It is a performance best practice to choose a partition key where there are many distinct key values relative to the number of items, for example many users to few products. A bad design would be a partition key of product_id where there are few products but many users.
Reference: Partition Keys and Sort Keys

Top

Q8: Which API call can be used to retrieve up to 100 items at a time or 16 MB of data from a DynamoDB table?

  • A. BatchItem
  • B. GetItem
  • C. BatchGetItem
  • D. ChunkGetItem

Answer: C. BatchGetItem

The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.

A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem will return a partial result if the response size limit is exceeded, the table’s provisioned throughput is exceeded, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.

Reference: API-Specific Limits
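
A minimal boto3 sketch of the call described above, including the retry loop for UnprocessedKeys; the table name "Products" and key attribute "product_id" are hypothetical.

```python
# Minimal sketch: BatchGetItem with a retry loop for UnprocessedKeys.
# Table name "Products" and key attribute "product_id" are hypothetical.
import boto3

client = boto3.client("dynamodb")

request = {
    "Products": {
        "Keys": [{"product_id": {"S": str(i)}} for i in range(100)]  # up to 100 items
    }
}

items = []
while request:
    response = client.batch_get_item(RequestItems=request)
    items.extend(response["Responses"].get("Products", []))
    request = response.get("UnprocessedKeys") or None  # retry anything not processed
print(len(items), "items retrieved")
```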

Top

Q9: Which DynamoDB limits can be raised by contacting AWS support?

  • A. The number of hash keys per account
  • B. The maximum storage used per account
  • C. The number of tables per account
  • D. The number of local secondary indexes per account
  • E. The number of provisioned throughput units per account

Answer: C. and E.

For any AWS account, there is an initial limit of 256 tables per region.
AWS places some default limits on the throughput you can provision.
These are the limits unless you request a higher amount.
To request a service limit increase see https://aws.amazon.com/support.

Reference: Limits in DynamoDB

Top

Q10: Which approach below provides the least impact to provisioned throughput on the “Product”
table?

  • A. Create an “Images” DynamoDB table to store the Image with a foreign key constraint to
    the “Product” table
  • B. Add an image data type to the “Product” table to store the images in binary format
  • C. Serialize the image and store it in multiple DynamoDB tables
  • D. Store the images in Amazon S3 and add an S3 URL pointer to the “Product” table item
    for each image

Answer: D.

Amazon DynamoDB currently limits the size of each item that you store in a table (see Limits in DynamoDB). If your application needs to store more data in an item than the DynamoDB size limit permits, you can try compressing one or more large attributes, or you can store them as an object in Amazon Simple Storage Service (Amazon S3) and store the Amazon S3 object identifier in your DynamoDB item.
Compressing large attribute values can let them fit within item limits in DynamoDB and reduce your storage costs. Compression algorithms such as GZIP or LZO produce binary output that you can then store in a Binary attribute type.
Reference: Best Practices for Storing Large Items and Attributes

Top

Q11: You’re creating a DynamoDB database for hosting forums. Your “thread” table contains the forum name, and each “forum name” can have one or more “subjects”. What primary key type would you give the thread table in order to allow more than one subject to be tied to the forum primary key name?

  • A. Hash
  • B. Range and Hash
  • C. Primary and Range
  • D. Hash and Range

Answer: D.
Each forum name can have one or more subjects. In this case, ForumName is the hash attribute and Subject is the range attribute.

Reference: DynamoDB keys

Top




Other AWS Facts and Summaries and Questions/Answers Dump

AWS Certification Exam Prep: S3 Facts, Summaries, Questions and Answers

AWS S3 Facts and summaries, AWS S3 Top 10 Questions and Answers Dump

Definition 1: Amazon S3 or Amazon Simple Storage Service is a “simple storage service” offered by Amazon Web Services that provides object storage through a web service interface. Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to run its global e-commerce network.

Definition 2: Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.




AWS S3 Facts and summaries

  1. S3 is a universal namespace, meaning each S3 bucket you create must have a unique name that is not being used by anyone else in the world.
  2. S3 is object-based: i.e., it allows you to upload files.
  3. Files can be from 0 bytes to 5 TB.
  4. What is the maximum length, in bytes, of a DynamoDB range primary key attribute value?
    The maximum length of a DynamoDB range (sort) key attribute value is 1024 bytes, not 256 bytes (the partition key maximum is 2048 bytes; see facts 36-37 in the DynamoDB section above).
  5. S3 has unlimited storage.
  6. Files are stored in Buckets.
  7. Read after write consistency for PUTS of new Objects
  8. Eventual Consistency for overwrite PUTS and DELETES (can take some time to propagate)
  9. S3 Storage Classes/Tiers:
    • S3 Standard (durable, immediately available, frequently accesses)
    • Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering): It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access.
    • S3 Standard-Infrequent Access – S3 Standard-IA (durable, immediately available, infrequently accessed)
    • S3 – One Zone-Infrequent Access – S3 One Zone IA: Same as IA; however, data is stored in a single Availability Zone only.
    • S3 – Reduced Redundancy Storage (data that is easily reproducible, such as thumbnails, etc.)
    • Glacier – Archived data, where you can wait 3-5 hours before accessing

    You can have a bucket that has different objects stored in S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA.

  10. The default URL for S3 hosted websites lists the bucket name first, followed by s3-website-region.amazonaws.com. Example: enoumen.com.s3-website-us-east-1.amazonaws.com
  11. Core fundamentals of an S3 object
    • Key (name)
    • Value (data)
    • Version (ID)
    • Metadata
    • Sub-resources (used to manage bucket-specific configuration)
      • Bucket Policies, ACLs,
      • CORS
      • Transfer Acceleration
  12. Object-based storage only for files.
  13. Not suitable for installing an operating system on.
  14. Successful uploads will generate an HTTP 200 status code.
  15. S3 Security – Summary
    • By default, all newly created buckets are PRIVATE.
    • You can set up access control to your buckets using:
      • Bucket Policies – Applied at the bucket level
      • Access Control Lists – Applied at an object level.
    • S3 buckets can be configured to create access logs, which log all requests made to the S3 bucket. These logs can be written to another bucket.
  16. S3 Encryption
    • Encryption In-Transit (SSL/TLS)
    • Encryption At Rest:
      • Server side Encryption (SSE-S3, SSE-KMS, SSE-C)
      • Client Side Encryption
    • Remember that we can use a bucket policy to prevent unencrypted files from being uploaded, by creating a policy which only allows requests that include the x-amz-server-side-encryption parameter in the request header (see the sketch after this list).
  17. S3 CORS (Cross Origin Resource Sharing):
    CORS defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.  

    • Used to enable cross-origin access for your AWS resources, e.g. an S3 hosted website accessing JavaScript or image files located in another bucket. By default, resources in one bucket cannot access resources located in another. To allow this, we need to configure CORS on the bucket being accessed and enable access for the origin (bucket) attempting access.
    • Always use the S3 website URL, not the regular bucket URL (e.g., not https://s3-eu-west-2.amazonaws.com/acloudguru).
  18. S3 CloudFront:
    • Edge locations are not just READ only – you can WRITE to them too (i.e. put an object onto them).
    • Objects are cached for the life of the TTL (Time to Live)
    • You can clear cached objects, but you will be charged. (Invalidation)
  19. S3 Performance optimization – 2 main approaches to Performance Optimization for S3:
    • GET-Intensive Workloads – Use Cloudfront
    • Mixed Workload – Avoid sequential key names for your S3 objects. Instead, add a random prefix like a hex hash to the key name to prevent multiple objects from being stored on the same partition. (Note: as Q7 below explains, S3's 2018 request-rate improvements removed this guidance, but it may still appear in older exam questions.)
      • mybucket/7eh4-2019-03-04-15-00-00/cust1234234/photo1.jpg
      • mybucket/h35d-2019-03-04-15-00-00/cust1234234/photo2.jpg
      • mybucket/o3n6-2019-03-04-15-00-00/cust1234234/photo3.jpg
  20. The best way to handle large objects uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts.
  21. You can enable versioning on a bucket, even if that bucket already has objects in it. The already existing objects, though, will show their versions as null. All new objects will have version IDs.
  22. Bucket names cannot start with a . or - character. S3 bucket names can contain both . and - characters, but there can only be one . or one - between labels; e.g., mybucket-com and mybucket.com are valid names, but mybucket--com and mybucket..com are not valid bucket names.
  23. What is the maximum number of S3 buckets allowed per AWS account (by default)? 100
  24. You successfully upload an item to the us-east-1 region. You then immediately make another API call and attempt to read the object. What will happen?
    All AWS regions now have read-after-write consistency for PUT operations of new objects. Read-after-write consistency allows you to retrieve objects immediately after creation in Amazon S3. Other actions still follow the eventual consistency model (where you will sometimes get stale results if you have recently made changes)
  25. S3 bucket policies require a Principal be defined. Review the access policy elements here
  26. What checksums does Amazon S3 employ to detect data corruption?

    Amazon S3 uses a combination of Content-MD5 checksums and cyclic redundancy checks (CRCs) to detect data corruption. Amazon S3 performs these checksums on data at rest and repairs any corruption using redundant data. In addition, the service calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.
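
As referenced in the encryption summary (fact 16), here is a minimal boto3 sketch of a bucket policy that denies PutObject requests lacking the x-amz-server-side-encryption header. The bucket name is a hypothetical placeholder.

```python
# Minimal sketch: deny uploads that do not request server-side encryption.
# Bucket name "my-example-bucket" is a hypothetical placeholder.
import json
import boto3

bucket = "my-example-bucket"
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # Denied unless the request header asks for SSE-S3 (AES256) encryption.
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
            },
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```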

Top
Reference: AWS S3




AWS S3 Top 10 Questions and Answers Dump

Q0: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes a longer than expected time to upload the object. You want to improve the performance of the application. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket

C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
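
A minimal boto3 sketch of the approach: the high-level upload_file call switches to the Multipart Upload API automatically once a file crosses the configured threshold. File, bucket, and key names are hypothetical placeholders.

```python
# Minimal sketch: multipart upload via boto3's managed transfer.
# upload_file automatically uses the Multipart Upload API above the threshold.
# File, bucket, and key names are hypothetical placeholders.
import boto3
from boto3.s3.transfer import TransferConfig

config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,  # use multipart for objects over 8 MB
    multipart_chunksize=8 * 1024 * 1024,  # upload in 8 MB parts
    max_concurrency=10,                   # upload parts in parallel threads
)

s3 = boto3.client("s3")
s3.upload_file(
    "large-video.mp4", "my-example-bucket", "uploads/large-video.mp4", Config=config
)
```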

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html

Top

Q2: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed applications from Amazon S3 buckets?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::Layerversion
  • D. AWS::Serverless::Function

Answer – B
The AWS::Serverless::Application resource in an AWS SAM template is used to embed applications from Amazon S3 buckets.
Reference: Declaring Serverless Resources

Top

Q3: A static web site has been hosted on a bucket and is now being accessed by users. One of the web pages' JavaScript sections has been changed to access data which is hosted in another S3 bucket. Now that same web page is no longer loading in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket

Answer – C

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.
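
A minimal boto3 sketch of enabling CORS on the bucket that holds the cross-origin assets; the bucket name and allowed origin below are hypothetical.

```python
# Minimal sketch: enable CORS on the bucket being accessed cross-origin.
# Bucket name and origin are hypothetical placeholders.
import boto3

cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
            "AllowedMethods": ["GET", "PUT"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,  # how long browsers may cache the preflight response
        }
    ]
}

boto3.client("s3").put_bucket_cors(
    Bucket="my-asset-bucket",
    CORSConfiguration=cors_configuration,
)
```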

Reference: Cross-Origin Resource Sharing (CORS)

Top

Q4: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user Images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS) to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.

Answer: C.
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on key values generated via the user ID (see the sketch below).
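
A minimal sketch of the pattern: after authenticating the user at the application level, request temporary credentials from STS and use them to hand out short-lived, per-user access to S3 objects. The bucket name, user ID, and inline policy are hypothetical.

```python
# Minimal sketch: temporary, scoped S3 access via STS federation tokens.
# Bucket, user id, and inline policy are hypothetical examples.
import json
import boto3

sts = boto3.client("sts")
user_id = "user-1234"

creds = sts.get_federation_token(
    Name=user_id,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            # Scope access to this user's own key prefix only.
            "Resource": f"arn:aws:s3:::photo-app-images/{user_id}/*",
        }],
    }),
    DurationSeconds=3600,
)["Credentials"]

# An S3 client built from the temporary credentials can only read that prefix.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "photo-app-images", "Key": f"{user_id}/photo1.jpg"},
    ExpiresIn=900,  # link valid for 15 minutes
)
print(url)
```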

Reference: The AWS Security Token Service (STS)

Top

Q5: Both ACLs and Bucket Policies can be used to grant access to S3 buckets. Which of the following statements is true about ACLs and Bucket policies?

  • A. Bucket Policies are Written in JSON and ACLs are written in XML
  • B. ACLs can be attached to S3 objects or S3 Buckets
  • C. Bucket Policies and ACLs are written in JSON
  • D. Bucket policies are only attached to s3 buckets, ACLs are only attached to s3 objects

Answer: A. and B.
Only Bucket Policies are written in JSON, ACLs are written in XML.
While Bucket policies are indeed only attached to S3 buckets, ACLs can be attached to S3 Buckets OR S3 Objects.
Reference:

Top

Q6: What are good options to improve S3 performance when you have significantly high numbers of GET requests?

  • A. Introduce random prefixes to S3 objects
  • B. Introduce random suffixes to S3 objects
  • C. Setup CloudFront for S3 objects
  • D. Migrate commonly used objects to Amazon Glacier

Answer: C
CloudFront caching is an excellent way to avoid putting extra strain on the S3 service and to improve the response times of requests by caching data closer to users at CloudFront locations.
S3 Transfer Acceleration optimizes the TCP protocol and adds additional intelligence between the client and the S3 bucket, making S3 Transfer Acceleration a better choice if a higher throughput is desired. If you have objects that are smaller than 1GB or if the data set is less than 1GB in size, you should consider using Amazon CloudFront’s PUT/POST commands for optimal performance.
Reference: Amazon S3 Transfer Acceleration

Top

Q7: If an application is storing hourly log files from thousands of instances from a high traffic
web site, which naming scheme would give optimal performance on S3?

  • A. Sequential
  • B. HH-DD-MM-YYYY-log_instanceID
  • C. YYYY-MM-DD-HH-log_instanceID
  • D. instanceID_log-HH-DD-MM-YYYY
  • E. instanceID_log-YYYY-MM-DD-HH

Answer: A. B. C. D. and E.
Amazon S3 now provides increased performance to support at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data, which can save significant processing time for no additional charge. Each S3 prefix can support these request rates, making it simple to increase performance significantly.
This S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance. That means you can now use logical or sequential naming patterns in S3 object naming without any performance implications.

Reference: Amazon S3 Announces Increased Request Rate Performance

Top

Q8: You are working with the S3 API and receive an error message: 409 Conflict. What is the possible cause of this error?

  • A. You’re attempting to remove a bucket without emptying the contents of the bucket first.
  • B. You’re attempting to upload an object to the bucket that is greater than 5TB in size.
  • C. Your request does not contain the proper metadata.
  • D. Amazon S3 is having internal issues.

Answer:A.

Reference: S3 Error codes

Top

Q9: You created three S3 buckets – “mywebsite.com”, “downloads.mywebsite.com”, and “www.mywebsite.com”. You uploaded your files and enabled static website hosting. You specified both of the default documents under the “enable static website hosting” header. You also set the “Make Public” permission for the objects in each of the three buckets. You create the Route 53 Aliases for the three buckets. You are going to have your end users test your websites by browsing to http://mywebsite.com/error.html, http://downloads.mywebsite.com/index.html, and http://www.mywebsite.com. What problems will your testers encounter?

  • A. http://mywebsite.com/error.html will not work because you did not set a value for the error.html file
  • B. There will be no problems, all three sites should work.
  • C. http://www.mywebsite.com will not work because the URL does not include a file name at the end of it.
  • D. http://downloads.mywebsite.com/index.html will not work because the “downloads” prefix is not a supported prefix for S3 websites using Route 53 aliases

Answer: B.
It used to be that the only allowed domain prefix when creating Route 53 Aliases for S3 static websites was the “www” prefix. However, this is no longer the case; you can now use other subdomains.

Reference: Hosting a Static Website on Amazon S3

Top

Q10: Which of the following is NOT a common S3 API call?

  • A. UploadPart
  • B. ReadObject
  • C. PutObject
  • D. DownloadBucket

Answer: D.

Reference: s3api

Top




Other AWS Facts and Summaries

AWS Certified Cloud Practitioner Exam Preparation: Questions and Answers Dump

AWS Certified Cloud Practitioner Exam

Welcome to AWS Certified Cloud Practitioner Exam Preparation: Definition and Objectives, Top 100 Questions and Answers Dump, White Papers, Courses, Labs and Training Materials, Exam Info and Details, References, Jobs, and Other AWS Certificates.

Download the mobile version here

What is the AWS Certified Cloud Practitioner Exam?

The AWS Certified Cloud Practitioner Exam is an introduction to AWS services, and the intention is to examine the candidate's ability to define what the AWS Cloud is and its global infrastructure. It provides an overview of AWS core services, security aspects, pricing, and support services. The main objective is to provide an overall understanding of the Amazon Web Services Cloud platform. The course helps you build a conceptual understanding of AWS and the basics of cloud computing, including its services, use cases, and benefits.

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

Top

AWS Certified Cloud Practitioner Exam Prep Questions and Answers (Get the Free Mobile App Here for a better mobile experience)

AWS Certified Cloud Practitioner Exam Prep

Below we are providing you with:

  • aws cloud practitioner exam questions
  • aws cloud practitioner sample questions
  • aws cloud practitioner exam dumps
  • aws cloud practitioner practice questions and answers
  • aws cloud practitioner practice exam questions and references

For auditing purposes, your company now wants to monitor all API activity for all regions in your AWS environment. What can you use to fulfill this new requirement?

  • A. For each region, enable CloudTrail and send all logs to a bucket in each region.
  • B. Enable CloudTrail for all regions.
  • C. Ensure one CloudTrail is enabled for all regions.
  • D. Use AWS Config to enable the trail for all regions.

Answer:

C. Ensure one CloudTrail is enabled for all regions.
Turn on CloudTrail for all regions in your environment and CloudTrail will deliver log files from all regions to one S3 bucket.
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.
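
A minimal boto3 sketch of turning on one multi-region trail. The trail and bucket names are hypothetical, and the S3 bucket must already carry a policy that lets CloudTrail write to it.

```python
# Minimal sketch: one CloudTrail trail that covers all regions.
# Trail and bucket names are hypothetical; the S3 bucket must already
# have a bucket policy that allows CloudTrail to deliver log files to it.
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="my-cloudtrail-logs",
    IsMultiRegionTrail=True,  # deliver API activity from every region to one bucket
)
cloudtrail.start_logging(Name="org-audit-trail")
```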

Reference:
AWS CloudTrail

Top

What is the best solution to provide secure access to an S3 bucket without using the internet?

  • A. Use a VPN connection.
  • B. Use an Internet Gateway.
  • C. Use a VPC Endpoint to access S3.
  • D. Use a NAT Gateway.

Answer:

C. Use a VPC Endpoint to access S3.
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

AWS PrivateLink simplifies the security of data shared with cloud-based applications by eliminating the exposure of data to the public Internet.
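
A minimal boto3 sketch creating a gateway VPC endpoint for S3; the VPC ID, route table ID, and region are hypothetical placeholders.

```python
# Minimal sketch: gateway VPC endpoint so instances reach S3 privately.
# VPC ID, route table ID, and region are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # S3 traffic now stays on the AWS network
)
```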

Reference:
VPC Endpoint

Top

In the AWS Shared Responsibility Model, which of the following are the responsibility of AWS?

  • A. Securing Edge Locations
  • B. Encrypting data
  • C. Password policies
  • D. Decommissioning data

Answer:

A. and D.
It is AWS's responsibility to secure Edge Locations and decommission data.
AWS responsibility “Security of the Cloud” – AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.

Reference:
AWS Shared Responsibility Model

Top

You have EC2 instances running at 90% utilization and you expect this to continue for at least a year. What type of EC2 instance would you choose to ensure your costs stay at a minimum?

  • A. Dedicated host instances
  • B. On-demand instances
  • C. Spot instances
  • D. Reserved instances

Answer:

D. Reserved instances:
Reserved instances are the best choice for instances with continuous usage and offer a reduced cost because you purchase the instance for the entire year.
Amazon EC2 Reserved Instances (RI) provide a significant discount (up to 75%) compared to On-Demand pricing and provide a capacity reservation when used in a specific Availability Zone.

Reference:
AWS Reserved instances.

Top

What tool would you use to get an estimated monthly cost for your environment?

  • A. TCO Calculator
  • B. Simple Monthly Calculator
  • C. Cost Explorer
  • D. Consolidated Billing

Answer:

B. Simple Monthly Calculator:
The AWS Simple Monthly Calculator helps customers and prospects estimate their monthly AWS bill more efficiently. Using this tool, they can add, modify and remove services from their ‘bill’ and it will recalculate their estimated monthly charges automatically.

Reference:
AWS Simple Monthly Calculator

Top

How do you make sure your organization does not exceed its monthly budget?

  • A. Sign up for the free alert under Billing preferences in the AWS Management Console.
  • B. Set a schedule to regularly review the Billing and Cost Management dashboard each month.
  • C. Create an email alert in AWS Budgets
  • D. In CloudWatch, create an alarm that triggers each time the limit is exceeded.

Answer:

C. Create an email alert in AWS Budgets.
AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.
You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Reservation alerts are supported for Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache, and Amazon Elasticsearch reservations.
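
A minimal boto3 sketch of a monthly cost budget with an email alert at 80% of the limit; the account ID, amount, and email address are hypothetical.

```python
# Minimal sketch: AWS Budgets monthly cost budget with an email alert.
# Account ID, budget amount, and email address are hypothetical placeholders.
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "monthly-cost-cap",
        "BudgetLimit": {"Amount": "100", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,  # alert at 80% of the budgeted amount
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "billing@example.com"}
            ],
        }
    ],
)
```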

Reference:
AWS Budget.

Top

An Edge Location is a specialized AWS data center that works with which services?

  • A. Lambda
  • B. CloudWatch
  • C. CloudFront
  • D. Route 53

Answer:

A. C. D.: Lambda, CloudFront, and Route 53
Lambda@Edge lets you run Lambda functions to customize the content that CloudFront delivers, executing the functions in AWS locations closer to the viewer.
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you’re serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

CloudFront speeds up the distribution of your content by routing each user request through the AWS backbone network to the edge location that can best serve your content. Typically, this is a CloudFront edge server that provides the fastest delivery to the viewer. Using the AWS network dramatically reduces the number of networks that your users’ requests must pass through, which improves performance. Users get lower latency—the time it takes to load the first byte of the file—and higher data transfer rates.

You also get increased reliability and availability because copies of your files (also known as objects) are now held (or cached) in multiple edge locations around the world.

Reference:
AWS Edge Locations

Top

What is the preferred method of linking 2 AWS accounts?

  • A. AWS Organizations
  • B. Cost Explorer
  • C. VPC Peering
  • D. Consolidated billing

Answer:

A. AWS Organizations
AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. AWS Organizations includes account management and consolidated billing capabilities that enable you to better meet the budgetary, security, and compliance needs of your business.

Reference:
AWS Organizations.

Top

Which of the following services is most useful when a Disaster Recovery method is triggered in AWS?

  • A. Amazon Route 53
  • B. Amazon SNS
  • C. Amazon SQS
  • D. Amazon Inspector

Answer:

A. Route 53 is the Domain Name System (DNS) service from AWS. When a disaster does occur, it is easy to switch over to secondary sites using Route 53.
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is also fully compliant with IPv6.

Reference: AWS Route 53

Which of the following disaster recovery deployment mechanisms has the highest downtime?

  • A. Pilot light
  • B. Warm standby
  • C. Multi Site
  • D. Backup and Restore

Answer:

D. The snapshot below from the AWS documentation shows the spectrum of disaster recovery methods. Backup and Restore sits at the slowest end of that spectrum, so it carries the highest downtime; the further along the spectrum you go (toward Multi Site), the less downtime users experience.

[Figure: AWS Disaster Recovery Techniques spectrum]

Reference: https://aws.amazon.com/blogs/aws/new-whitepaper-use-aws-for-disaster-recovery/

Your company is planning to host resources in the AWS Cloud. They want to use services which can be used to decouple resources hosted on the cloud. Which of the following services can help fulfil this requirement?

  • A. AWS EBS Volumes
  • B. AWS EBS Snapshots
  • C. AWS Glacier
  • D. AWS SQS

Answer:

D. AWS SQS: Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.

Reference: AWS Simple Queue Service Developer Guide
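
To make the decoupling concrete, here is a minimal sketch using the Python SDK (boto3); the queue name and message body are placeholder assumptions:

```python
import boto3

sqs = boto3.client("sqs")

# Producer side: create the queue (idempotent) and enqueue a message.
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

# Consumer side: poll for work, process it, then delete the message.
resp = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
)
for msg in resp.get("Messages", []):
    print("processing:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```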

If you have a set of frequently accessed files that are used on a daily basis, what S3 storage class should you store them in?

  • A. Infrequent Access
  • B. Fast Access
  • C. Reduced Redundancy
  • D. Standard

Answer:

D. Standard: The Standard storage class should be used for files that you access on a daily or very frequent basis.

Reference: AWS S3 storage classes
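
As a brief sketch with the Python SDK (boto3), uploading an object in the Standard class looks like this; the bucket and key names are placeholder assumptions (STANDARD is also the default when StorageClass is omitted):

```python
import boto3

s3 = boto3.client("s3")

# Store a frequently accessed file in the S3 Standard storage class.
s3.put_object(
    Bucket="my-example-bucket",       # hypothetical bucket name
    Key="reports/daily-summary.csv",  # hypothetical object key
    Body=b"date,total\n2020-01-01,100\n",
    StorageClass="STANDARD",
)
```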

What is the availability and durability rating of S3 Standard Storage Class?

Choose the correct answer:

  • A. 99.999999999% Durability and 99.99% Availability
  • B. 99.999999999% Availability and 99.90% Durability
  • C. 99.999999999% Durability and 99.00% Availability
  • D. 99.999999999% Availability and 99.99% Durability

Answer:

A. 99.999999999% Durability and 99.99% Availability
S3 Standard Storage class has a rating of 99.999999999% durability (referred to as 11 nines) and 99.99% availability.

Reference: AWS S3 storage classes

What AWS database is primarily used to analyze data using standard SQL formatting, with compatibility for your existing business intelligence tools?

  • A. Redshift
  • B. RDS
  • C. DynamoDB
  • D. ElastiCache

Answer:

A. Redshift is a database offering that is fully-managed and used for data warehousing and analytics, including compatibility with existing business intelligence tools.

Reference: AWS Redshift

What are the benefits of DynamoDB?

Choose the 3 correct answers:

  • A. Single-digit millisecond latency.
  • B. Supports multiple known NoSQL database engines like MariaDB and Oracle NoSQL.
  • C. Supports both document and key-value store data models.
  • D. Automatic scaling of throughput capacity.

Answer:

A. C. D. DynamoDB does not use/support other NoSQL database engines. You only have access to use DynamoDB’s built-in engine.

Reference: AWS DynamoDB
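
For illustration, a minimal sketch of the key-value and document model with the Python SDK (boto3); the table name and schema are placeholder assumptions, and the table is assumed to already exist:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("users")  # hypothetical existing table

# Write an item (key-value with a nested document attribute).
table.put_item(
    Item={
        "user_id": "u-123",                # partition key
        "name": "Ada",
        "preferences": {"theme": "dark"},  # document-style attribute
    }
)

# Read it back by key; single-digit-millisecond latency at any scale.
item = table.get_item(Key={"user_id": "u-123"})["Item"]
print(item["name"])
```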

Which of the following are the benefits of AWS Organizations?

Choose the 2 correct answers:

  • A. Analyze cost before migrating to AWS.
  • B. Centrally manage access policies across multiple AWS accounts.
  • C. Automate AWS account creation and management.
  • D. Provide technical help (by AWS) for issues in your AWS account.

Answer:

B. and C. : The benefits of AWS Organizations include:

  • Centrally manage policies across multiple AWS accounts
  • Automate AWS account creation and management
  • Control access to AWS services
  • Consolidate billing across multiple AWS accounts

Reference: AWS Organizations

There is a requirement to host a set of servers in the cloud for a short period of 3 months. Which of the following types of instances should be chosen to be cost-effective?

  • A. Spot Instances
  • B. On-Demand
  • C. No Upfront costs Reserved
  • D. Partial Upfront costs Reserved

Answer:

B. Since the requirement is only for 3 months, the most cost-effective option is to use On-Demand Instances.

Reference: AWS EC2 On-Demand pricing

Which of the following is not a disaster recovery deployment technique?

  • A. Pilot light
  • B. Warm standby
  • C. Single Site
  • D. Multi-Site

Answer:

C. The following figure shows a spectrum for the four scenarios, arranged by how quickly a system can be available to users after a DR event.

[Figure: AWS Disaster Recovery Techniques spectrum]

Reference: https://aws.amazon.com/blogs/aws/new-whitepaper-use-aws-for-disaster-recovery/

Which of the following are attributes of the costing for the Simple Storage Service? Choose 2 answers from the options given below.

  • A. The storage class used for the objects stored.
  • B. Number of S3 buckets.
  • C. The total size in gigabytes of all objects stored.
  • D. Using encryption in S3

Answer:

A. and C: Below is a snapshot of the costing calculator for AWS S3.

[Figure: S3 storage cost estimator]
Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers.

Reference: http://calculator.s3.amazonaws.com/index.html ; S3 storage classes

What endpoints are possible to send messages to with Simple Notification Service?

Choose the 3 correct answers:

  • A. SQS
  • B. SMS
  • C. FTP
  • D. Lambda

Answer:

A. B. and D. : Amazon SNS can deliver messages to Amazon SQS queues, SMS text messages, and AWS Lambda functions, as well as HTTP/S and email endpoints. FTP is not a supported endpoint type.

Reference: AWS SNS

What service helps you to aggregate logs from your EC2 instance? Choose one answer from the options below:

  • A. SQS
  • B. S3
  • C. CloudTrail
  • D. CloudWatch Logs

Answer:

D. You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Logs.

Reference: AWS CloudWatch Logs
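
As a small sketch with the Python SDK (boto3), reading recent entries from an aggregated log group might look like this; the log group name is a placeholder assumption:

```python
import boto3
from datetime import datetime, timedelta

logs = boto3.client("logs")

# Fetch events from the last hour of a hypothetical EC2 app log group.
start = int((datetime.utcnow() - timedelta(hours=1)).timestamp() * 1000)
resp = logs.filter_log_events(
    logGroupName="/my-app/ec2/syslog",  # hypothetical log group
    startTime=start,                    # epoch time in milliseconds
)
for event in resp["events"]:
    print(event["timestamp"], event["message"])
```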

A company is deploying a new two-tier web application in AWS. The company wants to store their most frequently used data so that the response time for the application is improved. Which AWS service provides the solution for the company’s requirements?

  • A. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone
  • B. Amazon RDS for MySQL with Multi-AZ
  • C. Amazon ElastiCache
  • D. Amazon DynamoDB

Answer:

C. Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.

Reference: AWS ElastiCache
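
The usual pattern here is cache-aside: check the in-memory store first and fall back to the database only on a miss. A minimal sketch against an ElastiCache Redis endpoint using the third-party redis-py client; the endpoint, key format, and lookup function are placeholder assumptions:

```python
import json
import redis

# Hypothetical ElastiCache Redis endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_user(user_id, db_lookup):
    """Cache-aside read: serve hot data from memory, not from disk."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit: fast in-memory read
    record = db_lookup(user_id)                 # cache miss: hit the database
    cache.set(key, json.dumps(record), ex=300)  # cache for 5 minutes
    return record
```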

You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 Instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost-effective way. Which of the following will meet your requirements?

  • A. Spot Instances
  • B. Reserved Instances
  • C. Dedicated Instances
  • D. On-Demand Instances

Answer:

A. For cost-effectiveness, the choice comes down to Spot or Reserved Instances. For a periodic processing job, Spot Instances are the best fit: because the application is designed to recover gracefully from Amazon EC2 instance failures, losing a Spot Instance causes no problem, since the application can recover.

Reference: AWS EC2 spot instances
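
A minimal sketch of launching such interruption-tolerant capacity as Spot with the Python SDK (boto3); the AMI ID and instance type are placeholder assumptions:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch one interruption-tolerant worker as a Spot Instance.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
```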

Which of the following features is associated with a subnet in a VPC to protect against incoming traffic requests?

  • A. AWS Inspector
  • B. Subnet Groups
  • C. Security Groups
  • D. NACL

Answer:

D. A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.

Reference: AWS VPC ACLs

A company is deploying a two-tier, highly available web application to AWS. Which service provides durable storage for static content while utilizing lower overall CPU resources for the web tier?

  • A. Amazon EBS volume.
  • B. Amazon S3
  • C. Amazon EC2 instance store
  • D. Amazon RDS instance

Answer:

B. Amazon S3 is the default storage service that should be considered for companies. It provides durable storage for all static content.

Reference: S3 faqs

What are characteristics of Amazon S3?
Choose 2 answers from the options given below.

  • A. S3 allows you to store objects of virtually unlimited size.
  • B. S3 allows you to store unlimited amounts of data.
  • C. S3 should be used to host relational database.
  • D. Objects are directly accessible via a URL.

Answer:

B. and D. : Each individual object in S3 has a size limit (up to 5 TB), but you can store a virtually unlimited amount of data. Also, each object gets a directly accessible URL.

Reference: AWS s3 faqs

When working out the cost of On-Demand EC2 instances, which of the following attributes determine the cost of the EC2 Instance? Choose 3 answers from the options given below.

  • A. Instance Type
  • B. AMI Type
  • C. Region
  • D. Edge location

Answer:

A. B. C. : See components making up the pricing below.

[Figure: AWS AMI pricing components]

Reference: AWS EC2 On-Demand pricing

You have a mission-critical application which must be globally available at all times. If this is the case, which of the below deployment mechanisms would you employ?

  • A. Deployment to multiple edge locations
  • B. Deployment to multiple Availability Zones
  • C. Deployment to multiple Data Centers
  • D. Deployment to multiple Regions

Answer:

D. Regions represent different geographic locations and it is best to host your application across multiple regions for disaster recovery.

Reference: AWS regions availability zones

Which of the following are correct principles when designing cloud-based systems? Choose 2 answers from the options below.

  • A. Build Tightly-coupled components
  • B. Build loosely-coupled components
  • C. Assume everything will fail
  • D. Use as many services as possible

Answer:

B. and C. Always build components which are loosely coupled. This is so that even if one component does fail, the entire system does not fail. Also if you build with the assumption that everything will fail, then you will ensure that the right measures are taken to build a highly available and fault tolerant system.

Reference: AWS Well-Architected Framework

You have 2 accounts in your AWS Organization: one for Dev and the other for QA. Both are part of consolidated billing. The master account has purchased 3 Reserved Instances. The Dev department is currently using 2 Reserved Instances. The QA team is planning on using 3 instances of the same instance type. What is the pricing tier of the instances that can be used by the QA team?

  • A. No Reserved and 3 on-demand
  • B. One Reserved and 2 on-demand
  • C. Two Reserved and 1 on-demand
  • D. Three Reserved and no on-demand

Answer:

B. Since all accounts are part of consolidated billing, Reserved Instance pricing can be shared by all of them. Because 2 Reserved Instances are already used by the Dev team, only one remains for the QA team; the other 2 instances are billed as On-Demand Instances.

Reference: AWS EC2 Reserved Instances pricing

Which one of the following features is normally present in all AWS Support plans?

  • A. 24/7 access to Customer Service
  • B. Access to all features in the Trusted Advisor
  • C. A technical Account Manager
  • D. A dedicated support person

Answer:

A. 24/7 access to customer service is included in every AWS Support plan; the other features are limited to the higher tiers.

[Figure: AWS Support plans comparison]

Reference: AWS premium support compare plans

Which of the following storage mechanisms can be used to store messages effectively which can be used across distributed systems?

  • A. Amazon Glacier
  • B. Amazon EBS Volumes
  • C. Amazon EBS Snapshots
  • D. Amazon SQS

Answer:

D. Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.

Reference: AWS Simple Queue Service

You are exploring which services AWS has on hand. You have a large number of data sets that need to be processed. Which of the following services can help fulfil this requirement?

  • A. EMR
  • B. S3
  • C. Glacier
  • D. Storage Gateway

Answer:

A. Amazon EMR helps you analyze and process vast amounts of data by distributing the computational work across a cluster of virtual servers running in the AWS Cloud. The cluster is managed using an open-source framework called Hadoop. Amazon EMR lets you focus on crunching or analyzing your data without having to worry about time-consuming setup, management, and tuning of Hadoop clusters or the compute capacity they rely on.

Reference: AWS EMR

Which of the following services allows you to analyze EC2 Instances against pre-defined security templates to check for vulnerabilities?

  • A. AWS Trusted Advisor
  • B. AWS Inspector
  • C. AWS WAF
  • D. AWS Shield

Answer:

B. Amazon Inspector enables you to analyze the behavior of your AWS resources and helps you to identify potential security issues. Using Amazon Inspector, you can define a collection of AWS resources that you want to include in an assessment target. You can then create an assessment template and launch a security assessment run of this target.

Reference: AWS inspector introduction

Your company is planning to offload some of its batch processing workloads onto AWS. These jobs can be interrupted and resumed at any time. Which of the following instance types would be the most cost-effective for this purpose?

  • A. On-Demand
  • B. Spot
  • C. Full Upfront Reserved
  • D. Partial Upfront Reserved

Answer:

B. Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.

Reference: AWS Spot Instances

Which of the following is not a category recommendation given by the AWS Trusted Advisor?

  • A. Security
  • B. High Availability
  • C. Performance
  • D. Fault tolerance

Answer:

B. High Availability is not a Trusted Advisor category. Trusted Advisor provides recommendations across cost optimization, security, fault tolerance, performance, and service limits.

[Figure: AWS Trusted Advisor categories]

Reference: AWS Trusted Advisor

Which of the below cannot be used to get data onto Amazon Glacier?

  • A. AWS Glacier API
  • B. AWS Console
  • C. AWS Glacier SDK
  • D. AWS S3 Lifecycle policies

Answer:

B. The AWS Console cannot be used to upload data onto Glacier. The console can only be used to create a Glacier vault; the data itself must be uploaded through the Glacier API, the SDKs, or S3 lifecycle policies.

Reference: Uploading an archive in AWS

Which of the following from AWS can be used to transfer petabytes of data from on-premises locations to the AWS Cloud?

  • A. AWS Import/Export
  • B. AWS EC2
  • C. AWS Snowball
  • D. AWS Transfer

Answer:

C. Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers, including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, secure, and can be as little as one-fifth the cost of high-speed Internet.

Reference: AWS snowball

Your company wants to move an existing Oracle database to the AWS Cloud. Which of the following services can help facilitate this move?

  • A. AWS Database Migration Service
  • B. AWS VM Migration Service
  • C. AWS Inspector
  • D. AWS Trusted Advisor

Answer:

A. AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open source databases.

Reference: AWS DMS

Which of the following features of AWS RDS allows for offloading reads of the database?

  • A. Cross region replication
  • B. Creating Read Replicas
  • C. Using snapshots
  • D. Using Multi-AZ feature

Answer:

B. You can reduce the load on your source DB Instance by routing read queries from your applications to the read replica. Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.

Reference: AWS read replicas
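
A minimal sketch of creating a read replica with the Python SDK (boto3); the instance identifiers are placeholder assumptions:

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of a hypothetical source DB instance; read-heavy
# queries can then be routed to the replica's endpoint instead of the source.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-1",      # new replica name
    SourceDBInstanceIdentifier="mydb-primary",  # existing source instance
)
```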

Which of the following does AWS perform on your behalf for EBS volumes to make them less prone to failure?

  • A. Replication of the volume across Availability Zones
  • B. Replication of the volume in the same Availability Zone
  • C. Replication of the volume across Regions
  • D. Replication of the volume across Edge locations

Answer:

B. When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to failure of any single hardware component.

Reference: AWS EBS Volumes

Your company is planning to host a large e-commerce application on the AWS Cloud. One of its major concerns is Internet attacks such as DDoS attacks. Which of the following services can help mitigate this concern? Choose 2 answers from the options given below.

  • A. CloudFront
  • B. AWS Shield
  • C. AWS EC2
  • D. AWS Config

Answer:

A. and B. : One of the first techniques to mitigate DDoS attacks is to minimize the surface area that can be attacked, thereby limiting the options for attackers and allowing you to build protections in a single place. We want to ensure that we do not expose our application or resources to ports, protocols, or applications from where they do not expect any communication, thus minimizing the possible points of attack and letting us concentrate our mitigation efforts. In some cases, you can do this by placing your computation resources behind Content Distribution Networks (CDNs) and Load Balancers and restricting direct Internet traffic to certain parts of your infrastructure, like your database servers. In other cases, you can use firewalls or Access Control Lists (ACLs) to control what traffic reaches your applications.

Reference: AWS Shield – DDoS attack protection

Which of the following are 2 ways that AWS allows you to link accounts?

  • A. Consolidated billing
  • B. AWS Organizations
  • C. Cost Explorer
  • D. IAM

Answer:

A. and B. : You can use the consolidated billing feature in AWS Organizations to consolidate payment for multiple AWS accounts or multiple AISPL accounts. With consolidated billing, you can see a combined view of AWS charges incurred by all of your accounts. You also can get a cost report for each member account that is associated with your master account. Consolidated billing is offered at no additional charge.

Reference: AWS Consolidated billing

Which of the following help with DDoS protection? Choose 2 answers from the options given below.

  • A. CloudFront
  • B. AWS Shield
  • C. AWS EC2
  • D. AWS Config

Answer:

A. and B. : One of the first techniques to mitigate DDoS attacks is to minimize the surface area that can be attacked, thereby limiting the options for attackers and allowing you to build protections in a single place. We want to ensure that we do not expose our application or resources to ports, protocols, or applications from where they do not expect any communication, thus minimizing the possible points of attack and letting us concentrate our mitigation efforts. In some cases, you can do this by placing your computation resources behind Content Distribution Networks (CDNs) and Load Balancers and restricting direct Internet traffic to certain parts of your infrastructure, like your database servers. In other cases, you can use firewalls or Access Control Lists (ACLs) to control what traffic reaches your applications.

Reference: AWS Shield – DDoS attack protection

Which of the following can be used to call AWS services from programming languages?

  • A. AWS SDK
  • B. AWS Console
  • C. AWS CLI
  • D. AWS IAM

Answer:

A. The AWS SDKs are available for various programming languages. Using an SDK, you can then call the required AWS services.

Reference: AWS tools
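
For example, a minimal sketch with the Python SDK (boto3), which wraps the AWS APIs as ordinary function calls:

```python
import boto3

# The SDK turns AWS API operations into ordinary function calls.
s3 = boto3.client("s3")

# ListBuckets API call via the SDK.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```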

A company wants to host a self-managed database in AWS. How would you ideally implement this solution?

  • A. Using the AWS DynamoDB service
  • B. Using the AWS RDS service
  • C. Hosting a database on an EC2 Instance
  • D. Using the Amazon Aurora service

Answer:

C. If you want a self-managed database, that means you want complete control over the database engine and the underlying infrastructure. In such a case, you need to host the database on an EC2 Instance.

Reference: AWS ec2

When creating security groups, which of the following are responsibilities of the customer? Choose 2 answers from the options given below.

  • A. Giving a name and description for the security group
  • B. Defining the rules as per the customer requirements.
  • C. Ensure the rules are applied immediately
  • D. Ensure the security groups are linked to the Elastic Network interface

Answer:

A. and B. : When you define security rules for EC2 Instances, you give the security group a name and a description, and you write its rules according to your requirements.

Reference: AWS Security Groups
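
Both responsibilities show up directly in the API. A minimal sketch with the Python SDK (boto3); the group name, description, VPC ID, and rule are placeholder assumptions:

```python
import boto3

ec2 = boto3.client("ec2")

# Customer responsibility 1: name and describe the security group.
group_id = ec2.create_security_group(
    GroupName="web-tier-sg",                    # hypothetical name
    Description="Allow HTTPS to the web tier",  # hypothetical description
    VpcId="vpc-0123456789abcdef0",              # hypothetical VPC ID
)["GroupId"]

# Customer responsibility 2: define the rules per your requirements.
ec2.authorize_security_group_ingress(
    GroupId=group_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```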

There is a requirement to host a database server for a minimum period of one year. Which of the following would result in the least cost?

  • A. Spot Instances
  • B. On-Demand
  • C. No Upfront costs Reserved
  • D. Partial Upfront costs Reserved

Answer:

D. If the database is going to be used for a minimum of one year, it is better to get Reserved Instances. You save on costs, and with the Partial Upfront option you get a better discount than with No Upfront.

Reference: AWS Reserved Instances

Which of the below can be used to import data into Amazon Glacier?
Choose 3 answers from the options given below:

  • A. AWS Glacier API
  • B. AWS Console
  • C. AWS Glacier SDK
  • D. AWS S3 Lifecycle policies

Answer:

A. C. and D. : The AWS Console cannot be used to upload data onto Glacier. The console can only be used to create a Glacier vault which can be used to upload the data.

Reference: Uploading an archive in AWS
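
As an illustration of the lifecycle-policy route, here is a minimal sketch with the Python SDK (boto3) that transitions objects to Glacier after 90 days; the bucket name, prefix, and timing are placeholder assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under a hypothetical prefix to Glacier after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)
```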

Which of the following can be used to secure EC2 Instances hosted in AWS? Choose 2 answers.

  • A. Usage of Security Groups
  • B. Usage of AMIs
  • C. Usage of Network Access Control Lists
  • D. Usage of the Internet gateway

Answer:

A. and C. : Security groups act as a virtual firewall for your instance to control inbound and outbound traffic. A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets.

Reference: VPC Security Groups and Network Access Control List

Which of the following can be used to host virtual servers on AWS?

  • A. AWS IAM
  • B. AWS Server
  • C. AWS EC2
  • D. AWS Regions

Answer:

C. AWS EC2 (Elastic Compute Cloud) is the service for hosting virtual servers on AWS.

Reference: AWS ec2

You plan to deploy an application on AWS. This application needs to be PCI Compliant. Which of the below steps are needed to ensure the compliance? Choose 2 answers from the below list:

  • A. Choose AWS services which are PCI Compliant
  • B. Ensure the right steps are taken during application development for PCI Compliance
  • C. Ensure the AWS services are made PCI Compliant
  • D. Do an audit after the deployment of the application for PCI Compliance.

Answer:

A. and B. : AWS is responsible for making its services PCI DSS compliant; the customer is responsible for choosing compliant services and for building the application itself in a compliant way.

Reference: AWS PCI DSS Level 1 FAQs

Which tool can you use to forecast your AWS spending?

  • A. AWS Organizations
  • B. Amazon DevPay
  • C. AWS Trusted Advisor
  • D. AWS Cost Explorer

Answer:

D. AWS Cost Explorer lets you dive deeper into your cost and usage data to identify trends, pinpoint cost drivers, and detect anomalies; it can also forecast your future spend based on past usage.

Reference: AWS Cost Explorer Docs

The Trusted Advisor service provides insight regarding which five categories of an AWS account?

  • A. Security, fault tolerance, high availability, performance and Service Limits
  • B. Security, access control, high availability, performance and Service Limits
  • C. Performance, cost optimization, Security, fault tolerance and Service Limits
  • D. Performance, cost optimization, Access Control, Connectivity, and Service Limits

Answer:

C. Performance, cost optimization, Security, fault tolerance and Service Limits

Reference: AWS trusted advisor

As per the AWS Acceptable Use Policy, penetration testing of EC2 instances:

  • A. May be performed by AWS, and will be performed by AWS upon customer request
  • B. May be performed by AWS, and is periodically performed by AWS
  • C. Is expressly prohibited under all circumstances
  • D. May be performed by the customer on their own instances with prior authorization from AWS
  • E. May be performed by the customer on their own instances, only if performed from EC2 instances

Answer:

D. You need to obtain authorization from AWS before doing a penetration test on EC2 instances.

Reference: AWS pen testing

What is the AWS feature that enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket?

  • A. File Transfer
  • B. HTTP Transfer
  • C. Transfer Acceleration
  • D. S3 Acceleration

Answer:

C. Transfer Acceleration. S3 Transfer Acceleration speeds up long-distance transfers by routing them through CloudFront edge locations.

Reference: AWS transfer acceleration examples
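
A minimal sketch with the Python SDK (boto3): enable acceleration on a bucket, then upload through the accelerate endpoint; the bucket and file names are placeholder assumptions:

```python
import boto3
from botocore.config import Config

# Enable Transfer Acceleration on a hypothetical bucket (one-time setup).
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="my-example-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload via the accelerate endpoint (routed through edge locations).
s3_accel = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
s3_accel.upload_file("big-dataset.bin", "my-example-bucket", "big-dataset.bin")
```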

What best describes an AWS region?

Choose the correct answer:

  • A. The physical networking connections between Availability Zones.
  • B. A specific location where an AWS data center is located.
  • C. A collection of DNS servers.
  • D. An isolated collection of AWS Availability Zones, of which there are many placed all around the world.

Answer:

D: An AWS Region is an isolated geographical area comprising multiple AWS Availability Zones.

Reference: AWS Regions and Availability Zones concepts

Question: Which of the following is a factor when calculating Total Cost of Ownership (TCO) for the AWS Cloud?

  • A. The number of servers migrated to AWS
  • B. The number of users migrated to AWS
  • C. The number of passwords migrated to AWS
  • D. The number of keys migrated to AWS

Answer:

A. Running servers will incur costs. The number of running servers is one factor of Server Costs, a key component of AWS’s Total Cost of Ownership (TCO). Reference: AWS cost calculator

Which AWS Services can be used to store files? Choose 2 answers from the options given below:

  • A. Amazon CloudWatch
  • B. Amazon Simple Storage Service (Amazon S3)
  • C. Amazon Elastic Block Store (Amazon EBS)
  • D. AWS Config
  • E. Amazon Athena

Answer:

B. and C. Amazon S3 is object storage built to store and retrieve any amount of data from anywhere. Amazon Elastic Block Store is persistent block storage for Amazon EC2.

Reference: AWS S3 and AWS EBS

Question: What best describes Amazon Web Services (AWS)?

Choose the correct answer:

  • A. AWS is the cloud.
  • B. AWS only provides compute and storage services.
  • C. AWS is a cloud services provider.
  • D. None of the above.

Answer:

C: AWS is a cloud services provider. It provides hundreds of services, including (but not limited to) compute and storage.
Reference: AWS

Question: Which AWS service can be used as a global content delivery network (CDN) service?

  • A. Amazon SES
  • B. Amazon CloudTrail
  • C. Amazon CloudFront
  • D. Amazon S3

Answer:

C: Amazon CloudFront is a web service that gives businesses and web application developers an easy and cost-effective way to distribute content with low latency and high data transfer speeds. Like other AWS services, Amazon CloudFront is a self-service, pay-per-use offering, requiring no long-term commitments or minimum fees. With CloudFront, your files are delivered to end users using a global network of edge locations. Reference: AWS CloudFront

What best describes the concept of fault tolerance?

Choose the correct answer:

  • A. The ability for a system to withstand a certain amount of failure and still remain functional.
  • B. The ability for a system to grow in size, capacity, and/or scope.
  • C. The ability for a system to be accessible when you attempt to access it.
  • D. The ability for a system to grow and shrink based on demand.

Answer:

A: Fault tolerance describes the concept of a system (in our case a web application) to have failure in some of its components and still remain accessible (highly available). Fault tolerant web applications will have at least two web servers (in case one fails).

Reference: Designing fault-tolerant applications

Question: The firm you work for is considering migrating to AWS. They are concerned about cost and the initial investment needed. Which of the following features of AWS pricing helps lower the initial investment amount needed? Choose 2 answers from the options given below:

  • A. The ability to choose the lowest cost vendor.
  • B. The ability to pay as you go
  • C. No upfront costs
  • D. Discounts for upfront payments

Answer:

B. and C. : The best features of moving to the AWS Cloud are no upfront costs and the ability to pay as you go, where the customer only pays for the resources needed. Reference: AWS pricing

What best describes the concept of elasticity?

Choose the correct answer:

  • A. The ability for a system to grow in size, capacity, and/or scope.
  • B. The ability for a system to grow and shrink based on demand.
  • C. The ability for a system to withstand a certain amount of failure and still remain functional.
  • D. The ability for a system to be accessible when you attempt to access it.

Answer:

B: Elasticity (think of a rubber band) defines a system that can easily (and cost-effectively) grow and shrink based on required demand.

Reference: Cost optimization – automating elasticity

Question: Your company has started using AWS. Your IT Security team is concerned with the security of hosting resources in the Cloud. Which AWS service provides security optimization recommendations that could help the IT Security team secure resources using AWS?

  • A. AWS API Gateway
  • B. Reserved Instances
  • C. AWS Trusted Advisor
  • D. AWS Spot Instances

Answer:

C: An online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment, Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices. Reference: AWS Trusted Advisor

What is the relationship between AWS global infrastructure and the concept of high availability?

Choose the correct answer:

  • A. AWS is centrally located in one location and is subject to widespread outages if something happens at that one location.
  • B. AWS regions and Availability Zones allow for redundant architecture to be placed in isolated parts of the world.
  • C. Each AWS region handles a different AWS services, and you must use all regions to fully use AWS.
  • D. None of the above

Answer

B: As an AWS user, you can create your application's infrastructure and duplicate it. By placing duplicate infrastructure in multiple regions, high availability is created: if one region fails, you have a backup (in another region) to use.

Reference: RDS Multi-AZ concepts

Question: You are hosting a number of EC2 Instances on AWS. You are looking to monitor CPU Utilization on the Instance. Which service would you use to collect and track performance metrics for AWS services?

  • A. Amazon CloudFront
  • B. Amazon CloudSearch
  • C. Amazon CloudWatch
  • D. AWS Managed Services

Answer:

C: Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Reference: AWS CloudWatch
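
A minimal sketch of pulling the average CPU utilization for one instance with the Python SDK (boto3); the instance ID and time window are placeholder assumptions:

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Average CPU utilization over the last hour, in 5-minute periods.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in stats["Datapoints"]:
    print(point["Timestamp"], point["Average"])
```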

Question: Which of the following support plans give access to all the checks in the Trusted Advisor service. Choose 2 answers from the options given below:

  • A. Basic
  • B. Business
  • C. Enterprise

Answer:

B. and C. : The Business and Enterprise support plans include access to the full set of Trusted Advisor checks; the Basic and Developer plans only include the core checks.

Reference: AWS premium support compare plans

Question: Which of the following in AWS maps to a separate geographic location?

  • A. AWS Region
  • B. AWS Data Centers
  • C. AWS Availability Zone

Answer:

A: Amazon cloud computing resources are hosted in multiple locations world-wide. These locations are composed of AWS Regions and Availability Zones. Each AWS Region is a separate geographic area. Reference: AWS Regions and Availability Zones

What best describes the concept of scalability?

Choose the correct answer:

  • A. The ability for a system to grow and shrink based on demand.
  • B. The ability for a system to grow in size, capacity, and/or scope.
  • C. The ability for a system to be accessible when you attempt to access it.
  • D. The ability for a system to withstand a certain amount of failure and still remain functional.

Answer

B: Scalability refers to the concept of a system being able to easily (and cost-effectively) scale UP. For web applications, this means the ability to easily add server capacity when demand requires.

Reference: AWS Auto Scaling

Question: If you wanted to monitor all events in your AWS account, which of the below services would you use?

  • A. AWS CloudWatch
  • B. AWS CloudWatch Logs
  • C. AWS Config
  • D. AWS CloudTrail

Answer:

D: AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. Reference: AWS CloudTrail

What are the four primary benefits of using the cloud/AWS?

Choose the correct answer:

  • A. Fault tolerance, scalability, elasticity, and high availability.
  • B. Elasticity, scalability, easy access, limited storage.
  • C. Fault tolerance, scalability, sometimes available, unlimited storage
  • D. Unlimited storage, limited compute capacity, fault tolerance, and high availability.

Answer:

A: Fault tolerance, scalability, elasticity, and high availability are the four primary benefits of AWS/the cloud.

What best describes a simplified definition of the “cloud”?

Choose the correct answer:

  • A. All the computers in your local home network.
  • B. Your internet service provider
  • C. A computer located somewhere else that you are utilizing in some capacity.
  • D. An on-premise data center that your company owns.

Answer:

C: The simplest definition of the cloud is a computer located somewhere else that you are utilizing in some capacity. AWS is a cloud services provider, as it provides access to computers it owns (located at AWS data centers) that you use for various purposes.

Question: Your development team is planning to host a development environment on the cloud. This consists of EC2 and RDS instances. This environment will probably only be required for 2 months. Which types of instances would you use for this purpose?

  • A. On-Demand
  • B. Spot
  • C. Reserved
  • D. Dedicated

Answer:

A: The best and most cost-effective option would be to use On-Demand Instances. The AWS documentation gives the following additional information on On-Demand EC2 Instances: with On-Demand instances, you only pay for the EC2 instances you use. The use of On-Demand instances frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs. Reference: AWS EC2 On-Demand pricing

Question: Which of the following can be used to secure EC2 Instances?

  • A. Security Groups
  • B. EC2 Lists
  • C. AWS Configs
  • D. AWS CloudWatch

Answer:

A: A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don’t specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC. Reference: VPC Security Groups

Exam Topics:

The AWS Cloud Practitioner exam is broken down into 4 domains:

  • Cloud Concepts
  • Security
  • Technology
  • Billing and Pricing.

What is the purpose of a DNS server?

Choose the correct answer:

  • A. To act as an internet search engine.
  • B. To protect you from hacking attacks.
  • C. To convert common language domain names to IP addresses.
  • D. To serve web application content.

Answer:

C: Domain name system servers act as a “third party” that provides the service of converting common language domain names to IP addresses (which are required for a web browser to properly make a request for web content).

What best describes the concept of high availability?

Choose the correct answer:

  • A. The ability for a system to grow in size, capacity, and/or scope.
  • B. The ability for a system to withstand a certain amount of failure and still remain functional.
  • C. The ability for a system to grow and shrink based on demand.
  • D. The ability for a system to be accessible when you attempt to access it.

Answer:

D: High availability refers to the concept that something will be accessible when you try to access it. An object or web application is “highly available” when it is accessible a vast majority of the time.

What is the major difference between AWS’s RDS and DynamoDB database services?

Choose the correct answer:

  • A. RDS offers NoSQL database options, and DynamoDB offers SQL database options.
  • B. RDS offers one SQL database option, and DynamoDB offers many NoSQL database options.
  • C. RDS offers SQL database options, and DynamoDB offers a NoSQL database option.
  • D. None of the above

Answer:

C. RDS is a SQL database service (that offers several database engine options), and DynamoDB is a NoSQL database option that only offers one NoSQL engine.

What are two open source in-memory engines supported by ElastiCache?

Choose the 2 correct answers:

  • A. CacheIt
  • B. Aurora
  • C. MemcacheD
  • D. Redis

Answer:

C. and D. : Redis and Memcached.

Reference: AWS ElastiCache

What AWS database service is used for data warehousing of petabytes of data?

Choose the correct answer:

  • A. RDS
  • B. ElastiCache
  • C. Redshift
  • D. DynamoDB

Answer:

C. Redshift is a fully-managed data warehouse that is perfect for storing petabytes worth of data.

Reference: AWS Redshift

Which AWS service uses a combination of publishers and subscribers?

Choose the correct answer:

  • A. Lambda
  • B. RDS
  • C. EC2
  • D. SNS

Answer:

D. In SNS, there are two types of clients: publishers and subscribers. Publishers send the message, and subscribers receive the message.

Reference: AWS SNS
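
A minimal sketch of the publisher/subscriber flow with the Python SDK (boto3); the topic name and email address are placeholder assumptions:

```python
import boto3

sns = boto3.client("sns")

# Create a topic; subscribers attach to it, publishers send to it.
topic_arn = sns.create_topic(Name="order-events")["TopicArn"]

# Subscriber: receives a confirmation email, then gets every message.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="email",                 # could also be "sms", "sqs", "lambda", ...
    Endpoint="ops-team@example.com",  # hypothetical address
)

# Publisher: one publish fans out to all confirmed subscribers.
sns.publish(TopicArn=topic_arn, Message="Order 42 has shipped")
```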

What SQL database engine options are available in RDS?

Choose the 3 correct answers:

  • A. MySQL
  • B. MongoDB
  • C. PostgreSQL
  • D. MariaDB

Answer:

A. C. and D. RDS offers the following SQL engine options: Aurora, MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server.

What is the name of AWS’s RDS SQL database engine?

Choose the correct answer:

  • A. Lightsail
  • B. Aurora
  • C. MySQL
  • D. SNS

Answer:

B. AWS created their own custom SQL database engine, which is called Aurora.

Reference: AWS Aurora

Under what circumstances would you choose to use the AWS service CloudTrail?

Choose the correct answer:

  • A. When you want to log what actions various IAM users are taking in your AWS account.
  • B. When you want a serverless compute platform.
  • C. When you want to collect and view resource metrics.
  • D. When you want to send SMS notifications based on events that occur in your account.

Answer:

A. When you want to log what actions various IAM users are taking in your AWS account.

Reference: AWS Cloudtrail
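
A minimal sketch of querying that event history with the Python SDK (boto3); the username filter is a placeholder assumption:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent API actions taken by a hypothetical IAM user.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "alice"}],
    MaxResults=10,
)
for event in events["Events"]:
    print(event["EventTime"], event["EventName"])
```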

If you want to monitor the average CPU usage of your EC2 instances, which AWS service should you use?

Choose the correct answer:

  • A. CloudMonitor
  • B. CloudTrail
  • C. CloudWatch
  • D. None of the above

Answer:

C. CloudWatch is used to collect, view, and track metrics for resources (such as EC2 instances) in your AWS account.

Reference: AWS CloudWatch

What is AWS’s relational database service?

Choose the correct answer:

  • A. ElastiCache
  • B. DynamoDB
  • C. RDS
  • D. Redshift

Answer:

C. RDS offers SQL database options – otherwise known as relational databases.

Reference: AWS RDS

If you want to have SMS or email notifications sent to various members of your department with status updates on resources in your AWS account, what service should you choose?

Choose the correct answer:

  • A. SNS
  • B. GetSMS
  • C. RDS
  • D. STS

Answer:

A. Simple Notification Service (SNS) is what publishes messages to SMS and/or email endpoints.

Reference: AWS SNS

AWS Certified Cloud Practitioner Exam Whitepapers:

AWS has provided whitepapers to help you understand the technical concepts. Below are the recommended whitepapers.

Online Training and Labs for AWS Cloud Certified Practitioner Exam

AWS Cloud Practitioners Jobs

AWS Certified Cloud Practitioner Exam info and details, How To:

The AWS Certified Cloud Practitioner Exam is a multiple choice, multiple answer exam. Here is the Exam Overview:

Additional Information for reference

Below are some useful reference links that will help you learn about the AWS Cloud Practitioner Exam.

Other Relevant and Recommended AWS Certifications

[Figure: AWS Certification Exams Roadmap]

Other AWS Facts and Summaries and Questions/Answers Dump