Follow Neal K Davis on LinkedIn and read his updates about DVA-C01. #AWS Services
What is the AWS Certified Developer Associate Exam?
The AWS Certified Developer – Associate examination is intended for individuals who perform a development role and have one or more years of hands-on experience developing and maintaining an AWS-based application. It validates an examinee’s ability to:
Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
Demonstrate proficiency in developing, deploying, and debugging cloud-based applications using AWS
There are two types of questions on the examination:
Multiple-choice: Has one correct response and three incorrect responses (distractors).
Multiple-response: Has two correct responses out of five options.
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that an examinee with incomplete knowledge or skill would likely choose. However, they are generally plausible responses that fit in the content area defined by the test objective. Unanswered questions are scored as incorrect; there is no penalty for guessing.
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
Understand bastion hosts, and which subnet one might live on. Bastion hosts are instances that sit within your public subnet and are typically accessed using SSH or RDP. Once remote connectivity has been established with the bastion host, it then acts as a ‘jump’ server, allowing you to use SSH or RDP to log in to other instances (within private subnets) deeper within your network. When properly configured through the use of security groups and Network ACLs, the bastion essentially acts as a bridge to your private instances via the Internet. Bastion Hosts
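To make the bastion pattern concrete, here is a minimal boto3 (Python) sketch; it is an illustration rather than something from the original post, and the group IDs and admin CIDR are hypothetical placeholders. SSH into the bastion is allowed only from an admin IP, and SSH into the private instances is allowed only from the bastion's security group:

```python
import boto3  # assumes AWS credentials/region are configured

ec2 = boto3.client("ec2")

BASTION_SG = "sg-0bastion00example"   # hypothetical bastion security group
PRIVATE_SG = "sg-0private00example"   # hypothetical private-subnet security group
ADMIN_CIDR = "203.0.113.10/32"        # hypothetical admin public IP

# Allow SSH to the bastion only from the admin IP
ec2.authorize_security_group_ingress(
    GroupId=BASTION_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": ADMIN_CIDR, "Description": "Admin SSH"}],
    }],
)

# Allow SSH to private instances only from the bastion's security group
ec2.authorize_security_group_ingress(
    GroupId=PRIVATE_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "UserIdGroupPairs": [{"GroupId": BASTION_SG, "Description": "SSH from bastion"}],
    }],
)
```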
3. Know the difference between Directory Service’s AD Connector and Simple AD. Use Simple AD if you need an inexpensive Active Directory–compatible service with the common directory features. AD Connector lets you simply connect your existing on-premises Active Directory to AWS. AD Connector and Simple AD
4. Know how to enable cross-account access with IAM: To delegate permission to access a resource, you create an IAM role that has two policies attached. The permissions policy grants the user of the role the needed permissions to carry out the desired tasks on the resource. The trust policy specifies which trusted accounts are allowed to grant their users permission to assume the role. The trust policy on the role in the trusting account is one half of the permissions; the other half is a permissions policy attached to the user in the trusted account that allows that user to switch to, or assume, the role. Enable cross-account access with IAM
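Once the role and its two policies exist, assuming the role is a single STS call. A minimal boto3 sketch (the role ARN is a hypothetical placeholder, not from the original tip):

```python
import boto3

ROLE_ARN = "arn:aws:iam::111122223333:role/CrossAccountS3Access"  # hypothetical

sts = boto3.client("sts")
resp = sts.assume_role(RoleArn=ROLE_ARN, RoleSessionName="demo-session")
creds = resp["Credentials"]  # temporary credentials for the trusting account

# Use the temporary credentials to act in the trusting account
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```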
Know when Elastic IPs are free or not: If you associate additional EIPs with that instance, you will be charged for each additional EIP associated with that instance per hour on a pro rata basis. Additional EIPs are only available in Amazon VPC. To ensure efficient use of Elastic IP addresses, AWS imposes a small hourly charge when these IP addresses are not associated with a running instance, or when they are associated with a stopped instance or unattached network interface. When are AWS Elastic IPs Free or not?
9. Know the four high-level categories of information Trusted Advisor supplies. #AWS Trusted Advisor
10. Know how to troubleshoot a connection timeout error when trying to connect to an instance in your VPC. You need a security group rule that allows inbound traffic from your public IP address on the proper port; you need a route that sends all traffic destined outside the VPC (0.0.0.0/0) to the Internet gateway for the VPC; the network ACLs must allow inbound and outbound traffic from your public IP address on the proper port; etc. #AWS Connection time out error
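A quick way to check the first two items is to pull the instance's security group rules and its subnet's routes. This boto3 sketch is illustrative only, and the instance ID is a hypothetical placeholder:

```python
import boto3

ec2 = boto3.client("ec2")
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical

inst = ec2.describe_instances(InstanceIds=[INSTANCE_ID])["Reservations"][0]["Instances"][0]

# 1. Inspect the security groups attached to the instance
sg_ids = [g["GroupId"] for g in inst["SecurityGroups"]]
for sg in ec2.describe_security_groups(GroupIds=sg_ids)["SecurityGroups"]:
    print(sg["GroupId"], sg["IpPermissions"])  # look for your IP/port in an inbound rule

# 2. Inspect the subnet's route table for a 0.0.0.0/0 route to an Internet gateway
rts = ec2.describe_route_tables(
    Filters=[{"Name": "association.subnet-id", "Values": [inst["SubnetId"]]}]
)
for rt in rts["RouteTables"]:
    for route in rt["Routes"]:
        print(route.get("DestinationCidrBlock"), route.get("GatewayId"))
```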
11. Be able to identify multiple possible use cases and eliminate non-use cases for SWF. #AWS
12. Understand how you might set up consolidated billing and cross-account access such that individual divisions’ resources are isolated from each other, but corporate IT can oversee all of it. #AWS Set up consolidated billing
Know how you would go about making changes to an Auto Scaling group, fully understanding what you can and can’t change. “You can only specify one launch configuration for an Auto Scaling group at a time, and you can’t modify a launch configuration after you’ve created it. Therefore, if you want to change the launch configuration for your Auto Scaling group, you must create a launch configuration and then update your Auto Scaling group with the new launch configuration. When you change the launch configuration for your Auto Scaling group, any new instances are launched using the new configuration parameters, but existing instances are not affected.” #AWS Make Change to Auto Scaling group
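In boto3 terms, the create-then-update flow looks roughly like this (the names and AMI ID are hypothetical placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Launch configurations are immutable, so create a new one...
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v2",   # hypothetical
    ImageId="ami-0123456789abcdef0",       # hypothetical
    InstanceType="t3.small",
)

# ...then point the Auto Scaling group at it. Existing instances keep running
# on the old configuration; only newly launched instances use web-lc-v2.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",        # hypothetical
    LaunchConfigurationName="web-lc-v2",
)
```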
15. Know which field you use to run a script upon launching your instance. #AWS User data script
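That field is UserData. As a rough boto3 illustration (the AMI ID is a hypothetical placeholder), the script below installs a web server on first boot:

```python
import boto3

ec2 = boto3.client("ec2")

# The user data script runs on first boot (as root on Amazon Linux)
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical Amazon Linux AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,               # boto3 base64-encodes this for you
)
```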
16. Know how DynamoDB (durable, and you can pay for strong consistency), ElastiCache (great for speed, not so durable), and S3 (eventual consistency results in lower latency) compare to each other in terms of durability and low latency. #AWS DynamoDB consistency
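For DynamoDB specifically, strong consistency is a per-read choice. A minimal boto3 sketch (the table name and key are hypothetical):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Default: eventually consistent read (cheaper, lower latency)
dynamodb.get_item(TableName="Orders", Key={"OrderId": {"S": "1234"}})

# Strongly consistent read: consumes twice the read capacity,
# but returns the latest committed write
dynamodb.get_item(
    TableName="Orders",
    Key={"OrderId": {"S": "1234"}},
    ConsistentRead=True,
)
```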
17. Know the difference between bucket policies, IAM policies, and ACLs for use with S3, and examples of when you would use each. “With IAM policies, companies can grant IAM users fine-grained control to their Amazon S3 bucket or objects while also retaining full control over everything the users do. With bucket policies, companies can define rules which apply broadly across all requests to their Amazon S3 resources, such as granting write privileges to a subset of Amazon S3 resources. Customers can also restrict access based on an aspect of the request, such as HTTP referrer and IP address. With ACLs, customers can grant specific permissions (i.e. READ, WRITE, FULL_CONTROL) to specific users for an individual bucket or object.” #AWS Difference between bucket policies
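As an illustration of the bucket-policy case (the bucket name and account ID are hypothetical), granting another account read access to every object in a bucket might look like:

```python
import json

import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPartnerRead",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # hypothetical account
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}
s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))
```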
Understand how you can use ELB cross-zone load balancing to ensure even distribution of traffic to EC2 instances in multiple AZs registered with a load balancer. #AWS ELB cross-zone load balancing
Spot instances are good for cost optimization, even if it seems you might need to fall back to On-Demand instances if you wind up getting kicked off them and the timeline grows tighter. The primary (but still not only) factor seems to be whether you can gracefully handle instances that die on you–which is pretty much how you should always design everything, anyway! #AWS Spot instances
22. The term “use case” is not the same as “function” or “capability”. A use case is something that your app/system will need to accomplish, not just behaviour that you will get from that service. In particular, a use case doesn’t require that the service be a 100% turnkey solution for that situation, just that the service plays a valuable role in enabling it. #AWS use case
23. There might be extra, unnecessary information in some of the questions (red herrings), so try not to get thrown off by them. Understand what services can and can’t do, but don’t ignore “obvious”-but-still-correct answers in favour of super-tricky ones. #AWS Exam Answers: Distractors
24. If you don’t know what a question is trying to ask, just move on and come back to it later (by using the helpful “mark this question” feature in the exam tool). You could easily spend way more time than you should on a single confusing question if you don’t triage and move on. #AWS Exam: Skip questions that are vague and come back to them later
25. Some exam questions required you to understand features and use cases of: VPC peering, cross-account access, Direct Connect, snapshotting EBS RAID arrays, DynamoDB, spot instances, Glacier, AWS/user security responsibilities, etc. #AWS
26. Know the 30-day minimum storage constraint in an S3 lifecycle policy before objects can transition to the S3 Standard-IA and S3 One Zone-IA storage classes. #AWS S3 lifecycle policy
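A minimal lifecycle rule showing that constraint (the bucket and prefix are hypothetical); transitions to Standard-IA or One Zone-IA cannot be set earlier than 30 days:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",  # hypothetical
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-down",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # 30-day minimum
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }],
    },
)
```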
Watch A Cloud Guru video lectures while commuting or on your lunch break – reschedule the exam if you are not yet ready. #AWS ACloud Guru
36. Watch Linux Academy video lectures while commuting or on your lunch break – reschedule the exam if you are not yet ready. #AWS Linux Academy
37. Watch Udemy video lectures while commuting or on your lunch break – reschedule the exam if you are not yet ready. #AWS Udemy
38. The Udemy practice test interface is good in that it pinpoints your weak areas, so what I did was re-watch all the videos for the topics I got wrong. Since I was able to gauge my exam readiness, I decided to push my exam out two more weeks to focus on completing the practice tests. #AWS Udemy
39. Use AWS cheat sheets – I also found the cheat sheets provided by Tutorials Dojo very helpful. In my opinion, they are better than Jayendrapatil Patil’s blog since they contain more updated information that complements your review notes. #AWS Cheat Sheet
40. Watch this 3-hour exam readiness video; it is a very recent webinar that covers what is expected in the exam. #AWS Exam Prep Video
41. Start off watching Ryan’s videos. Try to focus completely on the hands-on labs. Take your time to understand what you are trying to learn and achieve in those lab sessions. #AWS Exam Prep Video
42. Do not rush into completing the videos. Take your time and hone the basics. Focus and spend a lot of time on the backbone of AWS infrastructure – Compute/EC2, Storage (S3/EBS/EFS), Networking (Route 53/Load Balancers), RDS, and VPC. These sections are vast, with lots of concepts to go over and loads to learn. Trust me, you will need to thoroughly understand each one of them to pass the certification comfortably. #AWS Exam Prep Video
43. Make sure you go through the resources section and the AWS documentation for each component. Go over the FAQs. If you have a question, post it in the community; trust me, each answer helps you understand more about AWS. #AWS FAQs
44. Like any other product/service, each AWS offering comes in different flavors. Take EC2 as an example (Spot/Reserved/Dedicated/On-Demand, etc.). Make sure you understand what they are and the pros/cons of each; the same applies to all other offerings. #AWS Services
45. Be sure to attempt all quizzes after each section, but do not treat these quizzes as your practice exams. They are designed mostly to test your knowledge of the section you just finished. The exam itself tests you with scenarios and questions where you will need to recall and apply your knowledge of the different AWS technologies/services you learn over multiple lectures. #AWS Services
46. I personally do not recommend attempting a practice exam or exam simulator until you have done all of the above; it was a little overwhelming for me. I had thoroughly gone over the videos and understood the concepts pretty well, but once I opened the exam simulator I felt the questions were pretty difficult, and I had a feeling the videos did not cover a lot of topics. Later I realized that, given the vastness of AWS services and offerings, it is really difficult to encompass all of those services and their details in course content. The fact that these services keep changing so often does not help either. #AWS Services
47. Go back and make a note of all topics that felt unfamiliar to you. Go through the resources section and find links to the AWS documentation. After going over them, you should gain at least 5-10% more knowledge of AWS. Treat the online courses as a way to get a thorough understanding of the basics and strong foundations for your AWS knowledge, but once you are done with the videos, make sure you spend a lot of time on the AWS documentation and FAQs. There are many topics and subtopics that may not be covered in the course, and you will need to know at least their basic functionality to do well in the exam. #AWS Services
48. Once you start taking practice exams, it may seem really difficult at the beginning, so please do not panic if you find the questions complicated or difficult. In my opinion they are designed and worded to sound complicated, but they are not. Be calm and read each question very carefully. In my observation, many questions contain a lot of information that is sometimes not relevant to the solution you are expected to provide. Read the question slowly, and read it again until you understand what is expected of you. #AWS Services
49. With each practice exam you will come across topics you may need to scale up your knowledge on or learn from scratch. #AWS Services
50. With each test and the subsequent revision, you will surely feel more confident. You get 130 minutes for the questions, 2 minutes per question, which is plenty of time. Take at least 8-10 practice tests; the ones on Udemy/Tutorials Dojo are really good, and if you are an A Cloud Guru member, their exam simulator is really good too. Manage your time well and keep patient. Someone mentioned in one of the discussions not to underestimate the mental focus and strength needed to sit through 130 minutes solving these questions, and it is really true: do not give away or waste any of those precious 130 minutes. While answering, flag/mark the questions you are not completely sure about. My advice is, even if you finish early, spend the remaining time reviewing your answers. I could review 40 of my answers at the end of the test and rectified at least 3 of them (which is 4-5% of the total score, I think). So in short: put a lot of focus on making your foundations strong, go through the AWS documentation and FAQs, try to envision how all of the AWS components can fit together to provide an optimal solution, and keep calm. This video gives an outline of the exam; it is worth watching before or after Ryan’s course. #AWS Services
51. Walking you through how to best prepare for the AWS Certified Solutions Architect Associate SAA-C02 exam in 5 steps: 1. Understand the exam blueprint. 2. Learn about the new topics included in the SAA-C02 version of the exam. 3. Use the many FREE resources available to gain and deepen your knowledge. 4. Enroll in our hands-on video course to learn AWS in depth. 5. Use practice tests to fully prepare yourself for the exam and assess your exam readiness. AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02: HOW TO BEST PREPARE IN 5 STEPS
52. Storage: 1. Know your different Amazon S3 storage tiers! You need to know the use cases, features and limitations, and relative costs; e.g. retrieval costs. 2. Amazon S3 lifecycle policies are also required knowledge — there are minimum storage times in certain tiers that you need to know. 3. For Glacier, you need to understand what it is, what it’s used for, and what the options are for retrieval times and fees. 4. For the Amazon Elastic File System (EFS), make sure you’re clear which operating systems you can use with it (just Linux). 5. For the Amazon Elastic Block Store (EBS), make sure you know when to use the different tiers including instance stores; e.g. what would you use for a datastore that requires the highest IO and the data is distributed across multiple instances? (Good instance store use case) 6. Learn about Amazon FSx. You’ll need to know about FSx for Windows and Lustre. 7. Know how to improve Amazon S3 performance including using CloudFront, and byte-range fetches — check out this whitepaper. 8. Make sure you understand the Amazon S3 object deletion protection options, including versioning and MFA delete. AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02: HOW TO BEST PREPARE IN 5 STEPS
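For item 7 above, a byte-range fetch is just a ranged GET; parallel ranged GETs over different byte ranges can download a large object faster than one big GET. A boto3 sketch (the bucket and key are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Fetch only the first 1 MiB of the object
resp = s3.get_object(
    Bucket="example-bucket",   # hypothetical
    Key="big/dataset.bin",     # hypothetical
    Range="bytes=0-1048575",
)
chunk = resp["Body"].read()
print(len(chunk), resp["ContentRange"])
```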
53. Compute: 1. You need to have a good understanding of the options for how to scale an Auto Scaling group using metrics such as SQS queue depth, or numbers of SNS messages. 2. Know your different Auto Scaling policies including Target Tracking Policies. 3. Read up on High Performance Computing (HPC) with AWS. You’ll need to know about Amazon FSx with HPC use cases. 4. Know your placement groups. Make sure you can differentiate between spread, cluster and partition; e.g. what would you use for lowest latency? What about if you need to support an app that’s tightly coupled? Within an AZ or cross-AZ? 5. Make sure you know the difference between Elastic Network Adapters (ENAs), Elastic Network Interfaces (ENIs) and Elastic Fabric Adapters (EFAs). 6. For the Amazon Elastic Container Service (ECS), make sure you understand how to assign IAM policies to ECS for providing S3 access. How can you decouple an ECS data processing process — Kinesis Firehose or SQS? 7. Make sure you’re clear on the different EC2 pricing models including Reserved Instances (RI) and the different RI options such as scheduled RIs. 8. Make sure you know the maximum execution time for AWS Lambda (it’s currently 900 seconds or 15 minutes). AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02: HOW TO BEST PREPARE IN 5 STEPS
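For item 2 above, a target tracking policy in boto3 might look like this (the group and policy names are hypothetical); the group scales in and out to hold the average metric near the target:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",     # hypothetical
    PolicyName="cpu-50-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,            # keep average CPU around 50%
    },
)
```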
54. Network: 1. Understand what AWS Global Accelerator is and its use cases. 2. Understand when to use CloudFront and when to use AWS Global Accelerator. 3. Make sure you understand the different types of VPC endpoint and which require an Elastic Network Interface (ENI) and which require a route table entry. 4. You need to know how to connect multiple accounts; e.g. should you use VPC peering or a VPC endpoint? 5. Know the difference between PrivateLink and ClassicLink. 6. Know the patterns for extending a secure on-premises environment into AWS. 7. Know how to encrypt AWS Direct Connect (you can use a Virtual Private Gateway / AWS VPN). 8. Understand when to use Direct Connect vs Snowball to migrate data — lead time can be an issue with Direct Connect if you’re in a hurry. 9. Know how to prevent circumvention of Amazon CloudFront; e.g. Origin Access Identity (OAI) or signed URLs / signed cookies. AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02: HOW TO BEST PREPARE IN 5 STEPS
55. Databases: 1. Make sure you understand Amazon Aurora and Amazon Aurora Serverless. 2. Know which RDS databases can have Read Replicas and whether you can read from a Multi-AZ standby. 3. Know the options for encrypting an existing RDS database; e.g. encryption can only be enabled at creation time, otherwise you must encrypt a snapshot and create a new instance from the snapshot. 4. Know which databases are key-value stores; e.g. Amazon DynamoDB. AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02: HOW TO BEST PREPARE IN 5 STEPS
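For item 3 above, the snapshot-copy route is how you encrypt an existing unencrypted instance. A rough boto3 sketch (identifiers are hypothetical; in practice you would wait for the snapshot copy to become available before restoring):

```python
import boto3

rds = boto3.client("rds")

# Copy an unencrypted snapshot, specifying a KMS key to encrypt the copy
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="mydb-snap",            # hypothetical
    TargetDBSnapshotIdentifier="mydb-snap-encrypted",
    KmsKeyId="alias/aws/rds",
)

# Restore a new, encrypted instance from the encrypted snapshot copy
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-encrypted",
    DBSnapshotIdentifier="mydb-snap-encrypted",
)
```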
56. Application Integration: 1. Make sure you know the use cases for the Amazon Simple Queue Service (SQS) and the Simple Notification Service (SNS). 2. Understand the differences between Amazon Kinesis Firehose and SQS and when you would use each service. 3. Know how to use Amazon S3 event notifications to publish events to SQS — here’s a good “How To” article. AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02: HOW TO BEST PREPARE IN 5 STEPS
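For item 3 above, wiring S3 events to SQS is one API call on the bucket, assuming the queue's access policy already allows S3 to send messages (the bucket and queue ARN are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="example-bucket",  # hypothetical
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:111122223333:uploads-queue",  # hypothetical
            "Events": ["s3:ObjectCreated:*"],
        }],
    },
)
```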
57. Management and Governance: 1. You’ll need to know about AWS Organizations; e.g. how to migrate an account between organizations. 2. For AWS Organizations, you also need to know how to restrict actions using service control policies attached to OUs. 3. Understand what AWS Resource Access Manager is. AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02: HOW TO BEST PREPARE IN 5 STEPS
The AWS Certified Solutions Architect Associate Examination Preparation and Readiness Quiz App (SAA-C01, SAA-C02) helps you prepare and train for the AWS Certified Solutions Architect Associate exam with various question and answer dumps.
This app provides updated questions and answers and an intuitive, responsive interface that lets you browse questions horizontally and browse tips and resources vertically after completing a quiz.
Features:
100+ Questions and Answers updated frequently to get you AWS certified.
Quiz with score tracker, countdown timer, and highest-score saving. View answers after completing the quiz for each category.
Ability to navigate through the questions in each category using next and previous buttons.
Resource info page about the answer for each category and Top 60 Tips to succeed in the exam.
Latest tweets from prominent cloud evangelists and a feed of the latest technology news.
The app helps you study and practice from your mobile device with an intuitive interface.
SAA-C01 and SAA-C02 compatible
The questions and Answers are divided in 4 categories:
Design High Performing Architectures,
Design Cost Optimized Architectures,
Design Secure Applications And Architectures,
Design Resilient Architecture,
The questions and answers cover the following topics: AWS VPC, S3, DynamoDB, EC2, ECS, Lambda, API Gateway, CloudWatch, CloudTrail, Code Pipeline, Code Deploy, TCO Calculator, AWS S3, AWS DynamoDB, CloudWatch , AWS SES, Amazon Lex, AWS EBS, AWS ELB, AWS Autoscaling , RDS, Aurora, Route 53, Amazon CodeGuru, Amazon Bracket, AWS Billing and Pricing, AWS Simply Monthly Calculator, AWS cost calculator, Ec2 pricing on-demand, AWS Pricing, AWS Pay As You Go, AWS No Upfront Cost, Cost Explorer, AWS Organizations, Consolidated billing, Instance Scheduler, on-demand instances, Reserved instances, Spot Instances, CloudFront, Web hosting on S3, S3 storage classes, AWS Regions, AWS Availability Zones, Trusted Advisor, Various architectural Questions and Answers about AWS, AWS SDK, AWS EBS Volumes, EC2, S3, Containers, KMS, AWS read replicas, Cloudfront, API Gateway, AWS Snapshots, Auto shutdown Ec2 instances, High Availability, RDS, DynamoDB, Elasticity, AWS Virtual Machines, AWS Caching, AWS Containers, AWS Architecture, AWS Ec2, AWS S3, AWS Security, AWS Lambda, Bastion Hosts, S3 lifecycle policy, kinesis sharing, AWS KMS, Design High Performing Architectures, Design Cost Optimized Architectures, Design Secure Applications And Architectures, Design Resilient Architecture, AWS vs Azure vs Google Cloud, Resources, Questions, AWS, AWS SDK, AWS EBS Volumes, AWS read replicas, Cloudfront, API Gateway, AWS Snapshots, Auto shutdown Ec2 instances, High Availability, RDS, DynamoDB, Elasticity, AWS Virtual Machines, AWS Caching, AWS Containers, AWS Architecture, AWS Ec2, AWS S3, AWS Security, AWS Lambda, Load Balancing, DynamoDB, EBS, Multi-AZ RDS, Aurora, EFS, DynamoDB, NLB, ALB, Aurora, Auto Scaling, DynamoDB(latency), Aurora(performance), Multi-AZ RDS(high availability), Throughput Optimized EBS (highly sequential), SAA-CO1, SAA-CO2, Cloudwatch, CloudTrail, KMS, ElasticBeanstalk, OpsWorks, RPO vs RTO, HA vs FT, Undifferentiated Heavy Lifting, Access Management Basics, Shared Responsibility Model, Cloud Service Models, etc…
The resources sections cover the following areas: Certification, AWS training, Mock Exam Preparation Tips, Cloud Architect Training, Cloud Architect Knowledge, Cloud Technology, cloud certification, cloud exam preparation tips, cloud solution architect associate exam, certification practice exam, learn aws free, amazon cloud solution architect, question dumps, acloud guru links, tutorial dojo links, linuxacademy links, latest aws certification tweets, and post from reddit, quota, linkedin, medium, cloud exam preparation tips, aws cloud solution architect associate exam, aws certification practice exam, cloud exam questions, learn aws free, amazon cloud solution architect, amazon cloud certified solution architect associate exam questions, as certification dumps, google cloud, azure cloud, acloud, learn google cloud, learn azure cloud, cloud comparison, etc.
Abilities Validated by the Certification:
Effectively demonstrate knowledge of how to architect and deploy secure and robust applications on AWS technologies
Define a solution using architectural design principles based on customer requirements
Provide implementation guidance based on best practices to the organization throughout the life cycle of the project
Recommended Knowledge for the Certification:
One year of hands-on experience designing available, cost-effective, fault-tolerant, and scalable distributed systems on AWS.
Hands-on experience using compute, networking, storage, and database AWS services.
Hands-on experience with AWS deployment and management services.
Ability to identify and define technical requirements for an AWS-based application.
Ability to identify which AWS services meet a given technical requirement.
Knowledge of recommended best practices for building secure and reliable applications on the AWS platform.
An understanding of the basic architectural principles of building in the AWS Cloud.
An understanding of the AWS global infrastructure.
An understanding of network technologies as they relate to AWS.
An understanding of security features and tools that AWS provides and how they relate to traditional services.
Note and disclaimer: We are not affiliated with AWS, Amazon, Microsoft or Google. The questions are put together based on the certification study guide and materials available online. We also receive questions and answers from anonymous users, and we vet them to make sure they are legitimate. The questions in this app should help you pass the exam, but passing is not guaranteed. We are not responsible for any exam you do not pass.
Important: To succeed with the real exam, do not memorize the answers in this app. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
What is the AWS Certified Solution Architect Associate Exam?
This exam validates an examinee’s ability to effectively demonstrate knowledge of how to architect and deploy secure and robust applications on AWS technologies. It validates an examinee’s ability to:
Define a solution using architectural design principles based on customer requirements.
Provide implementation guidance based on best practices to the organization throughout the lifecycle of the project.
There are two types of questions on the examination:
Multiple-choice: Has one correct response and three incorrect responses (distractors).
Multiple-response: Has two correct responses out of five options.
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that an examinee with incomplete knowledge or skill would likely choose. However, they are generally plausible responses that fit in the content area defined by the test objective. Unanswered questions are scored as incorrect; there is no penalty for guessing.
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
What are the corresponding Azure and Google Cloud services for each of the AWS services?
What are the unique distinctions and similarities between AWS, Azure and Google Cloud services? For each AWS service, what is the equivalent Azure and Google Cloud service? For each Azure service, what is the corresponding Google service? Here is a side-by-side comparison of AWS, Google Cloud and Azure services.
Category: Marketplace Description: Easy-to-deploy and automatically configured third-party applications, including single virtual machine or multiple virtual machine solutions. References: [AWS]:AWS Marketplace [Azure]:Azure Marketplace [Google]:Google Cloud Marketplace Tags: #AWSMarketplace, #AzureMarketPlace, #GoogleMarketplace Differences: They are all digital catalogs with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on the respective cloud platform.
Tags: #AlexaSkillsKit, #MicrosoftBotFramework, #GoogleAssistant Differences: One major advantage Google gets over Alexa is that Google Assistant is available to almost all Android devices.
Tags: #AmazonLex, #CognitiveServices, #AzureSpeech, #Api.ai, #DialogFlow, #Tensorflow Differences: Api.ai provides a platform which is easy to learn and comprehensive enough to develop conversation actions. It is a good example of a simple approach to solving complex man-to-machine communication problems using natural language processing in proximity to machine learning. Api.ai now supports context-based conversations, which reduces the overhead of handling user context in session parameters; in Lex, on the other hand, this has to be handled in the session. Also, api.ai can be used for both voice and text based conversations (assistant actions can be easily created using api.ai).
Category: Big data and analytics: Data warehouse Description: Apache Spark-based analytics platform; managed Hadoop service; data orchestration, ETL, analytics and visualization. References: [AWS]:EMR, Data Pipeline, Kinesis Stream, Kinesis Firehose, Glue, QuickSight, Athena, CloudSearch [Azure]:Azure Databricks, Data Catalog, Cortana Intelligence, HDInsight, Power BI, Azure Data Factory, Azure Search, Azure Data Lake Analytics, Stream Analytics, Azure Machine Learning [Google]:Cloud Dataproc, Machine Learning, Cloud Datalab Tags: #EMR, #DataPipeline, #Kinesis, #Cortana, #AzureDataFactory, #AzureDataAnalytics, #CloudDataproc, #MachineLearning, #CloudDatalab Differences: All three providers offer similar building blocks: data processing, data orchestration, streaming analytics, machine learning and visualisations. AWS certainly has all the bases covered with a solid set of products that will meet most needs. Azure offers a comprehensive and impressive suite of managed analytical products, supporting open source big data solutions alongside new serverless analytical products such as Data Lake. Google provides its own twist to cloud analytics with its range of services. With Dataproc and Dataflow, Google has a strong core to its proposition. Tensorflow has been getting a lot of attention recently and there will be many who are keen to see Machine Learning come out of preview.
Category: Serverless Description: Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers. References: [AWS]:AWS Lambda [Azure]:Azure Functions [Google]:Google Cloud Functions Tags: #AWSLambda, #AzureFunctions, #GoogleCloudFunctions Differences: AWS Lambda, Microsoft Azure Functions and Google Cloud Functions all offer dynamic, configurable triggers that you can use to invoke your functions on their platforms, and all three support Node.js, Python, and C#. The beauty of serverless development is that, with minor changes, the code you write for one service should be portable to another with little effort: simply modify some interfaces, handle any input/output transforms, and an AWS Lambda Node.js function is indistinguishable from a Microsoft Azure Node.js Function. AWS Lambda provides further support for Python and Java, while Azure Functions provides support for F# and PHP. AWS Lambda runs on Amazon Linux, while Microsoft Azure Functions run in a Windows environment. AWS Lambda lets you spin up and tear down individual pieces of functionality in your application at will.
Category: Caching Description: An in-memory, distributed caching service that provides a high-performance store, typically used to offload non-transactional work from a database. References: [AWS]:AWS ElastiCache (works as an in-memory data store and cache to support the most demanding applications requiring sub-millisecond response times.) [Azure]:Azure Cache for Redis (based on the popular software Redis. It is typically used as a cache to improve the performance and scalability of systems that rely heavily on backend data-stores.) [Google]:Memcache (in-memory key-value store, originally intended for caching) Tags: #Redis, #Memcached Differences: They all support horizontal scaling via sharding, and they all improve the performance of web applications by allowing you to retrieve information from fast in-memory caches instead of relying on slower disk-based databases. ElastiCache supports Memcached and Redis. Memcached Cloud provides various data persistence options as well as remote backups for disaster recovery purposes. Redis offers persistence to disk; Memcache does not. This can be very helpful if you cache lots of data, since you remove the slowness around having a fully cold cache. Redis also offers several extra data structures that Memcache doesn’t (Lists, Sets, Sorted Sets, etc.); Memcache only has key/value pairs. Memcache is multi-threaded; Redis is single-threaded and event-driven. Redis is very fast, but it will never be multi-threaded. At high scale, you can squeeze more connections and transactions out of Memcache. Memcache also tends to be more memory efficient, which can make a big difference at the magnitude of tens or hundreds of millions of keys.
Category: Enterprise application services Description: Fully integrated cloud service providing communications, email and document management in the cloud, available on a wide variety of devices. References: [AWS]:Amazon WorkMail, Amazon WorkDocs, Amazon Kendra (Sync and Index) [Azure]:Office 365 [Google]:G Suite Tags: #AmazonWorkDocs, #Office365, #GoogleGSuite Differences: G Suite document processing applications like Google Docs are far behind Office 365’s popular Word and Excel software, but the G Suite user interface is intuitive, simple and easy to navigate, while Office 365 is clunky. Get 20% off G-Suite Business Plan with Promo Code: PCQ49CJYK7EATNC
Category: Management Description: A unified management console that simplifies building, deploying, and operating your cloud resources. References: [AWS]:AWS Management Console, Trusted Advisor, AWS Usage and Billing Report, AWS Application Discovery Service, Amazon EC2 Systems Manager, AWS Personal Health Dashboard, AWS Compute Optimizer (identify optimal AWS Compute resources) [Azure]:Azure portal, Azure Advisor, Azure Billing API, Azure Migrate, Azure Monitor, Azure Resource Health [Google]:Google Cloud Platform, Cost Management, Security Command Center, Stackdriver Tags: #AWSConsole, #AzurePortal, #GoogleCloudConsole, #TrustedAdvisor, #AzureMonitor, #SecurityCommandCenter Differences: AWS Console categorizes its Infrastructure as a Service offerings into Compute, Storage and Content Delivery Network (CDN), Database, and Networking to help businesses and individuals grow. Azure excels in the hybrid cloud space, allowing companies to integrate onsite servers with cloud offerings. Google has a strong offering in containers, since Google developed the Kubernetes standard that AWS and Azure now offer. GCP specializes in high compute offerings like big data, analytics and machine learning. It also offers considerable scale and load balancing: Google knows data centers and fast response times.
Build and connect intelligent bots that interact with your users using text/SMS, Skype, Teams, Slack, Office 365 mail, Twitter, and other popular services.
Enables both speech-to-text and text-to-speech capabilities. The Speech Services are the unification of speech-to-text, text-to-speech, and speech translation into a single Azure subscription. It’s easy to speech-enable your applications, tools, and devices with the Speech SDK, Speech Devices SDK, or REST APIs. Amazon Polly is a Text-to-Speech (TTS) service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice. With dozens of lifelike voices across a variety of languages, you can select the ideal voice and build speech-enabled applications that work in many different countries. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.
Computer Vision: Extract information from images to categorize and process visual data. Amazon Rekognition is a simple and easy to use API that can quickly analyze any image or video file stored in Amazon S3. Amazon Rekognition is always learning from new data, and we are continually adding new labels and facial recognition features to the service.
Face: Detect, identify, and analyze faces in photos.
The Virtual Assistant Template brings together a number of best practices we’ve identified through the building of conversational experiences and automates integration of components that we’ve found to be highly beneficial to Bot Framework developers.
Processes and moves data between different compute and storage services, as well as on-premises data sources at specified intervals. Create, schedule, orchestrate, and manage data pipelines.
Virtual servers allow users to deploy, manage, and maintain OS and server software. Instance types provide combinations of CPU/RAM. Users pay for what they use with the flexibility to change sizes.
Allows you to automatically change the number of VM instances. You set defined metric and thresholds that determine if the platform adds or removes instances.
Redeploy and extend your VMware-based enterprise workloads to Azure with Azure VMware Solution by CloudSimple. Keep using the VMware tools you already know to manage workloads on Azure without disrupting network, security, or data protection policies.
Azure Container Instances is the fastest and simplest way to run a container in Azure, without having to provision any virtual machines or adopt a higher-level orchestration service.
Deploy orchestrated containerized applications with Kubernetes. Simplify monitoring and cluster management through auto upgrades and a built-in operations console.
Fully managed service that enables developers to deploy microservices applications without managing virtual machines, storage, or networking. AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. App Mesh standardizes how your services communicate, giving you end-to-end visibility and ensuring high-availability for your applications.
Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers. AWS Lambda is an event-driven, serverless computing platform provided by Amazon as a part of the Amazon Web Services. It is a computing service that runs code in response to events and automatically manages the computing resources required by that code
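For reference, a Lambda function in Python is just a handler that receives an event and a context object. This is a minimal generic sketch (the event shape depends on the trigger that invokes the function):

```python
import json

def lambda_handler(event, context):
    # Log the incoming event; CloudWatch Logs captures stdout
    print("Received event:", json.dumps(event))
    return {"statusCode": 200, "body": "ok"}
```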
Managed relational database service where resiliency, scale, and maintenance are primarily handled by the platform. Amazon Relational Database Service is a distributed relational database service by Amazon Web Services. It is a web service running “in the cloud” designed to simplify the setup, operation, and scaling of a relational database for use in applications. Administration processes like patching the database software, backing up databases and enabling point-in-time recovery are managed automatically. Scaling storage and compute resources can be performed by a single API call as AWS does not offer an ssh connection to RDS instances.
An in-memory–based, distributed caching service that provides a high-performance store typically used to offload non transactional work from a database. Amazon ElastiCache is a fully managed in-memory data store and cache service by Amazon Web Services. The service improves the performance of web applications by retrieving information from managed in-memory caches, instead of relying entirely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines: Memcached and Redis.
Migration of database schema and data from one database format to a specific database technology in the cloud. AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.
Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. AWS X-Ray is an application performance management service that enables a developer to analyze and debug applications in the Amazon Web Services (AWS) public cloud. A developer can use AWS X-Ray to visualize how a distributed application is performing during development or production, and across multiple AWS regions and accounts.
A cloud service for collaborating on code development. AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. AWS CodeCommit is a source code storage and version-control service for Amazon Web Services’ public cloud customers. CodeCommit was designed to help IT teams collaborate on software development, including continuous integration and application delivery.
Collection of tools for building, debugging, deploying, diagnosing, and managing multiplatform scalable apps and services. The AWS Developer Tools are designed to help you build software like Amazon. They facilitate practices such as continuous delivery and infrastructure as code for serverless, containers, and Amazon EC2.
Built on top of the native REST API across all cloud services, various programming language-specific wrappers provide easier ways to create solutions. The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
Configures and operates applications of all shapes and sizes, and provides templates to create and manage a collection of resources. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers.
Provides a way for users to automate the manual, long-running, error-prone, and frequently repeated IT tasks. AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.
Provides an isolated, private environment in the cloud. Users have control over their virtual networking environment, including selection of their own IP address range, creation of subnets, and configuration of route tables and network gateways.
Connects Azure virtual networks to other Azure virtual networks, or customer on-premises networks (Site To Site). Allows end users to connect to Azure services through VPN tunneling (Point To Site).
A service that hosts domain names, plus routes users to Internet applications, connects user requests to datacenters, manages traffic to apps, and improves app availability with automatic failover.
Application Gateway is a layer 7 load balancer. It supports SSL termination, cookie-based session affinity, and round robin for load-balancing traffic.
Azure Digital Twins is an IoT service that helps you create comprehensive models of physical environments. Create spatial intelligence graphs to model the relationships and interactions between people, places, and devices. Query data from a physical space rather than disparate sensors.
Provides analysis of cloud resource configuration and security so subscribers can ensure they’re making use of best practices and optimum configurations.
Allows users to securely control access to services and resources while offering data security and protection. Create and manage users and groups, and use permissions to allow and deny access to resources.
Role-based access control (RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.
Provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication that are fully compatible with Windows Server Active Directory.
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements.
Azure management groups provide a level of scope above subscriptions. You organize subscriptions into containers called “management groups” and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Management groups give you enterprise-grade management at a large scale, no matter what type of subscriptions you have.
Helps you protect and safeguard your data and meet your organizational security and compliance commitments.
Key Management Service: [AWS]:AWS KMS, CloudHSM [Azure]:Key Vault
Provides security solution and works with other services by providing a way to manage, create, and control encryption keys stored in hardware security modules (HSM).
Provides inbound protection for non-HTTP/S protocols, outbound network-level protection for all ports and protocols, and application-level protection for outbound HTTP/S.
An automated security assessment service that improves the security and compliance of applications. Automatically assess applications for vulnerabilities or deviations from best practices.
Object storage service, for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.
Provides a simple interface to create and configure file systems quickly, and share common files. Can be used with traditional protocols that access files over a network.
Easily join your distributed microservice architectures into a single global application using HTTP load balancing and path-based routing rules. Automate turning up new regions and scale-out with API-driven global actions, and independent fault-tolerance to your back end microservices in Azure—or anywhere.
Cloud technology to build distributed applications using out-of-the-box connectors to reduce integration challenges. Connect apps, data and devices on-premises or in the cloud.
Serverless technology for connecting apps, data and devices anywhere, whether on-premises or in the cloud for large ecosystems of SaaS and cloud-based connectors.
Azure Stack is a hybrid cloud platform that enables you to run Azure services in your company’s or service provider’s datacenter. As a developer, you can build apps on Azure Stack. You can then deploy them to either Azure Stack or Azure, or you can build truly hybrid apps that take advantage of connectivity between an Azure Stack cloud and Azure.
Basically, it all comes down to what your organizational needs are and if there’s a particular area that’s especially important to your business (ex. serverless, or integration with Microsoft applications).
Some of the main things it comes down to is compute options, pricing, and purchasing options.
Here’s a brief comparison of the compute option features across cloud providers:
Here’s an example of a few instances’ costs (all are Linux OS):
Each provider offers a variety of options to lower costs from the listed On-Demand prices. These can fall under reservations, spot and preemptible instances and contracts.
Both AWS and Azure offer a way for customers to purchase compute capacity in advance in exchange for a discount: AWS Reserved Instances and Azure Reserved Virtual Machine Instances. There are a few interesting variations between the instances across the cloud providers which could affect which is more appealing to a business.
Another discounting mechanism is the idea of spot instances in AWS and low-priority VMs in Azure. These options allow users to purchase unused capacity for a steep discount.
With AWS and Azure, enterprise contracts are available. These are typically aimed at enterprise customers, and encourage large companies to commit to specific levels of usage and spend in exchange for an across-the-board discount – for example, AWS EDPs and Azure Enterprise Agreements.
You can read more about the differences between AWS and Azure to help decide which your business should use in this blog post
The AWS Certified Cloud Practitioner Exam (CLF-C02) is an introduction to AWS services, and the intention is to examine the candidate’s ability to define what the AWS Cloud is and its global infrastructure. It provides an overview of AWS core services, security aspects, pricing and support services. The main objective is to provide an overall understanding of the Amazon Web Services Cloud platform. The course helps you get a conceptual understanding of AWS and covers the basics of AWS and cloud computing, including the services, use cases and benefits. [Get AWS CCP Practice Exam PDF Dumps here]
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
aws cloud practitioner practice questions and answers
aws cloud practitioner practice exam questions and references
Q1:For auditing purposes, your company now wants to monitor all API activity for all regions in your AWS environment. What can you use to fulfill this new requirement?
A. For each region, enable CloudTrail and send all logs to a bucket in each region.
B. Enable CloudTrail for all regions.
C. Ensure one CloudTrail is enabled for all regions.
D. Use AWS Config to enable the trail for all regions.
Answer: C. Turn on CloudTrail for all regions in your environment and CloudTrail will deliver log files from all regions to one S3 bucket. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.
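As an illustration of this answer (not part of the original explanation; the trail and bucket names are hypothetical, and the bucket must already have a policy allowing CloudTrail to write to it), a single multi-region trail in boto3:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-audit-trail",             # hypothetical
    S3BucketName="my-cloudtrail-logs",  # hypothetical; needs a CloudTrail bucket policy
    IsMultiRegionTrail=True,            # one trail, events from every region
)
cloudtrail.start_logging(Name="org-audit-trail")
```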
Use a VPC Endpoint to access S3. A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
AWS PrivateLink simplifies the security of data shared with cloud-based applications by eliminating the exposure of data to the public Internet.
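A gateway endpoint for S3 can be created with one call. This boto3 sketch is illustrative, with hypothetical VPC and route-table IDs, and the region in the service name is assumed to be us-east-1:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",             # hypothetical
    ServiceName="com.amazonaws.us-east-1.s3",  # assumes us-east-1
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],   # route S3 traffic via the endpoint
)
```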
[Get AWS CCP Practice Exam PDF Dumps here] It is AWS’s responsibility to secure edge locations and decommission the data. Under the shared responsibility model, AWS is responsible for “security of the cloud”: AWS protects the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.
Q4:You have EC2 instances running at 90% utilization and you expect this to continue for at least a year. What type of EC2 instance would you choose to ensure your costs stay at a minimum?
Answer: Reserved Instances. Reserved Instances are the best choice for instances with continuous usage and offer a reduced cost because you purchase the instance for the entire year. Amazon EC2 Reserved Instances (RI) provide a significant discount (up to 75%) compared to On-Demand pricing and provide a capacity reservation when used in a specific Availability Zone.
The AWS Simple Monthly Calculator helps customers and prospects estimate their monthly AWS bill more efficiently. Using this tool, they can add, modify and remove services from their ‘bill’ and it will recalculate their estimated monthly charges automatically.
A. Sign up for the free alert under billing preferences in the AWS Management Console.
B. Set a schedule to regularly review the Billing and Cost Management dashboard each month.
C. Create an email alert in AWS Budgets
D. In CloudWatch, create an alarm that triggers each time the limit is exceeded.
Answer: C. AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Reservation alerts are supported for Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache, and Amazon Elasticsearch reservations.
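For illustration, the same alert can be set up programmatically. Everything below (account ID, budget amount, email address) is a placeholder:

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111122223333",
    Budget={
        "BudgetName": "monthly-cost-budget",
        "BudgetLimit": {"Amount": "100", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,            # alert at 80% of the budget
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finops@example.com"}
            ],
        }
    ],
)
```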
Q7:An Edge Location is a specialized AWS data center that works with which services?
A. Lambda
B. CloudWatch
C. CloudFront
D. Route 53
Answer: C. Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you’re serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance. Lambda@Edge also lets you run Lambda functions to customize the content that CloudFront delivers, executing the functions in AWS locations closer to the viewer.
CloudFront speeds up the distribution of your content by routing each user request through the AWS backbone network to the edge location that can best serve your content. Typically, this is a CloudFront edge server that provides the fastest delivery to the viewer. Using the AWS network dramatically reduces the number of networks that your users’ requests must pass through, which improves performance. Users get lower latency—the time it takes to load the first byte of the file—and higher data transfer rates.
You also get increased reliability and availability because copies of your files (also known as objects) are now held (or cached) in multiple edge locations around the world.
Answer: A. Route 53 is a domain name system service by AWS. When a disaster does occur, it is easy to switch to secondary sites using the Route 53 service. Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.
Answer: D. The AWS documentation illustrates the spectrum of disaster recovery methods; at the far end of the spectrum, users experience the least downtime.
Q11:Your company is planning to host resources in the AWS Cloud. They want to use services which can be used to decouple resources hosted on the cloud. Which of the following services can help fulfil this requirement?
A. AWS EBS Volumes
B. AWS EBS Snapshots
C. AWS Glacier
D. AWS SQS
Answer:
D. AWS SQS: Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.
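A minimal boto3 sketch of that decoupling (queue name and message body are placeholders): the producer and consumer never talk to each other directly, only to the queue.

```python
import boto3

sqs = boto3.client("sqs")

# Producer and consumer share only the queue URL.
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]

# Producer side: enqueue work without knowing who will process it.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

# Consumer side: poll, process, then delete the message.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print("processing:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```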
A. 99.999999999% durability and 99.99% availability. The S3 Standard storage class has a rating of 99.999999999% durability (referred to as 11 nines) and 99.99% availability.
A. Redshift is a database offering that is fully-managed and used for data warehousing and analytics, including compatibility with existing business intelligence tools.
B. and C. AWS Organizations lets you centrally manage policies across multiple AWS accounts, automate AWS account creation and management, control access to AWS services, and consolidate billing across multiple AWS accounts.
Q17:There is a requirement to host a set of servers in the Cloud for a short period of 3 months. Which of the following types of instances should be chosen to be cost-effective?
A. Spot Instances
B. On-Demand
C. No Upfront costs Reserved
D. Partial Upfront costs Reserved
Answer:
B. Since the requirement is only for 3 months, the most cost-effective option is to use On-Demand Instances.
You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Logs.
Q22:A company is deploying a new two-tier web application in AWS. The company wants to store their most frequently used data so that the response time for the application is improved. Which AWS service provides the solution for the company’s requirements?
A. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone
Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.
Q23:You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 Instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost-effective way. Which of the following will meet your requirements?
When you think of cost-effectiveness, you have to choose between Spot and Reserved Instances. For a periodic processing job, the best option is Spot Instances, and since your application is designed to recover gracefully from Amazon EC2 instance failures, there is no issue even if you lose a Spot Instance, because your application can recover.
A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.
Q25:A company is deploying a two-tier, highly available web application to AWS. Which service provides durable storage for static content while utilizing lower overall CPU resources for the web tier?
A. Amazon EBS volume.
B. Amazon S3
C. Amazon EC2 instance store
D. Amazon RDS instance
Answer:
B. Amazon S3 is the default storage service that should be considered for companies. It provides durable storage for all static content.
Q26:When working on the costing for On-Demand EC2 instances, which of the following attributes determine the cost of the EC2 instance? Choose 3 answers from the options given below
Q27:You have a mission-critical application which must be globally available at all times. If this is the case, which of the below deployment mechanisms would you employ?
Always build components which are loosely coupled. This is so that even if one component does fail, the entire system does not fail. Also if you build with the assumption that everything will fail, then you will ensure that the right measures are taken to build a highly available and fault tolerant system.
Q29: You have 2 accounts in your AWS organization: one for Dev and the other for QA. All are part of consolidated billing. The master account has purchased 3 Reserved Instances. The Dev department is currently using 2 Reserved Instances. The QA team is planning on using 3 instances of the same instance type. What is the pricing tier of the instances that can be used by the QA team?
Since all accounts are part of consolidated billing, the pricing of Reserved Instances can be shared by all. Since 2 are already used by the Dev team, one more can be used by the QA team at the Reserved Instance rate. The remaining instances will be billed as On-Demand Instances.
Q32:You are exploring what services AWS has on hand. You have a large number of data sets that need to be processed. Which of the following services can help fulfil this requirement?
A. EMR
B. S3
C. Glacier
D. Storage Gateway
Answer:
A. Amazon EMR helps you analyze and process vast amounts of data by distributing the computational work across a cluster of virtual servers running in the AWS Cloud. The cluster is managed using an open-source framework called Hadoop. Amazon EMR lets you focus on crunching or analyzing your data without having to worry about time-consuming setup, management, and tuning of Hadoop clusters or the compute capacity they rely on.
Amazon Inspector enables you to analyze the behaviour of your AWS resources and helps you to identify potential security issues. Using Amazon Inspector, you can define a collection of AWS resources that you want to include in an assessment target. You can then create an assessment template and launch a security assessment run of this target.
Q34:Your company is planning to offload some of the batch processing workloads on to AWS. These jobs can be interrupted and resumed at any time. Which of the following instance types would be the most cost-effective to use for this purpose?
A. On-Demand
B. Spot
C. Full Upfront Reserved
D. Partial Upfront Reserved
Answer:
B. Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.
Note that the AWS Console cannot be used to upload data onto Glacier. The console can only be used to create a Glacier vault which can be used to upload the data.
Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, secure, and can be as little as one-fifth the cost of high-speed Internet.
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open source databases.
You can reduce the load on your source DB Instance by routing read queries from your applications to the read replica. Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to failure of any single hardware component.
Q42:Your company is planning to host a large e-commerce application on the AWS Cloud. One of their major concerns is Internet attacks such as DDoS attacks.
Which of the following services can help mitigate this concern? Choose 2 answers from the options given below
One of the first techniques to mitigate DDoS attacks is to minimize the surface area that can be attacked thereby limiting the options for attackers and allowing you to build protections in a single place. We want to ensure that we do not expose our application or resources to ports, protocols or applications from where they do not expect any communication. Thus, minimizing the possible points of attack and letting us concentrate our mitigation efforts. In some cases, you can do this by placing your computation resources behind Content Distribution Networks (CDNs), Load Balancers and restricting direct Internet traffic to certain parts of your infrastructure like your database servers. In other cases, you can use firewalls or Access Control Lists (ACLs) to control what traffic reaches your applications.
You can use the consolidated billing feature in AWS Organizations to consolidate payment for multiple AWS accounts or multiple AISPL accounts. With consolidated billing, you can see a combined view of AWS charges incurred by all of your accounts. You also can get a cost report for each member account that is associated with your master account. Consolidated billing is offered at no additional charge.
If you want a self-managed database, that means you want complete control over the database engine and the underlying infrastructure. In such a case you need to host the database on an EC2 instance.
If the database is going to be used for a minimum of one year, then it is better to get Reserved Instances. You can save on costs, and if you use a partial upfront option, you can get a better discount.
Security groups act as a virtual firewall for your instance to control inbound and outbound traffic. A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets.
Q52:You plan to deploy an application on AWS. This application needs to be PCI Compliant. Which of the below steps are needed to ensure the compliance? Choose 2 answers from the below list:
A. Choose AWS services which are PCI Compliant
B. Ensure the right steps are taken during application development for PCI Compliance
C. Ensure the AWS services are made PCI Compliant
D. Do an audit after the deployment of the application for PCI Compliance.
Q57:Which of the following is a factor when calculating Total Cost of Ownership (TCO) for the AWS Cloud?
A. The number of servers migrated to AWS
B. The number of users migrated to AWS
C. The number of passwords migrated to AWS
D. The number of keys migrated to AWS
Answer:
A. Running servers will incur costs. The number of running servers is one factor of Server Costs; a key component of AWS’s Total Cost of Ownership (TCO). Reference: AWS cost calculator
Q58:Which AWS Services can be used to store files? Choose 2 answers from the options given below:
A. Amazon CloudWatch
B. Amazon Simple Storage Service (Amazon S3)
C. Amazon Elastic Block Store (Amazon EBS)
D. AWS Config
E. Amazon Athena
B. and C. Amazon S3 is object storage built to store and retrieve any amount of data from anywhere. Amazon Elastic Block Store provides persistent block storage for Amazon EC2.
C: AWS is defined as a cloud services provider. They provide hundreds of services, of which compute and storage are included (but not limited to). Reference: AWS
Q60: Which AWS service can be used as a global content delivery network (CDN) service?
A. Amazon SES
B. Amazon CloudTrail
C. Amazon CloudFront
D. Amazon S3
Answer:
C: Amazon CloudFront is a web service that gives businesses and web application developers an easy and cost-effective way to distribute content with low latency and high data transfer speeds. Like other AWS services, Amazon CloudFront is a self-service, pay-per-use offering, requiring no long-term commitments or minimum fees. With CloudFront, your files are delivered to end-users using a global network of edge locations. Reference: AWS CloudFront
Q61:What best describes the concept of fault tolerance?
Choose the correct answer:
A. The ability for a system to withstand a certain amount of failure and still remain functional.
B. The ability for a system to grow in size, capacity, and/or scope.
C. The ability for a system to be accessible when you attempt to access it.
D. The ability for a system to grow and shrink based on demand.
Answer:
A: Fault tolerance describes the ability of a system (in our case a web application) to experience failure in some of its components and still remain accessible (highly available). Fault-tolerant web applications will have at least two web servers (in case one fails).
Q62: The firm you work for is considering migrating to AWS. They are concerned about cost and the initial investment needed. Which of the following features of AWS pricing helps lower the initial investment amount needed?
Choose 2 answers from the options given below:
A. The ability to choose the lowest cost vendor.
B. The ability to pay as you go
C. No upfront costs
D. Discounts for upfront payments
Answer:
B and C: The best features of moving to the AWS Cloud are no upfront costs and the ability to pay as you go, where the customer only pays for the resources needed. Reference: AWS pricing
Q64: Your company has started using AWS. Your IT Security team is concerned with the security of hosting resources in the Cloud. Which AWS service provides security optimization recommendations that could help the IT Security team secure resources using AWS?
Answer: AWS Trusted Advisor. An online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment, Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices. Reference: AWS trusted advisor
Q65:What is the relationship between AWS global infrastructure and the concept of high availability?
Choose the correct answer:
A. AWS is centrally located in one location and is subject to widespread outages if something happens at that one location.
B. AWS regions and Availability Zones allow for redundant architecture to be placed in isolated parts of the world.
C. Each AWS region handles a different AWS service, and you must use all regions to fully use AWS.
Answer: B. As an AWS user, you can create your application's infrastructure and duplicate it. By placing duplicate infrastructure in multiple regions, high availability is created because if one region fails you have a backup (in another region) to use.
Q66: You are hosting a number of EC2 Instances on AWS. You are looking to monitor CPU Utilization on the Instance. Which service would you use to collect and track performance metrics for AWS services?
Answer: C. Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Reference: AWS cloudwatch
Q67: Which of the following support plans gives access to all the checks in the Trusted Advisor service?
Answer: The Business and Enterprise support plans include access to the full set of Trusted Advisor checks.
Q68: Which of the following in AWS maps to a separate geographic location?
A. AWS Region
B. AWS Data Centers
C. AWS Availability Zone
Answer: A. Amazon cloud computing resources are hosted in multiple locations world-wide. These locations are composed of AWS Regions and Availability Zones. Each AWS Region is a separate geographic area. Reference: AWS Regions and Availability Zones
Q69:What best describes the concept of scalability?
Choose the correct answer:
A. The ability for a system to grow and shrink based on demand.
B. The ability for a system to grow in size, capacity, and/or scope.
C. The ability for a system to be accessible when you attempt to access it.
D. The ability for a system to withstand a certain amount of failure and still remain functional.
Answer: B. Scalability refers to the concept of a system being able to easily (and cost-effectively) scale up. For web applications, this means the ability to easily add server capacity when demand requires.
Q70: If you wanted to monitor all events in your AWS account, which of the below services would you use?
A. AWS CloudWatch
B. AWS CloudWatch logs
C. AWS Config
D. AWS CloudTrail
Answer:
D: AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. Reference: Cloudtrail
Q71:What are the four primary benefits of using the cloud/AWS?
Choose the correct answer:
A. Fault tolerance, scalability, elasticity, and high availability.
B. Elasticity, scalability, easy access, limited storage.
C. Fault tolerance, scalability, sometimes available, unlimited storage
D. Unlimited storage, limited compute capacity, fault tolerance, and high availability.
Answer: A. Fault tolerance, scalability, elasticity, and high availability are the four primary benefits of AWS/the cloud.
Q72:What best describes a simplified definition of the “cloud”?
Choose the correct answer:
A. All the computers in your local home network.
B. Your internet service provider
C. A computer located somewhere else that you are utilizing in some capacity.
D. An on-premise data center that your company owns.
Answer: C. The simplest definition of the cloud is a computer that is located somewhere else that you are utilizing in some capacity. AWS is a cloud services provider, as they provide access to computers they own (located at AWS data centers) that you use for various purposes.
Q73: Your development team is planning to host a development environment on the cloud. This consists of EC2 and RDS instances. This environment will probably only be required for 2 months.
Which types of instances would you use for this purpose?
A. On-Demand
B. Spot
C. Reserved
D. Dedicated
Answer: A. The most cost-effective option would be to use On-Demand Instances. The AWS documentation gives the following additional information on On-Demand EC2 Instances: with On-Demand Instances, you only pay for the EC2 instances you use. The use of On-Demand Instances frees you from the costs and complexities of planning, purchasing, and maintaining hardware, and transforms what are commonly large fixed costs into much smaller variable costs. Reference: AWS ec2 pricing on-demand
Q74: Which of the following can be used to secure EC2 Instances?
Answer: Security groups. A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don’t specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC. Reference: VPC Security Groups
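A short boto3 sketch of the idea, with a placeholder VPC ID and CIDR: create a group, then allow inbound SSH from one network range only.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a security group in a placeholder VPC.
sg = ec2.create_security_group(
    GroupName="web-ssh", Description="SSH from office", VpcId="vpc-0123456789abcdef0"
)

# Allow inbound SSH (TCP 22) from a single office CIDR only.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
    }],
)
```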
Q75: What is the purpose of a DNS server?
Choose the correct answer:
A. To act as an internet search engine.
B. To protect you from hacking attacks.
C. To convert common language domain names to IP addresses.
Answer: C. Domain Name System servers act as a “third party” that provides the service of converting common language domain names to IP addresses (which are required for a web browser to properly make a request for web content).
High availability refers to the concept that something will be accessible when you try to access it. An object or web application is “highly available” when it is accessible a vast majority of the time.
RDS is a SQL database service (that offers several database engine options), and DynamoDB is a NoSQL database option that only offers one NoSQL engine.
Q78: What are two open source in-memory engines supported by ElastiCache?
Answer: Redis and Memcached.
Q85:If you want to have SMS or email notifications sent to various members of your department with status updates on resources in your AWS account, what service should you choose?
Choose the correct answer:
A. SNS
B. GetSMS
C. RDS
D. STS
Answer:
Answer: A. Simple Notification Service (SNS) is what publishes messages to SMS and/or email endpoints.
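A minimal boto3 sketch, with placeholder topic name, email address, and phone number: one publish call fans out to every confirmed subscriber.

```python
import boto3

sns = boto3.client("sns")

# Create a topic and subscribe both an email address and an SMS number.
topic_arn = sns.create_topic(Name="ops-status-updates")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="team@example.com")
sns.subscribe(TopicArn=topic_arn, Protocol="sms", Endpoint="+15555550100")

# One publish fans out to every confirmed subscriber.
sns.publish(TopicArn=topic_arn, Subject="Status", Message="RDS maintenance finished.")
```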
Amazon WorkSpaces is a managed, secure Desktop-as-a-Service (DaaS) solution. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe.
Q87: Your company has recently migrated large amounts of data to the AWS cloud in S3 buckets. But it is necessary to discover and protect the sensitive data in these buckets. Which AWS service can do that?
Answer: Amazon Macie. Notes: Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.
Q88: Your Finance Department has instructed you to save costs wherever possible when using the AWS Cloud. You notice that using reserved EC2 instances on a 1year contract will save money. What payment method will save the most money?
A: Deferred
B: Partial Upfront
C: All Upfront
D: No Upfront
Answer: C
Notes: With the All Upfront option, you pay for the entire Reserved Instance term with one upfront payment. This option provides you with the largest discount compared to On Demand Instance pricing.
Q89: A fantasy sports company needs to run an application for the length of a football season (5 months). They will run the application on an EC2 instance and there can be no interruption. Which purchasing option best suits this use case?
Answer: On-Demand. Notes: Five months is not a long enough term to make Reserved Instances the better option, and the application can't be interrupted, which rules out Spot Instances. Dedicated Instances provide the option to bring along existing software licenses, but the scenario does not indicate a need for this.
Q90:Your company is considering migrating its data center to the cloud. What are the advantages of the AWS cloud over an on-premises data center?
A. Replace upfront operational expenses with low variable operational expenses.
B. Maintain physical access to the new data center, but share responsibility with AWS.
C. Replace low variable costs with upfront capital expenses.
D. Replace upfront capital expenses with low variable costs.
Answer: D. The AWS Cloud lets you trade upfront capital expense for low variable cost, paying only for what you consume.
Q91:You are leading a pilot program to try the AWS Cloud for one of your applications. You have been instructed to provide an estimate of your AWS bill. Which service will allow you to do this by manually entering your planned resources by service?
Notes: With the AWS Pricing Calculator, you can input the services you will use, and the configuration of those services, and get an estimate of the costs these services will accrue. AWS Pricing Calculator lets you explore AWS services, and create an estimate for the cost of your use cases on AWS.
Q92:Which AWS service would enable you to view the spending distribution in one of your AWS accounts?
Notes: AWS Cost Explorer is a free tool that you can use to view your costs and usage. You can view data up to the last 13 months, forecast how much you are likely to spend for the next three months, and get recommendations for what Reserved Instances to purchase. You can use AWS Cost Explorer to see patterns in how much you spend on AWS resources over time, identify areas that need further inquiry, and see trends that you can use to understand your costs. You can also specify time ranges for the data, and view time data by day or by month.
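As an illustration, Cost Explorer's API can pull the same spending distribution programmatically; the date range below is a placeholder:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Monthly unblended cost, grouped by service, for a sample time range.
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-04-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)
for period in resp["ResultsByTime"]:
    print(period["TimePeriod"]["Start"])
    for group in period["Groups"]:
        print(" ", group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```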
Q93:You are managing the company’s AWS account. The current support plan is Basic, but you would like to begin using Infrastructure Event Management. What support plan (that already includes Infrastructure Event Management without an additional fee) should you upgrade to?
A. Upgrade to Enterprise plan.
B. Do nothing. It is included in the Basic plan.
C. Upgrade to Developer plan.
D. Upgrade to the Business plan. No other steps are necessary.
Answer: A. Notes: AWS Infrastructure Event Management is a structured program available to Enterprise Support customers (and Business Support customers for an additional fee) that helps you plan for large-scale events, such as product or application launches, infrastructure migrations, and marketing events.
With Infrastructure Event Management, you get strategic planning assistance before your event, as well as real-time support during these moments that matter most for your business.
Q94:You have decided to use the AWS Cost and Usage Report to track your EC2 Reserved Instance costs. To where can these reports be published?
A. Trusted Advisor
B. An S3 Bucket that you own.
C. CloudWatch
D. An AWS owned S3 Bucket.
Answer: B
Notes: The AWS Cost and Usage Reports (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use Cost and Usage Reports to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour or day, by product or product resource, or by tags that you define yourself. AWS updates the report in your bucket once a day in comma-separated value (CSV) format. You can view the reports using spreadsheet software such as Microsoft Excel or Apache OpenOffice Calc, or access them from an application using the Amazon S3 API.
Q95:What can we do in AWS to receive the benefits of volume pricing for your multiple AWS accounts?
A. Use consolidated billing in AWS Organizations.
B. Purchase services in bulk from AWS Marketplace.
Answer: A. Notes: You can use the consolidated billing feature in AWS Organizations to consolidate billing and payment for multiple AWS accounts or multiple Amazon Internet Services Pvt. Ltd (AISPL) accounts. You can combine the usage across all accounts in the organization to share the volume pricing discounts, Reserved Instance discounts, and Savings Plans. This can result in a lower charge for your project, department, or company than with individual standalone accounts.
Q96:A gaming company is using the AWS Developer Tool Suite to develop, build, and deploy their applications. Which AWS service can be used to trace user requests from end-to-end through the application?
Notes: AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.
Q97:A company needs to use a load balancer which can serve traffic at the TCP and UDP layers. Additionally, it needs to handle millions of requests per second at very low latencies. Which load balancer should they use?
Notes: Network Load Balancer is best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Transport Layer Security (TLS) traffic where extreme performance is required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) and is capable of handling millions of requests per second while maintaining ultra-low latencies.
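A rough boto3 sketch of provisioning such a load balancer; the name and subnet IDs are placeholders, and Type="network" is what selects a Network Load Balancer:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create an internet-facing NLB across two placeholder subnets.
nlb = elbv2.create_load_balancer(
    Name="game-traffic-nlb",
    Type="network",              # "network" selects an NLB (Layer 4)
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)
print(nlb["LoadBalancers"][0]["DNSName"])
```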
Q98:Your company is migrating its services to the AWS cloud. The DevOps team has heard about infrastructure as code, and wants to investigate this concept. Which AWS service would they investigate?
Notes: AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS.
Q99:You have a MySQL database that you want to migrate to the cloud, and you need it to be significantly faster there. You are looking for a speed increase up to 5 times the current performance. Which AWS offering could you use?
Notes: Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases.
Q100:A developer is trying to programmatically retrieve information from an EC2 instance, such as public keys, the IP address, and the instance ID. From where can this information be retrieved?
Notes: This type of data is stored in instance metadata. Instance user data does not contain the information mentioned, but can be used to help configure a new instance.
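For illustration, here is a small Python sketch of reading instance metadata with IMDSv2 (fetch a session token first, then the metadata path). It only works from inside an EC2 instance, since 169.254.169.254 is the link-local metadata endpoint:

```python
import urllib.request

# IMDSv2 step 1: request a short-lived session token.
token_req = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

# IMDSv2 step 2: use the token to read a metadata value (here, the instance ID).
meta_req = urllib.request.Request(
    "http://169.254.169.254/latest/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(meta_req).read().decode())
```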
Q101: Why is AWS more economical than traditional data centers for applications with varying compute workloads?
A) Amazon EC2 costs are billed on a monthly basis. B) Users retain full administrative access to their Amazon EC2 instances. C) Amazon EC2 instances can be launched on demand when needed. D) Users can permanently run enough instances to handle peak workloads.
Answer: C Notes: The ability to launch instances on demand when needed allows users to launch and terminate instances in response to a varying workload. This is a more economical practice than purchasing enough on-premises servers to handle the peak load. Reference: Advantage of cloud computing
Q102: Which AWS service would simplify the migration of a database to AWS?
A) AWS Storage Gateway B) AWS Database Migration Service (AWS DMS) C) Amazon EC2 D) Amazon AppStream 2.0
Answer: B Notes: AWS DMS helps users migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. AWS DMS can migrate data to and from most widely used commercial and open-source databases. Reference: AWS DMS
Q103: Which AWS offering enables users to find, buy, and immediately start using software solutions in their AWS environment?
A) AWS Config B) AWS OpsWorks C) AWS SDK D) AWS Marketplace
Answer: D Notes: AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that makes it easy to find, test, buy, and deploy software that runs on AWS. Reference: AWS Marketplace
Q104: Which AWS networking service enables a company to create a virtual network within AWS?
A) AWS Config B) Amazon Route 53 C) AWS Direct Connect D) Amazon Virtual Private Cloud (Amazon VPC)
Answer: D Notes: Amazon VPC lets users provision a logically isolated section of the AWS Cloud where users can launch AWS resources in a virtual network that they define. Reference: VPC https://aws.amazon.com/vpc/
Q105: Which component of the AWS global infrastructure does Amazon CloudFront use to ensure low-latency delivery?
A) AWS Regions B) Edge locations C) Availability Zones D) Virtual Private Cloud (VPC)
Answer: B Notes: – To deliver content to users with lower latency, Amazon CloudFront uses a global network of points of presence (edge locations and regional edge caches) worldwide. Reference: Cloudfront – https://aws.amazon.com/cloudfront/
Q106: How would a system administrator add an additional layer of login security to a user’s AWS Management Console?
A) Use Amazon Cloud Directory B) Audit AWS Identity and Access Management (IAM) roles C) Enable multi-factor authentication D) Enable AWS CloudTrail
Answer: C Notes: – Multi-factor authentication (MFA) is a simple best practice that adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS Management Console, they will be prompted for their username and password (the first factor—what they know), as well as for an authentication code from their MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for AWS account settings and resources. Reference: MFA – https://aws.amazon.com/iam/features/mfa/
Q107: Which service can identify the user that made the API call when an Amazon EC2 instance is terminated?
A) AWS Trusted Advisor B) AWS CloudTrail C) AWS X-Ray D) AWS Identity and Access Management (AWS IAM)
Answer: B Notes: – AWS CloudTrail helps users enable governance, compliance, and operational and risk auditing of their AWS accounts. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs and APIs. Reference: AWS CloudTrail https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html
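To make this concrete, a short boto3 sketch that looks up who called TerminateInstances in the CloudTrail event history (the event name here is just an example):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Search the 90-day event history for TerminateInstances calls
# and print when they happened and who made them.
resp = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "TerminateInstances"}
    ],
    MaxResults=10,
)
for event in resp["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])
```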
Q108: Which service would be used to send alerts based on Amazon CloudWatch alarms?
A) Amazon Simple Notification Service (Amazon SNS) B) AWS CloudTrail C) AWS Trusted Advisor D) Amazon Route 53
Answer: A Notes: Amazon SNS and Amazon CloudWatch are integrated so users can collect, view, and analyze metrics for every active SNS. Once users have configured CloudWatch for Amazon SNS, they can gain better insight into the performance of their Amazon SNS topics, push notifications, and SMS deliveries. Reference: CloudWatch for Amazon SNS https://docs.aws.amazon.com/sns/latest/dg/sns-monitoring-using-cloudwatch.html
Q109: Where can a user find information about prohibited actions on the AWS infrastructure?
A) AWS Trusted Advisor B) AWS Identity and Access Management (IAM) C) AWS Billing Console D) AWS Acceptable Use Policy
Answer: D Notes: – The AWS Acceptable Use Policy provides information regarding prohibited actions on the AWS infrastructure. Reference: AWS Acceptable Use Policy – https://aws.amazon.com/aup/
Q110: Which of the following is an AWS responsibility under the AWS shared responsibility model?
A) Configuring third-party applications B) Maintaining physical hardware C) Securing application access and data D) Managing guest operating systems
Answer: B Notes: – Maintaining physical hardware is an AWS responsibility under the AWS shared responsibility model. Reference: AWS shared responsibility model https://aws.amazon.com/compliance/shared-responsibility-model/
Q111: Which recommendations are included in the AWS Trusted Advisor checks? (Select TWO.)
A) Amazon S3 bucket permissions
B) AWS service outages for services
C) Multi-factor authentication (MFA) use on the AWS account root user
D) Available software patches for Amazon EC2 instances
Answer: A and C
Notes: Trusted Advisor checks for S3 bucket permissions in Amazon S3 with open access permissions. Bucket permissions that grant list access to everyone can result in higher than expected charges if objects in the bucket are listed by unintended users at a high frequency. Bucket permissions that grant upload and delete access to all users create potential security vulnerabilities by allowing anyone to add, modify, or remove items in a bucket. This Trusted Advisor check examines explicit bucket permissions and associated bucket policies that might override the bucket permissions.
Trusted Advisor does not provide notifications for service outages. You can use the AWS Personal Health Dashboard to learn about AWS Health events that can affect your AWS services or account.
Trusted Advisor checks the root account and warns if MFA is not enabled.
Trusted Advisor does not provide information about the number of users in an AWS account.
What is the difference between Amazon EC2 Savings Plans and Spot Instances?
Amazon EC2 Savings Plans are ideal for workloads that involve a consistent amount of compute usage over a 1-year or 3-year term. With Amazon EC2 Savings Plans, you can reduce your compute costs by up to 72% over On-Demand costs.
Spot Instances are ideal for workloads with flexible start and end times, or that can withstand interruptions. With Spot Instances, you can reduce your compute costs by up to 90% over On-Demand costs. Unlike Amazon EC2 Savings Plans, Spot Instances do not require contracts or a commitment to a consistent amount of compute usage.
Amazon EBS vs Amazon EFS
An Amazon EBS volume stores data in a single Availability Zone. To attach an Amazon EC2 instance to an EBS volume, both the Amazon EC2 instance and the EBS volume must reside within the same Availability Zone.
Amazon EFS is a regional service. It stores data in and across multiple Availability Zones. The duplicate storage enables you to access data concurrently from all the Availability Zones in the Region where a file system is located. Additionally, on-premises servers can access Amazon EFS using AWS Direct Connect.
Which cloud deployment model allows you to connect public cloud resources to on-premises infrastructure?
Applications made available through hybrid deployments connect cloud resources to on-premises infrastructure and applications. For example, you might have an application that runs in the cloud but accesses data stored in your on-premises data center.
Which benefit of cloud computing helps you innovate and build faster?
Agility: The cloud gives you quick access to resources and services that help you build and deploy your applications faster.
Which developer tool allows you to write code within your web browser?
Cloud9 is an integrated development environment (IDE) that allows you to write code within your web browser.
Which method of accessing an EC2 instance requires both a private key and a public key?
SSH allows you to access an EC2 instance from your local laptop using a key pair, which consists of a private key and a public key.
Which service allows you to track the name of the user making changes in your AWS account?
CloudTrail tracks user activity and API calls in your account, which includes identity information (the user’s name, source IP address, etc.) about the API caller.
Which analytics service allows you to query data in Amazon S3 using Structured Query Language (SQL)?
Athena is a query service that makes it easy to analyze data in Amazon S3 using SQL.
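A minimal boto3 sketch of running such a query; the database, table, and results bucket are placeholders:

```python
import boto3

athena = boto3.client("athena")

# Run a SQL query against data in S3; results land in the output bucket as CSV.
resp = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print("query id:", resp["QueryExecutionId"])
```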
Which machine learning service helps you build, train, and deploy models quickly?
SageMaker helps you build, train, and deploy machine learning models quickly.
Which EC2 storage mechanism is recommended when running a database on an EC2 instance?
EBS is a storage device you can attach to your instances and is a recommended storage option when you run databases on an instance.
Which storage service is a scalable file system that only works with Linux-based workloads?
EFS is an elastic file system for Linux-based workloads.
Which AWS service provides a secure and resizable compute platform with choice of processor, storage, networking, operating system, and purchase model?
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. Amazon EC2 offers the broadest and deepest compute platform with choice of processor, storage, networking, operating system, and purchase model. Amazon EC2.
Which services allow you to build hybrid environments by connecting on-premises infrastructure to AWS?
Site-to-site VPN allows you to establish a secure connection between your on-premises equipment and the VPCs in your AWS account.
Direct Connect allows you to establish a dedicated network connection between your on-premises network and AWS.
What service could you recommend to a developer to automate the software release process?
CodePipeline is a developer tool that allows you to continuously automate the software release process.
Which service allows you to practice infrastructure as code by provisioning your AWS resources via scripted templates?
CloudFormation allows you to provision your AWS resources via scripted templates.
Which machine learning service allows you to add image analysis to your applications?
Rekognition is a service that makes it easy to add image analysis to your applications.
Which services allow you to run containerized applications without having to manage servers or clusters?
Fargate removes the need for you to interact with servers or clusters as it provisions, configures, and scales clusters of virtual machines to run containers for you.
ECS lets you run your containerized Docker applications on both Amazon EC2 and AWS Fargate.
EKS lets you run your containerized Kubernetes applications on both Amazon EC2 and AWS Fargate.
Amazon S3 offers multiple storage classes. Which storage class is best for archiving data when you want the cheapest cost and don’t mind long retrieval times?
S3 Glacier Deep Archive offers the lowest cost and is used to archive data. You can retrieve objects within 12 hours.
In the shared responsibility model, what is the customer responsible for?
You are responsible for patching the guest OS, including updates and security patches.
You are responsible for firewall configuration and securing your application.
A company needs phone, email, and chat access 24 hours a day, 7 days a week. The response time must be less than 1 hour if a production system has a service interruption. Which AWS Support plan meets these requirements at the LOWEST cost?
The Business Support plan provides phone, email, and chat access 24 hours a day, 7 days a week. The Business Support plan has a response time of less than 1 hour if a production system has a service interruption.
Which of the following is an advantage of consolidated billing on AWS?
Consolidated billing is a feature of AWS Organizations. You can combine the usage across all accounts in your organization to share volume pricing discounts, Reserved Instance discounts, and Savings Plans. This solution can result in a lower charge compared to the use of individual standalone accounts.
A company requires physical isolation of its Amazon EC2 instances from the instances of other customers. Which instance purchasing option meets this requirement?
With Dedicated Hosts, a physical server is dedicated for your use. Dedicated Hosts provide visibility and the option to control how you place your instances on an isolated, physical server. For more information about Dedicated Hosts, see Amazon EC2 Dedicated Hosts.
A company is hosting a static website from a single Amazon S3 bucket. Which AWS service will achieve lower latency and high transfer speeds?
CloudFront is a web service that speeds up the distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. Content is cached in edge locations. Content that is repeatedly accessed can be served from the edge locations instead of the source S3 bucket. For more information about CloudFront, see Accelerate static website content delivery.
Which AWS service provides a simple and scalable shared file storage solution for use with Linux-based Amazon EC2 instances and on-premises servers?
Amazon EFS provides an elastic file system that lets you share file data without the need to provision and manage storage. It can be used with AWS Cloud services and on-premises resources, and is built to scale on demand to petabytes without disrupting applications. With Amazon EFS, you can grow and shrink your file systems automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
Which service allows you to generate encryption keys managed by AWS?
KMS allows you to generate and manage encryption keys. The keys generated by KMS are managed by AWS.
Which service can integrate with a Lambda function to automatically take remediation steps when it uncovers suspicious network activity when monitoring logs in your AWS account?
GuardDuty can perform automated remediation actions by leveraging Amazon CloudWatch Events and AWS Lambda. GuardDuty continuously monitors for threats and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes multiple AWS data sources, such as AWS CloudTrail event logs, Amazon VPC Flow Logs, and DNS logs.
Which service allows you to create access keys for someone needing to access AWS via the command line interface (CLI)?
IAM allows you to create users and generate access keys for users needing to access AWS via the CLI.
Which service allows you to record software configuration changes within your Amazon EC2 instances over time?
Config helps with recording compliance and configuration changes over time for your AWS resources.
Which service assists with compliance and auditing by offering a downloadable report that provides the status of passwords and MFA devices in your account?
IAM provides a downloadable credential report that lists all users in your account and the status of their various credentials, including passwords, access keys, and MFA devices.
Which service allows you to locate credit card numbers stored in Amazon S3?
Macie is a data privacy service that helps you uncover and protect your sensitive data, such as personally identifiable information (PII) like credit card numbers, passport numbers, social security numbers, and more.
How do you manage permissions for multiple users at once using AWS Identity and Access Management (IAM)?
An IAM group is a collection of IAM users. When you assign an IAM policy to a group, all users in the group are granted permissions specified by the policy.
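A small boto3 sketch of that pattern, with a placeholder group, user, and a sample AWS managed policy:

```python
import boto3

iam = boto3.client("iam")

# Create a group and attach one policy to it.
iam.create_group(GroupName="developers")
iam.attach_group_policy(
    GroupName="developers",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# Every user added to the group inherits the group's policies.
iam.add_user_to_group(GroupName="developers", UserName="alice")
```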
Which service protects your web application from cross-site scripting attacks?
WAF helps protect your web applications from common web attacks, like SQL injection or cross-site scripting.
Which AWS Trusted Advisor real-time guidance recommendations are available for AWS Basic Support and AWS Developer Support customers?
Basic and Developer Support customers get 50 service limit checks.
Basic and Developer Support customers get security checks for “Specific Ports Unrestricted” on Security Groups.
Basic and Developer Support customers get security checks on S3 Bucket Permissions.
Which service allows you to simplify billing by using a single payment method for all your accounts?
Organizations offers consolidated billing that provides 1 bill for all your AWS accounts. This also gives you access to volume discounts.
Which AWS service usage will always be free even after the 12-month free tier plan has expired?
One million Lambda requests are always free each month.
What is the easiest way for a customer on the AWS Basic Support plan to increase service limits?
The Basic Support plan allows 24/7 access to Customer Service via email and the ability to open service limit increase support cases.
Which types of issues are covered by AWS Support?
“How to” questions about AWS service and features
Problems detected by health checks
Which features of AWS reduce your total cost of ownership (TCO)?
Sharing servers with others allows you to save money.
Elastic computing allows you to trade capital expense for variable expense.
You pay only for the computing resources you use with no long-term commitments.
Which service allows you to select and deploy operating system and software patches automatically across large groups of Amazon EC2 instances?
Systems Manager allows you to automate operational tasks across your AWS resources.
Which service provides the easiest way to set up and govern a secure, multi-account AWS environment?
Control Tower allows you to centrally govern and enforce the best use of AWS services across your accounts.
Which cost management tool gives you the ability to be alerted when the actual or forecasted cost and usage exceed your desired threshold?
Budgets allow you to improve planning and cost control with flexible budgeting and forecasting. You can choose to be alerted when your budget threshold is exceeded.
Which tool allows you to compare your estimated service costs per Region?
The Pricing Calculator allows you to get an estimate for the cost of AWS services. Comparing service costs per Region is a common use case.
Who can assist with accelerating the migration of legacy contact center infrastructure to AWS?
Professional Services is a global team of experts that can help you realize your desired business outcomes with AWS.
The AWS Partner Network (APN) is a global community of partners that helps companies build successful solutions with AWS.
Which cost management tool allows you to view costs from the past 12 months, current detailed costs, and forecasts costs for up to 3 months?
Cost Explorer allows you to visualize, understand, and manage your AWS costs and usage over time.
Which service reduces the operational overhead of your IT organization?
Managed Services implements best practices to maintain your infrastructure and helps reduce your operational overhead and risk.
I assume it is your subscription where the VPCs are located, otherwise you can’t really discover the information you are looking for. On the EC2 server you could use AWS CLI or Powershell based scripts that query the IP information. Based on IP you can find out what instance uses the network interface, what security groups are tied to it and in which VPC the instance is hosted. Read more here…
When using AWS Lambda inside your VPC, your Lambda function will be allocated private IP addresses, and only private IP addresses, from your specified subnets. This means that you must ensure that your specified subnets have enough free address space for your Lambda function to scale up to. Each simultaneous invocation needs its own IP. Read more here…
When a Lambda “is in a VPC”, it really means that its attached Elastic Network Interface is the customer’s VPC and not the hidden VPC that AWS manages for Lambda.
The ENI is not related to the AWS Lambda management system that does the invocation (the data plane mentioned here). The AWS Step Function system can go ahead and invoke the Lambda through the API, and the network request for that can pass through the underlying VPC and host infrastructure.
Those Lambdas in turn can invoke other Lambdas directly through the API, or more commonly in a decoupled way, such as through Amazon SQS used as a trigger. Read more…
How do I invoke an AWS Lambda function programmatically?
The Invoke API invokes a Lambda function. You can invoke a function synchronously (and wait for the response) or asynchronously. To invoke a function asynchronously, set InvocationType to Event.
For synchronous invocation, details about the function response, including errors, are included in the response body and headers. For either invocation type, you can find more information in the execution log and trace.
When an error occurs, your function may be invoked multiple times. Retry behavior varies by error type, client, event source, and invocation type. For example, if you invoke a function asynchronously and it returns an error, Lambda executes the function up to two more times. For more information, see Retry Behavior.
For asynchronous invocation, Lambda adds events to a queue before sending them to your function. If your function does not have enough capacity to keep up with the queue, events may be lost. Occasionally, your function may receive the same event multiple times, even if no error occurs. To retain events that were not processed, configure your function with a dead-letter queue.
The status code in the API response doesn’t reflect function errors. Error codes are reserved for errors that prevent your function from executing, such as permissions errors, limit errors, or issues with your function’s code and configuration. For example, Lambda returns TooManyRequestsException if executing the function would cause you to exceed a concurrency limit at either the account level (ConcurrentInvocationLimitExceeded) or function level (ReservedFunctionConcurrentInvocationLimitExceeded).
For functions with a long timeout, your client might be disconnected during synchronous invocation while it waits for a response. Configure your HTTP client, SDK, firewall, proxy, or operating system to allow for long connections with timeout or keep-alive settings.
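A minimal boto3 sketch of both invocation types described above; the function name and payload are placeholders.

import json
import boto3

lam = boto3.client("lambda")

# Synchronous: wait for the function's response ("my-function" is a placeholder name).
resp = lam.invoke(
    FunctionName="my-function",
    InvocationType="RequestResponse",
    Payload=json.dumps({"key": "value"}),
)
print(resp["StatusCode"], resp["Payload"].read())   # 200 on success

# Asynchronous: Lambda queues the event and returns immediately (StatusCode 202).
lam.invoke(FunctionName="my-function", InvocationType="Event",
           Payload=json.dumps({"key": "value"}))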
The subnet mask determines how many bits of the network address are relevant (and thus indirectly the size of the network block in terms of how many host addresses are available) –
192.0.2.0, subnet mask 255.255.255.0 means that 192.0.2 is the significant portion of the network number, and that there are 8 bits left for host addresses (i.e. 192.0.2.0 thru 192.0.2.255)
192.0.2.0, subnet mask 255.255.255.128 means that the significant portion of the network number is the first three octets plus the most significant bit of the last octet, and that there are 7 bits left for host addresses (i.e. 192.0.2.0 thru 192.0.2.127)
When in doubt, envision the network number and subnet mask in base 2 (i.e. binary) and it will become much clearer. Read more here…
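Python's ipaddress module makes the binary view easy to explore; a small sketch using the examples above:

import ipaddress

net = ipaddress.ip_network("192.0.2.0/24")    # mask 255.255.255.0
print(net.netmask, net.num_addresses)          # 255.255.255.0 256

half = ipaddress.ip_network("192.0.2.0/25")   # mask 255.255.255.128
print(half.netmask, half.num_addresses)        # 255.255.255.128 128
print(bin(int(half.netmask)))                  # 25 leading 1-bits mark the network portion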
Separate out the roles needed to do each job. (Assuming this is a corporate environment)
Have a role for EC2, another for Networking, another for IAM.
Everyone should not be an admin. Everyone should not be able to add/remove IGWs and NAT gateways, alter security groups and NACLs, or set up peering connections.
Also, another thing… lock down full internet access. Limit to what is needed and that’s it. Read more here….
How can we setup AWS public-private subnet in VPC without NAT server?
Within a single VPC, the subnets’ route tables need to point to each other. This will already work without additional routes because VPC sets up the local target to point to the VPC subnet.
Security groups are not used here since they are attached to instances, and not networks.
The NAT EC2 instance (server), or AWS-provided NAT gateway, is necessary only if the private subnet's internal addresses need to make outbound connections. The NAT translates the private subnet's internal addresses to its own address in the public subnet, and the AWS VPC Internet Gateway translates that to an external IP address, which can then go out to the Internet. Read more here…
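As a sketch, the only route you add by hand is the public subnet's default route to the Internet Gateway; the private subnet's route table keeps just the implicit local route, so the two subnets can still reach each other without a NAT. The rtb-/igw- IDs below are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Default route for the PUBLIC subnet's route table only.
ec2.create_route(
    RouteTableId="rtb-0publicexample",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0example1",
)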
What are the applications (or workloads) that cannot be migrated on to cloud (AWS or Azure or GCP)?
A good example of workloads that are currently not in public clouds is the mobile and fixed core telecom networks of tier 1 service providers. This is despite the fact that these core networks are increasingly software based and have largely been decoupled from the hardware. There are a number of reasons for this; for example, public cloud providers such as Azure and AWS do not offer the guaranteed availability required by telecom networks, which need 99.999% availability (typically referred to as telecom grade).
The regulatory environment frequently restricts hosting of subscriber data outside of the operator's data centers or in another country, and key network functions such as lawful interception cannot contractually be hosted off-prem. Read more here…
How many CIDRs can we add to my own created VPC?
You can add up to 5 IPv4 CIDR blocks, or 1 IPv6 block per VPC. You can further segment the network by utilizing up to 200 subnets per VPC. Amazon VPC Limits. Read more …
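A minimal boto3 sketch of adding a secondary CIDR block to an existing VPC; the IDs and ranges are placeholders, and the new block must not overlap any existing one.

import boto3

ec2 = boto3.client("ec2")

# Attach a second IPv4 CIDR block to a (placeholder) VPC.
ec2.associate_vpc_cidr_block(VpcId="vpc-0example1", CidrBlock="10.1.0.0/16")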
Why can’t a subnet’s CIDR be changed once it has been assigned?
Sure it can, but you’ll need to coordinate with the neighbors. You can merge two /25’s into a single /24 quite effortlessly if you control the entire range it covers. In practice you’ll see many tiny allocations in public IPv4 space, like /29’s and even smaller. Those are all assigned to different people. If you want to do a big shuffle there, you have a lot of coordinating to do.. or accept the fallout from the breakage you cause. Read more…
Can one VPC talk to another VPC?
Yes, but a Virtual Private Cloud is usually built for the express purpose of being isolated from unwanted external traffic. I can think of several good reasons to encourage that sort of communication, so the idea is not without merit. Read more..
You are expected to have good knowledge of AWS services and how to leverage them to solve simple to complex problems.
As your question is related to the deployment Pod, you will probably be asked about deployment methods (A/B testing like blue-green deployment) as well as pipelining strategies. You might be asked during this interview to reason about a simple task and to code it (like parsing a log file). Also review the TCP/IP stack in-depth as well as the tools to troubleshoot it for the networking round. You will eventually have some Linux questions, the range of questions can vary from common CLI tools to Linux internals like signals / syscalls / file descriptors and so on.
Last but not least, the Leadership Principles: I can only suggest you prepare a story for each of them. You will quickly find which LPs they are looking for and will be able to give the right signal to your interviewer.
Finally, remember that there's a debrief after the (usually 5) stages of your on-site interview, and more senior and convincing interviewers tend to defend their vote, so don't screw up with them.
Be natural, focus on the question details and ask for confirmation, be cool but not too much. At the end of the day, remember that your job will be to understand customer issues and provide a solution, so treat your interviewers as if they were customers and they will see a successful CSE in you, be reassured and give you the job.
Expect questions on CloudFormation, Terraform, AWS EC2/RDS, and stack-related questions.
It also depends on the support team you are being hired for. Networking or compute teams (Ec2) have different interview patterns vs database or big data support.
In any case, the basics of OS and networking are critical to the interview. If you have a phone screen, we will be looking for basic to semi-advanced skills in these and in your speciality. For example, if you mention Oracle in your resume and you are interviewing for the database team, expect a flurry of those questions.
Another important aspect is the Amazon Leadership Principles. Half of your interview is based on LPs. If you do not have scenarios that demonstrate the LPs, you cannot expect to work here even if your technical skills are above average (having extraordinary skills is a different thing).
The overall interview itself will have 1 phone screen if you are interviewing in the US and 1–2 if outside the US. The onsite loop will be 4 rounds, 2 of which are technical (again divided into OS, networking, and the specific speciality of the team you are interviewing for) and 2 of which are leadership principles, where we test your soft skills and management skills as they are very important in this job. You need to have a strong viewpoint, disagree if it seems valid to do so, show empathy, and be a team player while showing the ability to pull off things individually as well. These skills will be critical for cracking LP interviews.
You will NOT be asked to code or write queries as it's not part of the job, so you can concentrate on the theoretical part of the subject and also your resume. We will grill you on topics mentioned on your resume to start with.
“Monolithic architecture is something built from a single piece of material, historically rock. The term monolith is normally used for an object made from a single large piece of material.” – Non-Technical Definition. “A monolithic application has a single code base with multiple modules.
A large monolithic code base (often spaghetti code) puts an immense cognitive load on the developer. As a result, development velocity is poor. Granular scaling (i.e., scaling only part of the application) is not possible. Polyglot programming or polyglot databases are challenging.
Drawbacks of Monolithic Architecture
This simple approach has a limitation in size and complexity. The application becomes too large and complex to fully understand and to change quickly and correctly. The size of the application can slow down start-up time, and you must redeploy the entire application on each update.
Sticky sessions, also known as session affinity, allow you to route a site user to the particular web server that is managing that individual user's session. The session's validity can be determined by a number of methods, including client-side cookies or configurable duration parameters that can be set at the load balancer which routes requests to the web servers.
Some advantages of utilizing sticky sessions are that it is cost effective, because you are storing sessions on the same web servers running your applications, and that retrieval of those sessions is generally fast because it eliminates network latency. A drawback of storing sessions on an individual node is that in the event of a failure, you are likely to lose the sessions that were resident on the failed node. In addition, in the event the number of your web servers changes, for example in a scale-up scenario, it's possible that traffic may be spread unequally across the web servers, as active sessions may exist on particular servers. If not mitigated properly, this can hinder the scalability of your applications. Read more here…
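On AWS, for an Application Load Balancer, stickiness is a target group attribute. A minimal boto3 sketch, with a placeholder target group ARN:

import boto3

elbv2 = boto3.client("elbv2")

# Enable duration-based (load balancer cookie) stickiness on a placeholder target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef",
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
    ],
)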
After you terminate an instance, it remains visible in the console for a short while, and then the entry is automatically deleted. You cannot delete the terminated instance entry yourself. After an instance is terminated, resources such as tags and volumes are gradually disassociated from the instance and therefore may no longer be visible on the terminated instance after a short while.
When an instance terminates, the data on any instance store volumes associated with that instance is deleted.
By default, Amazon EBS root device volumes are automatically deleted when the instance terminates. However, by default, any additional EBS volumes that you attach at launch, or any EBS volumes that you attach to an existing instance persist even after the instance terminates. This behavior is controlled by the volume’s DeleteOnTermination attribute, which you can modify
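A minimal boto3 sketch of flipping that attribute on a running instance so an attached volume survives termination; the instance ID and device name are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Make the volume attached at /dev/sdf persist after the instance terminates.
ec2.modify_instance_attribute(
    InstanceId="i-0example12345",
    BlockDeviceMappings=[
        {"DeviceName": "/dev/sdf", "Ebs": {"DeleteOnTermination": False}}
    ],
)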
When you first launch an instance with gp2 volumes attached, you get an initial burst credit balance allowing up to 30 minutes at 3,000 IOPS.
After the first 30 minutes, your volume will accrue credits as follows (taken directly from AWS documentation):
Within the General Purpose (SSD) implementation is a Token Bucket model that works as follows
Each token represents an “I/O credit” that pays for one read or one write.
A bucket is associated with each General Purpose (SSD) volume, and can hold up to 5.4 million tokens.
Tokens accumulate at a rate of 3 per configured GB per second, up to the capacity of the bucket.
Tokens can be spent at up to 3000 per second per volume.
The baseline performance of the volume is equal to the rate at which tokens are accumulated: 3 IOPS per configured GB.
In addition to this, gp2 volumes provide a baseline performance of 3 IOPS per GB, up to 1 TB (3,000 IOPS). Volumes larger than 1 TB no longer work on the credit system, as they already provide a baseline of at least 3,000 IOPS. gp2 volumes have a cap of 10,000 IOPS regardless of volume size (so the IOPS max out for volumes larger than about 3.3 TB).
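To make the numbers concrete, a quick back-of-the-envelope calculation for a hypothetical 100 GB gp2 volume, using the figures above:

# Burst math for a hypothetical 100 GB gp2 volume.
size_gb = 100
baseline_iops = 3 * size_gb                  # 300 IOPS baseline
bucket = 5_400_000                           # credit bucket, full at launch
burst_iops = 3000
drain_rate = burst_iops - baseline_iops      # net credits spent per second at full burst
print(bucket / drain_rate / 60)              # ~33 minutes of sustained 3,000-IOPS burst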
Elastic IP addresses are free while they are associated with a running instance, so feel free to use one! You are charged while an Elastic IP is not in use (for example, while the instance is stopped), but the benefit is that the IP stays allocated to your account instead of being lost like an ordinary public IP. Once you start the instance you just re-associate the address and you have your old IP again.
Here are the charges associated with the use of Elastic IP addresses:
* $0.00 per Elastic IP address while in use
* $0.01 per non-attached Elastic IP address per complete hour
* $0.00 per Elastic IP address remap – first 100 remaps / month
* $0.10 per Elastic IP address remap – additional remaps / month over 100
If you require any additional information about pricing please reference the link below
The short answer to reducing your AWS EC2 costs – turn off your instances when you don’t need them.
Your AWS bill is just like any other utility bill, you get charged for however much you used that month. Don’t make the mistake of leaving your instances on 24/7 if you’re only using them during certain days and times (ex. Monday – Friday, 9 to 5).
To automatically start and stop your instances, AWS offers an “EC2 scheduler” solution. A better option would be a cloud cost management tool that not only stops and starts your instances automatically, but also tracks your usage and makes sizing recommendations to optimize your cloud costs and maximize your time and savings.
You could potentially save money using Reserved Instances. But, in non-production environments such as dev, test, QA, and training, Reserved Instances are not your best bet. Why is this the case? These environments are less predictable; you may not know how many instances you need and when you will need them, so it’s better to not waste spend on these usage charges. Instead, schedule such instances (preferably using ParkMyCloud). Scheduling instances to be only up 12 hours per day on weekdays will save you 65% – better than all but the most restrictive 3-year RIs!
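The rough math behind that ~65% figure:

# Savings from running only 12 hours/day on weekdays instead of 24/7.
hours_on = 12 * 5                 # 60 running hours per week
hours_in_week = 24 * 7            # 168 hours in a week
savings = 1 - hours_on / hours_in_week
print(f"{savings:.0%}")           # ~64%, i.e. roughly the quoted 65%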
Well, AWS is a web services provider which offers a set of services related to compute, storage, databases, networking and more to help businesses scale and grow.
All your concerns are related to the AWS EC2 instance, so let me start with an instance.
Instance:
An EC2 instance is similar to a server where you can host your websites or applications to make them available globally
It is highly scalable and works on the pay-as-you-go model
You can increase or decrease the capacity of these instances as per the requirement
AMI:
AMI provides the information required to launch the EC2 instance
AMI includes the pre-configured templates of the operating system that runs on AWS
Users can launch multiple instances with the same configuration from a single AMI
Snapshot:
Snapshots are the incremental backups for the Amazon EBS
Data in EBS is stored in S3 by taking point-in-time snapshots
Only the data unique to a snapshot is removed when that snapshot is deleted (see the sketch below)
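A minimal boto3 sketch of taking one such point-in-time snapshot; the volume ID is a placeholder.

import boto3

ec2 = boto3.client("ec2")

# Each call captures an incremental, point-in-time snapshot of the (placeholder) volume.
snap = ec2.create_snapshot(
    VolumeId="vol-0example12345",
    Description="nightly backup",
)
print(snap["SnapshotId"], snap["State"])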
They are definitely all chalk and cheese to one another.
A VPN (Virtual Private Network) is essentially an encrypted “channel” connecting two networks, or a machine to a network, generally over the public internet.
A VPS (Virtual Private Server) is a rented virtual machine running on someone else’s hardware. AWS EC2 can be thought of as a VPS, but the term is usually used to describe low-cost products offered by lots of other hosting companies.
A VPC (Virtual Private Cloud) is a virtual network in AWS (Amazon Web Services). It can be divided into private and public subnets, have custom routing rules, have internal connections to other VPCs, etc. EC2 instances and other resources are placed in VPCs similarly to how physical data centers have operated for a very long time.
An Elastic IP address is basically a static (IPv4) IP address that you can allocate to your resources.
Now, if you associate the Elastic IP with a running resource, you are not charged anything. On the other hand, if you allocate an Elastic IP but do not associate it with a resource (or the resource is not running), then you are charged a small amount (around $0.005 per hour, if I remember correctly)
Additional info about these:
You are limited to 5 Elastic IP addresses per region. If you require more than that, you can contact AWS support with a request for additional addresses. You need to have a good reason in order to be approved because IPv4 addresses are becoming a scarce resource.
In general, you should be good without Elastic IPs for most of the use-cases (as every EC2 instance has its own public IP, and you can use load balancers, as well as map most of the resources via Route 53).
One use case I've seen is a client using an Elastic IP to make it easier to access a specific EC2 instance via RDP, and to do deployments through Visual Studio: he targets the Elastic IP and thus does not have to watch for changes in the public IP (in case of stopping or rebooting).
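A minimal boto3 sketch of allocating an Elastic IP and associating it with an instance; the instance ID is a placeholder.

import boto3

ec2 = boto3.client("ec2")

# Allocate a VPC Elastic IP and attach it to a (placeholder) instance.
alloc = ec2.allocate_address(Domain="vpc")
ec2.associate_address(InstanceId="i-0example12345",
                      AllocationId=alloc["AllocationId"])
print(alloc["PublicIp"])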
At this time, AWS Transit Gateway does not support inter region attachments. The transit gateway and the attached VPCs must be in the same region. VPC peering supports inter region peering.
An EC2 instance is a server instance, whilst a WorkSpace is a Windows desktop instance.
Both Windows Server and Windows workstation editions have desktops. Windows Server Core does not (and AWS doesn't have an AMI for Windows Server Core that I could find).
It is possible to SSH into a Windows instance (on port 22), but SSH is not enabled by default, and you would not see a desktop when using it.
If you are seeing a desktop, I believe you’re “RDPing” to the Windows instance. This is done with the RDP protocol on port 3389.
Two different protocols and two different ports.
WorkSpaces doesn't allow terminal or SSH services by default; you need to use the WorkSpaces client. You can still enable RDP and/or SSH, but this is not recommended.
WorkSpaces is a managed desktop service. AWS takes care of pre-built AMIs, software licenses, joining to a domain, scaling, etc.
What is Amazon EC2? Scalable, pay-as-you-go compute capacity in the cloud. Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.
What is Amazon WorkSpaces? Easily provision cloud-based desktops that allow end users to access applications and resources. With a few clicks in the AWS Management Console, customers can provision a high-quality desktop experience for any number of users at a cost that is highly competitive with traditional desktops and half the cost of most virtual desktop infrastructure (VDI) solutions. End users can access the documents, applications and resources they need with the device of their choice, including laptops, iPad, Kindle Fire, or Android tablets.
Elastic – Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds or even thousands of server instances simultaneously.
Completely Controlled – You have complete control of your instances. You have root access to each one, and you can interact with them as you would any machine.
Flexible – You have the choice of multiple instance types, operating systems, and software packages. Amazon EC2 allows you to select a configuration of memory, CPU, instance storage, and the boot partition size that is optimal for your choice of operating system and application.
On the other hand, Amazon WorkSpaces provides the following key features:
Support Multiple Devices – Users can access their Amazon WorkSpaces using their choice of device, such as a laptop computer (Mac OS or Windows), iPad, Kindle Fire, or Android tablet.
Keep Your Data Secure and Available – Amazon WorkSpaces provides each user with access to persistent storage in the AWS cloud. When users access their desktops using Amazon WorkSpaces, you control whether your corporate data is stored on multiple client devices, helping you keep your data secure.
Choose the Hardware and Software you need – Amazon WorkSpaces offers a choice of bundles providing different amounts of CPU, memory, and storage so you can match your Amazon WorkSpaces to your requirements. Amazon WorkSpaces offers preinstalled applications (including Microsoft Office) or you can bring your own licensed software.
Amazon EBS vs Amazon EFS
An Amazon EBS volume stores data in a single Availability Zone. To attach an Amazon EC2 instance to an EBS volume, both the Amazon EC2 instance and the EBS volume must reside within the same Availability Zone.
Amazon EFS is a regional service. It stores data in and across multiple Availability Zones. The duplicate storage enables you to access data concurrently from all the Availability Zones in the Region where a file system is located. Additionally, on-premises servers can access Amazon EFS using AWS Direct Connect.
EC2
Provides secure, resizable compute capacity in the cloud. It makes web-scale cloud computing easier for developers. EC2
EC2 Spot
Run fault-tolerant workloads for up to 90% off. EC2Spot
EC2 Autoscaling
Automatically add or remove compute capacity to meet changes in demand. EC2_AutoScaling
Lightsail
Designed to be the easiest way to launch & manage a virtual private server with AWS. An easy-to-use cloud platform that offers everything you need to build an application or website. Lightsail
Batch
Enables developers, scientists, & engineers to easily & efficiently run hundreds of thousands of batch computing jobs on AWS. Fully managed batch processing at any scale. Batch
Containers
Elastic Container Service (ECS)
Highly secure, reliable, & scalable way to run containers. ECS
Lambda
Run code without thinking about servers. Pay only for the compute time you consume. Lambda
Edge and hybrid
Outposts
Run AWS infrastructure & services on premises for a truly consistent hybrid experience. Outposts
Snow Family
Collect and process data in rugged or disconnected edge environments. SnowFamily
Wavelength
Deliver ultra-low-latency applications for 5G devices. Wavelength
VMware Cloud on AWS
Innovate faster, rapidly transition to the cloud, & work securely from any location. VMware_On_AWS
Local Zones
Run latency sensitive applications closer to end-users. LocalZones
Networking and Content Delivery
Use cases
Functionality
Service
Description
Build a cloud network
Define and provision a logically isolated network for your AWS resources
VPC
VPC lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. VPC
Connect VPCs and on-premises networks through a central hub
Transit Gateway
Transit Gateway connects VPCs & on-premises networks through a central hub. This simplifies your network & puts an end to complex peering relationships. TransitGateway
Provide private connectivity between VPCs, services, and on-premises applications
PrivateLink
PrivateLink provides private connectivity between VPCs & services hosted on AWS or on-premises, securely on the Amazon network. PrivateLink
Route users to Internet applications with a managed DNS service
Route 53
Route 53 is a highly available & scalable cloud DNS web service. Route53
Scale your network design
Automatically distribute traffic across a pool of resources, such as instances, containers, IP addresses, and Lambda functions
Elastic Load Balancing
Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as EC2 instances, containers, IP addresses, & Lambda functions. ElasticLoadBalancing
Direct traffic through the AWS Global network to improve global application performance
Global Accelerator
Global Accelerator is a networking service that sends users' traffic through AWS's global network infrastructure, improving internet user performance by up to 60%. GlobalAccelerator
Secure your network traffic
Safeguard applications running on AWS against DDoS attacks
Shield
Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. Shield
Protect your web applications from common web exploits
WAF
WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. WAF
Centrally configure and manage firewall rules
Firewall Manager
Firewall Manager is a security management service which allows you to centrally configure & manage firewall rules across accounts & apps in an AWS Organization. FirewallManager
Build a hybrid IT network
Connect your users to AWS or on-premises resources using a Virtual Private Network
(VPN) – Client
VPN solutions establish secure connections between on-premises networks, remote offices, client devices, & the AWS global network. VPN
Create an encrypted connection between your network and your Amazon VPCs or AWS Transit Gateways
(VPN) – Site to Site
Site-to-Site VPN creates a secure connection between data center or branch office & AWS cloud resources. site_to_site
Establish a private, dedicated connection between AWS and your datacenter, office, or colocation environment
Direct Connect
Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. DirectConnect
Content delivery networks
Securely deliver data, videos, applications, and APIs to customers globally with low latency, and high transfer speeds
CloudFront
CloudFront expedites distribution of static & dynamic web content. CloudFront
Build a network for microservices architectures
Provide application-level networking for containers and microservices
App Mesh
App Mesh makes it easy to monitor & control microservices running on AWS. AppMesh
Create, maintain, and secure APIs at any scale
API Gateway
API Gateway allows the user to create, publish, & secure REST and WebSocket APIs at any scale. APIGateway
Discover AWS services connected to your applications
Cloud Map
Cloud Map is a cloud resource discovery service: you define custom names for your application resources & it keeps track of their locations as they change. CloudMap
Storage
S3
S3 is the storehouse for the internet, i.e. object storage built to store & retrieve any amount of data from anywhere. S3
AWS Backup
AWS Backup is a fully managed backup service that makes it easier to centralize & automate the backup of data across AWS services in the cloud. AWS_Backup
Amazon EBS
Amazon Elastic Block Store is a web service that provides block-level storage volumes. EBS
Amazon EFS Storage
EFS offers fully managed file storage for use with the user's Amazon EC2 instances. EFS
Amazon FSx
FSx supplies fully managed third-party file systems with native compatibility & feature sets for workloads. It's available as FSx for Windows File Server (fully managed file storage built on Windows Server) & FSx for Lustre (fully managed high-performance file system integrated with S3). FSx_WindowsFSx_Lustre
AWS Storage Gateway
Storage Gateway is a service which connects an on-premises software appliance with cloud-based storage. Storage_Gateway
AWS DataSync
DataSync makes it simple & fast to move large amounts of data online between on-premises storage & S3, EFS, or FSx for Windows File Server. DataSync
AWS Transfer Family
The Transfer Family provides fully managed support for file transfers directly into & out of S3. Transfer_Family
AWS Snow Family
Highly-secure, portable devices to collect & process data at the edge, and migrate data into and out of AWS. Snow_Family
Classification:
Object storage: S3
File storage: EFS, FSx for Windows File Server & FSx for Lustre
Block storage: EBS
Backup: AWS Backup
Data transfer: Storage Gateway (3 types: Tape, File, Volume); Transfer Family (SFTP, FTPS, FTP)
Edge computing and storage: Snow Family (Snowcone, Snowball, Snowmobile)
Databases
Database type
Use cases
Service
Description
Relational
Traditional applications, ERP, CRM, e-commerce
Aurora, RDS, Redshift
RDS is a web service that makes it easier to set up, control, and scale a relational database in the cloud. AuroraRDSRedshift
Key-value
High-traffic web apps, e-commerce systems, gaming applications
DynamoDB
DynamoDB is a fully administered NoSQL database service that offers quick and reliable performance with integrated scalability. DynamoDB
In-memory
Caching, session management
ElastiCache
ElastiCache helps in setting up, managing, and scaling in-memory cache environments. MemcachedRedis
Document
Content management, catalogs, user profiles
DocumentDB
DocumentDB (with MongoDB compatibility) is a quick, dependable, and fully-managed database service that makes it easy for you to set up, operate, and scale MongoDB-compatible databases.DocumentDB
Wide column
High scale industrial apps for equipment maintenance, fleet management, and route optimization
Keyspaces (for Apache Cassandra)
Keyspaces is a scalable, highly available, and managed Apache Cassandra–compatible database service. Keyspaces
Graph
Fraud detection, social networking, recommendation engines
Neptune
Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. Neptune
Time series
IoT applications, DevOps, industrial telemetry
Timestream
Timestream is a fast, scalable, and serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day. Timestream
Ledger
Systems of record, supply chain, registrations, banking transactions
Quantum Ledger Database (QLDB)
QLDB is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log owned by a central trusted authority. QLDB
Developer Tools
Service
Description
Cloud9
Cloud9 is a cloud-based IDE that enables the user to write, run, and debug code. Cloud9
CodeArtifact
CodeArtifact is a fully managed artifact repository service that makes it easy for organizations of any size to securely store, publish, & share software packages used in their software development process. CodeArtifact
CodeBuild
CodeBuild is a fully managed service that compiles source code, runs unit tests, & generates artifacts ready to deploy. CodeBuild
CodeGuru
CodeGuru is a developer tool powered by machine learning that provides intelligent recommendations for improving code quality & identifying an application’s most expensive lines of code. CodeGuru
Cloud Development Kit
Cloud Development Kit (AWS CDK) is an open source software development framework to define cloud application resources using familiar programming languages. CDK
CodeCommit
CodeCommit is a version control service that enables the user to privately store & manage Git repositories in the AWS cloud. CodeCommit
CodeDeploy
CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as EC2, Fargate, Lambda, & on-premises servers. CodeDeploy
CodePipeline
CodePipeline is a fully managed continuous delivery service that helps automate release pipelines for fast & reliable app & infra updates. CodePipeline
CodeStar
CodeStar enables you to quickly develop, build, & deploy applications on AWS. CodeStar
CLI
AWS CLI is a unified tool to manage AWS services & control multiple services from the command line & automate them through scripts. CLI
X-Ray
X-Ray helps developers analyze & debug production, distributed applications, such as those built using a microservices architecture. X-Ray
CDK uses the familiarity & expressive power of programming languages for modeling apps. CDK
Corretto
Corretto is a no-cost, multiplatform, production-ready distribution of the OpenJDK. Corretto
Crypto Tools
Cryptography is hard to do safely & correctly. The AWS Crypto Tools libraries are designed to help everyone do cryptography right, even without special expertise. Crypto Tools
Serverless Application Model (SAM)
SAM is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, & event source mappings. SAM
Tools for developing and managing applications on AWS
Security, Identity, & Compliance
Category
Use cases
Service
Description
Identity & access management
Securely manage access to services and resources
Identity & Access Management (IAM)
IAM is a web service for safely controlling access to AWS services. IAM
Securely manage access to services and resources
Single Sign-On
SSO helps you simplify & centrally manage SSO access to AWS accounts & business applications. SSO
Identity management for apps
Cognito
Cognito lets you add user sign-up, sign-in, & access control to web & mobile apps quickly and easily. Cognito
Managed Microsoft Active Directory
Directory Service
AWS Managed Microsoft Active Directory (AD) enables your directory-aware workloads & AWS resources to use managed Active Directory (AD) in AWS. DirectoryService
Simple, secure service to share AWS resources
Resource Access Manager
Resource Access Manager (RAM) is a service that enables you to easily & securely share AWS resources with any AWS account or within AWS Organization. RAM
Central governance and management across AWS accounts
Organizations
Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS. Orgs
Detection
Unified security and compliance center
Security Hub
Security Hub gives a comprehensive view of security alerts & security posture across AWS accounts. SecurityHub
Managed threat detection service
GuardDuty
GuardDuty is a threat detection service that continuously monitors for malicious activity & unauthorized behavior to protect AWS accounts, workloads, & data stored in S3. GuardDuty
Analyze application security
Inspector
Inspector is a security vulnerability assessment service that improves the security & compliance of your AWS resources. Inspector
Record and evaluate configurations of your AWS resources
Config
Config is a service that enables you to assess, audit, & evaluate the configurations of AWS resources. Config
Track user activity and API usage
CloudTrail
CloudTrail is a service that enables governance, compliance, operational auditing, & risk auditing of AWS account. CloudTrail
Security management for IoT devices
IoT Device Defender
IoT Device Defender is a fully managed service that helps secure your fleet of IoT devices. IoTDD
Infrastructure protection
DDoS protection
Shield
Shield is a managed DDoS protection service that safeguards apps running on AWS. It provides always-on detection & automatic inline mitigations that minimize application downtime & latency. Shield
Filter malicious web traffic
Web Application Firewall (WAF)
WAF is a web application firewall that helps protect web apps or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. WAF
Central management of firewall rules
Firewall Manager
Firewall Manager eases the user's AWS WAF administration & maintenance tasks across multiple accounts & resources. FirewallManager
Data protection
Discover and protect your sensitive data at scale
Macie
Macie is a fully managed data (security & privacy) service that uses ML & pattern matching to discover & protect sensitive data. Macie
Key storage and management
Key Management Service (KMS)
KMS makes it easy for you to create & manage cryptographic keys & control their use across a wide range of AWS services & in your applications. KMS
Hardware based key storage for regulatory compliance
CloudHSM
CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate & use your own encryption keys. CloudHSM
Provision, manage, and deploy public and private SSL/TLS certificates
Certificate Manager
Certificate Manager is a service that lets you easily provision, manage, & deploy public and private SSL/TLS certs for use with AWS services & internal connected resources. ACM
Rotate, manage, and retrieve secrets
Secrets Manager
Secrets Manager assists the user to safely encrypt, store, & retrieve credentials for databases & other services. SecretsManager
Incident response
Investigate potential security issues
Detective
Detective makes it easy to analyze, investigate, & quickly identify the root cause of potential security issues or suspicious activities. Detective
CloudEndure Disaster Recovery
Provides scalable, cost-effective business continuity for physical, virtual, & cloud servers. CloudEndure
Compliance
No cost, self-service portal for on-demand access to AWS’ compliance reports
Artifact
Artifact is a web service that enables the user to download AWS security & compliance records. Artifact
Data Lakes & Analytics
Category
Use cases
Service
Description
Analytics
Interactive analytics
Athena
Athena is an interactive query service that makes it easy to analyze data in S3 using standard SQL. Athena
Big data processing
EMR
EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Apache Spark, Hive, HBase, Flink, Hudi, & Presto. EMR
Data warehousing
Redshift
The most popular & fastest cloud data warehouse. Redshift
Real-time analytics
Kinesis
Kinesis makes it easy to collect, process, & analyze real-time, streaming data so one can get timely insights. Kinesis
Operational analytics
Elasticsearch Service
Elasticsearch Service is a fully managed service that makes it easy to deploy, secure, & run Elasticsearch cost effectively at scale. ES
Dashboards & visualizations
Quicksight
QuickSight is a fast, cloud-powered business intelligence service that makes it easy to deliver insights to everyone in your organization. QuickSight
Data movement
Real-time data movement
1) Amazon Managed Streaming for Apache Kafka (MSK) 2) Kinesis Data Streams 3) Kinesis Data Firehose 4) Kinesis Data Analytics 5) Kinesis Video Streams 6) Glue
MSK is a fully managed service that makes it easy to build & run applications that use Apache Kafka to process streaming data. MSKKDSKDFKDAKVSGlue
Data lake
Object storage
1) S3 2) Lake Formation
Lake Formation is a service that makes it easy to set up a secure data lake in days. A data lake is a centralized, curated, & secured repository that stores all data, both in its original form & prepared for analysis. S3LakeFormation
Backup & archive
1) S3 Glacier 2) Backup
S3 Glacier & S3 Glacier Deep Archive are a secure, durable, & extremely low-cost S3 cloud storage classes for data archiving & long-term backup. S3Glacier
Data catalog
1) Glue 2) Lake Formation
Refer to the entries above.
Third-party data
Data Exchange
Data Exchange makes it easy to find, subscribe to, & use third-party data in the cloud. DataExchange
Predictive analytics & machine learning
Frameworks & interfaces
Deep Learning AMIs
Deep Learning AMIs provide machine learning practitioners & researchers with the infrastructure & tools to accelerate deep learning in the cloud, at any scale. DeepLearningAMIs
Platform services
SageMaker
SageMaker is a fully managed service that provides every developer & data scientist with the ability to build, train, & deploy machine learning (ML) models quickly. SageMaker
Containers
Use cases
Service
Description
Store, encrypt, and manage container images
ECR
Refer compute section
Run containerized applications or build microservices
ECS
Refer compute section
Manage containers with Kubernetes
EKS
Refer compute section
Run containers without managing servers
Fargate
Fargate is a serverless compute engine for containers that works with both ECS & EKS. Fargate
Run containers with server-level control
EC2
Refer compute section
Containerize and migrate existing applications
App2Container
App2Container (A2C) is a command-line tool for modernizing .NET & Java applications into containerized applications. App2Container
Quickly launch and manage containerized applications
Copilot
Copilot is a command line interface (CLI) that enables customers to quickly launch & easily manage containerized applications on AWS. Copilot
Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance & reduces latency.
Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL & PostgreSQL-compatible editions), where the database will automatically start up, shut down, & scale capacity up or down based on your application’s needs.
RDS Proxy is a fully managed, highly available database proxy for RDS that makes applications more scalable, resilient to database failures, & more secure.
AppSync is a fully managed service that makes it easy to develop GraphQL APIs by handling the heavy lifting of securely connecting to data sources like DynamoDB & Lambda.
EventBridge is a serverless event bus that makes it easy to connect applications together using data from apps, integrated SaaS apps, & AWS services.
Step Functions is a serverless function orchestrator that makes it easy to sequence Lambda functions & multiple AWS services into business-critical applications.
Control Tower
The easiest way to set up and govern a new, secure multi-account AWS environment. ControlTower
Organizations
Organizations helps you centrally govern your environment as you grow & scale your workloads on AWS. Organizations
Well-Architected Tool
Well-Architected Tool helps review the state of workloads & compares them to the latest AWS architectural best practices. WATool
Budgets
Budgets allows you to set custom budgets to track cost & usage, from the simplest to the most complex use cases. Budgets
License Manager
License Manager makes it easier to manage software licenses from software vendors such as Microsoft, SAP, Oracle, & IBM across AWS & on-premises environments. LicenseManager
Provision
CloudFormation
CloudFormation enables the user to design & provision AWS infrastructure deployments predictably & repeatedly. CloudFormation
Service Catalog
Service Catalog allows organizations to create & manage catalogs of IT services that are approved for use on AWS. ServiceCatalog
OpsWorks
OpsWorks presents a simple and flexible way to create and maintain stacks and applications. OpsWorks
Marketplace
Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, & deploy software that runs on AWS. Marketplace
Operate
CloudWatch
CloudWatch offers a reliable, scalable, & flexible monitoring solution that is easy to get started with. CloudWatch
CloudTrail
CloudTrail is a service that enables governance, compliance, operational auditing, & risk auditing of AWS account. CloudTrail
Read For Me launched at the 2021 AWS re:Invent Builders’ Fair in Las Vegas. It is a web application which helps the visually impaired “hear” documents. With the help of AI services such as Amazon Textract, Amazon Comprehend, Amazon Translate and Amazon Polly, utilizing an event-driven architecture and serverless technology, users upload a picture of a document, or anything with text, and within a few seconds “hear” that document in their chosen language.
AWS read for me
2- Delivering code and architectures through AWS Proton and Git
Infrastructure operators are looking for ways to centrally define and manage the architecture of their services, while developers need to find a way to quickly and safely deploy their code. In this session, learn how to use AWS Proton to define architectural templates and make them available to development teams in a collaborative manner. Also, learn how to enable development teams to customize their templates so that they fit the needs of their services.
3- Accelerate front-end web and mobile development with AWS Amplify
User-facing web and mobile applications are the primary touchpoint between organizations and their customers. To meet the ever-rising bar for customer experience, developers must deliver high-quality apps with both foundational and differentiating features. AWS Amplify helps front-end web and mobile developers build faster front to back. In this session, review Amplify’s core capabilities like authentication, data, and file storage and explore new capabilities, such as Amplify Geo and extensibility features for easier app customization with AWS services and better integration with existing deployment pipelines. Also learn how customers have been successful using Amplify to innovate in their businesses.
4- Train ML models at scale with Amazon SageMaker, featuring Aurora
Today, AWS customers use Amazon SageMaker to train and tune millions of machine learning (ML) models with billions of parameters. In this session, learn about advanced SageMaker capabilities that can help you manage large-scale model training and tuning, such as distributed training, automatic model tuning, optimizations for deep learning algorithms, debugging, profiling, and model checkpointing, so that even the largest ML models can be trained in record time for the lowest cost. Then, hear from Aurora, a self-driving vehicle technology company, on how they use SageMaker training capabilities to train large perception models for autonomous driving using massive amounts of images, video, and 3D point cloud data.
AWS RE:INVENT 2020 – LATEST PRODUCTS AND SERVICES ANNOUNCED:
Amazon Elasticsearch Service is uniquely positioned to handle log analytics workloads. With a multitude of open-source and AWS-native service options, users can assemble effective log data ingestion pipelines and couple these with Amazon Elasticsearch Service to build a robust, cost-effective log analytics solution. This session reviews patterns and frameworks leveraged by companies such as Capital One to build an end-to-end log analytics solution using Amazon Elasticsearch Service.
Many companies in regulated industries have achieved compliance requirements using AWS Config. They also need a record of the incidents generated by AWS Config in tools such as ServiceNow for audits and remediation. In this session, learn how you can achieve compliance as code using AWS Config. Through the creation of a noncompliant Amazon EC2 machine, this demo shows how AWS Config triggers an incident into a governance, risk, and compliance system for audit recording and remediation. The session also covers best practices for how to automate the setup process with AWS CloudFormation to support many teams.
3- Cost-optimize your enterprise workloads with Amazon EBS – Compute
Recent times have underscored the need to enable agility while maintaining the lowest total cost of ownership (TCO). In this session, learn about the latest volume types that further optimize your performance and cost, while enabling you to run newer applications on AWS with high availability. Dive deep into the latest AWS volume launches and cost-optimization strategies for workloads such as databases, virtual desktop infrastructure, and low-latency interactive applications.
Location data is a vital ingredient in today’s applications, enabling use cases from asset tracking to geomarketing. Now, developers can use the new Amazon Location Service to add maps, tracking, places, geocoding, and geofences to applications, easily, securely, and affordably. Join this session to see how to get started with the service and integrate high-quality location data from geospatial data providers Esri and HERE. Learn how to move from experimentation to production quickly with location capabilities. This session can help developers who require simple location data and those building sophisticated asset tracking, customer engagement, fleet management, and delivery applications.
In this session, learn how Amazon Connect Tasks makes it easy for you to prioritize, assign, and track all the tasks that agents need to complete, including work in external applications needed to resolve customer issues (such as emails, cases, and social posts). Tasks provides a single place for agents to be assigned calls, chats, and tasks, ensuring agents are focused on the highest-priority work. Also, learn how you can also use Tasks with Amazon Connect’s workflow capabilities to automate task-related actions that don’t require agent interaction. Come see how you can use Amazon Connect Tasks to increase customer satisfaction while improving agent productivity.
New agent-assist capabilities from Amazon Connect Wisdom make it easier and faster for agents to find the information they need to solve customer issues in real time. In this session, see how agents can use simple ML-powered search to find information stored across knowledge bases, wikis, and FAQs, like Salesforce and ServiceNow. Join the session to hear Traeger Pellet Grills discuss how it’s using these new features, along with Contact Lens for Amazon Connect, to deliver real-time recommendations to agents based on issues automatically detected during calls.
Grafana is a popular, open-source data visualization tool that enables you to centrally query and analyze observability data across multiple data sources. Learn how the new Amazon Managed Service for Grafana, announced with Grafana’s parent company Grafana Labs, solves common observability challenges. With the new fully managed service, you can monitor, analyze, and alarm on metrics, logs, and traces while offloading the operational management of security patching, upgrading, and resource scaling to AWS. This session also covers new Grafana capabilities such as advanced security features and native AWS service integrations to simplify configuration and onboarding of data sources.
Prometheus is a popular open-source monitoring and alerting solution optimized for container environments. Customers love Prometheus for its active open-source community and flexible query language, using it to monitor containers across AWS and on-premises environments. Amazon Managed Service for Prometheus is a fully managed Prometheus-compatible monitoring service. In this session, learn how you can use the same open-source Prometheus data model, existing instrumentation, and query language to monitor performance with improved scalability, availability, and security without having to manage the underlying infrastructure.
Today, enterprises use low-power, long-range wide-area network (LoRaWAN) connectivity to transmit data over long ranges, through walls and floors of buildings, and in commercial and industrial use cases. However, this requires companies to operate their own LoRa network server (LNS). In this session, learn how you can use LoRaWAN for AWS IoT Core to avoid time-consuming and undifferentiated development work, operational overhead of managing infrastructure, or commitment to costly subscription-based pricing from third-party service providers.
10- AWS CloudShell: The fastest way to get started with AWS CLI
AWS CloudShell is a free, browser-based shell available from the AWS console that provides a simple way to interact with AWS resources through the AWS command-line interface (CLI). In this session, see an overview of both AWS CloudShell and the AWS CLI, which when used together are the fastest and easiest ways to automate tasks, write scripts, and explore new AWS services. Also, see a demo of both services and how to quickly and easily get started with each.
Industrial organizations use AWS IoT SiteWise to liberate their industrial equipment data in order to make data-driven decisions. Now with AWS IoT SiteWise Edge, you can collect, organize, process, and monitor your equipment data on premises before sending it to local or AWS Cloud destinations—all while using the same asset models, APIs, and functionality. Learn how you can extend the capabilities of AWS IoT SiteWise to the edge with AWS IoT SiteWise Edge.
AWS Fault Injection Simulator is a fully managed chaos engineering service that helps you improve application resiliency by making it easy and safe to perform controlled chaos engineering experiments on AWS. In this session, see an overview of chaos engineering and AWS Fault Injection Simulator, and then see a demo of how to use AWS Fault Injection Simulator to make applications more resilient to failure.
Organizations are breaking down data silos and building petabyte-scale data lakes on AWS to democratize access to thousands of end users. Since its launch, AWS Lake Formation has accelerated data lake adoption by making it easy to build and secure data lakes. In this session, AWS Lake Formation GM Mehul A. Shah showcases recent innovations enabling modern data lake use cases. He also introduces a new capability of AWS Lake Formation that enables fine-grained, row-level security and near-real-time analytics in data lakes.
Machine learning (ML) models may generate predictions that are not fair, whether because of biased data, a model that contains bias, or bias that emerges over time as real-world conditions change. Likewise, closed-box ML models are opaque, making it difficult to explain to internal stakeholders, auditors, external regulators, and customers alike why models make predictions both overall and for individual inferences. In this session, learn how Amazon SageMaker Clarify is providing built-in tools to detect bias across the ML workflow including during data prep, after training, and over time in your deployed model.
Amazon EMR on Amazon EKS introduces a new deployment option in Amazon EMR that allows you to run open-source big data frameworks on Amazon EKS. This session digs into the technical details of Amazon EMR on Amazon EKS, helps you understand benefits for customers using Amazon EMR or running open-source Spark on Amazon EKS, and discusses performance considerations.
Finding unexpected anomalies in metrics can be challenging. Some organizations look for data that falls outside of arbitrary ranges; if the range is too narrow, they miss important alerts, and if it is too broad, they receive too many false alerts. In this session, learn about Amazon Lookout for Metrics, a fully managed anomaly detection service that is powered by machine learning and over 20 years of anomaly detection expertise at Amazon to quickly help organizations detect anomalies and understand what caused them. This session guides you through setting up your own solution to monitor for anomalies and showcases how to deliver notifications via various integrations with the service.
17- Improve application availability with ML-powered insights using Amazon DevOps Guru
As applications become increasingly distributed and complex, developers and IT operations teams need more automated practices to maintain application availability and reduce the time and effort spent detecting, debugging, and resolving operational issues manually. In this session, discover Amazon DevOps Guru, an ML-powered cloud operations service, informed by years of Amazon.com and AWS operational excellence, that provides an easy and automated way to improve an application’s operational performance and availability. See how you can transform your IT operations and reduce mean time to recovery (MTTR) with contextual insights.
Amazon Connect Voice ID provides real-time caller authentication that makes voice interactions in contact centers more secure and efficient. Voice ID uses machine learning to verify the identity of genuine customers by analyzing a caller’s unique voice characteristics. This allows contact centers to use an additional security layer that doesn’t rely on the caller answering multiple security questions, and it makes it easy to enroll and verify customers without disrupting the natural flow of the conversation. Join this session to see how fast and secure ML-based voice authentication can power your contact center.
G4ad instances feature the latest AMD Radeon Pro V520 GPUs and second-generation AMD EPYC processors. These new instances deliver the best price performance in Amazon EC2 for graphics-intensive applications such as virtual workstations, game streaming, and graphics rendering. This session dives deep into these instances, ideal use cases, and performance benchmarks, and it provides a demo.
Amazon ECS Anywhere is a new capability that enables deployment of Amazon ECS tasks on customer-managed infrastructure. This session covers the evolution of Amazon ECS over time, including new on-premises capabilities to manage your hybrid footprint using a common fully managed control plane and API. You learn some foundational technical details and important tenets that AWS is using to design these capabilities, and the session ends with a short demo of Amazon ECS Anywhere.
Amazon Aurora Serverless is an on-demand, auto scaling configuration of Amazon Aurora that automatically adjusts database capacity based on application demand. With Amazon Aurora Serverless v2, you can now scale database workloads instantly from hundreds to hundreds of thousands of transactions per second and adjust capacity in fine-grained increments to provide just the right amount of database resources. This session dives deep into Aurora Serverless v2 and shows how it can help you operate even the most demanding database workloads worry-free.
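As a sketch of what "fine-grained increments" means in practice, the boto3 calls below create an Aurora Serverless v2 cluster with capacity bounds expressed in Aurora Capacity Units (ACUs). The identifiers and credentials are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Sketch: an Aurora Serverless v2 cluster that scales between 0.5 and 16
# ACUs. Identifiers and credentials below are placeholders.
rds.create_db_cluster(
    DBClusterIdentifier="demo-serverless-v2",
    Engine="aurora-postgresql",
    MasterUsername="dbadmin",
    MasterUserPassword="change-me-please",
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,   # floor: keeps the database warm cheaply
        "MaxCapacity": 16.0,  # ceiling: caps cost under peak load
    },
)

# Serverless v2 capacity is attached via an instance of class db.serverless
rds.create_db_instance(
    DBInstanceIdentifier="demo-serverless-v2-writer",
    DBClusterIdentifier="demo-serverless-v2",
    DBInstanceClass="db.serverless",
    Engine="aurora-postgresql",
)
```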
Apple delights its customers with stunning devices like iPhones, iPads, MacBooks, Apple Watches, and Apple TVs, and developers want to create applications that run on iOS, macOS, iPadOS, tvOS, watchOS, and Safari. In this session, learn how Amazon is innovating to improve the development experience for Apple applications. Come learn how AWS now enables you to develop, build, test, and sign Apple applications with the flexibility, scalability, reliability, and cost benefits of Amazon EC2.
When industrial equipment breaks down, this means costly downtime. To avoid this, you perform maintenance at regular intervals, which is inefficient and increases your maintenance costs. Predictive maintenance allows you to plan the required repair at an optimal time before a breakdown occurs. However, predictive maintenance solutions can be challenging and costly to implement given the high costs and complexity of sensors and infrastructure. You also have to deal with the challenges of interpreting sensor data and accurately detecting faults in order to send alerts. Come learn how Amazon Monitron helps you solve these challenges by offering an out-of-the-box, end-to-end, cost-effective system.
As data grows, we need innovative approaches to get insight from all the information at scale and speed. AQUA is a new hardware-accelerated cache that uses purpose-built analytics processors to deliver up to 10 times better query performance than other cloud data warehouses by automatically boosting certain types of queries. It’s available in preview on Amazon Redshift RA3 nodes in select regions at no extra cost and without any code changes. Attend this session to understand how AQUA works and which analytic workloads will benefit the most from AQUA.
Figuring out if a part has been manufactured correctly, or if a machine part is damaged, is vitally important. Making this determination usually requires people to inspect objects, which can be slow and error-prone. Some companies have applied automated image analysis—machine vision—to detect anomalies. While useful, these systems can be very difficult and expensive to maintain. In this session, learn how Amazon Lookout for Vision can automate visual inspection across your production lines in a few days. Get started in minutes, and perform visual inspection and identify product defects using as few as 30 images, with no machine learning (ML) expertise required.
AWS Proton is a new service that enables infrastructure operators to create and manage common container-based and serverless application stacks and automate provisioning and code deployments through a self-service interface for their developers. Learn how infrastructure teams can empower their developers to use serverless and container technologies without them first having to learn, configure, and maintain the underlying resources.
Migrating applications from SQL Server to an open-source compatible database can be time-consuming and resource-intensive. Solutions such as the AWS Database Migration Service (AWS DMS) automate data and database schema migration, but there is often more work to do to migrate application code. This session introduces Babelfish for Aurora PostgreSQL, a new translation layer for Amazon Aurora PostgreSQL that enables Amazon Aurora to understand commands from applications designed to run on Microsoft SQL Server. Learn how Babelfish for Aurora PostgreSQL works to reduce the time, risk, and effort of migrating Microsoft SQL Server-based applications to Aurora, and see some of the capabilities that make this possible.
Over the past decade, we’ve witnessed a digital transformation in healthcare, with organizations capturing huge volumes of patient information. But this data is often unstructured and difficult to extract, with information trapped in clinical notes, insurance claims, recorded conversations, and more. In this session, explore how the new Amazon HealthLake service removes the heavy lifting of organizing, indexing, and structuring patient information to provide a complete view of each patient’s health record in the FHIR standard format. Come learn how to use prebuilt machine learning models to analyze and understand relationships in the data, identify trends, and make predictions, ultimately delivering better care for patients.
When business users want to ask new data questions that are not answered by existing business intelligence (BI) dashboards, they rely on BI teams to create or update data models and dashboards, which can take several weeks to complete. In this session, learn how Merlin lets users simply enter their questions on the Merlin search bar and get answers in seconds. Merlin uses natural language processing and semantic data understanding to make sense of the data. It extracts business terminologies and intent from users’ questions, retrieves the corresponding data from the source, and returns the answer in the form of a number, chart, or table in Amazon QuickSight.
When developers publish images publicly for anyone to find and use—whether for free or under license—they must make copies of common images and upload them to public websites and registries that do not offer the same availability commitment as Amazon ECR. This session explores a new Amazon public registry, Amazon ECR Public, built with AWS experience operating Amazon ECR. Here, developers can share georeplicated container software worldwide for anyone to discover and download. Developers can quickly publish public container images with a single command. Learn how anyone can browse and pull container software for use in their own applications.
Industrial companies are constantly working to avoid unplanned downtime due to equipment failure and to improve operational efficiency. Over the years, they have invested in physical sensors, data connectivity, data storage, and dashboarding to monitor equipment and get real-time alerts. Current data analytics methods include single-variable thresholds and physics-based modeling approaches, which are not effective at detecting certain failure types and operating conditions. In this session, learn how Amazon Lookout for Equipment uses data from your sensors to detect abnormal equipment behavior so that you can take action before machine failures occur and avoid unplanned downtime.
In this session, learn how Contact Lens for Amazon Connect enables your contact center supervisors to understand the sentiment of customer conversations, identify call drivers, evaluate compliance with company guidelines, and analyze trends. This can help supervisors train agents, replicate successful interactions, and identify crucial company and product feedback. Your supervisors can conduct fast full-text search on all transcripts to quickly troubleshoot customer issues. With real-time capabilities, you can get alerted to issues during live customer calls and deliver proactive assistance to agents while calls are in progress, improving customer satisfaction. Join this session to see how real-time ML-powered analytics can power your contact center.
AWS Local Zones places compute, storage, database, and other select services closer to locations where no AWS Region exists today. Last year, AWS launched the first two Local Zones in Los Angeles, and organizations are using Local Zones to deliver applications requiring ultra-low-latency compute. AWS is launching Local Zones in 15 metro areas to extend access across the contiguous US. In this session, learn how you can run latency-sensitive portions of applications local to end users and resources in a specific geography, delivering single-digit millisecond latency for use cases such as media and entertainment content creation, real-time gaming, reservoir simulations, electronic design automation, and machine learning.
Your customers expect a fast, frictionless, and personalized customer service experience. In this session, learn about Amazon Connect Customer Profiles—a new unified customer profile capability to allow agents to provide more personalized service during a call. Customer Profiles automatically brings together customer information from multiple applications, such as Salesforce, Marketo, Zendesk, ServiceNow, and Amazon Connect contact history, into a unified customer profile. With Customer Profiles, agents have the information they need, when they need it, directly in their agent application, resulting in improved customer satisfaction and reduced call resolution times (by up to 15%).
Preparing training data can be tedious. Amazon SageMaker Data Wrangler provides a faster, visual way to aggregate and prepare data for machine learning. In this session, learn how to use SageMaker Data Wrangler to connect to data sources and use prebuilt visualization templates and built-in data transforms to streamline the process of cleaning, verifying, and exploring data without having to write a single line of code. See a demonstration of how SageMaker Data Wrangler can be used to perform simple tasks as well as more advanced use cases. Finally, see how you can take your data preparation workflows into production with a single click.
To provide access to critical resources when needed and also limit the potential financial impact of an application outage, a highly available application design is critical. In this session, learn how you can use Amazon CloudWatch and AWS X-Ray to increase the availability of your applications. Join this session to learn how AWS observability solutions can help you proactively detect, efficiently investigate, and quickly resolve operational issues, all of which helps you manage and improve your application’s availability.
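One small building block of that proactive detection is a CloudWatch alarm on an error metric. The sketch below alerts when a Lambda function's errors spike; the function name and SNS topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Sketch: alarm when a Lambda function's error count breaches a threshold
# for three consecutive minutes. Names and ARNs are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="checkout-fn-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "checkout-fn"}],
    Statistic="Sum",
    Period=60,                  # evaluate per minute
    EvaluationPeriods=3,        # three consecutive breaches
    Threshold=5,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)
```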
Security is critical for your Kubernetes-based applications. Join this session to learn about the security features and best practices for Amazon EKS. This session covers encryption and other configurations and policies to keep your containers safe.
Don’t miss the AWS Partner Keynote with Doug Yeum, head of Global Partner Organization; Sandy Carter, vice president, Global Public Sector Partners and Programs; and Dave McCann, vice president, AWS Migration, Marketplace, and Control Services, to learn how AWS is helping partners modernize their businesses to help their customers transform.
Join Swami Sivasubramanian for the first-ever Machine Learning Keynote, live at re:Invent. Hear how AWS is freeing builders to innovate on machine learning with the latest developments in AWS machine learning, demos of new technology, and insights from customers.
Join Peter DeSantis, senior vice president of Global Infrastructure and Customer Support, to learn how AWS has optimized its cloud infrastructure to run some of the world’s most demanding workloads and give your business a competitive edge.
Join Dr. Werner Vogels at 8:00AM (PST) as he goes behind the scenes to show how Amazon is solving today’s hardest technology problems. Based on his experience working with some of the largest and most successful applications in the world, Dr. Vogels shares his insights on building truly resilient architectures and what that means for the future of software development.
Cloud architecture has evolved over the years as the nature of adoption has changed and the level of maturity in our thinking continues to develop. In this session, Rudy Valdez, VP of Solutions Architecture and Training & Certification, walks through how cloud architecture has evolved and where it is heading.
Organizations around the world are minimizing operations and maximizing agility by developing with serverless building blocks. Join David Richardson, VP of Serverless, for a closer look at the serverless programming model, including event-driven architectures.
AWS edge computing solutions provide infrastructure and software that move data processing and analysis as close to the endpoint where data is generated as required by customers. In this session, learn about new edge computing capabilities announced at re:Invent and how customers are using purpose-built edge solutions to extend the cloud to the edge.
Topics include simplifying container deployment, legacy workload migration using containers, optimizing costs for containerized applications, container architectural choices, and more.
Do you need to know what’s happening with your applications that run on Amazon EKS? In this session, learn how you can combine open-source tools, such as Prometheus and Grafana, with Amazon CloudWatch using CloudWatch Container Insights. Come to this session for a demo of Prometheus metrics with Container Insights.
The hard part is done. You and your team have spent weeks poring over pull requests, building microservices and containerizing them. Congrats! But what do you do now? How do you get those services on AWS? How do you manage multiple environments? How do you automate deployments? AWS Copilot is a new command line tool that makes building, developing, and operating containerized applications on AWS a breeze. In this session, learn how AWS Copilot can help you and your team manage your services and deploy them to production, safely and delightfully.
Five years ago, if you talked about containers, the assumption was that you were running them on a Linux VM. Fast forward to today, and now that assumption is challenged—in a good way. Come to this session to explore the best data plane option to meet your needs. This session covers the advantages of different abstraction models (Amazon EC2 or AWS Fargate), the operating system (Linux or Windows), the CPU architecture (x86 or Arm), and the commercial model (Spot or On-Demand Instances).
In this session, learn how the Commonwealth Bank of Australia (CommBank) built a platform to run containerized applications in a regulated environment and then replicated it across multiple departments using Amazon EKS, AWS CDK, and GitOps. This session covers how to manage multiple multi-team Amazon EKS clusters across multiple AWS accounts while ensuring compliance and observability requirements and integrating Amazon EKS with AWS Identity and Access Management, Amazon CloudWatch, AWS Secrets Manager, Application Load Balancer, Amazon Route 53, and AWS Certificate Manager.
Amazon EKS is a fully managed service that makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Join this session to learn about how Verizon runs its core applications on Amazon EKS at scale. Verizon also discusses how it worked with AWS to overcome several post-Amazon EKS migration challenges and ensured that the platform was robust.
Containers have helped revolutionize modern application architecture. While managed container services have enabled greater agility in application development, coordinating safe deployments and maintainable infrastructure has become more important than ever. This session outlines how to integrate CI/CD best practices into deployments of your Amazon ECS and AWS Fargate services using pipelines and the latest in AWS developer tooling.
With Amazon ECS, you can run your containerized workloads securely and with ease. In this session, learn how to utilize the full spectrum of Amazon ECS security features and its tight integrations with AWS security features to help you build highly secure applications.
Do you have to budget your spend for container workloads? Do you need to be able to optimize your spend in multiple services to reduce waste? If so, this session is for you. It walks you through how you can use AWS services and configurations to improve your cost visibility. You learn how you can select the best compute options for your containers to maximize utilization and reduce duplication. This, combined with various AWS purchase options, helps you ensure that you’re using the best options for your services and your budget.
You have a choice of approach when it comes to provisioning compute for your containers. Some users prefer to have more direct control of their instances, while others would rather do away with the operational heavy lifting. AWS Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. This session explores the benefits and considerations of running on Fargate or directly on Amazon EC2 instances. You hear about new and upcoming features and learn how Amenity Analytics benefits from the serverless operational model.
Are you confused by the many choices of container services that you can run on AWS? This session explores all your options and the advantages of each. Whether you are just beginning to learn Docker or are an expert with Kubernetes, join this session to learn how to pick the right services that would work best for you.
Leading containers migration and modernization initiatives can be daunting, but AWS is making it easier. This session explores architectural choices and common patterns, and it provides real-world customer examples. Learn about core technologies to help you build and operate container environments at scale. Discover how abstractions can reduce the pain for infrastructure teams, operators, and developers. Finally, hear the AWS vision for how to bring it all together with improved usability for more business agility.
As the number of services grow within an application, it becomes difficult to pinpoint the exact location of errors, reroute traffic after failures, and safely deploy code changes. In this session, learn how to integrate AWS App Mesh with Amazon ECS to export monitoring data and implement consistent communications control logic across your application. This makes it easy to quickly pinpoint the exact locations of errors and automatically reroute network traffic, keeping your container applications highly available and performing well.
Enterprises are continually looking to develop new applications using container technologies and leveraging modern CI/CD tools to automate their software delivery lifecycles. This session highlights the types of applications and associated factors that make a candidate suitable to be containerized. It also covers best practices that can be considered as you embark on your modernization journey.
Because of its security, reliability, and scalability capabilities, Amazon Elastic Kubernetes Service (Amazon EKS) is used by organizations in their most sensitive and mission-critical applications. This session focuses on how Amazon EKS networking works with an Amazon VPC and how to expose your Kubernetes application using Elastic Load Balancing load balancers. It also looks at options for more efficient IP address utilization.
Network design is a critical component in your large-scale migration journey. This session covers some of the real-world networking challenges faced when migrating to the cloud. You learn how to overcome these challenges by diving deep into topics such as establishing private connectivity to your on-premises data center and accelerating data migrations using AWS Direct Connect/Direct Connect gateway, centralizing and simplifying your networking with AWS Transit Gateway, and extending your private DNS into the cloud. The session also includes a discussion of related best practices.
5G will be the catalyst for the next industrial revolution. In this session, come learn about key technical use cases for different industry segments that will be enabled by 5G and related technologies, and hear about the architectural patterns that will support these use cases. You also learn about AWS-enabled 5G reference architectures that incorporate AWS services.
AWS offers a breadth and depth of machine learning (ML) infrastructure you can use through either a do-it-yourself approach or a fully managed approach with Amazon SageMaker. In this session, explore how to choose the proper instance for ML inference based on latency and throughput requirements, model size and complexity, framework choice, and portability. Join this session to compare and contrast compute-optimized CPU-only instances, such as Amazon EC2 C4 and C5; high-performance GPU instances, such as Amazon EC2 G4 and P3; cost-effective variable-size GPU acceleration with Amazon Elastic Inference; and highest performance/cost with Amazon EC2 Inf1 instances powered by custom-designed AWS Inferentia chips.
When it comes to architecting your workloads on VMware Cloud on AWS, it is important to understand design patterns and best practices. Come join this session to learn how you can build well-architected cloud-based solutions for your VMware workloads. This session covers infrastructure designs with native AWS service integrations across compute, networking, storage, security, and operations. It also covers the latest announcements for VMware Cloud on AWS and how you can use these new features in your current architecture.
One of the most critical phases of executing a migration is moving traffic from your existing endpoints to your newly deployed resources in the cloud. This session discusses practices and patterns that can be leveraged to ensure a successful cutover to the cloud. The session covers preparation, tools and services, cutover techniques, rollback strategies, and engagement mechanisms to ensure a successful cutover.
AWS DeepRacer is the fastest way to get rolling with machine learning. Developers of all skill levels can get hands-on, learning how to train reinforcement learning models in a cloud-based 3D racing simulator. Attend a session to get started, and then test your skills by competing for prizes and glory in an exciting autonomous car racing experience throughout re:Invent!
AWS DeepRacer gives you an interesting and fun way to get started with reinforcement learning (RL). RL is an advanced machine learning (ML) technique that takes a very different approach to training models than other ML methods. Its superpower is that it learns very complex behaviors without requiring any labeled training data, and it can make short-term decisions while optimizing for a longer-term goal. AWS DeepRacer makes it fast and easy to build models in Amazon SageMaker and train, test, and iterate quickly and easily on the track in the AWS DeepRacer 3D racing simulator.
As more organizations are looking to migrate to the cloud, Red Hat OpenShift Service on AWS offers a proven, reliable, and consistent platform across the hybrid cloud. Red Hat and AWS recently announced a fully managed joint service that can be deployed directly from the AWS Management Console and can integrate with other AWS Cloud-native services. In this session, you learn about this new service, which delivers production-ready Kubernetes that many enterprises use on premises today, enhancing your ability to shift workloads to the AWS Cloud and making it easier to adopt containers and deploy applications faster. This presentation is brought to you by Red Hat, an AWS Partner.
Event-driven architecture can help you decouple services and simplify dependencies as your applications grow. In this session, you learn how Amazon EventBridge provides new options for developers who are looking to gain the benefits of this approach.
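A minimal sketch of the producer side of this pattern follows; the bus name, event source, and payload fields are placeholders.

```python
import boto3
import json

events = boto3.client("events")

# Sketch: a producer emits a domain event; decoupled consumers subscribe
# via EventBridge rules. Bus name and detail fields are placeholders.
events.put_events(
    Entries=[{
        "EventBusName": "orders-bus",
        "Source": "com.example.orders",
        "DetailType": "OrderPlaced",
        "Detail": json.dumps({"orderId": "1234", "total": 42.50}),
    }]
)
```

On the consumer side, a rule with an event pattern such as `{"source": ["com.example.orders"]}` routes matching events to targets like Lambda or SQS, so producers and consumers never call each other directly.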
Amazon Timestream is a fast, scalable, and serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day at as little as one-tenth the cost of relational databases. In this session, dive deep on Amazon Timestream features and capabilities, including its serverless automatic scaling architecture, its storage tiering that simplifies your data lifecycle management, its purpose-built query engine that lets you access and analyze recent and historical data together, and its built-in time series analytics functions that help you identify trends and patterns in your data in near-real time.
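The sketch below writes a single data point and then queries it back using Timestream's built-in time series functions; the database, table, and dimension names are placeholders.

```python
import time
import boto3

# Sketch: write one reading, then aggregate recent readings in SQL.
# Database/table names and dimensions are placeholders.
write = boto3.client("timestream-write")
query = boto3.client("timestream-query")

write.write_records(
    DatabaseName="iot_db",
    TableName="device_metrics",
    Records=[{
        "Dimensions": [{"Name": "device_id", "Value": "sensor-42"}],
        "MeasureName": "cpu_utilization",
        "MeasureValue": "63.5",
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),  # milliseconds since epoch
    }],
)

# Built-in time series functions such as bin() and ago() run directly in SQL
result = query.query(
    QueryString="""
        SELECT device_id, bin(time, 5m) AS window,
               avg(measure_value::double) AS avg_cpu
        FROM "iot_db"."device_metrics"
        WHERE measure_name = 'cpu_utilization' AND time > ago(1h)
        GROUP BY device_id, bin(time, 5m)
    """
)
print(result["Rows"])
```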
Savings Plans is a flexible pricing model that allows you to save up to 72 percent on Amazon EC2, AWS Fargate, and AWS Lambda. Many AWS users have adopted Savings Plans since its launch in November 2019 for the simplicity, savings, ease of use, and flexibility. In this session, learn how many organizations use Savings Plans to drive more migrations and business outcomes. Hear from Comcast on their compute transformation journey to the cloud and how it started with Reserved Instances (RIs). As their cloud usage evolved, they adopted Savings Plans to drive business outcomes such as new architecture patterns.
The ability to deploy only configuration changes, separate from code, means you do not have to restart the applications or services that use the configuration and changes take effect immediately. In this session, learn best practices used by teams within Amazon to rapidly release features at scale. Learn about a pattern that uses AWS CodePipeline and AWS AppConfig that will allow you to roll out application configurations without taking applications out of service. This will help you ship features faster across complex environments or regions.
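To illustrate the client side of this pattern, here is a sketch that polls AppConfig for the latest configuration via the AppConfig Data API, so a configuration-only deployment changes behavior without a restart. The application, environment, and profile identifiers are placeholders.

```python
import json
import boto3

appconfig = boto3.client("appconfigdata")

# Sketch: fetch configuration at startup and on a poll interval.
# Application/environment/profile identifiers are placeholders.
session = appconfig.start_configuration_session(
    ApplicationIdentifier="checkout-app",
    EnvironmentIdentifier="prod",
    ConfigurationProfileIdentifier="feature-flags",
)
token = session["InitialConfigurationToken"]

response = appconfig.get_latest_configuration(ConfigurationToken=token)
token = response["NextPollConfigurationToken"]  # reuse on the next poll

config = response["Configuration"].read()       # empty if unchanged
if config:
    flags = json.loads(config)
    print("Active flags:", flags)
```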
I watched (binged) the A Cloud Guru course in two days and did the 6 practice exams over a week. I originally was only getting 70%’s on the exams, but continued doing them on my free time (to the point where I’d have 15 minutes and knock one out on my phone lol) and started getting 90%’s. – A mix of knowledge vs memorization tbh. Just make sure you read why your answers are wrong.
I don’t really have a huge IT background, although will note I work in a DevOps (1 1/2 years) environment; so I do use AWS to host our infrastructure. However, the exam is very high level compared to what I do/services I use. I’m fairly certain with zero knowledge/experience, someone could pass this within two weeks. AWS is also currently promoting a “get certified” challenge and is offering 50% off.
Went through the entire CloudAcademy course. Most of the info went out the other ear. Got a 67% on their final exam. Took the ExamPro free exam, got 69%.
Was going to take it last Saturday, but I bought TutorialDojo’s exams on Udemy. Did one Friday night, got a 50% and rescheduled it a week later to today Sunday.
Took 4 total TD exams. Got a 50%, 54%, 67%, and 64%. Even up until last night I hated the TD exams with a passion, I thought they were covering way too much stuff that didn’t even pop up in study guides I read. Their wording for some problems was also atrocious. But looking back, the bulk of my “studying” was going through their pretty well written explanations, and their links to the white papers allowed me to know what and where to read.
Not sure what score I got yet on the exam. As someone who always hated testing, I’m pretty proud of myself. I also had to take a dump really bad starting at around question 25. Thanks to TutorialsDojo Jon Bonso for completely destroying my confidence before the exam, forcing me to up my game. It’s better to walk in way over prepared than underprepared.
I would like to thank this community for recommendations about exam preparation. It was wayyyy easier than I expected (also way easier than TD practice exams scenario-based questions-a lot less wordy on real exam). I felt so unready before the exam that I rescheduled the exam twice. Quick tip: if you have limited time to prepare for this exam, I would recommend scheduling the exam beforehand so that you don’t procrastinate fully.
Resources:
-Stephane’s course on Udemy (I have seen people saying to skip hands-on videos but I found them extremely helpful to understand most of the concepts-so try to not skip those hands-on)
-Tutorials Dojo practice exams (I did only 3.5 practice tests out of 5 and already got 8-10 EXACTLY worded questions on my real exam)
Previous Aws knowledge:
-Very little to no experience (deployed my group’s app to cloud via Elastic beanstalk in college-had 0 clue at the time about what I was doing-had clear guidelines)
Preparation duration: -2 weeks (honestly watched videos for 12 days and then went over summary and practice tests on the last two days)
I used Stephane Maarek on Udemy. Purchased his course and the 6 Practice Exams. Also got Neal Davis’ 500 practice questions on Udemy. I took Stephane’s class over 2 days, then spent the next 2 weeks going over the tests (3~4 per day) till I was constantly getting over 80% – passed my exam with a 882.
What an adventure, I’ve never really given thought to getting a cert until one day it just dawned on me that it’s one of the few resources that are globally accepted. So you can approach any company and basically prove you know what’s up on AWS 😀
Passed with two weeks of prep (after work and weekends)
This was just a nice structured presentation that also gives you the powerpoint slides plus cheatsheets and a nice overview of what is said in each video lecture.
Udemy – AWS Certified Cloud Practitioner Practice Exams, created by Jon Bonso, Tutorials Dojo
These are some good prep exams, they ask the questions in a way that actually make you think about the related AWS Service. With only a few “Bullshit! That was asked in a confusing way” questions that popped up.
I took CCP 2 days ago and got the pass notification right after submitting the answers. In about the next 3 hours I got an email from Credly for the badge. This morning I got an official email from AWS congratulating me on passing, the score is much higher than I expected. I took Stephane Maarek’s CCP course and his 6 demo exams, then Neal Davis’ 500 questions also. On the demo exams, I failed one and passed the rest with about 700-800. But in the real exam, I got 860. The questions in the real exam are kind of less verbose IMO, but I don’t truly agree with some people I see on this sub saying that they are easier. Just a little bit of sharing, now I’ll find something to continue ^^
Passed the exam! Spent 25 minutes answering all the questions. Another 10 to review. I might come back and update this post with my actual score.
Background
– A year of experience working with AWS (e.g., EC2, Elastic Beanstalk, Route 53, and Amplify).
– Cloud development on AWS is not my strong suit. I just Google everything, so my knowledge is very spotty. Less so now since I studied for this exam.
Study stats
– Spent three weeks studying for the exam.
– Studied an hour to two every day.
– Solved 800-1000 practice questions.
– Took 450 screenshots of practice questions and technology/service descriptions as reference notes to quickly sift through on my phone and computer for review. Screenshots were of questions that I either didn’t know, knew but was iffy on, or those I believed I’d easily forget.
– Made 15-20 pages of notes. Chill. Nothing crazy. This is on A4 paper. Free-form note taking. With big diagrams. Around 60-80 words per page.
– I was getting low-to-mid 70%s on Neal Davis’s and Stephane Maarek’s practice exams. Highest score I got was an 80%.
– I got a 67(?)% on one of Stephane Maarek’s exams. The only sub-70% I ever got on any practice test. I got slightly anxious. But given how much harder Maarek’s exams are compared to the actual exam, the anxiety was undue.
– Finishing the practice exams on time was never a problem for me. I would finish all of them comfortably within 35 minutes.
Resources used
– AWS Cloud Practitioner Essentials on the AWS Training and Certification Portal
– AWS Certified Cloud Practitioner Practice Tests (Book) by Neal Davis
– 6 Practice Exams | AWS Certified Cloud Practitioner CLF-C01 by Stephane Maarek**
– Certified Cloud Practitioner Course by Exam Pro (Paid Version)*
– One or two free practice exams found by a quick Google search
*Regarding Exam Pro: I went through about 40% of the video lectures. I went through all the videos in the first few sections but felt that watching the lectures was too slow and laborious even at 1.5-2x speed. (The creator, for the most part, reads off of the slides, adding brief comments here and there.) So, I decided to only watch the video lectures for sections I didn’t have a good grasp on. (I believe the video lectures provided in the course are just split versions of the full length course available for free on YouTube under the freeCodeCamp channel, here.) The online course provides five practice exams. I did not take any of them.
**Regarding Stephane Maarek: I only took his practice exams. I did not take his study guide course.
Notes
– My study regimen (i.e., an hour to two every day for three weeks) was overkill.
– The questions on the practice exams created by Neal Davis and Stephane Maarek were significantly harder than those on the actual exam. I believe I could’ve passed without touching any of these resources.
– I retook one or two practice exams out of the 10+ I’ve taken. I don’t think there’s a need to retake the exams as long as you are diligent about studying the questions and underlying concepts you got wrong. I reviewed all the questions I missed on every practice exam the day before.
What would I do differently?
– Focus on practice tests only. No video lectures.
– Focus on the technologies domain. You can intuit your way through questions in the other domains.
I thank you all for helping me through this process! Couldn’t have done it without all of the recommendations and guidance on this page.
Background: I am a back-end developer that works 12 hours a day for corporate America, so no time to study (or do anything) but I made it work.
Could I have probably gone for SAA first? Yeah, but I wanted to prove to myself that I could do it. I studied for about a month. I used Maarek’s Udemy course at 1.5x speed and I couldn’t recommend it more. I also used his practice exams. I’ll be honest, I took 5 practice exams and somehow managed to fail every single one in the mid 60’s lol. Cleared the exam with an 800. Practice exams WAY harder.
My 2 cents on must knows:
AWS Shared Security Model (who owns what)
Everything Billing (EC2 instance, S3, different support plans)
I had a few ML questions that caught me off guard
VPC concepts – i.e. subnets, NACL, Transit Gateway
I studied solidly for two weeks, starting with Tutorials Dojo (which was recommended somewhere on here). I turned all of their vocabulary words and end of module questions into note cards. I did the same with their final assessment and one free exam.
During my second week, I studied the cards for anywhere from one to two hours a day, and I’d randomly watch videos on common exam questions.
The last thing I did was watch a 3 hr long video this morning that walks you through setting up AWS Instances. The visual of setting things up filled in a lot of holes.
I had some PSI software problems, and ended up getting started late. I was pretty dejected towards the end of the exam, and was honestly (and pleasantly) surprised to see that I passed.
Hopefully this helps someone. Keep studying and pushing through – if you know it, you know it. Even if you have a bad start. Cheers 🍻
My Data Analytics and Machine Learning Specialities are due to expire in the next 18 months. DAS has obviously been discontinued and I hear (based on lack of recent updates) MLS is going the same way. I have had a look at the new MLE and DE associate exams and they seem a lot less rigorous than the old Specialties. Is there any intention to bring in a Professional level exam that covers AWS data tools? submitted by /u/Eightstream
Hi Reddit family, This achievement wouldn’t have been possible without the amazing people here. Score: 853/1000. As part of my company’s performance requirements, I had to take the AWS Solutions Architect Associate (SAA) exam. AWS was entirely new to me, and I’m currently in my training period. To prepare, I followed Stephane Maarek’s course along with TD and Stephane Maarek’s practice tests. I also focused on hands-on practice with Lambda, VPC, API Gateway, SQS, SNS, DynamoDB, and CloudFormation, and worked on some mini-projects to strengthen my understanding. I dedicated 5–6 hours in the first week to complete the course and then spent the rest of the time on practice tests and hands-on work. Initially, I was really afraid to take the exam. However, reading posts and comments from this community boosted my confidence. I made my own notes and referred to the SAA Bible from this Reddit post: https://www.reddit.com/r/AWSCertifications/s/flwxxl1TFJ Thank you all ❤️ submitted by /u/Head_One4179
Scored 69% - 70% - 70% - 78% on Stephane's practice exams on Udemy. I know the weight is different since this is scored out of 65 and the real exam is out of 50 but we won't know which question is not graded. I am going to review heavy on the topics I missed for sure. How do you think my practice exam scores will translate to the real thing? submitted by /u/Pointfit_
Background: I have almost 3 years, 2 years and 11 months to be exact, of working experience as an Information Security Analyst. I work full time (9 am - 5 pm) so I studied after work about 3-4 times a week, and depending on the day this could range from 1.5 - 3 hours. For full-time workers, weekends are your friend! You should try studying a minimum of 2 hours on weekends. It kind of sucked to do this because I usually work out in the evenings, but try to adjust your schedule. Checking my Google Docs study guide that I made for myself, I started studying around September 3, 2024 and took my exam December 7, 2024. Exam Preparation: For exam preparation, I used the Tutorials Dojo video course with Jon Bonso. I also purchased the Jon Bonso practice exams. I watched all of the videos and created a condensed study guide. As for the practice exams: I took 2 timed exams and did one review exam, and I completed and reviewed all of the domain exams. Final Score: 796/1000. Tips (at least if you're trying to be exam ready as quickly as possible): Go through all of the Jon Bonso videos from the video course. TAKE THE PRACTICE EXAMS, both practice and review exams (to see the answer to each question immediately). The review exams are good because if you cannot see yourself improving on a certain topic throughout the test, then it is a weak area for you to focus your attention on. It's been said before, but word associations are your best friend: real-time --> Kinesis Data Streams; asynchronous processing --> SQS; concurrent file access --> EFS; connection to an AWS service in a VPC --> Gateway Endpoint. Not sure what my next cert will be but this one was cool to study for!! submitted by /u/Adventurous-Carrot-1
Hello, I just had the strangest (and most frustrating) experience during my AWS Certification exam, and I could really use your advice. I was taking the test remotely, and because it was cold, I had a hand warmer with me. Before starting, I specifically asked the proctor if it was okay to use the hand warmer, and they said it was fine. I proceeded with the exam confident that I wasn’t breaking any rules. About halfway through, the proctor suddenly asked if the hand warmer was paper. I said yes and even sent a message saying I’d stop using it to avoid any confusion. Despite this, the proctor terminated my test. Now I’m left with a couple of concerns: Could AWS ban me for this? I acted in good faith and made sure to ask for permission beforehand. What’s the best way to handle this with AWS Certification support to ensure I can reschedule the exam? I’ve already reached out to support and am waiting for a response. Has anyone here experienced something similar? What was the outcome? Any advice on how to approach this situation would be super helpful. I’m feeling pretty anxious about this, so I’d really appreciate your input. Thanks in advance! submitted by /u/legoland9
What cert would you get next in my position and why? I'm a Senior Product Manager in the Cloud Engineering space. I just passed my SAA a couple weeks ago (got my CCP a few years ago) and want to continue my learning. That said, I am not an engineer and my focus is more on having a thorough understanding of all the technical capabilities offered by AWS so I can best work with my internal customers. I am considering the AI Practitioner, or Data Engineer, or Machine Learning Engineer. submitted by /u/northstarhunter
Hi everyone, I am currently preparing for the AWS Solutions Architect Professional directly rather than first attempting the Solutions Architect Associate. Is it a good decision, or should I follow the order, i.e., Associate then Professional? My main goal is the AWS Machine Learning Specialty. submitted by /u/Suspicious-Laugh7334
Hi, I recently joined a new company and I have to pass the Cloud Practitioner. I did the AWS course on AWS Skill Builder, and I spent almost a week doing training exams on ExamTopics and Cloud Guru. For the last 2-3 days I’ve been scoring a minimum of 59/65, so more than 90% each time. My only fear is that on Whizlabs I have bad test results because their questions are more difficult due to outdated info; the questions are tricky and sometimes the subject is more complicated, covering quotas and very specific things like the max number of IP tables in a VPC. I registered to pass it on Monday, is it chill or am I cooked? If the questions are more like Cloud Guru and ExamTopics it’s chill; if it’s more like Whizlabs I am not sure about passing. For those who passed it recently, what should I expect from my Monday attempt? Is it more global comprehension or very specific info and quotas? Thx for your replies. Edit: I did the free exam trial on exampro.co and scored 59/65 submitted by /u/Arthuranium238
Hello! I have been working in the IT industry for 7 years now: 4 years at a service provider as a network analyst, then the past three years with Nokia as a network engineer, with various certs from Cisco, Juniper, and Fortinet. But now I feel like I am stuck in my career, unable to find opportunities to grow and evolve. I want to transition into Cloud Network/Architect roles. I have pushed myself to study AWS multiple times and find it relatively easy, especially with 7+ years of networking experience. I still wonder how I can get an entry-level job with no cloud exposure? Please don’t advise doing open-source projects; it’s hard to believe I would even be able to land an interview, let alone a job, with just some DIY projects. Has anyone made the jump to a more cloud/serverless network engineering role and would like to share their journey? TIA! submitted by /u/sheryyj
Hello everyone, I have a small issue that keeps me from registering for the exam. I have been studying for over a month now using the usual combo (Stephane Maarek + Tutorials Dojo). Right now I’m done with the SM course and I’m doing the TD exams in review mode. However, with every exam that I do, I find questions that involve nitty-gritty details of some services that are somewhat too in-depth for the usual SAA curriculum (especially in the Networking and Security sections). Mind you, I score around 65-85% on each exam; the questions I get wrong are usually those that involve the nitty-gritty details I just mentioned. I’ve been trying to push myself to register for the exam for the past 10 days, but I keep holding back out of fear that I might face a lot of these questions in the exam. At the same time, I’m so tired of studying and I just want to get past this phase as quickly as I can. A small detail that contributes to the fear is that I will be paying for the exam with money I saved up from freelancing (I rarely get work), so I don’t want it going to waste. Any advice? Is my fear valid or am I over-worried? Thanks! submitted by /u/RandomAverageUser
Starting today, Amazon EC2 Hpc7a instances are available in additional AWS Region Europe (Paris). EC2 Hpc7a instances are powered by 4th generation AMD EPYC processors with up to 192 cores, and 300 Gbps of Elastic Fabric Adapter (EFA) network bandwidth for fast and low-latency internode communications. Hpc7a instances feature Double Data Rate 5 (DDR5) memory, which enables high-speed access to data in memory. Hpc7a instances are ideal for compute-intensive, tightly coupled, latency-sensitive high performance computing (HPC) workloads, such as computational fluid dynamics (CFD), weather forecasting, and multiphysics simulations, helping you scale more efficiently on fewer nodes. To optimize HPC instances networking for tightly coupled workloads, you can access these instances in a single Availability Zone within a Region. To learn more, see Amazon Hpc7a instances.
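Since tightly coupled HPC jobs benefit from nodes placed close together in a single Availability Zone, a typical launch uses a cluster placement group and an EFA network interface. The boto3 sketch below illustrates this; the AMI, subnet, and security group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-3")  # Europe (Paris)

# Sketch: launch two Hpc7a nodes close together for low-latency MPI traffic.
# AMI, subnet, and security group IDs are placeholders.
ec2.create_placement_group(GroupName="cfd-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="hpc7a.96xlarge",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "cfd-cluster"},
    # EFA is enabled by attaching a network interface of type 'efa'
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",
        "SubnetId": "subnet-0123456789abcdef0",
        "Groups": ["sg-0123456789abcdef0"],
    }],
)
```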
Starting today, Amazon EC2 Hpc6id instances are available in additional AWS Region Europe (Paris). These instances are optimized to efficiently run memory bandwidth-bound, data-intensive high performance computing (HPC) workloads, such as finite element analysis and seismic reservoir simulations. With EC2 Hpc6id instances, you can lower the cost of your HPC workloads while taking advantage of the elasticity and scalability of AWS. EC2 Hpc6id instances are powered by 64 cores of 3rd Generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.5 GHz, 1,024 GB of memory, and up to 15.2 TB of local NVMe solid state drive (SSD) storage. EC2 Hpc6id instances, built on the AWS Nitro System, offer 200 Gbps Elastic Fabric Adapter (EFA) networking for high-throughput inter-node communications that enable your HPC workloads to run at scale. The AWS Nitro System is a rich collection of building blocks that offloads many of the traditional virtualization functions to dedicated hardware and software. It delivers high performance, high availability, and high security while reducing virtualization overhead. To learn more about EC2 Hpc6id instances, see the product detail page.
Amazon Aurora PostgreSQL is now available as a quick create vector store in Amazon Bedrock Knowledge Bases. With the new Aurora quick create option, developers and data scientists building generative AI applications can select Aurora PostgreSQL as their vector store with one click to deploy an Aurora Serverless cluster preconfigured with pgvector in minutes. Aurora Serverless is an on-demand, autoscaling configuration where capacity is adjusted automatically based on application demand, making it ideal as a developer vector store. Knowledge Bases securely connects foundation models (FMs) running in Bedrock to your company data sources for Retrieval Augmented Generation (RAG) to deliver more relevant, context-specific, and accurate responses that make your FM more knowledgeable about your business. To implement RAG, organizations must convert data into embeddings (vectors) and store these embeddings in a vector store for similarity search in generative artificial intelligence (AI) applications. Aurora PostgreSQL, with the pgvector extension, has been supported as a vector store in Knowledge Bases for existing Aurora databases. With the new quick create integration with Knowledge Bases, Aurora is now easier to set up as a vector store for use with Bedrock. The quick create option in Bedrock Knowledge Bases is available in these regions with the exception of AWS GovCloud (US-West) which is planned for Q4 2024. To learn more about RAG with Amazon Bedrock and Aurora, see Amazon Bedrock Knowledge Bases. Amazon Aurora combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. To get started using Amazon Aurora PostgreSQL as a vector store for Amazon Bedrock Knowledge Bases, take a look at our documentation.
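To show what pgvector similarity search looks like underneath this integration, here is a sketch using psycopg2. The connection details, table layout, and query vector are placeholders; Bedrock Knowledge Bases manages an equivalent schema for you.

```python
import psycopg2

# Sketch of the similarity search that powers RAG over pgvector.
# Host, credentials, table, and embedding values are placeholders.
conn = psycopg2.connect(
    host="demo-cluster.cluster-xyz.eu-west-1.rds.amazonaws.com",
    dbname="kb", user="dbadmin", password="change-me",
)
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id bigserial PRIMARY KEY,
        chunk text,
        embedding vector(1536)   -- dimension of the embedding model
    );
""")

# Nearest neighbours by Euclidean distance (<->); pgvector also offers
# inner product (<#>) and cosine distance (<=>)
query_embedding = "[0.01, 0.02, 0.03]"  # truncated placeholder vector
cur.execute(
    "SELECT chunk FROM documents ORDER BY embedding <-> %s::vector LIMIT 5;",
    (query_embedding,),
)
for (chunk,) in cur.fetchall():
    print(chunk)
```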
Amazon CloudWatch now offers centralized visibility into critical AWS service telemetry configurations, such as Amazon VPC Flow Logs, Amazon EC2 Detailed Metrics, and AWS Lambda Traces. This enhanced visibility enables central DevOps teams, system administrators, and service teams to identify potential gaps in their infrastructure monitoring setup. The telemetry configuration auditing experience seamlessly integrates with AWS Config to discover AWS resources, and can be turned on for the entire organization using the new AWS Organizations integration with Amazon CloudWatch. With visibility into telemetry configurations, you can identify monitoring gaps that might have been missed in your current setup. For example, this helps you identify gaps in your EC2 detailed metrics so that you can address them and easily detect short-lived performance spikes and build responsive auto-scaling policies. You can audit telemetry configuration coverage at both resource type and individual resource levels, refining the view by filtering across specific accounts, resource types, or resource tags to focus on critical resources. The telemetry configurations auditing experience is available in US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) regions. There is no additional cost to turn on the new experience, including for AWS Config. You can get started with auditing your telemetry configurations using the Amazon CloudWatch Console, by clicking on Telemetry config in the navigation panel, or programmatically using the API/CLI. To learn more, visit our documentation.
AWS Config added support for a service-linked recorder, a new type of AWS Config recorder that is managed by an AWS service and can record configuration data on service-specific resources, such as the new Amazon CloudWatch telemetry configurations audit. By enabling the service-linked recorder in Amazon CloudWatch, you gain centralized visibility into critical AWS service telemetry configurations, such as Amazon VPC Flow Logs, Amazon EC2 Detailed Metrics, and AWS Lambda Traces. With service-linked recorders, an AWS service can deploy and manage an AWS Config recorder on your behalf to discover resources and utilize the configuration data to provide differentiated features. For example, an Amazon CloudWatch managed service-linked recorder helps you identify monitoring gaps within specific critical resources within your organization, providing a centralized, single-pane view of telemetry configuration status. Service-linked recorders are immutable to ensure consistency, prevent configuration drift, and simplify the experience. Service-linked recorders operate independently of any existing AWS Config recorder, if one is enabled. This allows you to independently manage your AWS Config recorder for your specific use cases while authorized AWS services manage the service-linked recorder for feature-specific requirements. The Amazon CloudWatch managed service-linked recorder is now available in US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) regions. The AWS Config service-linked recorder specific to the Amazon CloudWatch telemetry configuration feature is available to customers at no additional cost. To learn more, please refer to our documentation.
Amazon RDS (Relational Database Service) Performance Insights expands the availability of its on-demand analysis experience to 15 new regions. This feature is available for Aurora MySQL, Aurora PostgreSQL, and RDS for PostgreSQL engines. This on-demand analysis experience, which was previously available in only 15 regions, is now available in all commercial regions. This feature allows you to analyze Performance Insights data for a time period of your choice. You can learn how the selected time period differs from normal, what went wrong, and get advice on corrective actions. Through simple-to-understand graphs and explanations, you can identify the chief contributors to performance issues. You will also get the guidance on the next steps to act on these issues. This can reduce the mean-time-to-diagnosis for database performance issues from hours to minutes. Amazon RDS Performance Insights is a database performance tuning and monitoring feature of RDS that allows you to visually assess the load on your database and determine when and where to take action. With one click in the Amazon RDS Management Console, you can add a fully-managed performance monitoring solution to your Amazon RDS database. To learn more about RDS Performance Insights, read the Amazon RDS User Guide and visit Performance Insights pricing for pricing details and region availability.
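For readers who prefer the API over the console, here is a sketch that pulls average database load for the last hour, grouped by top SQL, using the Performance Insights client in boto3. The `DbiResourceId` is a placeholder.

```python
from datetime import datetime, timedelta
import boto3

pi = boto3.client("pi")

# Sketch: fetch average database load (active sessions) for the last hour,
# sliced by the top 5 SQL statements. Identifier is a placeholder
# DbiResourceId (not the DB instance name).
response = pi.get_resource_metrics(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKLMNOP",
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    PeriodInSeconds=60,
    MetricQueries=[{
        "Metric": "db.load.avg",
        "GroupBy": {"Group": "db.sql", "Limit": 5},
    }],
)
for metric in response["MetricList"]:
    print(metric["Key"], len(metric["DataPoints"]), "data points")
```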
Today, we are introducing the new ModelTrainer class and enhancing the ModelBuilder class in the SageMaker Python SDK. These updates streamline training workflows and simplify inference deployments. The ModelTrainer class enables customers to easily set up and customize distributed training strategies on Amazon SageMaker. This new feature accelerates model training times, optimizes resource utilization, and reduces costs through efficient parallel processing. Customers can smoothly transition their custom entry points and containers from a local environment to SageMaker, eliminating the need to manage infrastructure. ModelTrainer simplifies configuration by reducing parameters to just a few core variables and providing user-friendly classes for intuitive SageMaker service interactions. Additionally, with the enhanced ModelBuilder class, customers can now easily deploy HuggingFace models, switch between developing in a local environment and on SageMaker, and customize their inference using their pre- and post-processing scripts. Importantly, customers can now pass the trained model artifacts from the ModelTrainer class directly to the ModelBuilder class, enabling a seamless transition from training to inference on SageMaker. You can learn more about the ModelTrainer class here and the ModelBuilder enhancements here, and get started using the ModelTrainer and ModelBuilder sample notebooks.
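A rough sketch of what a ModelTrainer setup might look like follows. The module paths, class names, and parameter names are assumptions drawn from this announcement, so treat them as illustrative and consult the SageMaker Python SDK documentation for the authoritative API.

```python
# Rough sketch based on the announcement; module paths and parameter
# names are assumptions — check the SageMaker Python SDK docs.
from sagemaker.modules.train import ModelTrainer
from sagemaker.modules.configs import SourceCode, Compute

trainer = ModelTrainer(
    # Example training container image URI (placeholder)
    training_image="763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-training:2.2.0-gpu-py310",
    source_code=SourceCode(
        source_dir="./src",        # your local training code, moved as-is
        entry_script="train.py",   # custom entry point
    ),
    compute=Compute(
        instance_type="ml.g5.12xlarge",
        instance_count=2,          # distributed training across two nodes
    ),
)

trainer.train()  # launches the managed training job
```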
We are excited to announce two new capabilities in SageMaker Inference that significantly enhance the deployment and scaling of generative AI models: Container Caching and Fast Model Loader. These innovations address critical challenges in scaling large language models (LLMs) efficiently, enabling faster response times to traffic spikes and more cost-effective scaling. By reducing model loading times and accelerating autoscaling, these features allow customers to improve the responsiveness of their generative AI applications as demand fluctuates, particularly benefiting services with dynamic traffic patterns. Container Caching dramatically reduces the time required to scale generative AI models for inference by pre-caching container images. This eliminates the need to download them when scaling up, resulting in significant reduction in scaling time for generative AI model endpoints. Fast Model Loader streams model weights directly from Amazon S3 to the accelerator, loading models much faster compared to traditional methods. These capabilities allow customers to create more responsive auto-scaling policies, enabling SageMaker to add new instances or model copies quickly when defined thresholds are reached, thus maintaining optimal performance during traffic spikes while at the same time managing costs effectively. These new capabilities are accessible in all AWS regions where Amazon SageMaker Inference is available. To learn more see our documentation for detailed implementation guidance.
I describe my journey here given this was my first aws certification exam. https://www.youtube.com/watch?v=yR7R1v6B-tA&t=413s&ab_channel=DaliCodes submitted by /u/DaliCodes
I got a 742 but a pass is a pass, right? I studied for this exam for just two weeks because I was enrolled in a 7 week course that required us to take it. The last three weeks were dedicated to studying for it (the first four were focused on something else entirely). I procrastinated and waited until the last two weeks and only used tutorials dojo exams. I did all the practice exams and never scored above 60%. I studied about 4 hours a day but there were a few days where I didn’t study at all. I think it helped that I already had the Cloud practitioner cert so I had an okay understanding of what the services were. The exam did feel a little more difficult than TD mostly because it was much easier to eliminate answer choices within TD questions. All this to say, don’t do what I did. The test was hard and I was convinced that I failed until I got the results 3 hours later. submitted by /u/Ducky_doo1
Just passed my exam. Yay! But what the hell? I did Stephane Maarek’s Udemy course and studied like crazy for 2 days after finishing. I still felt like there were 15 questions with answers I couldn’t have known from the course, or even using ChatGPT and free practice exams. Like 3 really detailed questions about the pillars. A question about the Nitro hypervisor, which I’m pretty sure I had never even heard of before. I am lucky I passed, honestly. I was expecting the exam to be much easier. I must have just guessed correctly on enough of them to pass. I heard it was really easy, but the CCP is no joke. submitted by /u/reddithoggscripts
Hi, I am at re:Invent 2024, and I see lots of people with nice 'AWS Certified' bag tags. They seem to be well made: colourful, with a metal ring... Do you know where they come from? There used to be a store for AWS Certified merch on the AWS website, but it doesn't exist anymore. Thanks. submitted by /u/CyrilDevOps
Hi everyone, I haven't taken an exam since I took and passed the AWS Solutions Architect Professional about a year ago. I've been self-studying and decided to go for the AWS DevOps Professional. When I went to the portal to sign up, there's some kind of "Authorization" that's needed in order to register and sign up for the exam. Does anyone know what this is? How long does it take to get an authorization? The reason I'm concerned is that I'm going out of the country next Wednesday and planned to take this exam on Monday or Tuesday. I will be gone for a prolonged period of time and hoped to have this under my belt before leaving. :-/ I'm editing this to add that I already have the Cloud Practitioner, SAA, SAP, SysOps, and Developer. Thanks! submitted by /u/koffeebrown
I am scheduled to take the CCP exam on December 6 (which is tomorrow in my timezone). I will not be able to take it since I haven’t finished studying the course material I am watching. I wasn’t able to reschedule it days before, and now I can’t cancel or reschedule it (since it’s less than 24 hrs away). Has anyone had the same experience before? Please help me out. submitted by /u/hanicinq
I don’t work in AWS daily, or even weekly. I’m an engineer on the front-end Microsoft side for my company, and our entire backend cloud is AWS. I know nothing about AWS. I studied and passed the CCP a few years ago and had to recertify as it was expiring. I took Cloud Quest, and it was stupid easy. It took me 2.5 days on and off, and I was done. If you need to renew your CCP, do it; it’s easy. submitted by /u/Illnasty2
I have received a voucher for a foundational exam, but I’m undecided between the Cloud Practitioner and AI Practitioner certifications. Which one should I pursue first? For context: I am a Computer Science student majoring in Data Science. I plan to work primarily in the Data Science and Machine Learning sectors. However, the challenge is that my country has very few entry-level job opportunities in these fields. As a result, I might need to work for 1–2 years as a Software Engineer, specifically in backend development, before transitioning to my desired role. submitted by /u/Shuvouwu
So I’ve got free access to ACG/PluralSight. I’ve used it to pass my CCP and am currently using it for the SAA. I see a lot of negative opinions on it. Do you think that using their material plus practice exams, I’ll actually be able to pass the real exam? I’ve been prepping with ACG for a few months now… submitted by /u/Wessyvert
I’ve heard that a lot of people just memorize exam content but can’t do hands-on work in AWS, and that projects are the most important thing. Certifications also cover a broad range of topics at a surface level but don’t make you an expert in any particular one. Why don’t we just do projects and save ourselves the hassle of certs? submitted by /u/Holiday_Ad9679
I would really like to thank this community for the support: mainly answering my questions and calming me when in doubt. ☺️ Thank you to Stephane Maarek and Jon Bonso (TD) for the resources and practice exams. 🎉 As you can see in the screenshot, my scores were not high enough to boost my confidence, but I was able to pass my actual exam (shaking and sometimes thinking about failing WHILE answering it, no joke). To all the passers, congratulations to us! ☺️ To those who are still studying, good luck! 👍🏻 And don’t give up. ☺️ submitted by /u/Joi_trades
I took two of Stephane Maarek's practice tests for the SAA-C03 exam. I did not take Stephane's course but followed Mark Wilkin and Chad Smith's course on O'Reilly, so I don't think I'm biased by the content. However, I was only able to get about 64% on the two sample tests I took. Would the real test be harder or easier, in your experience? My test is coming up in 2 days. Should I postpone? submitted by /u/geeky_vin
Hello everyone, I am looking for some advice. Two months ago I got my AWS Cloud Practitioner cert, and I am currently studying Active Directory. I would like to be a sysadmin, but I've heard it would be best to go for the Solutions Architect first and then the SysOps Administrator. I'd really appreciate some good advice here. Thank you in advance. submitted by /u/Accurate_Evening_390
Today, we are announcing the support of GraphRAG, a new capability in Amazon Bedrock Knowledge Bases that enhances Generative AI applications by providing more comprehensive, relevant and explainable responses using RAG techniques combined with graph data. Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low latency, and custom Generative AI applications by incorporating contextual information from your company's data sources. Amazon Bedrock Knowledge Bases now offers a fully-managed GraphRAG capability with Amazon Neptune Analytics. Previously, customers faced challenges in conducting exhaustive, multi-step searches across disparate content. By identifying key entities across documents, GraphRAG delivers insights that leverage relationships within the data, enabling improved responses to end users. For example, users can ask a travel application for family-friendly beach destinations with direct flights and good seafood restaurants. Developers building Generative AI applications can enable GraphRAG in just a few clicks by specifying their data sources and choosing Amazon Neptune Analytics as their vector store when creating a knowledge base. This will automatically generate and store vector embeddings in Amazon Neptune Analytics, along with a graph representation of entities and their relationships. GraphRAG with Amazon Neptune is built right into Amazon Bedrock Knowledge Bases, offering an integrated experience with no additional setup or additional charges beyond the underlying services. GraphRAG is available in AWS Regions where Amazon Bedrock Knowledge Bases and Amazon Neptune Analytics are both available (see current list of supported regions). To learn more, visit the Amazon Bedrock User Guide.
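Once a GraphRAG-enabled knowledge base exists, querying it looks the same as querying any other knowledge base. Here is a minimal sketch using the RetrieveAndGenerate API; the knowledge base ID and model ARN are placeholders.

```python
# Sketch: querying a GraphRAG-enabled knowledge base with the standard
# RetrieveAndGenerate API. A Neptune Analytics-backed knowledge base is
# queried the same way as any other; IDs/ARNs below are placeholders.
import boto3

runtime = boto3.client("bedrock-agent-runtime")

response = runtime.retrieve_and_generate(
    input={"text": "Family-friendly beach destinations with direct flights"
                   " and good seafood restaurants?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "<your-kb-id>",  # placeholder
            "modelArn": "arn:aws:bedrock:us-west-2::foundation-model/"
                        "anthropic.claude-3-5-sonnet-20240620-v1:0",
        },
    },
)
print(response["output"]["text"])  # answer grounded in graph-linked entities
```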
Today, AWS Security Incident Response launches a new AWS Specialization with approved partners from the AWS Partner Network (APN). AWS customers today rely on a variety of third-party tools and services to support their internal security incident response capabilities. To better help both customers and partners, AWS introduced AWS Security Incident Response, a new service that helps customers prepare for, respond to, and recover from security events. Alongside approved AWS Partners, AWS Security Incident Response monitors, investigates, and escalates triaged security findings from Amazon GuardDuty and other threat detection tools through AWS Security Hub. Security Incident Response identifies and escalates only high-priority incidents. Partners and customers can also leverage collaboration and communication features to streamline coordinated incident response for faster reaction and recovery. For example, users of the service can create a predefined "Incident Response Team" that is automatically alerted whenever a security case is escalated. Alerted members, who include customers and partners, can then communicate and collaborate in a centralized format, with native feature integrations such as in-console messaging, video conferencing, and quick, secure data transfer.
Customers can access the service alongside AWS Partners that have been vetted and approved to use Security Incident Response. Learn more and explore AWS Security Incident Response Partners with specialized expertise to help you respond when it matters most.
Amazon Bedrock Marketplace provides generative AI developers access to over 100 publicly available and proprietary foundation models (FMs), in addition to Amazon Bedrock’s industry-leading, serverless models. Customers deploy these models onto SageMaker endpoints where they can select their desired number of instances and instance types. Amazon Bedrock Marketplace models can be accessed through Bedrock’s unified APIs, and models which are compatible with Bedrock’s Converse APIs can be used with Amazon Bedrock’s tools such as Agents, Knowledge Bases, and Guardrails.
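As a rough illustration of that unified access, the sketch below calls a Marketplace model through the Converse API; the model identifier here is a placeholder for the endpoint ARN that Bedrock associates with the deployed SageMaker endpoint.

```python
# Sketch: calling a Bedrock Marketplace model via the unified Converse API.
# The modelId is a placeholder for the endpoint ARN of the deployed
# marketplace model; Converse-compatible models work with Agents,
# Knowledge Bases, and Guardrails as described above.
import boto3

bedrock = boto3.client("bedrock-runtime")

response = bedrock.converse(
    modelId="<marketplace-model-endpoint-arn>",  # placeholder
    messages=[
        {"role": "user", "content": [{"text": "Summarize RAG in one sentence."}]}
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```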
Amazon Bedrock Marketplace empowers generative AI developers to rapidly test and incorporate a diverse array of emerging, popular, and leading FMs of various types and sizes. Customers can choose from a variety of models tailored to their unique requirements, which can help accelerate the time-to-market, improve the accuracy, or reduce the cost of their generative AI workflows. For example, customers can incorporate models highly-specialized for finance or healthcare, or language translation models for Asian languages, all from a single place.
Amazon Bedrock Marketplace is supported in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and South America (São Paulo). For more information, please refer to Amazon Bedrock Marketplace's announcement blog or documentation.
Today Amazon Web Services, Inc. (AWS) announced the general availability of Amazon SageMaker partner AI apps, a new capability that enables customers to easily discover, deploy, and use best-in-class machine learning (ML) and generative AI (GenAI) development applications from leading app providers, privately and securely, all without leaving Amazon SageMaker AI, so they can develop performant AI models faster. Until today, integrating purpose-built GenAI and ML development applications that provide specialized capabilities for a variety of model development tasks required considerable effort. Beyond investing time and effort in due diligence to evaluate existing offerings, customers had to perform undifferentiated heavy lifting in deploying, managing, upgrading, and scaling these applications. Furthermore, to adhere to rigorous security and compliance protocols, organizations need their data to stay within their security boundaries, without moving it elsewhere, for example, to a Software as a Service (SaaS) application. Finally, the resulting developer experience is often fragmented, with developers switching back and forth between multiple disjointed interfaces. With SageMaker partner AI apps, you can quickly subscribe to a partner solution and seamlessly integrate the app with your SageMaker development environment. SageMaker partner AI apps are fully managed and run privately and securely in your SageMaker environment, reducing the risk of data and model exfiltration. At launch, you will be able to boost your team’s productivity and reduce time to market by enabling: Comet, to track, visualize, and manage experiments for AI model development; Deepchecks, to evaluate quality and compliance for AI models; Fiddler, to validate, monitor, analyze, and improve AI models in production; and Lakera, to protect AI applications from security threats such as prompt attacks, data loss, and inappropriate content. SageMaker partner AI apps are available in all currently supported Regions except AWS GovCloud (US). To learn more, please visit the SageMaker partner AI apps developer guide.
Today, AWS Marketplace announces Buy with AWS, a new feature that helps accelerate discovery and procurement on AWS Partners’ websites for products available in AWS Marketplace. Partners that sell or resell products in AWS Marketplace can now offer new experiences on their websites that are powered by AWS Marketplace. Customers can more quickly identify solutions from Partners that are available in AWS Marketplace and use their AWS accounts to access a streamlined purchasing experience. Customers browsing on Partner websites can explore products that are “Available in AWS Marketplace” and request demos, access free trials, and request custom pricing. Customers can conveniently and securely make purchases by clicking the Buy with AWS button and completing transactions by logging in to their AWS accounts. All purchases made through Buy with AWS are transacted and managed within AWS Marketplace, allowing customers to take advantage of benefits such as consolidated AWS billing, centralized subscriptions management, and access to cost optimization tools. For AWS Partners, Buy with AWS provides a new way to engage website visitors and accelerate the path-to-purchase for customers. By adding Buy with AWS buttons to Partner websites, Partners can give website visitors the ability to subscribe to free trials, make purchases, and access custom pricing using their AWS accounts. Partners can complete an optional integration and build new experiences on websites that allow customers to search curated product listings and filter products from the AWS Marketplace catalog. Learn more about making purchases using Buy with AWS. Learn how AWS Partners can start selling using Buy with AWS.
Today, AWS Partner Central announces the preview of Partner Connections, a new feature allowing AWS Partners to discover and connect with other Partners for collaboration on shared customer opportunities. With Partner Connections, Partners can co-sell joint solutions, accelerate deal progression, and expand their reach by teaming with other AWS Partners. At the core of Partner Connections are two key capabilities: connections discovery and multi-partner opportunities. The connections discovery feature uses AI-powered recommendations to streamline Partner matchmaking, making it easier for Partners to find suitable collaborators and add them to their network. With multi-partner opportunities, Partners can work together seamlessly to create and manage joint customer opportunities in APN Customer Engagements (ACE). This integrated approach allows Partners to work seamlessly with AWS and other Partners on shared opportunities, reducing the operational overhead of managing multi-partner opportunities. Partners can also create, update, and share multi-partner opportunities using the Partner Central API for Selling. This allows Partners to collaborate with other Partners and AWS on joint sales opportunities from their own customer relationship management (CRM) system. Partner Connections (Preview) is available to all eligible AWS Partners who have signed the ACE Terms and Conditions and have linked their AWS account to their Partner Central account. To get started, log in to AWS Partner Central and review the ACE user guide for more information. To see how Partner Connections works, read the blog.
Amazon Bedrock Knowledge Bases now enables developers to build generative AI applications that can analyze and leverage insights from both textual and visual data, such as images, charts, diagrams, and tables. Bedrock Knowledge Bases offers an end-to-end managed Retrieval-Augmented Generation (RAG) workflow that enables customers to create highly accurate, low-latency, secure, and custom generative AI applications by incorporating contextual information from their own data sources. With this launch, Bedrock Knowledge Bases extracts content from both text and visual data, generates semantic embeddings using the selected embedding model, and stores them in the chosen vector store. This enables users to retrieve and generate answers to questions derived not only from text but also from visual data. Additionally, retrieved results now include source attribution for visual data, enhancing transparency and building trust in the generated outputs. To get started, customers can choose between Amazon Bedrock Data Automation, a managed service that automatically extracts content from multimodal data (currently in preview), and FMs such as Claude 3.5 Sonnet or Claude 3 Haiku, with the flexibility to customize the default prompt. Multimodal data processing with Bedrock Data Automation is available in the US West (Oregon) Region in preview. FM-based parsing is supported in all Regions where Bedrock Knowledge Bases is available. For details on pricing for using Bedrock Data Automation or an FM as a parser, please refer to the pricing page. To learn more, visit the Amazon Bedrock Knowledge Bases product documentation.
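For FM-based parsing, the choice is expressed in the data source's parsing configuration. The sketch below shows roughly what that looks like with the bedrock-agent API; the knowledge base ID, bucket, and parsing prompt are placeholders, and the exact field names should be checked against the current API reference.

```python
# Sketch: enabling FM-based parsing of visual content when creating a
# knowledge base data source. Field names follow the bedrock-agent
# parsingConfiguration as documented at launch; IDs, bucket, and the
# parsing prompt are placeholders.
import boto3

agent = boto3.client("bedrock-agent")

agent.create_data_source(
    knowledgeBaseId="<your-kb-id>",  # placeholder
    name="docs-with-images",
    dataSourceConfiguration={
        "type": "S3",
        "s3Configuration": {"bucketArn": "arn:aws:s3:::my-docs-bucket"},
    },
    vectorIngestionConfiguration={
        "parsingConfiguration": {
            "parsingStrategy": "BEDROCK_FOUNDATION_MODEL",
            "bedrockFoundationModelConfiguration": {
                "modelArn": "arn:aws:bedrock:us-west-2::foundation-model/"
                            "anthropic.claude-3-haiku-20240307-v1:0",
                # Optional: override the default prompt used to describe
                # images, charts, and tables during ingestion.
                "parsingPrompt": {
                    "parsingPromptText": "Describe every image, chart, and"
                                         " table in detail."
                },
            },
        }
    },
)
```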
Organizations are increasingly using applications with multimodal data to drive business value, improve decision-making, and enhance customer experiences. Amazon Bedrock Guardrails now supports multimodal toxicity detection for image content, enabling organizations to apply content filters to images. This new capability, now in public preview, removes the heavy lifting otherwise required of customers to build their own safeguards for image data, or to spend cycles on manual evaluation that can be error-prone and tedious. Bedrock Guardrails helps customers build and scale their generative AI applications responsibly for a wide range of use cases across industry verticals, including healthcare, manufacturing, financial services, media and advertising, transportation, marketing, education, and much more. With this new capability, Amazon Bedrock Guardrails offers a comprehensive solution, enabling the detection and filtration of undesirable and potentially harmful image content while retaining safe and relevant visuals. Customers can now use content filters for both text and image data in a single solution, with configurable thresholds, to detect and filter undesirable content across categories such as hate, insults, sexual content, and violence, and build generative AI applications based on their responsible AI policies. This new capability in preview is available with all foundation models (FMs) on Amazon Bedrock that support images, including fine-tuned FMs, in 11 AWS Regions globally: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and AWS GovCloud (US-West). To learn more, visit the Amazon Bedrock Guardrails product page, read the News blog, and see the documentation.
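Configuration-wise, image filtering rides on the same guardrail content filters used for text. The sketch below shows roughly how a guardrail might enable filters for both modalities; treat the inputModalities/outputModalities fields as assumptions drawn from the preview announcement and verify them against the current API reference.

```python
# Sketch: a guardrail whose content filters apply to both text and images.
# The inputModalities/outputModalities fields reflect the multimodal
# preview described above and are assumptions; check the current
# CreateGuardrail API reference for the exact shape.
import boto3

bedrock = boto3.client("bedrock")

bedrock.create_guardrail(
    name="image-safety-guardrail",
    blockedInputMessaging="Sorry, that input is not allowed.",
    blockedOutputsMessaging="Sorry, I can't return that content.",
    contentPolicyConfig={
        "filtersConfig": [
            {
                "type": "VIOLENCE",
                "inputStrength": "HIGH",
                "outputStrength": "HIGH",
                "inputModalities": ["TEXT", "IMAGE"],   # assumption
                "outputModalities": ["TEXT", "IMAGE"],  # assumption
            },
            {
                "type": "HATE",
                "inputStrength": "HIGH",
                "outputStrength": "HIGH",
                "inputModalities": ["TEXT", "IMAGE"],
                "outputModalities": ["TEXT", "IMAGE"],
            },
        ]
    },
)
```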
Amazon Kendra is an AI-powered search service enabling organizations to build intelligent search experiences and retrieval-augmented generation (RAG) systems to power generative AI applications. Starting today, AWS customers can use a new index, the GenAI Index, for RAG and intelligent search. With the Kendra GenAI Index, customers get high out-of-the-box search accuracy powered by the latest information retrieval technologies and semantic models. The Kendra GenAI Index supports mobility across AWS generative AI services like Amazon Bedrock Knowledge Bases and Amazon Q Business, giving customers the flexibility to use their indexed content across different use cases. It is available as a managed retriever in Bedrock Knowledge Bases, enabling customers to create a knowledge base powered by the Kendra GenAI Index. Customers can also integrate such knowledge bases with other Bedrock services like Guardrails, Prompt Flows, and Agents to build advanced generative AI applications. The GenAI Index supports connectors for 43 different data sources, enabling customers to easily ingest content from a variety of sources. The Kendra GenAI Index is available in the US East (N. Virginia) and US West (Oregon) Regions. To learn more, see Kendra GenAI Index in the Amazon Kendra Developer Guide. For pricing, please refer to the Kendra pricing page.
Today, AWS announces the availability of new AWS AI Service Cards for Amazon Nova Reel; Amazon Nova Canvas; Amazon Nova Micro, Lite, and Pro; Amazon Titan Image Generator; and Amazon Titan Text Embeddings. AI Service Cards are a resource designed to enhance transparency by providing customers with a single place to find information on the intended use cases and limitations, responsible AI design choices, and performance optimization best practices for AWS AI services. AWS AI Service Cards are part of our comprehensive development process to build services in a responsible way. They focus on key aspects of AI development and deployment, including fairness, explainability, privacy and security, safety, controllability, veracity and robustness, governance, and transparency. By offering these cards, AWS aims to empower customers with the knowledge they need to make informed decisions about using AI services in their applications and workflows. Our AI Service Cards will continue to evolve and expand as we engage with our customers and the broader community to gather feedback and continually iterate on our approach. For more information, see the AI Service Cards for:
Amazon Nova Reel
Amazon Nova Canvas
Amazon Nova Micro, Lite and Pro
Amazon Titan Image Generator
Amazon Titan Text Embeddings
To learn more about AI Service Cards, as well as our broader approach to building AI in a responsible way, see our Responsible AI webpage.
Today, we are announcing the preview launch of Amazon Bedrock Data Automation (BDA), a new feature of Amazon Bedrock that enables developers to automate the generation of valuable insights from unstructured multimodal content such as documents, images, video, and audio to build GenAI-based applications. These insights include video summaries of key moments, detection of inappropriate image content, automated analysis of complex documents, and much more. Developers can also customize BDA’s output to generate specific insights in the consistent formats required by their systems and applications. By leveraging BDA, developers can reduce development time and effort, making it easier to build intelligent document processing, media analysis, and other multimodal data-centric automation solutions. BDA offers high accuracy at lower cost than alternative solutions, along with features such as visual grounding with confidence scores for explainability and built-in hallucination mitigation, ensuring accurate insights from unstructured multimodal content. Developers can get started with BDA on the Bedrock console, where they can configure and customize output using their sample data. They can then integrate BDA’s unified multimodal inference API into their applications to process unstructured content at scale with high accuracy and consistency. BDA is also integrated with Bedrock Knowledge Bases, making it easier for developers to generate meaningful information from their unstructured multimodal content to provide more relevant responses for retrieval-augmented generation (RAG). Bedrock Data Automation is available in preview in the US West (Oregon) AWS Region. To learn more, visit the Bedrock Data Automation page.
Today, AWS announces that Amazon Bedrock now supports prompt caching. Prompt caching is a new capability that can reduce costs by up to 90% and latency by up to 85% for supported models by caching frequently used prompts across multiple API calls. It allows you to cache repetitive inputs and avoid reprocessing context, such as long system prompts and common examples that help guide the model’s response. When the cache is used, fewer computing resources are needed to generate output; as a result, not only can we process your request faster, but we can also pass along the cost savings from using fewer resources. Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI capabilities built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while providing tools to build customer trust and data governance. Prompt caching is now available on Claude 3.5 Haiku and Claude 3.5 Sonnet v2 in US West (Oregon) and US East (N. Virginia) via cross-region inference, and on the Nova Micro, Nova Lite, and Nova Pro models in US East (N. Virginia). At launch, only a select number of customers will have access to this feature. To learn more about participating in the preview, see this page. To learn more about prompt caching, see our documentation and blog.
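In practice, a cache checkpoint is marked inside the request itself. The sketch below shows one plausible Converse call that caches a long system prompt across calls; the model ID is illustrative, and availability is limited as noted above.

```python
# Sketch: marking a cache point in a Converse request so the long, shared
# system prompt is cached across calls. The cachePoint block follows the
# launch documentation; the model ID and prompt text are illustrative.
import boto3

bedrock = boto3.client("bedrock-runtime")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",
    system=[
        {"text": "<several thousand tokens of shared instructions"
                 " and common examples>"},
        {"cachePoint": {"type": "default"}},  # everything above is cached
    ],
    messages=[
        {"role": "user", "content": [{"text": "First question about my data."}]}
    ],
)
print(response["output"]["message"]["content"][0]["text"])
# Subsequent calls that reuse the same system blocks hit the cache,
# cutting both latency and input-token cost.
```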
A new scenario analysis capability of Amazon Q in QuickSight is now available in preview. This new capability provides an AI-assisted data analysis experience that helps you make better decisions, faster. Amazon Q in QuickSight simplifies in-depth analysis with step-by-step guidance, saving hours of manual data manipulation and unlocking data-driven decision-making across your organization.
Amazon Q in QuickSight helps business users perform complex scenario analysis up to 10x faster than spreadsheets. You can ask a question or state your goal in natural language, and Amazon Q in QuickSight guides you through every step of advanced data analysis—suggesting analytical approaches, automatically analyzing data, surfacing relevant insights, and summarizing findings with suggested actions. This agentic approach breaks down data analysis into a series of easy-to-understand, executable steps, helping you find solutions to complex problems without specialized skills or tedious, error-prone data manipulation in spreadsheets. Working on an expansive analysis canvas, you can intuitively iterate your way to solutions by directly interacting with data, refining analysis steps, or exploring multiple analysis paths side by side. This scenario analysis capability is accessible from any Amazon QuickSight dashboard, so you can move seamlessly from visualizing data to modeling solutions. With Amazon Q in QuickSight, you can easily modify, extend, and reuse previous analyses, helping you quickly adapt to changing business needs.
Amazon Q in QuickSight Pro users can use this new capability in preview in the following AWS regions: US East (N. Virginia) and US West (Oregon). To learn more, visit the Amazon Q in QuickSight documentation and read the AWS News Blog.
Amazon Bedrock Intelligent Prompt Routing routes prompts to different foundation models within a model family, helping you optimize for response quality and cost. Using advanced prompt matching and model understanding techniques, Intelligent Prompt Routing predicts the performance of each model for each request and dynamically routes each request to the model it predicts is most likely to give the desired response at the lowest cost. Customers can choose from two prompt routers in preview, which route requests either between Claude 3.5 Sonnet and Claude Haiku, or between Llama 3.1 8B and Llama 3.1 70B. With Intelligent Prompt Routing, Amazon Bedrock can help customers build cost-effective generative AI applications with a combination of foundation models, getting better performance at lower cost than a single foundation model. During the preview, customers are charged regular on-demand pricing for the models that requests are routed to. Learn more in our documentation and blog.
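Using a router is deliberately uneventful: you pass the router's ARN where a model ID would normally go, and Bedrock picks the underlying model per request. A minimal sketch follows; the router ARN is a placeholder.

```python
# Sketch: invoking a prompt router by passing its ARN as the modelId in a
# Converse call. The router ARN is a placeholder; per request, Bedrock
# routes to the model it predicts will answer well at the lowest cost
# (e.g., Claude 3.5 Sonnet vs. Claude Haiku).
import boto3

bedrock = boto3.client("bedrock-runtime")

response = bedrock.converse(
    modelId="arn:aws:bedrock:us-east-1:<account-id>:"
            "default-prompt-router/anthropic.claude:1",  # placeholder ARN
    messages=[
        {"role": "user", "content": [{"text": "What is 17 * 24?"}]}
    ],
)
print(response["output"]["message"]["content"][0]["text"])
```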
Amazon SageMaker HyperPod now provides centralized governance across all generative AI development tasks, such as training and inference. You have full visibility into and control over compute resource allocation, ensuring the most critical tasks are prioritized and maximizing compute resource utilization, which reduces model development costs by up to 40%. With HyperPod task governance, administrators can more easily define priorities for different tasks and set limits on how many compute resources each team can use. At any given time, administrators can also monitor and audit the tasks that are running or waiting for compute resources through a visual dashboard. When data scientists create their tasks, HyperPod automatically runs them, adhering to the defined compute resource limits and priorities. For example, when training for a high-priority model needs to be completed as soon as possible but all compute resources are in use, HyperPod frees up resources from lower-priority tasks to support the training: it pauses the low-priority task, saves the checkpoint, and reallocates the freed-up compute resources. The preempted low-priority task resumes from the last saved checkpoint once resources become available again. And when a team is not fully using the resource limits the administrator has set up, HyperPod uses those idle resources to accelerate another team’s tasks. Additionally, HyperPod is now integrated with Amazon SageMaker Studio, bringing task governance and other HyperPod capabilities into the Studio environment. Data scientists can now seamlessly interact with HyperPod clusters directly from Studio, allowing them to develop, submit, and monitor machine learning (ML) jobs on powerful accelerator-backed clusters. Task governance for HyperPod is available in all AWS Regions where HyperPod is available: US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), and South America (São Paulo). To learn more, visit the SageMaker HyperPod webpage, the AWS News Blog, and the SageMaker AI documentation.
Amazon SageMaker HyperPod announces flexible training plans, a new capability that allows you to train generative AI models within your timelines and budgets. Gain predictable model training timelines and run training workloads within your budget requirements, while continuing to benefit from features of SageMaker HyperPod such as resiliency, performance-optimized distributed training, and enhanced observability and monitoring.
In a few quick steps, you can specify your preferred compute instances, desired amount of compute resources, duration of your workload, and preferred start date for your generative AI model training. SageMaker then helps you create the most cost-efficient training plans, reducing time to train your model by weeks. Once you create and purchase your training plans, SageMaker automatically provisions the infrastructure and runs the training workloads on these compute resources without requiring any manual intervention. SageMaker also automatically takes care of pausing and resuming training between gaps in compute availability, as the plan switches from one capacity block to another. If you wish to remove all the heavy lifting of infrastructure management, you can also create and run training plans using SageMaker fully managed training jobs.
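At the API level, this flow maps to searching for an offering and then purchasing it, as sketched below. The operation and parameter names follow the launch materials, so treat them as assumptions and check the current SageMaker API reference; instance types, counts, and dates are placeholders.

```python
# Sketch: finding and purchasing a flexible training plan with boto3.
# Operation and parameter names follow the launch materials and are
# assumptions; verify against the current SageMaker API reference.
from datetime import datetime, timedelta

import boto3

sm = boto3.client("sagemaker")

# Describe the capacity you need and when you need it.
offerings = sm.search_training_plan_offerings(
    InstanceType="ml.p5.48xlarge",          # placeholder
    InstanceCount=8,                        # placeholder
    StartTimeAfter=datetime.utcnow(),
    EndTimeBefore=datetime.utcnow() + timedelta(days=21),
    DurationHours=96,
    TargetResources=["training-job"],       # or "hyperpod-cluster"
)

# Purchase the most cost-efficient offering SageMaker proposes.
best = offerings["TrainingPlanOfferings"][0]
sm.create_training_plan(
    TrainingPlanName="llama-finetune-plan",
    TrainingPlanOfferingId=best["TrainingPlanOfferingId"],
)
```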
SageMaker HyperPod flexible training plans are available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) AWS Regions. To learn more, visit: SageMaker HyperPod, documentation, and the announcement blog.
Amazon SageMaker HyperPod recipes help you get started training and fine-tuning publicly available foundation models (FMs) in minutes with state-of-the-art performance. SageMaker HyperPod helps customers scale generative AI model development across hundreds or thousands of AI accelerators with built-in resiliency and performance optimizations, decreasing model training time by up to 40%. However, as FM sizes continue to grow to hundreds of billions of parameters, the process of customizing these models can take weeks of extensive experimenting and debugging. In addition, performing training optimizations to unlock better price performance is often unfeasible for customers, as these optimizations often require deep machine learning expertise and can cause further delays in time to market.
With SageMaker HyperPod recipes, customers of all skill sets can benefit from state-of-the-art performance while quickly getting started training and fine-tuning popular publicly available FMs, including Llama 3.1 405B, Mixtral 8x22B, and Mistral 7B. SageMaker HyperPod recipes include a training stack tested by AWS, removing weeks of tedious work experimenting with different model configurations. You can also quickly switch between GPU-based and AWS Trainium-based instances with a one-line recipe change and enable automated model checkpointing for improved training resiliency. Finally, you can run workloads in production on the SageMaker AI training service of your choice.
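As a rough sketch of how a recipe is launched through a SageMaker training job, the snippet below passes a recipe name to the PyTorch estimator. The training_recipe and recipe_overrides arguments and the recipe path follow the launch blog and should be treated as assumptions; exact recipe names come from the recipes repository.

```python
# Sketch: launching a HyperPod recipe as a SageMaker training job.
# training_recipe/recipe_overrides follow the launch blog and are
# assumptions; the recipe path, role, and S3 URIs are placeholders.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    role="<your-sagemaker-execution-role>",              # placeholder
    instance_type="ml.p5.48xlarge",                      # placeholder
    instance_count=4,
    # Hypothetical recipe path; see the recipes repo for real names.
    training_recipe="fine-tuning/llama/hf_llama3_8b_seq8k_gpu_fine_tuning",
    recipe_overrides={"run": {"name": "llama3-8b-finetune"}},
)

estimator.fit(inputs={"train": "s3://my-bucket/train/"})  # placeholder S3 URI
```

A one-line change of the recipe name (or recipe_overrides) is what lets you switch between GPU-based and Trainium-based configurations, as described above.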
SageMaker HyperPod recipes are available in all AWS Regions where SageMaker HyperPod and SageMaker training jobs are supported. To learn more and get started, visit the SageMaker HyperPod page and blog.