

What are the corresponding Azure and Google Cloud services for each of the AWS services?
What are the unique distinctions and similarities between AWS, Azure, and Google Cloud services? For each AWS service, what is the equivalent Azure and Google Cloud service? For each Azure service, what is the corresponding Google service? This side-by-side comparison walks through AWS, Azure, and Google Cloud, category by category.


1
Category: Marketplace
Easy-to-deploy and automatically configured third-party applications, including single virtual machine or multiple virtual machine solutions.
References:
[AWS]:AWS Marketplace
[Azure]:Azure Marketplace
[Google]:Google Cloud Marketplace
Tags: #AWSMarketplace, #AzureMarketPlace, #GoogleMarketplace
Differences: All three are digital catalogs with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on the respective cloud platform.
2
Category: AI and machine learning
A cloud service to train, deploy, automate, and manage machine learning models.
References:
[AWS]:AWS SageMaker(build, train and deploy machine learning models), AWS DeepComposer (ML enabled musical keyboard), Amazon Fraud Detector (Detect more online fraud faster), Amazon CodeGuru (Automate code reviews and identify expensive lines of code), Contact Lens for Amazon Connect (Contact center analytics powered by ML), Amazon Kendra (Reinvent enterprise search with ML), Amazon Augmented AI (Easily implement human review of ML predictions), Amazon SageMaker Studio (The first visual IDE for machine learning), Amazon SageMaker Notebooks (Quickly start and share ML notebooks), Amazon SageMaker Experiments (Organize, track, and evaluate ML experiments), Amazon SageMaker Debugger (Analyze and debug ML models in real time), Amazon SageMaker Autopilot (Automatically create high quality ML models), Amazon SageMaker Model Monitor (Continuously monitor ML models)
[Azure]:Azure Machine Learning
[Google]:Google Cloud TensorFlow
Tags: #AI, #CloudAI, #SageMaker, #AzureMachineLearning, #TensorFlow
Differences: According to the StackShare community, Azure Machine Learning has broader approval, being mentioned in 12 company stacks and 8 developer stacks, compared to Amazon Machine Learning, which is listed in 8 company stacks and 9 developer stacks.
3
Category: AI and machine learning
Build and connect intelligent bots that interact with your users using text/SMS, Skype, Teams, Slack, Office 365 mail, Twitter, and other popular services.
References:
[AWS]:Alexa Skills Kit (enables a developer to build skills, also called conversational applications, on the Amazon Alexa artificial intelligence assistant.)
[Azure]:Microsoft Bot Framework (building enterprise-grade conversational AI experiences.)
[Google]:Google Assistant Actions (developer platform that lets you create software to extend the functionality of the Google Assistant, Google’s virtual personal assistant)
Tags: #AlexaSkillsKit, #MicrosoftBotFramework, #GoogleAssistant
Differences: One major advantage Google has over Alexa is that Google Assistant is available on almost all Android devices.
4
Category: AI and machine learning
Description:API capable of converting speech to text, understanding intent, and converting text back to speech for natural responsiveness.
References:
[AWS]:Amazon Lex (building conversational interfaces into any application using voice and text.)
[Azure]:Azure Speech Services(unification of speech-to-text, text-to-speech, and speech translation into a single Azure subscription)
[Google]:Google Api.ai, AI Hub (hosted repository of plug-and-play AI components), AI building blocks (for developers to add sight, language, conversation, and structured data to their applications), AI Platform (code-based data science development environment that lets ML developers and data scientists quickly take projects from ideation to deployment), Dialogflow (Google-owned developer of human–computer interaction technologies based on natural language conversations), TensorFlow (open source machine learning platform)
Tags: #AmazonLex, #CogintiveServices, #AzureSpeech, #Api.ai, #DialogFlow, #Tensorflow
Differences: Api.ai provides a platform that is easy to learn and comprehensive enough to develop conversational actions. It is a good example of a simple approach to the complex problem of man-to-machine communication, combining natural language processing with machine learning. Api.ai supports context-based conversations, which reduces the overhead of handling user context in session parameters; in Lex, this has to be handled in the session. Api.ai can also be used for both voice- and text-based conversations (Assistant actions can be easily created with it).
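To make the Lex side concrete, here is a minimal sketch of sending one text utterance to a published Lex bot with boto3 (the bot name, alias, and user ID are hypothetical placeholders):

```python
import boto3

# Lex v1 runtime client; assumes AWS credentials are already configured.
lex = boto3.client("lex-runtime")

# Send one utterance to a published bot (names here are hypothetical).
response = lex.post_text(
    botName="OrderFlowers",
    botAlias="prod",
    userId="demo-user-1",
    inputText="I would like to order flowers",
)

print(response["intentName"], response["dialogState"])
print(response["message"])
```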
5
Category: AI and machine learning
Description:Computer Vision: Extract information from images to categorize and process visual data.
References:
[AWS]:Amazon Rekognition (based on the same proven, highly scalable, deep learning technology developed by Amazon’s computer vision scientists to analyze billions of images and videos daily. It requires no machine learning expertise to use.)
[Azure]:Cognitive Services(bring AI within reach of every developer—without requiring machine-learning expertise.)
[Google]:Google Vision (offers powerful pre-trained machine learning models through REST and RPC APIs.)
Tags: #AmazonRekognition, #GoogleVision, #CognitiveServices
Differences: Google Cloud Vision and Amazon Rekognition offer a broad spectrum of solutions, some of which are comparable in terms of functional details, quality, performance, and cost. For now, only Google Cloud Vision supports batch processing; videos are not natively supported by either Google Cloud Vision or Amazon Rekognition. The object detection functionality of the two services is almost identical, both syntactically and semantically.
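As a sketch of how little code a pre-trained vision API needs, here is label detection with Rekognition via boto3 (bucket and object key are hypothetical):

```python
import boto3

rekognition = boto3.client("rekognition")

# Label detection on an image already stored in S3
# (bucket and key are hypothetical).
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "photos/dog.jpg"}},
    MaxLabels=10,
    MinConfidence=75,
)

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```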
6
Category: Big data and analytics: Data warehouse
Description:Cloud-based Enterprise Data Warehouse (EDW) that uses Massively Parallel Processing (MPP) to quickly run complex queries across petabytes of data.
References:
[AWS]:AWS Redshift (scalable data warehouse that makes it simple and cost-effective to analyze all your data across your data warehouse and data lake), Amazon Redshift Data Lake Export (Save query results in an open format), Amazon Redshift Federated Query (Run queries on live transactional data), Amazon Redshift RA3 (Optimize costs with up to 3x better performance), AQUA: Advanced Query Accelerator for Amazon Redshift (Power analytics with a new hardware-accelerated cache), UltraWarm for Amazon Elasticsearch Service (Store logs at ~1/10th the cost of existing storage tiers)
[Azure]:Azure Synapse formerly SQL Data Warehouse (limitless analytics service that brings together enterprise data warehousing and Big Data analytics.)
[Google]:BigQuery (RESTful web service that enables interactive analysis of massive datasets working in conjunction with Google Storage. )
Tags:#AWSRedshift, #GoogleBigQuery, #AzureSynapseAnalytics
Differences: Loading data, managing resources (and hence pricing), and ecosystem. Ecosystem is where Redshift is clearly ahead of BigQuery: while BigQuery is an affordable, performant alternative to Redshift, its ecosystem is still regarded as up and coming.
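For a feel for BigQuery's serverless query model, here is a minimal sketch using the google-cloud-bigquery client against a public dataset (assumes application default credentials are configured):

```python
from google.cloud import bigquery

# Assumes GOOGLE_APPLICATION_CREDENTIALS or gcloud default credentials.
client = bigquery.Client()

# Standard SQL against a BigQuery public dataset.
sql = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""

for row in client.query(sql).result():
    print(row.name, row.total)
```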
7
Category: Big data and analytics: Data warehouse
Description: Apache Spark-based analytics platform. Managed Hadoop service. Data orchestration, ETL, Analytics and visualization
References:
[AWS]:EMR, Data Pipeline, Kinesis Stream, Kinesis Firehose, Glue, QuickSight, Athena, CloudSearch
[Azure]:Azure Databricks, Data Catalog, Cortana Intelligence, HDInsight, Power BI, Azure Data Factory, Azure Search, Azure Data Lake Analytics, Stream Analytics, Azure Machine Learning
[Google]:Cloud DataProc, Machine Learning, Cloud Datalab
Tags:#EMR, #DataPipeline, #Kinesis, #Cortana, #AzureDataFactory, #AzureDataLakeAnalytics, #CloudDataProc, #MachineLearning, #CloudDatalab
Differences: All three providers offer similar building blocks; data processing, data orchestration, streaming analytics, machine learning and visualisations. AWS certainly has all the bases covered with a solid set of products that will meet most needs. Azure offers a comprehensive and impressive suite of managed analytical products. They support open source big data solutions alongside new serverless analytical products such as Data Lake. Google provide their own twist to cloud analytics with their range of services. With Dataproc and Dataflow, Google have a strong core to their proposition. Tensorflow has been getting a lot of attention recently and there will be many who will be keen to see Machine Learning come out of preview.
8
Category: Virtual servers
Description:Virtual servers allow users to deploy, manage, and maintain OS and server software. Instance types provide combinations of CPU/RAM. Users pay for what they use with the flexibility to change sizes.
Batch: Run large-scale parallel and high-performance computing applications efficiently in the cloud.
References:
[AWS]:Elastic Compute Cloud (EC2), Amazon Braket (Explore and experiment with quantum computing), Amazon EC2 M6g Instances (Achieve up to 40% better price performance), Amazon EC2 Inf1 Instances (Deliver cost-effective ML inference), AWS Graviton2 Processors (Optimize price performance for cloud workloads), AWS Batch, AWS Auto Scaling, VMware Cloud on AWS, AWS Local Zones (Run low latency applications at the edge), AWS Wavelength (Deliver ultra-low latency applications for 5G devices), AWS Nitro Enclaves (Further protect highly sensitive data), AWS Outposts (Run AWS infrastructure and services on-premises)
[Azure]:Azure Virtual Machines, Azure Batch, Virtual Machine Scale Sets, Azure VMware by CloudSimple
[Google]:Compute Engine, Preemptible Virtual Machines, Managed instance groups (MIGs), Google Cloud VMware Solution by CloudSimple
Tags: #AWSEC2, #AWSBatch, #AWSAutoscaling, #AzureVirtualMachine, #AzureBatch, #VirtualMachineScaleSets, #AzureVMWare, #ComputeEngine, #MIGS, #VMWare
Differences: There is very little to choose between the three providers when it comes to virtual servers. Amazon has some impressive high-end kit; on the face of it, this sounds like it would make AWS a clear winner. However, if your only option is to choose the biggest box available, you will need very deep pockets, and your money may be better spent re-architecting your apps for horizontal scale. Azure remains very strong in the PaaS space and now has an IaaS offering that can genuinely compete with AWS. Google offers a simple and very capable set of services that are easy to understand; however, with availability in only 5 regions, it does not have the coverage of the other players.
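As a minimal sketch of the programmatic model shared by EC2, Azure VMs, and Compute Engine, here is launching a single EC2 instance with boto3 (the AMI ID is a hypothetical placeholder; look up a current image for your region):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one small instance; the AMI ID below is hypothetical.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo"}],
    }],
)

print(response["Instances"][0]["InstanceId"])
```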
9
Category: Containers and container orchestrators
Description: A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
Container orchestration is all about managing the lifecycles of containers, especially in large, dynamic environments.
References:
[AWS]:EC2 Container Service (ECS), Fargate (Run containers without managing servers or clusters), EC2 Container Registry (managed AWS Docker registry service that is secure, scalable, and reliable), Elastic Container Service for Kubernetes (EKS: runs the Kubernetes management infrastructure across multiple AWS Availability Zones), App Mesh (application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure)
[Azure]:Azure Container Instances, Azure Container Registry, Azure Kubernetes Service (AKS), Service Fabric Mesh
[Google]:Google Container Engine, Container Registry, Kubernetes Engine
Tags:#ECS, #Fargate, #EKS, #AppMesh, #ContainerEngine, #ContainerRegistry, #AKS
Differences: Google Container Engine, AWS container services, and Azure Container Instances can all be used to run Docker containers. Google offers a simple and very capable set of services that are easy to understand; however, with availability in only 5 regions, it does not have the coverage of the other players.
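To show what "containers without servers" looks like in practice, here is a hedged sketch of running one Fargate task with boto3 (cluster name, task definition, and subnet ID are hypothetical):

```python
import boto3

ecs = boto3.client("ecs")

# Run one task on Fargate; identifiers below are placeholders.
response = ecs.run_task(
    cluster="demo-cluster",
    launchType="FARGATE",
    taskDefinition="web-app:1",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)

print(response["tasks"][0]["taskArn"])
```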
10
Category: Serverless
Description: Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers.
References:
[AWS]:AWS Lambda
[Azure]:Azure Functions
[Google]:Google Cloud Functions
Tags:#AWSLambda, #AzureFunctions, #GoogleCloudFunctions
Differences: AWS Lambda, Microsoft Azure Functions, and Google Cloud Functions all offer dynamic, configurable triggers that you can use to invoke your functions on their platforms, and all support Node.js, Python, and C#. The beauty of serverless development is that, with minor changes, the code you write for one service should be portable to another with little effort: simply modify some interfaces, handle any input/output transforms, and an AWS Lambda Node.js function is indistinguishable from a Microsoft Azure Node.js Function. AWS Lambda provides further support for Python and Java, while Azure Functions provides support for F# and PHP. AWS Lambda is built from an AMI running on Linux, while Microsoft Azure Functions run in a Windows environment. AWS Lambda uses the AWS machine architecture to reduce the scope of containerization, letting you spin up and tear down individual pieces of functionality in your application at will.
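For a concrete feel for the programming model, here is a minimal Lambda handler in Python (the event shape depends on whichever trigger you wire up; the greeting field is illustrative — Azure Functions and Cloud Functions use different but analogous entry-point signatures):

```python
import json

# Entry point name must match the function's configured handler
# setting, e.g. "app.lambda_handler".
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```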
11
Category: Relational databases
Description: Managed relational database service where resiliency, scale, and maintenance are primarily handled by the platform.
References:
[AWS]:AWS RDS (managed relational database service supporting MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server engines), Aurora (MySQL and PostgreSQL-compatible relational database built for the cloud)
[Azure]:SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL
[Google]:Cloud SQL
Tags: #AWSRDS, #AWSAurora, #AzureSQLDatabase, #AzureDatabaseforMySQL, #GoogleCloudSQL
Differences: All three providers boast impressive relational database offerings. RDS supports an impressive range of managed relational stores, while Azure SQL Database is probably the most advanced managed relational database available today. Azure also has the best out-of-the-box support for cross-region geo-replication across its database offerings.
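One point worth making concrete: a managed RDS (or Cloud SQL, or Azure Database) instance is still an ordinary database endpoint, so existing drivers work unchanged. A hedged sketch with psycopg2 (hostname and credentials are hypothetical):

```python
import psycopg2  # pip install psycopg2-binary

# RDS hands you a hostname; everything else is ordinary PostgreSQL.
# All connection details below are hypothetical placeholders.
conn = psycopg2.connect(
    host="appdb.abc123xyz.us-east-1.rds.amazonaws.com",
    port=5432,
    dbname="appdb",
    user="app_user",
    password="change-me",
)

with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()
```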
12
Category: NoSQL, Document Databases
Description:A globally distributed, multi-model database that natively supports multiple data models: key-value, documents, graphs, and columnar.
References:
[AWS]:DynamoDB (key-value and document database that delivers single-digit millisecond performance at any scale), SimpleDB (a simple web services interface to create and store multiple data sets, query your data easily, and return the results), Managed Cassandra Service (MCS)
[Azure]:Table Storage, DocumentDB, Azure Cosmos DB
[Google]:Cloud Datastore (handles sharding and replication in order to provide you with a highly available and consistent database. )
Tags:#AWSDynamoDB, #SimpleDB, #TableStorage, #DocumentDB, #AzureCosmosDB, #GoogleCloudDatastore
Differences: DynamoDB and Cloud Datastore are based on the document store database model and are therefore similar in nature to the open-source solutions MongoDB and CouchDB. In other words, each database is fundamentally a key-value store. With more workloads moving to the cloud, the need for NoSQL databases will become ever more important, and again all providers have a good range of options to satisfy most performance/cost requirements. Of all the NoSQL products on offer, it's hard not to be impressed by DocumentDB; Azure also has the best out-of-the-box support for cross-region geo-replication across its database offerings.
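As a sketch of the key-value model DynamoDB shares with Cloud Datastore and Cosmos DB, here is a boto3 round-trip (table name and key schema are hypothetical):

```python
import boto3

# Hypothetical table with partition key "user_id".
table = boto3.resource("dynamodb").Table("Users")

# Write one item, then read it back by key.
table.put_item(Item={"user_id": "u-42", "name": "Ada", "plan": "pro"})

response = table.get_item(Key={"user_id": "u-42"})
print(response.get("Item"))
```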
13
Category:Caching
Description:An in-memory–based, distributed caching service that provides a high-performance store typically used to offload non-transactional work from a database.
References:
[AWS]:AWS ElastiCache (works as an in-memory data store and cache to support the most demanding applications requiring sub-millisecond response times.)
[Azure]:Azure Cache for Redis (based on the popular software Redis. It is typically used as a cache to improve the performance and scalability of systems that rely heavily on backend data-stores.)
[Google]:Memcache (In-memory key-value store, originally intended for caching)
Tags:#Redis, #Memcached
Differences: They all support horizontal scaling via sharding, and they all improve the performance of web applications by allowing you to retrieve information from fast, in-memory caches instead of relying on slower disk-based databases. ElastiCache supports both Memcached and Redis. Memcached Cloud provides various data persistence options as well as remote backups for disaster recovery purposes. Redis offers persistence to disk; Memcached does not. Persistence can be very helpful if you cache lots of data, since you avoid the slowness of warming a fully cold cache. Redis also offers several extra data structures that Memcached doesn't (Lists, Sets, Sorted Sets, etc.); Memcached only has key/value pairs. Memcached is multi-threaded, while Redis is single-threaded and event driven: Redis is very fast, but it will never be multi-threaded. At high scale, you can squeeze more connections and transactions out of Memcached, and it tends to be more memory efficient, which can make a big difference at the magnitude of tens or hundreds of millions of keys.
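All three managed caches speak the standard Redis (or Memcached) protocol, so the cache-aside pattern looks the same everywhere. A minimal sketch with redis-py (the endpoint and the database helper are hypothetical):

```python
import json
import redis  # pip install redis

# Hypothetical ElastiCache / Azure Cache for Redis endpoint.
cache = redis.Redis(host="my-cache.example.com", port=6379)

def load_user_from_db(user_id):
    # Stand-in for a real database query.
    return {"user_id": user_id, "name": "Ada"}

def get_user(user_id, ttl_seconds=300):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit
    user = load_user_from_db(user_id)           # cache miss: fetch
    cache.setex(key, ttl_seconds, json.dumps(user))  # populate with TTL
    return user
```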
14
Category: Security, identity, and access
Description:Authentication and authorization: Allows users to securely control access to services and resources while offering data security and protection. Create and manage users and groups, and use permissions to allow and deny access to resources.
References:
[AWS]:Identity and Access Management (IAM), AWS Organizations, Multi-Factor Authentication, AWS Directory Service, Cognito(provides solutions to control access to backend resources from your app), Amazon Detective (Investigate potential security issues), AWS IAM Access Analyzer(Easily analyze resource accessibility)
[Azure]:Azure Active Directory, Azure Subscription Management + Azure RBAC, Multi-Factor Authentication, Azure Active Directory Domain Services, Azure Active Directory B2C, Azure Policy, Management Groups
[Google]:Cloud Identity, Identity Platform, Cloud IAM, Policy Intelligence, Cloud Resource Manager, Cloud Identity-Aware Proxy, Context-aware access, Managed Service for Microsoft Active Directory, Security key enforcement, Titan Security Key
Tags: #IAM, #AWSIAM, #AzureIAM, #GoogleIAM, #Multi-factorAuthentication
Differences: One unique thing about AWS IAM is that accounts created in the organization (not through federation) can only be used within that organization. This contrasts with Google and Microsoft. On the good side, every organization is self-contained. On the bad side, users can end up with multiple sets of credentials they need to manage to access different organizations. The second unique element is that every user can have a non-interactive account by creating and using access keys, an interactive account by enabling console access, or both. (Side note: To use the CLI, you need to have access keys generated.)
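The access-key point above is easy to show: here is a hedged boto3 sketch that creates a user, generates a non-interactive access key, and attaches an AWS managed policy (the user name is hypothetical):

```python
import boto3

iam = boto3.client("iam")

# Create a user and a non-interactive access key (name is hypothetical).
iam.create_user(UserName="ci-deployer")
key = iam.create_access_key(UserName="ci-deployer")["AccessKey"]
print(key["AccessKeyId"])  # the secret key is only returned at creation

# Grant read-only S3 access via an AWS managed policy.
iam.attach_user_policy(
    UserName="ci-deployer",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```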
15
Category: Object Storage and Content delivery
Description:Object storage service, for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.
References:
[AWS]:Simple Storage Services (S3), Import/Export (used to move large amounts of data into and out of the Amazon Web Services public cloud using portable storage devices for transport), Snowball (petabyte-scale data transport solution that uses devices designed to be secure to transfer large amounts of data into and out of the AWS Cloud), CloudFront (massively scaled and globally distributed content delivery network (CDN)), Elastic Block Store (EBS: high performance block storage service), Elastic File System (shared, elastic file storage system that grows and shrinks as you add and remove files), S3 Infrequent Access (IA: for data that is accessed less frequently, but requires rapid access when needed), S3 Glacier (long-term storage of data that is infrequently accessed and for which retrieval latency times of 3 to 5 hours are acceptable), AWS Backup (makes it easy to centralize and automate the backup of data across AWS services in the cloud as well as on-premises using the AWS Storage Gateway), Storage Gateway (hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage), AWS Import/Export Disk (accelerates moving large amounts of data into and out of AWS using portable storage devices for transport), Amazon S3 Access Points (Easily manage access for shared data)
[Azure]:Azure Blob storage, File Storage, Data Lake Store, Azure Backup, Azure managed disks, Azure Files, Azure Storage cool tier, Azure Storage archive access tier, Azure Backup, StorSimple, Import/Export
[Google]:Cloud Storage, GlusterFS, Cloud CDN
Tags:#S3, #AzureBlobStorage, #CloudStorage
Differences: All providers have good object storage options, and so storage alone is unlikely to be a deciding factor when choosing a cloud provider. The exception perhaps is for hybrid scenarios; in this case Azure and AWS clearly win. AWS and Google's support for automatic versioning is a great feature that is currently missing from Azure; however, Microsoft's fully managed Data Lake Store offers an additional option that will appeal to organisations who are looking to run large scale analytical workloads. If you are prepared to wait 4 hours for your data and you have considerable amounts of the stuff, then AWS Glacier storage might be a good option. If you use the common programming patterns for atomic updates and consistency, such as etags and the if-match family of headers, then you should be aware that AWS does not support them, though Google and Azure do. Azure also supports blob leasing, which can be used to provide a distributed lock.
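A quick sketch of the S3 API in boto3, covering the two calls most applications need: an upload and a time-limited presigned download link (bucket and key are hypothetical; Azure Blob Storage and Cloud Storage offer equivalent SDK calls):

```python
import boto3

s3 = boto3.client("s3")

# Upload a local file (bucket and key are hypothetical).
s3.upload_file("backup.tar.gz", "my-bucket", "backups/backup.tar.gz")

# Time-limited download link; the recipient needs no credentials.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "backups/backup.tar.gz"},
    ExpiresIn=3600,  # seconds
)
print(url)
```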
16
Category:Internet of things (IoT)
Description:A cloud gateway for managing bidirectional communication with billions of IoT devices, securely and at scale. Deploy cloud intelligence directly on IoT devices to run in on-premises scenarios.
References:
[AWS]:AWS IoT (Internet of Things), AWS Greengrass, Kinesis Firehose (captures and loads streaming data in storage and business intelligence (BI) tools to enable near real-time analytics in the AWS cloud), Kinesis Streams (for rapid and continuous data intake and aggregation), AWS IoT Things Graph (makes it easy to visually connect different devices and web services to build IoT applications)
[Azure]:Azure IoT Hub, Azure IoT Edge, Event Hubs, Azure Digital Twins, Azure Sphere
[Google]:Google Cloud IoT Core, Firebase, Brillo, Weave, Cloud Pub/Sub, Stream Analysis, BigQuery, BigQuery Streaming API
Tags:#IoT, #InternetOfThings, #Firebase
Differences:AWS and Azure have a more coherent message with their products clearly integrated into their respective platforms, whereas Google Firebase feels like a distinctly separate product.
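To make the device-to-cloud direction concrete, here is a hedged sketch that publishes one telemetry message to the AWS IoT message broker with boto3 (the topic name is hypothetical; Azure IoT Hub and Cloud IoT Core have analogous device SDKs):

```python
import json
import boto3

# Publishes over the AWS IoT message broker.
iot = boto3.client("iot-data")

iot.publish(
    topic="sensors/room1/temperature",  # hypothetical topic
    qos=1,
    payload=json.dumps({"celsius": 21.5}),
)
```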
17
Category:Web Applications
Description:Managed hosting platform providing easy to use services for deploying and scaling web applications and services. API Gateway is a turnkey solution for publishing APIs to external and internal consumers. CloudFront is a global content delivery network that delivers audio, video, applications, images, and other files.
References:
[AWS]:Elastic Beanstalk (for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS), AWS Wavelength (for delivering ultra-low latency applications for 5G), API Gateway (makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale), CloudFront (web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users; CloudFront delivers your content through a worldwide network of data centers called edge locations), Global Accelerator (improves the availability and performance of your applications with local or global users; it provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers or Amazon EC2 instances), AWS AppSync (simplifies application development by letting you create a flexible API to securely access, manipulate, and combine data from one or more data sources: a GraphQL service with real-time data synchronization and offline programming features)
[Azure]:App Service, API Management, Azure Content Delivery Network, Azure Content Delivery Network
[Google]:App Engine, Cloud API, Cloud Endpoints, Apigee
Tags: #AWSElasticBeanstalk, #AzureAppService, #GoogleAppEngine, #CloudEndpoints, #CloudFront, #Apigee
Differences: With AWS Elastic Beanstalk, developers retain full control over the AWS resources powering their application. If developers decide they want to manage some (or all) of the elements of their infrastructure, they can do so seamlessly by using Elastic Beanstalk's management capabilities. AWS Elastic Beanstalk integrates with more apps than Google App Engine (Datadog, Jenkins, Docker, Slack, GitHub, Eclipse, etc.), while Google App Engine has more built-in features than Elastic Beanstalk (App Identity, Java runtime, Datastore, Blobstore, Images, Go runtime, etc.). Developers describe Amazon API Gateway as "create, publish, maintain, monitor, and secure APIs at any scale": it handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. Google Cloud Endpoints, on the other hand, is positioned as "develop, deploy and manage APIs on any Google Cloud backend": an NGINX-based proxy and distributed architecture give strong performance and scalability, and, using an OpenAPI Specification or one of the API frameworks, Cloud Endpoints gives you the tools you need for every phase of API development and provides insight with Google Cloud Monitoring, Google Cloud Logging, and Cloud Trace.
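As a sketch of how little an Elastic Beanstalk deployment asks of your code: the Python platform looks for a WSGI callable named application (by default in application.py), so a deployable app can be this small (Flask here is one common choice, not a requirement):

```python
from flask import Flask  # pip install flask

# Elastic Beanstalk's Python platform looks for a WSGI callable
# named "application" in application.py by default.
application = Flask(__name__)

@application.route("/")
def index():
    return "Hello from Elastic Beanstalk"

if __name__ == "__main__":
    # Local development server only; EB serves via WSGI in production.
    application.run(debug=True)
```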
18
Category:Encryption
Description:Helps you protect and safeguard your data and meet your organizational security and compliance commitments.
References:
[AWS]:Key Management Service AWS KMS, CloudHSM
[Azure]:Key Vault
[Google]:Encryption By Default at Rest, Cloud KMS
Tags:#AWSKMS, #Encryption, #CloudHSM, #EncryptionAtRest, #CloudKMS
Differences: AWS KMS is an ideal solution for organizations that want to manage encryption keys in conjunction with other AWS services. In contrast to AWS CloudHSM, AWS KMS provides a complete set of tools to manage encryption keys, develop applications, and integrate with other AWS services. Google and Azure offer 4096-bit RSA; AWS and Google offer 256-bit AES. With AWS, you can bring your own key.
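A minimal sketch of the KMS API in boto3: encrypt a small secret under a key alias and decrypt it again (the alias is hypothetical; Decrypt needs no KeyId because it is embedded in the ciphertext):

```python
import boto3

kms = boto3.client("kms")

# Encrypt a small secret under a hypothetical key alias.
ciphertext = kms.encrypt(
    KeyId="alias/app-secrets",
    Plaintext=b"database-password",
)["CiphertextBlob"]

# Decrypt; the key reference is embedded in the ciphertext blob.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
print(plaintext)
```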
19
Category: Backend process logic
Description: Cloud technology to build distributed applications using out-of-the-box connectors to reduce integration challenges. Connect apps, data and devices on-premises or in the cloud.
References:
[AWS]:AWS Step Functions ( lets you build visual workflows that enable fast translation of business requirements into technical requirements. You can build applications in a matter of minutes, and when needs change, you can swap or reorganize components without customizing any code.)
[Azure]:Logic Apps (cloud service that helps you schedule, automate, and orchestrate tasks, business processes, and workflows when you need to integrate apps, data, systems, and services across enterprises or organizations.)
[Google]:Dataflow ( fully managed service for executing Apache Beam pipelines within the Google Cloud Platform ecosystem.)
Tags:#AWSStepFunctions, #LogicApps, #Dataflow
Differences: AWS Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale and change applications quickly. AWS Step Functions belongs to the "Cloud Task Management" category of the tech stack, while Google Cloud Dataflow can be primarily classified under "Real-time Data Processing". According to the StackShare community, Google Cloud Dataflow has broader approval, being mentioned in 32 company stacks and 8 developer stacks, compared to AWS Step Functions, which is listed in 19 company stacks and 7 developer stacks.
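Kicking off a Step Functions workflow from code is a single call; here is a hedged boto3 sketch (the state machine ARN and input payload are hypothetical):

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Start one execution of an existing state machine (ARN is hypothetical).
response = sfn.start_execution(
    stateMachineArn=(
        "arn:aws:states:us-east-1:123456789012:stateMachine:order-flow"
    ),
    input=json.dumps({"orderId": "42"}),
)
print(response["executionArn"])
```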
20
Category: Enterprise application services
Description:Fully integrated Cloud service providing communications, email, document management in the cloud and available on a wide variety of devices.
References:
[AWS]:Amazon WorkMail, Amazon WorkDocs, Amazon Kendra (Sync and Index)
[Azure]:Office 365
[Google]:G Suite
Tags: #AmazonWorkDocs, #Office365, #GoogleGSuite
Differences: G Suite document processing applications like Google Docs are far behind Office 365's popular Word and Excel software, but the G Suite user interface is intuitive, simple, and easy to navigate, whereas Office 365 can feel clunky.
21
Category: Networking
Description: Provides an isolated, private environment in the cloud. Users have control over their virtual networking environment, including selection of their own IP address range, creation of subnets, and configuration of route tables and network gateways.
References:
[AWS]:Virtual Private Cloud (VPC), Cloud virtual networking, Subnets, Elastic Network Interface (ENI), Route Tables, Network ACL, Security Groups, Internet Gateway, NAT Gateway, AWS VPN Gateway, AWS Route 53, AWS Direct Connect, AWS Network Load Balancer, VPN CloudHub, AWS Local Zones, AWS Transit Gateway network manager (Centrally manage global networks)
[Azure]:Virtual Network (provides services for building networks within Azure), Subnets (network resources can be grouped by subnet for organisation and security), Network Interface (each virtual machine can be assigned one or more network interfaces (NICs)), Network Security Groups (NSG: contains a set of prioritised ACL rules that explicitly grant or deny access), Azure VPN Gateway (allows connectivity to on-premise networks), Azure DNS, Traffic Manager (DNS-based traffic routing solution), ExpressRoute (provides connections up to 10 Gbps to Azure services over a dedicated fibre connection), Azure Load Balancer, Network Peering, Azure Stack (allows organisations to use Azure services running in private data centers), Azure Log Analytics
[Google]:Cloud Virtual Network, Subnets, Network Interface, Protocol forwarding, Cloud VPN, Cloud DNS, Virtual Private Network, Cloud Interconnect, CDN Interconnect, Stackdriver, Google Cloud Load Balancing
Tags:#VPC, #Subnets, #ACL, #VPNGateway, #CloudVPN, #NetworkInterface, #ENI, #RouteTables, #NSG, #NetworkACL, #InternetGateway, #NatGateway, #ExpressRoute, #CloudInterConnect, #StackDriver
Differences: Subnets group related resources; however, unlike AWS and Azure, Google does not constrain the private IP address ranges of subnets to the address space of the parent network. Like Azure, Google has a built-in internet gateway that can be specified from routing rules.
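As a sketch of how these pieces compose on AWS, here is boto3 creating a VPC, a subnet, and an attached internet gateway (the CIDR ranges are illustrative):

```python
import boto3

ec2 = boto3.client("ec2")

# Create a VPC and carve out one subnet (CIDR ranges are illustrative).
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24"
)["Subnet"]["SubnetId"]

# Attach an internet gateway so the subnet can be made public.
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(
    InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc_id
)
print(vpc_id, subnet_id, igw["InternetGatewayId"])
```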
22
Category: Management
Description: A unified management console that simplifies building, deploying, and operating your cloud resources.
References:
[AWS]: AWS Management Console, Trusted Advisor, AWS Usage and Billing Report, AWS Application Discovery Service, Amazon EC2 Systems Manager, AWS Personal Health Dashboard, AWS Compute Optimizer (Identify optimal AWS Compute resources)
[Azure]:Azure portal, Azure Advisor, Azure Billing API, Azure Migrate, Azure Monitor, Azure Resource Health
[Google]:Google Cloud Platform Console, Cost Management, Security Command Center, Stackdriver
Tags: #AWSConsole, #AzurePortal, #GoogleCloudConsole, #TrustedAdvisor, #AzureMonitor, #SecurityCommandCenter
Differences: AWS Console categorizes its Infrastructure as a Service offerings into Compute, Storage and Content Delivery Network (CDN), Database, and Networking to help businesses and individuals grow. Azure excels in the Hybrid Cloud space allowing companies to integrate onsite servers with cloud offerings. Google has a strong offering in containers, since Google developed the Kubernetes standard that AWS and Azure now offer. GCP specializes in high compute offerings like Big Data, analytics and machine learning. It also offers considerable scale and load balancing – Google knows data centers and fast response time.
23
Category: DevOps and application monitoring
Description: Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments; Cloud services for collaborating on code development; Collection of tools for building, debugging, deploying, diagnosing, and managing multiplatform scalable apps and services; Fully managed build service that supports continuous integration and deployment.
References:
[AWS]:AWS CodePipeline (orchestrates workflow for continuous integration, continuous delivery, and continuous deployment), AWS CloudWatch (monitor your AWS resources and the applications you run on AWS in real time), AWS X-Ray (application performance management service that enables a developer to analyze and debug applications in AWS), AWS CodeDeploy (automates code deployments to Elastic Compute Cloud (EC2) and on-premises servers), AWS CodeCommit (source code storage and version-control service), AWS Developer Tools, AWS CodeBuild (continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy), AWS Command Line Interface (unified tool to manage your AWS services), AWS OpsWorks (Chef-based), AWS CloudFormation (provides a common language for you to describe and provision all the infrastructure resources in your cloud environment), Amazon CodeGuru (for automated code reviews and application performance recommendations)
[Azure]:Azure Monitor, Azure DevOps, Azure Developer Tools, Azure CLI, Azure PowerShell, Azure Automation, Azure Resource Manager, VM extensions
[Google]:DevOps Solutions (infrastructure as code, configuration management, secrets management, serverless computing, continuous delivery, continuous integration), Stackdriver (combines metrics, logs, and metadata from all of your cloud accounts and projects into a single comprehensive view of your environment)
Tags: #CloudWatch, #StackDriver, #AzureMonitor, #AWSXray, #AWSCodeDeploy, #AzureDevOps, #GoogleDevopsSolutions
Differences: CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. Azure DevOps provides unlimited private Git hosting, cloud build for continuous integration, agile planning, and release management for continuous delivery to the cloud and on-premises. Includes broad IDE support.
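Monitoring is the piece most easily shown in code: pushing a custom application metric to CloudWatch is one call in boto3 (the namespace and metric name are hypothetical; Azure Monitor and Stackdriver have equivalent client libraries):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish one custom metric data point (names are hypothetical).
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{
        "MetricName": "CheckoutLatency",
        "Value": 127.0,
        "Unit": "Milliseconds",
    }],
)
```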
SageMaker | Azure Machine Learning Studio
A collaborative, drag-and-drop tool to build, test, and deploy predictive analytics solutions on your data.
Alexa Skills Kit | Microsoft Bot Framework
Build and connect intelligent bots that interact with your users using text/SMS, Skype, Teams, Slack, Office 365 mail, Twitter, and other popular services.
API capable of converting speech to text, understanding intent, and converting text back to speech for natural responsiveness.
Amazon Lex | Language Understanding (LUIS)
Allows your applications to understand user commands contextually.
Amazon Polly, Amazon Transcribe | Azure Speech Services
Enables both Speech to Text, and Text into Speech capabilities.
The Speech Services are the unification of speech-to-text, text-to-speech, and speech-translation into a single Azure subscription. It’s easy to speech enable your applications, tools, and devices with the Speech SDK, Speech Devices SDK, or REST APIs.
Amazon Polly is a Text-to-Speech (TTS) service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice. With dozens of lifelike voices across a variety of languages, you can select the ideal voice and build speech-enabled applications that work in many different countries.
Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.
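Text-to-speech is equally compact; here is a hedged boto3 sketch that has Polly synthesize an MP3 (Joanna is one of Polly's built-in voices):

```python
import boto3

polly = boto3.client("polly")

response = polly.synthesize_speech(
    Text="Hello from Amazon Polly.",
    OutputFormat="mp3",
    VoiceId="Joanna",  # one of Polly's built-in voices
)

# The audio comes back as a streaming body; save it to a file.
with open("hello.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```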
Amazon Rekognition | Cognitive Services
Computer Vision: Extract information from images to categorize and process visual data.
Amazon Rekognition is a simple and easy to use API that can quickly analyze any image or video file stored in Amazon S3. Amazon Rekognition is always learning from new data, and we are continually adding new labels and facial recognition features to the service.
Face: Detect, identify, and analyze faces in photos.
Emotions: Recognize emotions in images.
Alexa Skills Kit | Azure Virtual Assistant
The Virtual Assistant Template brings together a number of best practices we’ve identified through the building of conversational experiences and automates integration of components that we’ve found to be highly beneficial to Bot Framework developers.
Big data and analytics
Data warehouse
AWS Redshift | SQL Data Warehouse
Cloud-based Enterprise Data Warehouse (EDW) that uses Massively Parallel Processing (MPP) to quickly run complex queries across petabytes of data.
Big data processing
EMR | Azure Databricks
Apache Spark-based analytics platform.
Managed Hadoop service. Deploy and manage Hadoop clusters in Azure.
Data orchestration / ETL
AWS Data Pipeline, AWS Glue | Data Factory
Processes and moves data between different compute and storage services, as well as on-premises data sources at specified intervals. Create, schedule, orchestrate, and manage data pipelines.
A fully managed service that serves as a system of registration and system of discovery for enterprise data sources
Analytics and visualization
AWS Kinesis Analytics | Stream Analytics, Data Lake Analytics, Data Lake Store
Storage and analysis platforms that create insights from large quantities of data, or data that originates from many sources.
Business intelligence tools that build visualizations, perform ad hoc analysis, and develop business insights from data.
Delivers full-text search and related search analytics and capabilities.
Amazon Athena | Azure Data Lake Analytics
Provides a serverless interactive query service that uses standard SQL for analyzing databases.
Compute
Virtual servers
Elastic Compute Cloud (EC2) | Azure Virtual Machines
Virtual servers allow users to deploy, manage, and maintain OS and server software. Instance types provide combinations of CPU/RAM. Users pay for what they use with the flexibility to change sizes.
Run large-scale parallel and high-performance computing applications efficiently in the cloud.
AWS Auto Scaling | Virtual Machine Scale Sets
Allows you to automatically change the number of VM instances. You set defined metrics and thresholds that determine whether the platform adds or removes instances.
VMware Cloud on AWS | Azure VMware by CloudSimple
Redeploy and extend your VMware-based enterprise workloads to Azure with Azure VMware Solution by CloudSimple. Keep using the VMware tools you already know to manage workloads on Azure without disrupting network, security, or data protection policies.
Containers and container orchestrators
EC2 Container Service (ECS), Fargate | Azure Container Instances
Azure Container Instances is the fastest and simplest way to run a container in Azure, without having to provision any virtual machines or adopt a higher-level orchestration service.
EC2 Container Registry | Azure Container Registry
Allows customers to store Docker formatted images. Used to create all types of container deployments on Azure.
Elastic Container Service for Kubernetes (EKS) | Azure Kubernetes Service (AKS)
Deploy orchestrated containerized applications with Kubernetes. Simplify monitoring and cluster management through auto upgrades and a built-in operations console.
App Mesh | Service Fabric Mesh
Fully managed service that enables developers to deploy microservices applications without managing virtual machines, storage, or networking.
AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. App Mesh standardizes how your services communicate, giving you end-to-end visibility and ensuring high-availability for your applications.
Serverless
Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers.
AWS Lambda is an event-driven, serverless computing platform provided by Amazon as a part of the Amazon Web Services. It is a computing service that runs code in response to events and automatically manages the computing resources required by that code
Database
Relational database
AWS RDS | SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL
Managed relational database service where resiliency, scale, and maintenance are primarily handled by the platform.
Amazon Relational Database Service is a distributed relational database service by Amazon Web Services. It is a web service running “in the cloud” designed to simplify the setup, operation, and scaling of a relational database for use in applications. Administration processes like patching the database software, backing up databases and enabling point-in-time recovery are managed automatically. Scaling storage and compute resources can be performed by a single API call, as AWS does not offer an SSH connection to RDS instances.
NoSQL / Document
DynamoDB and SimpleDB | Azure Cosmos DB
A globally distributed, multi-model database that natively supports multiple data models: key-value, documents, graphs, and columnar.
Caching
AWS ElastiCache | Azure Cache for Redis
An in-memory–based, distributed caching service that provides a high-performance store typically used to offload non-transactional work from a database.
Amazon ElastiCache is a fully managed in-memory data store and cache service by Amazon Web Services. The service improves the performance of web applications by retrieving information from managed in-memory caches, instead of relying entirely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines: Memcached and Redis.
Database migration
AWS Database Migration Service | Azure Database Migration Service
Migration of database schema and data from one database format to a specific database technology in the cloud.
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.
DevOps and application monitoring
AWS CloudWatch, AWS X-Ray | Azure Monitor
Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.
Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers.
AWS X-Ray is an application performance management service that enables a developer to analyze and debug applications in the Amazon Web Services (AWS) public cloud. A developer can use AWS X-Ray to visualize how a distributed application is performing during development or production, and across multiple AWS regions and accounts.
AWS CodeDeploy, AWS CodeCommit, AWS CodePipeline | Azure DevOps
A cloud service for collaborating on code development.
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define.
AWS CodeCommit is a source code storage and version-control service for Amazon Web Services’ public cloud customers. CodeCommit was designed to help IT teams collaborate on software development, including continuous integration and application delivery.
AWS Developer Tools | Azure Developer Tools
Collection of tools for building, debugging, deploying, diagnosing, and managing multiplatform scalable apps and services.
The AWS Developer Tools are designed to help you build software like Amazon. They facilitate practices such as continuous delivery and infrastructure as code for serverless, containers, and Amazon EC2.
AWS CodeBuild | Azure DevOps
Fully managed build service that supports continuous integration and deployment.
AWS Command Line Interface | Azure CLI, Azure PowerShell
Built on top of the native REST API across all cloud services, various programming language-specific wrappers provide easier ways to create solutions.
The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
AWS OpsWorks (Chef-based) | Azure Automation
Configures and operates applications of all shapes and sizes, and provides templates to create and manage a collection of resources.
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers.
AWS CloudFormation | Azure Resource Manager, VM extensions, Azure Automation
Provides a way for users to automate the manual, long-running, error-prone, and frequently repeated IT tasks.
AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.
Networking
Area
Cloud virtual networking, Virtual Private Cloud (VPC) | Virtual Network
Provides an isolated, private environment in the cloud. Users have control over their virtual networking environment, including selection of their own IP address range, creation of subnets, and configuration of route tables and network gateways.
Cross-premises connectivity
AWS VPN Gateway | Azure VPN Gateway
Connects Azure virtual networks to other Azure virtual networks, or customer on-premises networks (Site To Site). Allows end users to connect to Azure services through VPN tunneling (Point To Site).
DNS management
AWS Route 53 | Azure DNS
Manage your DNS records using the same credentials, billing, and support contract as your other Azure services.
Route 53 | Traffic Manager
A service that hosts domain names, plus routes users to Internet applications, connects user requests to datacenters, manages traffic to apps, and improves app availability with automatic failover.
Dedicated network
AWS Direct Connect | ExpressRoute
Establishes a dedicated, private network connection from a location to the cloud provider (not over the Internet).
Load balancing
AWS Network Load Balancer | Azure Load Balancer
Azure Load Balancer load-balances traffic at layer 4 (TCP or UDP).
Application Load Balancer | Application Gateway
Application Gateway is a layer 7 load balancer. It supports SSL termination, cookie-based session affinity, and round robin for load-balancing traffic.
Internet of things (IoT)
AWS IoT | Azure IoT Hub
A cloud gateway for managing bidirectional communication with billions of IoT devices, securely and at scale.
AWS Greengrass | Azure IoT Edge
Deploy cloud intelligence directly on IoT devices to run in on-premises scenarios.
Kinesis Firehose, Kinesis Streams | Event Hubs
Services that allow the mass ingestion of small data inputs, typically from devices and sensors, to process and route the data.
AWS IoT Things Graph | Azure Digital Twins
Azure Digital Twins is an IoT service that helps you create comprehensive models of physical environments. Create spatial intelligence graphs to model the relationships and interactions between people, places, and devices. Query data from a physical space rather than disparate sensors.
Management
Trusted Advisor | Azure Advisor
Provides analysis of cloud resource configuration and security so subscribers can ensure they’re making use of best practices and optimum configurations.
AWS Usage and Billing Report | Azure Billing API
Services to help generate, monitor, forecast, and share billing data for resource usage by time, organization, or product resources.
AWS Management Console | Azure portal
A unified management console that simplifies building, deploying, and operating your cloud resources.
AWS Application Discovery Service | Azure Migrate
Assesses on-premises workloads for migration to Azure, performs performance-based sizing, and provides cost estimations.
Amazon EC2 Systems Manager | Azure Monitor
Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.
AWS Personal Health Dashboard | Azure Resource Health
Provides detailed information about the health of resources as well as recommended actions for maintaining resource health.
Security, identity, and access
Authentication and authorization
Identity and Access Management (IAM) | Azure Active Directory
Allows users to securely control access to services and resources while offering data security and protection. Create and manage users and groups, and use permissions to allow and deny access to resources.
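A minimal boto3 sketch of that create-users-and-groups flow on the AWS side; the group name, user name, and policy choice are assumptions.

```python
import boto3

iam = boto3.client("iam")

# Create a group and grant it read-only access via an AWS managed policy.
iam.create_group(GroupName="read-only-analysts")
iam.attach_group_policy(
    GroupName="read-only-analysts",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

# Create a user and add it to the group so it inherits the permission.
iam.create_user(UserName="alice")
iam.add_user_to_group(GroupName="read-only-analysts", UserName="alice")
```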
Identity and Access Management (IAM) | Azure Role Based Access Control
Role-based access control (RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.
AWS Organizations | Azure Subscription Management + Azure RBAC
Security policy and role management for working with multiple accounts.
Multi-Factor Authentication | Multi-Factor Authentication
Safeguard access to data and applications while meeting user demand for a simple sign-in process.
AWS Directory Service | Azure Active Directory Domain Services
Provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication that are fully compatible with Windows Server Active Directory.
Cognito | Azure Active Directory B2C
A highly available, global, identity management service for consumer-facing applications that scales to hundreds of millions of identities.
AWS Organizations | Azure Policy
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements.
AWS Organizations | Management Groups
Azure management groups provide a level of scope above subscriptions. You organize subscriptions into containers called “management groups” and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Management groups give you enterprise-grade management at a large scale, no matter what type of subscriptions you have.
Encryption
Server-side encryption with Amazon S3 Key Management Service | Azure Storage Service Encryption
Helps you protect and safeguard your data and meet your organizational security and compliance commitments.
Key Management Service AWS KMS, CloudHSM | Key Vault
Provides a security solution that works with other services by offering a way to manage, create, and control encryption keys stored in hardware security modules (HSMs).
Firewall
Web Application Firewall | Application Gateway – Web Application Firewall
A firewall that protects web applications from common web exploits.
Web Application Firewall | Azure Firewall
Provides inbound protection for non-HTTP/S protocols, outbound network-level protection for all ports and protocols, and application-level protection for outbound HTTP/S.
Security
Inspector | Security Center
An automated security assessment service that improves the security and compliance of applications. Automatically assess applications for vulnerabilities or deviations from best practices.
Certificate Manager | App Service Certificates available on the Portal
Service that allows customers to create, manage, and consume certificates seamlessly in the cloud.
GuardDuty | Azure Advanced Threat Protection
Detect and investigate advanced attacks on-premises and in the cloud.
AWS Artifact | Service Trust Portal
Provides access to audit reports, compliance guides, and trust documents from across cloud services.
AWS Shield | Azure DDoS Protection Service
Provides cloud services with protection from distributed denial of service (DDoS) attacks.
Storage
Object storage
Simple Storage Services (S3) | Azure Blob storage
Object storage service, for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.
Virtual server disks
Elastic Block Store (EBS) | Azure managed disks
SSD storage optimized for I/O intensive read/write operations. For use as high-performance Azure virtual machine storage.
Shared files
Elastic File System | Azure Files
Provides a simple interface to create and configure file systems quickly, and share common files. Can be used with traditional protocols that access files over a network.
Archiving and backup
S3 Infrequent Access (IA) | Azure Storage cool tier
Cool storage is a lower-cost tier for storing data that is infrequently accessed and long-lived.
S3 Glacier | Azure Storage archive access tier
Archive storage has the lowest storage cost and higher data retrieval costs compared to hot and cool storage.
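On the AWS side, tiering like this is usually automated with an S3 lifecycle rule. The following boto3 sketch (bucket name, prefix, and day counts are assumptions) transitions objects to Infrequent Access after 30 days and to Glacier after 90.

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-down-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},  # only objects under logs/
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }]
    },
)
```

Azure offers the analogous behavior through blob lifecycle management policies that move blobs between the hot, cool, and archive tiers.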
AWS Backup | Azure Backup
Back up and recover files and folders from the cloud, and provide offsite protection against data loss.
Hybrid storage
Storage Gateway | StorSimple
Integrates on-premises IT environments with cloud storage. Automates data management and storage, plus supports disaster recovery.
Bulk data transfer
AWS Import/Export Disk | Import/Export
A data transport solution that uses secure disks and appliances to transfer large amounts of data. Also offers data protection during transit.
AWS Import/Export Snowball, Snowball Edge, Snowmobile | Azure Data Box
Petabyte- to exabyte-scale data transport solution that uses secure data storage devices to transfer large amounts of data to and from Azure.
Web applications
Elastic Beanstalk | App Service
Managed hosting platform providing easy to use services for deploying and scaling web applications and services.
API Gateway | API Management
A turnkey solution for publishing APIs to external and internal consumers.
CloudFront | Azure Content Delivery Network
A global content delivery network that delivers audio, video, applications, images, and other files.
Global Accelerator | Azure Front Door
Easily join your distributed microservice architectures into a single global application using HTTP load balancing and path-based routing rules. Automate turning up new regions and scale-out with API-driven global actions, and independent fault-tolerance to your back end microservices in Azure—or anywhere.
Miscellaneous
Backend process logic
AWS Step Functions | Logic Apps
Cloud technology to build distributed applications using out-of-the-box connectors to reduce integration challenges. Connect apps, data and devices on-premises or in the cloud.
Enterprise application services
Amazon WorkMail, Amazon WorkDocs | Office 365
Fully integrated cloud service providing communications, email, and document management in the cloud, available on a wide variety of devices.
Gaming
GameLift, GameSparks | PlayFab
Managed services for hosting dedicated game servers.
Media transcoding
Elastic Transcoder | Media Services
Services that offer broadcast-quality video streaming services, including various transcoding technologies.
Workflow
Simple Workflow Service (SWF) | Logic Apps
Serverless technology for connecting apps, data and devices anywhere, whether on-premises or in the cloud for large ecosystems of SaaS and cloud-based connectors.
Hybrid
Outposts | Azure Stack
Azure Stack is a hybrid cloud platform that enables you to run Azure services in your company’s or service provider’s datacenter. As a developer, you can build apps on Azure Stack. You can then deploy them to either Azure Stack or Azure, or you can build truly hybrid apps that take advantage of connectivity between an Azure Stack cloud and Azure.
How does a business decide between Microsoft Azure or AWS?
Basically, it all comes down to what your organizational needs are and whether there's a particular area that's especially important to your business (e.g., serverless, or integration with Microsoft applications).
The main things it comes down to are compute options, pricing, and purchasing options.
Here’s a brief comparison of the compute option features across cloud providers:
Here’s an example of a few instances’ costs (all are Linux OS):
Each provider offers a variety of options to lower costs from the listed On-Demand prices. These can fall under reservations, spot and preemptible instances and contracts.
Both AWS and Azure offer a way for customers to purchase compute capacity in advance in exchange for a discount: AWS Reserved Instances and Azure Reserved Virtual Machine Instances. There are a few interesting variations between the instances across the cloud providers which could affect which is more appealing to a business.
Another discounting mechanism is the idea of spot instances in AWS and low-priority VMs in Azure. These options allow users to purchase unused capacity for a steep discount.
With AWS and Azure, enterprise contracts are available. These are typically aimed at enterprise customers, and encourage large companies to commit to specific levels of usage and spend in exchange for an across-the-board discount – for example, AWS EDPs and Azure Enterprise Agreements.
You can read more about the differences between AWS and Azure to help decide which your business should use in this blog post
Source: AWS to Azure services comparison – Azure Architecture
Top 100 AWS Solutions Architect Associate Certification Exam Questions and Answers Dump SAA-C03


What are the Top 100 AWS Solutions Architect Associate Certification Exam Questions and Answers Dump SAA-C03?
AWS Certified Solutions Architects are responsible for designing, deploying, and managing AWS cloud applications. The AWS Certified Solutions Architect – Associate exam validates an examinee's ability to design and deploy secure and robust applications on AWS technologies. The AWS Solutions Architect Associate training provides an overview of key AWS services, security, architecture, pricing, and support.

An Insightful Overview of SAA-C03 Exam Topics Encountered and Reflecting on My SAA-C03 Exam Journey: From Setback to Success
The AWS Certified Solutions Architect – Associate (SAA-C03) examination is the natural stepping stone toward the AWS Certified Solutions Architect – Professional certification. Successful completion of this examination can lead to a salary raise or promotion for those in cloud roles. Below are the top 100 AWS Solutions Architect Associate exam prep facts, summaries, and questions and answers.
With average increases in salary of over 25% for certified individuals, you’re going to be in a much better position to secure your dream job or promotion if you earn your AWS Certified Solutions Architect Associate certification. You’ll also develop strong hands-on skills by doing the guided hands-on lab exercises in our course which will set you up for successfully performing in a solutions architect role.
AWS solutions architect associate SAA-C03 practice exam and cheat sheet 2023 pdf eBook Print Book
aws solutions architect associate SAA-C03 practice exam and flashcards 2023 pdf eBook Print Book
aws certified solutions architect pdf book 2023
aws solutions architect cheat sheet ebook 2023
The AWS Solutions Architect Associate is ideal for those performing in Solutions Architect roles and for anyone working at a technical level with AWS technologies. Earning the AWS Certified Solutions Architect Associate will build your credibility and confidence as it demonstrates that you have the cloud skills companies need to innovate for the future.
AWS Certified Solutions Architect – Associate average salary
The AWS Certified Solutions Architect – Associate average salary is $149,446/year
In this blog, we will help you prepare for the AWS Solutions Architect Associate certification exam, give you some facts and summaries, and provide a top questions and answers dump.
How long to study for the AWS Solutions Architect exam?
We recommend that you allocate at least 60 minutes of study time per day and you will then be able to complete the certification within 5 weeks (including taking the actual exam). Study times can vary based on your experience with AWS and how much time you have each day, with some students passing their exams much faster and others taking a little longer. Get our eBook here.

How hard is the AWS Certified Solutions Architect Associate exam?
The AWS Solutions Architect Associate exam is an associate-level exam that requires a solid understanding of the AWS platform and a broad range of AWS services. The AWS Certified Solutions Architect Associate exam questions are scenario-based questions and can be challenging. Despite this, the AWS Solutions Architect Associate is often earned by beginners to cloud computing.
The popular AWS Certified Solutions Architect Associate exam received its new version (SAA-C03) in August 2022.
AWS Certified Solutions Architect – Associate (SAA-C03) Exam Guide
The AWS Certified Solutions Architect – Associate (SAA-C03) exam is intended for individuals who perform in a solutions architect role.
The exam validates a candidate’s ability to use AWS technologies to design solutions based on the AWS Well-Architected Framework.
What is the format of the AWS Certified Solutions Architect Associate exam?
The SAA-C03 exam is a multiple-choice examination that is 65 questions in length. You can take the exam in a testing center or via an online proctored exam from your home or office. You have 130 minutes to complete the exam, and the passing mark is 720 points out of 1,000 points (72%). If English is not your first language, you can request an accommodation when booking your exam that will qualify you for an additional 30-minute exam extension.
The exam also validates a candidate’s ability to complete the following tasks:
• Design solutions that incorporate AWS services to meet current business requirements and future projected needs
• Design architectures that are secure, resilient, high-performing, and cost-optimized
• Review existing solutions and determine improvements
Unscored content
The exam includes 15 unscored questions that do not affect your score.
AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.
Target candidate description
The target candidate should have at least 1 year of hands-on experience designing cloud solutions that use AWS services.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether or not you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
What is the passing score for the AWS Solutions Architect exam?
All AWS certification exam results are reported as a score from 100 to 1000. Your score shows how you performed on the examination as a whole and whether or not you passed. The passing score for the AWS Certified Solutions Architect Associate is 720 (72%).
Can I take the AWS Exam from Home?
Yes, you can now take all AWS Certification exams with online proctoring using Pearson Vue or PSI. Here’s a detailed guide on how to book your AWS exam.
Are there any prerequisites for taking the AWS Certified Solutions Architect exam?
There are no prerequisites for taking AWS exams. You do not need any programming knowledge or experience working with AWS. Everything you need to know is included in our courses. We do recommend that you have a basic understanding of fundamental computing concepts such as compute, storage, networking, and databases.
How much does the AWS Solution Architect Exam cost?
The AWS Solutions Architect Associate exam cost is $150 US.
Once you successfully pass your exam, you will be issued a 50% discount voucher that you can use towards your next AWS Exam.
For more detailed information, check out this blog article on AWS Certification Costs.
The Role of an AWS Certified Solutions Architect Associate
AWS Certified Solutions Architects are IT professionals who design cloud solutions with AWS services to meet given technical requirements. An AWS Solutions Architect Associate is expected to design and implement distributed systems on AWS that are high-performing, scalable, secure and cost optimized.
Content outline:
Domain 1: Design Secure Architectures 30%
Domain 2: Design Resilient Architectures 26%
Domain 3: Design High-Performing Architectures 24%
Domain 4: Design Cost-Optimized Architectures 20%
Domain 1: Design Secure Architectures
This exam domain is focused on securing your architectures on AWS and comprises 30% of the exam. Task statements include:
Task Statement 1: Design secure access to AWS resources.
Knowledge of:
• Access controls and management across multiple accounts
• AWS federated access and identity services (for example, AWS Identity and Access Management [IAM], AWS Single Sign-On [AWS SSO])
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• AWS security best practices (for example, the principle of least privilege)
• The AWS shared responsibility model
Skills in:
• Applying AWS security best practices to IAM users and root users (for example, multi-factor authentication [MFA])
• Designing a flexible authorization model that includes IAM users, groups, roles, and policies
• Designing a role-based access control strategy (for example, AWS Security Token Service [AWS STS], role switching, cross-account access)
• Designing a security strategy for multiple AWS accounts (for example, AWS Control Tower, service control policies [SCPs]; see the sketch after this list)
• Determining the appropriate use of resource policies for AWS services
• Determining when to federate a directory service with IAM roles
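As one concrete guardrail of the kind the SCP item above describes, the boto3 sketch below creates and attaches a Region-restriction service control policy. The policy content, exempted services, Region list, and OU id are all hypothetical assumptions.

```python
import json
import boto3

# Hypothetical SCP: deny actions outside two approved Regions, while
# exempting a few global services that must remain reachable.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "NotAction": ["iam:*", "organizations:*", "sts:*"],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "eu-west-1"]}
        },
    }],
}

orgs = boto3.client("organizations")
policy = orgs.create_policy(
    Content=json.dumps(scp),
    Description="Restrict activity to approved Regions",
    Name="region-guardrail",
    Type="SERVICE_CONTROL_POLICY",
)
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examp-12345678",  # hypothetical OU id
)
```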
Task Statement 2: Design secure workloads and applications.
Knowledge of:
• Application configuration and credentials security
• AWS service endpoints
• Control ports, protocols, and network traffic on AWS
• Secure application access
• Security services with appropriate use cases (for example, Amazon Cognito, Amazon GuardDuty, Amazon Macie)
• Threat vectors external to AWS (for example, DDoS, SQL injection)
Skills in:
• Designing VPC architectures with security components (for example, security groups, route tables, network ACLs, NAT gateways; see the sketch after this list)
• Determining network segmentation strategies (for example, using public subnets and private subnets)
• Integrating AWS services to secure applications (for example, AWS Shield, AWS WAF, AWS SSO, AWS Secrets Manager)
• Securing external network connections to and from the AWS Cloud (for example, VPN, AWS Direct Connect)
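For the VPC security-components skill above, here is a minimal boto3 sketch that creates a security group and opens HTTPS to a single admin range; the VPC id and CIDR are assumptions.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a security group scoped to a hypothetical VPC.
sg_id = ec2.create_security_group(
    GroupName="web-tier-sg",
    Description="Allow HTTPS from the corporate range only",
    VpcId="vpc-0123456789abcdef0",
)["GroupId"]

# Allow inbound HTTPS (443) from one CIDR; everything else stays denied.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
    }],
)
```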
Task Statement 3: Determine appropriate data security controls.
Knowledge of:
• Data access and governance
• Data recovery
• Data retention and classification
• Encryption and appropriate key management
Skills in:
• Aligning AWS technologies to meet compliance requirements
• Encrypting data at rest (for example, AWS Key Management Service [AWS KMS])
• Encrypting data in transit (for example, AWS Certificate Manager [ACM] using TLS)
• Implementing access policies for encryption keys
• Implementing data backups and replications
• Implementing policies for data access, lifecycle, and protection
• Rotating encryption keys and renewing certificates
Domain 2: Design Resilient Architectures
This exam domain is focused on designing resilient architectures on AWS and comprises 26% of the exam. Task statements include:
Task Statement 1: Design scalable and loosely coupled architectures.
Knowledge of:
• API creation and management (for example, Amazon API Gateway, REST API)
• AWS managed services with appropriate use cases (for example, AWS Transfer Family, Amazon Simple Queue Service [Amazon SQS], Secrets Manager)
• Caching strategies
• Design principles for microservices (for example, stateless workloads compared with stateful workloads)
• Event-driven architectures
• Horizontal scaling and vertical scaling
• How to appropriately use edge accelerators (for example, content delivery network [CDN])
• How to migrate applications into containers
• Load balancing concepts (for example, Application Load Balancer)
• Multi-tier architectures
• Queuing and messaging concepts (for example, publish/subscribe)
• Serverless technologies and patterns (for example, AWS Fargate, AWS Lambda)
• Storage types with associated characteristics (for example, object, file, block)
• The orchestration of containers (for example, Amazon Elastic Container Service [Amazon ECS], Amazon Elastic Kubernetes Service [Amazon EKS])
• When to use read replicas
• Workflow orchestration (for example, AWS Step Functions)
Skills in:
• Designing event-driven, microservice, and/or multi-tier architectures based on requirements
• Determining scaling strategies for components used in an architecture design
• Determining the AWS services required to achieve loose coupling based on requirements
• Determining when to use containers
• Determining when to use serverless technologies and patterns
• Recommending appropriate compute, storage, networking, and database technologies based on requirements
• Using purpose-built AWS services for workloads
Task Statement 2: Design highly available and/or fault-tolerant architectures.
Knowledge of:
• AWS global infrastructure (for example, Availability Zones, AWS Regions, Amazon Route 53)
• AWS managed services with appropriate use cases (for example, Amazon Comprehend, Amazon Polly)
• Basic networking concepts (for example, route tables)
• Disaster recovery (DR) strategies (for example, backup and restore, pilot light, warm standby, active-active failover, recovery point objective [RPO], recovery time objective [RTO])
• Distributed design patterns
• Failover strategies
• Immutable infrastructure
• Load balancing concepts (for example, Application Load Balancer)
• Proxy concepts (for example, Amazon RDS Proxy)
• Service quotas and throttling (for example, how to configure the service quotas for a workload in a standby environment)
• Storage options and characteristics (for example, durability, replication)
• Workload visibility (for example, AWS X-Ray)
Skills in:
• Determining automation strategies to ensure infrastructure integrity
• Determining the AWS services required to provide a highly available and/or fault-tolerant architecture across AWS Regions or Availability Zones
• Identifying metrics based on business requirements to deliver a highly available solution
• Implementing designs to mitigate single points of failure
• Implementing strategies to ensure the durability and availability of data (for example, backups)
• Selecting an appropriate DR strategy to meet business requirements
• Using AWS services that improve the reliability of legacy applications and applications not built for the cloud (for example, when application changes are not possible)
• Using purpose-built AWS services for workloads
Domain 3: Design High-Performing Architectures
This exam domain is focused on designing high-performing architectures on AWS and comprises 24% of the exam. Task statements include:
Task Statement 1: Determine high-performing and/or scalable storage solutions.
Knowledge of:
• Hybrid storage solutions to meet business requirements
• Storage services with appropriate use cases (for example, Amazon S3, Amazon Elastic File System [Amazon EFS], Amazon Elastic Block Store [Amazon EBS])
• Storage types with associated characteristics (for example, object, file, block)
Skills in:
• Determining storage services and configurations that meet performance demands
• Determining storage services that can scale to accommodate future needs
Task Statement 2: Design high-performing and elastic compute solutions.
Knowledge of:
• AWS compute services with appropriate use cases (for example, AWS Batch, Amazon EMR, Fargate)
• Distributed computing concepts supported by AWS global infrastructure and edge services
• Queuing and messaging concepts (for example, publish/subscribe)
• Scalability capabilities with appropriate use cases (for example, Amazon EC2 Auto Scaling, AWS Auto Scaling)
• Serverless technologies and patterns (for example, Lambda, Fargate)
• The orchestration of containers (for example, Amazon ECS, Amazon EKS)
Skills in:
• Decoupling workloads so that components can scale independently
• Identifying metrics and conditions to perform scaling actions
• Selecting the appropriate compute options and features (for example, EC2 instance types) to meet business requirements
• Selecting the appropriate resource type and size (for example, the amount of Lambda memory) to meet business requirements
Task Statement 3: Determine high-performing database solutions.
Knowledge of:
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• Caching strategies and services (for example, Amazon ElastiCache)
• Data access patterns (for example, read-intensive compared with write-intensive)
• Database capacity planning (for example, capacity units, instance types, Provisioned IOPS)
• Database connections and proxies
• Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations)
• Database replication (for example, read replicas)
• Database types and services (for example, serverless, relational compared with non-relational, in-memory)
Skills in:
• Configuring read replicas to meet business requirements
• Designing database architectures
• Determining an appropriate database engine (for example, MySQL compared with PostgreSQL)
• Determining an appropriate database type (for example, Amazon Aurora, Amazon DynamoDB)
• Integrating caching to meet business requirements
Task Statement 4: Determine high-performing and/or scalable network architectures.
Knowledge of:
• Edge networking services with appropriate use cases (for example, Amazon CloudFront, AWS Global Accelerator)
• How to design network architecture (for example, subnet tiers, routing, IP addressing)
• Load balancing concepts (for example, Application Load Balancer)
• Network connection options (for example, AWS VPN, Direct Connect, AWS PrivateLink)
Skills in:
• Creating a network topology for various architectures (for example, global, hybrid, multi-tier)
• Determining network configurations that can scale to accommodate future needs
• Determining the appropriate placement of resources to meet business requirements
• Selecting the appropriate load balancing strategy
Task Statement 5: Determine high-performing data ingestion and transformation solutions.
Knowledge of:
• Data analytics and visualization services with appropriate use cases (for example, Amazon Athena, AWS Lake Formation, Amazon QuickSight)
• Data ingestion patterns (for example, frequency)
• Data transfer services with appropriate use cases (for example, AWS DataSync, AWS Storage Gateway)
• Data transformation services with appropriate use cases (for example, AWS Glue)
• Secure access to ingestion access points
• Sizes and speeds needed to meet business requirements
• Streaming data services with appropriate use cases (for example, Amazon Kinesis)
Skills in:
• Building and securing data lakes
• Designing data streaming architectures
• Designing data transfer solutions
• Implementing visualization strategies
• Selecting appropriate compute options for data processing (for example, Amazon EMR)
• Selecting appropriate configurations for ingestion
• Transforming data between formats (for example, .csv to .parquet; see the sketch after this list)
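The .csv-to-.parquet transformation mentioned above can be as small as the following sketch (file names are assumptions; pandas with pyarrow or fastparquet is assumed to be installed, and AWS Glue would do the equivalent at scale).

```python
import pandas as pd

# Read a row-oriented CSV and rewrite it as columnar Parquet,
# which typically compresses better and scans faster in Athena/EMR.
pd.read_csv("orders.csv").to_parquet("orders.parquet")
```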
Domain 4: Design Cost-Optimized Architectures
This exam domain is focused on optimizing solutions for cost-effectiveness on AWS and comprises 20% of the exam. Task statements include:
Task Statement 1: Design cost-optimized storage solutions.
Knowledge of:
• Access options (for example, an S3 bucket with Requester Pays object storage)
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, AWS Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• AWS storage services with appropriate use cases (for example, Amazon FSx, Amazon EFS, Amazon S3, Amazon EBS)
• Backup strategies
• Block storage options (for example, hard disk drive [HDD] volume types, solid state drive [SSD] volume types)
• Data lifecycles
• Hybrid storage options (for example, DataSync, Transfer Family, Storage Gateway)
• Storage access patterns
• Storage tiering (for example, cold tiering for object storage)
• Storage types with associated characteristics (for example, object, file, block)
Skills in:
• Designing appropriate storage strategies (for example, batch uploads to Amazon S3 compared with individual uploads)
• Determining the correct storage size for a workload
• Determining the lowest cost method of transferring data for a workload to AWS storage
• Determining when storage auto scaling is required
• Managing S3 object lifecycles
• Selecting the appropriate backup and/or archival solution
• Selecting the appropriate service for data migration to storage services
• Selecting the appropriate storage tier
• Selecting the correct data lifecycle for storage
• Selecting the most cost-effective storage service for a workload
Task Statement 2: Design cost-optimized compute solutions.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• AWS purchasing options (for example, Spot Instances, Reserved Instances, Savings Plans)
• Distributed compute strategies (for example, edge processing)
• Hybrid compute options (for example, AWS Outposts, AWS Snowball Edge)
• Instance types, families, and sizes (for example, memory optimized, compute optimized, virtualization)
• Optimization of compute utilization (for example, containers, serverless computing, microservices)
• Scaling strategies (for example, auto scaling, hibernation)
Skills in:
• Determining an appropriate load balancing strategy (for example, Application Load Balancer [Layer 7] compared with Network Load Balancer [Layer 4] compared with Gateway Load Balancer)
• Determining appropriate scaling methods and strategies for elastic workloads (for example, horizontal compared with vertical, EC2 hibernation)
• Determining cost-effective AWS compute services with appropriate use cases (for example, Lambda, Amazon EC2, Fargate)
• Determining the required availability for different classes of workloads (for example, production workloads, non-production workloads)
• Selecting the appropriate instance family for a workload
• Selecting the appropriate instance size for a workload
Task Statement 3: Design cost-optimized database solutions.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• Caching strategies
• Data retention policies
• Database capacity planning (for example, capacity units)
• Database connections and proxies
• Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations)
• Database replication (for example, read replicas)
• Database types and services (for example, relational compared with non-relational, Aurora, DynamoDB)
Skills in:
• Designing appropriate backup and retention policies (for example, snapshot frequency)
• Determining an appropriate database engine (for example, MySQL compared with PostgreSQL)
• Determining cost-effective AWS database services with appropriate use cases (for example, DynamoDB compared with Amazon RDS, serverless)
• Determining cost-effective AWS database types (for example, time series format, columnar format)
• Migrating database schemas and data to different locations and/or different database engines
Task Statement 4: Design cost-optimized network architectures.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• Load balancing concepts (for example, Application Load Balancer)
• NAT gateways (for example, NAT instance costs compared with NAT gateway costs)
• Network connectivity (for example, private lines, dedicated lines, VPNs)
• Network routing, topology, and peering (for example, AWS Transit Gateway, VPC peering)
• Network services with appropriate use cases (for example, DNS)
Skills in:
• Configuring appropriate NAT gateway types for a network (for example, a single shared NAT gateway compared with NAT gateways for each Availability Zone)
• Configuring appropriate network connections (for example, Direct Connect compared with VPN compared with internet)
• Configuring appropriate network routes to minimize network transfer costs (for example, Region to Region, Availability Zone to Availability Zone, private to public, Global Accelerator, VPC endpoints)
• Determining strategic needs for content delivery networks (CDNs) and edge caching
• Reviewing existing workloads for network optimizations
• Selecting an appropriate throttling strategy
• Selecting the appropriate bandwidth allocation for a network device (for example, a single VPN compared with multiple VPNs, Direct Connect speed)

Which key tools, technologies, and concepts might be covered on the exam?
The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
• Compute
• Cost management
• Database
• Disaster recovery
• High performance
• Management and governance
• Microservices and component decoupling
• Migration and data transfer
• Networking, connectivity, and content delivery
• Resiliency
• Security
• Serverless and event-driven design principles
• Storage
AWS Services and Features
There are lots of new services and feature updates in scope for the new AWS Certified Solutions Architect Associate certification! Here’s a list of some of the new services that will be in scope for the new version of the exam:
Analytics:
• Amazon Athena
• AWS Data Exchange
• AWS Data Pipeline
• Amazon EMR
• AWS Glue
• Amazon Kinesis
• AWS Lake Formation
• Amazon Managed Streaming for Apache Kafka (Amazon MSK)
• Amazon OpenSearch Service (Amazon Elasticsearch Service)
• Amazon QuickSight
• Amazon Redshift
Application Integration:
• Amazon AppFlow
• AWS AppSync
• Amazon EventBridge (Amazon CloudWatch Events)
• Amazon MQ
• Amazon Simple Notification Service (Amazon SNS)
• Amazon Simple Queue Service (Amazon SQS)
• AWS Step Functions
AWS Cost Management:
• AWS Budgets
• AWS Cost and Usage Report
• AWS Cost Explorer
• Savings Plans
Compute:
• AWS Batch
• Amazon EC2
• Amazon EC2 Auto Scaling
• AWS Elastic Beanstalk
• AWS Outposts
• AWS Serverless Application Repository
• VMware Cloud on AWS
• AWS Wavelength
Containers:
• Amazon Elastic Container Registry (Amazon ECR)
• Amazon Elastic Container Service (Amazon ECS)
• Amazon ECS Anywhere
• Amazon Elastic Kubernetes Service (Amazon EKS)
• Amazon EKS Anywhere
• Amazon EKS Distro
Database:
• Amazon Aurora
• Amazon Aurora Serverless
• Amazon DocumentDB (with MongoDB compatibility)
• Amazon DynamoDB
• Amazon ElastiCache
• Amazon Keyspaces (for Apache Cassandra)
• Amazon Neptune
• Amazon Quantum Ledger Database (Amazon QLDB)
• Amazon RDS
• Amazon Redshift
• Amazon Timestream
Developer Tools:
• AWS X-Ray
Front-End Web and Mobile:
• AWS Amplify
• Amazon API Gateway
• AWS Device Farm
• Amazon Pinpoint
Machine Learning:
• Amazon Comprehend
• Amazon Forecast
• Amazon Fraud Detector
• Amazon Kendra
• Amazon Lex
• Amazon Polly
• Amazon Rekognition
• Amazon SageMaker
• Amazon Textract
• Amazon Transcribe
• Amazon Translate
Management and Governance:
• AWS Auto Scaling
• AWS CloudFormation
• AWS CloudTrail
• Amazon CloudWatch
• AWS Command Line Interface (AWS CLI)
• AWS Compute Optimizer
• AWS Config
• AWS Control Tower
• AWS License Manager
• Amazon Managed Grafana
• Amazon Managed Service for Prometheus
• AWS Management Console
• AWS Organizations
• AWS Personal Health Dashboard
• AWS Proton
• AWS Service Catalog
• AWS Systems Manager
• AWS Trusted Advisor
• AWS Well-Architected Tool
Media Services:
• Amazon Elastic Transcoder
• Amazon Kinesis Video Streams
Migration and Transfer:
• AWS Application Discovery Service
• AWS Application Migration Service (CloudEndure Migration)
• AWS Database Migration Service (AWS DMS)
• AWS DataSync
• AWS Migration Hub
• AWS Server Migration Service (AWS SMS)
• AWS Snow Family
• AWS Transfer Family
Networking and Content Delivery:
• Amazon CloudFront
• AWS Direct Connect
• Elastic Load Balancing (ELB)
• AWS Global Accelerator
• AWS PrivateLink
• Amazon Route 53
• AWS Transit Gateway
• Amazon VPC
• AWS VPN
Security, Identity, and Compliance:
• AWS Artifact
• AWS Audit Manager
• AWS Certificate Manager (ACM)
• AWS CloudHSM
• Amazon Cognito
• Amazon Detective
• AWS Directory Service
• AWS Firewall Manager
• Amazon GuardDuty
• AWS Identity and Access Management (IAM)
• Amazon Inspector
• AWS Key Management Service (AWS KMS)
• Amazon Macie
• AWS Network Firewall
• AWS Resource Access Manager (AWS RAM)
• AWS Secrets Manager
• AWS Security Hub
• AWS Shield
• AWS Single Sign-On
• AWS WAF
Serverless:
• AWS AppSync
• AWS Fargate
• AWS Lambda
Storage:
• AWS Backup
• Amazon Elastic Block Store (Amazon EBS)
• Amazon Elastic File System (Amazon EFS)
• Amazon FSx (for all types)
• Amazon S3
• Amazon S3 Glacier
• AWS Storage Gateway
Out-of-scope AWS services and features
The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.
Analytics:
• Amazon CloudSearch
Application Integration:
• Amazon Managed Workflows for Apache Airflow (Amazon MWAA)
AR and VR:
• Amazon Sumerian
Blockchain:
• Amazon Managed Blockchain
Compute:
• Amazon Lightsail
Database:
• Amazon RDS on VMware
Developer Tools:
• AWS Cloud9
• AWS Cloud Development Kit (AWS CDK)
• AWS CloudShell
• AWS CodeArtifact
• AWS CodeBuild
• AWS CodeCommit
• AWS CodeDeploy
• Amazon CodeGuru
• AWS CodeStar
• Amazon Corretto
• AWS Fault Injection Simulator (AWS FIS)
• AWS Tools and SDKs
Front-End Web and Mobile:
• Amazon Location Service
Game Tech:
• Amazon GameLift
• Amazon Lumberyard
Internet of Things:
• All services
Which new AWS services will be covered in the SAA-C03?
AWS Data Exchange,
AWS Data Pipeline,
AWS Lake Formation,
Amazon Managed Streaming for Apache Kafka,
Amazon AppFlow,
AWS Outposts,
VMware Cloud on AWS,
AWS Wavelength,
Amazon Neptune,
Amazon Quantum Ledger Database,
Amazon Timestream,
AWS Amplify,
Amazon Comprehend,
Amazon Forecast,
Amazon Fraud Detector,
Amazon Kendra,
AWS License Manager,
Amazon Managed Grafana,
Amazon Managed Service for Prometheus,
AWS Proton,
Amazon Elastic Transcoder,
Amazon Kinesis Video Streams,
AWS Application Discovery Service,
AWS WAF,
AWS AppSync
Get the AWS SAA-C03 Exam Prep App on: iOS – Android – Windows 10/11
AWS solutions architect associate exam prep facts and summaries questions and answers dump – Solution Architecture Definition 1:
Solution architecture is the practice of defining and describing the architecture of a system delivered in the context of a specific solution; as such, it may encompass a description of an entire system or only specific parts of it. Definition of a solution architecture is typically led by a solution architect.
AWS solutions architect associate exam prep facts and summaries questions and answers dump – Solution Architecture Definition 2:
The AWS Certified Solutions Architect – Associate examination is intended for individuals who perform a solutions architect role and have one or more years of hands-on experience designing available, cost-efficient, fault-tolerant, and scalable distributed systems on AWS.
AWS solutions architect associate exam prep facts and summaries questions and answers dump – AWS Solution Architect Associate Exam Facts and Summaries (SAA-C03)
- Take an AWS Training Class
- Study AWS Whitepapers and FAQs: AWS Well-Architected webpage (various whitepapers linked)
- If you are running an application in a production environment and must add a new EBS volume with data from a snapshot, what could you do to avoid degraded performance during the volume's first use?
Initialize the data by reading each storage block on the volume. Volumes created from an EBS snapshot must be initialized. Initialization occurs the first time a storage block on the volume is read, and performance can be degraded by up to 50%. You can avoid this impact in production environments by pre-warming the volume, that is, by reading all of its blocks.
- If you are running a legacy application that has hard-coded static IP addresses and it is running on an EC2 instance, what is the best failover solution that allows you to keep the same IP address on a new instance?
Elastic IP addresses (EIPs) are designed to be attached/detached and moved from one EC2 instance to another. They are a great solution for keeping a static IP address and moving it to a new instance if the current instance fails. This will reduce or eliminate any downtime users may experience.
- Which feature of Intel processors helps to encrypt data without significant impact on performance?
AES-NI
- You can mount EFS from which two of the following?
- On-prem servers running Linux
- EC2 instances running Linux
EFS is not compatible with Windows operating systems.
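To illustrate the Elastic IP failover item above, here is a hedged boto3 sketch that re-points an existing EIP at a replacement instance; the allocation and instance ids are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Move an existing Elastic IP to the standby/replacement instance.
ec2.associate_address(
    AllocationId="eipalloc-0123456789abcdef0",  # hypothetical EIP allocation
    InstanceId="i-0123456789abcdef0",           # hypothetical new instance
    AllowReassociation=True,  # detach from the failed instance if still attached
)
```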
- When a file is encrypted and the stored data is not in transit, it's known as encryption at rest. What is an example of encryption at rest? Data stored on an encrypted EBS volume or in an encrypted S3 bucket.
- When would vertical scaling be necessary? When an application is built as a single codebase, otherwise known as a monolithic application.
- Fault-Tolerance allows for continuous operation throughout a failure, which can lead to a low Recovery Time Objective. RPO vs. RTO
- High-Availability means automating tasks so that an instance will quickly recover, which can lead to a low Recovery Time Objective. RPO vs. RTO
- Frequent backups reduce the time between the last backup and recovery point, otherwise known as the Recovery Point Objective. RPO vs. RTO
- Which represents the difference between Fault-Tolerance and High-Availability? High-Availability means the system will quickly recover from a failure event, and Fault-Tolerance means the system will maintain operations during a failure.
- From a security perspective, what is a principal? An anonymous user falls under the definition of a principal. A principal can be an anonymous user acting on a system.
An authenticated user falls under the definition of a principal. A principal can be an authenticated user acting on a system.
- What are two types of session data saving for an Application Session State? Stateless and Stateful
- It is the customer's responsibility to patch the operating system on an EC2 instance.
- In designing an environment, what four main points should a Solutions Architect keep in mind? Cost efficiency, security, application session state, and undifferentiated heavy lifting: these four main points should frame the design of an environment.
- In the context of disaster recovery, what does RPO stand for? RPO is the abbreviation for Recovery Point Objective.
- What are the benefits of horizontal scaling?
- Vertical scaling can be costly while horizontal scaling is cheaper.
- Horizontal scaling suffers from none of the size limitations of vertical scaling.
- Having horizontal scaling means you can easily route traffic to another instance of a server.
Reference: AWS Solution Architect Associate Exam Prep
Top 100 AWS solutions architect associate exam prep facts and summaries questions and answers dump – SAA-C03
Top AWS solutions architect associate exam prep facts and summaries questions and answers dump – Quizzes

Q1: A Solutions Architect is designing a critical business application with a relational database that runs on an EC2 instance. It requires a single EBS volume that can support up to 16,000 IOPS.
Which Amazon EBS volume type can meet the performance requirements of this application?
- A. EBS Provisioned IOPS SSD
- B. EBS Throughput Optimized HDD
- C. EBS General Purpose SSD
- D. EBS Cold HDD
Q2: An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk.
Which solution will resolve the security concern?
- A. Access the data through an Internet Gateway.
- B. Access the data through a VPN connection.
- C. Access the data through a NAT Gateway.
- D. Access the data through a VPC endpoint for Amazon S3
Q3: An organization is building an Amazon Redshift cluster in their shared services VPC. The cluster will host sensitive data.
How can the organization control which networks can access the cluster?
- A. Run the cluster in a different VPC and connect through VPC peering.
- B. Create a database user inside the Amazon Redshift cluster only for users on the network.
- C. Define a cluster security group for the cluster that allows access from the allowed networks.
- D. Only allow access to networks that connect with the shared services network via VPN.

Q4: A web application allows customers to upload orders to an S3 bucket. The resulting Amazon S3 events trigger a Lambda function that inserts a message into an SQS queue. A single EC2 instance reads messages from the queue, processes them, and stores them in a DynamoDB table partitioned by unique order ID. Next month traffic is expected to increase by a factor of 10, and a Solutions Architect is reviewing the architecture for possible scaling problems.
Which component is MOST likely to need re-architecting to be able to scale to accommodate the new traffic?
- A. Lambda function
- B. SQS queue
- C. EC2 instance
- D. DynamoDB table
Q5: An application requires a highly available relational database with an initial storage capacity of 8 TB. The database will grow by 8 GB every day. To support expected traffic, at least eight read replicas will be required to handle database reads.
Which option will meet these requirements?
- A. DynamoDB
- B. Amazon S3
- C. Amazon Aurora
- D. Amazon Redshift
Q6: How can you improve the performance of EFS?
- A. Use an instance-store backed EC2 instance.
- B. Provision more throughput than is required.
- C. Divide your file system into multiple smaller file systems.
- D. Provision higher IOPS for your EFS.
Q7: If you are designing an application that requires fast (10–25 Gbps), low-latency connections between EC2 instances, what EC2 feature should you use?
- A. Snapshots
- B. Instance store volumes
- C. Placement groups
- D. IOPS provisioned instances.

Q8: A Solution Architect is designing an online shopping application running in a VPC on EC2 instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application tier must read and write data to a customer managed database cluster. There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet.
Which VPC design meets these requirements?
- A. Public subnets for both the application tier and the database cluster
- B. Public subnets for the application tier, and private subnets for the database cluster
- C. Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster
- D. Public subnets for the application tier, and private subnets for the database cluster and NAT Gateway
Q9: What command should you run on a running instance if you want to view its user data (that is used at launch)?
- A. curl http://254.169.254.169/latest/user-data
- B. curl http://localhost/latest/meta-data/bootstrap
- C. curl http://localhost/latest/user-data
- D. curl http://169.254.169.254/latest/user-data
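For context on Q9, the modern (IMDSv2) way to read user data requires fetching a session token first. A sketch against the well-known instance metadata endpoint, meant to run on the instance itself:

```python
import requests

# IMDSv2: request a short-lived session token...
token = requests.put(
    "http://169.254.169.254/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    timeout=2,
).text

# ...then present it to read the instance's launch-time user data.
user_data = requests.get(
    "http://169.254.169.254/latest/user-data",
    headers={"X-aws-ec2-metadata-token": token},
    timeout=2,
).text
print(user_data)
```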

Q10: A company is developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? (Select TWO.)
- A. CloudWatch
- B. DynamoDB
- C. Elastic Load Balancing
- D. ElastiCache
- E. Storage Gateway
Q11: From a security perspective, what is a principal?
- A. An identity
- B. An anonymous user
- C. An authenticated user
- D. A resource
Q12: What are the characteristics of a tiered application?
- A. All three application layers are on the same instance
- B. The presentation tier is on a separate instance from the logic layer
- C. None of the tiers can be cloned
- D. The logic layer is on a separate instance from the data layer
- E. Additional machines can be added to help the application by implementing horizontal scaling
- F. Incapable of horizontal scaling
Q13: When using horizontal scaling, how can a server's capacity closely match its rising demand?
A. By frequently purchasing additional instances and smaller resources
B. By purchasing more resources very far in advance
C. By purchasing more resources after demand has risen
D. It is not possible to predict demand
Q14: What is the concept behind AWS’ Well-Architected Framework?
A. It’s a set of best practice areas, principles, and concepts that can help you implement effective AWS solutions.
B. It’s a set of best practice areas, principles, and concepts that can help you implement effective solutions tailored to your specific business.
C. It’s a set of best practice areas, principles, and concepts that can help you implement effective solutions from another web host.
D. It’s a set of best practice areas, principles, and concepts that can help you implement effective E-Commerce solutions.
Question 127: Which options are examples of steps you take to protect your serverless application from attacks? (Select FOUR.)
A. Update your operating system with the latest patches.
B. Configure geoblocking on Amazon CloudFront in front of regional API endpoints.
C. Disable origin access identity on Amazon S3.
D. Disable CORS on your APIs.
E. Use resource policies to limit access to your APIs to users from a specified account.
F. Filter out specific traffic patterns with AWS WAF.
G. Parameterize queries so that your Lambda function expects a single input.
Question 128: Which options reflect best practices for automating your deployment pipeline with serverless applications? (Select TWO.)
A. Select one deployment framework and use it for all of your deployments for consistency.
B. Use different AWS accounts for each environment in your deployment pipeline.
C. Use AWS SAM to configure safe deployments and include pre- and post-traffic tests.
D. Create a specific AWS SAM template to match each environment to keep them distinct.
Question 129: Your application needs to connect to an Amazon RDS instance on the backend. What is the best recommendation to the developer whose function must read from and write to the Amazon RDS instance?
A. Use reserved concurrency to limit the number of concurrent functions that would try to write to the database
B. Use the database proxy feature to provide connection pooling for the functions
C. Initialize the number of connections you want outside of the handler
D. Use the database TTL setting to clean up connections

Question 130: A company runs a cron job on an Amazon EC2 instance on a predefined schedule. The cron job calls a bash script that encrypts a 2 KB file. A security engineer creates an AWS Key Management Service (AWS KMS) CMK with a key policy.
The key policy and the EC2 instance role have the necessary configuration for this job.
Which process should the bash script use to encrypt the file?
A) Use the aws kms encrypt command to encrypt the file by using the existing CMK.
B) Use the aws kms create-grant command to generate a grant for the existing CMK.
C) Use the aws kms encrypt command to generate a data key. Use the plaintext data key to encrypt the file.
D) Use the aws kms generate-data-key command to generate a data key. Use the encrypted data key to encrypt the file.
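Background for Question 130: the KMS Encrypt API accepts payloads of up to 4 KB directly under a CMK, while larger data typically uses envelope encryption with a generated data key. A hedged boto3 sketch of both calls (the key alias and file name are assumptions):

```python
import boto3

kms = boto3.client("kms")

# Small payloads (up to 4 KB, like the 2 KB file in the question)
# can be encrypted directly under the CMK.
with open("small.txt", "rb") as f:
    blob = kms.encrypt(KeyId="alias/example-cmk", Plaintext=f.read())["CiphertextBlob"]

# Larger data usually takes the envelope-encryption route: generate a data
# key, encrypt locally with resp["Plaintext"], and persist only the
# ciphertext plus the encrypted key resp["CiphertextBlob"].
resp = kms.generate_data_key(KeyId="alias/example-cmk", KeySpec="AES_256")
```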
Question 131: A Security engineer must develop an AWS Identity and Access Management (IAM) strategy for a company’s organization in AWS Organizations. The company needs to give developers autonomy to develop and test their applications on AWS, but the company also needs to implement security guardrails to help protect itself. The company creates and distributes applications with different levels of data classification and types. The solution must maximize scalability.
Which combination of steps should the security engineer take to meet these requirements? (Choose three.)
A) Create an SCP to restrict access to highly privileged or unauthorized actions to specific IAM principals. Assign the SCP to the appropriate AWS accounts.
B) Create an IAM permissions boundary to allow access to specific actions and IAM principals. Assign the IAM permissions boundary to all IAM principals within the organization.
C) Create a delegated IAM role that has capabilities to create other IAM roles. Use the delegated IAM role to provision IAM principals by following the principle of least privilege.
D) Create OUs based on data classification and type. Add the AWS accounts to the appropriate OU. Provide developers access to the AWS accounts based on business need.
E) Create IAM groups based on data classification and type. Add only the required developers’ IAM role to the IAM groups within each AWS account.
F) Create IAM policies based on data classification and type. Add the minimum required IAM policies to the developers’ IAM role within each AWS account.
Question 132: A company is ready to deploy a public web application. The company will use AWS and will host the application on an Amazon EC2 instance. The company must use SSL/TLS encryption. The company is already using AWS Certificate Manager (ACM) and will export a certificate for use with the deployment.
How can a security engineer deploy the application to meet these requirements?
A) Put the EC2 instance behind an Application Load Balancer (ALB). In the EC2 console, associate the certificate with the ALB by choosing HTTPS and 443.
B) Put the EC2 instance behind a Network Load Balancer. Associate the certificate with the EC2 instance.
C) Put the EC2 instance behind a Network Load Balancer (NLB). In the EC2 console, associate the certificate with the NLB by choosing HTTPS and 443.
D) Put the EC2 instance behind an Application Load Balancer. Associate the certificate with the EC2 instance.
What are the 6 pillars of the AWS Well-Architected Framework?
AWS Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. Based on six pillars (operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability), AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures and implement designs that can scale over time.
1. Operational Excellence
The operational excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. You can find prescriptive guidance on implementation in the Operational Excellence Pillar whitepaper.
2. Security
The security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. You can find prescriptive guidance on implementation in the Security Pillar whitepaper.
3. Reliability
The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. You can find prescriptive guidance on implementation in the Reliability Pillar whitepaper.
4. Performance Efficiency
The performance efficiency pillar includes the ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve. You can find prescriptive guidance on implementation in the Performance Efficiency Pillar whitepaper.
5. Cost Optimization
The cost optimization pillar includes the ability to avoid or eliminate unneeded cost or suboptimal resources. You can find prescriptive guidance on implementation in the Cost Optimization Pillar whitepaper.
6. Sustainability
- The ability to increase efficiency across all components of a workload by maximizing the benefits from the provisioned resources.
- There are six best practice areas for sustainability in the cloud:
- Region Selection – AWS Global Infrastructure
- User Behavior Patterns – Auto Scaling, Elastic Load Balancing
- Software and Architecture Patterns – AWS Design Principles
- Data Patterns – Amazon EBS, Amazon EFS, Amazon FSx, Amazon S3
- Hardware Patterns – Amazon EC2, AWS Elastic Beanstalk
- Development and Deployment Process – AWS CloudFormation
- Key AWS service:
- Amazon EC2 Auto Scaling
Source: 6 Pillars of the AWS Well-Architected Framework
The AWS Well-Architected Framework provides architectural best practices across the six pillars for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. The framework provides a set of questions that allows you to review an existing or proposed architecture. It also provides a set of AWS best practices for each pillar.
Using the Framework in your architecture helps you produce stable and efficient systems, which allows you to focus on functional requirements.
Other AWS Facts and Summaries and Questions/Answers Dump
- AWS Certified Solution Architect Associate Exam Prep App
- AWS S3 facts and summaries and Q&A Dump
- AWS DynamoDB facts and summaries and Questions and Answers Dump
- AWS EC2 facts and summaries and Questions and Answers Dump
- AWS Serverless facts and summaries and Questions and Answers Dump
- AWS Developer and Deployment Theory facts and summaries and Questions and Answers Dump
- AWS IAM facts and summaries and Questions and Answers Dump
- AWS Lambda facts and summaries and Questions and Answers Dump
- AWS SQS facts and summaries and Questions and Answers Dump
- AWS RDS facts and summaries and Questions and Answers Dump
- AWS ECS facts and summaries and Questions and Answers Dump
- AWS CloudWatch facts and summaries and Questions and Answers Dump
- AWS SES facts and summaries and Questions and Answers Dump
- AWS EBS facts and summaries and Questions and Answers Dump
- AWS ELB facts and summaries and Questions and Answers Dump
- AWS Autoscaling facts and summaries and Questions and Answers Dump
- AWS VPC facts and summaries and Questions and Answers Dump
- AWS KMS facts and summaries and Questions and Answers Dump
- AWS Elastic Beanstalk facts and summaries and Questions and Answers Dump
- AWS CodeBuild facts and summaries and Questions and Answers Dump
- AWS CodeDeploy facts and summaries and Questions and Answers Dump
- AWS CodePipeline facts and summaries and Questions and Answers Dump
What does “undifferentiated heavy lifting” mean?
The reality, of course, today is that if you come up with a great idea you don’t get to go quickly to a successful product. There’s a lot of undifferentiated heavy lifting that stands between your idea and that success. The kinds of things that I’m talking about when I say undifferentiated heavy lifting are things like these: figuring out which servers to buy, how many of them to buy, and on what timeline to buy them.
Eventually you end up with heterogeneous hardware and you have to match that. You have to think about backup scenarios if you lose your data center or lose connectivity to a data center. Eventually you have to move facilities. There’s negotiations to be done. It’s a very complex set of activities that really is a big driver of ultimate success.
But they are undifferentiated from, it’s not the heart of, your idea. We call this muck. And it gets worse because what really happens is you don’t have to do this one time. You have to drive this loop. After you get your first version of your idea out into the marketplace, you’ve done all that undifferentiated heavy lifting, you find out that you have to cycle back. Change your idea. The winners are the ones that can cycle this loop the fastest.
On every cycle of this loop you have this undifferentiated heavy lifting, or muck, that you have to contend with. I believe that for most companies, and it’s certainly true at Amazon, that 70% of your time, energy, and dollars go into the undifferentiated heavy lifting and only 30% of your energy, time, and dollars gets to go into the core kernel of your idea.
I think what people are excited about is that they’re going to get a chance they see a future where they may be able to invert those two. Where they may be able to spend 70% of their time, energy and dollars on the differentiated part of what they’re doing.
AWS Certified Solutions Architect Associates Questions and Answers around the web.
Testimonial: Passed SAA-C02!
So my exam was yesterday and I got the results in 24 hours. I think that’s how they review all SAA exams now; they no longer show the results right away.
I scored 858. I was practicing with Stephane’s Udemy lectures and Bonso exam tests. My test results were as follows: Test 1: 63%, 93%; Test 2: 67%, 87%; Test 3: 81%; Test 4: 72%; Test 5: 75%; Test 6: 81%; Stephane’s test: 80%.
I was reading all question explanations (even the ones I got correct)
The actual exam was pretty much similar to these. The topics I got were:
A lot of S3 (make sure you know all of it from head to toes)
VPC peering
DataSync and Database Migration Service in same questions. Make sure you know the difference
One EKS question
2-3 KMS questions
Security group question
A lot of RDS Multi-AZ
SQS + SNS fan out pattern
ECS microservice architecture question
Route 53
NAT gateway
And that’s all I can remember.
I took extra 30 minutes, because English is not my native language and I had plenty of time to think and then review flagged questions.
Good luck with your exams guys!
Testimonial: Passed SAA-C02

Hey guys, just giving my update so all of you guys working towards your certs can stay motivated as these success stories drove me to reach this goal.
Background: 12 years of military IT experience, never worked with the cloud. I’ve done 7 deployments (that is a lot in 12 years), at which point I came home from the last one burnt out with a family that barely knew me. I knew I needed a change, but had no clue where to start or what I wanted to do. I wasn’t really interested in IT but I knew it’d pay the bills. After seeing videos about people in IT working from home(which after 8+ years of being gone from home really appealed to me), I stumbled across a video about a Solutions Architect’s daily routine working from home and got me interested in AWS.
AWS Solutions Architect SAA Certification Preparation time: It took me 68 days straight of hard work to pass this exam with confidence. No rest days, more than 120 pages of hand-written notes and hundreds and hundreds of flash cards.
In the beginning, I hopped on Stephane Maarek’s course for the CCP exam just to see if it was for me. I did the course in about a week and then, after doing some research on here, got the CCP practice exams from tutorialsdojo.com. Two weeks after starting the Udemy course, I passed the exam. By that point, I’d already done lots of research on the different career paths and the best way to study, etc.
Cantrill (10/10) – That same day, I hopped onto Cantrill’s course for the SAA and got to work. Somebody had mentioned that by doing his courses you’d be over-prepared for the exam. While I think a combination of material is really important for passing the certification with confidence, I can say without a doubt Cantrill’s courses got me 85-90% of the way there. His forum is also amazing, and has directly contributed to me talking with somebody who works at AWS to land me a job, which makes the money I spent on all of his courses A STEAL. As I continue my journey (up next is SA Pro), I will be using all of his courses.
Neal Davis (8/10) – After completing Cantrill’s course, I found myself needing a resource to reinforce all the material I’d just learned. AWS is an expansive platform and the many intricacies of the different services can be tricky. For this portion, I relied on Neal Davis’s Training Notes series. These training notes are a very condensed version of the information you’ll need to pass the exam, and with the proper context are very useful for finding the things you may have missed in your initial learning. I will be using his other Training Notes for my other exams as well.
TutorialsDojo (10/10) – These tests filled in the gaps and allowed me to spot my weaknesses and shore them up. I actually think my real exam was harder than these, but because I’d spent so much time on the material I got wrong, I was able to pass the exam with a safe score.
As I said, I was surprised at how difficult the exam was. A lot of my questions were related to DBs, and a lot of them gave no context as to whether the data being loaded into them was SQL or NoSQL, which made the choice selection a little frustrating. A lot of the questions have 2 VERY SIMILAR answers, and oftentimes the wording of the answers could be easy to misinterpret (such as when you are creating a Read Replica, do you attach it to the primary application DB that is slowing down because of read issues, or attach it to the service that is causing the primary DB to slow down). For context, I was scoring 95-100% on the TD exams prior to taking the test and managed an 823 on the exam, so I don’t know if I got unlucky with a hard test or if I’m not as prepared as I thought I was (i.e. over-thinking questions).
Anyways, up next is going back over the practical parts of the course as I gear up for the SA Pro exam. I will be taking my time with this one, and re-learning the Linux CLI in preparation for finding a new job.
PS if anybody on here is hiring, I’m looking! I’m the hardest worker I know and my goal is to make your company as streamlined and profitable as possible. 🙂
Testimonial: How did you prepare for AWS Certified Solutions Architect – Associate Level certification?
Best way to prepare for aws solution architect associate certification
Practical knowledge accounts for about 30%; the rest comes from Jayendra’s blog and practice dumps.
Buying Udemy courses alone doesn’t make you pass. I can say with confidence that without the dumps and Jayendra’s blog, it is not easy to clear the certification.
Read FAQs of S3, IAM, EC2, VPC, SQS, Autoscaling, Elastic Load Balancer, EBS, RDS, Lambda, API Gateway, ECS.
Read the Security Whitepaper and Shared Responsibility model.
Also very important: basic questions on the topics most recently introduced to the exam, such as Amazon Kinesis, etc.
– ACloudGuru course with practice tests
– Created my own cheat sheet in Excel
– Practice questions on various websites
– A few AWS services’ FAQs
– Some questions tested your understanding of which service to pick for a given use case.
– many questions on VPC
– a couple of unexpected questions on AWS CloudHSM, AWS Systems Manager, and Amazon Athena
– encryption at rest and in transit services
– migration from on-premise to AWS
– backing up data at the AZ level vs. the regional level
I believe the time was sufficient.
Overall I feel AWS SAA was more challenging in theory than GCP Associate CE.
some resources I bookmarked:
- Comparison of AWS Services
- Solutions Architect – Associate | Qwiklabs
- okeeffed/cheat-sheets
- A curated list of AWS resources to prepare for the AWS Certifications
- AWS Cheat Sheet
Whitepapers are the important information about each service published by Amazon on its website. If you are preparing for the AWS certifications, it is very important to read some of the most recommended whitepapers before writing the exam.
The following is a list of whitepapers that are useful for preparing for the Solutions Architect exam. You will also find the list of whitepapers in the exam blueprint.
- Overview of Security Processes
- Storage Options in the Cloud
- Defining Fault Tolerant Applications in the AWS Cloud
- Overview of Amazon Web Services
- Compliance Whitepaper
- Architecting for the AWS Cloud
Data Security questions can be among the more challenging, and it’s worth noting that you need a good understanding of the security processes described in the whitepaper titled “Overview of Security Processes”.
In the above list, the most important whitepapers are Overview of Security Processes and Storage Options in the Cloud. Read more here…
Big thanks to /u/acantril for his amazing course – AWS Certified Solutions Architect – Associate (SAA-C02) – the best IT course I’ve ever had – and I’ve done many on various other platforms:
CBTNuggets
LinuxAcademy
ACloudGuru
Udemy
Linkedin
O’Reilly
- #AWS #SAAC02 #SAAC03 #SolutionsArchitect #AWSSAA #SAA #AWSCertification #AWSTraining #LearnAWS #CloudArchitect #SolutionsArchitect #Djamgatech
If you’re on the fence with buying one of his courses, stop thinking and buy it, I guarantee you won’t regret it! Other materials used for study:
Jon Bonso Practice Exams for SAA-C02 @ Tutorialsdojo (amazing practice exams!)
Random YouTube videos (example)
Official AWS Documentation (example)
TechStudySlack (learning community)
Study duration approximately ~3 months with the following regimen:
Daily study from 30 min to 2 hrs
Usually early morning before work
Sometimes on the train when commuting from/to work
Sometimes in the evening
Due to being a father/husband, study wasn’t always possible
All learned topics reviewed weekly
Testimonial: I passed SAA-C02 … But don’t do what I did to pass it

I’ve been following this subreddit for a while and have gotten some helpful tips, so I’d like to give back with my two cents. FYI, I passed the exam with a 788.
The exam materials that I used were the following:
AWS Certified Solutions Architect Associate All-in-One Exam Guide (Banerjee)
Stephane Maarek’s Udemy course, and his 6 practice exams
Adrian Cantrill’s online course (about 60% done)
TutorialDojo’s exams
(My company has a Udemy Business account so I was able to use Stephane’s course/exams)
I scheduled my exam at the end of March, and started with Adrian’s. But I was dumb, thinking I could get through his course within 3 weeks… I stopped around 12% of the way through his course, went to the textbook, and finished reading the all-in-one exam guide within a weekend. Then I started going through Stephane’s course. While working through the course, I pushed the exam back to the end of April, because I knew I wouldn’t be ready by the time the exam came along.
Five days before the exam, I finished Stephane’s course, and then did his final exam on the course. I failed miserably (around 50%). So I did one of Stephane’s practice exams and did worse (42%). I thought maybe his exams were just slightly more difficult, so I went and bought Jon Bonso’s exams and got 60% on the first one. And then I realized, based on all the questions on those exams, that I was definitely lacking some fundamentals. I went back to Adrian’s course and things were definitely sticking more – I think it has to do with his explanations plus the more practical stuff. Unfortunately, I could not finish his course before the exam (because I was cramming), and by the day of the exam, I had only done four of Bonso’s six exams, barely passing one of them.
Please, don’t do what I did. I was desperate to get this thing over with. I wanted to move on and work on other things for my job search, but if you’re not in this situation, please don’t do this. I can’t for the love of god tell you about OAI and CloudFront and why that’s different from an S3 URL. The only things I can remember are all the practical exercises I did in Adrian’s course. I’ll never forget how to create a VPC, because he makes you go through it manually. I’m not against Stephane’s course – each is different in its own way (see the tips below).
So here’s what I recommend doing before sitting the AWS exam:
Don’t schedule your exam beforehand. Go through the materials you’re using, and make sure you get at least 80% on all of Jon Bonso’s exams (I’d recommend 90% or higher).
If you like to learn things practically, I do recommend Adrian’s course. If you like to learn things conceptually, go with Stephane Maarek’s course. I found Stephane’s course more detailed when going through different architectures, but take that with a grain of salt, because I didn’t finish Adrian’s course.
Jon Bonso’s exams were about the same difficulty as the actual exam, but slightly more tricky. For example, many of the questions will give you two situations that seem to contradict each other, and you really have to figure out the one specific thing the question is actually asking for. However, there were a few questions that were definitely obvious if you knew the service.
I’m upset that even though I passed the exam, I’m still lacking some practical skills, so I’m just going to go through Adrian’s Developer course, but without cramming this time. If you actually learn the materials and practice them, they are definitely useful in the real world. I hope this helps you pass, and actually learn the stuff.
P.S I vehemently disagree with Adrian in one thing in his course. doggogram.io is definitely better than catagram.io, although his cats are pretty cool
Testimonial: I passed the SAA-C02 exam!

I sat the exam at a PearsonVUE test centre and scored 816.
The exam had lots of questions around S3, RDS and storage. To be honest it was a bit of a blur but they are the ones I remember.
I was a bit worried before sitting the exam, as I had only hit 76% in the official AWS practice exam the night before, but it turned out alright in the end!
I have around 8 years of experience in IT but AWS was relatively new to me around 5 weeks ago.
Training Material Used
Firstly I ran through the u/stephanemaarek course which I found to pretty much cover all that was required!
I then used the u/Tutorials_Dojo practice exams. I took one before starting Stephane’s course to see where I was at with no training. I got 46% but I suppose a few of them were lucky guesses!
I then finished the course and took another test and hit around 65%. TD was great as they gave explanations of the answers. I then used this to go back to the course and review my weak areas again.
I then couldn’t seem to get higher than the low 70s on the exams, so I went through u/neal-davis’s course, which was also great as it had an “Exam Cram” video at the end of each topic.
I also set up flashcards on BrainScape which helped me remember AWS services and what their function is.
All in all it was a great learning experience and I look forward to putting my skills into action!
Testimonial: I passed SAA with (799), had about an hour left on the clock.
Many FSx / EFS / Lustre questions
S3 Use cases, storage tiers, cloudfront were pretty prominent too
Only got one “figure out what’s wrong with this IAM policy” question
A handful of dynamodb questions and a handful for picking use cases between different database types or caching layers.
Other typical tips: when you’re unclear on which answer to pick, or if they seem very similar, work on eliminating answers first. “It can’t be X because of Y” can help a lot.
Testimonial: Passed the AWS Solutions Architect Associate exam!
I prepared mostly from freely available resources, as my basics were strong. I bought Jon Bonso’s tests on Udemy and they turned out to be super important while preparing for that particular type of question (i.e. the questions which feel subjective, but aren’t), understanding the line of questioning, and learning the most suitable answers for some common scenarios.
Created a Notion notebook to note down those common scenarios, exceptions, what supports what, integrations etc. Used that notebook and cheat sheets on Tutorials Dojo website for revision on final day.
I found the exam a little tougher than Jon Bonso’s, but his practice tests on Udemy were crucial. I wouldn’t have passed without them.
Piece of advice for upcoming test aspirants: Get your basics right, especially networking. Understand properly how different services interact in VPC. Focus more on the last line of the question. It usually gives you a hint upon what exactly is needed. Whether you need cost optimization, performance efficiency or high availability. Little to no operational effort means serverless. Understand all serverless services thoroughly.
Testimonial: Passed Solutions Architect Associate (SAA-C02) Today!
I have almost no experience with AWS, except for completing the Certified Cloud Practitioner earlier this year. My work is pushing all IT employees to complete some cloud training and certifications, which is why I chose to do this.
How I Studied:
My company pays for acloudguru subscriptions for its employees, so I used that for the bulk of my learning. I took notes on 3×5 notecards on the key terms and concepts for review.
Once I scored passing grades on the ACG practice tests, I took the Jon Bonso tests on Udemy, which are much more difficult and fairly close to the difficulty of the actual exam. I scored 45%-74% on every Bonso practice test, and spent 1-2 hours after each test reviewing what I missed, supplementing my note cards, and taking time to understand my weak spots. I only took these tests once each, but in between each practice test, I would review all my note cards until I had the content largely memorized.
The Test:
This was one of the most difficult certification tests I’ve ever done. The exam was remote proctored with PearsonVUE (I used PSI for the CCP and didn’t like it as much). I felt like I was failing half the time. I marked about 25% of the questions for review, and I used up the entire allotted time. The questions are mostly about understanding which services interact with which other services, or which services are incompatible with the scenario. It was important for me to read through each response and eliminate the ones that don’t make sense. A lot of the responses mentioned AWS services that sound good but don’t actually work together (e.g. if it doesn’t make sense to have service X querying database Y, that probably isn’t the right answer). I can’t point to one domain that really needs to be studied more than any other. You need to know all of the content for the exam.
Final Thoughts:
The ACG practice tests are not a good metric for success for the actual SAA exam, and I would not have passed without Bonso’s tests showing me my weak spots. PearsonVUE is better than PSI. Make sure to study everything thoroughly and review excessively. You don’t necessarily need 5 different study sources and years of experience to be able to pass (although both of those definitely help) and good luck to anyone that took the time to read!

Testimonial: Passed AWS CSAA today!
AWS Certified Solutions Architect Associate
So glad to pass my first AWS certification after 6 weeks of preparation.
My Preparation:
After a series of trial and error in picking the appropriate learning content, I eventually went with the community’s advice and took the course presented by the amazing u/stephanemaarek, in addition to the practice exams by Jon Bonso.
At this point, I can’t say anything that hasn’t been said already about how helpful they are. It’s a great combination of learning material, I appreciate the instructor’s work, and the community’s help in this sub.
Review:
Throughout the course I noted down the important points, and used the course slides as a reference in the first review iteration.
Before resorting to Udemy’s practice exams, I purchased a practice exam from another website, which I regret (not to defame the other vendor; I would simply recommend Udemy).
Udemy’s practice exams were incredible, in that they made me aware of the points I hadn’t understood clearly. After each exam, I would go through both the incorrect answers and the questions I had marked for review, write down the topic for review, and read the explanation thoroughly. The explanations point to the respective documentation in AWS, which is a recommended read, especially if you don’t feel confident with the service.
What I want to note is that I didn’t get satisfying marks on my first go at the practice exams (I averaged ~70%).
Throughout the 6 practice exams, I aggregated a long list of topics to review, went back to the course slides and practice-exams explanations, in addition to the AWS documentation for the respective service.
On the second go I averaged 85%. The second attempt at the exams was important as a confidence boost, as I made sure I understood the services more clearly.
The take away:
Don’t feel disappointed if you get bad results at your practice-exams. Make sure to review the topics and give it another shot.
The AWS documentation is your friend! It is very clear and concise. My only regret is not having referenced the documentation enough after learning new services.
The exam:
I scheduled the exam using PSI.
I was very confident going into the exam. But going through such an exam environment for the first time made me feel under pressure. Partly, because I didn’t feel comfortable being monitored (I was afraid to get eliminated if I moved or covered my mouth), but mostly because there was a lot at stake from my side, and I had to pass it in the first go.
The questions were harder than expected, but I tried to analyze the questions more and eliminate the invalid answers.
I was very nervous and kept reviewing flagged questions up to the last minute. Luckily, I pulled through.
The take away:
The proctors are friendly; just make sure you feel comfortable in the exam place, and use the practice exams to prepare for the actual exam’s environment. That includes sitting in a straight posture, not talking/whispering, and not looking away.
Make sure to organize the time dedicated to each question well, and don’t let yourself get distracted by being monitored like I did.
Don’t skip a question that you are not sure of. Try to select the most probable answer, then flag the question. This will make the very stressful last-minute review easier.
You have been engaged by a company to design and lead a migration to an AWS environment. The team is concerned about the capabilities of the new environment, especially when it comes to high availability and cost-effectiveness. The design calls for about 20 instances (c3.2xlarge) pulling jobs/messages from SQS. Network traffic per instance is estimated to be around 500 Mbps at the beginning and end of each job. Which configuration should you plan on deploying?
Spread the Instances over multiple AZs to minimize the traffic concentration and maximize fault-tolerance. With a multi-AZ configuration, an additional reliability point is scored as the entire Availability Zone itself is ruled out as a single point of failure. This ensures high availability. Wherever possible, use simple solutions such as spreading the load out rather than expensive high tech solutions
To save money, you quickly stored some data in one of the attached volumes of an EC2 instance and stopped it for the weekend. When you returned on Monday and restarted your instance, you discovered that your data was gone. Why might that be?
The volume was ephemeral, block-level storage. Data on an instance store volume is lost if an instance is stopped.
The most likely answer is that the EC2 instance had an instance store volume attached to it. Instance store volumes are ephemeral, meaning that data in attached instance store volumes is lost if the instance stops.
Reference: Instance store lifetime
Your company likes the idea of storing files on AWS. However, low-latency service of the last few days of files is important to customer service. Which Storage Gateway configuration would you use to achieve both of these ends?
A file gateway simplifies file storage in Amazon S3, integrates to existing applications through industry-standard file system protocols, and provides a cost-effective alternative to on-premises storage. It also provides low-latency access to data through transparent local caching.
Cached volumes allow you to store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data.
You’ve been commissioned to develop a high-availability application with a stateless web tier. Identify the most cost-effective means of reaching this end.
Use an Elastic Load Balancer, a multi-AZ deployment of an Auto-Scaling group of EC2 Spot instances (primary) running in tandem with an Auto-Scaling group of EC2 On-Demand instances (secondary), and DynamoDB.
With proper scripting and scaling policies, running EC2 On-Demand instances behind the Spot instances will deliver the most cost-effective solution, because On-Demand instances will only spin up if the Spot instances are not available. DynamoDB lends itself to supporting stateless web/app installations better than RDS.
You are building a NAT Instance in an m3.medium using the AWS Linux2 distro with amazon-linux-extras installed. Which of the following do you need to set?
Ensure that “Source/Destination Checks” is disabled on the NAT instance. With a NAT instance, the most common oversight is forgetting to disable Source/Destination Checks. Note: this is a legacy topic, and while it may appear on the AWS exam, it will only do so infrequently.
You are reviewing Change Control requests and you note that there is a proposed change designed to reduce errors due to SQS Eventual Consistency by updating the “DelaySeconds” attribute. What does this mean?
When a new message is added to the SQS queue, it will be hidden from consumer instances for a fixed period.
Delay queues let you postpone the delivery of new messages to a queue for a number of seconds, for example, when your consumer application needs additional time to process messages. If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes. To set delay seconds on individual messages, rather than on an entire queue, use message timers to allow Amazon SQS to use the message timer’s DelaySeconds value instead of the delay queue’s DelaySeconds value. Reference: Amazon SQS delay queues.
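A minimal boto3 sketch of both flavors of delay, with a hypothetical queue name:

```python
import boto3

sqs = boto3.client("sqs")

# Queue-level delay: every new message is hidden for 45 seconds.
queue = sqs.create_queue(
    QueueName="jobs-delayed",
    Attributes={"DelaySeconds": "45"},
)

# Message timer: this message's DelaySeconds overrides the queue's default.
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody="process-order-123",
    DelaySeconds=30,
)
```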
Amazon SQS keeps track of all tasks and events in an application: True or False?
False. Amazon SWF (not Amazon SQS) keeps track of all tasks and events in an application. Amazon SQS requires you to implement your own application-level tracking, especially if your application uses multiple queues. Amazon SWF FAQs.
You work for a company, and you need to protect your data stored on S3 from accidental deletion. Which actions might you take to achieve this?
Enable versioning on the bucket and protect the objects by configuring MFA-protected API access.
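A minimal boto3 sketch of that configuration, assuming a hypothetical bucket; note that MFA Delete can only be enabled by the bucket owner (root) using their MFA device serial and current code:

```python
import boto3

s3 = boto3.client("s3")

# Versioning keeps prior versions of overwritten/deleted objects; MFA Delete
# additionally requires an MFA code to permanently delete a version.
s3.put_bucket_versioning(
    Bucket="my-important-bucket",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
)
```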
Your Security Manager has hired a security contractor to audit your network and firewall configurations. The consultant doesn’t have access to an AWS account. You need to provide the required access for the auditing tasks, and answer a question about login details for the official AWS firewall appliance. Which actions might you do?
Create an IAM user for the auditor and explain that the firewall functionality is implemented as stateful Security Groups and stateless subnet NACLs.
AWS has removed the firewall appliance from the hub of the network and implemented the firewall functionality as stateful Security Groups and stateless subnet NACLs. This is not a new concept in networking, but it is rarely implemented at this scale.
Amazon ElastiCache can fulfill a number of roles. Which operations can be implemented using ElastiCache for Redis?
Amazon ElastiCache offers a fully managed Memcached and Redis service. Although the name only suggests caching functionality, the Redis service in particular can offer a number of operations such as Pub/Sub, Sorted Sets and an In-Memory Data Store. However, Amazon ElastiCache for Redis doesn’t support multithreaded architectures.
You have been asked to deploy an application on a small number of EC2 instances. The application must be placed across multiple Availability Zones and should also minimize the chance of underlying hardware failure. Which actions would provide this solution?
Deploy the EC2 servers in a Spread Placement Group.
Spread Placement Groups are recommended for applications that have a small number of critical instances which need to be kept separate from each other. Launching instances in a Spread Placement Group reduces the risk of simultaneous failures that might occur when instances share the same underlying hardware. Spread Placement Groups provide access to distinct hardware, and are therefore suitable for mixing instance types or launching instances over time. In this case, deploying the EC2 instances in a Spread Placement Group is the only correct option.
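A minimal boto3 sketch of launching into a spread placement group (the AMI ID and names are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Each instance in a spread placement group lands on distinct underlying hardware.
ec2.create_placement_group(GroupName="critical-spread", Strategy="spread")

ec2.run_instances(
    ImageId="ami-0abcdef1234567890",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    Placement={"GroupName": "critical-spread"},
)
```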
You manage a NodeJS messaging application that lives on a cluster of EC2 instances. Your website occasionally experiences brief, strong, and entirely unpredictable spikes in traffic that overwhelm your EC2 instances’ resources and freeze the application. As a result, you’re losing recently submitted messages from end-users. You use Auto Scaling to deploy additional resources to handle the load during spikes, but the new instances don’t spin-up fast enough to prevent the existing application servers from freezing. Can you provide the most cost-effective solution in preventing the loss of recently submitted messages?
Use Amazon SQS to decouple the application components and keep the messages in queue until the extra Auto-Scaling instances are available.
Neither increasing the size of your EC2 instances nor maintaining additional EC2 instances is cost-effective, and pre-warming an ELB signifies that these spikes in traffic are predictable. The cost-effective solution to the unpredictable spike in traffic is to use SQS to decouple the application components.
True statements on S3 URL styles
Virtual-host-style URLs (such as: https://bucket-name.s3.Region.amazonaws.com/key name) are supported by AWS.
Path-Style URLs (such as https://s3.Region.amazonaws.com/bucket-name/key name) are supported by AWS.
You run an automobile reselling company that has a popular online store on AWS. The application sits behind an Auto Scaling group and requires new instances of the Auto Scaling group to identify their public and private IP addresses. How can you achieve this?
Use a curl or HTTP GET command to retrieve the latest metadata from http://169.254.169.254/latest/meta-data/.
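The same lookup from Python, using the current (IMDSv2) token-based flow; this only works when run on the EC2 instance itself:

```python
import urllib.request

IMDS = "http://169.254.169.254/latest"

# IMDSv2: obtain a short-lived session token first.
token_req = urllib.request.Request(
    f"{IMDS}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

def meta(path: str) -> str:
    """Fetch one metadata path using the session token."""
    req = urllib.request.Request(
        f"{IMDS}/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req).read().decode()

print("private:", meta("local-ipv4"))
print("public:", meta("public-ipv4"))
```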
What data formats are used to create CloudFormation templates?
JSON and YAML.
You have launched a NAT instance into a public subnet, and you have configured all relevant security groups, network ACLs, and routing policies to allow this NAT to function. However, EC2 instances in the private subnet still cannot communicate out to the internet. What troubleshooting steps should you take to resolve this issue?
Disable the Source/Destination Check on your NAT instance.
A NAT instance sends and retrieves traffic on behalf of instances in a private subnet. As a result, source/destination checks on the NAT instance must be disabled to allow the sending and receiving traffic for the private instances. Route 53 resolves DNS names, so it would not help here. Traffic that is originating from your NAT instance will not pass through an ELB. Instead, it is sent directly from the public IP address of the NAT Instance out to the Internet.
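Disabling the check is a single API call; a minimal boto3 sketch with a hypothetical instance ID:

```python
import boto3

ec2 = boto3.client("ec2")

# A NAT instance forwards traffic it neither originated nor terminates,
# so the source/destination check must be turned off.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    SourceDestCheck={"Value": False},
)
```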
You need a storage service that delivers the lowest-latency access to data for a database running on a single EC2 instance. Which of the following AWS storage services is suitable for this use case?
Amazon EBS is a block level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.
What are DynamoDB use cases?
Use cases include storing JSON data, BLOB data and storing web session data.
You are reviewing Change Control requests, and you note that there is a change designed to reduce costs by updating the Amazon SQS “WaitTimeSeconds” attribute. What does this mean?
When the consumer instance polls for new work, the SQS service will allow it to wait a certain time for one or more messages to be available before closing the connection.
Poor timing of SQS processes can significantly impact the cost effectiveness of the solution.
Long polling helps reduce the cost of using Amazon SQS by eliminating the number of empty responses (when there are no messages available for a ReceiveMessage request) and false empty responses (when messages are available but aren’t included in a response).
Reference: Here
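A minimal boto3 sketch of long polling on the consumer side (the queue URL is hypothetical):

```python
import boto3

sqs = boto3.client("sqs")

# WaitTimeSeconds=20 blocks the call for up to 20 s waiting for messages,
# instead of returning (and billing) a stream of empty responses.
resp = sqs.receive_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/jobs",
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
for msg in resp.get("Messages", []):
    print(msg["Body"])
```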
You have been asked to decouple an application by utilizing SQS. The application dictates that messages on the queue CAN be delivered more than once, but must be delivered in the order they have arrived while reducing the number of empty responses. Which option is most suitable?
Configure a FIFO SQS queue and enable long polling.
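A minimal boto3 sketch of that queue configuration (the queue name is hypothetical; FIFO queue names must end in .fifo):

```python
import boto3

sqs = boto3.client("sqs")

# FIFO preserves arrival order; ReceiveMessageWaitTimeSeconds makes long
# polling the default for every consumer, reducing empty responses.
sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
        "ReceiveMessageWaitTimeSeconds": "20",
    },
)
```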
You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP. However, when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect?
Immediately.
You need to restrict access to an S3 bucket. Which methods can you use to do so?
There are two ways of securing S3: using Access Control Lists (permissions) or using bucket policies.
You are reviewing Change Control requests, and you note that there is a change designed to reduce wasted CPU cycles by increasing the value of your Amazon SQS “VisibilityTimeout” attribute. What does this mean?
When a consumer instance retrieves a message, that message will be hidden from other consumer instances for a fixed period.
Poor timing of SQS processes can significantly impact the cost effectiveness of the solution. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.
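A minimal boto3 sketch of both ways to set it (the queue URL is hypothetical):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"

# Queue default: received messages stay hidden from other consumers for 60 s.
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"VisibilityTimeout": "60"})

# Per message: extend the timeout for one in-flight message that needs
# more processing time than the queue default allows.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=300,
    )
```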
With EBS, I can ____.
Create an encrypted volume from a snapshot of another encrypted volume.
Create an encrypted snapshot from an unencrypted snapshot by creating an encrypted copy of the unencrypted snapshot.
You can create an encrypted volume from a snapshot of another encrypted volume.
Although there is no direct way to encrypt an existing unencrypted volume or snapshot, you can encrypt them by creating either a volume or a snapshot. Reference: Encrypting unencrypted resources.
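A minimal boto3 sketch of the encrypted-copy approach (the snapshot ID is hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Copying an unencrypted snapshot with Encrypted=True yields an encrypted
# snapshot, from which an encrypted volume can then be created.
ec2.copy_snapshot(
    SourceSnapshotId="snap-0123456789abcdef0",
    SourceRegion="us-east-1",
    Encrypted=True,
    KmsKeyId="alias/aws/ebs",  # the default EBS CMK; any CMK you own also works
)
```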
Following advice from your consultant, you have configured your VPC to use dedicated hosting tenancy. Your VPC has an Amazon EC2 Auto Scaling designed to launch or terminate Amazon EC2 instances on a regular basis, in order to meet workload demands. A subsequent change to your application has rendered the performance gains from dedicated tenancy superfluous, and you would now like to recoup some of these greater costs. How do you revert your instance tenancy attribute of a VPC to default for new launched EC2 instances?
Modify the instance tenancy attribute of your VPC from dedicated to default using the AWS CLI, an AWS SDK, or the Amazon EC2 API.
You can change the instance tenancy attribute of a VPC from dedicated to default. Modifying the instance tenancy of the VPC does not affect the tenancy of any existing instances in the VPC. The next time you launch an instance in the VPC, it has a tenancy of default, unless you specify otherwise during launch. You can modify the instance tenancy attribute of a VPC using the AWS CLI, an AWS SDK, or the Amazon EC2 API only. Reference: Change the tenancy of a VPC.
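A minimal boto3 sketch (the VPC ID is hypothetical); note the change is one-way, from dedicated to default:

```python
import boto3

ec2 = boto3.client("ec2")

# Existing instances keep their tenancy; only new launches pick up "default".
ec2.modify_vpc_tenancy(
    VpcId="vpc-0123456789abcdef0",
    InstanceTenancy="default",
)
```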
How do DynamoDB indices work?
What is Amazon DynamoDB?
Amazon DynamoDB is a fast, fully managed NoSQL database service. DynamoDB makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic.
DynamoDB is used to create tables that store and retrieve any level of data.
- DynamoDB uses SSDs to store data.
- Provides automatic and synchronous data replication.
- Maximum item size is 400 KB.
- Supports cross-region replication.
DynamoDB Core Concepts:
- The fundamental concepts around DynamoDB are:
- Tables-which is a collection of data.
- Items- They are the individual entries in the table.
- Attributes- These are the properties associated with the entries.
- Primary Keys.
- Secondary Indexes.
- DynamoDB streams.
Secondary Indexes:
- The Secondary index is a data structure that contains a subset of attributes from the table, along with an alternate key that supports Query operations.
- Every secondary index is related to only one table, from where it obtains data. This is called base table of the index.
- When you create an index, you define an alternate key for the index (a partition key and a sort key). DynamoDB creates a copy of the attributes into the index, including the primary key attributes derived from the table.
- After this is done, you use the query/scan in the same way as you would use a query on a table.
Every secondary index is automatically maintained by DynamoDB.
DynamoDB Indexes: DynamoDB supports two indexes:
- Local Secondary Index (LSI): the index has the same partition key as the base table but a different sort key.
- Global Secondary Index (GSI): the index has a partition key and sort key that are different from those on the base table.
When creating more than one table with secondary indexes, you must do so sequentially: create one table, wait for it to become active, then create the next, and so on.
If you try to create one or more tables concurrently, DynamoDB will return a LimitExceededException.
You must specify the following, for every secondary index:
- Type: you must specify the type of index you are creating, whether it is a Global Secondary Index or a Local Secondary Index.
- Name: you must specify a name for the index. The rules for naming indexes are the same as those for the table it is connected with. You can reuse an index name across indexes connected to different base tables.
- Key: the key schema for the index states that every key attribute in the index must be a top-level attribute of type string, number, or binary. Other data types, including documents and sets, are not allowed. Other requirements depend on the type of index you choose.
- For a GSI: the partition key can be any scalar attribute of the base table.
The sort key is optional, and it too can be any scalar attribute of the base table.
- For an LSI: the partition key must be the same as the base table’s partition key.
The sort key must be a non-key table attribute.
- Additional attributes: these are in addition to the table’s key attributes and are automatically projected into the index. You can use attributes of any data type, including scalars, documents, and sets.
- Throughput: the throughput settings for the index, if required, are:
- GSI: specify read and write capacity unit settings. These provisioned throughput settings are independent of the base table’s settings.
- LSI: you do not need to specify read and write capacity unit settings. Any read and write operations on a local secondary index are drawn from the provisioned throughput of the base table.
You can create up to 20 global secondary indexes per table (the default quota; it was formerly 5) and 5 local secondary indexes per table. When you delete a table, all indexes connected with that table are also deleted.
You can use the Scan or Query operation to fetch the data from the table. DynamoDB will give you the results in descending or ascending order.
(Source)
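To make the LSI/GSI distinction concrete, here is a minimal boto3 sketch of a hypothetical Orders table: the LSI reuses the table’s partition key with a different sort key, while the GSI defines its own keys. On-demand billing sidesteps the throughput settings discussed above:

```python
import boto3

ddb = boto3.client("dynamodb")

ddb.create_table(
    TableName="Orders",
    BillingMode="PAY_PER_REQUEST",
    AttributeDefinitions=[
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "OrderId", "AttributeType": "S"},
        {"AttributeName": "OrderDate", "AttributeType": "S"},
        {"AttributeName": "Status", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "CustomerId", "KeyType": "HASH"},
        {"AttributeName": "OrderId", "KeyType": "RANGE"},
    ],
    # LSI: same partition key as the table, different sort key.
    LocalSecondaryIndexes=[{
        "IndexName": "ByOrderDate",
        "KeySchema": [
            {"AttributeName": "CustomerId", "KeyType": "HASH"},
            {"AttributeName": "OrderDate", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    # GSI: its own partition and sort keys.
    GlobalSecondaryIndexes=[{
        "IndexName": "ByStatus",
        "KeySchema": [
            {"AttributeName": "Status", "KeyType": "HASH"},
            {"AttributeName": "OrderDate", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
)
```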
What is NLB in AWS?
An NLB is a Network Load Balancer.
Network Load Balancer Overview: A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration.
When you enable an Availability Zone for the load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. By default, each load balancer node distributes traffic across the registered targets in its Availability Zone only. If you enable cross-zone load balancing, each load balancer node distributes traffic across the registered targets in all enabled Availability Zones.
It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra-low latency, with no effort on your part. The Network Load Balancer is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:
- Static IP Addresses – Each Network Load Balancer provides a single IP address for each Availability Zone in its purview. If you have targets in us-west-2a and other targets in us-west-2c, NLB will create and manage two IP addresses (one per AZ); connections to that IP address will spread traffic across the instances in all the VPC subnets in the AZ. You can also specify an existing Elastic IP for each AZ for even greater control. With full control over your IP addresses, a Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.
- Zonality – The IP-per-AZ feature reduces latency with improved performance, improves availability through isolation and fault tolerance, and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single AZ while still providing automatic failover should those targets become unavailable.
- Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.
- Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.
- Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.
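A minimal boto3 sketch of standing up an NLB with a TCP listener (the subnet and VPC IDs are hypothetical):

```python
import boto3

elbv2 = boto3.client("elbv2")

nlb = elbv2.create_load_balancer(
    Name="my-nlb",
    Type="network",            # layer-4 load balancer
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0"],
)

tg = elbv2.create_target_group(
    Name="tcp-targets",
    Protocol="TCP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

# Forward raw TCP connections to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="TCP",
    Port=80,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)
```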
How many types of VPC endpoints are available?
There are two types of VPC endpoints: (1) interface endpoints and (2) gateway endpoints. Interface endpoints enable connectivity to services over AWS PrivateLink.
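A minimal boto3 sketch of creating one of each (all IDs are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint (S3 or DynamoDB only): adds routes to the given route tables.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)

# Interface endpoint (AWS PrivateLink): an ENI with a private IP in your subnet.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sqs",
    VpcEndpointType="Interface",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
```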
What is the purpose of key pair with Amazon AWS EC2?
Amazon AWS uses a key pair to encrypt and decrypt login information.
A sender uses a public key to encrypt data, which its receiver then decrypts using another private key. These two keys, public and private, are known as a key pair.
You need a key pair to be able to connect to your instances. The way this works on Linux and Windows instances is different.
First, when you launch a new instance, you assign a key pair to it. Then, when you log in to it, you use the private key.
The difference between Linux and Windows instances is that Linux instances do not have a password already set and you must use the key pair to log in to Linux instances. On the other hand, on Windows instances, you need the key pair to decrypt the administrator password. Using the decrypted password, you can use RDP and then connect to your Windows instance.
Amazon EC2 stores only the public key, and you can either generate it inside Amazon EC2 or you can import it. Since the private key is not stored by Amazon, it’s advisable to store it in a secure place as anyone who has this private key can log in on your behalf.
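A minimal boto3 sketch of generating a key pair inside EC2 and saving the one-time private key (the key name is hypothetical):

```python
import os
import boto3

ec2 = boto3.client("ec2")

# EC2 stores only the public key; KeyMaterial (the private key) is
# returned once and never again.
pair = ec2.create_key_pair(KeyName="my-key")

with open("my-key.pem", "w") as f:
    f.write(pair["KeyMaterial"])
os.chmod("my-key.pem", 0o400)  # ssh refuses keys with loose permissions
```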
What is the difference between a VPC SG and an EC2 security group?
There are two types of Security Groups based on where you launch your instance. When you launch your instance on EC2-Classic, you have to specify an EC2-Classic Security Group. On the other hand, when you launch an instance in a VPC, you will have to specify an EC2-VPC Security Group. Now that we have a clear understanding of what we are comparing, let’s see their main differences:
EC2-Classic Security Groups:
- When the instance is launched, you can only choose a Security Group that resides in the same region as the instance.
- You cannot change the Security Group after the instance has launched (you may edit the rules).
- They are not IPv6 capable.
EC2-VPC Security Groups:
- You can change the Security Group after the instance has launched.
- They are IPv6 capable.
Generally speaking, they are not interchangeable and there are more capabilities on the EC2-VPC SGs. You may read more about them on Differences Between Security Groups for EC2-Classic and EC2-VPC
Why do AWS DynamoDB and S3 use gateway VPC endpoints rather than interface endpoints?
I think this is historical in nature. S3 and DynamoDB were the first services to support VPC endpoints. The release of those VPC endpoint features pre-dates two important services that subsequently enabled interface endpoints: Network Load Balancer and AWS PrivateLink.
What is the best way to develop AWS Lambda functions locally on your laptop?
- Separate the Lambda handler from your core logic.
- Take advantage of execution context reuse to improve the performance of your function. Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations processed by the same instance of your function can reuse these resources, which saves execution time.
- To avoid potential data leaks across invocations, don’t use the execution context to store user data, events, or other information with security implications. If your function relies on a mutable state that can’t be stored in memory within the handler, consider creating a separate function, or separate versions of a function, for each user.
- Use AWS Lambda environment variables to pass operational parameters to your function. For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name, configure it as an environment variable.
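A minimal sketch of those three tips in one Lambda function; BUCKET_NAME is a hypothetical environment variable you would configure yourself, not a Lambda built-in:

```python
# handler.py
import os
import boto3

# Created once per execution environment and reused across warm invocations.
s3 = boto3.client("s3")
BUCKET = os.environ["BUCKET_NAME"]  # operational parameter via env var

def save_report(key: str, body: str) -> None:
    """Core logic: importable and testable locally, no Lambda plumbing."""
    s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode())

def lambda_handler(event, context):
    """Thin adapter: unpack the event and delegate to the core logic."""
    save_report(event["key"], event["body"])
    return {"status": "ok"}
```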
How can I see if/when someone logs into my AWS Windows instance?
You can use VPC Flow Logs. The steps would be the following:
- Enable VPC Flow Logs for the VPC your EC2 instance lives in. You can do this from the VPC console
- Having VPC Flow Logs enabled will create a CloudWatch Logs log group
- Find the Elastic Network Interface assigned to your EC2 instance. Also, get the private IP of your EC2 instance. You can do this from the EC2 console.
- Find the CloudWatch Logs log stream for that ENI.
- Search the log stream for records where your Windows instance’s IP is the destination IP, make sure the port is the one you’re looking for. You’ll see records that tell you if someone has been connecting to your EC2 instance. For example, there are bytes transferred, status=ACCEPT, log-status=OK. You will also know the source IP that connected to your instance.
I recommend using CloudWatch Logs Metric Filters, so you don’t have to do all this manually. Metric Filters will find the patterns I described in your CloudWatch Logs entries and will publish a CloudWatch metric. Then you can trigger an alarm that notifies you when someone logs in to your instance.
Here are more details from the AWS Official Blog and the AWS documentation for VPC Flow Logs records:
VPC Flow Logs – Log and View Network Traffic Flows
Also, there are 3rd-party tools that simplify all these steps for you and give you very nice visibility and alerts into what’s happening in your AWS network resources. I’ve tried Observable Networks and it’s great: Observable Networks
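As a rough sketch of the metric-filter step: the pattern below matches accepted TCP flows to port 3389 (RDP) in the default flow-log format; the log group name is hypothetical, and you would swap the port for 22 to watch SSH logins instead:

```python
import boto3

logs = boto3.client("logs")

logs.put_metric_filter(
    logGroupName="my-vpc-flow-logs",
    filterName="rdp-connections",
    filterPattern=(
        '[version, account, eni, source, destination, srcport, '
        'destport="3389", protocol="6", packets, bytes, start, end, '
        'action="ACCEPT", logstatus]'
    ),
    metricTransformations=[{
        "metricName": "RdpConnections",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)
```

An alarm on the Security/RdpConnections metric then notifies you whenever a matching flow appears.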
While enabling ports on AWS NAT gateway when you allow inbound traffic on port 80/443 , do you need to allow outbound traffic on the same ports or is it sufficient to allow outbound traffic on ephemeral ports (1024-65535)?
Typically outbound traffic is not blocked by NAT on any port, so you would not need to explicitly allow those, since they should already be allowed. Your firewall generally would have a rule to allow return traffic that was initiated outbound from inside your office.
Is AWS traffic between EC2 nodes in the same availability zone secure with respect to sending sensitive data?
According to Amazon’s documentation, it is impossible for one instance to sniff traffic bound for a different instance.
https://d0.awsstatic.com/whitepapers/aws-security-whitepaper.pdf
- Packet sniffing by other tenants. It is not possible for a virtual instance running in promiscuous mode to receive or “sniff” traffic that is intended for a different virtual instance. While you can place your interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer located on the same physical host cannot listen to each other’s traffic. Attacks such as ARP cache poisoning do not work within Amazon EC2 and Amazon VPC. While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another’s data, as a standard practice you should encrypt sensitive traffic.
But as you can see, they still recommend that you should maintain encryption inside your network. We have taken the approach of terminating SSL at the external interface of the ELB, but then initiating SSL from the ELB to our back-end servers, and even further, to our (RDS) databases. It’s probably belt-and-suspenders, but in my industry it’s needed. Heck, we have some interfaces that require HTTPS and a VPN.
What’s the use case for S3 Pre-signed URL for uploading objects?
I get the use case of allowing access to private/premium content in S3 using a pre-signed URL that can be used to view or download a file until its expiration time. But what’s a real-life scenario in which a web app would need to generate a URI to give users temporary credentials to upload an object? Couldn’t the same be done by using the SDK and exposing a REST API at the backend?
I’m asking this since I want to build a POC for this functionality in Java, but I’m struggling to find a real-world use case for it.
Pre-signed URLs are used to provide short-term access to a private object in your S3 bucket. They work by appending an AWS access key, an expiration time, and a SigV4 signature as query parameters to the S3 object URL. There are two common use cases when you may want to use them:
- Simple, occasional sharing of private files.
- Frequent, programmatic access to view or upload a file in an application.
Imagine you may want to share a confidential presentation with a business partner, or you want to allow a friend to download a video file you’re storing in your S3 bucket. In both situations, you could generate a URL, and share it to allow the recipient short-term access.
There are a couple of different approaches for generating these URLs in an ad-hoc, one-off fashion, including:
- Using the AWS Tools for Powershell.
- Using the AWS CLI.
Source: Here
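For the programmatic upload case the question asks about, a common pattern is for the backend to authenticate the user, mint a short-lived upload URL, and return it to the client, which then PUTs the file directly to S3 so the payload never passes through your application servers (think of a mobile app uploading photos). Here is a minimal sketch in Python with boto3; the bucket and key names are placeholders:

import boto3

s3 = boto3.client("s3")

# Generate a URL that allows one HTTP PUT to this bucket/key for 1 hour
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "my-example-bucket", "Key": "uploads/report.pdf"},
    ExpiresIn=3600,
)
print(url)  # hand this URL to the client, which uploads with a plain HTTP PUT

Compared with proxying the upload through your own REST API, this keeps large file transfers off your servers while your API still controls who gets a URL and for which key.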
AWS:REINVENT 2022 (Tips, Latest Tech, Surviving Vegas, Parties)

First time going there. I’d like to know in advance the dos and don’ts from people with previous experience.
Pre-plan as much as you can, but don’t sweat it in the moment if things don’t work out. The experience and networking are as valuable as, if not more valuable than, the sessions.
Deliberately know where your exits are. Most of Vegas is designed to keep you inside — when you’re burned out from the crowds and knowledge deluge is not the time to be trying to figure out how the hell you get out of wherever you are.
Study maps of how the properties interconnect before you go. You can get a lot of places without ever going outside. Be able to make a deliberate decision of what route to take. Same thing for the outdoor escalators and pedestrian bridges — they’re not necessarily intuitive, but if you know where they go, they’re a life saver running between events.
Drink more water and eat less food than you think you need to. Your mind and body will thank you.
Be prepared for all of the other Vegasisms if you ever plan on leaving the con boundaries (like to walk down the street to another venue) — you will likely be propositioned by mostly naked showgirls, see overt advertisement for or even be directly propositioned by prostitutes and their business associates, witness some pretty awful homelessness, and be “accidentally bumped into” pretty regularly by amateur pickpockets.
Switching gears between “work/AWS” and “surviving Vegas” multiple times a day can be seriously mentally taxing. I haven’t found any way to prevent that, just know it’s going to happen.
Take a burner laptop and not your production access work machine. You don’t want to accidentally crater your production environment because you gave the wrong cred as part of a lab.
There are helpful staffers everywhere around the con — don’t be afraid to leverage them — they tend to be much better informed than the ushers/directors/crowd wranglers at other cons.
Plan on getting Covid or at very least Con Crud. If you’re not used to being around a million sick people in the desert, it’s going to take its toll on your body one way or another.
Don’t set morning alarms. If your body needs to sleep in, that was more important than whatever morning session you wanted to catch. Watch the recording later on your own time and enjoy your mental clarity for the rest of the day.
Wander the expo floor when you’re bored to get a big picture of the ecosystem, but don’t expect anything too deep. The partner booths are all fun and games and don’t necessarily align with reality. Hang out at the “Ask AWS” booths — people ask some fun interesting questions and AWS TAMs/SAs and the other folks staffing the booth tend not to suck.
Listen to The Killers / Brandon Flowers when walking around outside — he grew up in Las Vegas and a lot of his music has subtle (and not so subtle) hints on how to survive and thrive there.
I’m sure there’s more, but that’s what I can think of off the top of my head.
Source: Many years of attending re:Invent as AWS staff, AWS partner, and AWS customer.
This is more Vegas-advice than pure Re:Invent advice, but if you’re going to be in the city for more than 3 days try to either:
Find a way off/out of the strip for an afternoon. A hike out at Red Rocks is a great option.
Get a pass to the spa at your hotel so that you can escape the casino/event/hotel room trap. It’s amazing how shitty you feel without realizing it until you do a quick workout and steam/sauna/ice bath routine.
I’ve also seen a whole variety of issues that people run into during hands-on workshops where for one reason or another their corporate laptop/email/security won’t let them sign up and log into a new AWS account. Make sure you don’t have any restrictions there, as that’ll be a big hassle. The workshops have been some of the best and most memorable sessions for me.
More tips:
Sign up for all the parties! Try to get your sessions booked too, it’s a pain to be on waitlists. Don’t do one session at Venetian followed by a session at MGM. You’ll never make it in time. Try to group your sessions by location/day.
Use reInventParties.com for that.
Check the Guides there as well. reInventGuides.com.
Start here: http://reInventParties.com
We catalog all the parties, keep a list of the latest (and older) guides, the Expo floor plan, drawings, etc. On Twitter as well @reInventParties
Hidden gem if you’re into that sort of thing, the Pinball Museum is a great place to hang for a bit with some friends.
Bring sunscreen, a water bottle you like, really comfortable shoes, and lip balm.
Get at least one cert if you don’t already have one. The Cert lounge is a wonderful place to chill and the swag there is top tier.
Check the partner parties, they have good food and good swag.
Register with an alt email address (something like yourname+reinvent@domain.com) so you can set an email rule for all the spam.
If your workplace has an SA, coordinate with them for schedules and info. They will also curate calendars for you and get you insider info if you want them to.
Prioritize workshops and chalk talks. Partner talks are long advertisements, take them with a grain of salt.
Even if you are an introvert, network. There are folks there with valuable insights and skills. You are one of those.
Don’t underestimate the distance between venues. Getting from MGM to Venetian can take forever.
Bring very comfortable walking shoes and be prepared to spend a LOT of time on your feet, walking 25-30,000 steps a day. All of the other comments and ideas are awesome. The most important thing to remember, especially for your very first year, is to have fun. Don’t just sit in breakouts all day and then go back to your hotel. Go to the after-dark events. Don’t get too hung up if you don’t make it to all the breakout sessions you want to go to. Let your first year be a learning curve on how to experience and enjoy re:Invent. It is the most epic week in Vegas you will ever experience. Maybe we will bump into each other. I love meeting new people.
FROM AWS:REINVENT 2021:
AWS on Air
Peter DeSantis Keynote
Join Peter DeSantis, Senior Vice President, Utility Computing and Apps, to learn how AWS has optimized its cloud infrastructure to run some of the world’s most demanding workloads and give your business a competitive edge.
Werner Vogels Keynote
Join Dr. Werner Vogels, CTO, Amazon.com, as he goes behind the scenes to show how Amazon is solving today’s hardest technology problems. Based on his experience working with some of the largest and most successful applications in the world, Dr. Vogels shares his insights on building truly resilient architectures and what that means for the future of software development.
Accelerating innovation with AI and ML
Applied artificial intelligence (AI) solutions, such as contact center intelligence (CCI), intelligent document processing (IDP), and media intelligence (MI), have had a significant market and business impact for customers, partners, and AWS. This session details how partners can collaborate with AWS to differentiate their products and solutions with AI and machine learning (ML). It also shares partner and customer success stories and discusses opportunities to help customers who are looking for turnkey solutions.
Application integration patterns for microservices
An implication of applying the microservices architectural style is that a lot of communication between components is done over the network. In order to achieve the full capabilities of microservices, this communication needs to happen in a loosely coupled manner. In this session, explore some fundamental application integration patterns based on messaging and connect them to real-world use cases in a microservices scenario. Also, learn some of the benefits that asynchronous messaging can have over REST APIs for communication between microservices.
Maintain application availability and performance with Amazon CloudWatch
Avoiding unexpected user behavior and maintaining reliable performance is crucial. This session is for application developers who want to learn how to maintain application availability and performance to improve the end user experience. Also, discover the latest on Amazon CloudWatch.
How Amazon.com transforms customer experiences through AI/ML
Amazon is transforming customer experiences through the practical application of AI and machine learning (ML) at scale. This session is for senior business and technology decision-makers who want to understand Amazon.com’s approach to launching and scaling ML-enabled innovations in its core business operations and toward new customer opportunities. See specific examples from various Amazon businesses to learn how Amazon applies AI/ML to shape its customer experience while improving efficiency, increasing speed, and lowering cost. Also hear the lessons the Amazon teams have learned from the cultural, process, and technical aspects of building and scaling ML capabilities across the organization.
Accelerating data-led migrations
Data has become a strategic asset. Customers of all sizes are moving data to the cloud to gain operational efficiencies and fuel innovation. This session details how partners can create repeatable and scalable solutions to help their customers derive value from their data, win new customers, and grow their business. It also discusses how to drive partner-led data migrations using AWS services, tools, resources, and programs, such as the AWS Migration Acceleration Program (MAP). Also, this session shares customer success stories from partners who have used MAP and other resources to help customers migrate to AWS and improve business outcomes.
Accelerate front-end web and mobile development with AWS Amplify
User-facing web and mobile applications are the primary touchpoint between organizations and their customers. To meet the ever-rising bar for customer experience, developers must deliver high-quality apps with both foundational and differentiating features. AWS Amplify helps front-end web and mobile developers build faster front to back. In this session, review Amplify’s core capabilities like authentication, data, and file storage and explore new capabilities, such as Amplify Geo and extensibility features for easier app customization with AWS services and better integration with existing deployment pipelines. Also learn how customers have been successful using Amplify to innovate in their businesses.
AWS Amplify is a set of tools and services that makes it quick and easy for front-end web and mobile developers to build full-stack applications on AWS:
- Amplify DataStore provides a programming model for leveraging shared and distributed data without writing additional code for offline and online scenarios, making working with distributed, cross-user data just as simple as working with local-only data.
- AWS AppSync is a managed GraphQL API service.
- Amazon DynamoDB is a serverless key-value and document database that’s highly scalable.
- Amazon S3 allows you to store static assets.
DevOps revolution
While DevOps has not changed much, the industry has fundamentally transformed over the last decade. Monolithic architectures have evolved into microservices. Containers and serverless have become the default. Applications are distributed on cloud infrastructure across the globe. The technical environment and tooling ecosystem has changed radically from the original conditions in which DevOps was created. So, what’s next? In this session, learn about the next phase of DevOps: a distributed model that emphasizes swift development, observable systems, accountable engineers, and resilient applications.
Innovation Day
Innovation Day is a virtual event that brings together organizations and thought leaders from around the world to share how cloud technology has helped them capture new business opportunities, grow revenue, and solve the big problems facing us today, and in the future. Featured topics include building the first human basecamp on the moon, the next generation F1 car, manufacturing in space, the Climate Pledge from Amazon, and building the city of the future at the foot of Mount Fuji.
Latest AWS Products and Services announced at re:invent 2021
Graviton 3: AWS announced the newest generation of its Arm-based Graviton processors: the Graviton 3. The company promises that the new chip will be 25 percent faster than the last-generation chips, with 2x faster floating-point performance and a 3x speedup for machine-learning workloads. AWS also promises that the new chips will use 60 percent less power.
Trn1: New EC2 instances to train deep learning models for various applications
AWS Mainframe Modernization: Cut mainframe migration time by 2/3
AWS Private 5G: Deploy and manage your own private 5G network (Set up and scale a private mobile network in days)
Transactions for Governed Tables in Lake Formation: Automatically manage conflicts and errors
Serverless and On-Demand Analytics for Redshift, EMR, MSK, and Kinesis
Amazon SageMaker Canvas: Create ML predictions without any ML experience or writing any code
AWS IoT TwinMaker: Real Time system that makes it easy to create and use digital twins of real-world systems.
Amazon DevOps Guru for RDS: Automatically detect, diagnose, and resolve hard-to-find database issues.
Amazon DynamoDB Standard-Infrequent Access table class: Reduce costs by up to 60%. Maintain the same performance, durability, scaling, and availability as Standard
AWS Database Migration Service Fleet Advisor: Accelerate database migration with automated inventory and migration. This service makes it easier and faster to get your data to the cloud and match it with the correct database service. “DMS Fleet Advisor automatically builds an inventory of your on-prem database and analytics services by streaming data from on prem to Amazon S3. From there, we take it over. We analyze [the data] to match it with the appropriate AWS data store and then provide customized migration plans.”
Amazon SageMaker Ground Truth Plus: Deliver high-quality training datasets fast, and reduce data labeling costs.
Amazon SageMaker Training Compiler: Accelerate model training by 50%
Amazon SageMaker Inference Recommender: Reduce time to deploy from weeks to hours
Amazon SageMaker Serverless Inference: Lower cost of ownership with pay-per-use pricing
Amazon Kendra Experience Builder: Deploy Intelligent search applications powered by Amazon Kendra with a few clicks.
Amazon Lex Automated Chatbot Designer: Drastically simplifies bot design with advanced natural language understanding
Amazon SageMaker Studio Lab: A no cost, no setup access to powerful machine learning technology
AWS Cloud WAN: Build, manage and monitor global wide area networks
AWS Amplify Studio: Visually build complete, feature-rich apps in hours instead of weeks, with full control over the application code.
AWS Carbon Footprint Tool: Don’t forget to turn off the lights.
AWS Well-Architected Sustainability Pillar: Learn, measure, and improve your workloads using environmental best practices in cloud computing
AWS re:Post: Get Answers from AWS experts. A Reimagined Q&A Experience for the AWS Community
How do you build something completely new?
FROM AWS:REINVENT 2020:
Automate anything with AWS Systems Manager
You can automate any task that involves interaction with AWS and on-premises resources, including in multi-account and multi-Region environments, with AWS Systems Manager. In this session, learn more about three new Systems Manager launches at re:Invent—Change Manager, Fleet Manager, and Application Manager. In addition, learn how Systems Manager Automation can be used across multiple Regions and accounts, integrate with other AWS services, and extend to on-premises. This session takes a deep dive into how to author a custom runbook using an automation document, and how to execute automation anywhere.
Deliver cloud operations at scale with AWS Managed Services
Learn how you can quickly build scaled AWS operations tooling to meet some of the most complex and compliant operations system requirements.
Turbocharging query execution on Amazon EMR
Learn about the performance improvements made in Amazon EMR for Apache Spark and Presto, giving Amazon EMR one of the fastest runtimes for analytics workloads in the cloud. This session dives deep into how AWS generates smart query plans in the absence of accurate table statistics. It also covers adaptive query execution—a technique to dynamically collect statistics during query execution—and how AWS uses dynamic partition pruning to generate query predicates for speeding up table joins. You also learn about execution improvements such as data prefetching and pruning of nested data types.
Detect machine learning (ML) model drift in production
Explore how state-of-the-art algorithms built into Amazon SageMaker are used to detect declines in machine learning (ML) model quality. One of the big factors that can affect the accuracy of models is the difference in the data used to generate predictions and what was used for training. For example, changing economic conditions could drive new interest rates affecting home purchasing predictions. Amazon SageMaker Model Monitor automatically detects drift in deployed models and provides detailed alerts that help you identify the source of the problem so you can be more confident in your ML applications.
Amazon Lightsail: The easiest way to get started on AWS
Amazon Lightsail is AWS’s simple, virtual private server. In this session, learn more about Lightsail and its newest launches. Lightsail is designed for simple web apps, websites, and dev environments. This session reviews core product features, such as preconfigured blueprints, managed databases, load balancers, networking, and snapshots, and includes a demo of the most recent launches. Attend this session to learn more about how you can get up and running on AWS in the easiest way possible.
Deep dive into AWS Lambda security: Function isolation
This session dives into the security model behind AWS Lambda functions, looking at how you can isolate workloads, build multiple layers of protection, and leverage fine-grained authorization. You learn about the implementation, the open-source Firecracker technology that provides one of the most important layers, and what this means for how you build on Lambda. You also see how AWS Lambda securely runs your functions packaged and deployed as container images. Finally, you learn about SaaS, customization, and safe patterns for running your own customers’ code in your Lambda functions.
Unauthorized users and financially motivated third parties also have access to advanced cloud capabilities. This causes concerns and creates challenges for customers responsible for the security of their cloud assets. Join us as Roy Feintuch, chief technologist of cloud products, and Maya Horowitz, director of threat intelligence and research, face off in an epic battle of defense against unauthorized cloud-native attacks. In this session, Roy uses security analytics, threat hunting, and cloud intelligence solutions to dissect and analyze some sneaky cloud breaches so you can strengthen your cloud defense. This presentation is brought to you by Check Point Software, an AWS Partner.
Best practices for security governance in serverless applications
AWS provides services and features that your organization can leverage to improve the security of a serverless application. However, as organizations grow and developers deploy more serverless applications, how do you know if all of the applications are in compliance with your organization’s security policies? This session walks you through serverless security, and you learn about protections and guardrails that you can build to avoid misconfigurations and catch potential security risks.
How Amazon.com automates cash identification & matching with AWS AI/ML
The Amazon Cash application service matches incoming customer payments with accounts and open invoices, while an email ingestion service (EIS) processes more than 1 million semi-structured and unstructured remittance emails monthly. In this session, learn how this EIS classifies the emails, extracts invoice data from the emails, and then identifies the right invoices to close on Amazon financial platforms. Dive deep on how these services automated 89.5% of cash applications using AWS AI & ML services. Hear about how these services will eliminate the manual effort of 1000 cash application analysts in the next 10 years.
Understanding AWS Lambda streaming events
Dive into the details of using Amazon Kinesis Data Streams and Amazon DynamoDB Streams as event sources for AWS Lambda. This session walks you through how AWS Lambda scales along with these two event sources. It also covers best practices and challenges, including how to tune streaming sources for optimum performance and how to effectively monitor them.
Building real-time applications using Apache Flink
Build real-time applications using Apache Flink with Apache Kafka and Amazon Kinesis Data Streams. Apache Flink is a framework and engine for building streaming applications for use cases such as real-time analytics and complex event processing. This session covers best practices for building low-latency applications with Apache Flink when reading data from either Amazon MSK or Amazon Kinesis Data Streams. It also covers best practices for running low-latency Apache Flink applications using Amazon Kinesis Data Analytics and discusses AWS’s open-source contributions to this use case.
App modernization on AWS with Apache Kafka and Confluent Cloud
Learn how you can accelerate application modernization and benefit from the open-source Apache Kafka ecosystem by connecting your legacy, on-premises systems to the cloud. In this session, hear real customer stories about timely insights gained from event-driven applications built on an event streaming platform from Confluent Cloud running on AWS, which stores and processes historical data and real-time data streams. Confluent makes Apache Kafka enterprise-ready using infinite Kafka storage with Amazon S3 and multiple private networking options including AWS PrivateLink, along with self-managed encryption keys for storage volume encryption with AWS Key Management Service (AWS KMS).
BI at hyperscale: Quickly build and scale dashboards with Amazon QuickSight
Data-driven business intelligence (BI) decision making is more important than ever in this age of remote work. An increasing number of organizations are investing in data transformation initiatives, including migrating data to the cloud, modernizing data warehouses, and building data lakes. But what about the last mile—connecting the dots for end users with dashboards and visualizations? Come to this session to learn how Amazon QuickSight allows you to connect to your AWS data and quickly build rich and interactive dashboards with self-serve and advanced analytics capabilities that can scale from tens to hundreds of thousands of users, without managing any infrastructure and only paying for what you use.
Is there an Updated SAA-C03 Practice Exam?
Yes as of August 2022.
This SAA-C03 sample exam PDF can give you a hint of what the real SAA-C03 exam will look like in your upcoming test. The sample questions also contain the necessary explanations and reference links that you can study.
Top-paying Cloud certifications:
- Google Certified Professional Cloud Architect — $175,761/year
- AWS Certified Solutions Architect – Associate — $149,446/year
- Azure/Microsoft Cloud Solution Architect — $141,748/year
- Google Cloud Associate Engineer — $145,769/year
- AWS Certified Cloud Practitioner — $131,465/year
- Microsoft Certified: Azure Fundamentals — $126,653/year
- Microsoft Certified: Azure Administrator Associate — $125,993/year
AWS Certified Solution Architect Associate Exam Prep Quiz App
Download AWS Solution Architect Associate Exam Prep Pro App (No Ads, Full version with answers) for:
Android – iOS – Windows 10 – Amazon Android
How to Load balance EC2 Instances in an Autoscaling Group?
In this AWS tutorial, we are going to discuss how we can make the best use of AWS services to build a highly scalable and fault-tolerant configuration of EC2 instances. The use of load balancers and Auto Scaling groups falls under a number of AWS best practices, including Performance Efficiency, Reliability, and high availability.
Before we dive into this hands-on tutorial on how exactly we can build this solution, let’s have a brief recap on what an Auto Scaling group is, and what a Load balancer is.
Autoscaling group (ASG)
An Autoscaling group (ASG) is a logical grouping of instances which can scale up and scale down depending on pre-configured settings. By setting Scaling policies of your ASG, you can choose how many EC2 instances are launched and terminated based on your application’s load. You can do this based on manual, dynamic, scheduled or predictive scaling.
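For example, one way to configure dynamic scaling is with a target-tracking scaling policy. A minimal sketch in Python with boto3, assuming an ASG named ExampleASG already exists (the policy name and target value are illustrative):

import boto3

autoscaling = boto3.client("autoscaling")

# Add or remove instances to keep average CPU across the group near 50%
autoscaling.put_scaling_policy(
    AutoScalingGroupName="ExampleASG",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)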
Elastic Load Balancer (ELB)
An Elastic Load Balancer (ELB) is a name describing a number of services within AWS designed to distribute traffic across multiple EC2 instances in order to provide enhanced scalability, availability, security and more. The particular type of Load Balancer we will be using today is an Application Load Balancer (ALB). The ALB is a Layer 7 Load Balancer designed to distribute HTTP/HTTPS traffic across multiple nodes – with added features such as TLS termination, Sticky Sessions and Complex routing configurations.
Getting Started
First of all, we open our AWS management console and head to the EC2 management console.
We scroll down on the left-hand side and select ‘Launch Templates’. A Launch Template is a configuration template which defines the settings for EC2 instances launched by the ASG.
Under Launch Templates, we will select “Create launch template”.
We specify the name ‘MyTestTemplate’ and use the same text in the description.
Under ‘Auto Scaling guidance’, tick the box which says ‘Provide guidance to help me set up a template that I can use with EC2 Auto Scaling’ and scroll down to the launch template contents.
When it comes to choosing our AMI (Amazon Machine Image) we can choose the Amazon Linux 2 under ‘Quick Start’.
The Amazon Linux 2 AMI is free tier eligible, and easy to use for our demonstration purposes.
Next, we select the ‘t2.micro’ under instance types, as this is also free tier eligible.
Under Network Settings, we create a new Security Group called ExampleSG in our default VPC, allowing HTTP access to everyone. It should look like this.
We can then add our IAM Role we created earlier. Under Advanced Details, select your IAM instance profile.
Then we need to include some user data which will load a simple web server and web page onto our Launch Template when the EC2 instance launches.
Under ‘advanced details’, and in ‘User data’ paste the following code in the box.
#!/bin/bash
yum update -y
yum install -y httpd.x86_64
systemctl start httpd.service
systemctl enable httpd.service
echo "Hello World from $(hostname -f)" > /var/www/html/index.html
Then simply click ‘Create Launch Template’ and we are done!
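If you prefer to script this step, the same launch template can be created with Python and boto3. This is a sketch under the assumption that the placeholder AMI ID below is replaced with a current Amazon Linux 2 AMI for your Region; note that the create_launch_template API expects the user data to be base64-encoded:

import base64
import boto3

ec2 = boto3.client("ec2")

user_data = """#!/bin/bash
yum update -y
yum install -y httpd.x86_64
systemctl start httpd.service
systemctl enable httpd.service
echo "Hello World from $(hostname -f)" > /var/www/html/index.html
"""

ec2.create_launch_template(
    LaunchTemplateName="MyTestTemplate",
    VersionDescription="MyTestTemplate",
    LaunchTemplateData={
        "ImageId": "ami-0abcdef1234567890",  # placeholder Amazon Linux 2 AMI ID
        "InstanceType": "t2.micro",
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)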
We are now able to build an Auto Scaling Group from our launch template.
On the same console page, select ‘Auto Scaling Groups’, and Create Auto Scaling Group.
We will call our Auto Scaling Group ‘ExampleASG’, and select the Launch Template we just created, then select next.
On the next page, keep the default VPC and select any default AZ and Subnet from the list and click next.
Under ‘Configure Advanced Options’, select ‘Attach to a new load balancer’.
You will notice the settings below will change and we will now build our load balancer directly on the same page.
Select the Application Load Balancer, and leave the default Load Balancer name.
Choose an ‘Internet Facing’ Load balancer, select another AZ and leave all of the other defaults the same. It should look something like the following.
Under ‘Listeners and routing’, select ‘Create a target group’ and select the target group which was just created. It will be called something like ‘ExampleASG-1’. Click next.
Now we get to Group Size. This is where we specify the desired, minimum and maximum capacity of our Auto Scaling Group.
Set the capacities as follows: a desired capacity of 2, with minimum and maximum values around it (for example, a minimum of 1 and a maximum of 4).
Click ‘skip to review’, and click ‘Create Auto Scaling Group’.
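The console steps above can also be expressed in code. A sketch in Python with boto3, where the subnet IDs and target group ARN are placeholders for the resources created earlier:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="ExampleASG",
    LaunchTemplate={"LaunchTemplateName": "MyTestTemplate", "Version": "$Latest"},
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=2,
    # Placeholder subnets in two different AZs of the default VPC
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    # Placeholder ARN of the ALB target group created above
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/ExampleASG-1/0123456789abcdef"
    ],
)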
You will now see the Auto Scaling Group building, and the capacity is updating.
After a short while, navigate to the EC2 Dashboard, and you will see that two EC2 instances have been launched!
To make sure our Auto Scaling group is working as it should – select any instance, and terminate the instance. After one instance has been terminated you should see another instance pending and go into a running state – bringing capacity back to 2 instances (as per our desired capacity).
If we also head over to the Load Balancer console, you will find our Application Load Balancer has been created.
If you select the load balancer and scroll down, you will find the DNS name of your ALB – it will look something like ‘ExampleASG-1-1435567571.us-east-1.elb.amazonaws.com’.
If you enter the DNS name into your browser, you should get the following page:
The message will display a ‘Hello World’ message including the IP address of the EC2 instance which is serving up the webpage behind the load balancer.
If you refresh the page a few times, you should see that the IP address listed will change. This is because the load balancer is routing you to the other EC2 instance, validating that our simple webpage is being served from behind our ALB.
The final step is to make sure you delete all of the resources you configured! Start by deleting the Auto Scaling group, and ensure you also delete your load balancer – this will ensure you don’t incur any charges.
Architectural Diagram
Below, you’ll find the architectural diagram of what we have built.
Learn how to Master AWS Cloud
Ultimate Training Packages – Our popular training bundles (on-demand video course + practice exams + ebook) will maximize your chances of passing your AWS certification the first time.
Membership – For unlimited access to our cloud training catalog, enroll in our monthly or annual membership program.
Challenge Labs – Build hands-on cloud skills in a secure sandbox environment. Learn, build, test and fail forward without risking unexpected cloud bills.
This post originally appeared on: https://digitalcloud.training/load-balancing-ec2-instances-in-an-autoscaling-group/
There are significant protections provided to you natively when you are building your networking stack on AWS. This wide range of services and features can become difficult to manage, and becoming knowledgeable about what tools to use in which area can be challenging.
The two main security components which can be confused within VPC networking are the Security Group and the Network Access Control List (NACL). When you compare a Security Group vs NACL, you will find that although they are fairly similar in general, there is a distinct difference in the use cases for each of these security features.
In this blog post, we are going to explain the main differences between Security Group vs NACL and talk about the use cases and some best practices.
First of all, what do they have in common?
The main thing that a security group and a NACL have in common is that they are both firewalls. So, what is a firewall?
Firewalls in computing monitor and control incoming and outgoing network traffic based on predetermined security rules. Firewalls provide a barrier between trusted and untrusted networks. The network layer which we are talking about in this instance is an Amazon Virtual Private Cloud – aka a VPC.
In the AWS cloud, VPCs are on-demand pools of shared resources, designed to provide a certain degree of isolation between different organizations and different teams within an account.
First, let’s talk about the particulars of a Security Group.
Security Group Key Features
Where do they live?
Security groups are tied to an instance – this can be an EC2 instance, an ECS cluster, or an RDS database instance – and they act as a firewall for the resources they are attached to. You have to purposely assign a security group to your instances if you don’t want them to use the default security group.
The default security group allows all outbound traffic, and allows inbound traffic only from other resources assigned to the same security group.
Any instance launched without an explicitly assigned security group gets the default group, and therefore these rules, applied.
Stateful or Stateless
Security groups are stateful in nature. As a result, any changes applicable to an incoming rule will also be automatically applied to the outgoing rule in the same way. For example, allowing an incoming port 80 will automatically open the outgoing port 80 – without you having to explicitly direct traffic in the opposite direction.
Allow or Deny Rules
The only rule set that can be used in security groups is the Allow rule set. Thus, you cannot blacklist a certain IP address from establishing a connection with the instances in your security group; this would have to be achieved using a different technology.
Limits
An instance can have multiple security groups. By default, AWS will let you apply up to five security groups to a virtual network interface, but it is possible to use up to 16 if you submit a limit increase request.
Additionally, you can have 60 inbound and 60 outbound rules per security group (for a total of 120 rules). IPv4 rules are enforced separately from IPv6 rules; a security group, for example, may have 60 IPv4 rules and 60 IPv6 rules.
Network Access Control Lists (NACLS)
Now let’s compare the Security Group vs NACLs using the same criteria.
Where do they live?
Network ACLs operate at the subnet level, so any instance in a subnet with an associated NACL will automatically follow the rules of that NACL.
Stateful or Stateless
Network ACLs are stateless. Consequently, any changes made to an incoming rule will not be reflected in an outgoing rule. For example, if you allow an incoming port 80, you would also need to apply the rule for outgoing traffic.
Allow or Deny Rules
Unlike a security group, NACLs support both allow and deny rules. With deny rules, you can explicitly prevent a certain IP address from establishing a connection; e.g., you can block a specific known malicious IP address from connecting to an EC2 instance.
Limits
A subnet can have only one NACL. However, you can associate one network ACL with one or more subnets within a VPC. By default, you can have up to 200 unique NACLs within a VPC; this is a soft limit that is adjustable.
Additionally, you can have 20 inbound and 20 outbound rules per NACL (for a total of 40 rules). IPv4 rules are enforced separately from IPv6 rules; a NACL, for example, may have 20 IPv4 rules and 20 IPv6 rules.
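To make the stateful/stateless difference concrete, here is a hedged sketch in Python with boto3; the group and NACL IDs are placeholders. One ingress rule is enough for the security group, while the NACL needs explicit rules in both directions:

import boto3

ec2 = boto3.client("ec2")

# Security group (stateful): allow inbound HTTP; return traffic is implicit
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# NACL (stateless): inbound 80 and outbound ephemeral ports both need rules
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder
    RuleNumber=100,
    Protocol="6",  # TCP
    RuleAction="allow",
    Egress=False,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 80, "To": 80},
)
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=100,
    Protocol="6",
    RuleAction="allow",
    Egress=True,  # return traffic must be allowed explicitly
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)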
We hope that you now more keenly understand the difference between NACLs and security groups.
A multi-account strategy in AWS can provide you with a secure and isolated platform from which to launch your resources. Whilst smaller organizations may only require a few AWS accounts, large corporations with many business units often require many accounts. These accounts may be organized hierarchically.
Building this account topology manually on the cloud requires a high degree of knowledge, and is rather error prone. If you want to set up a multi-account environment in AWS within a few clicks, you can use a service called AWS Control Tower.
AWS Control Tower allows your team to quickly set up and govern a secure, multi-account AWS environment, known as a landing zone. Built on the back of AWS Organizations, it automatically implements many accounts under the appropriate organizational units, with hardened service control policies attached. Provisioning new accounts happens at the click of a button, automating security configuration and ensuring you extend governance into new accounts without any manual intervention.
There are a number of key features which constitute AWS Control Tower, and in this article, we will explore each section and break down how it makes governing multiple accounts a lot easier.
The Landing Zone
A Landing Zone refers to the multi-account structure itself, which is configured to provide you with a compliant and secure set of accounts upon which to start building. A Landing Zone can include extended features like federated account access via SSO and centralized logging via AWS CloudTrail and AWS Config.
The Landing Zone’s accounts follow guardrails set by you to ensure you are compliant to your own security requirements. Guardrails are rules written in plain English, leveraging AWS CloudFormation in the background to establish a hardened account baseline.
Guardrails can fit into one of a number of categories:
- Mandatory – These come pre-configured on the accounts and cannot be removed. Examples include “Enable AWS Config in All Available Regions” and “Disallow Deletion of Log Archive”.
- Strongly recommended – These are useful but not always necessary depending on your use case, and whether you use them is at your discretion. Examples include “Detect Whether Public Read Access to Amazon S3 Buckets is Allowed” and “Detect Whether Amazon EBS Volumes are Attached to Amazon EC2 Instances”.
- Elective – Elective guardrails allow you to lock down certain behaviors which are commonly restricted in an AWS environment. These guardrails are not enabled by default and can be disabled at any time. Examples include “Detect Whether MFA is Enabled for AWS IAM Users” and “Detect Whether Versioning for Amazon S3 Buckets is Enabled”.
Guardrails provide immediate protection from any number of scenarios, without the need to be able to read or write complex security policies – a big upside compared to manual provisioning of permissions.
Account Factory
Account Factory is a component of Control Tower which allows you to automate the secure provisioning of new accounts, which exist according to defined security principles. Several pre-approved configurations are included as part of the launch of your new accounts including Networking information, and Region Selection. You also get seamless integration with AWS Service Catalog to allow your internal customers to configure and build new accounts. Third party Infrastructure as Code tooling like Terraform (Account Factory for Terraform) can be used also to provide your cloud teams the ability to benefit from a multiple account setup whilst using tools they are familiar with.
Architecture of Control Tower
Lets now dive into how Control Tower looks, with an architectural overview.
As you can see, there are a number of OUs (Organizational Units) in which accounts are placed. These are provisioned for you using AWS Organizations.
- Security OU – The Security OU contains two accounts, the Log Archive Account and the Audit Account. The Log Archive Account serves as a central store for all CloudTrail and AWS Config logs across the Landing Zone, securely stored within an S3 Bucket.
- Sandbox OU – The Sandbox OU is setup to host testing accounts (Sandbox Accounts) which are safely isolated from any production workloads.
- Production OU – This OU is for hosting all of your production accounts, containing production workloads.
- Non-Production OU – This OU can serve as a pre-production environment, in which further testing and development can take place.
- Suspended OU – This is a secure OU where you can move any deleted, reused, or breached accounts. Permissions in this OU are extremely locked down, ensuring it is a safe location.
- Shared Services OU – The Shared Services OU contains accounts in which services shared across multiple other accounts are hosted. This consists of three accounts:
- The Shared Services account (where the resources are directly shared)
- The Security Services Account (hosting services like Amazon Inspector, Amazon Macie, AWS Secrets Manager as well as any firewall solutions.)
- The Networking Account – This contains shared networking components, such as VPC endpoints and DNS endpoints.
Any organization can benefit from using AWS Control Tower. Whether you’re a multinational corporation with years of AWS experience or a burgeoning start-up with little experience in the cloud, a Landing Zone can give you confidence that you are provisioning your architecture efficiently and securely.
This article originally appeared on: https://digitalcloud.training/
- Confused about the best way to keep Lambdas warm (by /u/yourjusticewarrior2, May 22, 2025): I have a Java 8 AWS Lambda setup that processes records via API Gateway, saves data to S3, sends Firebase push notifications, and asynchronously invokes another Lambda for background tasks. Cold starts initially took around 20 seconds, while warmed execution was about 500 ms. To mitigate this, a scheduled event was used to ping the Lambda every 2 minutes, which helped but still resulted in periodic cold starts roughly once an hour. Switching to provisioned concurrency with two instances reduced the cold start time to 10 seconds but didn’t match the 500 ms warm performance. Why does provisioned concurrency not fully eliminate cold start delays, and is it worth paying for if it doesn’t maintain consistently low response times? Lambda stats: Java 8 on Amazon Linux 2, x86_64 architecture, 1024 MB memory (uses ~200 MB on invocation), and 512 MB ephemeral storage.
- Does it make sense to combine AWS WAF + Cloudflare? (by /u/Developer_Kid, May 22, 2025): Hi, I’m kinda new to AWS. First I was trying to proxy requests through Cloudflare, since I know Cloudflare and have used it on some projects before, but then I started learning about AWS WAF, principally how to implement it in front of Amplify or API Gateway. Has anyone used both who can tell me whether AWS WAF is as powerful as Cloudflare? I’m not asking about prices (I think Cloudflare is way cheaper), but about security in general. Any advice?
- Request for customized EC2 (by /u/North-Equal6591, May 22, 2025): Good day! Is it possible to request a customized EC2 instance from AWS? Currently, AWS does not offer the specifications we need (EC2 with an NVIDIA GPU and at least a 4.3 GHz clock speed). I tried reaching out to AWS via this link: https://aws.amazon.com/contact-us/sales-support/ But could anyone confirm whether a customized EC2 instance is really possible? We only have the Basic support plan.
- Global Accelerator: unexpected traffic NA-AU (by /u/alekzio, May 21, 2025): We have a Global Accelerator in front of our ALB. Almost 90% of the traffic has switched from NA-EU (origin North America, destination Europe) to NA-AU (origin North America, destination Australia). We have checked the origin IPs from our ALB logs and we mostly see European IPs. As far as I understand, if somebody is coming from an AU edge location, they should be in either Australia or New Zealand. https://aws.amazon.com/global-accelerator/features/ lists the Australia and New Zealand edge locations as Auckland, New Zealand; Melbourne, Australia; Perth, Australia; and Sydney, Australia. A chart from the billing dashboard, filtered by the Global Accelerator service, shows the GBs transferred from NA to both EU and AU. Our operations are not designed to expect such a change, and the ALB logs show purely European IPs. I can’t explain this traffic to AU. Any ideas?
- Account suspended: require temporary access (by /u/Skunki123, May 21, 2025): Hello, my organization’s AWS account has been suspended due to non-payment of the April and May invoices (credit card issues are preventing us from making the payment). We are working on resolving those card issues and expect them to be resolved shortly. However, we need temporary console/IAM access to the account to be able to restore and preserve crucial services. Is there any possibility of such access?
What are AWS STEP FUNCTIONS?
There are many trends within the current cloud computing industry that have a sway on the conversations which take place throughout the market. One of these key areas of discussion is ‘Serverless’.
Serverless application deployment is a way of provisioning infrastructure in a managed way, without having to worry about building and maintaining servers – you launch the service and it works. Scaling, high availability, and automated processes are looked after by the managed AWS serverless services. AWS Step Functions provides a useful way to coordinate the components of distributed applications and microservices using visual workflows.
What is AWS Step Functions?
AWS Step Functions lets developers build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services.
Using Step Functions workflows, developers can focus on higher-value business logic instead of worrying about failures, retries, parallelization, and service integrations. In other words, AWS Step Functions is a serverless workflow orchestration service that can make developers’ lives much easier.
Components and Integrations
AWS Step Functions consist of a few components, the first being a State Machine.
What is a state machine?
A state machine uses given states and transitions to complete the tasks at hand. It is an abstract machine (system) that can be in only one state at a time, but it can switch between states. As a result, it doesn’t allow infinite loops, which removes an entire (and often costly) source of errors.
With AWS Step Functions, you can define workflows as state machines, which simplify complex code into easy-to-understand statements and diagrams. The process of building applications and confirming they work as expected is actually much faster and easier.
State
In a state machine, a state is referred to by its name, which can be any string, but must be unique within the state machine. State instances exist until their execution is complete.
An individual component of your state machine can be in any of the following 8 types of states:
- Task state – Do some work in your state machine. From a Task state, AWS Step Functions can call Lambda functions directly
- Choice state – Make a choice between different branches of execution
- Fail state – Stops execution and marks it as failure
- Succeed state – Stops execution and marks it as a success
- Pass state – Simply pass its input to its output or inject some fixed data
- Wait state – Provide a delay for a certain amount of time or until a specified time/date
- Parallel state – Begin parallel branches of execution
- Map state – Adds a for-each loop condition
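To make this concrete, here is a hedged sketch in Python with boto3 of creating a minimal state machine with a Task state followed by a Succeed state; the Lambda function ARN and IAM role ARN are placeholders:

import json
import boto3

sfn = boto3.client("stepfunctions")

definition = {
    "StartAt": "DoWork",
    "States": {
        "DoWork": {
            "Type": "Task",
            # Placeholder Lambda function invoked by the Task state
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:DoWork",
            "Next": "Done",
        },
        "Done": {"Type": "Succeed"},
    },
}

sfn.create_state_machine(
    name="ExampleStateMachine",
    definition=json.dumps(definition),
    # Placeholder role that allows Step Functions to invoke the Lambda
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)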
Limits
There are some limits which you need to be aware of when you are using AWS Step Functions, such as the maximum execution duration and the maximum execution history size; check the AWS Step Functions documentation for the current values.
Use Cases and Examples
If you need to build workflows across multiple Amazon services, then AWS Step Functions are a great tool for you. Serverless microservices can be orchestrated with Step Functions, data pipelines can be built, and security incidents can be handled with Step Functions. It is possible to use Step Functions both synchronously and asynchronously.
Instead of manually orchestrating long-running, multiple ETL jobs or maintaining a separate application, Step Functions can ensure that these jobs are executed in order and complete successfully.
Step Functions is also a great way to automate recurring tasks, such as updating patches, selecting infrastructure, and synchronizing data; it will scale automatically, respond to timeouts, and retry failed tasks.
With Step Functions, you can create responsive serverless applications and microservices with multiple AWS Lambda functions without writing code for workflow logic, parallel processes, error handling, or timeouts.
Additionally, services and data can be orchestrated that run on Amazon EC2 instances, containers, or on-premises servers.
Pricing
Each time a step of your workflow is executed, Step Functions counts a state transition. You are charged for the total number of state transitions across all your state machines, including retries.
There is a Free Tier for AWS Step Functions of 4,000 state transitions per month; beyond that, state transitions cost a flat rate of $0.000025 each. For example, one million state transitions in a month would cost roughly (1,000,000 − 4,000) × $0.000025 ≈ $24.90.
Summary
In summary, Step Functions are a powerful tool which you can use to improve the application development and productivity of your developers. By migrating your logic workflows into the cloud you will benefit from lower cost, rapid deployment. As this is a serverless service, you will be able to remove any undifferentiated heavy lifting from the application development process.
Interview Questions
Q: How does AWS Step Function create a State Machine?
A: A state machine is a collection of states which allows you to perform tasks, in the form of Lambda functions or another service, in sequence, passing the output of one task to another. You can add branching logic based on the output of a task to determine the next state.
Q: How can we share data in AWS Step Functions without passing it between the steps?
A: You can make use of InputPath and ResultPath. In the ValidationWaiting step, you can set properties like the ones shown in the sketch after this answer (in the state machine definition).
This way you can send the external service only the data it actually needs, and you won’t lose access to any data that was previously in the input.
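As a hedged illustration of the kind of properties the answer refers to, here is what a ValidationWaiting state might look like, expressed as a Python dict in Amazon States Language terms; the resource ARN, paths, and state names are illustrative:

validation_waiting = {
    "ValidationWaiting": {
        "Type": "Task",
        # Placeholder resource called by this state
        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:Validate",
        "InputPath": "$.data",               # pass only the "data" subtree to the task
        "ResultPath": "$.validationResult",  # merge the task output back into the input
        "Next": "NextState",
    }
}

Because ResultPath writes the task’s output under a new key instead of replacing the whole input, the rest of the original input remains available to later states.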
Q: How can I diagnose an error or a failure within AWS Step Functions?
A: The following are some possible failure events that may occur
- State Machine Definition Issues.
- Task Failures due to exceptions thrown in a Lambda Function.
- Transient or Networking Issues.
- A task has surpassed its timeout threshold.
- Privileges are not set appropriately for a task to execute.
Source: This AWS Step Function post originally appeared on: https://digitalcloud.training/
AWS Secrets Manager vs SSM Parameter Store
If you want to be an AWS cloud professional, you need to understand the differences between the myriad of services AWS offers. You also need an in-depth understanding of how to use the security services to ensure that your account infrastructure is highly secure and safe to use. This is job zero at AWS, and there is nothing that is taken more seriously than security. AWS makes it really easy to implement security best practices and provides you with many tools to do so.
AWS Secrets Manager and SSM Parameter Store sound like very similar services on the surface. However, when you dig deeper and compare AWS Secrets Manager vs SSM Parameter Store, you will find some significant differences which help you understand exactly when to use each tool.
AWS Secrets Manager
AWS Secrets Manager is designed to provide encryption for confidential information (like database credentials and API keys) that needs to be guarded safely in a secure way. Encryption is automatically enabled when creating a secret entry and there are a number of additional features we are going to explore in this article.
Through AWS Secrets Manager, you can manage a wide range of secrets: database credentials, API keys, and other self-defined secrets are all eligible for this service.
If you are responsible for storing and managing secrets within your team, as well as ensuring that your company follows regulatory requirements – this is possible through AWS Secrets Manager which securely and safely stores all secrets within one place. Secrets Manager also has a large degree of added functionality.
SSM Parameter store
SSM Parameter store is slightly different. The key differences become evident when you compare how AWS Secrets Manager vs SSM Parameter Store are used.
The SSM Parameter Store focuses on a slightly wider set of requirements. Based on your compliance requirements, SSM Parameter Store can be used to store secrets, encrypted or unencrypted, for use by your code base.
By storing environmental configuration data and other parameters, it simplifies and optimizes the application deployment process. AWS Secrets Manager, by contrast, adds key rotation, cross-account access, and tighter integration with other AWS services.
Based on this explanation, you may think that they both sound similar. Let’s break down the similarities and differences between these services.
Similarities
Managed Key/Value Store Services
Both services allow you to store values using a name and key. This is an extremely useful aspect of both services, as the deployment of an application can reference different parameters or different secrets based on the deployment environment, allowing customizable and highly integrated deployments of your applications.
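A quick sketch of reading a value from each service with Python and boto3; the parameter and secret names are placeholders:

import boto3

# SSM Parameter Store: fetch and decrypt a SecureString parameter
ssm = boto3.client("ssm")
param = ssm.get_parameter(Name="/myapp/prod/db_url", WithDecryption=True)
print(param["Parameter"]["Value"])

# Secrets Manager: fetch a secret value
sm = boto3.client("secretsmanager")
secret = sm.get_secret_value(SecretId="myapp/prod/db_credentials")
print(secret["SecretString"])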
Both Referenceable in CloudFormation
You can use the powerful Infrastructure as Code (IaC) tool AWS CloudFormation to build your applications programmatically. The effortless deployment of either product using CloudFormation allows a seamless developer experience, without using painful manual processes.
Similar Encryption Options
They are both inherently very secure services – and you do not have to choose one over another based on the encryption offered by either service.
Both services encrypt values through another AWS security service, KMS (the Key Management Service), and IAM policies can be written so that only certain IAM users and roles have permission to decrypt the value. This restricts access to anyone who doesn’t need it, abides by the principle of least privilege, and helps you meet compliance standards.
Versioning
Versioning is the ability to save multiple, iteratively developed versions of something, so you can quickly restore lost versions, maintain multiple copies of the same thing, and so on.
Both services support versioning of secret values. This allows you to view multiple previous versions of your parameters, and you can optionally promote a former version to be the current version, which can be useful as your application changes. One nuance: while SSM Parameter Store only allows one version of a parameter to be active at any given time, Secrets Manager allows multiple versions to exist at the same time while you are rotating a secret using staging labels.
Given that there are lots of similarities between the two services, it is now time to view and compare the differences, along with some use cases of either service.
Differences
Cost
The costs differ across the services, and SSM tends to cost less than Secrets Manager. Standard parameters are free: you won’t be charged for the first 10,000 parameters you store. Advanced parameters, however, will cost you. AWS Secrets Manager bills a fixed fee per secret per month, plus a fee for every 10,000 API calls.
This may factor into how you use each service and how you define your cloud spending strategy, so this is valuable information.
Password generation
A useful feature within AWS Secrets Manager allows us to generate random data during the creation phase, enabling the secure and auditable creation of strong, unique passwords that can then be referenced in the same CloudFormation stack. This allows our applications to be fully built using IaC, with all the benefits that entails.
AWS Systems Manager Parameter Store, on the other hand, doesn’t work this way and doesn’t allow us to generate random data: we need to do it manually using the console or AWS CLI, and this can’t happen during the creation phase.
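As a sketch of the difference, Secrets Manager exposes password generation directly through its API (the GetRandomPassword call; CloudFormation offers the equivalent GenerateSecretString property), while Parameter Store has no counterpart:

```python
import boto3

secrets = boto3.client("secretsmanager")

# Ask Secrets Manager to generate a strong, unique 32-character password;
# there is no equivalent SSM Parameter Store API call
resp = secrets.get_random_password(PasswordLength=32, ExcludePunctuation=True)
password = resp["RandomPassword"]
```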
Rotation of Secrets
A powerful feature of AWS Secrets Manager is the ability to automatically rotate credentials on a pre-defined schedule that you set. AWS Secrets Manager integrates this feature natively with many AWS services, while automated rotation is simply not possible with AWS Systems Manager Parameter Store: you would have to refresh and update the data yourself, with a lot more manual setup, to achieve the functionality that Secrets Manager supports natively.
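A minimal sketch of enabling rotation with boto3; the secret name and rotation Lambda ARN are hypothetical, and the rotation function itself has to exist separately:

```python
import boto3

secrets = boto3.client("secretsmanager")

# Attach a rotation Lambda and rotate the secret automatically every 30 days
secrets.rotate_secret(
    SecretId="myapp/prod/db-credentials",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-db-creds",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```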
Cross-Account Access
Firstly, there is currently no way to attach a resource-based IAM policy to AWS Systems Manager Parameter Store (Standard type). This means that cross-account access is not possible for Parameter Store; if you need this functionality, you will have to configure an extensive workaround or use AWS Secrets Manager.
Size of Secrets
Each option has a maximum size for a secret or parameter.
Secrets Manager can store secrets of up to 10 KB in size.
Standard parameters can store up to 4,096 characters (4 KB) per entry, and advanced parameters can store entries of up to 8 KB.
Multi-Region Deployment
As with many other features of AWS Secrets Manager, AWS SSM Parameter Store does not offer the same functionality here. You can’t easily replicate your secrets across multiple regions with Parameter Store, and you will need to implement an extensive workaround for this to work.
In terms of use cases, you may want to use AWS Secrets Manager to store your encrypted secrets with easy rotation. If you require a feature rich solution for managing your secrets to stay compliant with your regulatory and compliance requirements, consider choosing AWS Secrets Manager.
On the other hand, you may want to choose SSM Parameter Store as a cheaper option for storing your encrypted or unencrypted secrets. Parameter Store provides more limited functionality, but it enables your application deployments by storing your parameters in a cheap and secure way.
Source: This post originally appeared on https://digitalcloud.training/
Download AWS Solution Architect Associate Exam SAA-C03 Prep Quiz App for:
All Platforms (PWA) – Android – iOS – Windows 10 – Amazon Android

Source: Disaster Recovery in the AWS Cloud
When you are building applications in the AWS cloud, you have to go to painstaking lengths to make your applications durable, resilient and highly available.
Whilst AWS can help you with this for the most part, it is nearly impossible to see a situation in which you will not need some kind of Disaster Recovery plan.
An organization’s Business Continuity and Disaster Recovery (BCDR) program is a set of approaches and processes that can be used to recover from a disaster and resume regular business operations after the disaster has ended. Examples of disasters include a natural calamity, a power outage, an employee mistake, a hardware failure, and a cyberattack.
With the implementation of a BCDR plan, businesses can operate as close to normal as possible after an unexpected interruption, and with the least possible loss of data.
In this blog post, we will explore three notable disaster recovery strategies, each with different merits and drawbacks. Before we can appreciate these different methods, however, we need to break down some key Disaster Recovery terminology. We will examine all of these strategies using AWS infrastructure as a lens.
What is Disaster Recovery?
The following definition provides an excellent summary of disaster recovery – an extremely broad term:
“Disaster recovery involves a set of policies, tools, and procedures to enable the recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster.”
This definition emphasizes the necessity of recovering systems, tools, etc. after a disaster. Disaster Recovery depends on many factors, including:
• Financial plan
• Competence in technology
• Use of tools
• The Cloud Provider used
It is essential to understand some key terminology, including RPO and RTO, in order to evaluate disaster recovery efficacy:
How do RPOs and RTOs differ?
RPO (Recovery Point Objective)
The Recovery Point Objective (RPO) is the maximum acceptable amount of data loss after an unplanned data-loss incident, expressed as an amount of time. Because it is a maximum, achieving a low RPO requires backing up or replicating your data frequently.
RTO (Recovery Time Objective)
The Recovery Time Objective (RTO) is the maximum tolerable length of time that a computer, system, network or application can be down after a failure or disaster occurs. It is measured in minutes or hours, and achieving as low an RTO as possible depends on how quickly you can get your application back online.
Disaster Recovery Methods
Now that we understand these key concepts, we can break down three popular disaster recovery methods, namely Backup and Restore, Pilot Light, and Multi-Site Active/Active.
Backup and Restore
Data loss or corruption can be mitigated by utilizing backup and restore, and replicating data to other data centers can further mitigate the effects of a disaster. In addition to restoring the data, you also need to redeploy the infrastructure, configuration, and application code in the recovery data center.
The recovery time objective (RTO) and recovery point objective (RPO) of backup and restore are higher than those of the other strategies. The result is longer downtime and greater data loss between the disaster event and the time of recovery. Even so, backup and restore may still be the most cost-effective and easiest strategy for your workload: not all workloads require an RTO and RPO measured in minutes or less.
RPO is dependent on how frequently you take snapshots, and RTO is dependent on how long it takes to restore snapshots.
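As a sketch under assumed names, a scheduled job (for example a Lambda function triggered on a timer) could snapshot an EBS volume at whatever interval satisfies your RPO; the volume ID below is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshot an EBS volume (hypothetical ID). Run this on a schedule chosen
# to match your RPO, e.g. nightly for a 24-hour RPO.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Nightly backup for DR",
)
print(snapshot["SnapshotId"])
```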
Pilot Light
As far as affordability and reliability are concerned, Pilot Light strikes a balance between the two. There is one key difference between Backup and Restore and Pilot Light: with Pilot Light, your core functionality is always running somewhere, either in another region or in another account and region.
With Backup and Restore, for example, all of your data might be synced into an S3 bucket so that you can retrieve it in case of a disaster. With Pilot Light, by contrast, the data is synchronized with an always-on, always-available database replica.
Other core services, such as an EC2 instance with all of the necessary software already installed, will also be available and ready to use at the touch of a button. An Auto Scaling policy would be in place for each of these EC2 instances to ensure they scale out in a timely manner to meet your production needs as soon as possible. This strategy lowers the chance of extended downtime, at the cost of keeping small parts of your architecture running all of the time.
Multi-Site Active/Active
Having an exactly mirrored application across multiple AWS regions or data centers is the most resilient cloud disaster recovery strategy.
In the multi-site active/active strategy, you will be able to achieve the lowest RTO (recovery time objective) and RPO (recovery point objective). However, it is important to take into account the potential cost and complexity of operating active stacks in multiple locations.
A multi-AZ workload stack runs in every Region to ensure high availability, and data is replicated live between the data stores in each Region. This data is also backed up, since backups are crucial for protecting against disasters that lead to the loss or corruption of data.
Only the most demanding applications should use this DR method: it has the lowest RTOs and RPOs of any DR technique, but also the highest cost and complexity.
Conclusion
It is impossible to build a Disaster Recovery plan that fits all circumstances – no “one size fits all” approach exists. Budget ahead of time, and ensure that you don’t spend more than you can afford. It may seem like a lot of money is being spent on “what ifs?”, but if your applications cannot be allowed to go down, this spending gives you the capability to ensure they stay up.

S3 vs EBS vs EFS — Comparing AWS Storage Services
AWS offers many services, so many that it can often get pretty confusing for beginners and experts alike. This is especially true when it comes to the many storage options AWS provides its users. Knowing the benefits and use cases of AWS storage services will help you design the best solution. In this article, we’ll be looking at S3 vs EBS vs EFS.
So, what are these services and what do they do? Let’s start with S3.
Amazon S3 Benefits
The Amazon Simple Storage Service (Amazon S3) is AWS’s object storage solution. If you’ve ever used a service like Google Drive or Dropbox, you’ll know generally what S3 can do. At first glance, S3 is simply a place to store files, photos, videos, and other documents. However, after digging deeper, you’ll uncover the many functionalities of S3, making it much more than the average object storage service.
Some of these functionalities include scalable solutions, which essentially means that if your project gets bigger or smaller than originally expected, S3 can grow or shrink to easily meet your needs in a cost-effective manner. S3 also helps you to easily manage data, giving you the ability to control who accesses your content. With S3 you have data protection against all kinds of threats. It also replicates your data for increased durability and lets you choose between different storage classes to save you money.
S3 is incredibly powerful, so powerful, in fact, that even tech-giant Netflix uses S3 for its services. If you like Netflix, you have AWS S3 to thank for its convenience and efficiency! In fact, many of the websites you access on a daily basis either run off of S3 or use content stored in S3. Let’s look at a couple of use cases to get a better idea of how S3 is used in the real world.
Amazon S3 Use Cases
Have you ever accidentally deleted something important? S3 has backup and restore capabilities, through versioning and deletion protection, to make sure a user doesn’t lose data. Versioning means that AWS will save a new version of a file every time it’s updated, and deletion protection makes sure a user has the right permissions before deleting a file.
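A minimal boto3 sketch of enabling versioning on a hypothetical bucket:

```python
import boto3

s3 = boto3.client("s3")

# Turn on versioning so every overwrite keeps the previous object version
s3.put_bucket_versioning(
    Bucket="my-example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```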
What would a company do during an unexpected power outage or if their on-premises data center suddenly crashed? S3 data is protected in an Amazon managed data center, the same data centers Amazon uses to host their world-famous shopping website. By using S3, users get a second storage option without having to directly pay the rent and utilities of a physical site.
Some businesses need to store financial, medical, or other data mandated by industry standards. AWS allows users to archive this type of data with S3 Glacier, one of the many S3 storage classes to choose from. S3 Glacier is a cost-effective solution for archiving and one of the best in the market today.
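As an illustrative sketch, an object can be written straight into the Glacier storage class; the bucket, key and local file are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Archive a compliance document directly into Glacier Flexible Retrieval
with open("statement.pdf", "rb") as f:
    s3.put_object(
        Bucket="my-archive-bucket",
        Key="records/2023/statement.pdf",
        Body=f,
        StorageClass="GLACIER",
    )
```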
Amazon EBS Benefits
Amazon Elastic Block Store (Amazon EBS) is an umbrella term for AWS’s block storage services. EBS is different from S3 in that it provides a storage volume directly connected to EC2 (Elastic Compute Cloud). EBS allows you to store files directly on an EC2 instance, letting the instance access your files quickly and cheaply. So when you hear or read about EBS, think “EC2 storage.”
You can customize your EBS volumes with the configuration best suited for the workload. For example, if you have a workload that requires greater throughput, then you could choose a Throughput Optimized HDD EBS volume. If you don’t have any specific needs for your workload then you could choose an EBS General Purpose SSD. If you need a high-performance volume then an EBS Provisioned IOPS SSD volume would do the trick. If you don’t understand yet, that’s okay! There’s a lot to learn about these volume types and we’ll cover that all in our video courses.
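A hedged boto3 sketch of creating the three volume types mentioned above; the Availability Zone, sizes and IOPS figures are placeholder values:

```python
import boto3

ec2 = boto3.client("ec2")

# General Purpose SSD for everyday workloads
gp = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp3")

# Provisioned IOPS SSD when you need guaranteed high performance
io = ec2.create_volume(
    AvailabilityZone="us-east-1a", Size=100, VolumeType="io2", Iops=10000
)

# Throughput Optimized HDD for large, sequential workloads
st = ec2.create_volume(AvailabilityZone="us-east-1a", Size=500, VolumeType="st1")
```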
Just remember that EBS works with EC2 in a similar way to how your hard drive works with your computer. An EBS lets you save files locally to an EC2 instance. This storage capacity allows your EC2 to do some pretty powerful stuff that would otherwise be impossible. Let’s look at a couple of examples.
Amazon EBS Use Cases
Many companies look for cheaper ways to run their databases. Amazon EBS provides both Relational and NoSQL Databases with scalable solutions that have low-latency performance. Slack, the messaging app, uses EBS to increase database performance to better serve customers around the world.
Another use case of EBS involves backing up your instances. Because EBS is an AWS native solution, the backups you create in EBS can easily be uploaded to S3 for convenient and cost-effective storage. This way you’ll always be able to recover to a certain point-in-time if needed.
Amazon EFS Benefits
Elastic File System (EFS) is Amazon’s way of allowing businesses to share file data across multiple EC2 or on-premises instances simultaneously. EFS is an elastic, serverless service: it automatically grows and shrinks with your business’s file storage needs, without you having to provision or manage it.
Some advantages include being able to divide up your content between frequently accessed or infrequently accessed storage classes, helping you save some serious cash. EFS is an AWS native solution, so it also works with containers and functions like Amazon Elastic Container Service (ECS) and AWS Lambda.
Imagine an international company has a hundred EC2 instances with each hosting a web application (a website like this one). Hundreds of thousands of people are accessing these servers on a regular basis — therefore producing HUGE amounts of data. EFS is the AWS tool that would allow you to connect the data gathered from hundreds, even thousands of instances so you can perform data analytics and gather key business insights.
Amazon EFS Use Cases
Amazon Elastic File System (EFS) provides an easy-to-use, high-performing, and consistent file system needed for machine learning and big data workloads. Tons of data scientists use EFS to create the perfect environment for their heavy workloads.
EFS provides an effective means of managing content and web applications. EFS mimics many of the file structures web developers often use, making it easy to learn and implement in web applications like websites or other online content.
When companies like Discover and Ancestry switched from legacy storage systems to Amazon EFS they saved huge amounts of money due to decreased costs in management and time.
S3 vs EBS vs EFS Comparison Table

| Service | Storage type | Attached to | Typical use cases |
| --- | --- | --- | --- |
| Amazon S3 | Object storage | Accessed over HTTPS/API from anywhere | Files, photos, videos, static web content, backup and archive |
| Amazon EBS | Block storage | A single EC2 instance | Databases, boot volumes, low-latency instance storage |
| Amazon EFS | File storage | Many EC2 (and on-premises) instances | Shared file systems, web serving, big data and ML workloads |
AWS Storage Summed Up
- S3 is for object storage. Think photos, videos, files, and simple web pages.
- EBS is for EC2 block storage. Think of a computer’s hard drive.
- EFS is a file system for many EC2 instances. Think multiple EC2 instances and lots of data.
I hope that clears up AWS storage options. Of course, we can only cover so much in an article but check out our AWS courses for video lectures and hands-on labs to really learn how these services work.
Thanks for reading!

Serverless computing has been on the rise the last few years, and whilst there is still a large number of customers who are not cloud-ready, there is a larger contingent of users who want to realize the benefits of serverless computing to maximize productivity and to enable newer and more powerful ways of building applications.
Serverless in cloud computing
Serverless is a cloud computing execution model in which the cloud provider allocates machine resources on demand and manages the servers on behalf of their customers. Cloud service providers still use servers to execute code for developers, which makes the term “serverless” a misnomer. There is always a server running in the background somewhere, and the cloud provider (AWS in this case) will run the infrastructure for you and leave you with the room to build your applications.
AWS Lambda
Within the AWS world, the principal Serverless service is AWS Lambda. Using AWS Lambda, you can run code for virtually any type of application or backend service without provisioning or managing servers. AWS Lambda functions can be triggered from many services, and you only pay for what you use.
So how does Lambda work? Lambda runs your code on high-availability compute infrastructure and manages your compute resources for you. This includes server and operating system maintenance, capacity provisioning and automatic scaling, code and security patch deployment, and code monitoring and logging. All you need to do is supply the code.
Source: https://digitalcloud.training/aws-lambda-versions-and-aliases/
How different services trigger Lambda functions:
https://docs.aws.amazon.com/lambda/latest/dg/lambda-services.html
API Gateway APIs in front of Lambda functions:
https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html
The 10 Best AWS Lambda Use Cases
AWS Lambda is a powerful service that has in recent years elevated AWS to be the leader not only in serverless architecture development, but within the cloud industry in general. For those of you who don’t know, Lambda is a serverless, event-driven compute service that lets you run code without provisioning or managing servers, and it can be used for virtually any type of application or backend service. Its serverless nature and its wide appeal across different use cases have made AWS Lambda a useful tool for running short compute operations in the cloud. What makes Lambda better than other options? Lambda handles the operational and administrative tasks on your behalf, such as provisioning capacity, monitoring fleet health, deploying and running your code, and monitoring and logging it.
The use cases for AWS Lambda are varied and cannot be sufficiently explored in one blog post. However, we have put together the top ten use cases in which Lambda shines best.
1: Processing uploaded S3 objects
Once your files land in S3 buckets, you can immediately start processing them with Lambda using S3 object event notifications. Using AWS Lambda for thumbnail generation is a great example of this use case: the solution is cost-effective and you won’t have to worry about scaling up, since Lambda can handle any load you place on it. The alternative to a serverless function handling this request is an EC2 instance spinning up every time a photo needs converting to a thumbnail, or leaving an EC2 instance running 24/7 on the off chance that a thumbnail needs to be created. This use case requires a low-latency, highly responsive, event-driven architecture that allows your application to perform effectively at scale.
2: Document editing and conversion in a hurry
When objects are uploaded to Amazon S3, you can leverage AWS Lambda to change the material to meet any business goal you may have. This can include editing document types and adding watermarks to important corporate documents. For example, you could leverage a RESTful API, using Amazon S3 Object Lambda to convert documents to PDF and apply a watermark based on the requesting user. You could also convert a file from DOC to PDF automatically upon upload to a particular S3 bucket. The use cases within this field are almost unlimited.
3: Cleaning up the backend
Any consumer-oriented website needs a fast response time as one of its top priorities. Slow response times, or even a visible delay, can cause traffic to be lost. It is likely that your consumers will simply switch to another site if yours is too busy dealing with background tasks to display the next page or search results in a timely manner. While some sources of delay are beyond your control, such as slow ISPs, there are things you can do to improve your response time. How does AWS Lambda come into play? Backend tasks should not delay frontend requests. You can send the data to an AWS Lambda process if you need to parse user input to store it in a database, or if there are other input-processing tasks that are not necessary for rendering the next page. AWS Lambda can then clean up and send the data to your database or application.
4: Creating and operating serverless websites
Maintaining a dedicated server, even a virtual one, is outdated. Provisioning the instances, updating the OS, and so on takes a lot of time and distracts you from focusing on core functionality. You don’t need to manage a single server or operating system when you use AWS Lambda and other AWS services to build a powerful website. For a basic version of this architecture you could use AWS API Gateway, DynamoDB, Amazon S3 and Amazon Cognito User Pools to achieve a simple, low-effort and highly scalable website for any of your business use cases.
5: Real-time processing of bulk data
It is not unusual for an application, or even a website, to handle a certain amount of real-time data at any given time. Depending on how the data is generated, it can come from communication devices, peripherals interacting with the physical world, or user input devices. Generally, this data arrives in short bursts, or even a few bytes at a time, in formats that are easy to parse. Nevertheless, there are times when your application might need to handle large amounts of streaming input data, and moving it to temporary storage for later processing may not be the best option. It is often necessary to identify specific values from a stream of data collected from a remote device, such as a telemetry device. You can handle the necessary real-time tasks without hindering the operation of your main application by sending the stream of data to a Lambda application on AWS that can pull and process the required information quickly.
6: Rendering pages in real-time
Lambda can play a significant role if you are using predictive page rendering to prepare webpages for display on your website. For example, if you want to retrieve documents and multimedia files for use in the next requested page, you can use a Lambda-based application to retrieve them and perform the initial stages of rendering them for display.
7: Automated backups
When you operate an enterprise application in the cloud, manual tasks like backing up your database or other storage media can fall by the wayside. Taking the undifferentiated heavy lifting out of your operations lets you focus on what delivers value. Using Lambda scheduled events is a great way of performing housekeeping within your account. With the boto3 Python libraries and AWS Lambda, you can create backups, check for idle resources, generate reports, and perform other common tasks quickly.
8: Email campaigns using AWS Lambda & SES
You can build simple email campaigns to send mass emails to potential customers and improve your business outcomes. Any organization that engages in marketing includes mass mailing in its marketing services, and traditional solutions often require hardware expenditure, license costs, and technical expertise. With AWS Lambda and Simple Email Service (SES) you can quite easily build an in-house serverless email platform that scales in line with your application.
9: Real-time log analysis
You could easily build a Lambda function to check log files from CloudTrail or CloudWatch. Amazon CloudWatch provides data and actionable insights to monitor your applications, respond to system-wide performance changes, and optimize resource utilization. AWS CloudTrail can be used to track all API calls made within your account. You can search the logs for specific events or log entries as they occur and be notified via SNS. You can also easily implement custom notification hooks to Slack, Zendesk, or other systems by calling their API endpoints within Lambda.
10: Building serverless chatbots
Building and running chatbots is not only time consuming but also expensive: developers must provision, run and scale the infrastructure that runs the chatbot code. With AWS Lambda, however, you can run a scalable chatbot architecture quite easily, without having to provision the hardware you would have needed outside the cloud.
This article originally appeared on: https://digitalcloud.training/
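As a minimal sketch of use case 1, the handler below could be wired to S3 object-created event notifications. The thumbnail generation itself (for example with an imaging library) is elided, and the `thumbnails/` prefix is hypothetical:

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    # Each S3 event notification can carry one or more records
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        data = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # ... generate the thumbnail from `data` here ...
        # NOTE: filter the event notification to exclude the thumbnails/
        # prefix, otherwise this write would re-trigger the function
        s3.put_object(
            Bucket=bucket,
            Key=f"thumbnails/{key}",
            Body=data,  # placeholder: would be the resized image bytes
        )
```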
Basics of Amazon Detective (Included in AWS SAA-C03 Exam)
Amazon Detective is integrated with Amazon GuardDuty, AWS Security Hub, and partner security products, from which you can easily navigate into Detective. You don’t have to organize any data or develop, configure, or tune queries and algorithms. There are no upfront costs, and customers pay only for the events analyzed, with no additional software to deploy or other feeds to subscribe to.
Testimonial: Passed SAA-C03!

Hi, just got the word, I passed the cert!
I mainly used Maarek’s videos for the initial learning, did Tutorials Dojo for the practice tests, and used Cantrill’s to touch up on areas where I lacked knowledge.
My next cert is prob gonna be SysOps. This time I plan to just use Cantrill’s videos, because I feel they helped me the most.
Source: r/awscertifications
Testimonial: Passed SAA-C03!

Today I got the notification that I am officially an AWS Certified Solutions Architect and I’m so happy!
I was nervous because I had been studying for the C02 version, but at the last minute I registered for the C03, thinking it was somehow “better” because it was more up to date (?). I didn’t know how different it would be, and the announcement that Stephane had yet to release an updated version for this exam made me even more anxious. But it turned out well!
I used Stephane’s Udemy course and the practice exams from Tutorials Dojo to help me study. I think the practice exams were the most useful as they helped me understand better how the questions would be presented.
Looking back now, I don’t think there was a major difference between C02 and C03, so if you are thinking that you haven’t studied specifically for C03, I wouldn’t worry too much.
My experience with the practice exams:
I found Stephane’s practice exams to be more challenging, and they really helped me fill the gaps. The options were very similar to each other, so guessing was not an option in Stephane’s exams.
With TD, the questions were worded correctly but the options were terrible: even if you don’t know the answer, you can guess it. Some options were as easy as (Which one of the options is a planet? # Sun # Earth # Cow # AWS), and that’s why I got 85% on the second test yet had to review every question, because I was scoring without really knowing the answers.
Things of note:
Use the keyboard shortcuts (e.g. Alt-N for next question). Over 65 questions, this will save at least 1-2 minutes.
Attempt every question on the first read; even if you flag it to come back to, make a go of it there and then. That way, if you time out, you’ve put in your first/gut-feel answer. More often than not, during review you won’t change it anyway.
Don’t get disheartened. There are 15 non-scoring questions so conceivably one could get 15 plus 12-14 more wrong and still hit 720+ and pass!
Look for the keywords and obvious wrong answers. Most of the time it will be a choice of 2 answers, with maybe a keyword to nail home the right one. I found a lot of keywords/points that made me think ‘yep – has to be that’.
Read the entire question and all of the answers, even if sure on the right answer, just in case…
Discover what works best for you in terms of learning. Some people are more suited to books, some are hands on/projects, some are audio/video etc. Finding your way helps make learning something new a lot easier.
If at home, test your machine the week before and then again the day before, and don’t reboot after that. Remove as much stress from the event as possible.
Source: r/awscertifications
SAA-C03 prep – Doubt about Policy Routing

I’m preparing for SAA-C03. When I get questions where I have to choose the correct routing policy, I always struggle with Latency, Geolocation and Geoproximity.
Especially with these kinds of scenarios:
1. Latency
I have users in the US and in Europe, and those in Europe have perf issues. You set up your application in Europe as well, and you pick which routing policy?
Obviously ;-P I selected Geolocation, because they are in Europe and I want them to use the EU instances!!! It should improve the latency as well 🙁 , or at least that seems logical to me, whereas with a Latency-based policy I cannot be sure that they will use my servers in Europe.
2. Geolocation and Geoproximity
I don’t have a specific case to show, but my understanding is that when I need to change the bias, I pick Geoproximity-based routing. The problem for me is understanding when a simple Geolocation policy is not enough (any tips?). Is it that Geolocation is used mainly for restricting content and for internationalization? For country/compliance-based restrictions, I understand that it is better to use CloudFront, so routing is not even an option in such cases…
Comments:
#1: Geolocation isn’t about performance; that’s a secondary effect, not its primary function.
Latency-based routing is there for a reason: to ensure the lowest latency. And latency (generally) is a good indicator of performance, especially for any applications which are latency sensitive.
Geolocation is more about delivering content from a localized server; it might be about data location, language, or local laws.
These are taken from my lessons on it: geolocation doesn’t return the ‘closest’ record. If you have a record tagged UK and one tagged France and you are in Germany, it won’t return either of those; it would match Germany, then Europe, then the default, etc.
The different routing types are pretty easy to understand once you think about them in the right way.
Testimonial: Passed my SAA-C03
I passed my Solutions Architect Associate test yesterday.
For those looking for guidance, I took Stephane’s Udemy course and took several practice exams.
In addition, I took the AWS readiness webinar and their practice tests.
AWS WAF & Shield
AWS WAF and AWS Shield help protect your AWS resources from web exploits and DDoS attacks.
AWS WAF is a web application firewall service that helps protect your web apps from common exploits that could affect app availability, compromise security, or consume excessive resources.
AWS Shield provides expanded DDoS attack protection for your AWS resources. Get 24/7 support from our DDoS response team and detailed visibility into DDoS events.
We’ll now go into more detail on each service.
AWS Web Application Firewall (WAF)
AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.
AWS WAF helps protect web applications from attacks by allowing you to configure rules that allow, block, or monitor (count) web requests based on conditions that you define.
These conditions include IP addresses, HTTP headers, HTTP body, URI strings, SQL injection and cross-site scripting.
AWS WAF can allow or block web requests based on strings that appear in the requests, using string match conditions.
For example, AWS WAF can match values in the following request parts:
- Header – A specified request header, for example, the User-Agent or Referer header.
- HTTP method – The HTTP method, which indicates the type of operation that the request is asking the origin to perform. CloudFront supports the following methods: DELETE, GET, HEAD, OPTIONS, PATCH, POST, and PUT.
- Query string – The part of a URL that appears after a ? character, if any.
- URI – The URI path of the request, which identifies the resource, for example, /images/daily-ad.jpg.
- Body – The part of a request that contains any additional data that you want to send to your web server as the HTTP request body, such as data from a form.
- Single query parameter (value only) – Any parameter that you have defined as part of the query string.
- All query parameters (values only) – As above, but inspects all parameters within the query string.
New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns.
When AWS services receive requests for web sites, the requests are forwarded to AWS WAF for inspection against defined rules.
Once a request meets a condition defined in the rules, AWS WAF instructs the underlying service to either block or allow the request based on the action you define.
With AWS WAF you pay only for what you use.
AWS WAF pricing is based on how many rules you deploy and how many web requests your web application receives.
There are no upfront commitments.
AWS WAF is tightly integrated with the Amazon CloudFront and Application Load Balancer (ALB) services.
When you use AWS WAF on Amazon CloudFront, rules run in all AWS Edge Locations, located around the world close to end users.
This means security doesn’t come at the expense of performance.
Blocked requests are stopped before they reach your web servers.
When you use AWS WAF on an Application Load Balancer, your rules run in region and can be used to protect internet-facing as well as internal load balancers.
Web Traffic Filtering
AWS WAF lets you create rules to filter web traffic based on conditions that include IP addresses, HTTP headers and body, or custom URIs.
This gives you an additional layer of protection from web attacks that attempt to exploit vulnerabilities in custom or third-party web applications.
In addition, AWS WAF makes it easy to create rules that block common web exploits like SQL injection and cross site scripting.
AWS WAF allows you to create a centralized set of rules that you can deploy across multiple websites.
This means that in an environment with many websites and web applications you can create a single set of rules that you can reuse across applications rather than recreating that rule on every application you want to protect.
Full feature API
AWS WAF can be completely administered via APIs.
This provides organizations with the ability to create and maintain rules automatically and incorporate them into the development and design process.
For example, a developer who has detailed knowledge of the web application could create a security rule as part of the deployment process.
This capability to incorporate security into your development process avoids the need for complex handoffs between application and security teams to make sure rules are kept up to date.
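For example, here is a hedged boto3 sketch (the name, description and CIDR range are hypothetical) that creates an IP set, which a WAF rule could then reference to block those addresses:

```python
import boto3

# WAFv2 scoped to a regional resource such as an Application Load Balancer
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_ip_set(
    Name="blocked-ips",
    Scope="REGIONAL",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.0/24"],
    Description="Addresses blocked after log analysis",
)
```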
AWS WAF can also be deployed and provisioned automatically with AWS CloudFormation sample templates that allow you to describe all security rules you would like to deploy for your web applications delivered by Amazon CloudFront.
AWS WAF is integrated with Amazon CloudFront, which supports custom origins outside of AWS – this means you can protect web sites not hosted in AWS.
Support for IPv6 allows the AWS WAF to inspect HTTP/S requests coming from both IPv6 and IPv4 addresses.
Real-time visibility
AWS WAF provides real-time metrics and captures raw requests that include details about IP addresses, geo locations, URIs, User-Agent and Referers.
AWS WAF is fully integrated with Amazon CloudWatch, making it easy to set up custom alarms when thresholds are exceeded or attacks occur.
This information provides valuable intelligence that can be used to create new rules to better protect applications.
AWS Shield
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS.
AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection.
There are two tiers of AWS Shield – Standard and Advanced.
AWS Shield Standard
All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge.
AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target web sites or applications.
When using AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks.
AWS Shield Advanced
Provides higher levels of protection against attacks targeting applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator and Amazon Route 53 resources.
In addition to the network and transport layer protections that come with Standard, AWS Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF, a web application firewall.
AWS Shield Advanced also gives you 24×7 access to the AWS DDoS Response Team (DRT) and protection against DDoS related spikes in your Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator and Amazon Route 53 charges.
AWS Shield Advanced is available globally on all Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 edge locations.
Origin servers can be Amazon S3, Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), or a custom server outside of AWS.
AWS Shield Advanced includes DDoS cost protection, a safeguard from scaling charges because of a DDoS attack that causes usage spikes on protected Amazon EC2, Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, or Amazon Route 53.
If any of the AWS Shield Advanced protected resources scale up in response to a DDoS attack, you can request credits via the regular AWS Support channel.
Source: https://digitalcloud.training/aws-waf-shield/
AWS Simple Workflow vs AWS Step Function vs Apache Airflow
There are a number of different services and products on the market which support building logic and processes within your application flow. While these services have largely similar pricing, there are different use cases for each service.
AWS Simple Workflow Service (SWF), AWS Step Functions and Apache Airflow all seem very similar, and at times it may be difficult to distinguish between them. This article highlights the similarities and differences, benefits, drawbacks, and use cases of these services, which are seeing growing demand.
What is AWS Simple Workflow Service?
The AWS Simple Workflow Service (SWF) allows you to coordinate work between distributed applications.
A task is an invocation of a logical step in an Amazon SWF application. Amazon SWF interacts with workers, which are programs that retrieve, process, and return tasks.
As part of coordinating tasks, SWF manages execution dependencies, scheduling, and concurrency.
What are AWS Step Functions?
AWS Step Functions enables you to coordinate distributed applications and microservices through visual workflows.
Your workflow is visualized as a state machine that describes the steps, their relationships, and their inputs and outputs. A state machine contains a number of states, each of which represents an individual step in the workflow diagram.
The states in your workflow can perform work, make choices, pass parameters, initiate parallel execution, manage timeouts, or terminate your workflow.
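A minimal sketch of such a state machine, defined in Amazon States Language and registered with boto3; the Lambda ARNs and IAM role are hypothetical:

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# A two-state machine: each Task state invokes a Lambda function, and the
# output of "Validate" becomes the input of "Process"
definition = {
    "StartAt": "Validate",
    "States": {
        "Validate": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Next": "Process",
        },
        "Process": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="order-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)
```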
What is Apache Airflow?
Firstly, Apache Airflow is a third-party tool, not an AWS service. Apache Airflow is an open-source workflow management platform for data engineering pipelines.
This powerful and widely-used open-source workflow management system (WMS) allows programmatic creation, scheduling, orchestration, and monitoring of data pipelines and workflows.
Using Airflow, you can author workflows as Directed Acyclic Graphs (DAGs) of tasks, and Apache Airflow can integrate with many AWS and non-AWS services such as: Amazon Glacier, Amazon CloudWatch Logs and Google Cloud Secret Manager.
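A minimal Airflow DAG sketch with two chained Python tasks; the task logic is hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from the source system")


def transform():
    print("run the transformation, e.g. submit a Spark job")


# Tasks declared inside the `with` block attach to the DAG automatically;
# `>>` declares the extract -> transform edge of the directed acyclic graph
with DAG(
    dag_id="daily_etl",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```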
Benefits and Drawbacks
Let’s have a closer look at the benefits and drawbacks of each service.
AWS Simple Workflows pros and cons:
AWS Step Functions pros and cons:
Apache Airflow pros and cons:
Use Cases
Here’s an overview of some use cases of each service.
Choose AWS Simple Workflow Service if you are building:
- Order management systems
- Multi-stage message processing systems
- Billing management systems
- Video encoding systems
- Image conversion systems
Choose AWS Step Functions if you need:
- Microservice Orchestration
- Security and IT Automation
- Data Processing and ETL Orchestration
- Media processing
Choose Apache Airflow for:
- ETL pipelines that extract data from multiple sources, and run Spark jobs or other data transformations
- Machine learning model training
- Automated generation of reports
- Backups and other DevOps tasks
Conclusion
Each of the services discussed has unique use cases and deployment considerations. It is always necessary to fully determine your solution requirements before you make a decision as to which service best fits your needs.
Source: https://www.linkedin.com/pulse/aws-simple-workflow-vs-step-functions-apache-airflow-neal-davis/
For further reading, visit: https://digitalcloud.training/aws-application-integration-services/
What does AWS mean by cost optimization?
There are many things that AWS actively tries to help you with, and cost optimization is one of them. Simply defined, cost optimization comes down to reducing your cloud spend in specific areas without impacting the efficacy of your architecture and how it functions. Cost optimization is one of the pillars of the Well-Architected Framework, and we can use it to move towards a more streamlined and cost-efficient workload.
AWS Well-Architected Framework enables cloud architects to build fast, reliable, and secure infrastructures for a wide variety of workloads and applications. It is built around six pillars:
- Operational excellence
- Security
- Reliability
- Performance efficiency
- Cost optimization
- Sustainability
The Well-Architected Framework provides customers and partners with a consistent approach for evaluating architectures and implementing scalable designs on AWS. It is applicable for use whether you are a burgeoning start-up or an enterprise corporation using the AWS Cloud.
In this article however, we are going to focus on exactly what is cost optimization, explore some key principles of how it is defined and demonstrate some use cases as to how it could help you when architecting your own AWS Solutions.
What is Cost Optimization?
Besides being one of the pillars on the Well Architected framework, Cost Optimization is a broad, yet simple term and is defined by AWS as follows:
“The Cost Optimization pillar includes the ability to run systems to deliver business value at the lowest price point.”
It provides a comprehensive overview of the general design principles, best practices, and questions related to cost optimization. Once understood, it can have a massive impact on how you are launching your various applications on AWS.
As well as a definition of what Cost Optimization is, there are some key design principles, which we’ll explore to make sure we are on the right track with enhancing our workloads:
Implement Cloud Financial Management
Cloud financial management is essential for achieving financial success and maximizing the value of your cloud investment. As your organization moves into this new era of technology and usage management, you need to devote resources and time to developing capability in this new area. Just as with security or operational excellence, if you want to become a cost-efficient organization, you will have to build capability through knowledge building, programs, resources, and processes.
Adopt a consumption model
If you want to save money on computing resources, it is important to pay only for what you require, and to increase or decrease usage based on the needs of the business, without relying on elaborate forecasting.
Measure overall efficiency
It is important to measure the business output of a workload as well as the costs associated with delivering it. You can use this measure to understand the gains you will make from increasing output and reducing costs. Efficiency doesn’t have to be only financially worthwhile: it can also keep any one server from becoming under- or over-utilized, which helps from a performance standpoint as well.
Stop spending money on unnecessary activities
When it comes to data center operations, AWS handles everything from racks and stacks to powering the servers. By utilizing managed services, you can also remove the operational burden of managing operating systems and applications. The advantage of this approach is that you can focus on your customers and your business projects rather than on IT infrastructure.
Analyze and attribute expenditure
The cloud allows for easy identification of the usage and cost of systems, which in turn allows transparent attribution of IT costs to individual workload owners. This helps workload owners measure their return on investment (ROI), reduce their costs, and optimize their resources.
Now that we fully understand what we mean when we say ‘Cost Optimization on AWS’, we are going to show some ways that we can use cost optimization principles in order to improve the overall financial performance of our workloads on Amazon S3, and Amazon EC2:
Cost optimization on S3
Amazon S3 is an object-storage service which provides 11 Nines of Durability, and near infinite, low-cost object storage. There are a number of ways to even further optimize your costs, and ensure you are adhering to the Cost Optimization pillar of the Well Architected Framework.
S3 Intelligent Tiering
Amazon S3 Intelligent-Tiering is a storage class designed to optimize storage costs by automatically moving data to the most cost-effective access tier as usage patterns change over time. For a small monthly monitoring fee, Intelligent-Tiering watches access patterns and automatically moves objects that have not been accessed to lower-cost access tiers, while still providing low-latency, high-throughput access. The storage class can also automatically archive data that can be accessed asynchronously.
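A hedged boto3 sketch of a lifecycle rule that transitions objects in a hypothetical bucket to Intelligent-Tiering after 30 days:

```python
import boto3

s3 = boto3.client("s3")

# Move all objects in the bucket into Intelligent-Tiering 30 days after creation
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix = whole bucket
                "Transitions": [
                    {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```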
S3 Storage Class Analysis
Amazon S3 Storage Class Analysis analyzes storage access patterns to help you decide when to transition the right data to the right storage class. This relatively new Amazon S3 analytics feature monitors your data access patterns and tells you when data should be moved to a lower-cost storage class based on how frequently it is accessed.
Cost optimization on EC2
Amazon EC2 is simply a virtual machine in the cloud that can be scaled up or down dynamically as your application’s needs change. There are a number of ways you can optimize your spend on EC2 depending on your use case, whilst still delivering excellent performance.
Savings Plans
In exchange for a commitment to a specific instance family within the AWS Region (for example, C7 in US-West-2), EC2 Instance Savings Plans offer savings of up to 72 percent off on-demand.
EC2 Instance Savings Plans allow you to switch between instance sizes within the family (for example, from c5.xlarge to c5.2xlarge) or operating systems (such as from Windows to Linux), or change from Dedicated to Default tenancy, while continuing to receive the discounted rate.
If you are using large amounts of particular EC2 instances, buying a Savings Plan allows you to flexibly save money on your compute spend.
Right-sizing EC2 Instances
Right-sizing is about matching instance types and sizes to your workload performance and capacity needs at the lowest possible cost. Furthermore, it involves analyzing deployed instances and identifying opportunities to eliminate or downsize them without compromising capacity or other requirements.
The Amazon EC2 service offers a variety of instance types tailored to fit the needs of different users. There are a number of instance types that offer different combinations of resources such as CPU, memory, storage, and networking, so that you can choose the right resource mix for your application.
You can use Trusted Advisor to get recommendations on which particular EC2 instances are running at low utilization. This takes a lot of undifferentiated heavy lifting out of your hands, as AWS tells you the exact instances you need to re-size.
Using Spot Capacity where possible
Spot capacity is spare capacity that AWS has within its data centers, which it provides to you at a large discount (up to 90%). The downside is that AWS can reclaim this capacity when it is needed for On-Demand customers: you are given a 2-minute warning, after which your instances are terminated.
Applications requiring online availability are not well suited to spot instances. The use of Spot Instances is recommended for stateless, fault-tolerant, and flexible applications. A Spot Instance can be used for big data, containerized workloads, continuous integration and delivery (CI/CD), stateless web servers, high performance computing (HPC), and rendering workloads, as well as anything else which can be interrupted and requires low cost.
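As a sketch, an interruption-tolerant worker can be launched on Spot capacity with a single extra parameter on run_instances; the AMI ID and instance type are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch one interruption-tolerant worker on Spot capacity
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
```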
There are many considerations when it comes to optimizing cost on AWS, and the Cost Optimization pillar provides us all of the resources we need to be fully enabled in our AWS journey.
Source: This article originally appeared on: https://digitalcloud.training/what-does-aws-mean-by-cost-optimization/
AWS Amplify
AWS Amplify is a set of tools and services that enables mobile and front-end web developers to build secure, scalable full stack applications powered by AWS. Amplify includes an open-source framework with use-case-centric libraries and a powerful toolchain to create and add cloud-based features to your application, and a web-hosting service to deploy static web applications.
AWS SAM
AWS SAM is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. With just a few lines per resource, you can define the application you want and model it using YAML. During deployment, AWS SAM transforms and expands the AWS SAM syntax into AWS CloudFormation syntax, enabling you to build serverless applications faster.
Amazon Cognito
Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0.
Vue Javascript Framework
Vue JavaScript framework is a progressive framework for building user interfaces. Unlike other monolithic frameworks, Vue is designed to be incrementally adoptable. The core library focuses on the view layer only and is easy to pick up and integrate with other libraries or existing projects. Vue is also perfectly capable of powering sophisticated single-page applications when used in combination with modern tooling and supporting libraries.
AWS Cloud9
AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal. AWS Cloud9 makes it easy to write, run, and debug serverless applications. It pre-configures the development environment with all the SDKs, libraries, and plugins needed for serverless development.
Swagger API
Swagger API is an open-source software framework backed by a large ecosystem of tools that help developers design, build, document, and consume RESTful web services. Swagger also allows you to understand and test your backend API specifically.
Amazon DynamoDB
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-Region, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second.
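A minimal boto3 sketch of writing and reading an item; the table name and key schema are hypothetical, and the table is assumed to already exist:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")

# Write an item keyed by order_id, then read it back
table.put_item(Item={"order_id": "1001", "status": "PLACED", "total": 42})
item = table.get_item(Key={"order_id": "1001"})["Item"]
print(item["status"])
```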
Amazon EventBridge
Amazon EventBridge makes it easy to build event-driven applications because it takes care of event ingestion, delivery, security, authorization, and error handling for you. To achieve the promises of serverless technologies with event-driven architecture, such as being able to individually scale, operate, and evolve each service, the communication between services must happen in a loosely coupled and reliable way. Event-driven architecture is a fundamental approach for integrating independent systems, or for building a set of loosely coupled systems that can operate, scale, and evolve independently and flexibly.
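A minimal sketch of publishing a custom event to the default event bus; the source and detail-type values are hypothetical, and a rule matching them would route the event to its targets:

```python
import json

import boto3

events = boto3.client("events")

# Publish a custom application event; subscribers are decoupled from us
events.put_events(
    Entries=[
        {
            "Source": "myapp.orders",
            "DetailType": "OrderPlaced",
            "Detail": json.dumps({"order_id": "1001", "total": 42}),
        }
    ]
)
```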
Amazon DynamoDB Streams
Amazon DynamoDB Streams is an ordered flow of information about changes to items in a DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.
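A stream is typically consumed by a Lambda function. This minimal sketch assumes the documented stream event shape and a NEW_AND_OLD_IMAGES stream view; the handler name and output are illustrative:

```python
# A minimal Lambda handler consuming a DynamoDB stream.
def lambda_handler(event, context):
    for record in event["Records"]:
        if record["eventName"] == "MODIFY":
            # NEW_AND_OLD_IMAGES stream view includes both item images
            old = record["dynamodb"].get("OldImage", {})
            new = record["dynamodb"].get("NewImage", {})
            print(f"Item changed: {old} -> {new}")
```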
AWS Step Functions
AWS Step Functions is a serverless function orchestrator that makes it easy to sequence Lambda functions and multiple AWS services into business-critical applications. Through its visual interface, you can create and run a series of checkpointed and event-driven workflows that maintain the application state. The output of one step acts as input to the next, and each step in your application runs in order, as defined by your business logic. Orchestrating a series of individual serverless applications, managing retries, and debugging failures can be challenging, and as your distributed applications become more complex, the complexity of managing them also grows. With its built-in operational controls, Step Functions manages sequencing, error handling, retry logic, and state, removing a significant operational burden from your team.
When your processing requires a series of steps, use Step Functions to build a state machine to orchestrate the workflow. This lets you keep your Lambda functions focused on business logic.
Returning to the baker in our analogy, when an order to make a pie comes in, the order is actually a series of related but distinct steps. Some steps have to be done first or in sequence, and some can be done in parallel. Some take longer than others. Someone with expertise in each step performs that step. To make things go smoothly and let the experts stick to their expertise, you need a way to manage the flow of steps and keep whoever needs to know informed of the status.
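To make the analogy concrete, here is a hedged sketch of a tiny two-step state machine created with boto3; the Lambda ARNs, role ARN, and step names are placeholders, not an actual workflow from this course:

```python
# Define a small Amazon States Language workflow and create it with boto3.
import json
import boto3

definition = {
    "Comment": "Bake-a-pie workflow: sequential steps, state passed along",
    "StartAt": "MixIngredients",
    "States": {
        "MixIngredients": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:mix",  # placeholder
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "Next": "Bake",
        },
        "Bake": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:bake",  # placeholder
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="bake-a-pie",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/step-functions-role",  # placeholder
)
```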
- AWS Step Functions: https://aws.amazon.com/step-functions/
- AWS Step Functions Resources with links to documentation, whitepapers, tutorials, and webinars
- AWS Step Functions Developer Guide
- AWS Step Functions developer guide: States
- AWS Step Functions developer guide: Error Handling in Step Functions
- AWS Step Functions developer guide: Intrinsic Functions
- AWS Step Functions developer guide: Service Integrations
- States Language Specification
- AWS Lambda developer guide: Orchestration Examples with AWS Step Functions
Amazon Simple Notification Service (Amazon SNS)
Amazon Simple Notification Service (Amazon SNS) is a fully managed messaging service for both system-to-system and app-to-person (A2P) communication. The service enables you to communicate between systems through publish/subscribe (pub/sub) patterns that enable messaging between decoupled microservice applications or to communicate directly to users via SMS, mobile push, and email. The system-to-system pub/sub functionality provides topics for high-throughput, push-based, many-to-many messaging. Using Amazon SNS topics, your publisher systems can fan out messages to a large number of subscriber systems or customer endpoints including Amazon Simple Queue Service (Amazon SQS) queues, Lambda functions, and HTTP/S, for parallel processing. The A2P messaging functionality enables you to send messages to users at scale using either a pub/sub pattern or direct-publish messages using a single API.
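A minimal sketch of the pub/sub fan-out pattern with boto3; the topic ARN is a placeholder, and every subscribed SQS queue, Lambda function, or HTTP/S endpoint would receive a copy of the message:

```python
# Publish one message to an SNS topic; SNS fans it out to all subscribers.
import boto3

sns = boto3.client("sns")

sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:order-events",  # placeholder
    Subject="OrderPlaced",
    Message='{"order_id": "1001", "pie": "apple"}',
)
```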
Three pillars of observability
Observability extends traditional monitoring with approaches that address the kinds of questions you want to answer about your applications. Business metrics are sometimes an afterthought, only coming into play when someone in the business asks the question, and you have to figure out how to get the answers from the data you have. If you build in these needs when you’re building the application, you’ll have much more visibility into what’s happening within your application.
Logs, metrics, and distributed tracing are often known as the three pillars of observability. These are powerful tools that, if well understood, can unlock the ability to build better systems.
Logs provide valuable insights into how you measure your application health. Event logs are especially helpful in uncovering growing and unpredictable behaviors that components of a distributed system exhibit. Logs come in three forms: plaintext, structured, and binary.
Metrics are a numeric representation of data measured over intervals of time about the performance of your systems. You can configure and receive automatic alerts when certain metrics are met.
Tracing can provide visibility into both the path that a request traverses and the structure of a request. An event-driven or microservices architecture consists of many different distributed parts that must be monitored. Imagine a complex system consisting of multiple microservices, and an error occurs in one of the services in the call chain. Even if every microservice is logging properly and logs are consolidated in a central system, it can be difficult to find all relevant log messages.
AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application and shows a map of your application’s underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.
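As a rough sketch of instrumentation, the AWS X-Ray SDK for Python (aws-xray-sdk) can auto-patch supported libraries and record custom subsegments; the function name and logic below are hypothetical:

```python
# Instrument a Lambda-style function with the X-Ray SDK for Python.
from aws_xray_sdk.core import xray_recorder, patch_all

patch_all()  # auto-trace downstream calls made via boto3, requests, etc.

@xray_recorder.capture("process_order")  # records a subsegment for this work
def process_order(order_id):
    # ... business logic; calls made here appear on the X-Ray service map
    return {"order_id": order_id, "status": "DONE"}
```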
Amazon CloudWatch Logs Insights is a fully managed service that is designed to work at cloud scale with no setup or maintenance required. The service analyzes massive logs in seconds and gives you fast, interactive queries and visualizations. CloudWatch Logs Insights can handle any log format and autodiscovers fields from JSON logs.
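A hedged sketch of running a Logs Insights query from boto3; the log group name and query string are illustrative:

```python
# Run a CloudWatch Logs Insights query and poll for its results.
import time
import boto3

logs = boto3.client("logs")

query = logs.start_query(
    logGroupName="/aws/lambda/process-order",  # hypothetical log group
    startTime=int(time.time()) - 3600,         # last hour
    endTime=int(time.time()),
    queryString="fields @timestamp, @message | filter @message like /ERROR/ | limit 20",
)

# Poll until the query completes, then print matching log lines.
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] == "Complete":
        break
    time.sleep(1)
for row in results["results"]:
    print(row)
```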
Amazon CloudWatch ServiceLens is a feature that enables you to visualize and analyze the health, performance, and availability of your applications in a single place. CloudWatch ServiceLens ties together CloudWatch metrics and logs, as well as traces from X-Ray, to give you a complete view of your applications and their dependencies. This enables you to quickly pinpoint performance bottlenecks, isolate root causes of application issues, and determine impacted users.
Characteristics of modern applications that challenge traditional approaches
AWS services that address the three pillars of observability
CloudWatch Logs
- Amazon CloudWatch: https://aws.amazon.com/cloudwatch/
- Amazon CloudWatch Logs User Guide
- Amazon CloudWatch Logs user guide: Analyzing Log Data with CloudWatch Logs Insights
- AWS Lambda developer guide: Accessing Amazon CloudWatch Logs for AWS Lambda
AWS X-Ray
- AWS X-Ray: https://aws.amazon.com/xray/
- AWS X-Ray Resources with links to documentation, webinars, and blog posts
- AWS X-Ray Developer Guide
- AWS X-Ray developer guide: Integrating AWS X-Ray with Other AWS services
- AWS Lambda developer guide: Using AWS Lambda with AWS X-Ray
- Amazon API Gateway developer guide: Tracing User Requests to REST APIs Using X-Ray
CloudWatch metrics
Three types of API Gateway authorizers for HTTP APIs
- JWT authorizer
- Amazon Cognito user pools
- IAM permissions
Three types of JSON Web Tokens (JWTs) used by Amazon Cognito
- ID token
- Access token
- Refresh token
Three things Lambda does for you when polling a stream
- Polls the stream or queue for new records
- Batches records according to your batch size and batching window
- Invokes your function synchronously with each batch and tracks the position it has processed
What is a Security Group?
In the world of Cloud Computing, Security is always job zero. This means that we design everything with Security in mind – at every single layer of our application! While you may have heard about AWS Security Groups – have you ever stopped to think about what a security group is, and what it actually does?
If, for example, you are launching a web server to host a brand-new website on AWS, you will have to allow certain protocols to initiate communication with your web server (and block others) in order for users to interact with your website. On the other hand, if you give everyone access to your server using any protocol, you may be leaving sensitive information easily reachable by anyone else on the internet, ruining your security posture.
The balance of allowing this kind of access is done using a specific technology in AWS, and today we are going to explore how Security Groups work, and what problems they help you solve.
What is a Security Group?
Security groups control traffic reaching and leaving the resources they are associated with according to the security group rules set by each group. After you associate a security group with an EC2 instance, it controls the instance’s inbound and outbound traffic.
Although VPCs come with a default security group when you create them, additional security groups can be created for any VPC within your account.
Security groups can only be associated with resources in the VPC for which they were created, and do not apply to resources in different VPCs.
Each security group has rules for controlling traffic based on protocols and ports. There are separate rules for inbound and outbound traffic.
Let’s have a look at what a security group looks like.
As stated earlier, Security Groups control inbound and outbound traffic in relation to resources placed in these security groups. Below are some example rules that you would see routinely when interacting with security groups for a Web Server.
Inbound: for example, allow HTTP (TCP port 80) and HTTPS (TCP port 443) from 0.0.0.0/0 so users can reach the website.
Outbound: for example, allow all traffic, which is the default for a new security group.
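As a minimal sketch of setting this up programmatically (the VPC ID below is a placeholder), creating a web-server security group and opening HTTP/HTTPS inbound with boto3 might look like this:

```python
# Create a security group and allow HTTP/HTTPS inbound from anywhere.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="web-server-sg",
    Description="Allow HTTP/HTTPS from the internet",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)
# Outbound rules are not added here: new groups allow all outbound by default.
```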
Security Groups can also be used for the Relational Database Service, and for Amazon Elasticache to control traffic in a similar way.
Security Group Quotas
There is a limit on the number of Security Groups you can have within a Region, and a limit on the number of outbound and inbound rules you can have per security group.
For the number of Security Groups within a Region, you can have 2500 Security Groups per Region by default. This quota applies to individual AWS account VPCs and shared VPCs, and is adjustable through launching a support ticket with AWS Support.
Regarding the number of inbound and outbound rules per Security Group, you can have 60 inbound and 60 outbound rules per security group (making a total of 120 rules). IPv4 and IPv6 rules are counted separately: a security group can have 60 inbound IPv4 rules and another 60 inbound IPv6 rules at the same time.
Both the rules-per-group and groups-per-Region quotas can be increased with a quota change request; however, the rules-per-group quota multiplied by the security-groups-per-network-interface quota cannot exceed 1,000.
Best Practices with Security Groups
When we are inevitably using Security Groups as part of our infrastructure, we can use some best practices to ensure that we are aligning ourselves with the highest security standards possible.
- Ensure your Security Groups do not have a large range of ports open
When large port ranges are open, instances are vulnerable to unwanted attacks. Furthermore, they make it very difficult to trace vulnerabilities. Web servers may only require 80 and 443 ports to be open, and not any more.
- Create new security groups and restrict traffic appropriately
If you are using the default AWS security group for your active resources, you are going to unnecessarily expose your instances and applications, weakening your security posture.
- Where possible, restrict access to required IP address(es) and by port, even internally within your organization
If you are allowing all access (0.0.0.0/0 or ::/0) to your resources, you are asking for trouble. Where possible, restrict access to your resources to an individual IP address or range of addresses. This prevents bad actors from reaching your instances and strengthens your security posture.
- Chain Security Groups together
When chaining security groups, the inbound and outbound rules are set up so that traffic can only flow from the top tier to the bottom tier and back up again. The security groups act as firewalls, so a security breach in one tier does not automatically give the compromised client subnet-wide access to all resources.
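A hedged sketch of chaining: the app tier accepts traffic only if its source is the web tier's security group, rather than a CIDR range. The group IDs and port are placeholders:

```python
# App-tier ingress rule that references the web-tier security group as source.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-apptier000000000",  # placeholder app-tier group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": "sg-webtier000000000"}],  # web tier as source
    }],
)
```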
By Neal Davis
AWS Front-End Web and Mobile
The AWS Front-End Web and Mobile services support development workflows for native iOS/Android, React Native, and JavaScript developers. You can develop apps and deliver, test, and monitor them using managed AWS services.
AWS AppSync Features

AWS AppSync is a fully managed service that makes it easy to develop GraphQL APIs.
Securely connects to data sources like Amazon DynamoDB, AWS Lambda, and more.
Add caches to improve performance, subscriptions to support real-time updates, and client-side data stores that keep offline clients in sync.
AWS AppSync automatically scales your GraphQL API execution engine up and down to meet API request volumes.
GraphQL
AWS AppSync uses GraphQL, a data language that enables client apps to fetch, change and subscribe to data from servers.
In a GraphQL query, the client specifies how the data is to be structured when it is returned by the server.
This makes it possible for the client to query only for the data it needs, in the format that it needs it in.
GraphQL also includes a feature called “introspection” which lets new developers on a project discover the data available without requiring knowledge of the backend.
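As a rough illustration of a client-shaped query, here is a sketch of calling an AppSync GraphQL endpoint from Python with API-key auth; the endpoint URL, API key, and schema fields are all hypothetical:

```python
# POST a GraphQL query to an AppSync endpoint; the client picks the fields.
import requests

APPSYNC_URL = "https://example1234.appsync-api.us-east-1.amazonaws.com/graphql"  # placeholder
API_KEY = "da2-examplekey"  # placeholder API key

query = """
query GetOrder($id: ID!) {
  getOrder(id: $id) { id status pie }
}
"""

resp = requests.post(
    APPSYNC_URL,
    headers={"x-api-key": API_KEY},
    json={"query": query, "variables": {"id": "1001"}},
)
print(resp.json())  # only the requested fields come back
```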
Real-time data access and updates
AWS AppSync lets you specify which portions of your data should be available in a real-time manner using GraphQL Subscriptions.
GraphQL Subscriptions are simple statements in the application code that tell the service what data should be updated in real-time.
Offline data synchronization
The Amplify DataStore provides a queryable on-device DataStore for web, mobile and IoT developers.
When combined with AWS AppSync the DataStore can leverage advanced versioning, conflict detection and resolution in the cloud.
This allows automatic merging of data from different clients as well as providing data consistency and integrity.
Data querying, filtering, and search in apps
AWS AppSync gives client applications the ability to specify data requirements with GraphQL so that only the needed data is fetched, allowing for both server and client filtering.
AWS AppSync supports AWS Lambda, Amazon DynamoDB, and Amazon Elasticsearch.
GraphQL operations can be simple lookups, complex queries & mappings, full text searches, fuzzy/keyword searches, or geo lookups.
Server-Side Caching
AWS AppSync’s server-side data caching capabilities reduce the need to directly access data sources.
Data is delivered at low latency using high speed in-memory managed caches.
AppSync is fully managed and eliminates the operational overhead of managing cache clusters.
Provides the flexibility to selectively cache data fields and operations defined in the GraphQL schema with customizable expiration.
Security and Access Control
AWS AppSync allows several levels of data access and authorization depending on the needs of an application.
Simple access can be protected by a key.
AWS IAM roles can be used for more restrictive access control.
AWS AppSync also integrates with:
- Amazon Cognito User Pools for email and password functionality
- Social providers (Facebook, Google+, and Login with Amazon).
- Enterprise federation with SAML.
Customers can use the Group functionality for logical organization of users and roles as well as OAuth features for application access.
Custom Domain Names
AWS AppSync enables customers to use custom domain names with their AWS AppSync API to access their GraphQL endpoint and real-time endpoint.
Custom domain names are used with AWS Certificate Manager (ACM) certificates.
A custom domain name can be associated with any available AppSync API in your account.
When AppSync receives a request on the custom domain endpoint, it routes it to the associated API for handling.
Source: https://digitalcloud.training/aws-front-end-web-and-mobile/ (Neal Davis)
Serverless Application Security
Cloud security best practices are serverless best practices. These include applying the principle of least privilege, securing data in transit and at rest, writing code that is security-aware, and monitoring and auditing actively.
Apply a defense in depth approach to your serverless application security.
OWASP Top 10 Security Threats:
- Injection (code)
- Broken authentication (identity and access)
- Sensitive data exposure (data)
- XML external entities (XXE) (code)
- Broken access control (identity and access)
- Security misconfiguration (logging and monitoring)
- Cross-site scripting (XSS) (code)
- Insecure deserialization (code)
- Using components with known vulnerabilities (code and infrastructure)
- Insufficient logging and monitoring (logging and monitoring)
Security design principles in serverless applications:
- Apply security at all layers
- Implement strong identity and access controls
- Protect data in transit and at rest
- Protect against attacks
- Minimize attack surface area
- Mitigate distributed denial of service (DDoS) attack impacts
- Implement inspection and protection
- Enable auditing and traceability
- Automate security best practices
Three general approaches to protecting against attacks
Handling Scale in Serverless Applications
Thinking serverless at scale means knowing the quotas of the services you are using and focusing on scaling trade-offs and optimizations among those services to find the balance that makes the most sense for your workload.
As your solutions evolve and your usage patterns become clearer, you should continue to find ways to optimize performance and costs and make the trade-offs that best support the workload you need rather than trying to scale infinitely on all components. Don’t expect to get it perfect on the first deployment. Build in the kind of monitoring and observability that will help you understand what’s happening, and be prepared to tweak things that make sense for the access patterns that happen in production.
Lambda Power Tuning helps you understand the optimal memory to allocate to functions.
You can specify whether you want to optimize on cost, performance, or a balance of the two.
Under the hood, a Step Functions state machine invokes the function you’ve specified at different memory settings from 128 MB to 3 GB and captures both duration and cost values.
Let’s take a look at Lambda Power Tuning in action with a function I’ve written.
The function I have determines the hash value of a lot of numbers. Computationally, it’s expensive. I’d like to know whether I should be allocating 1 GB, 1.5 GB, or 3 GB of RAM to it.
I can specify the memory values to test in the file deploy.sh. In my example, I'm only using 1 GB, 1.5 GB, and 3 GB. The state machine takes the following parameters (you define these in sample-execution-input.json):
- Lambda ARN
- Number of invocations for each memory configuration
- Static payload to pass to the Lambda function for each invocation
- Parallel invocation: Whether all invocations should be in parallel or not. Depending on the value, you may experience throttling.
- Strategy: Can be cost, speed, or balanced. Default is cost.
If you specify cost, it will report the cheapest option regardless of performance. Speed will suggest the fastest regardless of cost. Balanced will choose a compromise according to balancedWeight, a number between 0 and 1, where 0 is equivalent to the speed strategy and 1 to the cost strategy.
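For orientation, an execution input along the lines described above might look like the following sketch, written as a Python dict (the tool itself takes JSON); the ARN and values are placeholders, and the project README remains the authoritative schema:

```python
# Hypothetical Lambda Power Tuning execution input.
execution_input = {
    "lambdaARN": "arn:aws:lambda:us-east-1:123456789012:function:hash-numbers",  # placeholder
    "num": 10,                     # invocations per memory configuration
    "payload": {"count": 100000},  # static payload passed on each invocation
    "parallelInvocation": True,    # may cause throttling at high num values
    "strategy": "speed",           # "cost", "speed", or "balanced"
}
```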
Let’s take a look at the inputs I’ve specified and find out how much memory we should allocate.
In this configuration, I’m specifying that I want this function to execute as quickly as possible.
Results.power shows that 3 GB provides the best performance.
Let’s update my configuration to use the default strategy of cost and run again.
Results.power shows that 1 GB is the best option for price.
Use this tool to help you evaluate how to configure your Lambda functions.
How API Gateway responds to a burst of requests
Automating the Deployment Pipeline
Automation is especially important with serverless applications. Lots of distributed services that can be independently deployed mean more, smaller deployment pipelines that each build and test a service or set of services. With an automated pipeline, you can incorporate better detection of anomalies and more testing, halt your pipeline at a certain step, and automatically roll back a change if a deployment were to fail or if an alarm threshold is triggered.
Your pipeline may be a mix and match of AWS or third-party components that suit your needs, but the concepts apply generally to whatever tools your organization uses for each of these steps in the deployment tool chain. This module will reference the AWS tools that you can use in each step in your CI/CD pipeline.
CI/CD best practices
Configure testing using safe deployments in AWS SAM:
- Declare an AutoPublishAlias
- Set safe deployment type
- Set a list of up to 10 alarms that will trigger a rollback
- Configure a Lambda function to run pre- and post-deployment tests
Use traffic shifting with pre- and post-deployment hooks
- PreTraffic: When the application is deployed, the PreTraffic Lambda function runs to determine if things should continue. If that function completes successfully (i.e., returns a 200 status code), the deployment continues. If the function does not complete successfully, the deployment rolls back.
- PostTraffic: If the traffic successfully completes the traffic shifting progression to 100 percent of traffic to the new alias, the PostTraffic Lambda function runs. If it returns a 200 status code, the deployment is complete. If the PostTraffic function is not successful, the deployment is rolled back.
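A hedged sketch of what such a hook function might look like: it smoke-tests the new version, then reports Succeeded or Failed back to CodeDeploy so the deployment continues or rolls back. The test routine is a hypothetical stub:

```python
# A PreTraffic-style test hook for a CodeDeploy-managed Lambda deployment.
import boto3

codedeploy = boto3.client("codedeploy")

def run_smoke_tests():
    """Placeholder: invoke the new version and assert on the response."""
    pass

def lambda_handler(event, context):
    status = "Succeeded"
    try:
        run_smoke_tests()
    except Exception:
        status = "Failed"
    # Tell CodeDeploy whether to continue or roll back the deployment.
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=event["DeploymentId"],
        lifecycleEventHookExecutionId=event["LifecycleEventHookExecutionId"],
        status=status,
    )
```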
Use separate account per environment
It’s a best practice with serverless to use separate accounts for each stage or environment in your deployment. Each developer has an account, and the staging and deployment environments are each in their own accounts.
This approach limits the blast radius of issues that occur (for example, unexpectedly high concurrency) and allows you to secure each account with IAM credentials more effectively with less complexity in your IAM policies within a given account. It also makes it less complex to differentiate which resources are associated with each environment.
Because of the way costs are calculated with serverless, spinning up additional environments doesn’t add much to your cost. Other than where you are provisioning concurrency or database capacity, the cost of running tests in three environments is not different than running them in one environment because it’s mostly about the total number of transactions that occur, not about having three sets of infrastructure.
Use one AWS SAM template with parameters across environments
As noted earlier, AWS SAM supports CloudFormation syntax so that your AWS SAM template can be the same for each deployment environment with dynamic data for the environment provided when the stack is created or updated. This helps you ensure that you have parity between all testing environments and aren’t surprised by configurations or resources that are different or missing from one environment to the next.
AWS SAM lets you build out multiple environments using the same template, even across accounts:
- Use parameters and mappings when possible to build dynamic templates based on user inputs and pseudo parameters, such as AWS::Region
- Use the Globals section to simplify templates
- Use Export and Fn::ImportValue to share resource information across stacks
Manage secrets across environments with Parameter Store:
AWS Systems Manager Parameter Store supports encrypted values and is account specific, accessible through AWS SAM templates at deployment, and accessible from code at runtime.
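A minimal sketch of reading an encrypted value at runtime with boto3; the parameter name follows a hypothetical per-environment naming convention:

```python
# Fetch a SecureString parameter from SSM Parameter Store at runtime.
import boto3

ssm = boto3.client("ssm")

param = ssm.get_parameter(
    Name="/myapp/staging/db-password",  # hypothetical per-environment name
    WithDecryption=True,                # decrypt the SecureString via KMS
)
db_password = param["Parameter"]["Value"]
```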
Testing throughout the pipeline
Another best practice is to test throughout the pipeline. Assuming these steps in a pipeline – build, deploy to test environment, deploy to staging environment, and deploy to production – drag the type of test to the pipeline step where you would perform the tests before allowing the next step in deployment to continue.
Automated deployments
- Serverless Developer Tools page
- Tutorial: Deploy an Updated Lambda Function with CodeDeploy and the AWS Serverless Application Model
- Whitepaper: Practicing Continuous Integration and Continuous Delivery on AWS: Accelerating Software Delivery with DevOps
- Quick Start: Serverless CI/CD for the Enterprise on AWS
- AWS re:Invent 2019: CI/CD for Serverless Applications
- AWS CodeDeploy user guide: AppSpec ‘hooks’ section for an AWS Lambda deployment
Deploying serverless applications
- AWS Serverless Application Model Developer Guide, Deploying Serverless Applications Gradually
Serverless Deployment Quiz1:
Which of the following are best practices you should implement into ongoing deployments of your application? (Select THREE.)
A. Test throughout the pipeline
B. Create account-specific AWS SAM templates for each environment
C. Use traffic shifting with pre- and post-deployment hooks
D. Use an AutoPublish alias
E. Use stage variables to manage secrets across environments
Serverless Deployment Quiz2:
You are reviewing the team’s plan for managing the application’s deployment. Which suggestions would you agree with? (Select TWO.)
A. Use IAM to control development and production access within one AWS account to separate development code from production code
B. Use AWS SAM CLI for local development testing
C. Use CloudFormation to write all of the infrastructure as code for deploying the application
D. Use Amplify to deploy the user interface and AWS SAM to deploy the serverless backend
Scaling considerations for serverless applications
True statements:
- Using HTTP APIs and first-class service integrations can reduce end-to-end latency because it lets you connect the API call directly to a service API rather than requiring a Lambda function between API Gateway and the other AWS service.
- Provisioned concurrency may be less expensive than on-demand in some cases. If your provisioned concurrency is used more than 60 percent during a given time period, then it will probably be less expensive to use provisioned concurrency or a combination of on-demand and provisioned concurrency.
- With Amazon SQS as an event source, Lambda will manage concurrency. Lambda will increase concurrency when the queue depth is increasing, and decrease concurrency when errors are being returned.
- You can set a batch window to increase the time before Lambda polls a stream or queue. This lets you reduce costs by avoiding regularly invoking the function with a small number of records if you have a relatively low volume of incoming records.
False statements (with explanations):
- Setting reserved concurrency on a version: You cannot set reserved concurrency per function version. You set reserved concurrency on the function and can set provisioned concurrency on an alias. It’s important to keep the total provisioned concurrency for active aliases to less than the reserved concurrency for the function.
- Setting the number of shards on a DynamoDB table: You do not directly control the number of shards the table uses. You can directly add shards to a Kinesis Data Stream. With a DynamoDB table, the way you provision read/write capacity and your scaling decisions drive the number of shards. DynamoDB will automatically adjust the number of shards needed based on the way you’ve configured the table and the volume of data.
- Concurrency in synchronous invocations: Lambda will use concurrency equal to the request rate multiplied by function duration. As one function invocation ends, Lambda can reuse its environment rather than spinning up a new one, so function duration plays an important factor in concurrency for synchronous and asynchronous invocations.
- The impact of higher function memory: A higher memory configuration does have a higher price per millisecond, but because duration is also a factor of cost, your function may finish faster at higher memory configurations and that might mean an overall lower cost.
A shorter duration may reduce the concurrency Lambda needs, but depending on the nature of the function, higher memory may not have a measurable impact on duration. You can use tools like Lambda Power Tuning (https://github.com/alexcasalboni/aws-lambda-power-tuning) to find the best balance for your functions.
There is no stopping Amazon Web Services (AWS) from innovating, improving, and ensuring the customer gets the best experience possible as a result. Providing a seamless user experience is a constant commitment for AWS, and their ongoing innovation allows the customer’s applications to be more innovative – creating a better customer experience.
AWS makes managing networking in the cloud one of the easiest parts of the cloud service experience. When managing your infrastructure on premises, you would have had to devote a significant amount of time to understanding how your networking stack works. It is important to note that AWS does not have a magic bullet that will make all issues go away, but they are constantly providing new exciting features that will enhance your ability to scale in the cloud, and the key to this is elasticity.
Elasticity is defined as “The ability to acquire resources as you need them and release resources when you no longer need them” – this is one of the biggest selling points of the cloud. The three networking features which we are going to talk about today are all elastic in nature, namely the Elastic Network Interface (ENI), the Elastic Fabric Adapter (EFA), and the Elastic Network Adapter (ENA). Let’s compare and contrast these AWS features to allow us to get a greater understanding into how AWS can help our managed networking requirements.
AWS ENI (Elastic Network Interface)
You may be wondering what an ENI is in AWS. The AWS ENI (Elastic Network Interface) is a virtual network card that can be attached to any instance of the Amazon Elastic Compute Cloud (EC2). The purpose of these devices is to enable network connectivity for your instances. If you have more than one of these devices connected to your instance, it will be able to communicate on two different subnets, offering a whole host of advantages.
For example, using multiple ENIs per instance allows you to decouple the ENI from the EC2 instance, in turn allowing you far more flexibility to design an elastic network which can adapt to failure and change.
As stated, you can connect several ENIs to the same EC2 instance and attach your single EC2 instance to many different subnets. You could for example have one ENI connected to a public-facing subnet, and another ENI connected to another internal private subnet.
You could also, for example, attach an ENI to a running EC2 instance, or have the ENI persist after the EC2 instance is terminated.
Finally, it can also be implemented as a crude form of high availability: attach an ENI to an EC2 instance; if that instance dies, launch another and attach the ENI to the replacement. Traffic flow is only affected for a short period of time.
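As a rough sketch of that pattern with boto3 (the ENI, attachment, and instance IDs are placeholders), moving the "floating" ENI to a replacement instance might look like this:

```python
# Detach a long-lived ENI from a failed instance, then attach it elsewhere.
import boto3

ec2 = boto3.client("ec2")

# Detach from the dead instance using the existing attachment ID.
ec2.detach_network_interface(
    AttachmentId="eni-attach-0123456789abcdef0",  # placeholder
    Force=True,
)

# Attach the same ENI to the replacement instance.
ec2.attach_network_interface(
    NetworkInterfaceId="eni-0123456789abcdef0",  # the long-lived ENI
    InstanceId="i-0fedcba9876543210",            # the replacement instance
    DeviceIndex=1,                               # secondary interface slot
)
# Traffic addressed to the ENI's private IP now reaches the new instance.
```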
AWS EFA (Elastic Fabric Adapter)
In Amazon EC2 instances, Elastic Fabric Adapters (EFAs) are network devices that accelerate high-performance computing (HPC) and machine learning.
EFAs are Elastic Network Adapters (ENAs) with additional OS-bypass capabilities.
AWS Elastic Fabric Adapter (EFA) is a specialized network interface for Amazon EC2 instances that enables high levels of inter-instance communication, allowing customers to run applications such as HPC workloads on AWS at scale.
Due to EFA’s support for libfabric APIs, applications using a supported MPI library can be easily migrated to AWS without having to make any changes to their existing code.
For this reason, AWS EFA is often used in conjunction with Cluster placement groups – which allow physical hosts to be placed much closer together within an AZ to decrease latency even more. Some use cases for EFA are in weather modelling, semiconductor design, streaming a live sporting event, oil and gas simulations, genomics, finance, and engineering, amongst others.
AWS ENA (Elastic Network Adapter)
Finally, let’s discuss the AWS ENA (Elastic Network Adapter).
The Elastic Network Adapter (ENA) is designed to provide Enhanced Networking to your EC2 instances.
With ENA, you can expect high throughput and packets-per-second (PPS) performance, as well as consistently low latencies on Amazon EC2 instances. Using ENA, you can utilize 20 Gbps or more of network bandwidth on certain EC2 instance types, massively improving your networking throughput compared to other EC2 instances or on-premises machines. ENA-based Enhanced Networking was first supported on X1 instances and is now available on most current-generation instance types.
Key Differences
There are a number of differences between these three networking options.
- Elastic Network Interface (ENI) is a logical networking component that represents a virtual networking card
- Elastic Network Adapter (ENA) is an Enhanced Networking device (the alternative being the older Intel 82599 Virtual Function (VF) interface) that provides high-end performance on certain specified and supported EC2 types
- Elastic Fabric Adapter (EFA) is a network device which you can attach to your EC2 instance to accelerate High Performance Computing (HPC)
- Elastic Network Adapter (ENA) support began with the X1 instance type and has since expanded to most current-generation instances; Elastic Network Interfaces (ENI) are ubiquitous across all EC2 instances, and Elastic Fabric Adapters (EFA) are available only for certain instance types.
- In order to support VPC networking, an ENA ENI provides traditional IP networking features.
- EFA ENIs provide all the functionality of ENA ENIs plus hardware support to allow applications to communicate directly with the EFA ENI without involving the instance kernel (OS-bypass communication).
- Since the EFA ENI has advanced capabilities, it can only be attached to stopped instances or at launch.
Limitations
EFA has the following limitations:
- p4d.24xlarge and dl1.24xlarge instances support up to four EFAs. All other supported instance types support only one EFA per instance.
- EFA (OS-bypass) traffic cannot be sent from one subnet to another, although normal IP traffic from the EFA can.
- EFA OS-bypass traffic cannot be routed. EFA IP traffic can be routed normally.
- An EFA must belong to a security group that allows inbound and outbound traffic to and from the group.
ENA has the following limitations:
- ENA Enhanced Networking was initially limited to the X1 instance type; it is now supported on most current-generation instance types
ENI has the following limitations:
- You lack the visibility of a physical networking card, due to virtualization
- Only a few instance types support up to four network cards; the majority only support one
Pricing
- You are not charged per ENI with EC2; you are only limited by how many ENIs your instance type supports. There is, however, a charge for additional public IPs on the same instance.
- EFA is available as an optional EC2 networking feature that you can enable on any supported EC2 instance at no additional cost.
- ENA pricing is absorbed into the cost of running a supported instance
This article originally appeared on: https://digitalcloud.training/aws-networking-eni-vs-efa-vs-ena/
I Passed SAA-C03 Testimonials
Passed AWS SAA C03!!

Thanks to all the people who posted their testing experience here. It gave me a lot of perspective on the exam and how to prepare for the new version.
Stephane Maarek’s Udemy course and his practice tests were the key to my success in this test. I did not use any other resource for my preparation.
I am a consultant and have been working on AWS for the last 5+ years, though without much hands-on work. My initial cert expired last year, so I wanted to renew.
Overall, the C03 version was very similar to the C02/C01 versions. I did not get a single question about AI/ML services, and the questions were mostly related to more fundamental services like VPC, SQS, Lambda, CloudWatch, EventBridge, and storage (S3, Glacier, lifecycle policies). Source: r/awscertification
Passed SAP-C01 AWS Certified Solutions Architect Professional
Resources used were:
Adrian (for the labs),
Jon (For the Test Bank),
and Stephane for a quick overview played on double speed.
Total time spent studying was about a month. I don’t do much hands on as a security compliance guy, but do work with AWS based applications everyday. It helps to know things to a very low level.
So I am sharing how I passed my SAA-C03 certification in less than 40 days without any prior experience in AWS (my org asked me to do it).
So the Materials I have used:
Neal Davis SAA C03: https://www.udemy.com/course-dashboard-redirect/?course_id=2469516 This was my primary resource and I built foundation using this.
Tutorial Dojo practice Tests: https://www.udemy.com/course-dashboard-redirect/?course_id=1520628 These will make you learn how you will implement your theory in questions and connect the dots.
Neal Davis Practice Tests: https://www.udemy.com/course-dashboard-redirect/?course_id=1878624 I highly recommend these, since Neal’s tests will give you less hints in questions and after doing these you now have absolute understanding how actual Exam questions will be.
Lastly I used Stephane Maarek SAA CO3: https://www.udemy.com/course-dashboard-redirect/?course_id=2196488 To close out final remaining gaps and revision.
After doing tests just make sure you know why the particular answer is wrong.
I scheduled my exam for 26th September and took the test at a Pearson center. The exam was extremely lengthy; I used all my time just working through the questions and did not have time to look back at my flagged questions (in fact, while I was clicking the End Review button, time was up and the test ended itself). My results came 50 hours after completing the test, and those 50 hours were the most difficult part of the whole journey.
Today I received my result: I scored 914 and got the badge and certification.
So how do you know you are ready? Once you start consistently getting 80+ on 2-3 practice tests, just book your exam.
Passed SAP-C01!

Just found out I passed the Solutions Architect Pro exam. It was a tough one, took me almost the full 3 hours to answer and review every question. At the end of the exam, I felt that it could have gone either way. Had to wait about 20 painful hours to get my final result (857/1000). I’m honestly amazed, I felt so unprepared. What made it worse is that I suddenly felt ill on the night of the exam. Only got about three hours sleep, realized it was too late to reschedule and had to drag myself to the test center. Was very tempted to bail and pay the $300 to resit, very glad I didn’t!
No formal cloud background, but have worked in IT/software for about 10 years as a software engineer. Some of my roles included network setup/switch configuration/Linux and Windows server admin, which definitely comes in useful (but isn’t required). I got my first cert in January (CCP), and have since got the other three associate certs (SAA, DVA, SOA).
People are not joking when they say this is an endurance test. You need to try and stay focused for the full three hours. It took me about two hours to answer every question, and a further hour to review my answers.
In terms of prep, I used a combination of Stephane Maarek (Udemy) and Adrian Cantrill (learn.cantrill.io). I found both courses worked well together (Adrian Cantrill for the theory/practical, and Stephane Maarek for the review/revision). I used Tutorials Dojo for practice exams and review (tutorialsdojo.com). The exam questions are very close to the real thing, and the question summaries/explanations are extremely well written. My advice is to sit the practice exam and then carefully review each question (regardless of whether you got it right or wrong) and read/understand the explanations of why each answer is right or wrong. It takes time, but it will really prepare you for the real thing.
I’m particularly impressed with the Advanced Demos on the Adrian Cantrill course, some of those really helped out with having the knowledge to answer the exam questions. I particularly liked the Organizations, Active Directory, Hybrid DNS, Hybrid SSM, VPN and WordPress demos.
In terms of the exam, lots of questions on IAM (cross-account roles), Organizations (billing/SCP/RAM), Database performance issues, migrations, Transit Gateway, DX/VPN, containerisation (ECS/EKS), disaster recovery. Some of the scenario questions are quite tricky, all four answers appear valid but there will be subtle differences between them. So you have to work out what is different between each answer.
A tip I will leave you: a lot of the migration questions will get you to pick between using snow devices or uploading via the internet/DX. Quick way to work out if uploading is feasible is to multiply the line speed by 10,000 – this will give you the approximate number of bytes that can be transferred in a day. E.g. a line speed of 50Mbps will let you transfer 500GBytes in a day (assuming nothing else is using that link). So if you had to transfer 100TB, then you will need to use snow devices (unless you were happy waiting 200 days).
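A quick sketch of that rule of thumb in Python, using the same numbers as the tip above (86,400 seconds/day divided by 8 bits/byte is about 10,800, so multiplying by 10,000 is a handy approximation):

```python
# Approximate bytes/day transferable over a link: line speed (bits/s) x 10,000.
line_speed_mbps = 50
bytes_per_day = line_speed_mbps * 1_000_000 * 10_000   # ~500 GB/day at 50 Mbps

data_to_move_tb = 100
days = data_to_move_tb * 1_000_000_000_000 / bytes_per_day
print(f"~{days:.0f} days over a {line_speed_mbps} Mbps link")  # ~200 days -> use Snow devices
```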
Just passed the SAA-C03 exam (864) and wanted to provide some feedback since that was helpful for me when I was browsing here before the exam.
I come from an IT background and have strong knowledge of the VPC portion, so that section was a breeze for me in the preparation process (I had never used AWS before this, so everything else was new, but the concepts were somewhat familiar considering my background). I started my preparation about a month ago and used the Maarek class on Udemy. Once I finished the class and reviewed my notes, I moved to Maarek’s 6 practice exams (on Udemy). I wasn’t doing extremely well on the PEs (I passed 4 of the 6 exams with grades in the 70s); I reviewed the exam questions after each exam and moved on to the next. I also purchased Tutorials Dojo’s 6-exam set but only ended up taking one of the 6 (which I passed).
Overall the practice exams ended up being a lot harder than the real exam which had mostly the regular/base topics: a LOT of S3 stuff and storage in general, a decent amount of migration questions, only a couple questions on VPCs and no ML/AI stuff.
Sharing the study guide that I followed when I prepared for the AWS Certified Solutions Architect Associate SAA-C03 exam. I passed this test and thought of sharing a real exam experience in taking this challenging test.
First off, my background: I have 8 years of development experience and have been using AWS for several projects, both personally and at work. I studied for a total of 2 months, focused on the official Exam Guide, and carefully studied the Task Statements and related AWS services.
For my exam prep, I bought the Adrian Cantrill video course and the TutorialsDojo (TD) video course and practice exams. Adrian’s course is just right and highly educational, but like others have said, the content is long and covers more than just the exam. I did all of the hands-on labs too and played around with some machine learning services in my AWS account.
The TD video course is short and a good overall summary of the topics you’ve just learned. One TD lesson covers multiple topics, so the content is highly concise. After I completed Adrian’s video course, I used TD’s video course as a refresher, did a couple of their hands-on labs, then headed on to their practice exams.
For the TD practice exams, I took the exams in chronological order and didn’t jump back and forth until I completed all the tests. I first tried all 7 of the timed-mode tests, reviewing every wrong answer on every attempt, then the 6 review-mode tests and the section/topic-based tests. I took the final-test mode roughly 3 times, and this is by far one of the most helpful features of the website IMO. The final-test mode generates a unique set from the whole TD question bank, so every attempt was challenging for me. I also noticed that the course progress doesn’t move if you fail a specific test, so I retook the tests that I failed.
The actual AWS exam is almost the same as the ones in the TD tests, where:
All of the questions are scenario-based
There are two (or more) valid solutions in the question, e.g:
Need SSL: options are ACM and a self-signed certificate
Need to store DB credentials: options are SSM Parameter Store and Secrets Manager
The scenarios are long-winded and ask for:
MOST Operationally efficient solution
MOST cost-effective
LEAST amount of overhead
Overall, I enjoyed the exam and felt fully prepared while taking the test, thanks to Adrian and TD, but that doesn’t mean the whole darn thing is easy. You really need to put in some elbow grease and keep your headlights on when preparing for this exam. Good luck to all, and I hope my study guide helps anyone who is struggling.
Just another thread about passing the exam? I passed SAA-C03 yesterday and would like to share my experience and how I earned the certification.
Background:
– graduate with networking background
– working experience on on-premise infrastructure automation, mainly using ansible, python, zabbix and etc.
– cloud experience, short period like 3-6 months with practice
– provisioned cloud application using terraform in azure and aws
Course that I used fully:
– AWS Certified Solutions Architect – Associate (SAA-C03) | learn.cantrill.io
– AWS Certified Solutions Architect Associate Exam – SAA-C03 Study Path (tutorialsdojo.com)
Course that I used partially or little:
– Ultimate AWS Certified Solutions Architect Associate (SAA) | Udemy
– Practice Exams | AWS Certified Solutions Architect Associate | Udemy
Lab that I used:
– Free tier account with cantrill instruction
– Acloudguru lab and sandbox
– Percepio lab
Comment on course:
The Cantrill course is in-depth with a lot of practical knowledge (like email aliases, etc.) – check it out to learn more.
The TutorialsDojo practice exams helped me filter the answers and guided me to the correct ones. When I was wrong on a specific topic, I rewatched the Cantrill video. There are some topics not covered by Cantrill, but the guideline/review in the practice exams provides plenty of detail. I did all the other modes before the timed-based ones; after that I averaged 850 on the timed-based exams and scored 63/65 on the final practice exam. However, the real examination is harder than the practice exams, in my opinion.
I went through some of the Udemy course and practice exams, but I think the Udemy practice exams are quite hard compared to TutorialsDojo.
Labs: just get your hands dirty and the knowledge will sink deep into your brain. My advice is to not just do copy-and-paste labs, but really read the description of each parameter in the AWS portal.
Advice:
you need to know some general exam topics like how to:
– s3 private access
– ec2 availability
– Kinesis products, including Firehose, Data Streams, etc.
– iam
My next target will be AWS SAP and CKA. I’m still searching for suitable material for AWS SAP, but I plan to mainly use the Acloudguru sandbox and a homelab to learn the subject, and to practice with Cantrill’s labs on GitHub.
Good luck anyone!
Passed SAA

I wanted to give my personal experience. I have a background in IT, but I had never worked in AWS before 5 weeks ago. I got my Cloud Practitioner in a week and the SAA after another 4 weeks of studying (2-4 hours a day). I used Cantrill’s course and Tutorials Dojo practice exams. I highly, highly recommend this combo. I don’t think I would have passed without the practice exams, as they are quite difficult. In my opinion, they are much more difficult than the actual exam. They really hit the mark on what kind of content you will see. I got a 777, and that’s with getting 70-80%s on the practice exams. I probably could have done better, but I had a really rough night of sleep and I came down with a cold. I was really on the struggle bus halfway through the test.
I only had a couple of questions on ML/AI, so make sure you know the differences between them all. Lots of S3 and EC2. You really need to know these in and out.
My company is offering stipends for each certification, so I’m going straight to Developer next.
Just passed my SAA-C03 yesterday with 961 points. My first time doing AWS certification. I used Cantrill’s course. Went through the course materials twice, and took around 6 months to study, but that’s mostly due to my busy schedule. I found his materials very detailed and probably go beyond what you’d need for the actual exam.
I also used Stephane’s practice exams on Udemy. I’d say doing these was instrumental in my passing: they got me used to the type of questions in the actual exam and helped me review missing knowledge. I would not have passed otherwise.
Just a heads-up: a few things popped up that I did not see in the course materials or practice exams:
* Lake Formation: question about pooling data from RDS and S3, as well as controlling access.
* S3 Requester Pays: question about minimizing S3 data cost when sharing with a partner.
* Pinpoint journeys: question about a customer replying to an SMS sent out and then storing their feedback.
Not sure if they are graded or Amazon testing out new parts.
Cheers.
Passed Solutions Architect Professional (SAP-C01)

I’ve spent the last 2 months of my life focusing on this exam and now it’s over! I wanted to write down some thoughts that I hope are informative to others. I’m also happy to answer any other questions.
APPROACH
I used Stephane’s courses to pass CCP, SAA, DVA… however I heard such great things about Adrian’s course that I purchased it and started there.
The detail and clarity that Adrian employs is amazing, and I was blown away by the informative diagrams that he includes with his lessons. His UDP joke made me lol. The course took a month to get through with many daily hours, and I made over 100 pages of study notes in a Google document. After finishing his course, I went through Stephane’s for redundancy.
As many have mentioned here, Stephane does a great job of summarizing concepts, and for me, I really value the slides that he provides with his courses. It helps to memorize and solidify concepts for the actual exam.
After I went through the courses, I bought TutorialsDojo practice exams and started practicing. As everyone says, these are almost a must-use resource before an AWS exam. I recognized three questions on the real exam, and the thought exercise of taking the mocks came in handy during the real exam.
Total preparation: 10 weeks
DIFFICULTY
I heard on this Subreddit that if this exam is a 10, then the associate-level exams are a 3. I was a bit skeptical, but I found the exam a bit harder than the practice exam questions. I just found a few obscure things referred to during the real exam, and some concepts combined in single questions. The Pro-level exams are *at least* 2 times as hard, in my opinion. You need to have Stephane’s slides (or the exam “power-ups” that Adrian points out)/the bolded parts down cold and really understand the fundamentals.
WHILE STUDYING
As my studying progressed, I found myself on this sub almost every day reading others’ experiences and questions. Very few people in my circle truly understand the dedication and hard work that is required to pass any AWS exam, so observing and occasionally interacting here with like-minded people was great. We’re all in this together!
POST-EXAM
I was waiting anxiously for my exam result. When I took the associate exams, I got a binary PASS/FAIL immediately… This time, I got my Credly email 17 hours after finishing the exam, and when I heard from AWS, my score was higher than expected, which feels great.
WHAT’S NEXT
I’m a developer and have to admit I’ve caught the AWS bug. I want to pursue more… I heard Adrian mention in another thread that some of his students take the Security specialty exam right after SAP, and I think I will do the same after some practice exams. Or DevOps Pro… Then I’m taking a break 🙂
I had a lot on S3, CloudFront, and DBs, and a bunch on Lambda and containers. Lots of “which is the most cost-effective solution” questions.
I think I did OK, but my online proctoring experience kind of messed with my mind a little bit (specifics in a separate thread). At one point I even got yelled at for thinking out loud to myself, which kind of sucked, as that’s one way I talk myself through situations :-/
For two weeks I used MANY practice exams from YouTube, Tutorials Dojo, and A Cloud Guru. Shout out to Cloud Guru Amit (YouTube), who has a keyword method that worked well for me. I also read up on various white papers covering anything I wasn’t clear on or got wrong.
ONTO AWS-Security Specialty and CompTIA Sec+ for me.
I passed my SAA! Here’s some tips and thoughts.

Shoutout to Adrian – his course was great at preparing me for all the knowledge needed for the exam (with the exception of a question on Polly and Textract, which none of the resources – Adrian, Stephane for test review, and the Dojo practice exams – covered).
I got a 78 and went in person to a testing site close by to avoid potential hiccups with online testing. I studied over the course of 4 months but did the bulk of the course in 2 months.
I want to reiterate a common theme in these posts that should not be overlooked, in case you are deep in your journey and plan on taking the test in the next 4 weeks or are 75% through the videos: BUY THE TUTORIALSDOJO PRACTICE EXAMS AND TAKE THEM, EVEN BEFORE YOU ARE DONE WITH THE COURSE.
I thought it would be smarter to finish the course and then do the tests to get a higher score, BUT you will inevitably strengthen your skills and knowledge through: 1) doing the tests to get used to the format, and 2) REVIEW REVIEW REVIEW – the questions fall into 4 categories, and afterwards you will see all the questions and why each answer is right or almost right. Knowing your weaknesses is crucial for intentional, intelligent, and efficient reviewing.
I took screenshots of all the questions I got wrong or wasn’t completely sure of why I got them right.
Got a lot of questions on CloudFront, S3, Secrets Manager, KMS, databases, containers (ECS), and an ML question based on Amazon Transcribe.
Just passed the AWS Certified Solutions Architect Associate exam SAA-C03 and thank God I allocated some time improving my core networking knowledge. In my point of view, the exam is filled with networking and security questions, so make sure that you really focus on these two domains.
If you don’t know that the port number of MySQL is 3306 and the one for MS SQL is 1433, then you might get overwhelmed by the content of the SAA-C03 exam. Knowing how big or how small a particular VPC (or network) would be based on a given CIDR notation would help too. Integrating SSL/HTTPS into services like ALB and CloudFront is also present in the exam.
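For example, a quick way to sanity-check CIDR sizing with Python’s ipaddress module (the CIDR below is illustrative):

```python
# Count the addresses in a CIDR block; AWS reserves 5 addresses per subnet.
import ipaddress

net = ipaddress.ip_network("10.0.0.0/24")
print(net.num_addresses)      # 256 total addresses in a /24
print(net.num_addresses - 5)  # 251 usable in an AWS subnet
```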
On the top of my head, these are the related networking stuff I encountered. Most of the things in this list are somewhat mentioned in the official exam guide:
Ports (e.g. 3306 = MySQL, 1433 = Microsoft SQL)
Regional API Gateway
DNS Resolution between On-Premises networks and AWS
Internal vs external IPs
EKS – Kubernetes Pod Networking
Ephemeral Ports
CIDR blocks
VPC Peering
Lots of endpoint types (e.g. S3 File Gateway endpoints, Interface Endpoints, Gateway Endpoints)
As far as I know, AWS shuffles the content of their exam so you probably could get these topics too. Some feature questions could range from basic to advanced, so make sure you know each feature of all the AWS services mentioned in the exam guide. Here’s what i could remember:
Amazon MQ with active/sync
S3 Features (Requester Pays, Object Lock etc)
Data Lakes
Amazon Rekognition
Amazon Comprehend
For my exam prep, I started my study with Jon Bonso/TD’s SAA video course, then moved to Adrian Cantrill’s course. Both are very solid resources, and each instructor has a different style of teaching. Jon’s course is more like minimalist, modern YouTube-style teaching. He starts with an overview before going into the nitty-gritty technical details, with a fancy montage of videos to drive home the fundamental AWS concepts. I recommend his stuff as a crash course to learn the majority of SAA-related content. There’s also a bunch of PlayCloud/hands-on labs included in his course, which I find very helpful too.
Adrian’s course is much longer and includes the necessary networking/tech fundamentals. Like what other people are saying in this sub, the quality of his stuff is superb and very well delivered. If you are not in a rush and really want to learn the ropes of being a solutions architect, then his course is definitely a must-have. He also has good videos on YouTube and mini-projects on GitHub that you can check out.
About half-way through Adrian’s course, I started doing mock exams from TutorialsDojo (TD) and AWS Skill Builder just to reinforce my knowledge. I take a practice test first, then review my correct and incorrect answers. If I notice that I made a lot of mistakes on a particular service, I go back to Adrian’s course to make those concepts stick better.
I also recommend trying out the demo/sample/preview lessons before you buy any SAA course. From there, you can decide which teaching style would work best for you:
Adrian Cantrill course: https://learn.cantrill.io/courses/aws-certified-solutions-architect-associate-saa-c03/lectures/41301631
TD mock exams: https://portal.tutorialsdojo.com/courses/aws-certified-solutions-architect-associate-practice-exams/
AWS Skill Builder practice questions set https://explore.skillbuilder.aws/learn/course/external/view/elearning/13266/aws-certified-solutions-architect-associate-official-practice-question-set-saa-c03-english
Thank you to all the helpful guys and gals in this community who shared tips!
About me:
My overall objective was to pivot from a traditional cybersecurity role towards a cloud security role. I am a security professional with 10+ years of experience and certifications like CCIE Security, CISSP, OSCP and others. Mostly I have worked in consulting environments doing deployment and pre-sales work.
My cloud Journey:
I started studying for AWS certifications in January 2022 and did SA Associate in March, SA Professional in August and Security Specialty in September. I used a mix of Adrian’s, Stephane’s, and Neal’s videos, and I used Tutorials Dojo for practice tests.
Preparation Material:
For videos, Adrian’s stood out for the level of effort he has put in. Had this been 6-8 years back, this kind of on-site bootcamp for one candidate would sell for a minimum of 5,000 USD. I watched them at 1.25x speed, but it was difficult to come back to Adrian’s content to recall/revise something because of its length. That’s why I had Stephane’s and Neal’s courses in my pocket; they usually go on sale for 12-13 USD, so there is no harm in having them. Neal did a better job than Stephane for SA Pro, as his slides were much more visually appealing, but I felt Stephane covered more concepts. Topics like VPC and Transit Gateway are better understood when the visuals are better. I never made any notes. I purchased Tutorials Dojo’s notes, but I don’t think they were of much use; you can always find notes made by other people on GitHub, and I felt those were more helpful. You can also download the video slides from Udemy, and I did cut a few slides from there and pasted them into my Google Docs for revision. For the practice tests, I felt Dojo’s wording was complex compared to the real exam, but it does give a very good idea of the difficulty of the exam. The real exam had crisper content.
About Exam:
The exams themselves were interesting because they helped me learn the new datacenter architecture. Concepts and technologies like Lambda, Step Functions, AWS Organizations, and SCPs were very interesting, and I feel way more confident now compared to where I was a year back. Because I target security roles, I want to point out that not everything these roles need is covered in AWS certifications. I had gone through the CSA 4.0 guide back in December 2021 before starting my AWS journey, and I think that helped me visualize many scenarios. Concepts like shadow IT, legal hold, vendor lock-in, SOC 2/3 reports, and portability and interoperability problems in cloud environments were very new to me. I wish AWS would include this material in the security exam. These concepts lean towards compliance and governance, but they are important to know if you are going to interview for cloud security architect roles. I also feel DevSecOps concepts should feature more in the Security Specialty exam.
A bit of criticism here. The exam is very much product-specific, and many people coming from deployment/research backgrounds will even call it a marketing exam. In fact, one L7 Principal Security SA from AWS told me that he considers this a marketing exam. On this forum there are often discussions about how difficult the AWS SA Pro is, but I disagree: these exams were nowhere near the difficulty level of CCIE, CISSP or OSCP, which I did in the past. The difficulty is high because of the long questions, the reading fatigue they cause, and the lack of Visio diagrams; none of that is relevant to the real world if you are working as a Solution Architect/Security Architect. Especially for SA Pro, almost every question goes like this: ‘A customer plans to migrate to the AWS cloud, where the applications are to reside on EC2 with auto-scaling enabled in a private subnet; those EC2 instances are behind an ALB in a public subnet. A replica of this should be created in an EU region, and Route 53 should do geolocation routing.’ In the real world, these kinds of designs are always communicated using Visio diagrams, i.e. “current state architecture” and “future state architecture” diagrams. In almost every question I had to read all of this and draw on the provided sheet, which created extra work and reading fatigue. I bet non-English speakers who are experienced architects will find it irritating, even though they are given 30 minutes extra. If AWS changed these long paragraphs into diagrams, the questions would be easier and more aligned with the real world; I am not sure they would want to, because then the difficulty goes down. Also, because SMEs are often paid per question, they don’t want to put extra effort into creating diagrams. That’s the problem when you outsource question creation to third-party SMEs: payment is based on the number of questions, and I don’t think companies even pay for this; often it is voluntary work in exchange for some sort of free recertification or an exam voucher.
There seems to be quite a buzz around the Advanced Networking exam, which is considered the most difficult. While I haven’t looked into it, I would say that if it doesn’t have diagrams in each question, then the exam is not aligned with the real world. Networking challenges should never be communicated without diagrams. Again, the difficulty is high because it causes reading fatigue, which doesn’t happen in the life of a security architect.
Tips to be a successful consultant:
If you want to become a cloud security architect, I would still highly recommend AWS SA Pro; the Security Specialty not so much, because beyond extra KMS coverage and a few other bits, it was not the eye-opener for me that SA Pro was. Even the AWS job description for the L6 Security Architect (ProServe) role says that the candidate must be able to complete AWS SA Pro within 3 months of hiring, which suggests it is more relevant than the Security Specialty even for security roles. But these are all products, and you need knowledge beyond that for security roles. The driving force of security has mostly been compliance, so you should be really good at things like PCI DSS, ISO 27001, and the Cloud Controls Matrix, because at the end of the day you need to map these controls to the products; understanding the products is not even 50% of the job. Learn Terraform/Pulumi if you want to communicate your ideas/PoCs as IaC, and some Python/boto3 SDK to help you build use cases (needed for ProServe roles but not for SA roles). If you are looking to do threat modeling of cloud-native applications, you again need AWS knowledge plus securing the SDLC process, SAST/DAST, and then MITRE ATT&CK/Cloud Controls Matrix, etc.
Similarly, if you want to be in networking roles, don’t think AWS Advanced Networking alone will make you a good consultant. It’s a very complex topic, and I would recommend looking beyond it by following the courses by Ivan Pepelnjak, who is himself a networking veteran: https://www.ipspace.net/Courses. This kind of material will help you be a much more confident consultant.
I am starting my python journey now which will help me automate use cases. Feel free to ping me if you have any questions.
So I finally got my score: 886, which is definitely more than I expected. I have been working with AWS for about a year, but my company is slowly moving there, so I don’t have a ton of hands-on experience yet.
I got a lot of helpful information from so many people on this subreddit, so now it’s my turn to share my experience.
Study plan
Started with Cantrill’s SAA-C02 course and later switched to his SAA-C03 course. He covers every topic in great detail, and the demos are well structured. Worth every penny. It does take a long time to finish his course, so plan accordingly.
Tutorials Dojo study guide & cheat sheets – I liked this 300-odd-page PDF where all the crucial topics are summarized. Bonso does a great job of comparing similar services and highlighting things that may get you confused during the exam. I took notes within the PDF and used the highlighter tool a lot. It helped me revise a couple of days before the exam.
Tutorials Dojo practice tests – These tests are the BEST. The questions are similar to what they ask in the exam, and the explanation under every question is very helpful. Read through these for every question that you got wrong, and even for questions you got right but weren’t 100% sure about.
Official exam guide – I used this at the end to check my understanding of the knowledge and skill items. The consolidated list of services is really helpful; I took notes against each service and especially focused on services that look similar.
Labs – While Cantrill’s labs are great, if you are following along with him you may be going too fast and missing a few things. If you are new to a particular service, you should absolutely go back and go through every screen at your own pace. I did spend time doing labs, but not nearly as much as I had hoped.
Exam experience
The first few questions were easy. A lot of short questions, which definitely helped with my nerves.
Questions started getting longer, and the answers were confusing too. I flagged about 20-odd questions for review but could only review half of them before the timer ran out.
Remember that 15 questions are not scored. No point spending a lot of time on a question that may not even count against your final score. Use the flag for review feature and come back to a question later if time permits.
Watch out for exactly what they are asking for. You as an architect might want to solve the problem in another way than what the question is asking you to do.
I will edit/add if I remember more things.
2022 – 2023 AWS Solutions Architect Associate SAA-C03 Practice Exam
Lots of the comments here about networking / VPC questions being prevalent are true. Also so many damn Aurora questions, it was like a presales chat.
The questions are actually quite detailed, as some have already mentioned, so pay close attention to the minute details. Some questions you definitely have to flag for re-review.
It is by far harder than the Developer Associate exam, despite having a broader scope. The DVA-C02 exam was like doing a speedrun, but this felt like finishing off Sigrun in GoW. Ya gotta take your time.
I took the TD practice exams. They somewhat helped, but having intimate knowledge of VPC and DB concepts helps more.
Passed AWS SAP-C01

Just passed the SAP this past weekend, and it was for sure a challenge. I already had some familiarity with AWS, having Cloud Practitioner and having passed the SAA back in 2019. I originally wanted to pass the professional version to keep my certs active, so I decided to cram to pass this before they made changes in November. Overall, I was able to pass on my first attempt after studying heavily for about 6 weeks, averaging about 4 hours a day.
I used the following for studying:
A Cloud Guru video course and labs (this was OK in my opinion, but it didn’t go into as much detail as I think it should have)
Stephane Maarek’s video course was awesome and hit on everything I needed for the test. I also took his practice tests a bunch of times.
Tutorials Dojo practice tests were worth every penny, and the review mode on there was perfect for practicing and going over material rapidly.
Overall, I would focus first on going through the full video course with Stephane and then tackling some practice tests. I would then revisit his videos often on subjects I needed to review. On the day of the test I took it remotely, which I honestly think added a little more stress, with the proctor all over me at any movement. I ended up passing with a score of 811. Not the best score, but I honestly thought I did worse, as the test was challenging and time flew by.
Passed with 819.
Approach:
Took the Ultimate AWS Certified Solutions Architect Associate SAA-C03 course by Stephane Maarek on Udemy. Sat through all lectures and labs. I think Maarek’s course provides a good overview of all the necessary services, including hands-on labs which prepare you for real-world tasks.
Finished all the practice exams by Tutorials Dojo. Did half of the tests in review mode first and the rest in timed mode.
For last-minute summary preparation, I used the Tutorials Dojo Study Guide eBook. It was around $4 and summarizes all services in around 280 pages. It’s a good ebook to go through before your exam. I only went through the summaries of services that I was struggling with.
Exam Day and Details:
I opted for an in-person exam with Pearson since I live close to their testing centers and I have heard about people running into issues with online exams. If you have a testing center nearby, I highly recommend you go there. Unlike with online exams, you are free to use the bathroom and use blank sheets of paper. I just felt there was more freedom in person.
The exam questions were harder than TD’s. They were more detailed and usually had a combination of multiple services as the correct answer. Read the questions very carefully and flag them for review if you aren’t sure.
Around 5-10 questions were exactly the same as TD’s, which was very helpful.
There were a lot of questions related to S3, EBS, EFS, RDS and DynamoDB. So focus on those.
I saw ~5 questions on AWS services which I had never heard of before. If you see services you haven’t heard of, I wouldn’t worry much about them, as they are likely part of the 15 ungraded questions.
It took me around 1.5 hours to finish the exam, including the review. I finished my exam around 4 PM and got the results the next morning around 5 AM. I only got an email from Credly; however, I was able to download my exam report from https://www.aws.training/Certification immediately.
Tips:
Try to get at least 80% on a few TD tests before you take your exam.
Take half of the TD practice exams in review mode and go through the answers in detail (even the right ones).
Opt for an in-person exam if possible.
If you see AWS services you haven’t seen before, don’t panic. They are likely part of the 15 ungraded questions.
Read questions very carefully.
Relax. It’s just a certification exam, and you can retake it in 14 days if you fail. But if you followed all of the above, there is very little chance that you will fail.
Next:
AWS Certified Developer – Associate Certification
Good luck everyone!
AWS Certified Solutions Architect Associate
I passed the SAA-C03 AWS Certified Solutions Architect Associate exam this week, all thanks to this helpful Reddit sub! Thank you to everyone who shares tips and inspiration on a regular basis. Sharing my exam experience here:
Topics I encountered in the exam:
Lots of S3 features (ex: Object Lock, S3 Access Points)
Lots of advanced cloud designs. I remember the following:
AWS cloud only with 1 VPC
AWS cloud only with 3 VPCs connected using a Transit Gateway
AWS cloud only with 3 VPCs with a shared VPC which contains shared resources that the other 2 VPCs can use.
AWS cloud + on-prem via VPN
AWS cloud + on-prem via Direct Connect
AWS cloud + on-prem with SD WAN connection
Lots of networking (multicasting via Transit Gateway, container networking, Route 53 resolvers)
Lots of Containers – EKS, EKS Anywhere, EKS Distro, ECS Anywhere
Lots of new AWS services – Compute Optimizer, License Manager, Proton, Managed Grafana etc.
Reviewers used:
Tutorials Dojo (TD) SAA-C03 video course and practice exams
Adrian Cantrill: AWS Certified Solutions Architect – Associate (SAA-C03)
Official SAA-C03 Exam Guide: https://d1.awsstatic.com/training-and-certification/docs-sa-assoc/AWS-Certified-Solutions-Architect-Associate_Exam-Guide.pdf
Exam Tips
If you’re not a newbie anymore, I recommend skipping the basic lessons included in Adrian Cantrill’s course and focusing on the SAA-C03-related content.
Do labs, labs, labs! Adrian has a collection of labs on his GitHub. TD has hands-on labs too, with a real AWS console. I found the TD labs helpful for testing my actual knowledge of certain topics.
Take the TD practice exams at least twice and aim to get 90% on every test.
Review the suitable use cases for each AWS service. The TD and Adrian video courses usually cover the use cases for every AWS service. Familiarize yourself with them and make notes.
Make sure that whenever you watch the videos, you create your own notes that you can review later on.
source: r/AWSCertifications
Passed SAA-C03
Hi guys,
I successfully passed the SAA-C03 exam on Saturday with a score of 832. I felt like the exam was pretty difficult and was wondering whether I would pass… Maybe I got a harder test set.
What I did to prepare:
- Tom Carpenter’s course on LinkedIn Learning for SAA-C02: I started preparing for the exam last year but had a break in between. Meanwhile, AWS released the new version, so this course is not that relevant anymore. They will probably update it for SAA-C03 in the future.
- Tutorials Dojo practice tests and materials: Now these were great! I did a couple of their practice tests in review mode and a couple in timed mode. Overall (unpopular opinion), I felt the exam was harder than the practice tests, but the practice tests and explanations prepared me pretty well for it.
- Whizlabs SAA-C03 course: They have some practice tests which were fine, but they also have Labs which are great if you want to explore the AWS services in a guided environment.
- Skillcertpro practice tests: The first 5 were fine, but the others were horrible. Stay away from them! They are full of typos and also incorrect answers (S3 was ‘eventually consistent’ in one of the questions)
- 1.5 years experience with AWS
Amazon AWS Outages 2022 – 2023
Is AWS Down? Is Amazon down? AWS Outage today?
Are too many companies dependent on ONE cloud company? Amazon’s AWS outage impacted thousands of companies and the products they offer to consumers including doorbells, security cameras, refrigerators, 911 services, and productivity software.
Amazon AWS Outages Highlight Promise And Peril Of Public Clouds For 5G – Forbes
AWS Skills Builder
What’s better, AWS Skill Builder or AWS Workshops?
Workshops help you practice various labs/scenarios in your own AWS account. It’s more like learning by doing.
Skill Builder is more structured – like being taught a formal class, either through text or video.
https://awesome-aws-workshops.com/
AWS Glue is a pay-as-you-go service from Amazon that helps with your ETL (extract, transform, and load) needs. It automates the time-consuming steps of data preparation for analytics: it extracts data from different data sources, transforms it, and then saves it to the data warehouse. Today, we will explore AWS Glue in detail. Let’s start with the components of AWS Glue.
AWS Glue Components
Below, you’ll find some of the core components of AWS Glue.
Data Catalog
The Data Catalog is the persistent metadata store in AWS Glue. You have one Data Catalog per AWS account per region. It contains the metadata for all your data sources, table definitions, and job definitions used to manage the ETL process in AWS Glue.
Crawler
The crawler connects to your data sources and data targets, crawls through the data to determine the schema, and creates metadata tables in your AWS Glue Data Catalog.
Classifier
The classifier object determines the schema of a data store. AWS Glue has built-in classifiers for common formats like CSV, JSON, and XML, and it also provides default classifiers for common RDBMS systems.
Data store
A data store is where the actual data lives, in a persistent storage system such as S3 or a relational database management system.
Database
In AWS Glue terminology, a database is a collection of associated Data Catalog table definitions organized into a logical group.
AWS Glue Architecture
How AWS Glue Works
- Identify the data sources you will use.
- Define a crawler that points to each data source and populates the AWS Glue Data Catalog with metadata table definitions. This metadata is used when data is transformed during the ETL process.
- Your data catalog is now categorized, and the data is available for instant searching, querying, and ETL processing.
- Provide a script through the console or API so that the data can be transformed. AWS Glue can also generate this script for you.
- Run the job, or schedule it to run based on a particular trigger. A trigger can be based on a schedule or on the occurrence of an event.
- When a job is executed, the script extracts the data from the data source(s), transforms it, and loads the transformed data into the data target. The script runs in an Apache Spark environment in AWS Glue. (A boto3 sketch of this workflow follows.)
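A minimal boto3 sketch of the crawler and job-run steps above; the resource names, IAM role ARN, and job name are placeholders, and the Glue job itself is assumed to already exist:

```python
import boto3

glue = boto3.client("glue")

# Define a crawler that points at a data source and populates
# the Data Catalog with metadata table definitions.
glue.create_crawler(
    Name="sales-crawler",
    Role="arn:aws:iam::123456789012:role/GlueServiceRole",  # placeholder role
    DatabaseName="sales_db",
    Targets={"S3Targets": [{"Path": "s3://my-bucket/raw/sales/"}]},
)
glue.start_crawler(Name="sales-crawler")

# Run a pre-defined ETL job against the catalogued data.
run = glue.start_job_run(JobName="sales-etl-job")
print("Started job run:", run["JobRunId"])
```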
When To Use AWS Glue
Below are some of the top use cases for AWS Glue.
Build a data warehouse
If you want to build a data warehouse that collects data from different sources, cleanses it, validates it, and transforms it, then AWS Glue is an ideal fit. You can also transform and move data already in the AWS cloud into your data store.
Use AWS S3 as data lake
You can turn your S3 data into a data lake by cataloguing it in AWS Glue. The transformed data then becomes available to Amazon Redshift and Amazon Athena for querying; both can query your S3 data directly using the AWS Glue Data Catalog (see the sketch below).
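For example, once a crawler has catalogued the data, an Athena query over the S3 files is just an API call away. A sketch; the database, table, and bucket names here are made up:

```python
import boto3

athena = boto3.client("athena")

# Query a Glue-catalogued table whose underlying files live in S3.
athena.start_query_execution(
    QueryString="SELECT region, SUM(amount) AS total FROM sales GROUP BY region",
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)
```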
Create event-driven ETL pipeline
AWS Glue is a perfect fit if you want to launch an ETL job as soon as fresh data lands in S3. You can use AWS Lambda along with AWS Glue to orchestrate the ETL process, as sketched below.
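A sketch of the Lambda side of such a pipeline, assuming a hypothetical Glue job named sales-etl-job and an S3 ObjectCreated event notification wired to this function:

```python
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    # Each new S3 object triggers a Glue job run, with the object key
    # passed through as a job argument.
    s3_info = event["Records"][0]["s3"]
    glue.start_job_run(
        JobName="sales-etl-job",  # placeholder job name
        Arguments={"--input_key": s3_info["object"]["key"]},
    )
```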
Features of AWS Glue
Below are some of the top features of AWS Glue.
Automatic schema recognition
The crawler is a very powerful component of AWS Glue that automatically recognizes the schema of your data. Users do not need to design the schema of each data source manually; crawlers identify the schema and parse the data automatically.
Automatic ETL code generation
AWS Glue can create the ETL code automatically. You just specify the source of the data and its target data store, and AWS Glue generates the relevant Scala or Python code for the entire ETL pipeline.
Job scheduler
ETL jobs are very flexible in AWS Glue. You can execute jobs on demand, or schedule them to be triggered on a schedule or by an event. Multiple jobs can run in parallel, and you can even define job dependencies, as the sketch below shows.
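A sketch of both flavors via boto3, assuming two placeholder jobs, extract-job and load-job:

```python
import boto3

glue = boto3.client("glue")

# A scheduled trigger: run the extract job at 02:00 UTC every day.
glue.create_trigger(
    Name="nightly-extract",
    Type="SCHEDULED",
    Schedule="cron(0 2 * * ? *)",
    Actions=[{"JobName": "extract-job"}],
    StartOnCreation=True,
)

# A conditional trigger: run the load job only after the extract job succeeds,
# which is one way to express a job dependency.
glue.create_trigger(
    Name="load-after-extract",
    Type="CONDITIONAL",
    Predicate={"Conditions": [{
        "LogicalOperator": "EQUALS",
        "JobName": "extract-job",
        "State": "SUCCEEDED",
    }]},
    Actions=[{"JobName": "load-job"}],
    StartOnCreation=True,
)
```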
Developer endpoints
Developers can take advantage of developer endpoints to debug AWS Glue as well as develop custom crawlers, writers, and data transformers, which can later be imported into custom libraries.
Integrated data catalog
The Data Catalog is the most powerful component of AWS Glue. It is the central metadata store for all the diverse data sources of your pipeline, and you only have to maintain one Data Catalog per AWS account per region.
Benefits of Using AWS Glue
Strong integrations
AWS Glue has strong integrations with other AWS services. It provides native support for AWS RDS and Aurora databases, and it also supports Amazon Redshift, S3, all common database engines, and databases running on your EC2 instances. AWS Glue even supports NoSQL data sources like DynamoDB.
Built-in orchestration
You do not need to set up or maintain ETL pipeline infrastructure; AWS Glue handles the low-level complexities for you. The crawlers automate schema identification and parsing, freeing you from manually evaluating and parsing different complex data sources, and AWS Glue creates the ETL pipeline code automatically. It also has built-in features for logging, monitoring, alerting, and restarting on failure.
Serverless
AWS Glue is serverless, which means you do not need to worry about maintaining the underlying infrastructure. AWS Glue has built-in scaling capabilities, so it can automatically handle extra load; it handles the setup, configuration, and scaling of the underlying resources for you.
Cost-effective
You only pay for what you use. You will only be charged for the time when your jobs are running. This is especially beneficial if your workload is unpredictable and you are not sure about the infrastructure to provision for your ETL jobs.
Drawbacks of Using AWS Glue
Here are some of the drawbacks of using AWS Glue.
Reliance on Apache Spark
Since AWS Glue jobs run on Apache Spark, the team must have Spark expertise in order to customize the generated ETL jobs. AWS Glue also generates code in Python or Scala, so your engineers must know these programming languages too.
Complexity of some use cases
Apache Spark is not very efficient for use cases like advertising, gaming, and fraud detection, because these jobs need high-cardinality joins, which Spark does not handle well. You can handle these scenarios by adding components, although that will make your ETL pipeline more complex.
Similarly, if you need to combine stream and batch jobs, that is complex to handle in AWS Glue, because AWS Glue requires batch and stream processes to be separate. As a result, you need to maintain extra code to make sure both processes run in a combined manner.
AWS Glue Pricing
For ETL jobs, you are charged only for the time the job is running. AWS charges on an hourly basis depending on the number of DPUs (Data Processing Units) needed to run your job; one DPU is approximately 4 vCPUs with 16 GB of memory. You also pay for storage of the data in the AWS Glue Data Catalog; the first million objects and the first million accesses are free. Crawlers and development endpoints are also charged at an hourly rate that depends on the number of DPUs.
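As a worked example: a job that needs 10 DPUs and runs for 30 minutes consumes 10 × 0.5 = 5 DPU-hours; at an illustrative rate of $0.44 per DPU-hour (the actual rate varies by region), that run costs 5 × $0.44 = $2.20.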
Frequently Asked Questions
How is AWS Glue different from AWS Lake Formation?
Lake Formation’s main focus is governance and data management, whereas AWS Glue’s strength is ETL and data processing. The two complement each other: Lake Formation is primarily a permission-management layer that uses the AWS Glue Data Catalog under the hood.
Can AWS Glue write to DynamoDB?
Yes, AWS Glue can write to DynamoDB. However, the write option is not available in the console; you will need to customize the script to achieve it (a sketch appears after the next question).
Can AWS Glue write to RDS?
Yes, AWS Glue can write to any RDS engine. When using the ETL job wizard, you can select the “JDBC” target option and then create a connection to any RDS-compatible database.
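A minimal sketch of what those script customizations might look like inside a Glue PySpark job; it only runs in the Glue job environment, and the catalog database, table, and connection names are placeholders:

```python
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# A DynamicFrame read from a Glue-catalogued table (placeholder names).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="sales"
)

# Write to DynamoDB -- the customization the console wizard won't generate.
glue_context.write_dynamic_frame_from_options(
    frame=dyf,
    connection_type="dynamodb",
    connection_options={"dynamodb.output.tableName": "sales_summary"},
)

# Write to an RDS database through a catalogued JDBC connection.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=dyf,
    catalog_connection="my-rds-connection",
    connection_options={"dbtable": "sales_summary", "database": "reporting"},
)
```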
Is AWS Glue in real-time?
AWS Glue can process data from Amazon Kinesis Data Streams in near real-time using micro-batches. For large data sets there might be some delay. It can process petabytes of data both in batches and as streams.
Does AWS Glue Auto Scale?
AWS Glue provides autoscaling starting from version 3.0. It automatically adds or removes workers based on the workload.
Where is AWS Glue Data Catalog Stored?
The AWS Glue Data Catalog is a drop-in replacement for the Hive metastore. The data is most probably stored in a MySQL database, but this is not confirmed, as there is no official information from AWS about it.
How Fast is AWS Glue?
AWS Glue 3.0 improved a lot in terms of speed: it is up to 2.4 times faster than version 2.0, because it uses vectorized readers and micro-parallel SIMD CPU instructions for faster data parsing, tokenization, and indexing.
Is AWS Glue Expensive?
No, AWS Glue is not particularly expensive. Because it is based on a serverless architecture, you are charged only when it is actually used, and there is no permanent infrastructure cost.
Is AWS Glue a Database?
No. AWS Glue is a fully managed cloud service from Amazon through which you can prepare data for analysis through an automated ETL process.
Is AWS Glue difficult to learn?
AWS Glue is not really difficult to learn, because it provides a GUI through which you can easily manage authoring, running, and monitoring ETL jobs.
What is The Difference Between AWS Glue and EMR?
AWS Glue and EMR are both AWS solutions for ETL processing. EMR is slightly faster and cheaper, especially if you already have the required infrastructure available. However, if you want a serverless solution, or you expect your workload to be inconsistent, then AWS Glue is the better option.
Top 100 AWS Certified Cloud Practitioner Exam Preparation Questions and Answers Dumps


Welcome to the Top 100 AWS Certified Cloud Practitioner Exam Preparation Questions and Answers Dumps :
Table of Content:
Top 100 Questions and Answers Dumps,
Courses, Labs and Training Materials,
Jobs,
AWS Cloud Support Engineer Job Interview Prep,
Top 20 AWS Training and Certification Q&A,
Latest Products & Services at AWS RE:INVENT


The average salary for an AWS Certified Cloud Practitioner is $131,465/year.
What is the AWS Certified Cloud Practitioner Exam?
The AWS Certified Cloud Practitioner Exam (CLF-C02) is an introduction to AWS services, and its intention is to examine the candidate’s ability to define what the AWS cloud is and its global infrastructure. It provides an overview of AWS core services, security aspects, pricing and support services. The main objective is to provide an overall understanding of the Amazon Web Services Cloud platform, covering the basics of AWS and cloud computing: the services, use cases and benefits [Get AWS CCP Practice Exam PDF Dumps here]
2023 AWS CCP CLF-C02 Practice Exam Course – Top 250+ Questions and Detailed Answers – Success Guaranteed – Save 50% with this link

AWS CCP CLF-C02 on Android – AWS CCP CLF-C02 on iOS – AWS CCP CLF-C02 on Windows 10/11
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
AWS Certified Cloud Practitioner Exam Prep (CLF-C02) Questions and Answers
AWS Certified Cloud Practitioner Exam Certification Prep Quiz App
Download AWS Cloud Practitioner Exam Prep Pro App (No Ads, Full Version with Answers) for:
AWS CCP CLF-C02 on Android – AWS CCP CLF-C02 on iOS – AWS CCP CLF-C02 on Windows 10/11
Below we are providing you with:
- aws cloud practitioner exam questions
- aws cloud practitioner sample questions
- aws cloud practitioner exam dumps
- aws cloud practitioner practice questions and answers
- aws cloud practitioner practice exam questions and references
Q1: For auditing purposes, your company now wants to monitor all API activity for all regions in your AWS environment. What can you use to fulfill this new requirement?
- A. For each region, enable CloudTrail and send all logs to a bucket in each region.
- B. Enable CloudTrail for all regions.
- C. Ensure one CloudTrail is enabled for all regions.
- D. Use AWS Config to enable the trail for all regions.
Answer:
Q2: What is the best solution to provide secure access to an S3 bucket not using the internet?
- A. Use a VPN connection.
- B. Use an Internet Gateway.
- C. Use a VPC Endpoint to access S3.
- D. Use a NAT Gateway.
Answer:
Q3: In the AWS Shared Responsibility Model, which of the following are the responsibility of AWS?
- A. Securing Edge Locations
- B. Encrypting data
- C. Password policies
- D. Decommissioning data
Answer:
Q4: You have EC2 instances running at 90% utilization and you expect this to continue for at least a year. What type of EC2 instance would you choose to ensure your costs stay at a minimum?
- A. Dedicated host instances
- B. On-demand instances
- C. Spot instances
- D. Reserved instances
Answer:
Q5: What tool would you use to get an estimated monthly cost for your environment?
- A. TCO Calculator
- B. Simple Monthly Calculator
- C. Cost Explorer
- D. Consolidated Billing
Answer:
Q6: How do you make sure your organization does not exceed its monthly budget?
- A. Sign up for the free alert under Billing preferences in the AWS Management Console.
- B. Set a schedule to regularly review the Billing and Cost Management dashboard each month.
- C. Create an email alert in AWS Budgets.
- D. In CloudWatch, create an alarm that triggers each time the limit is exceeded.
Answer:
Q7: An Edge Location is a specialized AWS data center that works with which services?
- A. Lambda
- B. CloudWatch
- C. CloudFront
- D. Route 53
Answer:
Q8: What is the preferred method of linking 2 AWS accounts?
- A. AWS Organizations
- B. Cost Explorer
- C. VPC Peering
- D. Consolidated billing
Answer:
Q9: Which of the following services is most useful when a Disaster Recovery method is triggered in AWS?
- A. Amazon Route 53
- B. Amazon SNS
- C. Amazon SQS
- D. Amazon Inspector
Answer:
Q10: Which of the following disaster recovery deployment mechanisms has the highest downtime?
- A. Pilot light
- B. Warm standby
- C. Multi Site
- D. Backup and Restore
Answer:
Q11: Your company is planning to host resources in the AWS Cloud. They want to use services which can be used to decouple resources hosted on the cloud. Which of the following services can help fulfil this requirement?
- A. AWS EBS Volumes
- B. AWS EBS Snapshots
- C. AWS Glacier
- D. AWS SQS
Answer:
Q12: If you have a set of frequently accessed files that are used on a daily basis, what S3 storage class should you store them in?
- A. Infrequent Access
- B. Fast Access
- C. Reduced Redundancy
- D. Standard
Answer:

Q13: What is the availability and durability rating of S3 Standard Storage Class?
Choose the correct answer:
- A. 99.999999999% Durability and 99.99% Availability
- B. 99.999999999% Availability and 99.90% Durability
- C. 99.999999999% Durability and 99.00% Availability
- D. 99.999999999% Availability and 99.99% Durability
Answer:
Q14: Which AWS database is primarily used to analyze data using standard SQL formatting, with compatibility for your existing business intelligence tools?
- A. Redshift
- B. RDS
- C. DynamoDB
- D. ElastiCache
Answer:
Q15: What are the benefits of DynamoDB?
Choose the 3 correct answers:
- A. Single-digit millisecond latency.
- B. Supports multiple known NoSQL database engines like MariaDB and Oracle NoSQL.
- C. Supports both document and key-value store data models.
- D. Automatic scaling of throughput capacity.
Answer:
Q16: Which of the following are the benefits of AWS Organizations?
Choose the 2 correct answers:
- A. Analyze cost before migrating to AWS.
- B. Centrally manage access polices across multiple AWS accounts.
- C. Automate AWS account creation and management.
- D. Provide technical help (by AWS) for issues in your AWS account.
Answer:
Q17: There is a requirement to host a set of servers in the Cloud for a short period of 3 months. Which of the following types of instances should be chosen to be cost-effective?
- A. Spot Instances
- B. On-Demand
- C. No Upfront costs Reserved
- D. Partial Upfront costs Reserved
Answer:
Q18: Which of the following is not a disaster recovery deployment technique?
- A. Pilot light
- B. Warm standby
- C. Single Site
- D. Multi-Site
Answer:
Q19: Which of the following are attributes of the cost of using the Simple Storage Service? Choose 2 answers from the options given below.
- A. The storage class used for the objects stored.
- B. Number of S3 buckets.
- C. The total size in gigabytes of all objects stored.
- D. Using encryption in S3
Answer:
Q20: What endpoints are possible to send messages to with Simple Notification Service?
Choose the 3 correct answers:
- A. SQS
- B. SMS
- C. FTP
- D. Lambda
Answer:
Q21: What service helps you to aggregate logs from your EC2 instance? Choose one answer from the options below:
- A. SQS
- B. S3
- C. CloudTrail
- D. CloudWatch Logs
Answer:
Q22: A company is deploying a new two-tier web application in AWS. The company wants to store their most frequently used data so that the response time for the application is improved. Which AWS service provides the solution for the company’s requirements?
- A. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone
- B. Amazon RDS for MySQL with Multi-AZ
- C. Amazon ElastiCache
- D. Amazon DynamoDB
Answer:
Q23: You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 Instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost-effective way. Which of the following will meet your requirements?
- A. Spot Instances
- B. Reserved Instances
- C. Dedicated Instances
- D. On-Demand Instances
Answer:
Q24: Which of the following features is associated with a subnet in a VPC to protect against incoming traffic requests?
- A. AWS Inspector
- B. Subnet Groups
- C. Security Groups
- D. NACL
Answer:
Q25: A company is deploying a two-tier, highly available web application to AWS. Which service provides durable storage for static content while utilizing lower overall CPU resources for the web tier?
- A. Amazon EBS volume.
- B. Amazon S3
- C. Amazon EC2 instance store
- D. Amazon RDS instance
Answer:
Q26: What are characteristics of Amazon S3?
Choose 2 answers from the options given below.
- A. S3 allows you to store objects of virtually unlimited size.
- B. S3 allows you to store unlimited amounts of data.
- C. S3 should be used to host relational database.
- D. Objects are directly accessible via a URL.
Answer:
Q26: When working on the costing for on-demand EC2 instances, which of the following are attributes that determine the cost of the EC2 instance? Choose 3 answers from the options given below.
- A. Instance Type
- B. AMI Type
- C. Region
- D. Edge location
Answer:
Q27: You have a mission-critical application which must be globally available at all times. If this is the case, which of the below deployment mechanisms would you employ?
- A. Deployment to multiple edge locations
- B. Deployment to multiple Availability Zones
- C. Deployment to multiple Data Centers
- D. Deployment to multiple Regions
Answer:
Q28: Which of the following are correct principles when designing cloud-based systems? Choose 2 answers from the options below.
- A. Build Tightly-coupled components
- B. Build loosely-coupled components
- C. Assume everything will fail
- D. Use as many services as possible
Answer:
Q29: You have 2 accounts in your AWS organization: one for Dev and the other for QA. All are part of consolidated billing. The master account has purchased 3 reserved instances. The Dev department is currently using 2 reserved instances. The QA team is planning on using 3 instances of the same instance type. What is the pricing tier of the instances that can be used by the QA team?
- A. No Reserved and 3 on-demand
- B. One Reserved and 2 on-demand
- C. Two Reserved and 1 on-demand
- D. Three Reserved and no on-demand
Answer:
Q30: Which one of the following features is normally present in all AWS Support plans?
- A. 24/7 access to Customer Service
- B. Access to all features in the Trusted Advisor
- C. A technical Account Manager
- D. A dedicated support person
Answer:
Q31: Which of the following storage mechanisms can be used to effectively store messages for use across distributed systems?
- A. Amazon Glacier
- B. Amazon EBS Volumes
- C. Amazon EBS Snapshots
- D. Amazon SQS
Answer:
Q32: You are exploring the services AWS has on offer. You have a large number of data sets that need to be processed. Which of the following services can help fulfil this requirement?
- A. EMR
- B. S3
- C. Glacier
- D. Storage Gateway
Answer:
Q33: Which of the following services allows you to analyze EC2 Instances against pre-defined security templates to check for vulnerabilities?
- A. AWS Trusted Advisor
- B. AWS Inspector
- C. AWS WAF
- D. AWS Shield
Answer:
Q34: Your company is planning to offload some of its batch processing workloads to AWS. These jobs can be interrupted and resumed at any time. Which of the following instance types would be the most cost-effective for this purpose?
- A. On-Demand
- B. Spot
- C. Full Upfront Reserved
- D. Partial Upfront Reserved
Answer:
Q35: Which of the following is not a category recommendation given by the AWS Trusted Advisor?
- A. Security
- B. High Availability
- C. Performance
- D. Fault tolerance
Answer:
Q36: Which of the below cannot be used to get data onto Amazon Glacier?
- A. AWS Glacier API
- B. AWS Console
- C. AWS Glacier SDK
- D. AWS S3 Lifecycle policies
Answer:
Q37: Which of the following from AWS can be used to transfer petabytes of data from on-premises locations to the AWS Cloud?
- A. AWS Import/Export
- B. AWS EC2
- C. AWS Snowball
- D. AWS Transfer
Answer:
Q39: Your company wants to move an existing Oracle database to the AWS Cloud. Which of the following services can help facilitate this move?
- A. AWS Database Migration Service
- B. AWS VM Migration Service
- C. AWS Inspector
- D. AWS Trusted Advisor
Answer:
Q40: Which of the following features of AWS RDS allows for offloading reads of the database?
- A. Cross-region replication
- B. Creating Read Replicas
- C. Using snapshots
- D. Using Multi-AZ feature
Answer:
Q41: Which of the following does AWS perform on your behalf for EBS volumes to make them less prone to failure?
- A. Replication of the volume across Availability Zones
- B. Replication of the volume in the same Availability Zone
- C. Replication of the volume across Regions
- D. Replication of the volume across Edge locations
Answer:
Q42: Your company is planning to host a large e-commerce application on the AWS Cloud, and one of their major concerns is Internet attacks such as DDoS attacks.
Which of the following services can help mitigate this concern? Choose 2 answers from the options given below.
- A. CloudFront
- B. AWS Shield
- C. AWS EC2
- D. AWS Config
Answer:
Q43: Which of the following are 2 ways that AWS allows you to link accounts?
- A. Consolidated billing
- B. AWS Organizations
- C. Cost Explorer
- D. IAM
Answer:
Q44: Which of the following helps with DDoS protection? Choose 2 answers from the options given below.
- A. CloudFront
- B. AWS Shield
- C. AWS EC2
- D. AWS Config
Answer:
Q45: Which of the following can be used to call AWS services from programming languages?
- A. AWS SDK
- B. AWS Console
- C. AWS CLI
- D. AWS IAM
Answer:
Q46: A company wants to host a self-managed database in AWS. How would you ideally implement this solution?
- A. Using the AWS DynamoDB service
- B. Using the AWS RDS service
- C. Hosting a database on an EC2 Instance
- D. Using the Amazon Aurora service
Answer:
Q47: When creating security groups, which of the following is a responsibility of the customer? Choose 2 answers from the options given below.
- A. Giving a name and description for the security group
- B. Defining the rules as per the customer requirements.
- C. Ensure the rules are applied immediately
- D. Ensure the security groups are linked to the Elastic Network interface
Answer:
Q48: There is a requirement to host a database server for a minimum period of one year. Which of the following would result in the least cost?
- A. Spot Instances
- B. On-Demand
- C. No Upfront costs Reserved
- D. Partial Upfront costs Reserved
Answer:
Q49: Which of the below can be used to import data into Amazon Glacier?
Choose 3 answers from the options given below:
- A. AWS Glacier API
- B. AWS Console
- C. AWS Glacier SDK
- D. AWS S3 Lifecycle policies
Answer:
Q50: Which of the following can be used to secure EC2 Instances hosted in AWS? Choose 2 answers.
- A. Usage of Security Groups
- B. Usage of AMI’s
- C. Usage of Network Access Control Lists
- D. Usage of the Internet gateway
Answer:
Q51: Which of the following can be used to host virtual servers on AWS?
- A. AWS IAM
- B. AWS Server
- C. AWS EC2
- D. AWS Regions
Answer:
Q52: You plan to deploy an application on AWS. This application needs to be PCI Compliant. Which of the below steps are needed to ensure the compliance? Choose 2 answers from the below list:
- A. Choose AWS services which are PCI Compliant
- B. Ensure the right steps are taken during application development for PCI Compliance
- C. Ensure the AWS services are made PCI Compliant
- D. Do an audit after the deployment of the application for PCI Compliance.
Answer:
Q54: The Trusted Advisor service provides insight regarding which four categories of an AWS account?
- A. Security, fault tolerance, high availability, performance and Service Limits
- B. Security, access control, high availability, performance and Service Limits
- C. Performance, cost optimization, Security, fault tolerance and Service Limits
- D. Performance, cost optimization, Access Control, Connectivity, and Service Limits
Answer:
Q55: As per the AWS Acceptable Use Policy, penetration testing of EC2 instances
- A. May be performed by AWS, and will be performed by AWS upon customer request
- B. May be performed by AWS, and is periodically performed by AWS
- C. Is expressly prohibited under all circumstances
- D. May be performed by the customer on their own instances with prior authorization from AWS
- E. May be performed by the customer on their own instances, only if performed from EC2 instances
Answer:
Q56: What is the AWS feature that enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket?
- A. File Transfer
- B. HTTP Transfer
- C. Transfer Acceleration
- D. S3 Acceleration
Answer:
Q56: What best describes an AWS region?
Choose the correct answer:
- A. The physical networking connections between Availability Zones.
- B. A specific location where an AWS data center is located.
- C. A collection of DNS servers.
- D. An isolated collection of AWS Availability Zones, of which there are many placed all around the world.
Answer:
Q57: Which of the following is a factor when calculating Total Cost of Ownership (TCO) for the AWS Cloud?
- A. The number of servers migrated to AWS
- B. The number of users migrated to AWS
- C. The number of passwords migrated to AWS
- D. The number of keys migrated to AWS
Answer:
Q58: Which AWS Services can be used to store files? Choose 2 answers from the options given below:
- A. Amazon CloudWatch
- B. Amazon Simple Storage Service (Amazon S3)
- C. Amazon Elastic Block Store (Amazon EBS)
- D. AWS Config
- E. Amazon Athena
Answer:
Q59: What best describes Amazon Web Services (AWS)?
Choose the correct answer:
- A. AWS is the cloud.
- B. AWS only provides compute and storage services.
- C. AWS is a cloud services provider.
- D. None of the above.
Answer:
Q60: Which AWS service can be used as a global content delivery network (CDN) service?
- A. Amazon SES
- B. Amazon CloudTrail
- C. Amazon CloudFront
- D. Amazon S3
Answer:
Q61: What best describes the concept of fault tolerance?
Choose the correct answer:
- A. The ability for a system to withstand a certain amount of failure and still remain functional.
- B. The ability for a system to grow in size, capacity, and/or scope.
- C. The ability for a system to be accessible when you attempt to access it.
- D. The ability for a system to grow and shrink based on demand.
Answer:
Q62: The firm you work for is considering migrating to AWS. They are concerned about cost and the initial investment needed. Which of the following features of AWS pricing helps lower the initial investment amount needed?
Choose 2 answers from the options given below:
- A. The ability to choose the lowest cost vendor.
- B. The ability to pay as you go
- C. No upfront costs
- D. Discounts for upfront payments
Answer:
Q63: What best describes the concept of elasticity?
Choose the correct answer:
- A. The ability for a system to grow in size, capacity, and/or scope.
- B. The ability for a system to grow and shrink based on demand.
- C. The ability for a system to withstand a certain amount of failure and still remain functional.
- D. The ability for a system to be accessible when you attempt to access it.
Answer:
Q64: Your company has started using AWS. Your IT Security team is concerned with the security of hosting resources in the Cloud. Which AWS service provides security optimization recommendations that could help the IT Security team secure resources using AWS?
- A. AWS API Gateway
- B. Reserved Instances
- C. AWS Trusted Advisor
- D. AWS Spot Instances
Answer:
Q65: What is the relationship between AWS global infrastructure and the concept of high availability?
Choose the correct answer:
- A. AWS is centrally located in one location and is subject to widespread outages if something happens at that one location.
- B. AWS regions and Availability Zones allow for redundant architecture to be placed in isolated parts of the world.
- C. Each AWS region handles different AWS services, and you must use all regions to fully use AWS.
- D. None of the above
Answer:
Q66: You are hosting a number of EC2 Instances on AWS. You are looking to monitor CPU Utilization on the Instance. Which service would you use to collect and track performance metrics for AWS services?
- A. Amazon CloudFront
- B. Amazon CloudSearch
- C. Amazon CloudWatch
- D. AWS Managed Services
Answer:
Q67: Which of the following support plans give access to all the checks in the Trusted Advisor service?
Choose 2 answers from the options given below:
- A. Basic
- B. Business
- C. Enterprise
- D. None
Answer:
Q68: Which of the following in AWS maps to a separate geographic location?
- A. AWS Region
- B. AWS Data Centers
- C. AWS Availability Zone
Answer:
Q69: What best describes the concept of scalability?
Choose the correct answer:
- A. The ability for a system to grow and shrink based on demand.
- B. The ability for a system to grow in size, capacity, and/or scope.
- C. The ability for a system to be accessible when you attempt to access it.
- D. The ability for a system to withstand a certain amount of failure and still remain functional.
Answer:
Q70: If you wanted to monitor all events in your AWS account, which of the below services would you use?
- A. AWS CloudWatch
- B. AWS CloudWatch logs
- C. AWS Config
- D. AWS CloudTrail
Answer:
Q71: What are the four primary benefits of using the cloud/AWS?
Choose the correct answer:
- A. Fault tolerance, scalability, elasticity, and high availability.
- B. Elasticity, scalability, easy access, limited storage.
- C. Fault tolerance, scalability, sometimes available, unlimited storage
- D. Unlimited storage, limited compute capacity, fault tolerance, and high availability.
Answer:
Q72: What best describes a simplified definition of the “cloud”?
Choose the correct answer:
- A. All the computers in your local home network.
- B. Your internet service provider
- C. A computer located somewhere else that you are utilizing in some capacity.
- D. An on-premise data center that your company owns.
Answer:
Q73: Your development team is planning to host a development environment on the cloud. This consists of EC2 and RDS instances. This environment will probably only be required for 2 months.
Which types of instances would you use for this purpose?
- A. On-Demand
- B. Spot
- C. Reserved
- D. Dedicated
Answer:
Q74: Which of the following can be used to secure EC2 Instances?
- A. Security Groups
- B. EC2 Lists
- C. AWS Configs
- D. AWS CloudWatch
Answer: A (Security Groups)
Q75: What is the purpose of a DNS server?
Choose the correct answer:
- A. To act as an internet search engine.
- B. To protect you from hacking attacks.
- C. To convert common language domain names to IP addresses.
- D. To serve web application content.
Answer: C
Q76:What best describes the concept of high availability?
Choose the correct answer:
- A. The ability for a system to grow in size, capacity, and/or scope.
- B. The ability for a system to withstand a certain amount of failure and still remain functional.
- C. The ability for a system to grow and shrink based on demand.
- D. The ability for a system to be accessible when you attempt to access it.
Answer: D
Q77: What is the major difference between AWS’s RDS and DynamoDB database services?
Choose the correct answer:
- A. RDS offers NoSQL database options, and DynamoDB offers SQL database options.
- B. RDS offers one SQL database option, and DynamoDB offers many NoSQL database options.
- C. RDS offers SQL database options, and DynamoDB offers a NoSQL database option.
- D. None of the above
Answer: C
Q78: What are two open source in-memory engines supported by ElastiCache?
Choose the 2 correct answers:
- A. CacheIt
- B. Aurora
- C. Memcached
- D. Redis
Answer: C and D (Memcached and Redis)
Q79: What AWS database service is used for data warehousing of petabytes of data?
Choose the correct answer:
- A. RDS
- B. Elasticache
- C. Redshift
- D. DynamoDB
Answer: C (Redshift)
Q80: Which AWS service uses a combination of publishers and subscribers?
Choose the correct answer:
- A. Lambda
- B. RDS
- C. EC2
- D. SNS
Answer: D (SNS)
Q81: What SQL database engine options are available in RDS?
Choose the 3 correct answers:
- A. MySQL
- B. MongoDB
- C. PostgreSQL
- D. MariaDB
Answer: A, C, and D (MySQL, PostgreSQL, and MariaDB)
Q81: What is the name of AWS’s RDS SQL database engine?
Choose the correct answer:
- A. Lightsail
- B. Aurora
- C. MySQL
- D. SNS
Answer: B (Aurora)
Q82: Under what circumstances would you choose to use the AWS service CloudTrail?
Choose the correct answer:
- A. When you want to log what actions various IAM users are taking in your AWS account.
- B. When you want a serverless compute platform.
- C. When you want to collect and view resource metrics.
- D. When you want to send SMS notifications based on events that occur in your account.
Answer: A
Q83: If you want to monitor the average CPU usage of your EC2 instances, which AWS service should you use?
Choose the correct answer:
- A. CloudMonitor
- B. CloudTrail
- C. CloudWatch
- D. None of the above
Answer: C (CloudWatch)
Q84: What is AWS’s relational database service?
Choose the correct answer:
- A. ElastiCache
- B. DynamoDB
- C. RDS
- D. Redshift
Answer: C (RDS)
Q85: If you want to have SMS or email notifications sent to various members of your department with status updates on resources in your AWS account, what service should you choose?
Choose the correct answer:
- A. SNS
- B. GetSMS
- C. RDS
- D. STS
Answer: A (SNS)
Q86: Which AWS service can provide a Desktop as a Service (DaaS) solution?
A. EC2
B. AWS Systems Manager
C. Amazon WorkSpaces
D. Elastic Beanstalk
Q87: Your company has recently migrated large amounts of data to the AWS cloud in S3 buckets. But it is necessary to discover and protect the sensitive data in these buckets. Which AWS service can do that?
A. GuardDuty
B. Amazon Macie
C. CloudTrail
D. AWS Inspector
Q88: Your Finance Department has instructed you to save costs wherever possible when using the AWS Cloud. You notice that using reserved EC2 instances on a 1-year contract will save money. What payment method will save the most money?
A. Deferred
B. Partial Upfront
C. All Upfront
D. No Upfront
Q89: A fantasy sports company needs to run an application for the length of a football season (5 months). They will run the application on an EC2 instance and there can be no interruption. Which purchasing option best suits this use case?
A. On-Demand
B. Reserved
C. Dedicated
D. Spot
Q90: Your company is considering migrating its data center to the cloud. What are the advantages of the AWS cloud over an on-premises data center?
A. Replace upfront operational expenses with low variable operational expenses.
B. Maintain physical access to the new data center, but share responsibility with AWS.
C. Replace low variable costs with upfront capital expenses.
D. Replace upfront capital expenses with low variable costs.
Q91: You are leading a pilot program to try the AWS Cloud for one of your applications. You have been instructed to provide an estimate of your AWS bill. Which service will allow you to do this by manually entering your planned resources by service?
A. AWS CloudTrail
B. AWS Cost and Usage Report
C. AWS Pricing Calculator
D. AWS Cost Explorer
Q92: Which AWS service would enable you to view the spending distribution in one of your AWS accounts?
A. AWS Spending Explorer
B. Billing Advisor
C. AWS Organizations
D. AWS Cost Explorer
Q93: You are managing the company’s AWS account. The current support plan is Basic, but you would like to begin using Infrastructure Event Management. What support plan (that already includes Infrastructure Event Management without an additional fee) should you upgrade to?
A. Upgrade to Enterprise plan.
B. Do nothing. It is included in the Basic plan.
C. Upgrade to Developer plan.
D. Upgrade to the Business plan. No other steps are necessary.
Q94: You have decided to use the AWS Cost and Usage Report to track your EC2 Reserved Instance costs. To where can these reports be published?
A. Trusted Advisor
B. An S3 Bucket that you own.
C. CloudWatch
D. An AWS owned S3 Bucket.
Q95: What can we do in AWS to receive the benefits of volume pricing for your multiple AWS accounts?
A. Use consolidated billing in AWS Organizations.
B. Purchase services in bulk from AWS Marketplace.
C. Use AWS Trusted Advisor
D. You will receive volume pricing by default.
Q96: A gaming company is using the AWS Developer Tool Suite to develop, build, and deploy their applications. Which AWS service can be used to trace user requests from end-to-end through the application?
A. AWS X-Ray
B. CloudWatch
C. AWS Inspector
D. CloudTrail
Q97: A company needs to use a Load Balancer which can serve traffic at the TCP, and UDP layers. Additionally, it needs to handle millions of requests per second at very low latencies. Which Load Balancer should they use?
A. TCP Load Balancer
B. Application Load Balancer
C. Classic Load Balancer
D. Network Load Balancer
Q98: Your company is migrating its services to the AWS cloud. The DevOps team has heard about infrastructure as code, and wants to investigate this concept. Which AWS service would they investigate?
A. AWS CloudFormation
B. AWS Lambda
C. CodeCommit
D. Elastic Beanstalk
Q99: You have a MySQL database that you want to migrate to the cloud, and you need it to be significantly faster there. You are looking for a speed increase up to 5 times the current performance. Which AWS offering could you use?
A. Elasticache
B. Amazon Aurora
C. DynamoDB
D. Amazon RDS MySQL
Q100: A developer is trying to programmatically retrieve information from an EC2 instance, such as public keys, IP address, and instance ID. From where can this information be retrieved?
A. Instance metadata
B. Instance Snapshot
C. CloudWatch Logs
D. Instance userdata
Q101: Why is AWS more economical than traditional data centers for applications with varying compute workloads?
A) Amazon EC2 costs are billed on a monthly basis.
B) Users retain full administrative access to their Amazon EC2 instances.
C) Amazon EC2 instances can be launched on demand when needed.
D) Users can permanently run enough instances to handle peak workloads.
Q102: Which AWS service would simplify the migration of a database to AWS?
A) AWS Storage Gateway
B) AWS Database Migration Service (AWS DMS)
C) Amazon EC2
D) Amazon AppStream 2.0
Q103: Which AWS offering enables users to find, buy, and immediately start using software solutions in their AWS environment?
A) AWS Config
B) AWS OpsWorks
C) AWS SDK
D) AWS Marketplace
Q104: Which AWS networking service enables a company to create a virtual network within AWS?
A) AWS Config
B) Amazon Route 53
C) AWS Direct Connect
D) Amazon Virtual Private Cloud (Amazon VPC)
Q105: Which component of the AWS global infrastructure does Amazon CloudFront use to ensure low-latency delivery?
A) AWS Regions
B) Edge locations
C) Availability Zones
D) Virtual Private Cloud (VPC)
Q106: How would a system administrator add an additional layer of login security to a user’s AWS Management Console?
A) Use Amazon Cloud Directory
B) Audit AWS Identity and Access Management (IAM) roles
C) Enable multi-factor authentication
D) Enable AWS CloudTrail
Q107: Which service can identify the user that made the API call when an Amazon EC2 instance is terminated?
A) AWS Trusted Advisor
B) AWS CloudTrail
C) AWS X-Ray
D) AWS Identity and Access Management (AWS IAM)
Q108: Which service would be used to send alerts based on Amazon CloudWatch alarms?
A) Amazon Simple Notification Service (Amazon SNS)
B) AWS CloudTrail
C) AWS Trusted Advisor
D) Amazon Route 53
Q109: Where can a user find information about prohibited actions on the AWS infrastructure?
A) AWS Trusted Advisor
B) AWS Identity and Access Management (IAM)
C) AWS Billing Console
D) AWS Acceptable Use Policy
Q110: Which of the following is an AWS responsibility under the AWS shared responsibility model?
A) Configuring third-party applications
B) Maintaining physical hardware
C) Securing application access and data
D) Managing guest operating systems
Q111: Which recommendations are included in the AWS Trusted Advisor checks? (Select TWO.)
AWS CCP Exam Topics:
The AWS Cloud Practitioner exam is broken down into 4 domains:
- Cloud Concepts
- Security and Compliance
- Technology
- Billing and Pricing.
AWS Certified Cloud Practitioner Exam Whitepapers:
AWS has provided whitepapers to help you understand the technical concepts. Below are the recommended whitepapers.
- Overview of Amazon Web Services
- Architecting for the Cloud: AWS Best Practices
- How AWS Pricing works whitepaper.
- The Total Cost of (Non) Ownership of Web Application in the Cloud
- Compare AWS Support Plans
Online Training and Labs for AWS Cloud Certified Practitioner Exam
AWS Cloud Practitioners Jobs
AWS Certified Cloud Practitioner Exam info and details, How To:
The AWS Certified Cloud Practitioner Exam is a multiple choice, multiple answer exam. Here is the Exam Overview:
- Certification Name: AWS Certified Cloud Practitioner.
- Prerequisites for the Exam: None.
- Exam Pattern: Multiple Choice Questions
- Number of Questions: 65
- Duration: 90 mins
- Exam fees: US $100
- Exam Guide on AWS Website
- Available languages for tests: English, Japanese, Korean, Simplified Chinese
- Read AWS whitepapers
- Register for certification account here.
- Prepare for Certification Here
Additional Information for reference
Below are some useful reference links that would help you to learn about AWS Practitioner Exam.
- AWS Certified Cloud Practitioner
- Certification FAQs
- AWS Cloud Practitioner Certification Exam on Quora
Other Relevant and Recommended AWS Certifications
AWS Certified Cloud Practitioner
AWS Certified Solutions Architect – Associate
AWS Certified Solution Architect Exam Prep App: Free
AWS Certified Developer – Associate
AWS Certified SysOps Administrator – Associate
AWS Certified Solutions Architect – Professional
AWS Certified DevOps Engineer – Professional
AWS Certified Big Data Specialty
AWS Certified Advanced Networking.
AWS Certified Security – Specialty
Other AWS Certification Exams Questions and Answers Dumps:
Top 20 AWS Certified Associate SysOps Administrator Practice Quiz – Questions and Answers Dumps
Big Data and Data Analytics 101 – Top 100 AWS Certified Data Analytics Specialty Certification Questions and Answers Dumps
CyberSecurity 101 and Top 25 AWS Certified Security Specialty Questions and Answers Dumps
Networking 101 and Top 20 AWS Certified Advanced Networking Specialty Questions and Answers Dumps
Other AWS Facts and Summaries and Questions/Answers Dump
- AWS S3 facts and summaries and Q&A Dump
- AWS DynamoDB facts and summaries and Questions and Answers Dump
- AWS EC2 facts and summaries and Questions and Answers Dump
- AWS Serverless facts and summaries and Questions and Answers Dump
- AWS Developer and Deployment Theory facts and summaries and Questions and Answers Dump
- AWS IAM facts and summaries and Questions and Answers Dump
- AWS Lambda facts and summaries and Questions and Answers Dump
- AWS SQS facts and summaries and Questions and Answers Dump
- AWS RDS facts and summaries and Questions and Answers Dump
- AWS ECS facts and summaries and Questions and Answers Dump
- AWS CloudWatch facts and summaries and Questions and Answers Dump
- AWS SES facts and summaries and Questions and Answers Dump
- AWS EBS facts and summaries and Questions and Answers Dump
- AWS ELB facts and summaries and Questions and Answers Dump
- AWS Autoscaling facts and summaries and Questions and Answers Dump
- AWS VPC facts and summaries and Questions and Answers Dump
- AWS KMS facts and summaries and Questions and Answers Dump
- AWS Elastic Beanstalk facts and summaries and Questions and Answers Dump
- AWS CodeBuild facts and summaries and Questions and Answers Dump
- AWS CodeDeploy facts and summaries and Questions and Answers Dump
- AWS CodePipeline facts and summaries and Questions and Answers Dump
- Pros and Cons of Cloud Computing
- Cloud Customer Insurance – Cloud Provider Insurance – Cyber Insurance
Below is a listing of AWS certification exam quiz apps for all platforms:
AWS CCP CLF-C01 on Android – AWS CCP CLF-C01 on iOS – AWS CCP CLF-C01 on Windows 10/11
AWS Certified Cloud practitioner Exam Prep FREE version: CCP, CLF-C01
Online Training and Labs for AWS Certified Solution Architect Associate Exam
AWS Certified Solution Architect Associate Jobs
AWS Certification and Training Apps for all platforms:
AWS Cloud practitioner FREE version:
AWS Certified Cloud Practitioner for the web: PWA
AWS Certified Cloud practitioner Exam Prep App for iOS
AWS Certified Cloud practitioner Exam Prep App for Microsoft/Windows10
AWS Certified Cloud practitioner Exam Prep App for Android (Google Play Store)
AWS Certified Cloud practitioner Exam Prep App for Android (Amazon App Store)
AWS Certified Cloud practitioner Exam Prep App for Android (Huawei App Gallery)
AWS Solution Architect FREE version:
AWS Certified Solution Architect Associate Exam Prep App for iOS:
Solution Architect Associate for Android Google Play
AWS Certified Solution Architect Associate Exam Prep App: PWA
AWS Certified Solution Architect Associate Exam Prep App for Amazon android
AWS Certified Cloud practitioner Exam Prep App for Microsoft/Windows10
AWS Certified Cloud practitioner Exam Prep App for Huawei App Gallery
AWS Cloud Practitioner PRO Versions:
AWS Certified Cloud practitioner PRO Exam Prep App for iOS
AWS Certified Cloud Practitioner PRO Associate Exam Prep App for android google
AWS Certified Cloud practitioner Exam Prep App for Amazon android
AWS Certified Cloud practitioner Exam Prep App for Windows 10
AWS Certified Cloud practitioner Exam Prep PRO App for Android (Huawei App Gallery)
AWS Solution Architect PRO
AWS Certified Solution Architect Associate PRO versions for iOS
AWS Certified Solution Architect Associate PRO Exam Prep App for Android google
AWS Certified Solution Architect Associate PRO Exam Prep App for Windows10
AWS Certified Solution Architect Associate PRO Exam Prep App for Amazon android
Huawei App Gallery: Coming soon
AWS Certified Developer Associates Free version:
AWS Certified Developer Associates for Android (Google Play)
AWS Certified Developer Associates Web/PWA
AWS Certified Developer Associates for iOs
AWS Certified Developer Associates for Android (Huawei App Gallery)
AWS Certified Developer Associates for windows 10 (Microsoft App store)
Amazon App Store: Coming soon
AWS Developer Associates PRO version
PRO version with mock exam for android (Google Play)
PRO version with mock exam ios
AWS Certified Developer Associates PRO for Windows (Microsoft App Store)
AWS Certified Developer Associates PRO for Android (Huawei App Gallery): Coming soon
Latest Cloud AWS Cloud Training Questions and Answers from around the Web:
Jon Bonso vs Stephane Maarek CCP Practice Exam Differences
Tutorialsdojo.com is the best in the market, IMO.
They have a long-standing reputation for quality.
I’ve used them, I’ve recommended them to friends and family, and I recommend them to students of my AWS courses also.
And last but not least, the Djamgatech apps for iOS and Android.
Practice on the web directly here via the AWS Cloud Practitioner Exam Prep App
I would also recommend checking: Exam Digest
What is the difference between Amazon EC2 Savings Plans and Spot Instances?
Amazon EC2 Savings Plans are ideal for workloads that involve a consistent amount of compute usage over a 1-year or 3-year term.
With Amazon EC2 Savings Plans, you can reduce your compute costs by up to 72% over On-Demand costs.
Spot Instances are ideal for workloads with flexible start and end times, or that can withstand interruptions. With Spot Instances, you can reduce your compute costs by up to 90% over On-Demand costs.
Unlike Amazon EC2 Savings Plans, Spot Instances do not require contracts or a commitment to a consistent amount of compute usage.
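As a quick worked example using the figures above: for an instance with an On-Demand price of $0.10 per hour, a Savings Plan at the maximum 72% discount would bring the cost down to about $0.028 per hour, while a Spot Instance at the maximum 90% discount would cost about $0.01 per hour, with the trade-off that the Spot Instance can be interrupted.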
Amazon EBS vs Amazon EFS
An Amazon EBS volume stores data in a single Availability Zone.
To attach an Amazon EC2 instance to an EBS volume, both the Amazon EC2 instance and the EBS volume must reside within the same Availability Zone.
Amazon EFS is a regional service. It stores data in and across multiple Availability Zones.
The duplicate storage enables you to access data concurrently from all the Availability Zones in the Region where a file system is located. Additionally, on-premises servers can access Amazon EFS using AWS Direct Connect.
Which cloud deployment model allows you to connect public cloud resources to on-premises infrastructure?
Applications made available through hybrid deployments connect cloud resources to on-premises infrastructure and applications. For example, you might have an application that runs in the cloud but accesses data stored in your on-premises data center.
Which benefit of cloud computing helps you innovate and build faster?
Agility: The cloud gives you quick access to resources and services that help you build and deploy your applications faster.
Which developer tool allows you to write code within your web browser?
Cloud9 is an integrated development environment (IDE) that allows you to write code within your web browser.
Which method of accessing an EC2 instance requires both a private key and a public key?
SSH allows you to access an EC2 instance from your local laptop using a key pair, which consists of a private key and a public key.
Which service allows you to track the name of the user making changes in your AWS account?
CloudTrail tracks user activity and API calls in your account, which includes identity information (the user’s name, source IP address, etc.) about the API caller.
Which analytics service allows you to query data in Amazon S3 using Structured Query Language (SQL)?
Athena is a query service that makes it easy to analyze data in Amazon S3 using SQL.
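As an illustrative sketch (not from the original article), a query like this could be started with the AWS SDK for Java v1; the database name, SQL string, and S3 output location below are placeholders:

import com.amazonaws.services.athena.AmazonAthena;
import com.amazonaws.services.athena.AmazonAthenaClientBuilder;
import com.amazonaws.services.athena.model.QueryExecutionContext;
import com.amazonaws.services.athena.model.ResultConfiguration;
import com.amazonaws.services.athena.model.StartQueryExecutionRequest;
import com.amazonaws.services.athena.model.StartQueryExecutionResult;

public class AthenaQueryExample {
    public static void main(String[] args) {
        AmazonAthena athena = AmazonAthenaClientBuilder.defaultClient();

        StartQueryExecutionRequest request = new StartQueryExecutionRequest()
                .withQueryString("SELECT * FROM my_table LIMIT 10")   // placeholder SQL
                .withQueryExecutionContext(
                        new QueryExecutionContext().withDatabase("my_database")) // placeholder database
                .withResultConfiguration(
                        new ResultConfiguration().withOutputLocation("s3://my-bucket/athena-results/")); // placeholder output location

        // Athena runs the query asynchronously; results land in the S3 output location.
        StartQueryExecutionResult result = athena.startQueryExecution(request);
        System.out.println("Query execution ID: " + result.getQueryExecutionId());
    }
}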
Which machine learning service helps you build, train, and deploy models quickly?
SageMaker helps you build, train, and deploy machine learning models quickly.
Which EC2 storage mechanism is recommended when running a database on an EC2 instance?
EBS is a storage device you can attach to your instances and is a recommended storage option when you run databases on an instance.
Which storage service is a scalable file system that only works with Linux-based workloads?
EFS is an elastic file system for Linux-based workloads.

Which AWS service provides a secure and resizable compute platform with choice of processor, storage, networking, operating system, and purchase model?
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. Amazon EC2 offers the broadest and deepest compute platform with choice of processor, storage, networking, operating system, and purchase model. Amazon EC2.
Which services allow you to build hybrid environments by connecting on-premises infrastructure to AWS?
Site-to-site VPN allows you to establish a secure connection between your on-premises equipment and the VPCs in your AWS account.
Direct Connect allows you to establish a dedicated network connection between your on-premises network and AWS.
What service could you recommend to a developer to automate the software release process?
CodePipeline is a developer tool that allows you to continuously automate the software release process.
Which service allows you to practice infrastructure as code by provisioning your AWS resources via scripted templates?
CloudFormation allows you to provision your AWS resources via scripted templates.
Which machine learning service allows you to add image analysis to your applications?
Rekognition is a service that makes it easy to add image analysis to your applications.
Which services allow you to run containerized applications without having to manage servers or clusters?
Fargate removes the need for you to interact with servers or clusters as it provisions, configures, and scales clusters of virtual machines to run containers for you.
ECS lets you run your containerized Docker applications on both Amazon EC2 and AWS Fargate.
EKS lets you run your containerized Kubernetes applications on both Amazon EC2 and AWS Fargate.
Amazon S3 offers multiple storage classes. Which storage class is best for archiving data when you want the cheapest cost and don’t mind long retrieval times?
S3 Glacier Deep Archive offers the lowest cost and is used to archive data. You can retrieve objects within 12 hours.

In the shared responsibility model, what is the customer responsible for?
You are responsible for patching the guest OS, including updates and security patches.
You are responsible for firewall configuration and securing your application.
A company needs phone, email, and chat access 24 hours a day, 7 days a week. The response time must be less than 1 hour if a production system has a service interruption. Which AWS Support plan meets these requirements at the LOWEST cost?
The Business Support plan provides phone, email, and chat access 24 hours a day, 7 days a week. The Business Support plan has a response time of less than 1 hour if a production system has a service interruption.
For more information about AWS Support plans, see Compare AWS Support Plans.
Which Amazon EC2 pricing model adjusts based on supply and demand of EC2 instances?
Spot Instances are discounted more heavily when there is more capacity available in the Availability Zones.
For more information about Spot Instances, see Amazon EC2 Spot Instances.
Which of the following is an advantage of consolidated billing on AWS?
Consolidated billing is a feature of AWS Organizations. You can combine the usage across all accounts in your organization to share volume pricing discounts, Reserved Instance discounts, and Savings Plans. This solution can result in a lower charge compared to the use of individual standalone accounts.
For more information about consolidated billing, see Consolidated billing for AWS Organizations.
A company requires physical isolation of its Amazon EC2 instances from the instances of other customers. Which instance purchasing option meets this requirement?
With Dedicated Hosts, a physical server is dedicated for your use. Dedicated Hosts provide visibility and the option to control how you place your instances on an isolated, physical server. For more information about Dedicated Hosts, see Amazon EC2 Dedicated Hosts.
A company is hosting a static website from a single Amazon S3 bucket. Which AWS service will achieve lower latency and high transfer speeds?
CloudFront is a web service that speeds up the distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. Content is cached in edge locations. Content that is repeatedly accessed can be served from the edge locations instead of the source S3 bucket. For more information about CloudFront, see Accelerate static website content delivery.
Which AWS service provides a simple and scalable shared file storage solution for use with Linux-based Amazon EC2 instances and on-premises servers?
Amazon EFS provides an elastic file system that lets you share file data without the need to provision and manage storage. It can be used with AWS Cloud services and on-premises resources, and is built to scale on demand to petabytes without disrupting applications. With Amazon EFS, you can grow and shrink your file systems automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
For more information about using Amazon EFS, see Walkthrough: Create and mount a file system on premises with AWS Direct Connect and VPN.
Which service allows you to generate encryption keys managed by AWS?
KMS allows you to generate and manage encryption keys. The keys generated by KMS are managed by AWS.
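For illustration, here is a minimal sketch using the AWS SDK for Java v1 to generate a data key under a KMS key; the key alias is a placeholder:

import com.amazonaws.services.kms.AWSKMS;
import com.amazonaws.services.kms.AWSKMSClientBuilder;
import com.amazonaws.services.kms.model.DataKeySpec;
import com.amazonaws.services.kms.model.GenerateDataKeyRequest;
import com.amazonaws.services.kms.model.GenerateDataKeyResult;

public class KmsDataKeyExample {
    public static void main(String[] args) {
        AWSKMS kms = AWSKMSClientBuilder.defaultClient();

        GenerateDataKeyResult result = kms.generateDataKey(new GenerateDataKeyRequest()
                .withKeyId("alias/my-app-key")       // placeholder key alias
                .withKeySpec(DataKeySpec.AES_256));  // 256-bit symmetric data key

        // Use getPlaintext() to encrypt locally and then discard it;
        // store getCiphertextBlob() alongside the data so KMS can decrypt it later.
        System.out.println("Data key generated, ciphertext size: "
                + result.getCiphertextBlob().remaining() + " bytes");
    }
}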
Which service can integrate with a Lambda function to automatically take remediation steps when it uncovers suspicious network activity when monitoring logs in your AWS account?
GuardDuty can perform automated remediation actions by leveraging Amazon CloudWatch Events and AWS Lambda. GuardDuty continuously monitors for threats and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes multiple AWS data sources, such as AWS CloudTrail event logs, Amazon VPC Flow Logs, and DNS logs.
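A minimal sketch of the Lambda side of such a remediation hook, assuming the function is the target of a CloudWatch Events (EventBridge) rule that matches GuardDuty findings; the event arrives as JSON with the finding in its detail field, and the handler below only logs it where real remediation logic would go:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Map;

public class GuardDutyRemediationHandler implements RequestHandler<Map<String, Object>, String> {
    @Override
    public String handleRequest(Map<String, Object> event, Context context) {
        // The CloudWatch Events rule delivers the GuardDuty finding in the "detail" field.
        Object detail = event.get("detail");
        context.getLogger().log("Received GuardDuty finding: " + detail);
        // Remediation steps (e.g., isolating an instance or revoking access) would go here.
        return "ok";
    }
}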
Which service allows you to create access keys for someone needing to access AWS via the command line interface (CLI)?
IAM allows you to create users and generate access keys for users needing to access AWS via the CLI.
Which service allows you to record software configuration changes within your Amazon EC2 instances over time?
Config helps with recording compliance and configuration changes over time for your AWS resources.
Which service assists with compliance and auditing by offering a downloadable report that provides the status of passwords and MFA devices in your account?
IAM provides a downloadable credential report that lists all users in your account and the status of their various credentials, including passwords, access keys, and MFA devices.
Which service allows you to locate credit card numbers stored in Amazon S3?
Macie is a data privacy service that helps you uncover and protect your sensitive data, such as personally identifiable information (PII) like credit card numbers, passport numbers, social security numbers, and more.
How do you manage permissions for multiple users at once using AWS Identity and Access Management (IAM)?
An IAM group is a collection of IAM users. When you assign an IAM policy to a group, all users in the group are granted permissions specified by the policy.
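As a hedged sketch of what this looks like with the AWS SDK for Java v1 (the group name, user name, and policy choice are placeholders): create a group, attach a managed policy to it, and add users, who all inherit the group's permissions:

import com.amazonaws.services.identitymanagement.AmazonIdentityManagement;
import com.amazonaws.services.identitymanagement.AmazonIdentityManagementClientBuilder;
import com.amazonaws.services.identitymanagement.model.AddUserToGroupRequest;
import com.amazonaws.services.identitymanagement.model.AttachGroupPolicyRequest;
import com.amazonaws.services.identitymanagement.model.CreateGroupRequest;

public class IamGroupExample {
    public static void main(String[] args) {
        AmazonIdentityManagement iam = AmazonIdentityManagementClientBuilder.defaultClient();

        // Create a group and attach a managed policy to it once...
        iam.createGroup(new CreateGroupRequest().withGroupName("developers"));
        iam.attachGroupPolicy(new AttachGroupPolicyRequest()
                .withGroupName("developers")
                .withPolicyArn("arn:aws:iam::aws:policy/ReadOnlyAccess"));

        // ...then every user added to the group inherits those permissions.
        iam.addUserToGroup(new AddUserToGroupRequest()
                .withGroupName("developers")
                .withUserName("alice")); // placeholder user
    }
}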
Which service protects your web application from cross-site scripting attacks?
AWS WAF is a web application firewall that helps protect your web applications from common web exploits such as cross-site scripting (XSS) and SQL injection.
Which AWS Trusted Advisor real-time guidance recommendations are available for AWS Basic Support and AWS Developer Support customers?
Basic and Developer Support customers get 50 service limit checks.
Basic and Developer Support customers get security checks for “Specific Ports Unrestricted” on Security Groups.
Basic and Developer Support customers get security checks on S3 Bucket Permissions.
Which service allows you to simplify billing by using a single payment method for all your accounts?
Organizations offers consolidated billing that provides 1 bill for all your AWS accounts. This also gives you access to volume discounts.
Which AWS service usage will always be free even after the 12-month free tier plan has expired?
One million Lambda requests are always free each month.
What is the easiest way for a customer on the AWS Basic Support plan to increase service limits?
The Basic Support plan allows 24/7 access to Customer Service via email and the ability to open service limit increase support cases.
Which types of issues are covered by AWS Support?
“How to” questions about AWS service and features
Problems detected by health checks

Which features of AWS reduce your total cost of ownership (TCO)?
Sharing servers with others allows you to save money.
Elastic computing allows you to trade capital expense for variable expense.
You pay only for the computing resources you use with no long-term commitments.
Which service allows you to select and deploy operating system and software patches automatically across large groups of Amazon EC2 instances?
Systems Manager allows you to automate operational tasks across your AWS resources.
Which service provides the easiest way to set up and govern a secure, multi-account AWS environment?
Control Tower allows you to centrally govern and enforce the best use of AWS services across your accounts.
Which cost management tool gives you the ability to be alerted when the actual or forecasted cost and usage exceed your desired threshold?
Budgets allow you to improve planning and cost control with flexible budgeting and forecasting. You can choose to be alerted when your budget threshold is exceeded.
Which tool allows you to compare your estimated service costs per Region?
The Pricing Calculator allows you to get an estimate for the cost of AWS services. Comparing service costs per Region is a common use case.
Who can assist with accelerating the migration of legacy contact center infrastructure to AWS?
Professional Services is a global team of experts that can help you realize your desired business outcomes with AWS.
The AWS Partner Network (APN) is a global community of partners that helps companies build successful solutions with AWS.
Which cost management tool allows you to view costs from the past 12 months, current detailed costs, and forecasts costs for up to 3 months?
Cost Explorer allows you to visualize, understand, and manage your AWS costs and usage over time.
Which service reduces the operational overhead of your IT organization?
Managed Services implements best practices to maintain your infrastructure and helps reduce your operational overhead and risk.
How do I set up failover on Amazon Route 53?
Create a Route 53 health check for your primary endpoint, then create two records that use the Failover routing policy: a primary record associated with the health check and a secondary record that points to your backup endpoint. Route 53 sends traffic to the secondary record only while the primary endpoint is unhealthy.
How can a program running inside AWS EC2 determine which VPC and security group an incoming IP address or TCP connection belongs to, for application-layer firewalling?
I assume it is your subscription where the VPCs are located, otherwise you can’t really discover the information you are looking for. On the EC2 server you could use AWS CLI or PowerShell based scripts that query the IP information. Based on the IP you can find out what instance uses the network interface, what security groups are tied to it, and in which VPC the instance is hosted. Read more here…
What are some tips, tricks and gotchas when using AWS Lambda to connect to a VPC?
When using AWS Lambda inside your VPC, your Lambda function will be allocated private IP addresses, and only private IP addresses, from your specified subnets. This means that you must ensure that your specified subnets have enough free address space for your Lambda function to scale up to. Each simultaneous invocation needs its own IP. Read more here…
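As a quick illustration of the sizing math: a /24 subnet contains 256 addresses, of which AWS reserves 5 per subnet, leaving 251 usable. If other resources in that subnet already consume 51 of them, the function can only scale to roughly 200 concurrent invocations there, which is why it is common to give VPC-attached Lambda functions several large, dedicated subnets.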
How do AWS step functions communicate with lambda functions which are in a VPC?
When a Lambda “is in a VPC”, it really means that its attached Elastic Network Interface is in the customer’s VPC and not in the hidden VPC that AWS manages for Lambda.
The ENI is not related to the AWS Lambda management system that does the invocation (the data plane mentioned here). The AWS Step Functions system can go ahead and invoke the Lambda through the API, and the network request for that can pass through the underlying VPC and host infrastructure.
Those Lambdas in turn can invoke other Lambdas directly through the API, or more commonly by decoupling them, such as through Amazon SQS used as a trigger. Read more ….
How do I invoke an AWS Lambda function programmatically?
public InvokeResult invoke(InvokeRequest request)
Invokes a Lambda function. You can invoke a function synchronously (and wait for the response), or asynchronously. To invoke a function asynchronously, set InvocationType to Event.
For synchronous invocation, details about the function response, including errors, are included in the response body and headers. For either invocation type, you can find more information in the execution log and trace.
When an error occurs, your function may be invoked multiple times. Retry behavior varies by error type, client, event source, and invocation type. For example, if you invoke a function asynchronously and it returns an error, Lambda executes the function up to two more times. For more information, see Retry Behavior.
For asynchronous invocation, Lambda adds events to a queue before sending them to your function. If your function does not have enough capacity to keep up with the queue, events may be lost. Occasionally, your function may receive the same event multiple times, even if no error occurs. To retain events that were not processed, configure your function with a dead-letter queue.
The status code in the API response doesn’t reflect function errors. Error codes are reserved for errors that prevent your function from executing, such as permissions errors, limit errors, or issues with your function’s code and configuration. For example, Lambda returns TooManyRequestsException if executing the function would cause you to exceed a concurrency limit at either the account level (ConcurrentInvocationLimitExceeded) or function level (ReservedFunctionConcurrentInvocationLimitExceeded).
For functions with a long timeout, your client might be disconnected during synchronous invocation while it waits for a response. Configure your HTTP client, SDK, firewall, proxy, or operating system to allow for long connections with timeout or keep-alive settings.
This operation requires permission for the lambda:InvokeFunction action. Read more…
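To make the API above concrete, here is a minimal sketch of calling invoke with the AWS SDK for Java v1; the function name and payload are placeholders:

import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.InvocationType;
import com.amazonaws.services.lambda.model.InvokeRequest;
import com.amazonaws.services.lambda.model.InvokeResult;
import java.nio.charset.StandardCharsets;

public class LambdaInvokeExample {
    public static void main(String[] args) {
        AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();

        InvokeRequest request = new InvokeRequest()
                .withFunctionName("my-function")                    // placeholder function name
                .withInvocationType(InvocationType.RequestResponse) // synchronous; use Event for async
                .withPayload("{\"key\":\"value\"}");                // placeholder JSON payload

        InvokeResult result = lambda.invoke(request);

        // For synchronous calls, the function's response payload comes back in the result.
        String body = StandardCharsets.UTF_8.decode(result.getPayload()).toString();
        System.out.println("HTTP status: " + result.getStatusCode() + ", payload: " + body);
    }
}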
What are the differences between default and non-default AWS VPCs?
Default VPC
- 1 per region
- a set VPC CIDR range … you can’t change it
- has everything configured by default: 1 subnet per AZ, an internet gateway, routes, and subnets set to allocate IPv4 by default.
Custom VPCs
- As many as you want per region (within limits)
- Customisable CIDR range
- Customisable subnet structure
- Nothing configured by default, you have to configure everything
The subnet mask determines how many bits of the network address are relevant (and thus, indirectly, the size of the network block in terms of how many host addresses are available):
192.0.2.0 with subnet mask 255.255.255.0 means that 192.0.2 is the significant portion of the network number, and that there are 8 bits left for host addresses (i.e. 192.0.2.0 thru 192.0.2.255).
192.0.2.0 with subnet mask 255.255.255.128 means that the first three octets plus the most significant bit of the last octet are the significant portion of the network number, and that there are 7 bits left for host addresses (i.e. 192.0.2.0 thru 192.0.2.127).
When in doubt, envision the network number and subnet mask in base 2 (i.e. binary) and it will become much clearer. Read more here…
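As a quick illustration of the base-2 view: 255.255.255.128 is 11111111.11111111.11111111.10000000 in binary, i.e. 25 network bits and 7 host bits, which yields 2^7 = 128 addresses (192.0.2.0 through 192.0.2.127), a /25 block.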
What are some best practices securing my Amazon Virtual Private Cloud (VPC)?
IAM is the new perimeter.
Separate out the roles needed to do each job. (Assuming this is a corporate environment)
Have a role for EC2, another for Networking, another for IAM.
Everyone should not be admin. Everyone should not be able to add/remove IGWs, NAT gateways, alter security groups and NACLs, or set up peering connections.
Also, another thing… lock down full internet access. Limit to what is needed and that’s it. Read more here….
Within a single VPC, the subnets’ route tables need to point to each other. This will already work without additional routes because VPC sets up the local target to point to the VPC subnet.
Security groups are not used here since they are attached to instances, and not networks.
See: Amazon Virtual Private Cloud
The NAT EC2 instance (server), or AWS-provided NAT gateway is necessary only if the private subnet internal addresses need to make outbound connections. The NAT will translate the private subnet internal addresses to the public subnet internal addresses, and the AWS VPC Internet Gateway will translate these to external IP addresses, which can then go out to the Internet. Read more here ….
What are the applications (or workloads) that cannot be migrated on to cloud (AWS or Azure or GCP)?
A good example of workloads that are currently not in public clouds are the mobile and fixed core telecom networks of tier 1 service providers. This is despite the fact that these core networks are increasingly software based and have largely been decoupled from the hardware. There are a number of reasons for this; for example, public cloud providers such as Azure and AWS do not offer the guaranteed availability required by telecom networks, which need 99.999% availability (typically referred to as telecom grade).
The regulatory environment frequently restricts hosting of subscriber data outside of the operator’s data centers or in another country, and key network functions such as lawful interception cannot contractually be hosted off-prem. Read more here….
How many CIDR blocks can we add to a VPC we created?
You can add up to 5 IPv4 CIDR blocks, or 1 IPv6 block per VPC. You can further segment the network by utilizing up to 200 subnets per VPC. Amazon VPC Limits. Read more …
Why can’t a subnet’s CIDR be changed once it has been assigned?
Sure it can, but you’ll need to coordinate with the neighbors. You can merge two /25’s into a single /24 quite effortlessly if you control the entire range it covers. In practice you’ll see many tiny allocations in public IPv4 space, like /29’s and even smaller. Those are all assigned to different people. If you want to do a big shuffle there, you have a lot of coordinating to do.. or accept the fallout from the breakage you cause. Read more…
Can one VPC talk to another VPC?
Yes. You can connect two VPCs through VPC peering (or, at larger scale, AWS Transit Gateway), after which resources in either VPC can communicate with each other over private IP addresses.
What questions to expect in cloud support engineer deployment roles at AWS?
Cloud Support Engineer (CSE) is a role which requires the following abilities:
- Wide range of technical skills
- Good communication and time management
- Good knowledge about the AWS services, and how to leverage them to solve simple to complex problems.
As your question is related to the deployment Pod, you will probably be asked about deployment methods (A/B testing like blue-green deployment) as well as pipelining strategies. You might be asked during this interview to reason about a simple task and to code it (like parsing a log file). Also review the TCP/IP stack in-depth as well as the tools to troubleshoot it for the networking round. You will eventually have some Linux questions, the range of questions can vary from common CLI tools to Linux internals like signals / syscalls / file descriptors and so on.
Last but not least, the Leadership Principles: I can only suggest you prepare a story for each of them. You will quickly find which LPs they are looking for and will be able to give the right signal to your interviewer.
Finally, remember that there’s a debrief after the (usually 5) stages of your on-site interview, and more senior and convincing interviewers tend to defend their vote, so don’t screw up with them.
Be natural, focus on the question details and ask for confirmation, be cool but not too much. At the end of the day, remember that your job will be to understand customer issues and provide a solution, so treat your interviewers as if they were customers and they will see a successful CSE in you, be reassured, and give you the job.
Expect questions on CloudFormation, Terraform, AWS EC2/RDS, and stack-related topics.
It’s a high-tech call center. You are expected to take customers’ calls and chats and give them technical advice. You will not be doing any of the cool stuff you did earlier (if you are coming from an engineering or DBA job). You will surely gain a very good knowledge of multiple AWS services, and of the one that you will be hired in, however most of the knowledge will be theoretical and nothing practical in day-to-day life.
It also depends on the support team you are being hired for. Networking or compute teams (Ec2) have different interview patterns vs database or big data support.
In any case, basics of OS and networking are critical to the interview. If you have a phone screen, we will be looking for basic/semi-advanced skills in these and in your speciality. For example, if you mention Oracle in your resume and you are interviewing for the database team, expect a flurry of those questions.
The other important aspect is the Amazon Leadership Principles. Half of your interview is based on LPs. If you do not have scenarios where you demonstrate the LPs, you cannot expect to work here even though your technical skills are above average (having extraordinary skills is a different thing).
The overall interview itself will have 1 phone screen if you are interviewing in the US and 1–2 if outside the US. The onsite loop will be 4 rounds, 2 of which are technical (again divided into OS and networking and the specific speciality of the team you are interviewing for) and 2 of which are Leadership Principles rounds where we test your soft skills and management skills, as they are very important in this job. You need to have a strong viewpoint, disagree if it seems valid to do so, show empathy, and be a team player while showing the ability to pull off things individually as well. These skills will be critical for cracking the LP interviews.
You will NOT be asked to code or write queries as its not part of the job, so you can concentrate on the theoretical part of the subject and also your resume. We will grill you on topics mentioned on your resume to start with.
Traditional monolithic architectures are hard to scale: TRUE
A monolith is something built from a single piece of material, historically rock; the term is normally used for an object made from a single large piece of material – that is the non-technical definition. A monolithic application, by analogy, has a single code base with multiple modules.
A large monolithic code base (often spaghetti code) puts an immense cognitive load on the developer, so development velocity is poor. Granular scaling (i.e., scaling only part of the application) is not possible, and polyglot programming or polyglot databases are challenging.
Drawbacks of Monolithic Architecture
This simple approach has a limitation in size and complexity. The application becomes too large and complex to fully understand, so changes cannot be made quickly and correctly. The size of the application can slow down start-up time, and you must redeploy the entire application on each update.
Sticky Sessions help increase your application’s scalability: FALSE
Sticky sessions, also known as session affinity, allow you to route a site user to the particular web server that is managing that individual user’s session. The session’s validity can be determined by a number of methods, including client-side cookies or configurable duration parameters that can be set at the load balancer which routes requests to the web servers.
Some advantages of utilizing sticky sessions are that it’s cost-effective, because you are storing sessions on the same web servers that run your applications, and that retrieval of those sessions is generally fast because it eliminates network latency. A drawback of storing sessions on an individual node is that in the event of a failure, you are likely to lose the sessions that were resident on the failed node. In addition, if the number of your web servers changes, for example in a scale-up scenario, traffic may be spread unequally across the web servers as active sessions may exist on particular servers. If not mitigated properly, this can hinder the scalability of your applications. Read more here …
AWS recommends replicating across Availability Zones for resiliency: TRUE
If you need to replicate your data or applications in an AWS Local Zone, AWS recommends that you use one of the following zones as the failover zone:
Another Local Zone
An Availability Zone in the Region that is not the parent zone. You can use the describe-availability-zones command to view the parent zone.
For more information about AWS Regions and Availability Zones, see AWS Global Infrastructure.
What are the benefits of AWS Cloud Computing?
- Trade Capital expenses for variable expenses
- Increase speed and agility
- Benefit from massive economies of scale
- Stop spending money on running and maintaining data centers
- Stop guessing capacity
- Go global in minutes
What is the default behavior for an EC2 instance when terminated?
After you terminate an instance, it remains visible in the console for a short while, and then the entry is automatically deleted. You cannot delete the terminated instance entry yourself. After an instance is terminated, resources such as tags and volumes are gradually disassociated from the instance and therefore may no longer be visible on the terminated instance after a short while.
When an instance terminates, the data on any instance store volumes associated with that instance is deleted.
By default, Amazon EBS root device volumes are automatically deleted when the instance terminates. However, by default, any additional EBS volumes that you attach at launch, or any EBS volumes that you attach to an existing instance, persist even after the instance terminates. This behavior is controlled by the volume’s DeleteOnTermination attribute, which you can modify.
For more information, please visit: Terminate Your Instance
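For illustration, a hedged sketch of flipping that attribute on a running instance with the AWS SDK for Java v1; the instance ID and device name are placeholders:

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.EbsInstanceBlockDeviceSpecification;
import com.amazonaws.services.ec2.model.InstanceBlockDeviceMappingSpecification;
import com.amazonaws.services.ec2.model.ModifyInstanceAttributeRequest;

public class DeleteOnTerminationExample {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // Mark the EBS volume attached at /dev/sdf so it is deleted
        // when the instance terminates (set false to make it persist instead).
        ec2.modifyInstanceAttribute(new ModifyInstanceAttributeRequest()
                .withInstanceId("i-0123456789abcdef0") // placeholder instance ID
                .withBlockDeviceMappings(new InstanceBlockDeviceMappingSpecification()
                        .withDeviceName("/dev/sdf")    // placeholder device name
                        .withEbs(new EbsInstanceBlockDeviceSpecification()
                                .withDeleteOnTermination(true))));
    }
}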
How do Amazon EC2 EBS burst credits work?
The documentation on General Purpose SSD (gp2) EBS volumes can be found at this page: New SSD-Backed Elastic Block Storage
When you first launch an instance with gp2 volumes attached, you get an initial burst credit allowing for up to 30 minutes at 3,000 IOPS.
After the first 30 minutes, your volume will accrue credits as follows (taken directly from AWS documentation):
Within the General Purpose (SSD) implementation is a Token Bucket model that works as follows:
- Each token represents an “I/O credit” that pays for one read or one write.
- A bucket is associated with each General Purpose (SSD) volume, and can hold up to 5.4 million tokens.
- Tokens accumulate at a rate of 3 per configured GB per second, up to the capacity of the bucket.
- Tokens can be spent at up to 3000 per second per volume.
- The baseline performance of the volume is equal to the rate at which tokens are accumulated — 3 IOPS per GB per second.
In addition to this, gp2 volumes provide a baseline performance of 3 IOPS per GB, up to 1 TB (3,000 IOPS). Volumes larger than 1 TB no longer work on the credit system, as they already provide a baseline of 3,000 IOPS. gp2 volumes have a cap of 10,000 IOPS regardless of the volume size (so IOPS max out for volumes larger than about 3.3 TB).
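As a quick worked example using the figures above: a 100 GB gp2 volume accrues 3 × 100 = 300 tokens per second and has a bucket of 5.4 million tokens. Bursting at the full 3,000 IOPS drains the bucket at a net 3,000 − 300 = 2,700 tokens per second, so a full bucket lasts 5,400,000 / 2,700 = 2,000 seconds, or roughly 33 minutes of sustained burst.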
Is elastic IP service free if we associate it with any VM (EC2 server)?
Elastic IP addresses are free when you have them assigned to an instance, so feel free to use one! Elastic IPs get disassociated when you stop an instance, so you will get charged in the meantime. The benefit is that you get to keep that IP allocated to your account, though, instead of losing it like any other. Once you start the instance you just re-associate it back and you have your old IP again.
Here are the charges associated with the use of Elastic IP addresses:
* No cost for Elastic IP addresses while in use
* $0.01 per non-attached Elastic IP address per complete hour
* $0.00 per Elastic IP address remap – first 100 remaps / month
* $0.10 per Elastic IP address remap – additional remaps / month over 100
If you require any additional information about pricing please reference the link below
Amazon EC2 Pricing – Amazon Web Services
The other costs are as outlined in the paragraph quoted above.
How do I reduce my AWS EC2 cost? My AWS EC2 expenditure comprises 80% of my AWS bill.
The short answer to reducing your AWS EC2 costs – turn off your instances when you don’t need them.
Your AWS bill is just like any other utility bill, you get charged for however much you used that month. Don’t make the mistake of leaving your instances on 24/7 if you’re only using them during certain days and times (ex. Monday – Friday, 9 to 5).
To automatically start and stop your instances, AWS offers an “EC2 scheduler” solution. A better option would be a cloud cost management tool that not only stops and starts your instances automatically, but also tracks your usage and makes sizing recommendations to optimize your cloud costs and maximize your time and savings.
You could potentially save money using Reserved Instances. But in non-production environments such as dev, test, QA, and training, Reserved Instances are not your best bet. Why is this the case? These environments are less predictable; you may not know how many instances you need and when you will need them, so it’s better not to commit spend to reservations you may not fully use. Instead, schedule such instances (preferably using ParkMyCloud). Scheduling instances to be up only 12 hours per day on weekdays will save you 65% – better than all but the most restrictive 3-year RIs!
You can also save money with:
- Spot Instances
- AWS Dedicated Hosts & Dedicated Instances
- Auto Scaling Groups
- Rightsizing
What is the difference between an Instance, AMI and Snapshots in AWS? What are they used for?
Well, AWS is a web service provider which offers a set of services related to compute, storage, database, network and more to help businesses scale and grow.
All your concerns are related to the AWS EC2 instance, so let me start with the instance.
Instance:
- An EC2 instance is similar to a server where you can host your websites or applications to make them available globally
- It is highly scalable and works on the pay-as-you-go model
- You can increase or decrease the capacity of these instances as per the requirement
AMI:
- AMI provides the information required to launch the EC2 instance
- AMI includes the pre-configured templates of the operating system that runs on the AWS
- Users can launch multiple instances with the same configuration from a single AMI
Snapshot:
- Snapshots are the incremental backups for the Amazon EBS
- Data in EBS volumes is backed up to S3 by taking point-in-time snapshots
- Only the data unique to a snapshot is removed when that snapshot is deleted
- Multiple EBS volumes can be created from these snapshots
What are the main differences between a VPNs, VPS and VPC?
They are definitely all chalk and cheese to one another.
A VPN (Virtual Private Network) is essentially an encrypted “channel” connecting two networks, or a machine to a network, generally over the public internet.
A VPS (Virtual Private Server) is a rented virtual machine running on someone else’s hardware. AWS EC2 can be thought of as a VPS, but the term is usually used to describe low-cost products offered by lots of other hosting companies.
A VPC (Virtual Private Cloud) is a virtual network in AWS (Amazon Web Services). It can be divided into private and public subnets, have custom routing rules, have internal connections to other VPCs, etc. EC2 instances and other resources are placed in VPCs similarly to how physical data centers have operated for a very long time.
What is the use of elastic IP in AWS?
An Elastic IP address is basically a static (IPv4) IP address that you can allocate to your resources.
Now, if you associate the Elastic IP with a resource (and the resource is running), you are not charged anything. On the other hand, if you create an Elastic IP but do not associate it with a resource (or the resource is not running), then you are charged a small amount (around $0.005 per hour, if I remember correctly).
Additional info about these:
You are limited to 5 Elastic IP addresses per region. If you require more than that, you can contact AWS support with a request for additional addresses. You need to have a good reason in order to be approved because IPv4 addresses are becoming a scarce resource.
In general, you should be good without Elastic IPs for most of the use-cases (as every EC2 instance has its own public IP, and you can use load balancers, as well as map most of the resources via Route 53).
One of the use-cases that I’ve seen where my client is using Elastic IP is to make it easier for him to access specific EC2 instance via RDP, as well as do deployment through Visual Studio, as he targets the Elastic IP, and thus does not have to watch for any changes in public IP (in case of stopping or rebooting).
Why would you choose VPC peering instead of AWS Transit Gateway?
At this time, AWS Transit Gateway does not support inter-region attachments: the transit gateway and the attached VPCs must be in the same region. VPC peering supports inter-region peering.
Difference between AWS WorkSpaces and an AWS EC2 VM?
- An EC2 instance is a server instance, whilst a WorkSpace is a Windows desktop instance.
Both Windows Server and Windows workstation editions have desktops. Windows Server Core doesn’t (and AWS doesn’t have an AMI for Windows Server Core that I could find).
It is possible to SSH into a Windows instance – this is done on port 22. You would not see a desktop when using SSH if you had enabled it. It is not enabled by default.
If you are seeing a desktop, I believe you’re “RDPing” to the Windows instance. This is done with the RDP protocol on port 3389.
- Two different protocols and two different ports.
- Workspaces doesn’t allow terminal or ssh services by default. You need to use Workspace client. You still can enable RDP or/and SSH but this is not recommended.
- Workspaces is a managed desktop service. AWS is taking care of pre-build AMIs, software licenses, joining to domain, scaling etc.
- What is Amazon EC2? Scalable, pay-as-you-go compute capacity in the cloud. Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.
- What is Amazon WorkSpaces? Easily provision cloud-based desktops that allow end-users to access applications and resources. With a few clicks in the AWS Management Console, customers can provision a high-quality desktop experience for any number of users at a cost that is highly competitive with traditional desktops and half the cost of most virtual desktop infrastructure (VDI) solutions. End-users can access the documents, applications and resources they need with the device of their choice, including laptops, iPad, Kindle Fire, or Android tablets.
- Amazon EC2 can be classified as a tool in the “Cloud Hosting” category, while Amazon WorkSpaces is grouped under “Virtual Desktop”.
Some of the features offered by Amazon EC2 are:
- Elastic – Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds or even thousands of server instances simultaneously.
- Completely Controlled – You have complete control of your instances. You have root access to each one, and you can interact with them as you would any machine.
- Flexible – You have the choice of multiple instance types, operating systems, and software packages. Amazon EC2 allows you to select a configuration of memory, CPU, instance storage, and the boot partition size that is optimal for your choice of operating system and application.
On the other hand, Amazon WorkSpaces provides the following key features:
- Support Multiple Devices- Users can access their Amazon WorkSpaces using their choice of device, such as a laptop computer (Mac OS or Windows), iPad, Kindle Fire, or Android tablet.
- Keep Your Data Secure and Available- Amazon WorkSpaces provides each user with access to persistent storage in the AWS cloud. When users access their desktops using Amazon WorkSpaces, you control whether your corporate data is stored on multiple client devices, helping you keep your data secure.
- Choose the Hardware and Software you need- Amazon WorkSpaces offers a choice of bundles providing different amounts of CPU, memory, and storage so you can match your Amazon WorkSpaces to your requirements. Amazon WorkSpaces offers preinstalled applications (including Microsoft Office) or you can bring your own licensed software.
Amazon EBS vs Amazon EFS
An Amazon EBS volume stores data in a single Availability Zone.
To attach an Amazon EC2 instance to an EBS volume, both the Amazon EC2 instance and the EBS volume must reside within the same Availability Zone.
Amazon EFS is a regional service. It stores data in and across multiple Availability Zones.
This duplicated storage enables you to access data concurrently from all the Availability Zones in the Region where the file system is located. Additionally, on-premises servers can access Amazon EFS using AWS Direct Connect.
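A quick boto3 sketch of the same-AZ constraint described above; the instance ID is hypothetical and assumed to already be running in us-east-1a:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# An EBS volume lives in exactly one Availability Zone
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=20, VolumeType="gp3")
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attaching only works if the instance is in the *same* AZ (us-east-1a here)
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance in us-east-1a
    Device="/dev/xvdf",
)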

AWS Services Cheat Sheet:
Compute
Category | Service | Description |
Instances (Virtual machines) | EC2 | Provides secure, resizable compute capacity in the cloud. It makes web-scale cloud computing easier for developers. EC2 |
EC2 Spot | Run fault-tolerant workloads for up to 90% off. EC2Spot | |
EC2 Autoscaling | Automatically add or remove compute capacity to meet changes in demand. EC2_AutoScaling | |
Lightsail | Designed to be the easiest way to launch & manage a virtual private server with AWS. An easy-to-use cloud platform that offers everything needed to build an application or website. Lightsail | |
Batch | Enables developers, scientists, & engineers to easily & efficiently run hundreds of thousands of batch computing jobs on AWS. Fully managed batch processing at any scale. Batch | |
Containers | Elastic Container Service (ECS) | Highly secure, reliable, & scalable way to run containers. ECS |
Elastic Container Registry (ECR) | Easily store, manage, & deploy container images. ECR | |
Elastic Kubernetes Service (EKS) | Fully managed Kubernetes service. EKS | |
Fargate | Serverless compute for containers. Fargate | |
Serverless | Lambda | Run code without thinking about servers. Pay only for the compute time you consume. Lambda |
Edge and hybrid | Outposts | Run AWS infrastructure & services on premises for a truly consistent hybrid experience. Outposts |
Snow Family | Collect and process data in rugged or disconnected edge environments. SnowFamily | |
Wavelength | Deliver ultra-low latency applications for 5G devices. Wavelength | |
VMware Cloud on AWS | Innovate faster, rapidly transition to the cloud, & work securely from any location. VMware_On_AWS | |
Local Zones | Run latency sensitive applications closer to end-users. LocalZones |
Networking and Content Delivery
Use cases | Functionality | Service | Description |
Build a cloud network | Define and provision a logically isolated network for your AWS resources | VPC | VPC lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define (see the sketch after this table). VPC |
Connect VPCs and on-premises networks through a central hub | Transit Gateway | Transit Gateway connects VPCs & on-premises networks through a central hub. This simplifies your network & puts an end to complex peering relationships. TransitGateway | |
Provide private connectivity between VPCs, services, and on-premises applications | PrivateLink | PrivateLink provides private connectivity between VPCs & services hosted on AWS or on-premises, securely on the Amazon network. PrivateLink | |
Route users to Internet applications with a managed DNS service | Route 53 | Route 53 is a highly available & scalable cloud DNS web service. Route53 | |
Scale your network design | Automatically distribute traffic across a pool of resources, such as instances, containers, IP addresses, and Lambda functions | Elastic Load Balancing | Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as EC2 instances, containers, IP addresses, & Lambda functions. ElasticLoadBalancing |
Direct traffic through the AWS Global network to improve global application performance | Global Accelerator | Global Accelerator is a networking service that sends user’s traffic through AWS’s global network infrastructure, improving internet user performance by up to 60%. GlobalAccelerator | |
Secure your network traffic | Safeguard applications running on AWS against DDoS attacks | Shield | Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. Shield |
Protect your web applications from common web exploits | WAF | WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. WAF | |
Centrally configure and manage firewall rules | Firewall Manager | Firewall Manager is a security management service which allows you to centrally configure & manage firewall rules across accounts & apps in AWS Organizations. FirewallManager | |
Build a hybrid IT network | Connect your users to AWS or on-premises resources using a Virtual Private Network | (VPN) – Client | VPN solutions establish secure connections between on-premises networks, remote offices, client devices, & the AWS global network. VPN |
Create an encrypted connection between your network and your Amazon VPCs or AWS Transit Gateways | (VPN) – Site to Site | Site-to-Site VPN creates a secure connection between data center or branch office & AWS cloud resources. site_to_site | |
Establish a private, dedicated connection between AWS and your datacenter, office, or colocation environment | Direct Connect | Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. DirectConnect | |
Content delivery networks | Securely deliver data, videos, applications, and APIs to customers globally with low latency, and high transfer speeds | CloudFront | CloudFront expedites distribution of static & dynamic web content. CloudFront |
Build a network for microservices architectures | Provide application-level networking for containers and microservices | App Mesh | App Mesh makes it easy to monitor & control microservices running on AWS. AppMesh |
Create, maintain, and secure APIs at any scale | API Gateway | API Gateway allows the user to design & expand their own REST and WebSocket APIs at any scale. APIGateway | |
Discover AWS services connected to your applications | Cloud Map | Cloud Map lets you name & discover your cloud resources. CloudMap |
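To make the VPC row above concrete, here is a minimal boto3 sketch (the CIDR ranges and AZ are illustrative) that provisions a logically isolated network and carves out one subnet:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Define a logically isolated network with its own private address range
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Carve out one subnet in a single Availability Zone
subnet = ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="us-east-1a",
)
print(vpc_id, subnet["Subnet"]["SubnetId"])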
Storage
Service | Description |
AWS S3 | S3 is storage for the internet, i.e. object storage built to store & retrieve any amount of data from anywhere. S3 |
AWS Backup | AWS Backup is a fully managed backup service that makes it easy to centralize & automate the backup of data across AWS services in the cloud. AWS_Backup |
Amazon EBS | Amazon Elastic Block Store is a web service that provides block-level storage volumes. EBS |
Amazon EFS Storage | EFS offers scalable file storage for use with the user’s Amazon EC2 instances. EFS |
Amazon FSx | FSx supplies fully managed third-party file systems with native compatibility & feature sets for workloads. It’s available as FSx for Windows File Server (fully managed file storage built on Windows Server) & FSx for Lustre (fully managed high-performance file system integrated with S3). FSx_Windows FSx_Lustre |
AWS Storage Gateway | Storage Gateway is a service which connects an on-premises software appliance with cloud-based storage. Storage_Gateway |
AWS DataSync | DataSync makes it simple & fast to move large amounts of data online between on-premises storage & S3, EFS, or FSx for Windows File Server. DataSync |
AWS Transfer Family | The Transfer Family provides fully managed support for file transfers directly into & out of S3. Transfer_Family |
AWS Snow Family | Highly-secure, portable devices to collect & process data at the edge, and migrate data into and out of AWS. Snow_Family |
Classification:
Object storage: S3 (see the sketch after this list)
File storage services: Elastic File System, FSx for Windows Servers & FSx for Lustre
Block storage: EBS
Backup: AWS Backup
Data transfer:
Storage gateway –> 3 types: Tape, File, Volume.
Transfer Family –> SFTP, FTPS, FTP.
Edge computing and storage: Snow Family –> Snowcone, Snowball, Snowmobile
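As a minimal illustration of object storage, here is a boto3 sketch that writes and reads one object; the bucket name is hypothetical and assumed to already exist:

import boto3

s3 = boto3.client("s3")

# Object storage: each object is a key plus a blob of bytes inside a bucket
s3.put_object(
    Bucket="my-example-bucket",          # hypothetical, pre-existing bucket
    Key="reports/2021/summary.txt",
    Body=b"hello from S3",
)

obj = s3.get_object(Bucket="my-example-bucket", Key="reports/2021/summary.txt")
print(obj["Body"].read())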
Databases
Database type | Use cases | Service | Description |
Relational | Traditional applications, ERP, CRM, e-commerce | Aurora, RDS, Redshift | RDS is a web service that makes it easier to set up, control, and scale a relational database in the cloud. Aurora RDS Redshift |
Key-value | High-traffic web apps, e-commerce systems, gaming applications | DynamoDB | DynamoDB is a fully administered NoSQL database service that offers quick and reliable performance with integrated scalability (see the sketch after this table). DynamoDB |
In-memory | Caching, session management, gaming leaderboards, geospatial applications | ElastiCache for Memcached & Redis | ElastiCache helps in setting up, managing, and scaling in-memory cache conditions. Memcached Redis |
Document | Content management, catalogs, user profiles | DocumentDB | DocumentDB (with MongoDB compatibility) is a quick, dependable, and fully-managed database service that makes it easy for you to set up, operate, and scale MongoDB-compatible databases. DocumentDB |
Wide column | High scale industrial apps for equipment maintenance, fleet management, and route optimization | Keyspaces (for Apache Cassandra) | Keyspaces is a scalable, highly available, and managed Apache Cassandra–compatible database service. Keyspaces |
Graph | Fraud detection, social networking, recommendation engines | Neptune | Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. Neptune |
Time series | IoT applications, DevOps, industrial telemetry | Timestream | Timestream is a fast, scalable, and serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day. Timestream |
Ledger | Systems of record, supply chain, registrations, banking transactions | Quantum Ledger Database (QLDB) | QLDB is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log owned by a central trusted authority. QLDB |
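To ground the key-value row, a short boto3 sketch against a hypothetical DynamoDB table named GameScores with a 'player' partition key:

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("GameScores")  # hypothetical table, 'player' partition key

# Write one item, then read it back by key
table.put_item(Item={"player": "alice", "score": 9001})
resp = table.get_item(Key={"player": "alice"})
print(resp["Item"]["score"])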
Developer Tools
Service | Description |
Cloud9 | Cloud9 is a cloud-based IDE that enables the user to write, run, and debug code. Cloud9 |
CodeArtifact | CodeArtifact is a fully managed artifact repository service that makes it easy for organizations of any size to securely store, publish, & share software packages used in their software development process. CodeArtifact |
CodeBuild | CodeBuild is a fully managed service that assembles source code, runs unit tests, & also generates artefacts ready to deploy. CodeBuild |
CodeGuru | CodeGuru is a developer tool powered by machine learning that provides intelligent recommendations for improving code quality & identifying an application’s most expensive lines of code. CodeGuru |
Cloud Development Kit | Cloud Development Kit (AWS CDK) is an open source software development framework to define cloud application resources using familiar programming languages. CDK |
CodeCommit | CodeCommit is a version control service that enables the user to personally store & manage Git archives in the AWS cloud. CodeCommit |
CodeDeploy | CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as EC2, Fargate, Lambda, & on-premises servers. CodeDeploy |
CodePipeline | CodePipeline is a fully managed continuous delivery service that helps automate release pipelines for fast & reliable app & infra updates. CodePipeline |
CodeStar | CodeStar enables to quickly develop, build, & deploy applications on AWS. CodeStar |
CLI | AWS CLI is a unified tool to manage AWS services & control multiple services from the command line & automate them through scripts. CLI |
X-Ray | X-Ray helps developers analyze & debug production, distributed applications, such as those built using a microservices architecture. X-Ray |
Migration & Transfer services
Service | Description |
Migration Evaluator | Build a data-driven business case for AWS. ME |
Migration Hub | Migration Hub provides a single location to track the progress of app migrations across multiple AWS & partner solutions. MigrationHub |
Application Discovery Service | Application Discovery Service helps enterprise customers plan migration projects by gathering information about their on-premises data centers. ADS |
Server Migration Service (SMS) | SMS is an agentless service which makes it easier & faster to migrate thousands of on-premises workloads to AWS. SMS |
Database Migration Service (DMS) | DMS helps migrate databases to AWS quickly & securely. DMS |
CloudEndure Migration | CloudEndure Migration simplifies, expedites, & reduces the cost of cloud migration by offering a highly automated lift-&-shift solution. CloudEndure |
VMware Cloud on AWS | Refer compute section. |
DataSync | Refer storage section. |
Transfer Family | Refer storage section. |
Snow Family | Refer storage section. |
SDKs & Toolkits
Service | Description |
CDK | CDK uses the familiarity & expressive power of programming languages for modeling apps. CDK |
Corretto | Corretto is a no-cost, multiplatform, production-ready distribution of the OpenJDK. Corretto |
Crypto Tools | Cryptography is hard to do safely & correctly. The AWS Crypto Tools libraries are designed to help everyone do cryptography right, even without special expertise. Crypto Tools |
Serverless Application Model (SAM) | SAM is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, & event source mappings. SAM |
Tools for developing and managing applications on AWS |
Security, Identity, & Compliance
Category | Use cases | Service | Description |
Identity & access management | Securely manage access to services and resources | Identity & Access Management (IAM) | IAM is a web service for safely controlling access to AWS services. IAM |
Securely manage access to services and resources | Single Sign-On | SSO helps simplify & manage SSO access to AWS accounts & business applications. SSO | |
Identity management for apps | Cognito | Cognito lets you add user sign-up, sign-in, & access control to web & mobile apps quickly and easily. Cognito | |
Managed Microsoft Active Directory | Directory Service | AWS Managed Microsoft Active Directory (AD) enables your directory-aware workloads & AWS resources to use managed Active Directory (AD) in AWS. DirectoryService | |
Simple, secure service to share AWS resources | Resource Access Manager | Resource Access Manager (RAM) is a service that enables you to easily & securely share AWS resources with any AWS account or within AWS Organization. RAM | |
Central governance and management across AWS accounts | Organizations | Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS. Orgs | |
Detection | Unified security and compliance center | Security Hub | Security Hub gives a comprehensive view of security alerts & security posture across AWS accounts. SecurityHub |
Managed threat detection service | GuardDuty | GuardDuty is a threat detection service that continuously monitors for malicious activity & unauthorized behavior to protect AWS accounts, workloads, & data stored in S3. GuardDuty | |
Analyze application security | Inspector | Inspector is a security vulnerability assessment service that improves the security & compliance of AWS resources. Inspector | |
Record and evaluate configurations of your AWS resources | Config | Config is a service that enables to assess, audit, & evaluate the configurations of AWS resources. Config | |
Track user activity and API usage | CloudTrail | CloudTrail is a service that enables governance, compliance, operational auditing, & risk auditing of AWS account. CloudTrail | |
Security management for IoT devices | IoT Device Defender | IoT Device Defender is a fully managed service that helps secure fleet of IoT devices. IoTDD | |
Infrastructure protection | DDoS protection | Shield | Shield is a managed DDoS protection service that safeguards applications running on AWS. It provides always-on detection & automatic inline mitigations that minimize application downtime & latency. Shield |
Filter malicious web traffic | Web Application Firewall (WAF) | WAF is a web application firewall that helps protect web apps or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. WAF | |
Central management of firewall rules | Firewall Manager | Firewall Manager eases the user AWS WAF administration & maintenance activities over multiple accounts & resources. FirewallManager | |
Data protection | Discover and protect your sensitive data at scale | Macie | Macie is a fully managed data (security & privacy) service that uses ML & pattern matching to discover & protect sensitive data. Macie |
Key storage and management | Key Management Service (KMS) | KMS makes it easy for to create & manage cryptographic keys & control their use across a wide range of AWS services & in your applications. KMS | |
Hardware based key storage for regulatory compliance | CloudHSM | CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate & use your own encryption keys. CloudHSM | |
Provision, manage, and deploy public and private SSL/TLS certificates | Certificate Manager | Certificate Manager is a service that easily provision, manage, & deploy public and private SSL/TLS certs for use with AWS services & internal connected resources. ACM | |
Rotate, manage, and retrieve secrets | Secrets Manager | Secrets Manager helps the user safely store, rotate, & retrieve credentials for databases & other services (see the sketch after this table). SecretsManager | |
Incident response | Investigate potential security issues | Detective | Detective makes it easy to analyze, investigate, & quickly identify the root cause of potential security issues or suspicious activities. Detective |
Fast, automated, cost-effective disaster recovery | CloudEndure Disaster Recovery | Provides scalable, cost-effective business continuity for physical, virtual, & cloud servers. CloudEndure | |
Compliance | No cost, self-service portal for on-demand access to AWS’ compliance reports | Artifact | Artifact is a web service that enables the user to download AWS security & compliance records. Artifact |
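As one concrete example from the data-protection rows above, here is a boto3 sketch that fetches a secret at runtime instead of hard-coding credentials; the secret name is hypothetical:

import boto3

secrets = boto3.client("secretsmanager", region_name="us-east-1")

# Retrieve credentials at runtime rather than embedding them in code or config
value = secrets.get_secret_value(SecretId="prod/db/credentials")  # hypothetical name
db_credentials = value["SecretString"]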
Data Lakes & Analytics
Category | Use cases | Service | Description |
Analytics | Interactive analytics | Athena | Athena is an interactive query service that makes it easy to analyze data in S3 using standard SQL (see the sketch after this table). Athena |
Big data processing | EMR | EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Apache Spark, Hive, HBase, Flink, Hudi, & Presto. EMR | |
Data warehousing | Redshift | The most popular & fastest cloud data warehouse. Redshift | |
Real-time analytics | Kinesis | Kinesis makes it easy to collect, process, & analyze real-time, streaming data so one can get timely insights. Kinesis | |
Operational analytics | Elasticsearch Service | Elasticsearch Service is a fully managed service that makes it easy to deploy, secure, & run Elasticsearch cost effectively at scale. ES | |
Dashboards & visualizations | Quicksight | QuickSight is a fast, cloud-powered business intelligence service that makes it easy to deliver insights to everyone in organization. QuickSight | |
Data movement | Real-time data movement | 1) Amazon Managed Streaming for Apache Kafka (MSK) 2) Kinesis Data Streams 3) Kinesis Data Firehose 4) Kinesis Data Analytics 5) Kinesis Video Streams 6) Glue | MSK is a fully managed service that makes it easy to build & run applications that use Apache Kafka to process streaming data. MSK KDS KDF KDA KVS Glue |
Data lake | Object storage | 1) S3 2) Lake Formation | Lake Formation is a service that makes it easy to set up a secure data lake in days. A data lake is a centralized, curated, & secured repository that stores all data, both in its original form & prepared for analysis. S3 LakeFormation |
Backup & archive | 1) S3 Glacier 2) Backup | S3 Glacier & S3 Glacier Deep Archive are a secure, durable, & extremely low-cost S3 cloud storage classes for data archiving & long-term backup. S3Glacier | |
Data catalog | 1) Glue 2) Lake Formation | Refer as above. | |
Third-party data | Data Exchange | Data Exchange makes it easy to find, subscribe to, & use third-party data in the cloud. DataExchange | |
Predictive analytics & machine learning | Frameworks & interfaces | Deep Learning AMIs | Deep Learning AMIs provide machine learning practitioners & researchers with the infrastructure & tools to accelerate deep learning in the cloud, at any scale. DeepLearningAMIs |
Platform services | SageMaker | SageMaker is a fully managed service that provides every developer & data scientist with the ability to build, train, & deploy machine learning (ML) models quickly. SageMaker |
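For the interactive-analytics row, a boto3 sketch that submits a standard-SQL query over data sitting in S3; the database, table, and output bucket names are hypothetical:

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Query data in place in S3 using standard SQL
run = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},                      # hypothetical
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical
)
print("Query started:", run["QueryExecutionId"])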
Containers
Use cases | Service | Description |
Store, encrypt, and manage container images | ECR | Refer compute section |
Run containerized applications or build microservices | ECS | Refer compute section |
Manage containers with Kubernetes | EKS | Refer compute section |
Run containers without managing servers | Fargate | Fargate is a serverless compute engine for containers that works with both ECS & EKS. Fargate |
Run containers with server-level control | EC2 | Refer compute section |
Containerize and migrate existing applications | App2Container | App2Container (A2C) is a command-line tool for modernizing .NET & Java applications into containerized applications. App2Container |
Quickly launch and manage containerized applications | Copilot | Copilot is a command line interface (CLI) that enables customers to quickly launch & easily manage containerized applications on AWS. Copilot |
Serverless
Category | Service | Description |
Compute | Lambda | Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume (see the sketch after this table). |
Lambda@Edge | Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance & reduces latency. | |
Fargate | Refer containers section | |
Storage | S3 | Refer storage section |
EFS | Refer storage section | |
Data stores | DynamoDB | DynamoDB is a key-value & document database that delivers single-digit millisecond performance at any scale. |
Aurora Serverless | Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL & PostgreSQL-compatible editions), where the database will automatically start up, shut down, & scale capacity up or down based on your application’s needs. | |
RDS Proxy | RDS Proxy is a fully managed, highly available database proxy for RDS that makes applications more scalable, resilient to database failures, & more secure. | |
API Proxy | API Gateway | API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, & secure APIs at any scale. |
Application integration | SNS | SNS is a fully managed messaging service for both system-to-system & app-to-person (A2P) communication. |
SQS | SQS is a fully managed message queuing service that enables to decouple & scale microservices, distributed systems, & serverless applications. | |
AppSync | AppSync is a fully managed service that makes it easy to develop GraphQL APIs by handling the heavy lifting of securely connecting to data sources like AWS DynamoDB, Lambda. | |
EventBridge | EventBridge is a serverless event bus that makes it easy to connect applications together using data from apps, integrated SaaS apps, & AWS services. | |
Orchestration | Step Functions | Step Functions is a serverless function orchestrator that makes it easy to sequence Lambda functions & multiple AWS services into business-critical applications. |
Analytics | Kinesis | Kinesis makes it easy to collect, process, & analyze real-time, streaming data so one can get timely insights. |
Athena | Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. |
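To show how little code “serverless” can mean in practice, a minimal Python Lambda handler; the event shape is illustrative:

import json

def lambda_handler(event, context):
    # Lambda invokes this function once per event; there are no servers to manage
    name = event.get("name", "world")  # illustrative event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }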
Application Integration
Category | Service | Description |
Messaging | SNS | Reliable high throughput pub/sub, SMS, email, and mobile push notifications |
SQS | Message queue that sends, stores, and receives messages between application components at any volume (see the sketch after this table) | |
MQ | Message broker for Apache ActiveMQ that makes migration easy and enables hybrid architectures | |
Workflows | Step Functions | Coordinate multiple AWS services into serverless workflows so you can build and update apps quickly |
API management | API Gateway | Create, publish, maintain, monitor, & secure APIs at any scale for serverless workloads & web apps |
AppSync | Create a flexible API to securely access, manipulate, & combine data from one or more data sources | |
Event bus | EventBridge | Build an event-driven architecture that connects application data from your own apps, SaaS, & AWS services |
AppFlow | Automate the flow of data between SaaS applications & AWS services at nearly any scale, without code. |
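A small boto3 sketch of the SQS row: one component enqueues a message, another receives and deletes it. The queue name is hypothetical:

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]  # hypothetical name

# Producer side: enqueue a message
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

# Consumer side: long-poll, process, then delete
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in resp.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])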
Management & Governance Services
Category | Service | Description |
Enable | Control Tower | The easiest way to set up and govern a new, secure multi-account AWS environment. ControlTower |
Organizations | Organizations helps centrally govern environment as you grow & scale workloads on AWS Organizations | |
Well-Architected Tool | Well-Architected Tool helps review the state of workloads & compares them to the latest AWS architectural best practices. WATool | |
Budgets | Budgets allows to set custom budgets to track cost & usage from the simplest to the most complex use cases. Budgets | |
License Manager | License Manager makes it easier to manage software licenses from software vendors such as Microsoft, SAP, Oracle, & IBM across AWS & on-premises environments. LicenseManager | |
Provision | CloudFormation | CloudFormation enables the user to design & provision AWS infrastructure deployments predictably & repeatedly. CloudFormation |
Service Catalog | Service Catalog allows organizations to create & manage catalogs of IT services that are approved for use on AWS. ServiceCatalog | |
OpsWorks | OpsWorks presents a simple and flexible way to create and maintain stacks and applications. OpsWorks | |
Marketplace | Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, & deploy software that runs on AWS. Marketplace | |
Operate | CloudWatch | CloudWatch offers a reliable, scalable, & flexible monitoring solution that is easy to get started with. CloudWatch |
CloudTrail | CloudTrail is a service that enables governance, compliance, operational auditing, & risk auditing of AWS account. CloudTrail | |
Config | Refer security section. Config | |
Systems Manager | Use Systems Manager to plan, monitor, & automate administration tasks on your AWS resources. SystemsManager | |
Cost & usage report | Refer cost management section | |
Cost explorer | Refer cost management section | |
Managed Services | Operate your AWS infrastructure on your behalf. ManagedServices | |
X-Ray | Refer developer tools section. X-Ray |
AWS Recommended security best practices
Turn on multifactor authentication for the “root” account |
Turn on CloudTrail log file validation. |
Enable CloudTrail multi-region logging. |
Integrate CloudTrail with CloudWatch. |
Enable access logging for CloudTrail S3 buckets. |
Enable access logging for Elastic Load Balancer (ELB). |
Enable Redshift audit logging. |
Enable Virtual Private Cloud (VPC) flow logging. |
Require multifactor authentication (MFA) to delete CloudTrail buckets |
Enable CloudTrail logging across all AWS services. |
Turn on multi-factor authentication for IAM users. |
Enable IAM users for multi-mode access. |
Attach IAM policies to groups or roles |
Rotate IAM access keys regularly, and standardize on the selected number of days |
Set up a strict password policy. |
Set the password expiration period to 90 days and prevent reuse (see the sketch after this list). |
Don’t use expired SSL/TLS certificates |
Use HTTPS for CloudFront distributions. |
Restrict access to CloudTrail bucket. |
Encrypt CloudTrail log files at rest |
Encrypt Elastic Block Store (EBS) volumes. |
Provision access to resources using IAM roles. |
Ensure EC2 security groups don’t have large ranges of ports open |
Configure EC2 security groups to restrict inbound access to EC2. |
Avoid using root user accounts. |
Use secure SSL ciphers when connecting between the client and ELB. |
Use secure SSL versions when connecting between client and ELB. |
Use a standard naming (tagging) convention for EC2. |
Encrypt RDS. |
Ensure access keys are not being used with root accounts. |
Use secure CloudFront SSL versions. |
Enable the require_ssl parameter in all Redshift clusters. |
Rotate SSH keys periodically. |
Minimize the number of discrete security groups. |
Reduce number of IAM groups. |
Terminate unused access keys |
Disable access for inactive or unused IAM users |
Remove unused IAM access keys |
Delete unused SSH Public Keys |
Restrict access to AMIs. |
Restrict access to EC2 security groups. |
Restrict access to RDS instances. |
Restrict access to Redshift clusters. |
Restrict outbound access. |
Disallow unrestricted ingress access on uncommon ports. |
Restrict access to well-known ports such as CIFS, FTP, ICMP, SMTP, SSH, Remote desktop |
Inventory & categorize all existing custom apps by the types of data stored, compliance requirements & possible threats they face. |
Involve IT security throughout the development process. |
Grant as few privileges as possible to application users. |
Enforce a single set of data loss prevention policies across custom applications and all other cloud services. |
Encrypt highly sensitive data such as protected health information (PHI) or personally identifiable information (PII). |
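Several of the IAM items above (strict password policy, 90-day expiration, reuse prevention) can be enforced in one call. A boto3 sketch follows; the specific thresholds are reasonable assumptions, not official requirements:

import boto3

iam = boto3.client("iam")

# Enforce a strict account password policy: 90-day expiration plus reuse prevention
iam.update_account_password_policy(
    MinimumPasswordLength=14,          # assumed threshold
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    MaxPasswordAge=90,                 # expire passwords after 90 days
    PasswordReusePrevention=24,        # assumed reuse-prevention depth
)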
AWS RE:INVENT 2021 – LATEST PRODUCTS AND SERVICES ANNOUNCED:
1- Read For Me
Read For Me launched at the 2021 AWS re:Invent Builders’ Fair in Las Vegas: a web application that helps the visually impaired ‘hear’ documents. With the help of AI services such as Amazon Textract, Amazon Comprehend, Amazon Translate, and Amazon Polly, utilizing an event-driven architecture and serverless technology, users upload a picture of a document, or anything with text, and within a few seconds ‘hear’ that document in their chosen language.

2- Delivering code and architectures through AWS Proton and Git
Infrastructure operators are looking for ways to centrally define and manage the architecture of their services, while developers need to find a way to quickly and safely deploy their code. In this session, learn how to use AWS Proton to define architectural templates and make them available to development teams in a collaborative manner. Also, learn how to enable development teams to customize their templates so that they fit the needs of their services.
3- Accelerate front-end web and mobile development with AWS Amplify
User-facing web and mobile applications are the primary touchpoint between organizations and their customers. To meet the ever-rising bar for customer experience, developers must deliver high-quality apps with both foundational and differentiating features. AWS Amplify helps front-end web and mobile developers build faster front to back. In this session, review Amplify’s core capabilities like authentication, data, and file storage and explore new capabilities, such as Amplify Geo and extensibility features for easier app customization with AWS services and better integration with existing deployment pipelines. Also learn how customers have been successful using Amplify to innovate in their businesses.
4- Train ML models at scale with Amazon SageMaker, featuring Aurora
Today, AWS customers use Amazon SageMaker to train and tune millions of machine learning (ML) models with billions of parameters. In this session, learn about advanced SageMaker capabilities that can help you manage large-scale model training and tuning, such as distributed training, automatic model tuning, optimizations for deep learning algorithms, debugging, profiling, and model checkpointing, so that even the largest ML models can be trained in record time for the lowest cost. Then, hear from Aurora, a self-driving vehicle technology company, on how they use SageMaker training capabilities to train large perception models for autonomous driving using massive amounts of images, video, and 3D point cloud data.
AWS RE:INVENT 2020 – LATEST PRODUCTS AND SERVICES ANNOUNCED:
1- Modernize log analytics with Amazon Elasticsearch Service
4- Amazon Location Service: Enable apps with location features
5- Automate, track, and manage tasks with Amazon Connect Tasks
6- Solve customer issues quickly with Amazon Connect Wisdom
7- Introducing Amazon Managed Service for Prometheus:
Prometheus is a popular open-source monitoring and alerting solution optimized for container environments. Customers love Prometheus for its active open-source community and flexible query language, using it to monitor containers across AWS and on-premises environments. Amazon Managed Service for Prometheus is a fully managed Prometheus-compatible monitoring service. In this session, learn how you can use the same open-source Prometheus data model, existing instrumentation, and query language to monitor performance with improved scalability, availability, and security without having to manage the underlying infrastructure.
AWS CloudShell is a free, browser-based shell available from the AWS console that provides a simple way to interact with AWS resources through the AWS command-line interface (CLI). In this session, see an overview of both AWS CloudShell and the AWS CLI, which when used together are the fastest and easiest ways to automate tasks, write scripts, and explore new AWS services. Also, see a demo of both services and how to quickly and easily get started with each.
12- AWS Fault Injection Simulator: Fully managed chaos engineering service
Increase availability with AWS observability solutions
To provide access to critical resources when needed and also limit the potential financial impact of an application outage, a highly available application design is critical. In this session, learn how you can use Amazon CloudWatch and AWS X-Ray to increase the availability of your applications. Join this session to learn how AWS observability solutions can help you proactively detect, efficiently investigate, and quickly resolve operational issues. All of which help you manage and improve your application’s availability.
Securing your Amazon EKS applications: Best practices
Security is critical for your Kubernetes-based applications. Join this session to learn about the security features and best practices for Amazon EKS. This session covers encryption and other configurations and policies to keep your containers safe.
Join Dr. Werner Vogels at 8:00AM (PST) as he goes behind the scenes to show how Amazon is solving today’s hardest technology problems. Based on his experience working with some of the largest and most successful applications in the world, Dr. Vogels shares his insights on building truly resilient architectures and what that means for the future of software development.
Containers
Getting an insight into your Kubernetes applications
Do you need to know what’s happening with your applications that run on Amazon EKS? In this session, learn how you can combine open-source tools, such as Prometheus and Grafana, with Amazon CloudWatch using CloudWatch Container Insights. Come to this session for a demo of Prometheus metrics with Container Insights.
AWS Copilot: Simplifying container development
The hard part is done. You and your team have spent weeks poring over pull requests, building microservices and containerizing them. Congrats! But what do you do now? How do you get those services on AWS? How do you manage multiple environments? How do you automate deployments? AWS Copilot is a new command line tool that makes building, developing, and operating containerized applications on AWS a breeze. In this session, learn how AWS Copilot can help you and your team manage your services and deploy them to production, safely and delightfully.
Securing your Amazon EKS applications: Best practices
Security is critical for your Kubernetes-based applications. Join this session to learn about the security features and best practices for Amazon EKS. This session covers encryption and other configurations and policies to keep your containers safe.
GitOps compliant: How CommBank multiplied Amazon EKS clusters
In this session, learn how the Commonwealth Bank of Australia (CommBank) built a platform to run containerized applications in a regulated environment and then replicated it across multiple departments using Amazon EKS, AWS CDK, and GitOps. This session covers how to manage multiple multi-team Amazon EKS clusters across multiple AWS accounts while ensuring compliance and observability requirements and integrating Amazon EKS with AWS Identity and Access Management, Amazon CloudWatch, AWS Secrets Manager, Application Load Balancer, Amazon Route 53, and AWS Certificate Manager.
Getting up and running with Amazon EKS
Amazon EKS is a fully managed service that makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Join this session to learn about how Verizon runs its core applications on Amazon EKS at scale. Verizon also discusses how it worked with AWS to overcome several post-Amazon EKS migration challenges and ensured that the platform was robust.
Developing CI/CD pipelines with Amazon ECS and AWS Fargate
Containers have helped revolutionize modern application architecture. While managed container services have enabled greater agility in application development, coordinating safe deployments and maintainable infrastructure has become more important than ever. This session outlines how to integrate CI/CD best practices into deployments of your Amazon ECS and AWS Fargate services using pipelines and the latest in AWS developer tooling.
Securing your Amazon ECS applications: Best practices
With Amazon ECS, you can run your containerized workloads securely and with ease. In this session, learn how to utilize the full spectrum of Amazon ECS security features and its tight integrations with AWS security features to help you build highly secure applications.
Optimize costs and manage spend for containerized applications
Do you have to budget your spend for container workloads? Do you need to be able to optimize your spend in multiple services to reduce waste? If so, this session is for you. It walks you through how you can use AWS services and configurations to improve your cost visibility. You learn how you can select the best compute options for your containers to maximize utilization and reduce duplication. This combined with various AWS purchase options helps you ensure that you’re using the best options for your services and your budget.
AWS Fargate: Are serverless containers right for you?
You have a choice of approach when it comes to provisioning compute for your containers. Some users prefer to have more direct control of their instances, while others could do away with the operational heavy lifting. AWS Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. This session explores the benefits and considerations of running on Fargate or directly on Amazon EC2 instances. You hear about new and upcoming features and learn how Amenity Analytics benefits from the serverless operational model.
Containers at AWS: More options and power than ever before
Are you confused by the many choices of containers services that you can run on AWS? This session explores all your options and the advantages of each. Whether you are just beginning to learn Docker or are an expert with Kubernetes, join this session to learn how to pick the right services that would work best for you.
Modernizing with containers
Leading containers migration and modernization initiatives can be daunting, but AWS is making it easier. This session explores architectural choices and common patterns, and it provides real-world customer examples. Learn about core technologies to help you build and operate container environments at scale. Discover how abstractions can reduce the pain for infrastructure teams, operators, and developers. Finally, hear the AWS vision for how to bring it all together with improved usability for more business agility.
Improving observability with AWS App Mesh and Amazon ECS
As the number of services grow within an application, it becomes difficult to pinpoint the exact location of errors, reroute traffic after failures, and safely deploy code changes. In this session, learn how to integrate AWS App Mesh with Amazon ECS to export monitoring data and implement consistent communications control logic across your application. This makes it easy to quickly pinpoint the exact locations of errors and automatically reroute network traffic, keeping your container applications highly available and performing well.
Best practices for containerizing legacy applications
Enterprises are continually looking to develop new applications using container technologies and leveraging modern CI/CD tools to automate their software delivery lifecycles. This session highlights the types of applications and associated factors that make a candidate suitable to be containerized. It also covers best practices that can be considered as you embark on your modernization journey.
Looking at Amazon EKS through a networking lens
Because of its security, reliability, and scalability capabilities, Amazon Elastic Kubernetes Service (Amazon EKS) is used by organizations for their most sensitive and mission-critical applications. This session focuses on how Amazon EKS networking works with an Amazon VPC and how to expose your Kubernetes application using Elastic Load Balancing load balancers. It also looks at options for more efficient IP address utilization.
AWS networking best practices in large-scale migrations
Network design is a critical component in your large-scale migration journey. This session covers some of the real-world networking challenges faced when migrating to the cloud. You learn how to overcome these challenges by diving deep into topics such as establishing private connectivity to your on-premises data center and accelerating data migrations using AWS Direct Connect/Direct Connect gateway, centralizing and simplifying your networking with AWS Transit Gateway, and extending your private DNS into the cloud. The session also includes a discussion of related best practices.
Innovating on AWS in a 5G world
5G will be the catalyst for the next industrial revolution. In this session, come learn about key technical use cases for different industry segments that will be enabled by 5G and related technologies, and hear about the architectural patterns that will support these use cases. You also learn about AWS-enabled 5G reference architectures that incorporate AWS services.
How to choose the right instance type for ML inference
AWS offers a breadth and depth of machine learning (ML) infrastructure you can use through either a do-it-yourself approach or a fully managed approach with Amazon SageMaker. In this session, explore how to choose the proper instance for ML inference based on latency and throughput requirements, model size and complexity, framework choice, and portability. Join this session to compare and contrast compute-optimized CPU-only instances, such as Amazon EC2 C4 and C5; high-performance GPU instances, such as Amazon EC2 G4 and P3; cost-effective variable-size GPU acceleration with Amazon Elastic Inference; and highest performance/cost with Amazon EC2 Inf1 instances powered by custom-designed AWS Inferentia chips.
Architectural patterns & best practices for workloads on VMware Cloud on AWS
When it comes to architecting your workloads on VMware Cloud on AWS, it is important to understand design patterns and best practices. Come join this session to learn how you can build well-architected cloud-based solutions for your VMware workloads. This session covers infrastructure designs with native AWS service integrations across compute, networking, storage, security, and operations. It also covers the latest announcements for VMware Cloud on AWS and how you can use these new features in your current architecture.
The cutover: Moving your traffic to the cloud
One of the most critical phases of executing a migration is moving traffic from your existing endpoints to your newly deployed resources in the cloud. This session discusses practices and patterns that can be leveraged to ensure a successful cutover to the cloud. The session covers preparation, tools and services, cutover techniques, rollback strategies, and engagement mechanisms to ensure a successful cutover.
AWS DeepRacer is the fastest way to get rolling with machine learning. Developers of all skill levels can get hands-on, learning how to train reinforcement learning models in a cloud based 3D racing simulator. Attend a session to get started, and then test your skills by competing for prizes and glory in an exciting autonomous car racing experience throughout re:Invent!
AWS DeepRacer gives you an interesting and fun way to get started with reinforcement learning (RL). RL is an advanced machine learning (ML) technique that takes a very different approach to training models than other ML methods. Its super power is that it learns very complex behaviors without requiring any labeled training data, and it can make short-term decisions while optimizing for a longer-term goal. AWS DeepRacer makes it fast and easy to build models in Amazon SageMaker and train, test, and iterate quickly and easily on the track in the AWS DeepRacer 3D racing simulator.
Decoupling serverless workloads with Amazon EventBridge
Event-driven architecture can help you decouple services and simplify dependencies as your applications grow. In this session, you learn how Amazon EventBridge provides new options for developers who are looking to gain the benefits of this approach.
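As a sketch of the decoupling idea, a producer publishes a custom event to the default bus and never needs to know who consumes it; the source and detail fields are hypothetical:

import json
import boto3

events = boto3.client("events", region_name="us-east-1")

# Publish a custom event; EventBridge rules route it to any subscribed targets
events.put_events(
    Entries=[{
        "Source": "myapp.orders",          # hypothetical event source
        "DetailType": "OrderPlaced",
        "Detail": json.dumps({"order_id": 42}),
        "EventBusName": "default",
    }]
)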
Deep dive on Amazon Timestream
Amazon Timestream is a fast, scalable, and serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day at as little as one-tenth the cost of relational databases. In this session, dive deep on Amazon Timestream features and capabilities, including its serverless automatic scaling architecture, its storage tiering that simplifies your data lifecycle management, its purpose-built query engine that lets you access and analyze recent and historical data together, and its built-in time series analytics functions that help you identify trends and patterns in your data in near-real time.
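A minimal boto3 write sketch for Timestream; the database, table, and dimension names are hypothetical:

import time
import boto3

tsw = boto3.client("timestream-write", region_name="us-east-1")

# One time-series record: dimensions identify the series, the measure is the value
tsw.write_records(
    DatabaseName="iot",                # hypothetical database
    TableName="sensor_readings",       # hypothetical table
    Records=[{
        "Dimensions": [{"Name": "device_id", "Value": "sensor-01"}],
        "MeasureName": "temperature",
        "MeasureValue": "22.5",
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),  # milliseconds since epoch
    }],
)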
Accelerating outcomes and migrations with Savings Plans
Savings Plans is a flexible pricing model that allows you to save up to 72 percent on Amazon EC2, AWS Fargate, and AWS Lambda. Many AWS users have adopted Savings Plans since its launch in November 2019 for its simplicity, savings, ease of use, and flexibility. In this session, learn how many organizations use Savings Plans to drive more migrations and business outcomes. Hear from Comcast on their compute transformation journey to the cloud and how it started with Reserved Instances (RIs). As their cloud usage evolved, they adopted Savings Plans to drive business outcomes such as new architecture patterns.
Learn how teams at Amazon rapidly release features at scale
The ability to deploy only configuration changes, separate from code, means you do not have to restart the applications or services that use the configuration and changes take effect immediately. In this session, learn best practices used by teams within Amazon to rapidly release features at scale. Learn about a pattern that uses AWS CodePipeline and AWS AppConfig that will allow you to roll out application configurations without taking applications out of service. This will help you ship features faster across complex environments or regions.
Top-paying Cloud certifications:
- Google Certified Professional Cloud Architect — $175,761/year
- AWS Certified Solutions Architect – Associate — $149,446/year
- Azure/Microsoft Cloud Solution Architect — $141,748/year
- Google Cloud Associate Engineer — $145,769/year
- AWS Certified Cloud Practitioner — $131,465/year
- Microsoft Certified: Azure Fundamentals — $126,653/year
- Microsoft Certified: Azure Administrator Associate — $125,993/year





I Passed AWS CCP Testimonials

I just passed my AWS CCP!!!
(Source: r/AWSCertifications)
Study Materials and Timeline:
I watched (binged) the A Cloud Guru course in two days and did the 6 practice exams over a week. I was originally only getting 70%s on the exams, but continued doing them in my free time (to the point where I’d have 15 minutes and knock one out on my phone lol) and started getting 90%s. It’s a mix of knowledge vs. memorization tbh. Just make sure you read why your answers are wrong.
I don’t really have a huge IT background, although I will note that I’ve worked in a DevOps environment for 1 1/2 years, so I do use AWS to host our infrastructure. However, the exam is very high level compared to what I do and the services I use. I’m fairly certain that with zero knowledge/experience, someone could pass this within two weeks. AWS is also currently promoting a “get certified” challenge and is offering 50% off.
Best!
Resources:
A Cloud Guru Course:
AWS – Get AWS Certified: Cloud Practitioner Challenge:
Good Tool For AWS Certified Cloud Practitioner Exam Preparation
Went through the entire CloudAcademy course. Most of the info went out the other ear. Got a 67% on their final exam. Took the ExamPro free exam, got 69%.
Was going to take it last Saturday, but I bought TutorialDojo’s exams on Udemy. Did one Friday night, got a 50% and rescheduled it a week later to today Sunday.
Took 4 total TD exams. Got a 50%, 54%, 67%, and 64%. Even up until last night I hated the TD exams with a passion; I thought they covered way too much stuff that didn’t even pop up in the study guides I read. Their wording for some problems was also atrocious. But looking back, the bulk of my “studying” was going through their pretty well-written explanations, and their links to the white papers let me know what and where to read.
Not sure what score I got yet on the exam. As someone who always hated testing, I’m pretty proud of myself. I also had to take a dump really bad starting at around question 25. Thanks to TutorialsDojo Jon Bonso for completely destroying my confidence before the exam, forcing me to up my game. It’s better to walk in way over prepared than underprepared.
I would like to thank this community for the recommendations about exam preparation. It was wayyyy easier than I expected (also way easier than TD practice exams’ scenario-based questions – a lot less wordy on the real exam). I felt so unready before the exam that I rescheduled it twice. Quick tip: if you have limited time to prepare for this exam, I would recommend scheduling the exam beforehand so that you don’t fully procrastinate.
Resources:
-Stephane’s course on Udemy (I have seen people saying to skip hands-on videos but I found them extremely helpful to understand most of the concepts-so try to not skip those hands-on)
-Tutorials Dojo practice exams (I did only 3.5 practice tests out of 5 and already got 8-10 EXACTLY worded questions on my real exam)
Previous Aws knowledge:
-Very little to no experience (deployed my group’s app to cloud via Elastic beanstalk in college-had 0 clue at the time about what I was doing-had clear guidelines)
Preparation duration: -2 weeks (honestly watched videos for 12 days and then went over summary and practice tests on the last two days)
Links to resources:
https://www.udemy.com/course/aws-certified-cloud-practitioner-new/
https://tutorialsdojo.com/courses/aws-certified-cloud-practitioner-practice-exams/
I used Stephane Maarek on Udemy. Purchased his course and the 6 Practice Exams. Also got Neal Davis’ 500 practice questions on Udemy. I took Stephane’s class over 2 days, then spent the next 2 weeks going over the tests (3~4 per day) till I was constantly getting over 80% – passed my exam with a 882.
What an adventure! I never really gave thought to getting a cert until one day it just dawned on me that it’s one of the few credentials that are globally accepted. So you can approach any company and basically prove you know what’s up on AWS 😀
Passed with two weeks of prep (after work and weekends)
Resources Used:
This was just a nice structured presentation that also gives you the powerpoint slides plus cheatsheets and a nice overview of what is said in each video lecture.
Udemy – AWS Certified Cloud Practitioner Practice Exams, created by Jon Bonso, Tutorials Dojo
These are some good prep exams, they ask the questions in a way that actually make you think about the related AWS Service. With only a few “Bullshit! That was asked in a confusing way” questions that popped up.
I took the CCP 2 days ago and got the pass notification right after submitting the answers. Within about the next 3 hours I got an email from Credly with the badge. This morning I got an official email from AWS congratulating me on passing; the score is much higher than I expected. I took Stephane Maarek’s CCP course and his 6 demo exams, then Neal Davis’ 500 questions as well. On the demo exams, I had 1 fail and the rest were passes with about 700–800. But in the real exam, I got 860. The questions in the real exam are somewhat less verbose IMO, but I don’t truly agree with some people I see on this sub saying that they are easier.
Just a little bit of sharing, now I’ll find something to continue ^^
Good luck with your own exams.
Passed the exam! Spent 25 minutes answering all the questions. Another 10 to review. I might come back and update this post with my actual score.
Background
– A year of experience working with AWS (e.g., EC2, Elastic Beanstalk, Route 53, and Amplify).
– Cloud development on AWS is not my strong suit. I just Google everything, so my knowledge is very spotty. Less so now since I studied for this exam.
Study stats
– Spent three weeks studying for the exam.
– Studied an hour to two every day.
– Solved 800-1000 practice questions.
– Took 450 screenshots of practice questions and technology/service descriptions as reference notes to quickly swift through on my phone and computer for review. Screenshots were of questions that I either didn’t know, knew but was iffy on, or those I believed I’d easily forget.
– Made 15-20 pages of notes. Chill. Nothing crazy. This is on A4 paper. Free-form note taking. With big diagrams. Around 60-80 words per page.
– I was getting low-to-mid 70%s on Neal Davis’s and Stephane Maarek’s practice exams. Highest score I got was an 80%.
– I got a 67(?)% on one of Stephane Maarek’s exams. The only sub-70% I ever got on any practice test. I got slightly anxious. But given how much harder Maarek’s exams are compared to the actual exam, the anxiety was undue.
– Finishing the practice exams on time was never a problem for me. I would finish all of them comfortably within 35 minutes.
Resources used
– AWS Cloud Practitioner Essentials on the AWS Training and Certification Portal
– AWS Certified Cloud Practitioner Practice Tests (Book) by Neal Davis
– 6 Practice Exams | AWS Certified Cloud Practitioner CLF-C01 by Stephane Maarek**
– Certified Cloud Practitioner Course by Exam Pro (Paid Version)*
– One or two free practice exams found by a quick Google search
*Regarding Exam Pro: I went through about 40% of the video lectures. I went through all the videos in the first few sections but felt that watching the lectures was too slow and laborious even at 1.5-2x speed. (The creator, for the most part, reads off of the slides, adding brief comments here and there.) So, I decided to only watch the video lectures for sections I didn’t have a good grasp on. (I believe the video lectures provided in the course are just split versions of the full length course available for free on YouTube under the freeCodeCamp channel, here.) The online course provides five practice exams. I did not take any of them.
**Regarding Stephane Maarek: I only took his practice exams. I did not take his study guide course.
Notes
– My study regimen (i.e., an hour to two every day for three weeks) was overkill.
– The questions on the practice exams created by Neal Davis and Stephane Maarek were significantly harder than those on the actual exam. I believe I could’ve passed without touching any of these resources.
– I retook one or two practice exams out of the 10+ I’ve taken. I don’t think there’s a need to retake the exams as long as you are diligent about studying the questions and underlying concepts you got wrong. I reviewed all the questions I missed on every practice exam the day before.
What would I do differently?
– Focus on practice tests only. No video lectures.
– Focus on the technologies domain. You can intuit your way through questions in the other domains.
– Chill
I thank you all for helping me through this process! Couldn’t have done it without all of the recommendations and guidance on this page.
Background: I am a back-end developer that works 12 hours a day for corporate America, so no time to study (or do anything) but I made it work.
Could I have probably gone for the SAA first? Yeah, but I wanted to prove to myself that I could do it. I studied for about a month. I used Maarek’s Udemy course at 1.5x speed and couldn’t recommend it more. I also used his practice exams. I’ll be honest: I took 5 practice exams and somehow managed to fail every single one in the mid-60s lol. Cleared the exam with an 800. The practice exams are WAY harder.
My 2 cents on must knows:
AWS Shared Security Model (who owns what)
Everything Billing (EC2 instance, S3, different support plans)
I had a few ML questions that caught me off guard
VPC concepts – e.g., subnets, NACLs, Transit Gateway
I studied solidly for two weeks, starting with Tutorials Dojo (which was recommended somewhere on here). I turned all of their vocabulary words and end of module questions into note cards. I did the same with their final assessment and one free exam.
During my second week, I studied the cards for anywhere from one to two hours a day, and I’d randomly watch videos on common exam questions.
The last thing I did was watch a 3 hr long video this morning that walks you through setting up AWS Instances. The visual of setting things up filled in a lot of holes.
I had some PSI software problems, and ended up getting started late. I was pretty dejected towards the end of the exam, and was honestly (and pleasantly) surprised to see that I passed.
Hopefully this helps someone. Keep studying and pushing through – if you know it, you know it. Even if you have a bad start. Cheers 🍻
- Amazon Aurora Global Database introduces support for up to 10 secondary Region clusters by aws@amazon.com (Recent Announcements) on May 21, 2025 at 9:55 pm
Amazon Aurora Global Database now supports adding up to 10 secondary Regions to your global cluster, further enhancing scalability and availability for globally distributed applications. With Global Database, a single Aurora cluster can span multiple AWS Regions, providing disaster recovery from Region-wide outages and enabling fast local reads for globally distributed applications. This launch increases the number of secondary Regions that can be added to a global cluster from the previously supported limit of up to 5 secondary Regions to up to 10 secondary Regions, providing a larger global footprint for operating your applications. See the documentation to learn more about Global Database. Amazon Aurora combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. To get started with Amazon Aurora, take a look at our getting started page.
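As a rough boto3 sketch of how a secondary Region cluster is attached to a global cluster (the identifiers, engine, and Region below are placeholders, not values from the announcement):

```python
import boto3

SECONDARY_REGION = "eu-west-1"  # placeholder Region

rds = boto3.client("rds", region_name=SECONDARY_REGION)

# Creating a cluster in another Region with GlobalClusterIdentifier set
# attaches it to the existing global cluster as a read-only secondary.
rds.create_db_cluster(
    DBClusterIdentifier="my-secondary-cluster",   # placeholder
    Engine="aurora-postgresql",                   # placeholder engine
    GlobalClusterIdentifier="my-global-cluster",  # placeholder
)
```

With this launch, the same call can be repeated for up to 10 secondary Regions per global cluster.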
- Announcing EKS Dashboard, a multi-cluster view of Kubernetes infrastructure across AWS Regions and your AWS Organizations by aws@amazon.com (Recent Announcements) on May 21, 2025 at 7:45 pm
Amazon Elastic Kubernetes Service (EKS) announces the general availability of EKS Dashboard, a new feature that provides centralized visibility into Kubernetes infrastructure across multiple AWS Regions and accounts. EKS Dashboard provides comprehensive insights into your Kubernetes clusters, enabling operational planning and governance. You can access the Dashboard in EKS console through AWS Organizations' management and delegated administrator accounts. As you expand your Kubernetes footprint to address operational and strategic objectives, such as improving availability, ensuring business continuity, isolating workloads, and scaling infrastructure, the EKS Dashboard provides centralized visibility across your Kubernetes infrastructure. You can now visualize your entire Kubernetes infrastructure without switching between AWS Regions or accounts, gaining aggregated insights into clusters, managed node groups, and EKS add-ons. This includes clusters running specific Kubernetes versions, support status, upcoming end of life auto-upgrades, managed node group AMI versions, EKS add-on versions, and more. This centralized approach supports more effective oversight, auditability, and operational planning for your Kubernetes infrastructure. The EKS Dashboard can be accessed in the us-east-1 AWS Region, aggregating EKS cluster metadata from all commercial AWS Regions. To get started, see the EKS user guide.
- Amazon EC2 Mac instances now support configurable System Integrity Protection (SIP) settings by aws@amazon.com (Recent Announcements) on May 21, 2025 at 6:10 pm
Starting today, customers can now configure System Integrity Protection (SIP) settings on their EC2 Mac instances, providing greater flexibility and control over their development environments. SIP is a critical macOS security feature that helps prevent unauthorized code execution and system-level modifications. This enhancement enables developers to temporarily disable SIP for development and testing purposes, install and validate system extensions and DriverKit drivers, optimize testing performance through selective program management, and maintain security compliance while meeting development requirements. The new SIP configuration capability is available across all EC2 Mac instance families, including both Intel (x86) and Apple silicon platforms. Customers can access this feature in all AWS regions where EC2 Mac instances are currently supported. To learn more about this feature, please visit the documentation here and our launch blog here. To learn more about EC2 Mac instances, click here.
- AWS DMS introduces Data Resync for improved migration accuracy by aws@amazon.com (Recent Announcements) on May 21, 2025 at 5:00 pm
AWS Database Migration Service (AWS DMS) now supports Data Resync, a new feature that automatically corrects data inconsistencies identified during validation between source and target databases. Data Resync integrates with your existing DMS migration tasks and supports both Full Load and Change Data Capture (CDC) phases. It uses your current task settings—including connection configurations, table mappings, and transformations—to apply corrections automatically, helping ensure accurate and reliable migrations without manual intervention. With Data Resync, AWS DMS can detect and resolve common data issues, such as missing records, duplicate entries, or mismatched values, based on validation results. Data Resync is available starting with AWS DMS replication engine version 3.6.1, and currently supports migration paths from Oracle and SQL Server to PostgreSQL. For detailed information on how Data Resync enhances migration accuracy, please refer to the AWS DMS Technical Documentation.
- AWS Cost Anomaly Detection enables advanced alerting through AWS User Notifications by aws@amazon.com (Recent Announcements) on May 21, 2025 at 5:00 pm
AWS Cost Anomaly Detection now integrates with AWS User Notifications (via Amazon EventBridge), enabling customers to create enhanced alerting capabilities in the AWS User Notifications console. This integration lets customers configure sophisticated alert rules based on service, account, or other cost dimensions to identify and respond to unexpected spending changes faster. Using AWS User Notifications, customers can receive immediate or aggregated alerts through multiple channels including email, AWS Chatbot, and the AWS Console Mobile Application, while maintaining a centralized history of alert notifications. This new capability allows customers to customize their cost monitoring by creating alert rules in AWS User Notifications. Now customers can configure rules with higher thresholds for machine learning services that naturally experience cost spikes during training, while setting lower thresholds for stable services like databases where small changes might indicate configuration issues. Customers also benefit from verified contact management, ensuring alerts reach the right teams through validated delivery channels that can be reused across multiple alert configurations. These enhancements are available in all AWS Regions, except the AWS GovCloud (US) Regions and the China Regions. To learn more about setting up alerts in AWS User Notifications and getting started, visit the AWS Cost Anomaly Detection product page and documentation.
- AWS Deadline Cloud now supports Foundry Nuke version 16 by aws@amazon.com (Recent Announcements) on May 21, 2025 at 5:00 pm
Starting today, AWS Deadline Cloud will support the latest version of Foundry Nuke, a powerful compositing tool widely used for visual effects and post-production workflows. AWS Deadline Cloud is a fully managed service that simplifies render management for teams creating computer-generated graphics and visual effects, for films, television and broadcasting, web content, and design. With support for Nuke version 16, you can access the latest improvements for Nuke while leveraging AWS Deadline Cloud's managed infrastructure for your rendering pipelines, giving you the ability to create high-quality content using cutting-edge compositing features. This new version is now available in all AWS regions where AWS Deadline Cloud is currently offered. To learn more about AWS Deadline Cloud and how to leverage Nuke version 16 in your workflows, visit the AWS Deadline Cloud documentation.
- AWS Marketplace Sellers can now receive disbursements for partially paid invoices by aws@amazon.com (Recent Announcements) on May 21, 2025 at 5:00 pm
AWS Marketplace now supports partial disbursements for AWS Marketplace invoices transacted through the AWS Inc., Europe, Middle East and Africa (EMEA), Australia (AU), and Japan (JP) Marketplace Operators (MPOs), allowing sellers to receive funds as buyers make partial payments on AWS Marketplace invoices. AWS Marketplace now automatically processes partial disbursements based on the invoice amount paid by the buyer, aligned with the seller's disbursement schedule configured in the AWS Marketplace Management Portal (AMMP). Previously, sellers had to wait for complete invoice payments by buyers before receiving disbursements for invoices. Sellers can now access funds faster through disbursement of partial payments without waiting for buyers to pay invoices in full. Enhancements have also been made to AWS Marketplace Seller reporting to provide better visibility into partially disbursed invoices. For more details on the AWS Marketplace Seller reporting experience, visit the billed revenue dashboard and collections and disbursement dashboard guides. Partial disbursements are available to AWS Marketplace sellers who transact through the AWS Inc., EMEA, AU, and JP MPOs. For more information about partial disbursements for AWS Marketplace invoices and updates to seller dashboards, access the partial disbursements documentation.
- Amazon RDS now supports easy retrieval of engine lifecycle support dates by aws@amazon.com (Recent Announcements) on May 21, 2025 at 5:00 pm
Amazon RDS announces a new capability that helps you view engine lifecycle support dates for your databases. This new feature provides a centralized and convenient place to access engine support dates, offering greater control over your database lifecycle management. You can view start and end dates for RDS Standard Support and RDS Extended Support for RDS and Aurora major engine versions through the RDS API or AWS CLI. If RDS Extended Support is available for an engine version then both RDS Standard and Extended Support dates are shown. If RDS Extended Support is not available for an engine version, the response includes only RDS Standard Support dates. With this feature you can view lifecycle support dates for RDS MySQL, RDS MariaDB, RDS PostgreSQL, Aurora MySQL, and Aurora PostgreSQL engines. To learn more, visit Amazon RDS User Guide and Amazon Aurora User Guide. Amazon RDS makes it simple to set up, operate, and scale database deployments in the cloud. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console. Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. To get started with Amazon Aurora, take a look at our getting started page.
- AWS Transfer Family announces ML-KEM quantum-resistant key exchange for SFTP by aws@amazon.com (Recent Announcements) on May 21, 2025 at 5:00 pm
AWS Transfer Family now supports ML-KEM (FIPS-203), a post-quantum algorithm standardized by the National Institute of Standards and Technology (NIST), for SFTP file transfers. Quantum-resistant public-key exchange helps protect transfers of data files that require long-term confidentiality against "harvest now, decrypt later" threats. In such scenarios, an adversary may be recording present-day traffic for decrypting once cryptanalytically relevant quantum computers become available. AWS Transfer Family offers fully managed support for the transfer of files over SFTP, AS2, FTPS, FTP, and web browser-based transfers directly into and out of AWS storage services. With this launch, you can now use post-quantum (PQ) hybrid security policies that combine classical Elliptic Curve Diffie-Hellman with quantum-resistant ML-KEM key exchanges between your AWS Transfer Family SFTP endpoints and clients like OpenSSH, PuTTY, and JSch that support PQ algorithms. When using a PQ hybrid policy, your Transfer Family SFTP server preserves the standard connection options supported by most clients today, while leveraging the most secure PQ connection options with clients that support quantum-resistant key exchange. ML-KEM quantum-resistant key exchange for SFTP file transfers is supported in all AWS Regions where AWS Transfer Family is available. Older PQ key exchange methods, which included ML-KEM's pre-standardized version (Kyber) introduced in AWS Transfer Family in 2023, will be removed from existing policies and no longer be included in the new PQ policy. To learn more about using PQ security policies to enable quantum-resistant key exchange, visit our documentation.
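A minimal boto3 sketch of switching an SFTP server to one of the new PQ hybrid policies; update_server and list_security_policies are real Transfer Family APIs, but the policy name below is a placeholder since the announcement does not spell out the exact identifier:

```python
import boto3

transfer = boto3.client("transfer")

# Enumerate available security policies to find the actual PQ hybrid name.
for name in transfer.list_security_policies()["SecurityPolicyNames"]:
    print(name)

transfer.update_server(
    ServerId="s-1234567890abcdef0",                          # placeholder server ID
    SecurityPolicyName="TransferSecurityPolicy-PQ-EXAMPLE",  # assumed policy name
)
```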
- Amazon RDS for Oracle now supports credential management with AWS Secrets Manager for databases using Oracle multitenant architecture by aws@amazon.com (Recent Announcements) on May 20, 2025 at 5:50 pm
Amazon RDS for Oracle now supports credential management with AWS Secrets Manager for databases that adopt Oracle multitenant architecture. Oracle multitenant architecture enables customers to consolidate data and code from multiple databases into one database by setting up a multitenant container database (CDB) that can include multiple pluggable databases (PDBs). With this launch, customers can use AWS Secrets Manager to manage user credentials for their tenant pluggable databases. Using AWS Secrets Manager to manage user credentials for tenant pluggable databases allows customers to automate regular password rotations, use AWS Identity and Access Management (IAM) for access control to authorized users, encrypt credentials using AWS Key Management Service (KMS), and enhance security posture by replacing the use of plaintext password in application code with programmatic calls to retrieve credentials from AWS Secrets Manager. RDS database management operations such as database restore from Amazon S3 or a snapshot and point-in-time recovery automatically use credentials managed in AWS Secrets Manager. To learn more about using AWS Secrets Manager with Amazon RDS for Oracle database with the CDB architecture, see the Amazon RDS documentation. When storing database secrets in AWS Secrets Manager, your AWS account incurs charges. For information about AWS Secrets Manager pricing and capabilities, visit the AWS Secrets Manager product page. This capability is available in all AWS Regions where Amazon RDS for Oracle and AWS Secrets Manager are available. For more information about regional availability, see the AWS Region table.
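The pattern of replacing plaintext passwords in application code with programmatic retrieval looks roughly like this (the secret name is a placeholder; RDS-managed secrets store credentials as a JSON document with username and password keys):

```python
import json

import boto3

secrets = boto3.client("secretsmanager")

# "my-tenant-pdb-user" is a hypothetical secret name for a PDB user.
payload = secrets.get_secret_value(SecretId="my-tenant-pdb-user")
credentials = json.loads(payload["SecretString"])

username = credentials["username"]
password = credentials["password"]  # never hard-code this in source
```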
- Amazon RDS for MariaDB now supports community MariaDB minor versions 10.5.29 and 10.6.22 by aws@amazon.com (Recent Announcements) on May 20, 2025 at 5:00 pm
Amazon Relational Database Service (Amazon RDS) for MariaDB now supports community MariaDB minor versions 10.5.29 and 10.6.22. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MariaDB, and to benefit from the bug fixes, performance improvements, and new functionality added by the MariaDB community. You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also leverage Amazon RDS Managed Blue/Green deployments for safer, simpler, and faster updates to your MariaDB instances. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green Deployments, in the Amazon RDS User Guide. Amazon RDS for MariaDB makes it straightforward to set up, operate, and scale MariaDB deployments in the cloud. Learn more about pricing details and regional availability at Amazon RDS for MariaDB. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
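Opting an existing instance into automatic minor version upgrades is a one-line change with boto3 (the instance identifier is a placeholder):

```python
import boto3

rds = boto3.client("rds")

# Upgrades are applied during the configured maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="my-mariadb-instance",  # placeholder
    AutoMinorVersionUpgrade=True,
)
```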
- Amazon EC2 High Memory instances now available in US East (Ohio) region by aws@amazon.com (Recent Announcements) on May 20, 2025 at 5:00 pm
Starting today, Amazon EC2 High Memory U-1 instances with 18TB of memory (u-18tb1.112xlarge) are available in the US East (Ohio) region. Customers can start using these new High Memory instances with On Demand and Savings Plan purchase options. Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see the Certified and Supported SAP HANA Hardware Directory. For information on how to get started with your SAP HANA migration to EC2 High Memory instances, view the Migrating SAP HANA on AWS to an EC2 High Memory Instance documentation. To hear from Steven Jones, GM for SAP on AWS on what this launch means for our SAP customers, you can read his launch blog.
- AWS Organizations now supports Internet Protocol Version 6 (IPv6) by aws@amazon.com (Recent Announcements) on May 20, 2025 at 5:00 pm
AWS Organizations customers can now use Internet Protocol version 6 (IPv6) addresses via our new dual-stack endpoints to connect to AWS Organizations over the public internet using IPv6, IPv4, or dual-stack clients. The existing AWS Organizations endpoints supporting IPv4 will remain available for backwards compatibility. To learn more about best practices for configuring IPv6 in your environment, visit the whitepaper on IPv6 in AWS. Support for IPv6 on AWS Organizations is available in the AWS Commercial Regions, the AWS GovCloud (US) Regions, and the China Regions.
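Using a dual-stack endpoint from an SDK is just an endpoint override. A sketch, assuming the hostname follows the usual AWS dual-stack naming convention (verify the exact endpoint in the Organizations documentation):

```python
import boto3

# Assumed dual-stack hostname; confirm it in the service documentation.
org = boto3.client(
    "organizations",
    region_name="us-east-1",
    endpoint_url="https://organizations.us-east-1.api.aws",
)

print(org.describe_organization()["Organization"]["Id"])
```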
- AWS Site-to-Site VPN Tunnel Endpoint Lifecycle Control is now available in AWS Europe (Milan) Region by aws@amazon.com (Recent Announcements) on May 20, 2025 at 5:00 pm
AWS Site-to-Site VPN Tunnel Endpoint Lifecycle Control is now available in the AWS Europe (Milan) Region, providing you with better visibility and control of your VPN tunnel maintenance updates. AWS Site-to-Site VPN is a fully-managed service that allows you to create a secure connection between your data center or branch office and your AWS resources using IP Security (IPSec) tunnels. Enabling the Tunnel Endpoint Lifecycle Control feature provides you with advance notice of upcoming maintenance updates to help you plan and minimize service disruptions for your VPN connections. It also gives you added flexibility to apply updates to your VPN tunnel endpoints at a time that best suits your business. To learn more, visit the documentation.
- The next generation of Amazon SageMaker is now available in an additional region by aws@amazon.com (Recent Announcements) on May 20, 2025 at 5:00 pm
The next generation of Amazon SageMaker is now available in AWS Europe (Stockholm). Amazon SageMaker is the center for all your data, analytics, and AI. From SageMaker Unified Studio, you can discover your data and put it to work using familiar AWS tools for model development, generative AI app development, data processing, and SQL analytics. Unified access to data is provided by Amazon SageMaker Lakehouse, and catalog and governance features are available via SageMaker Catalog (built on Amazon DataZone) to help you meet enterprise security requirements. For more information on AWS Regions where the next generation of Amazon SageMaker is available, see Supported regions. To get started, see the SageMaker overview and the SageMaker documentation.
- AWS service changes by aws@amazon.com (Recent Announcements) on May 20, 2025 at 5:00 pm
At AWS, we understand that the decision to end support for a service or feature significantly impacts customers. We approach such decisions only after careful consideration. When end of support is necessary, we provide customers detailed guidance on available alternatives and comprehensive support for migration, ensuring minimal disruption to customer operations. We understand that changes in availability can impact your operations. Our team is committed to supporting you through these transitions. For specific guidance, consult the relevant service documentation or contact AWS Support.
Services closing access to new customers: We are closing access to new customers for the following service on June 20th, 2025; existing customers will be able to continue to utilize the service: Amazon Timestream for LiveAnalytics.
Services announcing end of support: We will be ending support for the following services (review the specific end-of-support dates and migration paths for each): Amazon Pinpoint, AWS IQ, AWS IoT Analytics, AWS IoT Events, AWS SimSpace Weaver, AWS Panorama, Amazon Inspector Classic, Amazon Connect Voice ID, and AWS DMS Fleet Advisor.
Services and features reaching end of support: The following services and features have reached their end-of-support date and can no longer be accessed: AWS Private 5G and AWS DataSync Discovery.
To learn more about end-of-support dates, alternative solutions, and migration options, please visit the AWS Product Lifecycle Page.
- Amazon CloudWatch Application Signals introduces auto-monitor support for EKS workloads by aws@amazon.com (Recent Announcements) on May 20, 2025 at 5:00 pm
CloudWatch Application Signals, an application performance monitoring (APM) tool that simplifies health and performance monitoring for applications, now makes instrumentation easier for your EKS applications through a single configuration flag within the Amazon CloudWatch Observability add-on. Once enabled, you can gain access to pre-built, standardized dashboards within CloudWatch Application Signals and track application performance against key business or service-level objectives (SLOs), making it easier to troubleshoot issues for your EKS applications. SRE teams can now set up APM on CloudWatch on EKS by simply installing and configuring the Amazon CloudWatch EKS add-on, eliminating the need for manual coordination between teams and enabling rapid setup in just a few minutes. Once the add-on is installed and the flag is enabled, the add-on automatically discovers applications along with their dependencies. The add-on then instruments these applications with AWS Distro for OpenTelemetry (ADOT) SDKs to collect application metrics (throughput, latency, and errors), distributed traces, and logs after the applications restart. To get started, install or upgrade the Amazon CloudWatch EKS add-on to version v4.0.0-eksbuild.1 and enable Application Signals monitoring via the new configuration flag. All service workload applications deployed in EKS are monitored by default, but you can customize the configuration to focus on the specific services that matter most to your business. See the documentation to learn more. This enhancement is now available in all commercial AWS Regions where both EKS and CloudWatch Application Signals are supported. You can now opt in to the new bundled pricing for Application Signals. For pricing, see Amazon CloudWatch pricing.
- Announcing customer-initiated reboot migrations for EC2 Scheduled Events by aws@amazon.com (Recent Announcements) on May 20, 2025 at 5:00 pm
AWS now enables automatic instance migration when you reboot after receiving EC2 scheduled reboot event notifications. With customer-initiated reboot migrations, customers can act on scheduled reboot event notifications by rebooting the instance at a time of their choosing, which then automatically migrates the instance to different hardware and eliminates the pending scheduled reboot event. With today’s launch, Nitro instances with no local store disks are opted in to customer-initiated reboot migration by default. After receiving a scheduled reboot event notification, reboots initiated prior to the scheduled event will automatically migrate the instance. You can reboot your instance using the EC2 console or API. Customer-initiated reboot migrations for EC2 Scheduled Events are available in all commercial AWS Regions, the AWS GovCloud (US) Regions, and the China Regions. To learn more, see Manage Amazon EC2 instances scheduled for reboot in the AWS User Guide.
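A sketch of checking for a pending scheduled event and acting on it with boto3 (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder

# Scheduled events, including system-reboot, appear in instance status.
statuses = ec2.describe_instance_status(
    InstanceIds=[INSTANCE_ID], IncludeAllInstances=True
)["InstanceStatuses"]
events = statuses[0].get("Events", []) if statuses else []

# With this launch, a customer-initiated reboot before the scheduled time
# migrates the instance to new hardware and clears the pending event.
if any("reboot" in event.get("Code", "") for event in events):
    ec2.reboot_instances(InstanceIds=[INSTANCE_ID])
```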
- Announcing Amazon Bedrock Agents Metrics in CloudWatch by aws@amazon.com (Recent Announcements) on May 19, 2025 at 7:50 pm
Amazon Bedrock now offers comprehensive CloudWatch metrics support for Agents, enabling developers to monitor, troubleshoot, and optimize their agent-based applications with greater visibility. This new capability provides detailed runtime metrics for both InvokeAgent and InvokeInlineAgent operations, including invocation counts, latency measurements, token usage, and error rates, helping customers better understand their agents' performance in production environments. With CloudWatch metrics integration, developers can track critical performance indicators such as total processing time, time-to-first-token (TTFT), model latency, and token counts across different dimensions including operation type, model ID, and agent alias ARN. These metrics enable customers to identify bottlenecks, detect anomalies, and make data-driven decisions to improve their agents' efficiency and reliability. Customers can also set up CloudWatch alarms to receive notifications when metrics exceed specified thresholds, allowing for proactive management of their agent deployments. CloudWatch metrics for Amazon Bedrock Agents is now available in all AWS Regions where Amazon Bedrock is supported. To get started with monitoring your agents, ensure your IAM service role has the appropriate CloudWatch permissions. For more information about this feature and implementation details, visit the Amazon Bedrock documentation or refer to the CloudWatch User Guide for comprehensive monitoring best practices.
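As a sketch, an alarm on agent latency might be wired up as below; put_metric_alarm is the standard CloudWatch API, but the namespace, metric name, and dimension key are assumptions based on the announcement's description, so confirm them against the Bedrock documentation:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="bedrock-agent-high-latency",
    Namespace="AWS/Bedrock",         # assumed namespace
    MetricName="InvocationLatency",  # assumed metric name
    Dimensions=[
        # Assumed dimension key; the announcement mentions agent alias ARN.
        {"Name": "AgentAliasArn",
         "Value": "arn:aws:bedrock:us-east-1:123456789012:agent-alias/EXAMPLE"},
    ],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=5000.0,  # example threshold in milliseconds
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder
)
```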
- AWS CodeBuild adds support for new IAM condition keys by aws@amazon.com (Recent Announcements) on May 19, 2025 at 7:30 pm
AWS CodeBuild now supports new IAM condition keys enabling granular access control on CodeBuild’s resource-modifying APIs. The new condition keys cover most of CodeBuild’s API request contexts, including network settings, credential configurations and compute restrictions. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment. The new condition keys allow you to create IAM policies that better enforce your organizational policies on CodeBuild resources such as projects and fleets. For example, you can use codebuild:vpcConfig.vpcId condition keys to enforce the VPC connectivity settings on projects or fleets, codebuild:source.buildspec condition keys to prevent unauthorized modifications to project buildspec commands, and codebuild:computeConfiguration.instanceType condition keys to restrict which compute types your builds can use. The new IAM condition keys are available in all regions where CodeBuild is offered. For more information about the AWS Regions where CodeBuild is available, see the AWS Regions page. For a full list of new CodeBuild IAM condition keys, please visit our documentation. To learn more about how to get started with CodeBuild, visit the AWS CodeBuild product page.
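A sketch of a policy using the codebuild:vpcConfig.vpcId key named above (the account-specific values and the policy name are placeholders):

```python
import json

import boto3

iam = boto3.client("iam")

# Deny creating or updating CodeBuild projects unless they target the
# approved VPC, enforced via the new condition key.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["codebuild:CreateProject", "codebuild:UpdateProject"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "codebuild:vpcConfig.vpcId": "vpc-0abc123def456789a"  # placeholder
                }
            },
        }
    ],
}

iam.create_policy(
    PolicyName="EnforceCodeBuildVpc",  # placeholder
    PolicyDocument=json.dumps(policy_document),
)
```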
- DynamoDB local is now accessible on AWS CloudShell by aws@amazon.com (Recent Announcements) on May 19, 2025 at 7:10 pm
Today, Amazon DynamoDB announces the general availability of DynamoDB local on AWS CloudShell, a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. With DynamoDB local, you can develop and test your applications by running DynamoDB in your local development environment without incurring any costs. DynamoDB local works with your existing DynamoDB API calls without impacting your production environment. You can now start DynamoDB local just by using the dynamodb-local alias in CloudShell to develop and test your DynamoDB tables anywhere in the console without downloading or installing the AWS CLI or DynamoDB local. To interact with DynamoDB local running in CloudShell with CLI commands, use the --endpoint-url parameter and point it to localhost:8000. You can navigate to CloudShell from the AWS Management Console in a few different ways. For more information, see Getting started with AWS CloudShell. To learn more about DynamoDB local command line options, see the DynamoDB local usage notes.
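From CloudShell (or anywhere DynamoDB local is listening), an SDK client only needs the endpoint override the announcement describes. A minimal sketch:

```python
import boto3

# Point the SDK at DynamoDB local instead of the real service endpoint.
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://localhost:8000",
    region_name="us-east-1",
)

table = dynamodb.create_table(
    TableName="dev-test",  # placeholder table
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

table.put_item(Item={"pk": "hello"})
print(table.get_item(Key={"pk": "hello"})["Item"])
```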
- Amazon MSK is now available in Asia Pacific (Thailand) and Mexico (Central) Regions by aws@amazon.com (Recent Announcements) on May 19, 2025 at 5:00 pm
Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now available in Asia Pacific (Thailand) and Mexico (Central) regions. Customers can create Amazon MSK Provisioned clusters in these regions starting today. Amazon MSK is a fully managed service for Apache Kafka and Kafka Connect that makes it easier for you to build and run applications that use Apache Kafka as a data store. Amazon MSK is fully compatible with Apache Kafka, which enables you to more quickly migrate your existing Apache Kafka workloads to Amazon MSK with confidence or build new ones from scratch. With Amazon MSK, you spend more time building innovative streaming applications and less time managing Kafka clusters. Visit the AWS Regions page for all the regions where Amazon MSK is available. To get started, see the Amazon MSK Developer Guide.
- Amazon Inspector enhances container security by mapping ECR images to running containers by aws@amazon.com (Recent Announcements) on May 19, 2025 at 5:00 pm
Amazon Inspector now automatically maps your Amazon Elastic Container Registry (Amazon ECR) images to specific tasks running on Amazon Elastic Container Service (Amazon ECS) or pods running on Amazon Elastic Kubernetes Service (Amazon EKS), helping identify where the images are actively in use. This enables you to focus your limited resources on patching the most critical vulnerable images that are associated with running workloads, improving security and mean time to remediation. With this launch, you can use the Amazon Inspector console or APIs to identify your actively used container images, when you last used an image, and which clusters are running the image. This information will be included in your findings and resource coverage details, and will be routed to EventBridge. You can also control how long an image is monitored by Inspector after its ‘last in use’ date by updating the ECR re-scan duration using the console or APIs. This is in addition to the existing push and pull date settings. Your Amazon ECR images with continuous scanning enabled on Amazon Inspector will automatically get this updated data within your Amazon Inspector findings. Amazon Inspector is a vulnerability management service that continually scans AWS workloads including Amazon EC2 instances, container images, and AWS Lambda functions for software vulnerabilities, code vulnerabilities, and unintended network exposure across your entire AWS organization. This feature is available at no additional cost to Amazon Inspector customers scanning their container images in Amazon Elastic Container Registry (ECR), and is available in all commercial and AWS GovCloud (US) Regions where Amazon Inspector is available. To get started, see Getting started with Amazon Inspector or the Amazon Inspector free trial.
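Updating the ECR re-scan duration programmatically can be sketched with the inspector2 client; the value below limits re-scans to 30 days after the push date, and the announcement's new last-in-use control may use an additional field, so check the API reference:

```python
import boto3

inspector = boto3.client("inspector2")

# Limit automated ECR re-scans to 30 days (existing push-date setting).
inspector.update_configuration(
    ecrConfiguration={"rescanDuration": "DAYS_30"}
)
```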
- Amazon Bedrock Data Automation now supports generating custom insights from videos by aws@amazon.com (Recent Announcements) on May 19, 2025 at 5:00 pm
Amazon Bedrock Data Automation (BDA) now supports video blueprints so you can generate tailored, accurate insights in a consistent format for your multimedia analysis applications. BDA automates the generation of insights from unstructured multimodal content such as documents, images, audio, and videos for your GenAI-powered applications. With video blueprints, you can customize insights — such as scene summaries, content tags, and object detection — by specifying what to generate, the output data type, and the natural language instructions to guide generation. You can create a new video blueprint in minutes or select from a catalog of pre-built blueprints designed for use cases such as media search or highlight generation. With your blueprint, you can generate insights from a variety of video media including movies, television shows, advertisements, meetings recordings, and user-generated videos. For example, a customer analyzing a reality television episode for contextual ad placement can use a blueprint to summarize a scene where contestants are cooking, detect objects like ‘tomato’ and ‘spaghetti’, and identify the logos of condiments used for cooking. As part of the release, BDA also enhances logo detection and the Interactive Advertising Bureau (IAB) taxonomy in standard output. Video blueprints are available in all AWS Regions where Amazon Bedrock Data Automation is supported. To learn more, see the Bedrock Data Automation User Guide and the Amazon Bedrock Pricing page. To get started with using video blueprints, visit the Amazon Bedrock console.
- Amazon MSK adds support for Apache Kafka version 4.0 by aws@amazon.com (Recent Announcements) on May 19, 2025 at 5:00 pm
Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 4.0, bringing the latest advancements in cluster management and performance to MSK Provisioned. Kafka 4.0 introduces a new consumer rebalance protocol, now generally available, that helps ensure smoother and faster group rebalances. In addition, Kafka 4.0 requires brokers and tools to use Java 17, providing improved security and performance, includes various bug fixes and improvements, and deprecates metadata management via Apache ZooKeeper. To start using Apache Kafka 4.0 on Amazon MSK, simply select version 4.0.x when creating a new cluster via the AWS Management Console, AWS CLI, or AWS SDKs. You can also upgrade existing MSK provisioned clusters with an in-place rolling update. Amazon MSK orchestrates broker restarts to maintain availability and protect your data during the upgrade. Kafka version 4.0 support is available today across all AWS regions where Amazon MSK is offered. For more details, see the Amazon MSK Developer Guide and the Apache Kafka release notes for version 4.0.
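Creating a provisioned cluster on the new version could look like the sketch below; the subnet and security group IDs are placeholders, and the exact 4.0.x version string should be taken from list_kafka_versions rather than assumed:

```python
import boto3

kafka = boto3.client("kafka")

# Discover the exact 4.0.x version string MSK supports.
versions = [v["Version"] for v in kafka.list_kafka_versions()["KafkaVersions"]]
print([v for v in versions if v.startswith("4.0")])

kafka.create_cluster(
    ClusterName="demo-kafka-40",
    KafkaVersion="4.0.x",  # replace with a version from the list above
    NumberOfBrokerNodes=3,
    BrokerNodeGroupInfo={
        "InstanceType": "kafka.m5.large",
        "ClientSubnets": ["subnet-aaa", "subnet-bbb", "subnet-ccc"],  # placeholders
        "SecurityGroups": ["sg-0123456789abcdef0"],                   # placeholder
    },
)
```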
- AWS CloudWatch Synthetics adds safe canary updates and automatic retries by aws@amazon.com (Recent Announcements) on May 19, 2025 at 5:00 pm
Today, CloudWatch Synthetics, which allows monitoring of customer workflows on websites through periodically running custom code scripts, announces two new features: canary safe updates and automatic retries for failing canaries. The former allows you to test updates for your existing canaries before applying changes, and the latter enables canaries to automatically attempt additional retries when a scheduled run fails, helping to differentiate between genuine and intermittent failures. Canary safe updates help minimize potential monitoring disruptions caused by erroneous updates. By doing a dry run you can verify canary compatibility with newly released runtimes, or with any configuration or code changes. It minimizes potential monitoring gaps by maintaining continuous monitoring during update processes and mitigates risk to the end-user experience in the process of keeping canaries up to date. The automatic retries feature helps reduce false alarms. When enabled, it provides more reliable monitoring results by distinguishing between persistent issues and intermittent failures, preventing unnecessary disruption. Users can analyze temporary failures using the canary runs graph, which employs color-coded points to represent scheduled runs and their retries. You can start using these features by accessing CloudWatch Synthetics through the AWS Management Console, AWS CLI, or CloudFormation. Dry runs for safe canary updates and automatic retries are priced the same as regular canary runs and are available in all commercial AWS Regions. To learn more about safe canary updates and automatic retries, visit the linked Amazon CloudWatch Synthetics documentation, or get started with Synthetics monitoring by visiting the user guide.
- Amazon Lightsail now supports IPv6 connectivity over AWS PrivateLink by aws@amazon.com (Recent Announcements) on May 16, 2025 at 9:30 pm
Amazon Lightsail now supports IPv6-only and dual-stack PrivateLink interface VPC endpoints. AWS PrivateLink is a highly available, scalable service that allows you to privately connect your VPC to services and resources as if they were in your VPC. Previously, Lightsail supported private connectivity over PrivateLink using IPv4-only VPC endpoints. With today’s launch, customers can use IPv6-only, IPv4-only, or dual-stack VPC endpoints to create a private connection between their VPC and Lightsail, and access Lightsail without traversing the public internet. Lightsail supports connectivity using PrivateLink in all AWS Regions supporting Lightsail. To learn more about accessing Lightsail using PrivateLink, please see documentation.
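A sketch of creating a dual-stack interface endpoint for Lightsail; create_vpc_endpoint and its IpAddressType parameter are standard EC2 APIs, but the Lightsail service name is assumed from the usual com.amazonaws.<region>.<service> pattern, and the VPC/subnet IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcId="vpc-0abc123def456789a",                    # placeholder
    VpcEndpointType="Interface",
    ServiceName="com.amazonaws.us-east-1.lightsail",  # assumed service name
    SubnetIds=["subnet-0123456789abcdef0"],           # placeholder
    IpAddressType="dualstack",  # or "ipv6" / "ipv4"
)
```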
- AWS Entity Resolution is now available in 2 additional regions by aws@amazon.com (Recent Announcements) on May 16, 2025 at 5:35 pm
Starting today, AWS Entity Resolution is now available in AWS Canada (Central) and Africa (Cape Town) Regions. With AWS Entity Resolution, organizations can match and link related customer, product, business, or healthcare records stored across multiple applications, channels, and data stores. You can get started in minutes using matching workflows that are flexible, scalable, and can seamlessly connect to your existing applications, without any expertise in entity resolution or ML. With this launch, AWS Entity Resolution rule-based and ML-powered workflows are now generally available in 12 AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Canada (Central), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), and Africa (Cape Town). To learn more, visit AWS Entity Resolution.
- AWS Config rules now available in additional AWS Regions by aws@amazon.com (Recent Announcements) on May 16, 2025 at 5:10 pm
Additional AWS Config rules are now available in 17 AWS Regions. AWS Config rules help you automatically evaluate your AWS resource configurations for desired settings, enabling you to assess, audit, and evaluate configurations of your AWS resources. When a resource violates a rule, AWS Config evaluates it as non-compliant and can send you a notification through Amazon EventBridge. AWS Config provides managed rules, which are predefined, customizable rules that AWS Config uses to evaluate whether your AWS resources comply with common best practices. With this expansion, AWS Config managed rules are now available in the following AWS Regions: Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Kuala Lumpur), Asia Pacific (Melbourne), Asia Pacific (Osaka), Canada (Calgary), Europe (Milan), Europe (Paris), Europe (Stockholm), Europe (Zaragoza), Europe (Zurich), Middle East (Bahrain), Middle East (Tel Aviv), Middle East (UAE), and South America (São Paulo). You will be charged per rule evaluation in your AWS account per AWS Region. Visit the AWS Config pricing page for more details. To learn more about AWS Config rules, visit our documentation.
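Deploying one of the managed rules in a newly supported Region is a single API call. A sketch using a standard managed rule identifier as an example:

```python
import boto3

# Target one of the newly supported Regions, e.g. Africa (Cape Town).
config = boto3.client("config", region_name="af-south-1")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
    }
)
```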
- Amazon Cognito now supports OIDC prompt parameter by aws@amazon.com (Recent Announcements) on May 16, 2025 at 5:00 pm
Amazon Cognito announces support for the OpenID Connect (OIDC) prompt parameter in Cognito Managed Login. Managed Login provides a fully-managed, hosted sign-in and sign-up experience that customers can personalize to align with their company or application branding. This new capability enables customers to control authentication flows more precisely by supporting two commonly requested prompt values: 'login' for re-authentication scenarios and 'none' for silent authentication state check. These prompt parameters respectively allow applications to specify whether users should be prompted to authenticate again or leverage existing sessions, enhancing both security and user experience. With this launch, Cognito can also pass through select_account and consent prompts to third-party OIDC providers when the user pool is configured for federated sign-in. With the 'login' prompt, applications can now require users to re-authenticate explicitly while maintaining their existing authenticated sessions. This is particularly useful for scenarios requiring additional and more recent authentication verification, such as right before accessing sensitive information or performing transactions. The 'none' prompt enables a silent check on authentication state, allowing applications to check if users have an existing active authentication session without having to re-authenticate. This prompt can be valuable for implementing seamless single sign-on experiences across multiple applications sharing the same user pool. This enhancement is available in Amazon Cognito Managed Login to customers on the Essentials or Plus tiers in all AWS Regions where Amazon Cognito is available. To learn more about implementing these authentication flows, visit the Amazon Cognito documentation.
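The prompt value rides along on the standard /oauth2/authorize request to your Managed Login domain. A sketch of building such a URL (the domain, client ID, and redirect URI are placeholders):

```python
from urllib.parse import urlencode

# Placeholders: substitute your Managed Login domain, app client ID,
# and a callback URL registered on the app client.
DOMAIN = "https://auth.example.com"

params = {
    "client_id": "1example23456789",
    "response_type": "code",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "openid email",
    "prompt": "login",  # force re-authentication; "none" does a silent check
}

print(f"{DOMAIN}/oauth2/authorize?{urlencode(params)}")
```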
- Amazon Data Lifecycle Manager now supports Internet Protocol version 6 (IPv6) in the AWS GovCloud (US) Regions by aws@amazon.com (Recent Announcements) on May 16, 2025 at 5:00 pm
Amazon Data Lifecycle Manager now offers customers the option to use Internet Protocol version 6 (IPv6) addresses for their new and existing endpoints. Customers moving to IPv6 can simplify their network stack by using Data Lifecycle Manager’s dual-stack endpoints, which support both IPv4 and IPv6, depending on the protocol used by their network and client. Customers create Amazon Data Lifecycle Manager policies to automate the creation, retention, and management of EBS Snapshots and EBS-backed Amazon Machine Images (AMIs). The policies can also automatically copy created resources across AWS Regions, move EBS Snapshots to the EBS Snapshots Archive tier, and manage Fast Snapshot Restore. Customers can also create policies to automate the creation and retention of application-consistent EBS Snapshots via pre- and post-scripts, as well as create Default Policies for comprehensive protection of their account or AWS Organization. Amazon Data Lifecycle Manager with IPv6, already supported in all AWS Commercial Regions, is now available in the AWS GovCloud (US) Regions. To learn more about configuring Amazon Data Lifecycle Manager endpoints for IPv6, please refer to our documentation.
- Amazon SageMaker - move project across domain units by aws@amazon.com (Recent Announcements) on May 16, 2025 at 5:00 pm
Today, Amazon SageMaker and Amazon DataZone announced a new data governance capability that enables customers to move a project from one domain unit to another. Domain units enable customers to create business unit/team-level organization and manage authorization policies per their business needs. Customers can now take a project mapped to a domain unit and organize it under a new domain unit within their domain unit hierarchy. The move project feature lets customers reflect changes in team structures as business initiatives or organizations shift by allowing them to change a project’s owning domain unit. As an Amazon SageMaker or Amazon DataZone administrator, you can now create domain units (e.g., Sales, Marketing) under the top-level domain and organize the catalog by moving existing projects to new owning domain units. Users can then log in to the portal to browse and search assets in the catalog by the domain units associated with their business units or teams. The move project feature for domain units is available in all AWS Regions where Amazon SageMaker and Amazon DataZone are available. To learn more, visit Amazon SageMaker, and get started with the move project documentation.
- AWS CodePipeline now supports Deploy Spec file in EC2 deploy action by aws@amazon.com (Recent Announcements) on May 16, 2025 at 5:00 pm
AWS CodePipeline now supports Deploy Spec file configurations in the EC2 Deploy action, enabling you to specify deployment parameters directly in your source repository. You can now include either a Deploy Spec file name or deploy configurations in your EC2 Deploy action. The action accepts Deploy Spec files in YAML format and maintains compatibility with existing CodeDeploy AppSpec files. The deployment debugging experience for large-scale EC2 deployments is also enhanced. Previously, customers relied solely on action execution logs to track deployment status across multiple instances. While these logs provide comprehensive deployment details, tracking specific instance statuses in large deployments was challenging. The new deployment monitoring interface displays real-time status information for individual EC2 instances, eliminating the need to search through extensive logs to identify failed instances. This improvement streamlines troubleshooting for deployments targeting multiple EC2 instances. To learn more about how to use the EC2 deploy action, visit our documentation. For more information about AWS CodePipeline, visit our product page. These new actions are available in all regions where AWS CodePipeline is supported, except the AWS GovCloud (US) Regions and the China Regions.
- AWS CodePipeline now supports deploying to AWS Lambda with traffic shifting by aws@amazon.com (Recent Announcements) on May 16, 2025 at 5:00 pm
AWS CodePipeline now offers a new Lambda deploy action that simplifies application deployment to AWS Lambda. This feature enables seamless publishing of Lambda function revisions and supports multiple traffic-shifting strategies for safer releases. For production workloads, you can now deploy software updates with confidence using either linear or canary deployment patterns. The new action integrates with CloudWatch alarms for automated rollback protection - if your specified alarms trigger during traffic shifting, the system automatically rolls back changes to minimize impact. To learn more about using this Lambda Deploy action in your pipeline, visit our documentation. For more information about AWS CodePipeline, visit our product page. These new actions are available in all regions where AWS CodePipeline is supported, except the AWS GovCloud (US) Regions and the China Regions.
- Amazon OpenSearch Ingestion increases memory for an OCU to 15 GB by aws@amazon.com (Recent Announcements) on May 15, 2025 at 9:25 pm
We are pleased to announce that the memory allocation per OpenSearch Compute Unit (OCU) for Amazon OpenSearch Ingestion has been increased from 8 GB to 15 GB. One OCU now comes by default with 2 vCPUs and 15 GB of memory, allowing customers to leverage greater in-memory processing for their data ingestion pipelines without modifying existing configurations. With the increased memory per OCU, Amazon OpenSearch Ingestion is better equipped to handle memory-intensive processing tasks such as trace analytics, aggregations, and enrichment operations. Customers can now build more complex and high-throughput ingestion pipelines with reduced risk of out-of-memory failures. The increased memory for OCUs is now available at no additional cost in all AWS Regions where Amazon OpenSearch Ingestion is currently offered. You can take advantage of these improvements by updating your existing pipelines or creating new pipelines through the Amazon OpenSearch Service console or APIs. To learn more, see the Amazon OpenSearch Ingestion webpage and the Amazon OpenSearch Service Developer Guide.
- SES Mail Manager adds Debug Logging for traffic policies by aws@amazon.com (Recent Announcements) on May 15, 2025 at 8:40 pm
Today, Simple Email Service (SES) Mail Manager announces the addition of a Debug logging level for Mail Manager traffic policies. This new logging level provides more detailed visibility on incoming connections to a customer’s Mail Manager ingress endpoint and makes it easier to troubleshoot delivery challenges quickly, using familiar event destinations such as CloudWatch, Kinesis, and S3. With Debug level logs, customers can now log every possible evaluation and action within a Mail Manager traffic policy, along with envelope data for the email message being evaluated for traffic permission. This enables customers to determine whether their traffic policy is working as expected or to isolate incoming message parameters which are not covered by the current configuration. When used in conjunction with rules engine logging, debug logging for traffic policies provides a full picture of message arrival into Mail Manager and its disposition by the rules engine. Debug logging for traffic policies is intended to be used during active troubleshooting but otherwise left disabled, as its output can be verbose for high-volume Mail Manager instances. While SES does not charge an additional fee for this logging feature, customers may incur costs from their chosen event destination. Debug logging for traffic policies is available in all 17 AWS non-opt-in Regions within the AWS commercial partition. To learn more about Mail Manager logging options, see the SES Mail Manager Logging Guide.
- AWS Parallel Computing Service (PCS) now supports accounting with Slurm version 24.11 by aws@amazon.com (Recent Announcements) on May 15, 2025 at 8:24 pm
AWS Parallel Computing Service (PCS) now supports Slurm version 24.11 with support for managed accounting. Using this feature, you can enable accounting on your PCS clusters to monitor cluster usage, enforce resource limits, and manage fine-grained access control to specific queues or compute node groups. PCS manages the accounting database for your cluster, eliminating the need for you to set up and manage a separate accounting database. You can enable this feature on your PCS cluster in just a few clicks using the AWS Management Console. Visit our getting started and accounting documentation pages to learn more about accounting, and see the release notes to learn more about Slurm 24.11. AWS Parallel Computing Service (AWS PCS) is a managed service that makes it easier for you to run and scale your high performance computing (HPC) workloads and build scientific and engineering models on AWS using Slurm. To learn more about PCS, refer to the service documentation. For pricing details and region availability, see the PCS Pricing Page and AWS Region Table.
- AWS CodeBuild announces support for remote Docker servers by aws@amazon.com (Recent Announcements) on May 15, 2025 at 7:19 pm
AWS CodeBuild now supports remote Docker image build servers, allowing you to speed up image build requests. You can provision a fully managed Docker server that maintains a persistent cache across builds. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces ready-to-deploy software packages. Centralized image building increases efficiency by reusing cached layers and reducing provisioning plus network transfer latency. CodeBuild automatically configures your build environment to use the remote server when running Docker commands. The Docker server is then readily available to run parallel build requests that can each use the shared layer cache, reducing the overall build latency and optimizing build speed. This feature is available in all regions where CodeBuild is offered. For more information about the AWS Regions where CodeBuild is available, see the AWS Regions page. Get started with CodeBuild’s blog post for setting up a Docker image builder in your CodeBuild project, or visit our documentation. To learn how to get started with CodeBuild, visit the AWS CodeBuild product page.
- AWS Transform for VMware is now generally available by aws@amazon.com (Recent Announcements) on May 15, 2025 at 5:00 pm
At re:Invent 2024, AWS introduced the preview of Amazon Q Developer transformation capabilities for VMware. That innovation has evolved into AWS Transform for VMware—a first-of-its-kind agentic AI service that’s now generally available. Powered by large language models, graph neural networks, and the deep experience of AWS in enterprise workload migrations, AWS Transform simplifies VMware modernization at scale. Customers and partners can now move faster, reduce migration risk, and modernize with confidence. VMware environments have long been foundational to enterprise IT, but rising costs and vendor uncertainty are prompting organizations to rethink their strategies. Despite the urgency, VMware workload migration has historically been slow and error-prone. AWS Transform changes that. With agentic AI, AWS Transform automates the full modernization lifecycle—from discovery and dependency mapping to network translation and Amazon Elastic Compute Cloud (Amazon EC2) optimization. Certain tasks that once took weeks can now be completed in minutes. In testing, AWS generated migration wave plans for 500 VMs in just 15 minutes and performed networking translations up to 80x faster than traditional methods. Partners in pilot programs have cut execution times by up to 90%. Beyond speed, AWS Transform delivers precision and transparency. A shared workspace brings together infrastructure teams, app owners, partners, and AWS experts to resolve blockers and maintain alignment. Built-in human-in-the-loop controls confirm all artifacts are validated before execution. As enterprises aim to break free from legacy constraints and tap into the value of their data, AWS Transform offers a streamlined path to modern, cloud-native architectures. Customers can seamlessly integrate with 200+ AWS services—including analytics, serverless, and generative AI—to accelerate innovation and reduce long-term costs. Start your VMware modernization journey with AWS Transform. Read the launch blog, explore the documentation, register for the launch webinar, or check out the interactive demo.
- Amazon WorkSpaces Pools now supports AlwaysOn running mode by aws@amazon.com (Recent Announcements) on May 15, 2025 at 5:00 pm
Amazon Web Services announces the availability of AlwaysOn running mode for WorkSpaces Pools, designed for customers who want their streaming to start right away. With AlwaysOn mode, users have their virtual desktop session provisioned in seconds, allowing them to be productive immediately. Customers can now choose between AlwaysOn running mode and the currently available AutoStop mode, which only bills an hourly usage fee when a customer logs into their session. With AutoStop, streaming starts after a short start-up time, but customers can better optimize costs for unused instances. Amazon WorkSpaces Pools enables customers to reduce costs by sharing a pool of virtual desktops across a group of users who get a fresh desktop every time they log in. With application settings saved in a central storage repository, simplified management via a single console and set of clients, support for Microsoft 365 Apps for enterprise, and the new running mode options, WorkSpaces Pools offers the flexibility customers expect. AlwaysOn for WorkSpaces Pools is now available in all Regions where WorkSpaces Pools is supported. For pricing information, visit Amazon WorkSpaces Pricing. To learn more about AlwaysOn for WorkSpaces Pools and to get started, view the documentation.
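A speculative sketch of switching a pool's running mode with boto3: UpdateWorkspacesPool is an existing WorkSpaces Pools API, but exposing the mode through a RunningMode field is an assumption based on this announcement, so check the API reference for the actual request shape.

```python
# Speculative sketch: flip an existing pool to AlwaysOn. The RunningMode
# field is assumed from this announcement, not verified; the pool ID is
# a placeholder.
import boto3

ws = boto3.client("workspaces", region_name="us-east-1")

ws.update_workspaces_pool(
    PoolId="wspool-example123",   # placeholder pool ID
    RunningMode="ALWAYS_ON",      # assumed enum; the alternative would be "AUTO_STOP"
)
```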
- PostgreSQL 18 Beta 1 is now available in Amazon RDS Database Preview Environment by aws@amazon.com (Recent Announcements) on May 15, 2025 at 5:00 pm
Amazon RDS for PostgreSQL 18 Beta 1 is now available in the Amazon RDS Database Preview Environment, allowing you to evaluate the pre-release of PostgreSQL 18 on Amazon RDS for PostgreSQL. You can deploy PostgreSQL 18 Beta 1 in the Amazon RDS Database Preview Environment that has the benefits of a fully managed database. PostgreSQL 18 includes significant updates to query execution and I/O operations. Query execution is enhanced with "skip scan" support for multicolumn B-tree indexes and optimized WHERE clause handling for OR and IN (...) conditions. Parallel execution capabilities are expanded through parallel GIN index builds and enhanced join operations. Observability improvements include detailed buffer access statistics in EXPLAIN ANALYZE and enhanced I/O utilization monitoring capabilities. Please refer to the PostgreSQL community announcement for more details. Amazon RDS Database Preview Environment database instances are retained for a maximum period of 60 days and are automatically deleted after the retention period. Amazon RDS database snapshots that are created in the preview environment can only be used to create or restore database instances within the preview environment. You can use the PostgreSQL dump and load functionality to import or export your databases from the preview environment. Amazon RDS Database Preview Environment database instances are priced as per the pricing in the US East (Ohio) Region.
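To see what the "skip scan" improvement means in practice, here is an illustrative Python sketch against a preview-environment instance: the query filters only on the second column of a multicolumn B-tree index, a shape PostgreSQL 18 can now serve from the index. The endpoint, credentials, and table are hypothetical.

```python
# Illustrative sketch of PostgreSQL 18's multicolumn B-tree "skip scan".
# The connection details and table are examples; plan output depends on
# data volume and planner statistics.
import psycopg2

conn = psycopg2.connect(
    host="mypg18.preview.us-east-2.rds.amazonaws.com",  # example preview endpoint
    dbname="postgres", user="postgres", password="...",
)
with conn, conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS events (tenant int, ts timestamptz, payload text)")
    cur.execute("CREATE INDEX IF NOT EXISTS events_tenant_ts ON events (tenant, ts)")
    # Pre-18, omitting the leading index column (tenant) usually meant a full
    # scan; 18 can "skip" through tenant values while still using the index.
    cur.execute("EXPLAIN ANALYZE SELECT * FROM events WHERE ts > now() - interval '1 day'")
    for (line,) in cur.fetchall():
        print(line)
```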
- Amazon EC2 P6-B200 instances powered by NVIDIA B200 GPUs now generally available by aws@amazon.com (Recent Announcements) on May 15, 2025 at 5:00 pm
Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P6-B200 instances, accelerated by NVIDIA B200 GPUs. Amazon EC2 P6-B200 instances offer up to 2x performance compared to P5en instances for AI training and inference. P6-B200 instances feature 8 Blackwell GPUs with 1440 GB of high-bandwidth GPU memory and a 60% increase in GPU memory bandwidth compared to P5en, 5th Generation Intel Xeon processors (Emerald Rapids), and up to 3.2 terabits per second of Elastic Fabric Adapter (EFAv4) networking. P6-B200 instances are powered by the AWS Nitro System, so you can reliably and securely scale AI workloads within Amazon EC2 UltraClusters to tens of thousands of GPUs. P6-B200 instances are now available in the p6-b200.48xlarge size through Amazon EC2 Capacity Blocks for ML in the following AWS Region: US West (Oregon). To learn more about P6-B200 instances, visit Amazon EC2 P6 instances.
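Since P6-B200 is offered through EC2 Capacity Blocks for ML, a hedged boto3 sketch of finding and purchasing a block follows; DescribeCapacityBlockOfferings and PurchaseCapacityBlock are real EC2 APIs, while the duration and selection logic here are assumptions.

```python
# Hedged sketch: reserve P6-B200 capacity via EC2 Capacity Blocks for ML in
# US West (Oregon). Instance count and duration are example values.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

offerings = ec2.describe_capacity_block_offerings(
    InstanceType="p6-b200.48xlarge",
    InstanceCount=1,
    CapacityDurationHours=24,
)["CapacityBlockOfferings"]

if offerings:
    # Pick the cheapest available offering (UpfrontFee is returned as a string).
    cheapest = min(offerings, key=lambda o: float(o["UpfrontFee"]))
    ec2.purchase_capacity_block(
        CapacityBlockOfferingId=cheapest["CapacityBlockOfferingId"],
        InstancePlatform="Linux/UNIX",
    )
```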
- Amazon SageMaker Catalog launches governance for S3 Tables by aws@amazon.com (Recent Announcements) on May 15, 2025 at 5:00 pm
Amazon SageMaker Catalog now integrates with Amazon S3 Tables, making it easy to discover, share, and govern S3 Tables so users can access and query the data with any Apache Iceberg–compatible tool or engine. With Amazon SageMaker Catalog, built on Amazon DataZone, users can securely discover and access approved data and models using semantic search with generative AI–created metadata, or simply ask Amazon Q Developer in natural language to find their data. S3 Tables deliver the first cloud object store with built-in Apache Iceberg support. Data publishers can onboard S3 Tables to SageMaker Lakehouse and enhance their discoverability by adding them to the SageMaker Catalog. Publishers have the flexibility to either publish tables directly or enrich them with valuable business metadata, making it easier for all users to understand and find the data they need. On the consumption side, users can search for relevant tables, request access through a subscription workflow (subject to publisher approval), and leverage this data for advanced analytics and AI development projects. This end-to-end workflow significantly improves data accessibility, governance, and utilization of S3 Tables across the organization. SageMaker Catalog with S3 Tables support is available in all AWS Regions where Amazon SageMaker is available. To learn more, visit Amazon SageMaker, and see the user documentation to get started with S3 Tables and publishing.
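For the publishing side, here is a rough boto3 sketch of creating an S3 table bucket and an Iceberg table before onboarding it to SageMaker Catalog; all names are examples, and the catalog publishing step itself happens through the SageMaker/DataZone experience.

```python
# Rough sketch: create an S3 table bucket, a namespace, and an Iceberg table
# with the S3 Tables API. Bucket, namespace, and table names are examples.
import boto3

s3tables = boto3.client("s3tables", region_name="us-east-1")

bucket = s3tables.create_table_bucket(name="analytics-tables")

s3tables.create_namespace(tableBucketARN=bucket["arn"], namespace=["sales"])

s3tables.create_table(
    tableBucketARN=bucket["arn"],
    namespace="sales",
    name="orders",
    format="ICEBERG",  # S3 Tables have built-in Apache Iceberg support
)
```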
- AWS Transform for mainframe is now generally available by aws@amazon.com (Recent Announcements) on May 15, 2025 at 5:00 pm
AWS Transform for mainframe, previewed as “Amazon Q Developer transformation capabilities for mainframe” at re:Invent 2024, is now generally available. AWS Transform is the first agentic AI service for modernizing mainframe applications at scale—accelerating modernization of IBM z/OS applications from years to months. Powered by a specialized AI agent leveraging 19 years of AWS experience, AWS Transform streamlines the entire transformation process—from initial analysis and planning to code documentation and refactoring—helping organizations to modernize faster, reduce risk and cost, and achieve better outcomes in the cloud. This release introduces significant new capabilities. Enhanced analysis features help teams identify cyclomatic complexity, homonyms, and duplicate IDs across codebases, with new export and import functions for file classification and in-UI file viewing and comparison. Documentation generation now supports larger codebases with improved performance and recovery capabilities, including an AI-powered chat experience for querying generated documentation. Teams can use improved decomposition features to manage dependencies and domain creation, while new deployment templates streamline environment setup for modernized applications. The service also introduces flexible job management, allowing teams to modify objectives and focus on specific transformation steps during reruns. AWS Transform for mainframe is available in the following AWS Regions: US East (N. Virginia) and Europe (Frankfurt). To learn more, read the blog post, register for the upcoming launch webinar, or get started in the AWS Transform web experience.
- AWS Transform for .NET is now generally available by aws@amazon.com (Recent Announcements) on May 15, 2025 at 5:00 pm
AWS Transform for .NET, previewed as “Amazon Q Developer transformation capabilities for .NET porting,” is now generally available. As the first agentic AI service for modernizing .NET applications at scale, AWS Transform helps you to modernize Windows .NET applications to be Linux-ready up to four times faster than traditional methods and realize up to 40% savings in licensing costs. It supports transforming a wide range of .NET project types including MVC, WCF, Web API, class libraries, console apps, and unit test projects. The agentic transformation begins with a code assessment of your repositories from GitHub, GitLab, or Bitbucket. It identifies .NET versions, project types, and interproject dependencies and generates a tailored modernization plan. You can customize and prioritize the transformation sequence based on your business objectives or architectural complexity before initiating the AI-powered modernization process. Once started, AWS Transform for .NET automatically converts application code, builds the output, runs unit tests, and commits results to a new branch in your repository. It provides a comprehensive transformation summary, including modified files, test outcomes, and suggested fixes for any remaining work. Your teams can track transformation status through the AWS Transform dashboards or interactive chat and receive email notifications with links to transformed .NET code. For workloads that need further human input, your developers can continue refinement using the Visual Studio extension in AWS Transform. The scalable experience of AWS Transform enables consistent modernization across a large application portfolio while moving to cross-platform .NET, unlocking performance, portability, and long-term maintainability. AWS Transform for .NET is now available in the following AWS Regions: US East (N. Virginia) and Europe (Frankfurt). To learn more, read the blog, visit the webpage, or review the documentation.
- Announcing migration assessment capabilities of AWS Transform by aws@amazon.com (Recent Announcements) on May 15, 2025 at 5:00 pm
Today, AWS announces the general availability of migration assessment capabilities in AWS Transform. Migration assessment in AWS Transform analyzes your IT environment to simplify and optimize your cloud journey with intelligent, data-driven insights and actionable recommendations. Simply upload your infrastructure data, and AWS Transform delivers in minutes a comprehensive analysis that would typically take weeks. Powered by agentic AI, AWS Transform removes weeks of manual analysis by providing instant visibility into your infrastructure and automatically discovering cost optimization opportunities. AWS Transform produces a business case including key highlights from your server inventory, a summary of current infrastructure, multiple TCO scenarios with varying purchase commitments (on-demand and reserved instances), operating system licensing options (bring your own licenses and license-included), and tenancy options. AWS Transform for migration assessments is now available in the following AWS Regions: US East (N. Virginia) and Europe (Frankfurt). Ready to get started? Visit the AWS Transform web experience or read our blog post to learn more.
- AWS Glue Studio now supports additional file types and single file output by aws@amazon.com (Recent Announcements) on May 15, 2025 at 5:00 pm
Today, AWS Glue Studio announces support for additional compressed file types, Excel files (as a source), and XML and Tableau Hyper files (as targets). We are also introducing the option to select the number of output files for an S3 target. These enhancements allow you to use visual ETL jobs for data processing workflows that were not previously supported, for example loading data from an Excel file into a single XML output file. You can now produce a single file as the output of your Glue job, or specify a custom number of output files. Further, Glue now supports Excel files via S3 file source nodes, and XML or Tableau Hyper files for S3 file target nodes. The new compression types available are LZ4, SNAPPY, DEFLATE, LZO, BROTLI, ZSTD, and ZLIB. These new features are now available in all AWS commercial Regions and AWS GovCloud (US) Regions where AWS Glue is available. Access the AWS Regional Services List for the most up-to-date availability information. To learn more, visit the AWS Glue documentation.
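As a sketch of what the single-file output option corresponds to in a generated Glue Spark script, the snippet below coalesces a DynamicFrame to one partition before writing; the S3 paths are placeholders, and XML as a write target is taken from this launch rather than verified option-by-option.

```python
# Sketch: read CSV from S3, then write a single output file. This mirrors
# what the visual editor's "single file output" option generates; paths are
# example values.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

dyf = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://my-bucket/input/"]},
    format="csv",
    format_options={"withHeader": True},
)

# Single output file: collapse the data to one partition before writing.
single = dyf.coalesce(1)

glue_context.write_dynamic_frame.from_options(
    frame=single,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/output/"},
    format="xml",  # XML target support is new per this launch
)
```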
- Amazon RDS for Oracle now supports the April 2025 Release Update (RU) by aws@amazon.com (Recent Announcements) on May 14, 2025 at 9:45 pm
Amazon Relational Database Service (Amazon RDS) for Oracle now supports the April 2025 Release Update (RU) for Oracle Database versions 19c and 21c. These RUs include bug and security fixes and are available for RDS for Oracle Standard Edition 2 and Enterprise Edition. Review the Oracle release notes for the April RU for details. We recommend upgrading to this RU as it includes security fixes. You can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS SDK or CLI. You can also enable auto minor version upgrade (AmVU) to automatically upgrade your database instances. Learn more about upgrading your database instances in the Amazon RDS User Guide. This new minor version is available in all AWS Regions where Amazon RDS for Oracle is available. See Amazon RDS for Oracle Pricing for pricing details and regional availability.
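A hedged SDK sketch of applying the RU: modify_db_instance is the standard RDS API, but the exact engine version string for the April 2025 RU is an assumption here, so confirm it with describe_db_engine_versions.

```python
# Hedged sketch: apply the April 2025 RU to an RDS for Oracle instance.
# The instance identifier is an example; the EngineVersion string follows
# RDS for Oracle's usual RU naming but is an assumption.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_instance(
    DBInstanceIdentifier="my-oracle-db",
    EngineVersion="19.0.0.0.ru-2025-04.rur-2025-04.r1",  # assumed RU version string
    AutoMinorVersionUpgrade=True,  # opt in to future automatic minor upgrades
    ApplyImmediately=False,        # apply during the next maintenance window
)
```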
- AWS Control Tower introduces account-level reporting for baseline APIs by aws@amazon.com (Recent Announcements) on May 14, 2025 at 7:00 pm
AWS Control Tower customers can now programmatically view statuses for their governed accounts via baseline APIs. The AWS Control Tower baseline contains best practice configurations, controls, and resources required for governance. When you enable this baseline on an organizational unit (OU), member accounts within the OU are enrolled under governance. With this new experience, you can use baseline status to view enrollment for your accounts and use drift status to identify when account and OU baseline configurations are out of sync. In addition to seeing statuses for your accounts and OUs in the AWS Control Tower console, you can use the ListEnabledBaselines API to view statuses for your enabled baselines. To view statuses for individual accounts, use the “includeChildren” flag. You can filter by these statuses to view only the accounts and OUs which require your attention. These APIs include AWS CloudFormation support, allowing you to build automations to manage your OUs and accounts with infrastructure as code (IaC). To learn more about these APIs, review Baselines and API Reference in the AWS Control Tower User Guide. Baseline APIs and the newly launched reporting capabilities are available in all AWS Regions where AWS Control Tower is available. For a list of AWS Regions where AWS Control Tower is available, see the AWS Region Table.
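A short boto3 sketch of the new reporting flow: list_enabled_baselines is the existing Control Tower API, the includeChildren flag comes from this announcement, and the OU ARN is a placeholder.

```python
# Sketch: list baseline enrollment and drift status for every account under
# an OU. The OU ARN is a placeholder; includeChildren is the new flag named
# in this announcement.
import boto3

ct = boto3.client("controltower", region_name="us-east-1")

resp = ct.list_enabled_baselines(
    filter={"targetIdentifiers": ["arn:aws:organizations::111122223333:ou/o-example/ou-example"]},
    includeChildren=True,  # include member accounts under the OU, per this launch
)

for baseline in resp["enabledBaselines"]:
    print(
        baseline["targetIdentifier"],
        baseline["statusSummary"]["status"],
        baseline.get("driftStatusSummary"),  # out-of-sync configurations show drift
    )
```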
- Amazon Aurora MySQL 3.09 (compatible with MySQL 8.0.40) is now generally available by aws@amazon.com (Recent Announcements) on May 14, 2025 at 6:40 pm
Starting today, Amazon Aurora MySQL-Compatible Edition 3 (with MySQL 8.0 compatibility) supports MySQL 8.0.40 through Aurora MySQL v3.09. In addition to several security enhancements and bug fixes, MySQL 8.0.40 contains enhancements that improve database availability when handling a large number of tables and reduce InnoDB issues related to redo logging and index handling. Aurora MySQL 3.09 includes performance enhancements to improve write throughput for 32xl and larger instances running on the I/O-Optimized configuration. This release also contains improvements that increase the cross-region resiliency of Aurora Global Database secondary region clusters. For more details, refer to the Aurora MySQL 3.09 and MySQL 8.0.40 release notes. To upgrade to Aurora MySQL 3.09, you can initiate a minor version upgrade manually by modifying your DB cluster, or you can enable the “Auto minor version upgrade” option when creating or modifying a DB cluster. For upgrading a Global Database, refer to the upgrading an Amazon Aurora global database guide. This release is available in all AWS Regions where Aurora MySQL is available. Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other Amazon Web Services services. To get started with Amazon Aurora, take a look at our getting started page.
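A minimal sketch of initiating the manual minor version upgrade with boto3; the cluster identifier is an example, and the 3.09 engine version string follows Aurora's usual naming, so confirm it in your Region.

```python
# Minimal sketch: manually upgrade an Aurora MySQL cluster to v3.09.
# The cluster identifier is an example; the EngineVersion string is assumed
# from Aurora's usual "8.0.mysql_aurora.<version>" pattern.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_cluster(
    DBClusterIdentifier="my-aurora-cluster",
    EngineVersion="8.0.mysql_aurora.3.09.0",  # assumed 3.09 version string
    ApplyImmediately=True,
)
```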
Top 60 AWS Solutions Architect Associate Exam Tips
Top 100 AWS Solutions Architect Associate Certification Exam Questions and Answers Dump SAA-C03
What is Google Workspace?
Google Workspace is a cloud-based productivity suite that helps teams communicate, collaborate and get things done from anywhere and on any device. It's simple to set up, use and manage, so your business can focus on what really matters.
Watch a video or find out more here.
Here are some highlights:
Business email for your domain
Look professional and communicate as you@yourcompany.com. Gmail's simple features help you build your brand while getting more done.
Access from any location or device
Check emails, share files, edit documents, hold video meetings and more, whether you're at work, at home or on the move. You can pick up where you left off from a computer, tablet or phone.
Enterprise-level management tools
Robust admin settings give you total command over users, devices, security and more.
Sign up using my link https://referworkspace.app.goo.gl/Q371 and get a 14-day trial, and message me to get an exclusive discount when you try Google Workspace for your business.
Google Workspace Business Standard Promotion codes for the Americas
63F733CLLY7R7MM
63F7D7CPD9XXUVT
63FLKQHWV3AEEE6
63JGLWWK36CP7WM
Email me for more promo codes
Smartphone 101 - Pick a smartphone for me - Android or iOS - Apple iPhone or Samsung Galaxy or Huawei or Xiaomi or Google Pixel
Can AI Really Predict Lottery Results? We Asked an Expert.
Djamgatech

Read Photos and PDFs Aloud for me iOS
Read Photos and PDFs Aloud for me android
Read Photos and PDFs Aloud For me Windows 10/11
Read Photos and PDFs Aloud For Amazon
Get 20% off Google Workspace (Google Meet) Business Plan (AMERICAS): M9HNXHX3WC9H7YE (Email us for more)
Get 20% off Google Workspace (Google Meet) Standard Plan with the following code: 96DRHDRA9J7GTN6 (Email us for more)
FREE 10000+ Quiz Trivia and Brain Teasers for All Topics including Cloud Computing, General Knowledge, History, Television, Music, Art, Science, Movies, Films, US History, Soccer Football, World Cup, Data Science, Machine Learning, Geography, etc.

List of Freely available programming books - What is the single most influential book every programmer should read?
- Bjarne Stroustrup - The C++ Programming Language
- Brian W. Kernighan, Rob Pike - The Practice of Programming
- Donald Knuth - The Art of Computer Programming
- Ellen Ullman - Close to the Machine
- Ellis Horowitz - Fundamentals of Computer Algorithms
- Eric Raymond - The Art of Unix Programming
- Gerald M. Weinberg - The Psychology of Computer Programming
- James Gosling - The Java Programming Language
- Joel Spolsky - The Best Software Writing I
- Keith Curtis - After the Software Wars
- Richard M. Stallman - Free Software, Free Society
- Richard P. Gabriel - Patterns of Software
- Richard P. Gabriel - Innovation Happens Elsewhere
- Code Complete (2nd edition) by Steve McConnell
- The Pragmatic Programmer
- Structure and Interpretation of Computer Programs
- The C Programming Language by Kernighan and Ritchie
- Introduction to Algorithms by Cormen, Leiserson, Rivest & Stein
- Design Patterns by the Gang of Four
- Refactoring: Improving the Design of Existing Code
- The Mythical Man Month
- Compilers: Principles, Techniques and Tools by Alfred V. Aho, Ravi Sethi and Jeffrey D. Ullman
- Gödel, Escher, Bach by Douglas Hofstadter
- Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin
- Effective C++
- More Effective C++
- CODE by Charles Petzold
- Programming Pearls by Jon Bentley
- Working Effectively with Legacy Code by Michael C. Feathers
- Peopleware by Demarco and Lister
- Coders at Work by Peter Seibel
- Surely You're Joking, Mr. Feynman!
- Effective Java 2nd edition
- Patterns of Enterprise Application Architecture by Martin Fowler
- The Little Schemer
- The Seasoned Schemer
- Why's (Poignant) Guide to Ruby
- The Inmates Are Running The Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity
- Test-Driven Development: By Example by Kent Beck
- Practices of an Agile Developer
- Don't Make Me Think
- Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin
- Domain Driven Designs by Eric Evans
- The Design of Everyday Things by Donald Norman
- Modern C++ Design by Andrei Alexandrescu
- Pragmatic Thinking and Learning: Refactor Your Wetware by Andy Hunt
- Software Estimation: Demystifying the Black Art by Steve McConnell
- The Passionate Programmer (My Job Went To India) by Chad Fowler
- Hackers: Heroes of the Computer Revolution
- Algorithms + Data Structures = Programs
- JavaScript - The Good Parts
- Getting Real by 37 Signals
- Foundations of Programming by Karl Seguin
- Computer Graphics: Principles and Practice in C (2nd Edition)
- Thinking in Java by Bruce Eckel
- The Elements of Computing Systems
- Refactoring to Patterns by Joshua Kerievsky
- Modern Operating Systems by Andrew S. Tanenbaum
- The Annotated Turing
- Things That Make Us Smart by Donald Norman
- The Timeless Way of Building by Christopher Alexander
- The Deadline: A Novel About Project Management by Tom DeMarco
- The C++ Programming Language (3rd edition) by Stroustrup
- Computer Systems - A Programmer's Perspective
- Agile Principles, Patterns, and Practices in C# by Robert C. Martin
- Growing Object-Oriented Software, Guided by Tests
- Framework Design Guidelines by Brad Abrams
- Object Thinking by Dr. David West
- Advanced Programming in the UNIX Environment by W. Richard Stevens
- Hackers and Painters: Big Ideas from the Computer Age
- The Soul of a New Machine by Tracy Kidder
- CLR via C# by Jeffrey Richter
- Design Patterns in C# by Steve Metsker
- Alice in Wonderland by Lewis Carroll
- Zen and the Art of Motorcycle Maintenance by Robert M. Pirsig
- About Face - The Essentials of Interaction Design
- Here Comes Everybody: The Power of Organizing Without Organizations by Clay Shirky
- The Tao of Programming
- Computational Beauty of Nature
- Writing Solid Code by Steve Maguire
- Philip and Alex's Guide to Web Publishing
- Object-Oriented Analysis and Design with Applications by Grady Booch
- Effective Java by Joshua Bloch
- Computability by N. J. Cutland
- Masterminds of Programming
- The Tao Te Ching
- The Productive Programmer
- The Art of Deception by Kevin Mitnick
- The Career Programmer: Guerilla Tactics for an Imperfect World by Christopher Duncan
- Paradigms of Artificial Intelligence Programming: Case studies in Common Lisp
- Masters of Doom
- Pragmatic Unit Testing in C# with NUnit by Andy Hunt and Dave Thomas with Matt Hargett
- How To Solve It by George Polya
- The Alchemist by Paulo Coelho
- Smalltalk-80: The Language and its Implementation
- Writing Secure Code (2nd Edition) by Michael Howard
- Introduction to Functional Programming by Philip Wadler and Richard Bird
- No Bugs! by David Thielen
- Rework by Jason Fried and DHH
- JUnit in Action
Top 1000 Canada Quiz and trivia: CANADA CITIZENSHIP TEST- HISTORY - GEOGRAPHY - GOVERNMENT- CULTURE - PEOPLE - LANGUAGES - TRAVEL - WILDLIFE - HOCKEY - TOURISM - SCENERIES - ARTS - DATA VISUALIZATION

Top 1000 Africa Quiz and trivia: HISTORY - GEOGRAPHY - WILDLIFE - CULTURE - PEOPLE - LANGUAGES - TRAVEL - TOURISM - SCENERIES - ARTS - DATA VISUALIZATION

Exploring the Pros and Cons of Visiting All Provinces and Territories in Canada.

Exploring the Advantages and Disadvantages of Visiting All 50 States in the USA
