

What are the Top 100 AWS Solutions Architect Associate Certification Exam Questions and Answers Dump SAA-C03?
AWS Certified Solutions Architects are responsible for designing, deploying, and managing AWS cloud applications. The AWS Certified Solutions Architect – Associate exam validates an examinee’s ability to demonstrate knowledge of how to design and deploy secure and robust applications on AWS technologies. AWS Solutions Architect Associate training provides an overview of key AWS services, security, architecture, pricing, and support.

An Insightful Overview of SAA-C03 Exam Topics Encountered and Reflecting on My SAA-C03 Exam Journey: From Setback to Success
The AWS Certified Solutions Architect – Associate (SAA-C03) exam is the recommended stepping stone toward the AWS Certified Solutions Architect – Professional certification. Successful completion of this exam can lead to a salary raise or promotion for those in cloud roles. Below are the top 100 AWS Solutions Architect Associate exam prep facts, summaries, questions, and answers.
With average increases in salary of over 25% for certified individuals, you’re going to be in a much better position to secure your dream job or promotion if you earn your AWS Certified Solutions Architect Associate certification. You’ll also develop strong hands-on skills by doing the guided hands-on lab exercises in our course which will set you up for successfully performing in a solutions architect role.
AWS Solutions Architect Associate SAA-C03 practice exam, flashcards, and cheat sheet (PDF eBook and print book, 2023 edition)
The AWS Solutions Architect Associate is ideal for those performing in Solutions Architect roles and for anyone working at a technical level with AWS technologies. Earning the AWS Certified Solutions Architect Associate will build your credibility and confidence as it demonstrates that you have the cloud skills companies need to innovate for the future.
AWS Certified Solutions Architect – Associate average salary
The AWS Certified Solutions Architect – Associate average salary is $149,446/year
In this blog, we will help you prepare for the AWS Solutions Architect Associate certification exam, give you some facts and summaries, and provide a dump of the top AWS Solutions Architect Associate questions and answers.
How long to study for the AWS Solutions Architect exam?
We recommend that you allocate at least 60 minutes of study time per day and you will then be able to complete the certification within 5 weeks (including taking the actual exam). Study times can vary based on your experience with AWS and how much time you have each day, with some students passing their exams much faster and others taking a little longer. Get our eBook here.

How hard is the AWS Certified Solutions Architect Associate exam?
The AWS Solutions Architect Associate exam is an associate-level exam that requires a solid understanding of the AWS platform and a broad range of AWS services. The AWS Certified Solutions Architect Associate exam questions are scenario-based questions and can be challenging. Despite this, the AWS Solutions Architect Associate is often earned by beginners to cloud computing.
The popular AWS Certified Solutions Architect Associate exam received its new version (SAA-C03) in August 2022.
AWS Certified Solutions Architect – Associate (SAA-C03) Exam Guide
The AWS Certified Solutions Architect – Associate (SAA-C03) exam is intended for individuals who perform in a solutions architect role.
The exam validates a candidate’s ability to use AWS technologies to design solutions based on the AWS Well-Architected Framework.
What is the format of the AWS Certified Solutions Architect Associate exam?
The SAA-C03 exam includes 65 questions (multiple choice and multiple response). You can take the exam in a testing center or as an online proctored exam from your home or office. You have 130 minutes to complete the exam, and the passing mark is 720 points out of 1,000 (72%). If English is not your first language, you can request an accommodation when booking your exam that will qualify you for a 30-minute exam extension.
The exam also validates a candidate’s ability to complete the following tasks:
• Design solutions that incorporate AWS services to meet current business requirements and future projected needs
• Design architectures that are secure, resilient, high-performing, and cost-optimized
• Review existing solutions and determine improvements
Unscored content
The exam includes 15 unscored questions that do not affect your score.
AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.
Target candidate description
The target candidate should have at least 1 year of hands-on experience designing cloud solutions that use AWS services
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether or not you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
What is the passing score for the AWS Solutions Architect exam?
All AWS certification exam results are reported as a score from 100 to 1000. Your score shows how you performed on the examination as a whole and whether or not you passed. The passing score for the AWS Certified Solutions Architect Associate is 720 (72%).
Can I take the AWS Exam from Home?
Yes, you can now take all AWS Certification exams with online proctoring using Pearson Vue or PSI. Here’s a detailed guide on how to book your AWS exam.
Are there any prerequisites for taking the AWS Certified Solutions Architect exam?
There are no prerequisites for taking AWS exams. You do not need any programming knowledge or experience working with AWS. Everything you need to know is included in our courses. We do recommend that you have a basic understanding of fundamental computing concepts such as compute, storage, networking, and databases.
How much does the AWS Solution Architect Exam cost?
The AWS Solutions Architect Associate exam cost is $150 US.
Once you successfully pass your exam, you will be issued a 50% discount voucher that you can use towards your next AWS Exam.
For more detailed information, check out this blog article on AWS Certification Costs.
The Role of an AWS Certified Solutions Architect Associate
AWS Certified Solutions Architects are IT professionals who design cloud solutions with AWS services to meet given technical requirements. An AWS Solutions Architect Associate is expected to design and implement distributed systems on AWS that are high-performing, scalable, secure and cost optimized.
Content outline:
Domain 1: Design Secure Architectures 30%
Domain 2: Design Resilient Architectures 26%
Domain 3: Design High-Performing Architectures 24%
Domain 4: Design Cost-Optimized Architectures 20%
Domain 1: Design Secure Architectures
This exam domain is focused on securing your architectures on AWS and comprises 30% of the exam. Task statements include:
Task Statement 1: Design secure access to AWS resources.
Knowledge of:
• Access controls and management across multiple accounts
• AWS federated access and identity services (for example, AWS Identity and Access Management [IAM], AWS Single Sign-On [AWS SSO])
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• AWS security best practices (for example, the principle of least privilege)
• The AWS shared responsibility model
Skills in:
• Applying AWS security best practices to IAM users and root users (for example, multi-factor authentication [MFA])
• Designing a flexible authorization model that includes IAM users, groups, roles, and policies
• Designing a role-based access control strategy (for example, AWS Security Token Service [AWS STS], role switching, cross-account access)
• Designing a security strategy for multiple AWS accounts (for example, AWS Control Tower, service control policies [SCPs])
• Determining the appropriate use of resource policies for AWS services
• Determining when to federate a directory service with IAM roles
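As an illustration of the role-switching and cross-account access skills above, here is a minimal boto3 sketch (not from the official exam guide); the account ID, role name, and session name are placeholders, and it assumes AWS credentials are already configured:

```python
import boto3

sts = boto3.client("sts")

# Assume a role in another AWS account (role switching via AWS STS)
resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ReadOnlyAuditRole",  # hypothetical role
    RoleSessionName="cross-account-audit",
    DurationSeconds=3600,
)
creds = resp["Credentials"]

# Use the temporary credentials to call services in the target account
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```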
Task Statement 2: Design secure workloads and applications.
Knowledge of:
• Application configuration and credentials security
• AWS service endpoints
• Control ports, protocols, and network traffic on AWS
• Secure application access
• Security services with appropriate use cases (for example, Amazon Cognito, Amazon GuardDuty, Amazon Macie)
• Threat vectors external to AWS (for example, DDoS, SQL injection)
Skills in:
• Designing VPC architectures with security components (for example, security groups, route tables, network ACLs, NAT gateways)
• Determining network segmentation strategies (for example, using public subnets and private subnets)
• Integrating AWS services to secure applications (for example, AWS Shield, AWS WAF, AWS SSO, AWS Secrets Manager)
• Securing external network connections to and from the AWS Cloud (for example, VPN, AWS Direct Connect)
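To make the VPC security-component skill concrete, here is a hedged boto3 sketch that creates a security group allowing only HTTPS from one CIDR range; the VPC ID and CIDR below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a security group in an existing VPC (VPC ID is a placeholder)
sg = ec2.create_security_group(
    GroupName="web-tier-sg",
    Description="Allow HTTPS from the corporate network only",
    VpcId="vpc-0123456789abcdef0",
)

# Permit inbound HTTPS (TCP 443) from a single CIDR; all other inbound traffic stays denied
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "corporate network"}],
    }],
)
```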
Task Statement 3: Determine appropriate data security controls.
Knowledge of:
• Data access and governance
• Data recovery
• Data retention and classification
• Encryption and appropriate key management
Skills in:
• Aligning AWS technologies to meet compliance requirements
• Encrypting data at rest (for example, AWS Key Management Service [AWS KMS])
• Encrypting data in transit (for example, AWS Certificate Manager [ACM] using TLS)
• Implementing access policies for encryption keys
• Implementing data backups and replications
• Implementing policies for data access, lifecycle, and protection
• Rotating encryption keys and renewing certificates
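The encryption-at-rest bullets above map naturally to envelope encryption with AWS KMS. The sketch below is illustrative only: the key alias is a placeholder and it assumes the third-party cryptography package is installed:

```python
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

kms = boto3.client("kms")

# 1. Ask KMS for a data key under a customer managed key (alias is a placeholder)
key = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")

# 2. Encrypt the data locally with the plaintext data key (encryption at rest)
aesgcm = AESGCM(key["Plaintext"])
nonce = os.urandom(12)
ciphertext = aesgcm.encrypt(nonce, b"sensitive payload", None)

# 3. Persist only the ciphertext, nonce, and *encrypted* data key; discard the plaintext key
encrypted_data_key = key["CiphertextBlob"]

# Later: have KMS decrypt the data key again, then decrypt the payload
plaintext_key = kms.decrypt(CiphertextBlob=encrypted_data_key)["Plaintext"]
assert AESGCM(plaintext_key).decrypt(nonce, ciphertext, None) == b"sensitive payload"
```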
Domain 2: Design Resilient Architectures
This exam domain is focused on designing resilient architectures on AWS and comprises 26% of the exam. Task statements include:
Task Statement 1: Design scalable and loosely coupled architectures.
Knowledge of:
• API creation and management (for example, Amazon API Gateway, REST API)
• AWS managed services with appropriate use cases (for example, AWS Transfer Family, Amazon Simple Queue Service [Amazon SQS], Secrets Manager)
• Caching strategies
• Design principles for microservices (for example, stateless workloads compared with stateful workloads)
• Event-driven architectures
• Horizontal scaling and vertical scaling
• How to appropriately use edge accelerators (for example, content delivery network [CDN])
• How to migrate applications into containers
• Load balancing concepts (for example, Application Load Balancer)
• Multi-tier architectures
• Queuing and messaging concepts (for example, publish/subscribe)
• Serverless technologies and patterns (for example, AWS Fargate, AWS Lambda)
• Storage types with associated characteristics (for example, object, file, block)
• The orchestration of containers (for example, Amazon Elastic Container Service [Amazon ECS], Amazon Elastic Kubernetes Service [Amazon EKS])
• When to use read replicas
• Workflow orchestration (for example, AWS Step Functions)
Skills in:
• Designing event-driven, microservice, and/or multi-tier architectures based on requirements
• Determining scaling strategies for components used in an architecture design
• Determining the AWS services required to achieve loose coupling based on requirements
• Determining when to use containers
• Determining when to use serverless technologies and patterns
• Recommending appropriate compute, storage, networking, and database technologies based on requirements
• Using purpose-built AWS services for workloads
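As a small illustration of loose coupling with a queue (one of the services named above), here is a hedged boto3 sketch in which a producer and a consumer share only an SQS queue; the queue name and message contents are placeholders:

```python
import json

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]  # placeholder name

# Producer: publish work without knowing anything about the consumer
sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps({"order_id": "12345"}))

# Consumer: long-poll for messages, process them, then delete them from the queue
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print("processing", json.loads(msg["Body"]))
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```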
Task Statement 2: Design highly available and/or fault-tolerant architectures.
Knowledge of:
• AWS global infrastructure (for example, Availability Zones, AWS Regions, Amazon Route 53)
• AWS managed services with appropriate use cases (for example, Amazon Comprehend, Amazon Polly)
• Basic networking concepts (for example, route tables)
• Disaster recovery (DR) strategies (for example, backup and restore, pilot light, warm standby, active-active failover, recovery point objective [RPO], recovery time objective [RTO])
• Distributed design patterns
• Failover strategies
• Immutable infrastructure
• Load balancing concepts (for example, Application Load Balancer)
• Proxy concepts (for example, Amazon RDS Proxy)
• Service quotas and throttling (for example, how to configure the service quotas for a workload in a standby environment)
• Storage options and characteristics (for example, durability, replication)
• Workload visibility (for example, AWS X-Ray)
Skills in:
• Determining automation strategies to ensure infrastructure integrity
• Determining the AWS services required to provide a highly available and/or fault-tolerant architecture across AWS Regions or Availability Zones
• Identifying metrics based on business requirements to deliver a highly available solution
• Implementing designs to mitigate single points of failure
• Implementing strategies to ensure the durability and availability of data (for example, backups)
• Selecting an appropriate DR strategy to meet business requirements
• Using AWS services that improve the reliability of legacy applications and applications not built for the cloud (for example, when application changes are not possible)
• Using purpose-built AWS services for workloads
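As one concrete example of the backup-and-restore DR strategy listed above, here is a hedged boto3 sketch that copies an EBS snapshot into a second Region; the Regions and snapshot ID are placeholders:

```python
import boto3

# The copy is requested from the *destination* Region
dr_region = boto3.client("ec2", region_name="us-west-2")

copy = dr_region.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",  # placeholder snapshot ID
    Description="Nightly cross-Region copy for a backup-and-restore DR strategy",
)
print("DR snapshot created:", copy["SnapshotId"])
```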
Domain 3: Design High-Performing Architectures
This exam domain is focused on designing high-performing architectures on AWS and comprises 24% of the exam. Task statements include:
Task Statement 1: Determine high-performing and/or scalable storage solutions.
Knowledge of:
• Hybrid storage solutions to meet business requirements
• Storage services with appropriate use cases (for example, Amazon S3, Amazon Elastic File System [Amazon EFS], Amazon Elastic Block Store [Amazon EBS])
• Storage types with associated characteristics (for example, object, file, block)
Skills in:
• Determining storage services and configurations that meet performance demands
• Determining storage services that can scale to accommodate future needs
Task Statement 2: Design high-performing and elastic compute solutions.
Knowledge of:
• AWS compute services with appropriate use cases (for example, AWS Batch, Amazon EMR, Fargate)
• Distributed computing concepts supported by AWS global infrastructure and edge services
• Queuing and messaging concepts (for example, publish/subscribe)
• Scalability capabilities with appropriate use cases (for example, Amazon EC2 Auto Scaling, AWS Auto Scaling)
• Serverless technologies and patterns (for example, Lambda, Fargate)
• The orchestration of containers (for example, Amazon ECS, Amazon EKS)
Skills in:
• Decoupling workloads so that components can scale independently
• Identifying metrics and conditions to perform scaling actions
• Selecting the appropriate compute options and features (for example, EC2 instance types) to meet business requirements
• Selecting the appropriate resource type and size (for example, the amount of Lambda memory) to meet business requirements
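To illustrate identifying metrics and conditions for scaling actions, here is a hedged boto3 sketch of a target tracking policy on an existing Auto Scaling group; the group name and target value are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep the group's average CPU utilization around 50% (group name is a placeholder)
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-50-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,
    },
)
```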
Task Statement 3: Determine high-performing database solutions.
Knowledge of:
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• Caching strategies and services (for example, Amazon ElastiCache)
• Data access patterns (for example, read-intensive compared with write-intensive)
• Database capacity planning (for example, capacity units, instance types, Provisioned IOPS)
• Database connections and proxies
• Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations)
• Database replication (for example, read replicas)
• Database types and services (for example, serverless, relational compared with non-relational, in-memory)
Skills in:
• Configuring read replicas to meet business requirements
• Designing database architectures
• Determining an appropriate database engine (for example, MySQL compared with PostgreSQL)
• Determining an appropriate database type (for example, Amazon Aurora, Amazon DynamoDB)
• Integrating caching to meet business requirements
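For the read-replica skill above, a minimal boto3 sketch might look like the following; the instance identifiers and instance class are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Offload read-heavy traffic by creating a read replica of an existing RDS instance
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-1",   # placeholder replica name
    SourceDBInstanceIdentifier="orders-db",       # placeholder source instance
    DBInstanceClass="db.r6g.large",
)
```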
Task Statement 4: Determine high-performing and/or scalable network architectures.
Knowledge of:
• Edge networking services with appropriate use cases (for example, Amazon CloudFront, AWS Global Accelerator)
• How to design network architecture (for example, subnet tiers, routing, IP addressing)
• Load balancing concepts (for example, Application Load Balancer)
• Network connection options (for example, AWS VPN, Direct Connect, AWS PrivateLink)
Skills in:
• Creating a network topology for various architectures (for example, global, hybrid, multi-tier)
• Determining network configurations that can scale to accommodate future needs
• Determining the appropriate placement of resources to meet business requirements
• Selecting the appropriate load balancing strategy
Task Statement 5: Determine high-performing data ingestion and transformation solutions.
Knowledge of:
• Data analytics and visualization services with appropriate use cases (for example, Amazon Athena, AWS Lake Formation, Amazon QuickSight)
• Data ingestion patterns (for example, frequency)
• Data transfer services with appropriate use cases (for example, AWS DataSync, AWS Storage Gateway)
• Data transformation services with appropriate use cases (for example, AWS Glue)
• Secure access to ingestion access points
• Sizes and speeds needed to meet business requirements
• Streaming data services with appropriate use cases (for example, Amazon Kinesis)
Skills in:
• Building and securing data lakes
• Designing data streaming architectures
• Designing data transfer solutions
• Implementing visualization strategies
• Selecting appropriate compute options for data processing (for example, Amazon EMR)
• Selecting appropriate configurations for ingestion
• Transforming data between formats (for example, .csv to .parquet)
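As an example of the streaming-ingestion services named above, here is a hedged boto3 sketch that writes one record to a Kinesis data stream; the stream name and record fields are placeholders:

```python
import datetime
import json

import boto3

kinesis = boto3.client("kinesis")

record = {
    "sensor_id": "sensor-42",
    "reading": 21.7,
    "ts": datetime.datetime.utcnow().isoformat(),
}

# The partition key controls how records are distributed across shards
kinesis.put_record(
    StreamName="telemetry-stream",  # placeholder stream name
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["sensor_id"],
)
```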
Domain 4: Design Cost-Optimized Architectures
This exam domain is focused on optimizing solutions for cost-effectiveness on AWS and comprises 20% of the exam. Task statements include:
Task Statement 1: Design cost-optimized storage solutions.
Knowledge of:
• Access options (for example, an S3 bucket with Requester Pays object storage)
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, AWS Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• AWS storage services with appropriate use cases (for example, Amazon FSx, Amazon EFS, Amazon S3, Amazon EBS)
• Backup strategies
• Block storage options (for example, hard disk drive [HDD] volume types, solid state drive [SSD] volume types)
• Data lifecycles
• Hybrid storage options (for example, DataSync, Transfer Family, Storage Gateway)
• Storage access patterns
• Storage tiering (for example, cold tiering for object storage)
• Storage types with associated characteristics (for example, object, file, block)
Skills in:
• Designing appropriate storage strategies (for example, batch uploads to Amazon S3 compared with individual uploads)
• Determining the correct storage size for a workload
• Determining the lowest cost method of transferring data for a workload to AWS storage
• Determining when storage auto scaling is required
• Managing S3 object lifecycles
• Selecting the appropriate backup and/or archival solution
• Selecting the appropriate service for data migration to storage services
• Selecting the appropriate storage tier
• Selecting the correct data lifecycle for storage
• Selecting the most cost-effective storage service for a workload
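Managing S3 object lifecycles can be expressed as a lifecycle configuration like the hedged boto3 sketch below; the bucket name, prefix, and day counts are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Tier objects down over time and eventually expire them (bucket name is a placeholder)
s3.put_bucket_lifecycle_configuration(
    Bucket="my-log-archive-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-down-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }]
    },
)
```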
Task Statement 2: Design cost-optimized compute solutions.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• AWS purchasing options (for example, Spot Instances, Reserved Instances, Savings Plans)
• Distributed compute strategies (for example, edge processing)
• Hybrid compute options (for example, AWS Outposts, AWS Snowball Edge)
• Instance types, families, and sizes (for example, memory optimized, compute optimized, virtualization)
• Optimization of compute utilization (for example, containers, serverless computing, microservices)
• Scaling strategies (for example, auto scaling, hibernation)
Skills in:
• Determining an appropriate load balancing strategy (for example, Application Load Balancer [Layer 7] compared with Network Load Balancer [Layer 4] compared with Gateway Load Balancer)
• Determining appropriate scaling methods and strategies for elastic workloads (for example, horizontal compared with vertical, EC2 hibernation)
• Determining cost-effective AWS compute services with appropriate use cases (for example, Lambda, Amazon EC2, Fargate)
• Determining the required availability for different classes of workloads (for example, production workloads, non-production workloads)
• Selecting the appropriate instance family for a workload
• Selecting the appropriate instance size for a workload
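As a small illustration of the Spot purchasing option listed above, here is a hedged boto3 sketch that launches an interruption-tolerant worker as a Spot Instance; the AMI ID and instance type are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Spot Instances trade possible interruption for a large discount versus On-Demand
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="c6i.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
```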
Task Statement 3: Design cost-optimized database solutions.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• Caching strategies
• Data retention policies
• Database capacity planning (for example, capacity units)
• Database connections and proxies
• Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations)
• Database replication (for example, read replicas)
• Database types and services (for example, relational compared with non-relational, Aurora, DynamoDB)
Skills in:
• Designing appropriate backup and retention policies (for example, snapshot frequency)
• Determining an appropriate database engine (for example, MySQL compared with PostgreSQL)
• Determining cost-effective AWS database services with appropriate use cases (for example, DynamoDB compared with Amazon RDS, serverless)
• Determining cost-effective AWS database types (for example, time series format, columnar format)
• Migrating database schemas and data to different locations and/or different database engines
Task Statement 4: Design cost-optimized network architectures.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• Load balancing concepts (for example, Application Load Balancer)
• NAT gateways (for example, NAT instance costs compared with NAT gateway costs)
• Network connectivity (for example, private lines, dedicated lines, VPNs)
• Network routing, topology, and peering (for example, AWS Transit Gateway, VPC peering)
• Network services with appropriate use cases (for example, DNS)
Skills in:
• Configuring appropriate NAT gateway types for a network (for example, a single shared NAT gateway compared with NAT gateways for each Availability Zone)
• Configuring appropriate network connections (for example, Direct Connect compared with VPN compared with internet)
• Configuring appropriate network routes to minimize network transfer costs (for example, Region to Region, Availability Zone to Availability Zone, private to public, Global Accelerator, VPC endpoints)
• Determining strategic needs for content delivery networks (CDNs) and edge caching
• Reviewing existing workloads for network optimizations
• Selecting an appropriate throttling strategy
• Selecting the appropriate bandwidth allocation for a network device (for example, a single VPN compared with multiple VPNs, Direct Connect speed)

Which key tools, technologies, and concepts might be covered on the exam?
The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
• Compute
• Cost management
• Database
• Disaster recovery
• High performance
• Management and governance
• Microservices and component decoupling
• Migration and data transfer
• Networking, connectivity, and content delivery
• Resiliency
• Security
• Serverless and event-driven design principles
• Storage
AWS Services and Features
There are lots of new services and feature updates in scope for the new AWS Certified Solutions Architect Associate certification! Here’s a list of some of the new services that will be in scope for the new version of the exam:
Analytics:
• Amazon Athena
• AWS Data Exchange
• AWS Data Pipeline
• Amazon EMR
• AWS Glue
• Amazon Kinesis
• AWS Lake Formation
• Amazon Managed Streaming for Apache Kafka (Amazon MSK)
• Amazon OpenSearch Service (Amazon Elasticsearch Service)
• Amazon QuickSight
• Amazon Redshift
Application Integration:
• Amazon AppFlow
• AWS AppSync
• Amazon EventBridge (Amazon CloudWatch Events)
• Amazon MQ
• Amazon Simple Notification Service (Amazon SNS)
• Amazon Simple Queue Service (Amazon SQS)
• AWS Step Functions
AWS Cost Management:
• AWS Budgets
• AWS Cost and Usage Report
• AWS Cost Explorer
• Savings Plans
Compute:
• AWS Batch
• Amazon EC2
• Amazon EC2 Auto Scaling
• AWS Elastic Beanstalk
• AWS Outposts
• AWS Serverless Application Repository
• VMware Cloud on AWS
• AWS Wavelength
Containers:
• Amazon Elastic Container Registry (Amazon ECR)
• Amazon Elastic Container Service (Amazon ECS)
• Amazon ECS Anywhere
• Amazon Elastic Kubernetes Service (Amazon EKS)
• Amazon EKS Anywhere
• Amazon EKS Distro
Database:
• Amazon Aurora
• Amazon Aurora Serverless
• Amazon DocumentDB (with MongoDB compatibility)
• Amazon DynamoDB
• Amazon ElastiCache
• Amazon Keyspaces (for Apache Cassandra)
• Amazon Neptune
• Amazon Quantum Ledger Database (Amazon QLDB)
• Amazon RDS
• Amazon Redshift
• Amazon Timestream
Developer Tools:
• AWS X-Ray
Front-End Web and Mobile:
• AWS Amplify
• Amazon API Gateway
• AWS Device Farm
• Amazon Pinpoint
Machine Learning:
• Amazon Comprehend
• Amazon Forecast
• Amazon Fraud Detector
• Amazon Kendra
• Amazon Lex
• Amazon Polly
• Amazon Rekognition
• Amazon SageMaker
• Amazon Textract
• Amazon Transcribe
• Amazon Translate
Management and Governance:
• AWS Auto Scaling
• AWS CloudFormation
• AWS CloudTrail
• Amazon CloudWatch
• AWS Command Line Interface (AWS CLI)
• AWS Compute Optimizer
• AWS Config
• AWS Control Tower
• AWS License Manager
• Amazon Managed Grafana
• Amazon Managed Service for Prometheus
• AWS Management Console
• AWS Organizations
• AWS Personal Health Dashboard
• AWS Proton
• AWS Service Catalog
• AWS Systems Manager
• AWS Trusted Advisor
• AWS Well-Architected Tool
Media Services:
• Amazon Elastic Transcoder
• Amazon Kinesis Video Streams
Migration and Transfer:
• AWS Application Discovery Service
• AWS Application Migration Service (CloudEndure Migration)
• AWS Database Migration Service (AWS DMS)
• AWS DataSync
• AWS Migration Hub
• AWS Server Migration Service (AWS SMS)
• AWS Snow Family
• AWS Transfer Family
Networking and Content Delivery:
• Amazon CloudFront
• AWS Direct Connect
• Elastic Load Balancing (ELB)
• AWS Global Accelerator
• AWS PrivateLink
• Amazon Route 53
• AWS Transit Gateway
• Amazon VPC
• AWS VPN
Security, Identity, and Compliance:
• AWS Artifact
• AWS Audit Manager
• AWS Certificate Manager (ACM)
• AWS CloudHSM
• Amazon Cognito
• Amazon Detective
• AWS Directory Service
• AWS Firewall Manager
• Amazon GuardDuty
• AWS Identity and Access Management (IAM)
• Amazon Inspector
• AWS Key Management Service (AWS KMS)
• Amazon Macie
• AWS Network Firewall
• AWS Resource Access Manager (AWS RAM)
• AWS Secrets Manager
• AWS Security Hub
• AWS Shield
• AWS Single Sign-On
• AWS WAF
Serverless:
• AWS AppSync
• AWS Fargate
• AWS Lambda
Storage:
• AWS Backup
• Amazon Elastic Block Store (Amazon EBS)
• Amazon Elastic File System (Amazon EFS)
• Amazon FSx (for all types)
• Amazon S3
• Amazon S3 Glacier
• AWS Storage Gateway
Out-of-scope AWS services and features
The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.
Analytics:
• Amazon CloudSearch
Application Integration:
• Amazon Managed Workflows for Apache Airflow (Amazon MWAA)
AR and VR:
• Amazon Sumerian
Blockchain:
• Amazon Managed Blockchain
Compute:
• Amazon Lightsail
Database:
• Amazon RDS on VMware
Developer Tools:
• AWS Cloud9
• AWS Cloud Development Kit (AWS CDK)
• AWS CloudShell
• AWS CodeArtifact
• AWS CodeBuild
• AWS CodeCommit
• AWS CodeDeploy
• Amazon CodeGuru
• AWS CodeStar
• Amazon Corretto
• AWS Fault Injection Simulator (AWS FIS)
• AWS Tools and SDKs
Front-End Web and Mobile:
• Amazon Location Service
Game Tech:
• Amazon GameLift
• Amazon Lumberyard
Internet of Things:
• All services
Which new AWS services will be covered in the SAA-C03?
AWS Data Exchange,
AWS Data Pipeline,
AWS Lake Formation,
Amazon Managed Streaming for Apache Kafka,
Amazon AppFlow,
AWS Outposts,
VMware Cloud on AWS,
AWS Wavelength,
Amazon Neptune,
Amazon Quantum Ledger Database,
Amazon Timestream,
AWS Amplify,
Amazon Comprehend,
Amazon Forecast,
Amazon Fraud Detector,
Amazon Kendra,
AWS License Manager,
Amazon Managed Grafana,
Amazon Managed Service for Prometheus,
AWS Proton,
Amazon Elastic Transcoder,
Amazon Kinesis Video Streams,
AWS Application Discovery Service,
AWS WAF,
AWS AppSync
Get the AWS SAA-C03 Exam Prep App on: iOS – Android – Windows 10/11
AWS solutions architect associate exam prep facts and summaries questions and answers dump – Solution Architecture Definition 1:
Solution architecture is a practice of defining and describing an architecture of a system delivered in context of a specific solution and as such it may encompass description of an entire system or only its specific parts. Definition of a solution architecture is typically led by a solution architect.
AWS solutions architect associate exam prep facts and summaries questions and answers dump – Solution Architecture Definition 2:
The AWS Certified Solutions Architect – Associate examination is intended for individuals who perform a solutions architect role and have one or more years of hands-on experience designing available, cost-efficient, fault-tolerant, and scalable distributed systems on AWS.
AWS solutions architect associate exam prep facts and summaries questions and answers dump – AWS Solution Architect Associate Exam Facts and Summaries (SAA-C03)
- Take an AWS Training Class
- Study AWS Whitepapers and FAQs: AWS Well-Architected webpage (various whitepapers linked)
- If you are running an application in a production environment and must add a new EBS volume with data from a snapshot, what can you do to avoid degraded performance during the volume’s first use?
Initialize the volume by reading each storage block on it.
Volumes created from an EBS snapshot must be initialized: each block incurs extra latency the first time it is read, and performance can be reduced by up to 50% until initialization completes. You can avoid this impact in production environments by pre-warming the volume, that is, by reading all of its blocks once (a minimal pre-warming sketch appears after this list).
- If you are running a legacy application that has hard-coded static IP addresses and it is running on an EC2 instance, what is the best failover solution that allows you to keep the same IP address on a new instance?
Elastic IP addresses (EIPs) are designed to be attached, detached, and moved from one EC2 instance to another. They are a great solution for keeping a static IP address and moving it to a new instance if the current instance fails, which reduces or eliminates any downtime users may experience.
- Which feature of Intel processors helps to encrypt data without a significant impact on performance?
AES-NI.
- You can mount EFS from which two of the following?
- On-prem servers running Linux
- EC2 instances running Linux
EFS is not compatible with Windows operating systems.
- When a file is encrypted while it is stored, rather than while it is in transit, this is known as encryption at rest. What is an example of encryption at rest? Encrypting an EBS volume or an S3 object with AWS KMS is an example.
- When would vertical scaling be necessary? When an application is built entirely as a single code base, otherwise known as a monolithic application.
Fault-Tolerance allows for continuous operation throughout a failure, which can lead to a low Recovery Time Objective. RPO vs RTO
- High-Availability means automating tasks so that an instance will quickly recover, which can lead to a low Recovery Time Objective. RPO vs. RTO
- Frequent backups reduce the time between the last backup and recovery point, otherwise known as the Recovery Point Objective. RPO vs. RTO
- Which represents the difference between Fault-Tolerance and High-Availability? High-Availability means the system will quickly recover from a failure event, and Fault-Tolerance means the system will maintain operations during a failure.
- From a security perspective, what is a principal? A principal is an entity that can act on a system. Both anonymous users and authenticated users fall under the definition of a principal.
- What are two types of session data saving for an Application Session State? Stateless and Stateful
- It is the customer’s responsibility to patch the operating system on an EC2 instance (per the shared responsibility model).
- In designing an environment, what four main points should a Solutions Architect keep in mind? Cost efficiency, security, application session state, and undifferentiated heavy lifting: these four points should frame the design of an environment.
- In the context of disaster recovery, what does RPO stand for? RPO is the abbreviation for Recovery Point Objective.
- What are the benefits of horizontal scaling?
Vertical scaling can be costly while horizontal scaling is cheaper.
Horizontal scaling suffers from none of the size limitations of vertical scaling.
Having horizontal scaling means you can easily route traffic to another instance of a server.
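As promised in the EBS initialization fact above, here is a minimal pre-warming sketch. It is illustrative only: it assumes the restored volume is attached as /dev/xvdf on a Linux instance and that the script runs as root; in practice, tools such as dd or fio are commonly used for the same purpose.

```python
# Read every block of a volume restored from a snapshot so that first-use I/O
# does not pay the initialization penalty. The device path is a placeholder.
CHUNK = 1024 * 1024  # read 1 MiB at a time


def prewarm(device: str = "/dev/xvdf") -> None:
    read_bytes = 0
    with open(device, "rb", buffering=0) as dev:
        while True:
            block = dev.read(CHUNK)
            if not block:
                break
            read_bytes += len(block)
    print(f"Initialized {read_bytes / 1024 ** 3:.1f} GiB from {device}")


if __name__ == "__main__":
    prewarm()
```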
Reference: AWS Solution Architect Associate Exam Prep
Top 100 AWS solutions architect associate exam prep facts and summaries questions and answers dump – SAA-C03
Top AWS solutions architect associate exam prep facts and summaries questions and answers dump – Quizzes

Q1: A Solutions Architect is designing a critical business application with a relational database that runs on an EC2 instance. It requires a single EBS volume that can support up to 16,000 IOPS.
Which Amazon EBS volume type can meet the performance requirements of this application?
- A. EBS Provisioned IOPS SSD
- B. EBS Throughput Optimized HDD
- C. EBS General Purpose SSD
- D. EBS Cold HDD
Q2: An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk.
Which solution will resolve the security concern?
- A. Access the data through an Internet Gateway.
- B. Access the data through a VPN connection.
- C. Access the data through a NAT Gateway.
- D. Access the data through a VPC endpoint for Amazon S3
Q3: An organization is building an Amazon Redshift cluster in their shared services VPC. The cluster will host sensitive data.
How can the organization control which networks can access the cluster?
- A. Run the cluster in a different VPC and connect through VPC peering.
- B. Create a database user inside the Amazon Redshift cluster only for users on the network.
- C. Define a cluster security group for the cluster that allows access from the allowed networks.
- D. Only allow access to networks that connect with the shared services network via VPN.

Q4: A web application allows customers to upload orders to an S3 bucket. The resulting Amazon S3 events trigger a Lambda function that inserts a message into an SQS queue. A single EC2 instance reads messages from the queue, processes them, and stores them in a DynamoDB table partitioned by unique order ID. Next month, traffic is expected to increase by a factor of 10, and a Solutions Architect is reviewing the architecture for possible scaling problems.
Which component is MOST likely to need re-architecting to be able to scale to accommodate the new traffic?
- A. Lambda function
- B. SQS queue
- C. EC2 instance
- D. DynamoDB table
Q5: An application requires a highly available relational database with an initial storage capacity of 8 TB. The database will grow by 8 GB every day. To support expected traffic, at least eight read replicas will be required to handle database reads.
Which option will meet these requirements?
- A. DynamoDB
- B. Amazon S3
- C. Amazon Aurora
- D. Amazon Redshift
Q6: How can you improve the performance of EFS?
- A. Use an instance-store backed EC2 instance.
- B. Provision more throughput than is required.
- C. Divide your file system into multiple smaller file systems.
- D. Provision higher IOPS for your EFS.
Q7:
If you are designing an application that requires fast (10 – 25Gbps), low-latency connections between EC2 instances, what EC2 feature should you use?
- A. Snapshots
- B. Instance store volumes
- C. Placement groups
- D. IOPS provisioned instances.

Q8: A Solution Architect is designing an online shopping application running in a VPC on EC2 instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application tier must read and write data to a customer managed database cluster. There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet.
Which VPC design meets these requirements?
- A. Public subnets for both the application tier and the database cluster
- B. Public subnets for the application tier, and private subnets for the database cluster
- C. Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster
- D. Public subnets for the application tier, and private subnets for the database cluster and NAT Gateway
Q9: What command should you run on a running instance if you want to view its user data (that is used at launch)?
- A. curl http://254.169.254.169/latest/user-data
- B. curl http://localhost/latest/meta-data/bootstrap
- C. curl http://localhost/latest/user-data
- D. curl http://169.254.169.254/latest/user-data

Q10: A company is developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? (Select TWO.)
- A. CloudWatch
- B. DynamoDB
- C. Elastic Load Balancing
- D. ElastiCache
- E. Storage Gateway
Q11: From a security perspective, what is a principal?
- A. An identity
- B. An anonymous user
- C. An authenticated user
- D. A resource
Q12: What are the characteristics of a tiered application?
- A. All three application layers are on the same instance
- B. The presentation tier is on a separate instance from the logic layer
- C. None of the tiers can be cloned
- D. The logic layer is on a separate instance from the data layer
- E. Additional machines can be added to help the application by implementing horizontal scaling
- F. Incapable of horizontal scaling
Q13: When using horizontal scaling, how can a server’s capacity closely match its rising demand?
A. By frequently purchasing additional instances and smaller resources
B. By purchasing more resources very far in advance
C. By purchasing more resources after demand has risen
D. It is not possible to predict demand
Q14: What is the concept behind AWS’ Well-Architected Framework?
A. It’s a set of best practice areas, principles, and concepts that can help you implement effective AWS solutions.
B. It’s a set of best practice areas, principles, and concepts that can help you implement effective solutions tailored to your specific business.
C. It’s a set of best practice areas, principles, and concepts that can help you implement effective solutions from another web host.
D. It’s a set of best practice areas, principles, and concepts that can help you implement effective E-Commerce solutions.
Question 127: Which options are examples of steps you take to protect your serverless application from attacks? (Select FOUR.)
A. Update your operating system with the latest patches.
B. Configure geoblocking on Amazon CloudFront in front of regional API endpoints.
C. Disable origin access identity on Amazon S3.
D. Disable CORS on your APIs.
E. Use resource policies to limit access to your APIs to users from a specified account.
F. Filter out specific traffic patterns with AWS WAF.
G. Parameterize queries so that your Lambda function expects a single input.
Question 128: Which options reflect best practices for automating your deployment pipeline with serverless applications? (Select TWO.)
A. Select one deployment framework and use it for all of your deployments for consistency.
B. Use different AWS accounts for each environment in your deployment pipeline.
C. Use AWS SAM to configure safe deployments and include pre- and post-traffic tests.
D. Create a specific AWS SAM template to match each environment to keep them distinct.
Question 129: Your application needs to connect to an Amazon RDS instance on the backend. What is the best recommendation to the developer whose function must read from and write to the Amazon RDS instance?
A. Use reserved concurrency to limit the number of concurrent functions that would try to write to the database
B. Use the database proxy feature to provide connection pooling for the functions
C. Initialize the number of connections you want outside of the handler
D. Use the database TTL setting to clean up connections

Question 130: A company runs a cron job on an Amazon EC2 instance on a predefined schedule. The cron job calls a bash script that encrypts a 2 KB file. A security engineer creates an AWS Key Management Service (AWS KMS) CMK with a key policy.
The key policy and the EC2 instance role have the necessary configuration for this job.
Which process should the bash script use to encrypt the file?
A) Use the aws kms encrypt command to encrypt the file by using the existing CMK.
B) Use the aws kms create-grant command to generate a grant for the existing CMK.
C) Use the aws kms encrypt command to generate a data key. Use the plaintext data key to encrypt the file.
D) Use the aws kms generate-data-key command to generate a data key. Use the encrypted data key to encrypt the file.
Question 131: A Security engineer must develop an AWS Identity and Access Management (IAM) strategy for a company’s organization in AWS Organizations. The company needs to give developers autonomy to develop and test their applications on AWS, but the company also needs to implement security guardrails to help protect itself. The company creates and distributes applications with different levels of data classification and types. The solution must maximize scalability.
Which combination of steps should the security engineer take to meet these requirements? (Choose three.)
A) Create an SCP to restrict access to highly privileged or unauthorized actions to specific IAM principals. Assign the SCP to the appropriate AWS accounts.
B) Create an IAM permissions boundary to allow access to specific actions and IAM principals. Assign the IAM permissions boundary to all IAM principals within the organization.
C) Create a delegated IAM role that has capabilities to create other IAM roles. Use the delegated IAM role to provision IAM principals by following the principle of least privilege.
D) Create OUs based on data classification and type. Add the AWS accounts to the appropriate OU. Provide developers access to the AWS accounts based on business need.
E) Create IAM groups based on data classification and type. Add only the required developers’ IAM role to the IAM groups within each AWS account.
F) Create IAM policies based on data classification and type. Add the minimum required IAM policies to the developers’ IAM role within each AWS account.
Question 132: A company is ready to deploy a public web application. The company will use AWS and will host the application on an Amazon EC2 instance. The company must use SSL/TLS encryption. The company is already using AWS Certificate Manager (ACM) and will export a certificate for use with the deployment.
How can a security engineer deploy the application to meet these requirements?
A) Put the EC2 instance behind an Application Load Balancer (ALB). In the EC2 console, associate the certificate with the ALB by choosing HTTPS and 443.
B) Put the EC2 instance behind a Network Load Balancer. Associate the certificate with the EC2 instance.
C) Put the EC2 instance behind a Network Load Balancer (NLB). In the EC2 console, associate the certificate with the NLB by choosing HTTPS and 443.
D) Put the EC2 instance behind an Application Load Balancer. Associate the certificate with the EC2 instance.
What are the 6 pillars of the AWS Well-Architected Framework?
AWS Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. Based on six pillars (operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability), AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures and implement designs that can scale over time.
1. Operational Excellence
The operational excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. You can find prescriptive guidance on implementation in the Operational Excellence Pillar whitepaper.
2. Security
The security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. You can find prescriptive guidance on implementation in the Security Pillar whitepaper.
3. Reliability
The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. You can find prescriptive guidance on implementation in the Reliability Pillar whitepaper.
4. Performance Efficiency
The performance efficiency pillar includes the ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve. You can find prescriptive guidance on implementation in the Performance Efficiency Pillar whitepaper.
5. Cost Optimization
The cost optimization pillar includes the ability to avoid or eliminate unneeded cost or suboptimal resources. You can find prescriptive guidance on implementation in the Cost Optimization Pillar whitepaper.
6. Sustainability
- The ability to increase efficiency across all components of a workload by maximizing the benefits from the provisioned resources.
- There are six best practice areas for sustainability in the cloud:
- Region Selection – AWS Global Infrastructure
- User Behavior Patterns – Auto Scaling, Elastic Load Balancing
- Software and Architecture Patterns – AWS Design Principles
- Data Patterns – Amazon EBS, Amazon EFS, Amazon FSx, Amazon S3
- Hardware Patterns – Amazon EC2, AWS Elastic Beanstalk
- Development and Deployment Process – AWS CloudFormation
- Key AWS service:
- Amazon EC2 Auto Scaling
Source: 6 Pillars of the AWS Well-Architected Framework
The AWS Well-Architected Framework provides architectural best practices across the six pillars for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. The framework provides a set of questions that allows you to review an existing or proposed architecture. It also provides a set of AWS best practices for each pillar.
Using the Framework in your architecture helps you produce stable and efficient systems, which allows you to focus on functional requirements.
Other AWS Facts and Summaries and Questions/Answers Dump
- AWS Certified Solution Architect Associate Exam Prep App
- AWS S3 facts and summaries and Q&A Dump
- AWS DynamoDB facts and summaries and Questions and Answers Dump
- AWS EC2 facts and summaries and Questions and Answers Dump
- AWS Serverless facts and summaries and Questions and Answers Dump
- AWS Developer and Deployment Theory facts and summaries and Questions and Answers Dump
- AWS IAM facts and summaries and Questions and Answers Dump
- AWS Lambda facts and summaries and Questions and Answers Dump
- AWS SQS facts and summaries and Questions and Answers Dump
- AWS RDS facts and summaries and Questions and Answers Dump
- AWS ECS facts and summaries and Questions and Answers Dump
- AWS CloudWatch facts and summaries and Questions and Answers Dump
- AWS SES facts and summaries and Questions and Answers Dump
- AWS EBS facts and summaries and Questions and Answers Dump
- AWS ELB facts and summaries and Questions and Answers Dump
- AWS Autoscaling facts and summaries and Questions and Answers Dump
- AWS VPC facts and summaries and Questions and Answers Dump
- AWS KMS facts and summaries and Questions and Answers Dump
- AWS Elastic Beanstalk facts and summaries and Questions and Answers Dump
- AWS CodeBuild facts and summaries and Questions and Answers Dump
- AWS CodeDeploy facts and summaries and Questions and Answers Dump
- AWS CodePipeline facts and summaries and Questions and Answers Dump
What does undifferentiated heavy lifting mean?
The reality, of course, today is that if you come up with a great idea you don’t get to go quickly to a successful product. There’s a lot of undifferentiated heavy lifting that stands between your idea and that success. The kinds of things that I’m talking about when I say undifferentiated heavy lifting are things like these: figuring out which servers to buy, how many of them to buy, what time line to buy them.
Eventually you end up with heterogeneous hardware and you have to match that. You have to think about backup scenarios if you lose your data center or lose connectivity to a data center. Eventually you have to move facilities. There’s negotiations to be done. It’s a very complex set of activities that really is a big driver of ultimate success.
But they are undifferentiated from, it’s not the heart of, your idea. We call this muck. And it gets worse because what really happens is you don’t have to do this one time. You have to drive this loop. After you get your first version of your idea out into the marketplace, you’ve done all that undifferentiated heavy lifting, you find out that you have to cycle back. Change your idea. The winners are the ones that can cycle this loop the fastest.
On every cycle of this loop you have this undifferentiated heavy lifting, or muck, that you have to contend with. I believe that for most companies, and it’s certainly true at Amazon, that 70% of your time, energy, and dollars go into the undifferentiated heavy lifting and only 30% of your energy, time, and dollars gets to go into the core kernel of your idea.
I think what people are excited about is that they’re going to get a chance they see a future where they may be able to invert those two. Where they may be able to spend 70% of their time, energy and dollars on the differentiated part of what they’re doing.
AWS Certified Solutions Architect Associate questions and answers from around the web.
Testimonial: Passed SAA-C02!
So my exam was yesterday and I got the results in 24 hours. I think that’s how they review all SAA exams now, not showing the results right away anymore.
I scored 858. I was practicing with Stephane’s Udemy lectures and Bonso exam tests. My practice test results were as follows: Test 1: 63%, 93%; Test 2: 67%, 87%; Test 3: 81%; Test 4: 72%; Test 5: 75%; Test 6: 81%; Stephane’s test: 80%.
I was reading all question explanations (even the ones I got correct)
The actual exam was pretty much similar to these. The topics I got were:
A lot of S3 (make sure you know all of it from head to toes)
VPC peering
DataSync and Database Migration Service in the same questions. Make sure you know the difference.
One EKS question
2-3 KMS questions
Security group question
A lot of RDS Multi-AZ
SQS + SNS fan out pattern
ECS microservice architecture question
Route 53
NAT gateway
And that’s all I can remember)
I took extra 30 minutes, because English is not my native language and I had plenty of time to think and then review flagged questions.
Good luck with your exams guys!
Testimonial: Passed SAA-C02

Hey guys, just giving my update so all of you guys working towards your certs can stay motivated as these success stories drove me to reach this goal.
Background: 12 years of military IT experience, never worked with the cloud. I’ve done 7 deployments (that is a lot in 12 years), at which point I came home from the last one burnt out with a family that barely knew me. I knew I needed a change, but had no clue where to start or what I wanted to do. I wasn’t really interested in IT but I knew it’d pay the bills. After seeing videos about people in IT working from home(which after 8+ years of being gone from home really appealed to me), I stumbled across a video about a Solutions Architect’s daily routine working from home and got me interested in AWS.
AWS Solutions Architect SAA Certification Preparation time: It took me 68 days straight of hard work to pass this exam with confidence. No rest days, more than 120 pages of hand-written notes and hundreds and hundreds of flash cards.
In the beginning, I hopped on Stephane Maarek’s course for the CCP exam just to see if it was for me. I did the course in about a week and then, after doing some research on here, got the CCP practice exams from tutorialsdojo.com. Two weeks after starting the Udemy course, I passed the exam. By that point, I’d already done lots of research on the different career paths and the best way to study, etc.
Cantrill(10/10) – That same day, I hopped onto Cantrill’s course for the SAA and got to work. Somebody had mentioned that by doing his courses you’d be over-prepared for the exam. While I think a combination of material is really important for passing the certification with confidence, I can say without a doubt Cantrill’s courses got me 85-90% of the way there. His forum is also amazing, and has directly contributed to me talking with somebody who works at AWS to land me a job, which makes the money I spent on all of his courses A STEAL. As I continue my journey (up next is SA Pro), I will be using all of his courses.
Neal Davis(8/10) – After completing Cantrill’s course, I found myself needing a resource to reinforce all the material I’d just learned. AWS is an expansive platform and the many intricacies of the different services can be tricky. For this portion, I relied on Neal Davis’s Training Notes series. These training notes are a very condensed version of the information you’ll need to pass the exam, and with the proper context are very useful to find the things you may have missed in your initial learnings. I will be using his other Training Notes for my other exams as well.
TutorialsDojo(10/10) – These tests filled in the gaps and allowed me to spot my weaknesses and shore them up. I actually think my real exam was harder than these, but because I’d spent so much time on the material I got wrong, I was able to pass the exam with a safe score.
As I said, I was surprised at how difficult the exam was. A lot of my questions were related to DBs, and a lot of them gave no context as to whether the data being loaded into them was SQL or NoSQL, which made the choice selection a little frustrating. A lot of the questions have 2 VERY SIMILAR answers, and often the wording of the answers can be easy to misinterpret (such as when you are creating a Read Replica, do you attach it to the primary application DB that is slowing down because of read issues, or attach it to the service that is causing the primary DB to slow down). For context, I was scoring 95-100% on the TD exams prior to taking the test and managed an 823 on the exam, so I don’t know if I got unlucky with a hard test or if I’m not as prepared as I thought I was (i.e. over-thinking questions).
Anyways, up next is going back over the practical parts of the course as I gear up for the SA Pro exam. I will be taking my time with this one, and re-learning the Linux CLI in preparation for finding a new job.
PS if anybody on here is hiring, I’m looking! I’m the hardest worker I know and my goal is to make your company as streamlined and profitable as possible. 🙂
Testimonial: How did you prepare for AWS Certified Solutions Architect – Associate Level certification?
Best way to prepare for aws solution architect associate certification
Practical knowledge accounts for roughly 30%; the rest comes from Jayendra’s blog and the practice dumps.
Buying Udemy courses alone doesn’t make you pass; I can say for sure that without the dumps and without Jayendra’s blog it is not easy to clear the certification.
Read FAQs of S3, IAM, EC2, VPC, SQS, Autoscaling, Elastic Load Balancer, EBS, RDS, Lambda, API Gateway, ECS.
Read the Security Whitepaper and Shared Responsibility model.
Also very important: expect basic questions on the topics most recently introduced to the exam, such as Amazon Kinesis, etc.
– ACloudGuru course with practice tests
– Created my own cheat sheet in Excel
– Practice questions on various websites
– A few AWS service FAQs
– Some questions were about your understanding of which service to pick for the use case.
– many questions on VPC
– a couple of unexpected questions on AWS CloudHSM, AWS Systems Manager, and Amazon Athena
– encryption at rest and in transit services
– migration from on-premises to AWS
– backing up data in an AZ vs. regionally
I believe the time was sufficient.
Overall I feel AWS SAA was more challenging in theory than GCP Associate CE.
some resources I bookmarked:
- Comparison of AWS Services
- Solutions Architect – Associate | Qwiklabs
- okeeffed/cheat-sheets
- A curated list of AWS resources to prepare for the AWS Certifications
- AWS Cheat Sheet
Whitepapers contain important information about each service and are published by Amazon on their website. If you are preparing for the AWS certifications, it is very important to read some of the most recommended whitepapers before writing the exam.
The following is the list of whitepapers that are useful for preparing for the Solutions Architect exam. You will also be able to find the list of whitepapers in the exam blueprint.
- Overview of Security Processes
- Storage Options in the Cloud
- Defining Fault Tolerant Applications in the AWS Cloud
- Overview of Amazon Web Services
- Compliance Whitepaper
- Architecting for the AWS Cloud
Data security questions can be among the more challenging, and it’s worth noting that you need to have a good understanding of the security processes described in the whitepaper titled “Overview of Security Processes”.
In the above list, the most important whitepapers are Overview of Security Processes and Storage Options in the Cloud. Read more here…
Big thanks to /u/acantril for his amazing course – AWS Certified Solutions Architect – Associate (SAA-C02) – the best IT course I’ve ever had – and I’ve done many on various other platforms:
CBTNuggets
LinuxAcademy
ACloudGuru
Udemy
Linkedin
O’Reilly
If you’re on the fence with buying one of his courses, stop thinking and buy it, I guarantee you won’t regret it! Other materials used for study:
Jon Bonso Practice Exams for SAA-C02 @ Tutorialsdojo (amazing practice exams!)
Random YouTube videos (example)
Official AWS Documentation (example)
TechStudySlack (learning community)
Study duration approximately ~3 months with the following regimen:
Daily study from 30min to 2hrs
Usually early morning before work
Sometimes on the train when commuting from/to work
Sometimes in the evening
Due to being a father/husband, study wasn’t always possible
All learned topics reviewed weekly
Testimonial: I passed SAA-C02 … But don’t do what I did to pass it

I’ve been following this subreddit for a while and have gotten some helpful tips, so I’d like to give back with my two cents. FYI, I passed the exam with a score of 788.
The exam materials that I used were the following:
AWS Certified Solutions Architect Associate All-in-One Exam Guide (Banerjee)
Stephane Maarek’s Udemy course, and his 6 practice exams
Adrian Cantrill’s online course (about 60% done)
TutorialDojo’s exams
(My company has a Udemy business account so I was able to use Stephane’s course/exams)
I scheduled my exam at the end of March, and started with Adrian’s. But I was dumb thinking that I could go through his course within 3 weeks… I stopped at around 12% of his course, went to the textbook, and finished reading the all-in-one exam guide within a weekend. Then I started going through Stephane’s course. While working through the course, I pushed back the exam to the end of April, because I knew I wouldn’t be ready by the time the exam came along.
Five days before the exam, I finished Stephane’s course, and then did his final exam on the course. I failed miserably (around 50%). So I did one of Stephane’s practice exams and did worse (42%). I thought maybe it might be his exams that are slightly difficult, so I went and bought Jon Bonso’s exams and got 60% on his first one. And then I realized, based on all the questions on the exams, that I was definitely lacking some fundamentals. I went back to Adrian’s course and things were definitely sticking more – I think it has to do with his explanations + more practical stuff. Unfortunately, I could not finish his course before the exam (because I was cramming), and by the day of the exam, I could only do four of Bonso’s six exams, barely passing one of them.
Please, don’t do what I did. I was desperate to get this thing over with. I wanted to move on and work on other things for the job search, but if you’re not in this situation, please don’t do this. I can’t for the love of god tell you about OAI and CloudFront and why that’s different from an S3 URL. The only thing that I can remember is all the practical stuff that I did with Adrian’s course. I’ll never forget how to create a VPC, because he makes you manually go through it. I’m not against Stephane’s course – they are each different in their own way (see the tips below).
So here’s what I recommend doing before sitting the AWS exam:
Don’t schedule your exam beforehand. Go through the materials you are using, and make sure you get at least 80% on all of Jon Bonso’s exams (I’d recommend maybe 90% or higher)
If you like to learn things practically, I do recommend Adrian’s course. If you like to learn things conceptually, go with Stephane Maarek’s course. I find Stephane’s course more detailed when going through different architectures, but I can’t say that for sure because I didn’t finish Adrian’s course
Jon Bonso’s exams were about the same difficulty as the actual exam, but slightly more tricky. For example, many of the questions will give you two different situations and you really have to figure out what is being asked, because the situations might seem to contradict each other while the actual question is asking one specific thing. However, there were a few questions that were definitely obvious if you knew the service.
I’m upset that even though I passed the exam, I’m still lacking some practical skills, so I’m just going to go through Adrian’s Developer course, but without cramming this time. If you actually learn the materials and practice them, they are definitely useful in the real world. I hope this will help you pass and actually learn the stuff.
P.S I vehemently disagree with Adrian in one thing in his course. doggogram.io is definitely better than catagram.io, although his cats are pretty cool
Testimonial: I passed the SAA-C02 exam!

I sat the exam at a PearsonVUE test centre and scored 816.
The exam had lots of questions around S3, RDS and storage. To be honest it was a bit of a blur but they are the ones I remember.
I was a bit worried before sitting the exam as I only hit 76% in the official AWS practice exam the night before, but it turned out alright in the end!
I have around 8 years of experience in IT but AWS was relatively new to me around 5 weeks ago.
Training Material Used
Firstly I ran through the u/stephanemaarek course which I found to pretty much cover all that was required!
I then used the u/Tutorials_Dojo practice exams. I took one before starting Stephane’s course to see where I was at with no training. I got 46% but I suppose a few of them were lucky guesses!
I then finished the course and took another test and hit around 65%. TD was great as they gave explanations of the answers. I then used this to go back to the course and go over my weak areas again.
I then couldn’t seem to get higher than the low 70s on the practice exams, so I went through u/neal-davis’ course; this was also great as it had an “Exam Cram” video at the end of each topic.
I also set up flashcards on BrainScape which helped me remember AWS services and what their function is.
All in all it was a great learning experience and I look forward to putting my skills into action!
Testimonial: I passed SAA with (799), had about an hour left on the clock.
Many FSx / EFS / Lustre questions
S3 Use cases, storage tiers, cloudfront were pretty prominent too
Only got one “figure out what’s wrong with this IAM policy” question
A handful of dynamodb questions and a handful for picking use cases between different database types or caching layers.
Other typical tips: when you’re unclear on which answer you should pick, or if they seem very similar, work on eliminating answers first. “It can’t be X because of Y” – that can help a lot.
Testimonial: Passed the AWS Solutions Architect Associate exam!
I prepared mostly from freely available resources as my basics were strong. Bought Jon Bonso’s tests on Udemy and they turned out to be super important while preparing for those particular type of questions (i.e. the questions which feel subjective, but they aren’t), understanding line of questioning and most suitable answers for some common scenarios.
Created a Notion notebook to note down those common scenarios, exceptions, what supports what, integrations etc. Used that notebook and cheat sheets on Tutorials Dojo website for revision on final day.
Found the exam was little tougher than Jon Bonso’s, but his practice tests on Udemy were crucial. Wouldn’t have passed it without them.
Piece of advice for upcoming test aspirants: Get your basics right, especially networking. Understand properly how different services interact in VPC. Focus more on the last line of the question. It usually gives you a hint upon what exactly is needed. Whether you need cost optimization, performance efficiency or high availability. Little to no operational effort means serverless. Understand all serverless services thoroughly.
Testimonial: Passed Solutions Architect Associate (SAA-C02) Today!
I have almost no experience with AWS, except for completing the Certified Cloud Practitioner earlier this year. My work is pushing all IT employees to complete some cloud training and certifications, which is why I chose to do this.
How I Studied:
My company pays for acloudguru subscriptions for its employees, so I used that for the bulk of my learning. I took notes on 3×5 notecards on the key terms and concepts for review.
Once I scored passing grades on the ACG practice tests, I took the Jon Bonso tests on Udemy, which are much more difficult and fairly close to the difficulty of the actual exam. I scored 45%-74% on every Bonso practice test, and spent 1-2 hours after each test reviewing what I missed, supplementing my note cards, and taking time to understand my weak spots. I only took these tests once each, but in between each practice test, I would review all my note cards until I had the content largely memorized.
The Test:
This was one of the most difficult certification tests I’ve ever done. The exam was remote proctored with PearsonVUE (I used PSI for the CCP and didn’t like it as much). I felt like I was failing half the time. I marked about 25% of the questions for review, and I used up the entire allotted time. The questions are mostly about understanding which services interact with which other services, or which services are incompatible with the scenario. It was important for me to read through each response and eliminate the ones that don’t make sense. A lot of the responses mentioned AWS services that sound good but don’t actually work together (i.e. if it doesn’t make sense to have service X querying database Y, that probably isn’t the right answer). I can’t point to one domain that really needs to be studied more than any other. You need to know all of the content for the exam.
Final Thoughts:
The ACG practice tests are not a good metric for success for the actual SAA exam, and I would not have passed without Bonso’s tests showing me my weak spots. PearsonVUE is better than PSI. Make sure to study everything thoroughly and review excessively. You don’t necessarily need 5 different study sources and years of experience to be able to pass (although both of those definitely help) and good luck to anyone that took the time to read!

Testimonial: Passed AWS CSAA today!
AWS Certified Solutions Architect Associate
So glad to pass my first AWS certification after 6 weeks of preparation.
My Preparation:
After a series of trials and errors in picking the appropriate learning content, I eventually went with the community’s advice and took the course presented by the amazing u/stephanemaarek, in addition to the practice exams by Jon Bonso.
At this point, I can’t say anything that hasn’t been said already about how helpful they are. It’s a great combination of learning material, I appreciate the instructor’s work, and the community’s help in this sub.
Review:
Throughout the course I noted down the important points, and used the course slides as a reference in the first review iteration.
Before resorting to Udemy’s practice exams, I purchased a practice exam from another website, which I regret (not to defame the other vendor, I would simply recommend Udemy).
Udemy’s practice exams were incredible, in that they made me aware of the points I hadn’t understood clearly. After each exam, I would go through both the incorrect answers and the questions I had marked for review, write down the topic for review, and read the explanation thoroughly. The explanations point to the respective documentation in AWS, which is a recommended read, especially if you don’t feel confident with the service.
What I want to note is that I didn’t get satisfying marks on the first go at the practice exams (I got an average of ~70%).
Throughout the 6 practice exams, I aggregated a long list of topics to review, went back to the course slides and practice-exams explanations, in addition to the AWS documentation for the respective service.
On the second go I averaged 85%. The second attempt at the exams was important as a confidence boost, as I made sure I understood the services more clearly.
The take away:
Don’t feel disappointed if you get bad results at your practice-exams. Make sure to review the topics and give it another shot.
The AWS documentation is your friend! It is very clear and concise. My only regret is not having referenced the documentation enough after learning new services.
The exam:
I scheduled the exam using PSI.
I was very confident going into the exam. But going through such an exam environment for the first time made me feel under pressure. Partly because I didn’t feel comfortable being monitored (I was afraid of being disqualified if I moved or covered my mouth), but mostly because there was a lot at stake on my side, and I had to pass it on the first go.
The questions were harder than expected, but I tried to analyze the questions more and eliminate the invalid answers.
I was very nervous and kept reviewing flagged questions up to the last minute. Luckily, I pulled through.
The take away:
The proctors are friendly; just make sure you feel comfortable in the exam place, and use the practice exams to prepare for the actual exam’s environment. That includes sitting in a straight posture, not talking/whispering, and not looking away.
Make sure to organize the time dedicated to each question well, and don’t let yourself get distracted by being monitored like I did.
Don’t skip the question that you are not sure of. Try to select the most probable answer, then flag the question. This will make the very-stressful, last-minute review easier.
You have been engaged by a company to design and lead a migration to an AWS environment. The team is concerned about the capabilities of the new environment, especially when it comes to high availability and cost-effectiveness. The design calls for about 20 instances (c3.2xlarge) pulling jobs/messages from SQS. Network traffic per instance is estimated to be around 500 Mbps at the beginning and end of each job. Which configuration should you plan on deploying?
Spread the instances over multiple AZs to minimize the traffic concentration and maximize fault tolerance. With a multi-AZ configuration, an additional reliability point is scored, as the entire Availability Zone itself is ruled out as a single point of failure. This ensures high availability. Wherever possible, use simple solutions such as spreading the load out rather than expensive high-tech solutions.
To save money, you quickly stored some data in one of the attached volumes of an EC2 instance and stopped it for the weekend. When you returned on Monday and restarted your instance, you discovered that your data was gone. Why might that be?
The volume was ephemeral, block-level storage. Data on an instance store volume is lost if an instance is stopped.
The most likely answer is that the EC2 instance had an instance store volume attached to it. Instance store volumes are ephemeral, meaning that data in attached instance store volumes is lost if the instance stops.
Reference: Instance store lifetime
Your company likes the idea of storing files on AWS. However, low-latency service of the last few days of files is important to customer service. Which Storage Gateway configuration would you use to achieve both of these ends?
A file gateway simplifies file storage in Amazon S3, integrates to existing applications through industry-standard file system protocols, and provides a cost-effective alternative to on-premises storage. It also provides low-latency access to data through transparent local caching.
Cached volumes allow you to store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data.
You’ve been commissioned to develop a high-availability application with a stateless web tier. Identify the most cost-effective means of reaching this end.
Use an Elastic Load Balancer, a multi-AZ deployment of an Auto-Scaling group of EC2 Spot instances (primary) running in tandem with an Auto-Scaling group of EC2 On-Demand instances (secondary), and DynamoDB.
With proper scripting and scaling policies, running EC2 On-Demand instances behind the Spot instances will deliver the most cost-effective solution because On-Demand instances will only spin up if the Spot instances are not available. DynamoDB lends itself to supporting stateless web/app installations better than RDS.
You are building a NAT Instance in an m3.medium using the AWS Linux2 distro with amazon-linux-extras installed. Which of the following do you need to set?
Ensure that “Source/Destination Checks” is disabled on the NAT instance. With a NAT instance, the most common oversight is forgetting to disable Source/Destination Checks. Note: This is a legacy topic and while it may appear on the AWS exam it will only do so infrequently.
You are reviewing Change Control requests and you note that there is a proposed change designed to reduce errors due to SQS Eventual Consistency by updating the “DelaySeconds” attribute. What does this mean?
When a new message is added to the SQS queue, it will be hidden from consumer instances for a fixed period.
Delay queues let you postpone the delivery of new messages to a queue for a number of seconds, for example, when your consumer application needs additional time to process messages. If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes. To set delay seconds on individual messages, rather than on an entire queue, use message timers to allow Amazon SQS to use the message timer’s DelaySeconds value instead of the delay queue’s DelaySeconds value. Reference: Amazon SQS delay queues.
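As a quick illustration, here is a minimal boto3 sketch (the queue name and message body are hypothetical) showing a queue-level DelaySeconds setting and a per-message timer that overrides it:

```python
import boto3

sqs = boto3.client("sqs")

# Create a queue whose new messages stay hidden from consumers for 30 seconds.
queue = sqs.create_queue(
    QueueName="orders-delay-queue",        # hypothetical queue name
    Attributes={"DelaySeconds": "30"},     # 0 (default) up to 900 seconds (15 minutes)
)

# A message timer on an individual message overrides the queue-level delay.
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody="process order 42",
    DelaySeconds=120,
)
```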
Amazon SQS keeps track of all tasks and events in an application: True or False?
False. Amazon SWF (not Amazon SQS) keeps track of all tasks and events in an application. Amazon SQS requires you to implement your own application-level tracking, especially if your application uses multiple queues. Amazon SWF FAQs.
You work for a company, and you need to protect your data stored on S3 from accidental deletion. Which actions might you take to achieve this?
Enable versioning on the bucket and protect the objects by configuring MFA-protected API access.
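A minimal boto3 sketch of that answer, assuming a hypothetical bucket name and MFA device ARN (note that MFA Delete can only be enabled using the bucket owner’s root credentials):

```python
import boto3

s3 = boto3.client("s3")

# Turn on versioning so overwritten/deleted objects are kept as prior versions,
# and require an MFA token for permanent deletes (MFA Delete).
s3.put_bucket_versioning(
    Bucket="my-important-bucket",  # hypothetical bucket
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",  # device ARN + current code
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```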
Your Security Manager has hired a security contractor to audit your network and firewall configurations. The consultant doesn’t have access to an AWS account. You need to provide the required access for the auditing tasks, and answer a question about login details for the official AWS firewall appliance. Which actions might you do?
AWS has removed the Firewall appliance from the hub of the network and implemented the firewall functionality as stateful Security Groups, and stateless subnet NACLs. This is not a new concept in networking, but rarely implemented at this scale.
Create an IAM user for the auditor and explain that the firewall functionality is implemented as stateful Security Groups, and stateless subnet NACLs
Amazon ElastiCache can fulfill a number of roles. Which operations can be implemented using ElastiCache for Redis?
Amazon ElastiCache offers a fully managed Memcached and Redis service. Although the name only suggests caching functionality, the Redis service in particular can offer a number of operations such as Pub/Sub, Sorted Sets and an In-Memory Data Store. However, Amazon ElastiCache for Redis doesn’t support multithreaded architectures.
You have been asked to deploy an application on a small number of EC2 instances. The application must be placed across multiple Availability Zones and should also minimize the chance of underlying hardware failure. Which actions would provide this solution?
Deploy the EC2 servers in a Spread Placement Group.
Spread Placement Groups are recommended for applications that have a small number of critical instances which need to be kept separate from each other. Launching instances in a Spread Placement Group reduces the risk of simultaneous failures that might occur when instances share the same underlying hardware. Spread Placement Groups provide access to distinct hardware, and are therefore suitable for mixing instance types or launching instances over time. In this case, deploying the EC2 instances in a Spread Placement Group is the only correct option.
You manage a NodeJS messaging application that lives on a cluster of EC2 instances. Your website occasionally experiences brief, strong, and entirely unpredictable spikes in traffic that overwhelm your EC2 instances’ resources and freeze the application. As a result, you’re losing recently submitted messages from end-users. You use Auto Scaling to deploy additional resources to handle the load during spikes, but the new instances don’t spin-up fast enough to prevent the existing application servers from freezing. Can you provide the most cost-effective solution in preventing the loss of recently submitted messages?
Use Amazon SQS to decouple the application components and keep the messages in queue until the extra Auto-Scaling instances are available.
Neither increasing the size of your EC2 instances nor maintaining additional EC2 instances is cost-effective, and pre-warming an ELB signifies that these spikes in traffic are predictable. The cost-effective solution to the unpredictable spike in traffic is to use SQS to decouple the application components.
True statements on S3 URL styles
Virtual-host-style URLs (such as: https://bucket-name.s3.Region.amazonaws.com/key name) are supported by AWS.
Path-Style URLs (such as https://s3.Region.amazonaws.com/bucket-name/key name) are supported by AWS.
You run an automobile reselling company that has a popular online store on AWS. The application sits behind an Auto Scaling group and requires new instances of the Auto Scaling group to identify their public and private IP addresses. How can you achieve this?
Using a Curl or Get Command to get the latest meta-data from http://169.254.169.254/latest/meta-data/
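For example, from within the instance itself you can read those values from the instance metadata service. This is a sketch using IMDSv1-style unauthenticated requests; IMDSv2 additionally requires a session token:

```python
import urllib.request

METADATA = "http://169.254.169.254/latest/meta-data"

def get_metadata(path: str) -> str:
    # The metadata endpoint is only reachable from inside the instance.
    with urllib.request.urlopen(f"{METADATA}/{path}", timeout=2) as resp:
        return resp.read().decode()

print("public:", get_metadata("public-ipv4"))
print("private:", get_metadata("local-ipv4"))
```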
What data formats are used to create CloudFormation templates?
JSON and YAML
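For instance, a minimal YAML template (a JSON body would work the same way) can be launched with boto3; the stack and resource names here are hypothetical:

```python
import boto3

# Smallest useful template: one S3 bucket, declared in YAML.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="demo-stack", TemplateBody=TEMPLATE)
```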
You have launched a NAT instance into a public subnet, and you have configured all relevant security groups, network ACLs, and routing policies to allow this NAT to function. However, EC2 instances in the private subnet still cannot communicate out to the internet. What troubleshooting steps should you take to resolve this issue?
Disable the Source/Destination Check on your NAT instance.
A NAT instance sends and retrieves traffic on behalf of instances in a private subnet. As a result, source/destination checks on the NAT instance must be disabled to allow the sending and receiving traffic for the private instances. Route 53 resolves DNS names, so it would not help here. Traffic that is originating from your NAT instance will not pass through an ELB. Instead, it is sent directly from the public IP address of the NAT Instance out to the Internet.
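Disabling the check is a one-line attribute change; a boto3 sketch with a hypothetical instance ID:

```python
import boto3

ec2 = boto3.client("ec2")

# Let the NAT instance forward traffic that is neither sourced from
# nor destined to the instance itself.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",   # hypothetical NAT instance
    SourceDestCheck={"Value": False},
)
```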
You need a storage service that delivers the lowest-latency access to data for a database running on a single EC2 instance. Which of the following AWS storage services is suitable for this use case?
Amazon EBS is a block level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.
What are DynamoDB use cases?
Use cases include storing JSON data, BLOB data and storing web session data.
You are reviewing Change Control requests, and you note that there is a change designed to reduce costs by updating the Amazon SQS “WaitTimeSeconds” attribute. What does this mean?
When the consumer instance polls for new work, the SQS service will allow it to wait a certain time for one or more messages to be available before closing the connection.
Poor timing of SQS processes can significantly impact the cost effectiveness of the solution.
Long polling helps reduce the cost of using Amazon SQS by eliminating the number of empty responses (when there are no messages available for a ReceiveMessage request) and false empty responses (when messages are available but aren’t included in a response).
Reference: Here
You have been asked to decouple an application by utilizing SQS. The application dictates that messages on the queue CAN be delivered more than once, but must be delivered in the order they have arrived while reducing the number of empty responses. Which option is most suitable?
Configure a FIFO SQS queue and enable long polling.
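A rough boto3 sketch of that setup (the queue name is hypothetical): a FIFO queue preserves arrival order, and setting ReceiveMessageWaitTimeSeconds turns on long polling for consumers by default:

```python
import boto3

sqs = boto3.client("sqs")

queue = sqs.create_queue(
    QueueName="orders.fifo",                    # FIFO queue names must end in .fifo
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
        "ReceiveMessageWaitTimeSeconds": "20",  # long polling; 20 seconds is the maximum
    },
)

# Consumers can also request long polling explicitly per call.
resp = sqs.receive_message(QueueUrl=queue["QueueUrl"], WaitTimeSeconds=20)
```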
You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP. However, when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect?
Immediately.
You need to restrict access to an S3 bucket. Which methods can you use to do so?
There are two ways of securing S3, using either Access Control Lists (Permissions) or by using bucket Policies.
You are reviewing Change Control requests, and you note that there is a change designed to reduce wasted CPU cycles by increasing the value of your Amazon SQS “VisibilityTimeout” attribute. What does this mean?
When a consumer instance retrieves a message, that message will be hidden from other consumer instances for a fixed period.
Poor timing of SQS processes can significantly impact the cost effectiveness of the solution. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.
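A short boto3 sketch (the queue URL is hypothetical) showing both the queue-level setting and the per-message override a slow consumer can use:

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # hypothetical

# Raise the default visibility timeout for every message on the queue.
sqs.set_queue_attributes(QueueUrl=QUEUE_URL, Attributes={"VisibilityTimeout": "120"})

# Or extend the timeout for a single in-flight message while it is being processed.
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    sqs.change_message_visibility(
        QueueUrl=QUEUE_URL,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=300,
    )
```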
With EBS, I can ____.
Create an encrypted volume from a snapshot of another encrypted volume.
Create an encrypted snapshot from an unencrypted snapshot by creating an encrypted copy of the unencrypted snapshot.
You can create an encrypted volume from a snapshot of another encrypted volume.
Although there is no direct way to encrypt an existing unencrypted volume or snapshot, you can encrypt them by creating either a volume or a snapshot. Reference: Encrypting unencrypted resources.
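For example, the “encrypted copy” route looks roughly like this in boto3 (snapshot ID, Region, and AZ are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Make an encrypted copy of an unencrypted snapshot (uses the default EBS KMS key
# unless KmsKeyId is supplied), then restore an encrypted volume from the copy.
copy = ec2.copy_snapshot(
    SourceSnapshotId="snap-0123456789abcdef0",   # hypothetical unencrypted snapshot
    SourceRegion="us-east-1",
    Encrypted=True,
)

ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])
ec2.create_volume(SnapshotId=copy["SnapshotId"], AvailabilityZone="us-east-1a")
```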
Following advice from your consultant, you have configured your VPC to use dedicated hosting tenancy. Your VPC has an Amazon EC2 Auto Scaling group designed to launch or terminate Amazon EC2 instances on a regular basis, in order to meet workload demands. A subsequent change to your application has rendered the performance gains from dedicated tenancy superfluous, and you would now like to recoup some of these greater costs. How do you revert the instance tenancy attribute of the VPC to default for newly launched EC2 instances?
Modify the instance tenancy attribute of your VPC from dedicated to default using the AWS CLI, an AWS SDK, or the Amazon EC2 API.
You can change the instance tenancy attribute of a VPC from dedicated to default. Modifying the instance tenancy of the VPC does not affect the tenancy of any existing instances in the VPC. The next time you launch an instance in the VPC, it has a tenancy of default, unless you specify otherwise during launch. You can modify the instance tenancy attribute of a VPC using the AWS CLI, an AWS SDK, or the Amazon EC2 API only. Reference: Change the tenancy of a VPC.
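A boto3 sketch of that change (the VPC ID is hypothetical); existing dedicated instances keep their tenancy:

```python
import boto3

ec2 = boto3.client("ec2")

# Only new launches are affected; existing dedicated instances are left as-is.
ec2.modify_vpc_tenancy(
    VpcId="vpc-0123456789abcdef0",   # hypothetical VPC
    InstanceTenancy="default",       # "default" is the only value this call accepts
)
```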
How do DynamoDB indices work?
What is Amazon DynamoDB?
Amazon DynamoDB is a fast, fully managed NoSQL database service. DynamoDB makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic.
DynamoDB is used to create tables that store and retrieve any amount of data.
- DynamoDB uses SSD’s to store data.
- Provides automatic and synchronous data replication across multiple Availability Zones.
- Maximum item size is 400KB
- Supports cross-region replication.
DynamoDB Core Concepts:
- The fundamental concepts around DynamoDB are:
- Tables-which is a collection of data.
- Items- They are the individual entries in the table.
- Attributes- These are the properties associated with the entries.
- Primary Keys.
- Secondary Indexes.
- DynamoDB streams.
Secondary Indexes:
- The Secondary index is a data structure that contains a subset of attributes from the table, along with an alternate key that supports Query operations.
- Every secondary index is related to only one table, from which it obtains data. This is called the base table of the index.
- When you create an index, you define an alternate key for the index (a partition key and a sort key). DynamoDB copies the attributes you specify into the index, including the primary key attributes derived from the table.
- After this is done, you use the query/scan in the same way as you would use a query on a table.
Every secondary index is automatically maintained by DynamoDB.
DynamoDB Indexes: DynamoDB supports two indexes:
- Local Secondary Index (LSI)- The index has the same partition key as the base table but a different sort key,
- Global Secondary Index (GSI)- The index has a partition key and sort key that can be different from those on the base table.
When creating more than one table with secondary indexes, you must do it sequentially: create the tables one after another. When you create the first table, wait for it to become active.
Once that table is active, create another table and wait for it to become active, and so on. If you try to create one or more such tables concurrently, DynamoDB will return a LimitExceededException.
You must specify the following, for every secondary index:
- Type- You must mention the type of index you are creating whether it is a Global Secondary Index or a Local Secondary index.
- Name- You must specify a name for the index. The rules for naming indexes are the same as those for the table it is associated with. You can use the same name for indexes that are associated with different base tables.
- Key- The key schema for the index states that every key attribute in the index must be a top-level attribute of type string, number, or binary. Other data types, including documents and sets, are not allowed. Other requirements depend on the type of index you choose.
- For GSI- The partition key can be any scalar attribute of the base table.
Sort key is optional and this too can be any scalar attribute of the base table.
- For LSI- The partition key must be the same as the base table’s partition key.
The sort key must be a non-key table attribute.
- Additional Attributes: The additional attributes are in addition to the tables key attributes. They are automatically projected into every index. You can use attributes for any data type, including scalars, documents and sets.
- Throughput: The throughput settings for the index if necessary are:
- GSI: Specify read and write capacity unit settings. These provisioned throughput settings are independent of the base table’s settings.
- LSI- You do not need to specify read and write capacity unit settings. Any read and write operations on the local secondary index are drawn from the provisioned throughput settings of the base table.
You can create up to 5 global and 5 local secondary indexes per table. When a table is deleted, all the indexes associated with that table are also deleted.
You can use the Scan or Query operation to fetch the data from the table. DynamoDB will give you the results in descending or ascending order.
(Source)
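To make the concepts above concrete, here is a hedged boto3 sketch (table, attribute, and index names are hypothetical) that creates a base table plus one global secondary index so items can also be queried by an alternate key:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Base table keyed on (customer_id, order_id) plus a GSI keyed on (status, order_date),
# so orders can be queried by status without scanning the base table.
dynamodb.create_table(
    TableName="Orders",                                  # hypothetical table
    AttributeDefinitions=[
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_id", "AttributeType": "S"},
        {"AttributeName": "status", "AttributeType": "S"},
        {"AttributeName": "order_date", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "customer_id", "KeyType": "HASH"},
        {"AttributeName": "order_id", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[
        {
            "IndexName": "status-date-index",
            "KeySchema": [
                {"AttributeName": "status", "KeyType": "HASH"},
                {"AttributeName": "order_date", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
            "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        }
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```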
What is NLB in AWS?
An NLB is a Network Load Balancer.
Network Load Balancer Overview: A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration. When you enable an Availability Zone for the load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. By default, each load balancer node distributes traffic across the registered targets in its Availability Zone only. If you enable cross-zone load balancing, each load balancer node distributes traffic across the registered targets in all enabled Availability Zones. It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part. The Network Load Balancer is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:
- Static IP Addresses – Each Network Load Balancer provides a single IP address for each Availability Zone in its purview. If you have targets in us-west-2a and other targets in us-west-2c, NLB will create and manage two IP addresses (one per AZ); connections to that IP address will spread traffic across the instances in all the VPC subnets in the AZ. You can also specify an existing Elastic IP for each AZ for even greater control. With full control over your IP addresses, a Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.
- Zonality – The IP-per-AZ feature reduces latency with improved performance, improves availability through isolation and fault tolerance, and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single AZ while still providing automatic failover should those targets become unavailable.
- Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.
- Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.
- Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.
How many types of VPC endpoints are available?
There are two types of VPC endpoints: (1) interface endpoints and (2) gateway endpoints. Interface endpoints enable connectivity to services over AWS PrivateLink.
What is the purpose of key pair with Amazon AWS EC2?
Amazon AWS uses a key pair to encrypt and decrypt login information.
A sender uses a public key to encrypt data, which its receiver then decrypts using another private key. These two keys, public and private, are known as a key pair.
You need a key pair to be able to connect to your instances. The way this works on Linux and Windows instances is different.
First, when you launch a new instance, you assign a key pair to it. Then, when you log in to it, you use the private key.
The difference between Linux and Windows instances is that Linux instances do not have a password already set and you must use the key pair to log in to Linux instances. On the other hand, on Windows instances, you need the key pair to decrypt the administrator password. Using the decrypted password, you can use RDP and then connect to your Windows instance.
Amazon EC2 stores only the public key, and you can either generate it inside Amazon EC2 or you can import it. Since the private key is not stored by Amazon, it’s advisable to store it in a secure place as anyone who has this private key can log in on your behalf.
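A small boto3 sketch (the key name and file path are hypothetical); EC2 returns the private key material only once, at creation time:

```python
import os
import boto3

ec2 = boto3.client("ec2")

resp = ec2.create_key_pair(KeyName="my-dev-keypair")   # hypothetical key name

# Persist the private key locally and restrict its permissions;
# Amazon EC2 only keeps the public half.
with open("my-dev-keypair.pem", "w") as f:
    f.write(resp["KeyMaterial"])
os.chmod("my-dev-keypair.pem", 0o400)
```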
What is the difference between a VPC SG and an EC2 security group?
There are two types of Security Groups based on where you launch your instance. When you launch your instance on EC2-Classic, you have to specify an EC2-Classic Security Group. On the other hand, when you launch an instance in a VPC, you will have to specify an EC2-VPC Security Group. Now that we have a clear understanding of what we are comparing, let’s see their main differences:
EC2-Classic Security Groups:
- When the instance is launched, you can only choose a Security Group that resides in the same region as the instance.
- You cannot change the Security Group after the instance has launched (you may edit the rules)
- They are not IPv6 capable
EC2-VPC Security Groups:
- You can change the Security Group after the instance has launched
- They are IPv6 capable
Generally speaking, they are not interchangeable and there are more capabilities on the EC2-VPC SGs. You may read more about them on Differences Between Security Groups for EC2-Classic and EC2-VPC
Why do AWS DynamoDB and S3 use gateway VPC endpoints rather than interface endpoints?
I think this is historical in nature. S3 and DynamoDB were the first services to support VPC endpoints. The release of those VPC endpoint features pre-dates two important services that subsequently enabled interface endpoints: Network Load Balancer and AWS PrivateLink.
What is the best way to develop AWS Lambda functions locally on your laptop?
- Separate the Lambda handler from your core logic.
- Take advantage of execution context reuse to improve the performance of your function. Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations processed by the same instance of your function can reuse these resources. This saves execution time and avoids potential data leaks across invocations; don’t use the execution context to store user data, events, or other information with security implications. If your function relies on a mutable state that can’t be stored in memory within the handler, consider creating a separate function or separate versions of a function for each user.
- Use AWS Lambda environment variables to pass operational parameters to your function. For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable.
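A minimal Python Lambda sketch of those tips (the environment variable, bucket, and function names are hypothetical): the SDK client is created once per execution environment, the handler stays thin, and the bucket name comes from an environment variable rather than being hard-coded:

```python
import json
import os

import boto3

# Created once per execution environment; warm invocations reuse it.
s3 = boto3.client("s3")
BUCKET = os.environ["OUTPUT_BUCKET"]   # hypothetical operational parameter

def save_report(order_id: str, payload: dict) -> None:
    """Core logic, kept separate from the handler so it can be tested locally."""
    s3.put_object(Bucket=BUCKET, Key=f"reports/{order_id}.json", Body=json.dumps(payload))

def handler(event, context):
    # Thin handler: parse the event, then delegate to the core logic.
    save_report(event["order_id"], event["payload"])
    return {"status": "ok"}
```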
How can I see if/when someone logs into my AWS Windows instance?
You can use VPC Flow Logs. The steps would be the following:
- Enable VPC Flow Logs for the VPC your EC2 instance lives in. You can do this from the VPC console
- Having VPC Flow Logs enabled will create a CloudWatch Logs log group
- Find the Elastic Network Interface assigned to your EC2 instance. Also, get the private IP of your EC2 instance. You can do this from the EC2 console.
- Find the CloudWatch Logs log stream for that ENI.
- Search the log stream for records where your Windows instance’s IP is the destination IP, make sure the port is the one you’re looking for. You’ll see records that tell you if someone has been connecting to your EC2 instance. For example, there are bytes transferred, status=ACCEPT, log-status=OK. You will also know the source IP that connected to your instance.
I recommend using CloudWatch Logs Metric Filters, so you don’t have to do all this manually. Metric Filters will find the patterns I described in your CloudWatch Logs entries and will publish a CloudWatch metric. Then you can trigger an alarm that notifies you when someone logs in to your instance.
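As a rough sketch of that metric-filter idea (the log group name, metric names, and the flow-log field layout are assumptions based on the default flow log format), counting accepted flows to the RDP port:

```python
import boto3

logs = boto3.client("logs")

# Count ACCEPTed TCP flows to port 3389 (RDP) in the flow logs log group;
# a CloudWatch alarm on this metric can then notify you of logins.
logs.put_metric_filter(
    logGroupName="vpc-flow-logs",              # hypothetical log group
    filterName="rdp-connections-to-instance",
    filterPattern='[version, account, eni, source, destination, srcport, '
                  'destport="3389", protocol="6", packets, bytes, '
                  'windowstart, windowend, action="ACCEPT", flowlogstatus]',
    metricTransformations=[{
        "metricName": "RdpConnections",
        "metricNamespace": "VpcFlowLogs",
        "metricValue": "1",
    }],
)
```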
Here are more details from the AWS Official Blog and the AWS documentation for VPC Flow Logs records:
VPC Flow Logs – Log and View Network Traffic Flows
Also, there are 3rd-party tools that simplify all these steps for you and give you very nice visibility and alerts into what’s happening in your AWS network resources. I’ve tried Observable Networks and it’s great: Observable Networks
While enabling ports on AWS NAT gateway when you allow inbound traffic on port 80/443 , do you need to allow outbound traffic on the same ports or is it sufficient to allow outbound traffic on ephemeral ports (1024-65535)?
Typically outbound traffic is not blocked by NAT on any port, so you would not need to explicitly allow those, since they should already be allowed. Your firewall generally would have a rule to allow return traffic that was initiated outbound from inside your office.
Is AWS traffic between EC2 nodes in the same availability zone secure with respect to sending sensitive data?
According to Amazon’s documentation, it is impossible for one instance to sniff traffic bound for a different instance.
https://d0.awsstatic.com/whitepapers/aws-security-whitepaper.pdf
- Packet sniffing by other tenants. It is not possible for a virtual instance running in promiscuous mode to receive or “sniff” traffic that is intended for a different virtual instance. While you can place your interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer located on the same physical host cannot listen to each other’s traffic. Attacks such as ARP cache poisoning do not work within Amazon EC2 and Amazon VPC. While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another’s data, as a standard practice you should encrypt sensitive traffic.
But as you can see, they still recommend that you should maintain encryption inside your network. We have taken the approach of terminating SSL at the external interface of the ELB, but then initiating SSL from the ELB to our back-end servers, and even further, to our (RDS) databases. It’s probably belt-and-suspenders, but in my industry it’s needed. Heck, we have some interfaces that require HTTPS and a VPN.
What’s the use case for S3 Pre-signed URL for uploading objects?
I get the use case of allowing access to private/premium content in S3 using a pre-signed URL that can be used to view or download the file until the expiration time set. But what’s a real-life scenario in which a web app would need to generate a URL to give users temporary credentials to upload an object? Can’t the same be done by using the SDK and exposing a REST API at the backend?
I’m asking this since I want to build a POC for this functionality in Java, but I’m struggling to find a real-world use case for it.
Pre-signed URLs are used to provide short-term access to a private object in your S3 bucket. They work by appending an AWS Access Key, expiration time, and Sigv4 signature as query parameters to the S3 object. There are two common use cases when you may want to use them:
- Simple, occasional sharing of private files.
- Frequent, programmatic access to view or upload a file in an application.
Imagine you may want to share a confidential presentation with a business partner, or you want to allow a friend to download a video file you’re storing in your S3 bucket. In both situations, you could generate a URL, and share it to allow the recipient short-term access.
There are a couple of different approaches for generating these URLs in an ad-hoc, one-off fashion, including:
- Using the AWS Tools for Powershell.
- Using the AWS CLI.
Source: Here
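For the upload case, the backend only hands out a short-lived URL and the client PUTs the bytes straight to S3; here is a boto3 sketch with hypothetical bucket/key names (the `requests` call stands in for the browser or mobile client):

```python
import boto3
import requests  # stands in for the browser/mobile client

s3 = boto3.client("s3")

# Backend: generate a short-lived URL that allows one object to be uploaded.
upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "user-uploads-bucket", "Key": "avatars/user-42.png"},  # hypothetical
    ExpiresIn=900,   # valid for 15 minutes
)

# Client: plain HTTP PUT, no AWS credentials required.
with open("avatar.png", "rb") as f:
    requests.put(upload_url, data=f)
```

This keeps large uploads off your application servers while the bucket itself stays private.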
AWS:REINVENT 2022 (Tips, Latest Tech, Surviving Vegas, Parties)

First time going there; I’d like to know in advance the dos and don’ts from people with previous experience.
Pre-plan as much as you can, but don’t sweat it in the moment if it doesn’t work out. The experience and networking are as, if not more, valuable than the sessions.
Deliberately know where your exits are. Most of Vegas is designed to keep you inside — when you’re burned out from the crowds and knowledge deluge is not the time to be trying to figure out how the hell you get out of wherever you are.
Study maps of how the properties interconnect before you go. You can get a lot of places without ever going outside. Be able to make a deliberate decision of what route to take. Same thing for the outdoor escalators and pedestrian bridges — they’re not necessarily intuitive, but if you know where they go, they’re a life saver running between events.
Drink more water and eat less food than you think you need to. Your mind and body will thank you.
Be prepared for all of the other Vegasisms if you ever plan on leaving the con boundaries (like to walk down the street to another venue) — you will likely be propositioned by mostly naked showgirls, see overt advertisement for or even be directly propositioned by prostitutes and their business associates, witness some pretty awful homelessness, and be “accidentally bumped into” pretty regularly by amateur pickpockets.
Switching gears between “work/AWS” and “surviving Vegas” multiple times a day can be seriously mentally taxing. I haven’t found any way to prevent that, just know it’s going to happen.
Take a burner laptop and not your production access work machine. You don’t want to accidentally crater your production environment because you gave the wrong cred as part of a lab.
There are helpful staffers everywhere around the con — don’t be afraid to leverage them — they tend to be much better informed than the ushers/directors/crowd wranglers at other cons.
Plan on getting Covid or at very least Con Crud. If you’re not used to being around a million sick people in the desert, it’s going to take its toll on your body one way or another.
Don’t set morning alarms. If your body needs to sleep in, that was more important than whatever morning session you wanted to catch. Watch the recording later on your own time and enjoy your mental clarity for the rest of the day.
Wander the expo floor when you’re bored to get a big picture of the ecosystem, but don’t expect anything too deep. The partner booths are all fun and games and don’t necessarily align with reality. Hang out at the “Ask AWS” booths — people ask some fun interesting questions and AWS TAMs/SAs and the other folks staffing the booth tend not to suck.
Listen to The Killers / Brandon Flowers when walking around outside — he grew up in Las Vegas and a lot of his music has subtle (and not so subtle) hints on how to survive and thrive there.
I’m sure there’s more, but that’s what I can think of off the top of my head.
Source: Many years of attending re:Invent as AWS staff, AWS partner, and AWS customer.
This is more Vegas-advice than pure Re:Invent advice, but if you’re going to be in the city for more than 3 days try to either:
Find a way off/out of the strip for an afternoon. A hike out at Red Rocks is a great option.
Get a pass to the spa at your hotel so that you can escape the casino/event/hotel room trap. It’s amazing how shitty you feel without realizing it until you do a quick workout and steam/sauna/ice bath routine.
I’ve also seen a whole variety of issues that people run into during hands-on workshops where for one reason or another their corporate laptop/email/security won’t let them sign up and log into a new AWS account. Make sure you don’t have any restrictions there, as that’ll be a big hassle. The workshops have been some of the best and most memorable sessions for me.
More tips:
Sign up for all the parties! Try to get your sessions booked too, it’s a pain to be on waitlists. Don’t do one session at Venetian followed by a session at MGM. You’ll never make it in time. Try to group your sessions by location/day.
Use reInventParties.com for that.
Check the Guides there as well. reInventGuides.com.
Start here: http://reInventParties.com
We catalog all the parties, keep a list of the latest (and older) guides, the Expo floor plan, drawings, etc. On Twitter as well @reInventParties
Hidden gem if you’re into that sort of thing, the Pinball Museum is a great place to hang for a bit with some friends.
Bring sunscreen, a water bottle you like, really comfortable shoes, and lip balm.
Get at least one cert if you don’t already have one. The Cert lounge is a wonderful place to chill and the swag there is top tier.
Check the partner parties, they have good food and good swag.
Register with an alt email address (something like yourname+reinvent@domain.com) so you can set an email rule for all the spam.
If your workplace has an SA, coordinate with them for schedules and info. They will also curate calendars for you and get you insider info if you want them to.
Prioritize workshops and chalk talks. Partner talks are long advertisements, take them with a grain of salt.
Even if you are an introvert, network. There are folks there with valuable insights and skills. You are one of those.
Don’t underestimate the distance between venues. Getting from MGM to Venetian can take forever.
Bring very comfortable walking shoes and be prepared to spend a LOT of time on your feet and walking 25-30,000 steps a day. All of the other comments and ideas are awesome. The most important thing to remember, especially for your very first year, is to have fun. Don’t just sit in breakouts all day and then go back to your hotel. Go to the after dark events. Don’t get too hung up on if you don’t make it to all the breakout sessions you want to go to. Let your first year be a learning curve on how to experience and enjoy re:Invent. It is the most epic week in Vegas you will ever experience. Maybe we will bump into each other. Love meeting new people.
FROM AWS:REINVENT 2021:
AWS on Air
Peter DeSantis Keynote
Join Peter DeSantis, Senior Vice President, Utility Computing and Apps, to learn how AWS has optimized its cloud infrastructure to run some of the world’s most demanding workloads and give your business a competitive edge.
Werner Vogels Keynote
Join Dr. Werner Vogels, CTO, Amazon.com, as he goes behind the scenes to show how Amazon is solving today’s hardest technology problems. Based on his experience working with some of the largest and most successful applications in the world, Dr. Vogels shares his insights on building truly resilient architectures and what that means for the future of software development.
Accelerating innovation with AI and ML
Applied artificial intelligence (AI) solutions, such as contact center intelligence (CCI), intelligent document processing (IDP), and media intelligence (MI), have had a significant market and business impact for customers, partners, and AWS. This session details how partners can collaborate with AWS to differentiate their products and solutions with AI and machine learning (ML). It also shares partner and customer success stories and discusses opportunities to help customers who are looking for turnkey solutions.
Application integration patterns for microservices
An implication of applying the microservices architectural style is that a lot of communication between components is done over the network. In order to achieve the full capabilities of microservices, this communication needs to happen in a loosely coupled manner. In this session, explore some fundamental application integration patterns based on messaging and connect them to real-world use cases in a microservices scenario. Also, learn some of the benefits that asynchronous messaging can have over REST APIs for communication between microservices.
Maintain application availability and performance with Amazon CloudWatch
Avoiding unexpected user behavior and maintaining reliable performance is crucial. This session is for application developers who want to learn how to maintain application availability and performance to improve the end user experience. Also, discover the latest on Amazon CloudWatch.
How Amazon.com transforms customer experiences through AI/ML
Amazon is transforming customer experiences through the practical application of AI and machine learning (ML) at scale. This session is for senior business and technology decision-makers who want to understand Amazon.com’s approach to launching and scaling ML-enabled innovations in its core business operations and toward new customer opportunities. See specific examples from various Amazon businesses to learn how Amazon applies AI/ML to shape its customer experience while improving efficiency, increasing speed, and lowering cost. Also hear the lessons the Amazon teams have learned from the cultural, process, and technical aspects of building and scaling ML capabilities across the organization.
Accelerating data-led migrations
Data has become a strategic asset. Customers of all sizes are moving data to the cloud to gain operational efficiencies and fuel innovation. This session details how partners can create repeatable and scalable solutions to help their customers derive value from their data, win new customers, and grow their business. It also discusses how to drive partner-led data migrations using AWS services, tools, resources, and programs, such as the AWS Migration Acceleration Program (MAP). Also, this session shares customer success stories from partners who have used MAP and other resources to help customers migrate to AWS and improve business outcomes.
Accelerate front-end web and mobile development with AWS Amplify
User-facing web and mobile applications are the primary touchpoint between organizations and their customers. To meet the ever-rising bar for customer experience, developers must deliver high-quality apps with both foundational and differentiating features. AWS Amplify helps front-end web and mobile developers build faster front to back. In this session, review Amplify’s core capabilities like authentication, data, and file storage and explore new capabilities, such as Amplify Geo and extensibility features for easier app customization with AWS services and better integration with existing deployment pipelines. Also learn how customers have been successful using Amplify to innovate in their businesses.
AWS Amplify is a set of tools and services that makes it quick and easy for front-end web and mobile developers to build full-stack applications on AWS
Amplify DataStore provides a programming model for leveraging shared and distributed data without writing additional code for offline and online scenarios, which makes working with distributed, cross-user data just as simple as working with local-only data
AWS AppSync is a managed GraphQL API service
Amazon DynamoDB is a serverless key-value and document database that’s highly scalable
Amazon S3 allows you to store static assets
DevOps revolution
While DevOps has not changed much, the industry has fundamentally transformed over the last decade. Monolithic architectures have evolved into microservices. Containers and serverless have become the default. Applications are distributed on cloud infrastructure across the globe. The technical environment and tooling ecosystem has changed radically from the original conditions in which DevOps was created. So, what’s next? In this session, learn about the next phase of DevOps: a distributed model that emphasizes swift development, observable systems, accountable engineers, and resilient applications.
Innovation Day
Innovation Day is a virtual event that brings together organizations and thought leaders from around the world to share how cloud technology has helped them capture new business opportunities, grow revenue, and solve the big problems facing us today, and in the future. Featured topics include building the first human basecamp on the moon, the next generation F1 car, manufacturing in space, the Climate Pledge from Amazon, and building the city of the future at the foot of Mount Fuji.
Latest AWS Products and Services announced at re:Invent 2021
Graviton3: AWS announced the newest generation of its Arm-based Graviton processors: Graviton3. The company promises that the new chip will be 25 percent faster than the last-generation chips, with 2x faster floating-point performance and a 3x speedup for machine learning workloads. AWS also promises that the new chips will use 60 percent less power.
Trn1 instances (powered by AWS Trainium): Train machine learning models for a wide range of applications
AWS Mainframe Modernization: Cut mainframe migration time by 2/3
AWS Private 5G: Deploy and manage your own private 5G network (Set up and scale a private mobile network in days)
Transactions for Governed Tables in Lake Formation: Automatically manage conflicts and errors
Serverless and On-Demand Analytics for Redshift, EMR, MSK, and Kinesis
Amazon SageMaker Canvas: Create ML predictions without any ML experience or writing any code
AWS IoT TwinMaker: A real-time service that makes it easy to create and use digital twins of real-world systems
Amazon DevOps Guru for RDS: Automatically detect, diagnose, and resolve hard-to-find database issues.
Amazon DynamoDB Standard-Infrequent Access table class: Reduce costs by up to 60%. Maintain the same performance, durability, scaling, and availability as Standard
AWS Database Migration Service Fleet Advisor: Accelerate database migration with automated inventory and migration. This service makes it easier and faster to get your data to the cloud and match it with the correct database service. “DMS Fleet Advisor automatically builds an inventory of your on-prem database and analytics servers by streaming data from on prem to Amazon S3. From there, we take it over. We analyze [the data] to match it with the appropriate AWS data store and then provide customized migration plans.”
Amazon SageMaker Ground Truth Plus: Deliver high-quality training datasets fast, and reduce data labeling costs.
Amazon SageMaker Training Compiler: Accelerate model training by 50%
Amazon SageMaker Inference Recommender: Reduce time to deploy from weeks to hours
Amazon SageMaker Serverless Inference: Lower cost of ownership with pay-per-use pricing
Amazon Kendra Experience Builder: Deploy intelligent search applications powered by Amazon Kendra with a few clicks
Amazon Lex Automated Chatbot Designer: Drastically simplifies bot design with advanced natural language understanding
Amazon SageMaker Studio Lab: No-cost, no-setup access to powerful machine learning technology
AWS Cloud WAN: Build, manage and monitor global wide area networks
AWS Amplify Studio: Visually build complete, feature-rich apps in hours instead of weeks, with full control over the application code.
AWS Carbon Footprint Tool: Track and review the carbon emissions generated by your AWS usage (don’t forget to turn off the lights).
AWS Well-Architected Sustainability Pillar: Learn, measure, and improve your workloads using environmental best practices in cloud computing
AWS re:Post: Get Answers from AWS experts. A Reimagined Q&A Experience for the AWS Community
How do you build something completely new?
From AWS re:Invent 2020:
Automate anything with AWS Systems Manager
You can automate any task that involves interaction with AWS and on-premises resources, including in multi-account and multi-Region environments, with AWS Systems Manager. In this session, learn more about three new Systems Manager launches at re:Invent—Change Manager, Fleet Manager, and Application Manager. In addition, learn how Systems Manager Automation can be used across multiple Regions and accounts, integrate with other AWS services, and extend to on-premises. This session takes a deep dive into how to author a custom runbook using an automation document, and how to execute automation anywhere.
Deliver cloud operations at scale with AWS Managed Services
Learn how you can quickly build scaled AWS operations tooling to meet some of the most complex and compliant operations system requirements.
Turbocharging query execution on Amazon EMR
Learn about the performance improvements made in Amazon EMR for Apache Spark and Presto, giving Amazon EMR one of the fastest runtimes for analytics workloads in the cloud. This session dives deep into how AWS generates smart query plans in the absence of accurate table statistics. It also covers adaptive query execution—a technique to dynamically collect statistics during query execution—and how AWS uses dynamic partition pruning to generate query predicates for speeding up table joins. You also learn about execution improvements such as data prefetching and pruning of nested data types.
Detect machine learning (ML) model drift in production
Explore how state-of-the-art algorithms built into Amazon SageMaker are used to detect declines in machine learning (ML) model quality. One of the big factors that can affect the accuracy of models is the difference in the data used to generate predictions and what was used for training. For example, changing economic conditions could drive new interest rates affecting home purchasing predictions. Amazon SageMaker Model Monitor automatically detects drift in deployed models and provides detailed alerts that help you identify the source of the problem so you can be more confident in your ML applications.
Amazon Lightsail: The easiest way to get started on AWS
Amazon Lightsail is AWS’s simple, virtual private server. In this session, learn more about Lightsail and its newest launches. Lightsail is designed for simple web apps, websites, and dev environments. This session reviews core product features, such as preconfigured blueprints, managed databases, load balancers, networking, and snapshots, and includes a demo of the most recent launches. Attend this session to learn more about how you can get up and running on AWS in the easiest way possible.
Deep dive into AWS Lambda security: Function isolation
This session dives into the security model behind AWS Lambda functions, looking at how you can isolate workloads, build multiple layers of protection, and leverage fine-grained authorization. You learn about the implementation, the open-source Firecracker technology that provides one of the most important layers, and what this means for how you build on Lambda. You also see how AWS Lambda securely runs your functions packaged and deployed as container images. Finally, you learn about SaaS, customization, and safe patterns for running your own customers’ code in your Lambda functions.
Unauthorized users and financially motivated third parties also have access to advanced cloud capabilities. This causes concerns and creates challenges for customers responsible for the security of their cloud assets. Join us as Roy Feintuch, chief technologist of cloud products, and Maya Horowitz, director of threat intelligence and research, face off in an epic battle of defense against unauthorized cloud-native attacks. In this session, Roy uses security analytics, threat hunting, and cloud intelligence solutions to dissect and analyze some sneaky cloud breaches so you can strengthen your cloud defense. This presentation is brought to you by Check Point Software, an AWS Partner.
Best practices for security governance in serverless applications
AWS provides services and features that your organization can leverage to improve the security of a serverless application. However, as organizations grow and developers deploy more serverless applications, how do you know if all of the applications are in compliance with your organization’s security policies? This session walks you through serverless security, and you learn about protections and guardrails that you can build to avoid misconfigurations and catch potential security risks.
How Amazon.com automates cash identification & matching with AWS AI/ML
The Amazon Cash application service matches incoming customer payments with accounts and open invoices, while an email ingestion service (EIS) processes more than 1 million semi-structured and unstructured remittance emails monthly. In this session, learn how this EIS classifies the emails, extracts invoice data from the emails, and then identifies the right invoices to close on Amazon financial platforms. Dive deep on how these services automated 89.5% of cash applications using AWS AI & ML services. Hear about how these services will eliminate the manual effort of 1000 cash application analysts in the next 10 years.
Understanding AWS Lambda streaming events
Dive into the details of using Amazon Kinesis Data Streams and Amazon DynamoDB Streams as event sources for AWS Lambda. This session walks you through how AWS Lambda scales along with these two event sources. It also covers best practices and challenges, including how to tune streaming sources for optimum performance and how to effectively monitor them.
Building real-time applications using Apache Flink
Build real-time applications using Apache Flink with Apache Kafka and Amazon Kinesis Data Streams. Apache Flink is a framework and engine for building streaming applications for use cases such as real-time analytics and complex event processing. This session covers best practices for building low-latency applications with Apache Flink when reading data from either Amazon MSK or Amazon Kinesis Data Streams. It also covers best practices for running low-latency Apache Flink applications using Amazon Kinesis Data Analytics and discusses AWS’s open-source contributions to this use case.
App modernization on AWS with Apache Kafka and Confluent Cloud
Learn how you can accelerate application modernization and benefit from the open-source Apache Kafka ecosystem by connecting your legacy, on-premises systems to the cloud. In this session, hear real customer stories about timely insights gained from event-driven applications built on an event streaming platform from Confluent Cloud running on AWS, which stores and processes historical data and real-time data streams. Confluent makes Apache Kafka enterprise-ready using infinite Kafka storage with Amazon S3 and multiple private networking options including AWS PrivateLink, along with self-managed encryption keys for storage volume encryption with AWS Key Management Service (AWS KMS).
BI at hyperscale: Quickly build and scale dashboards with Amazon QuickSight
Data-driven business intelligence (BI) decision making is more important than ever in this age of remote work. An increasing number of organizations are investing in data transformation initiatives, including migrating data to the cloud, modernizing data warehouses, and building data lakes. But what about the last mile—connecting the dots for end users with dashboards and visualizations? Come to this session to learn how Amazon QuickSight allows you to connect to your AWS data and quickly build rich and interactive dashboards with self-serve and advanced analytics capabilities that can scale from tens to hundreds of thousands of users, without managing any infrastructure and only paying for what you use.
Is there an Updated SAA-C03 Practice Exam?
Yes as of August 2022.
This SAA-C03 sample exam PDF file can give you a hint of what the real SAA-C03 exam will look like in your upcoming test. In addition, the SAA-C03 sample questions also contain the necessary explanations and reference links that you can study.
Top-paying Cloud certifications:
- Google Certified Professional Cloud Architect — $175,761/year
- AWS Certified Solutions Architect – Associate — $149,446/year
- Azure/Microsoft Cloud Solution Architect — $141,748/year
- Google Cloud Associate Engineer — $145,769/year
- AWS Certified Cloud Practitioner — $131,465/year
- Microsoft Certified: Azure Fundamentals — $126,653/year
- Microsoft Certified: Azure Administrator Associate — $125,993/year
AWS Certified Solution Architect Associate Exam Prep Quiz App
Download AWS Solution Architect Associate Exam Prep Pro App (No Ads, Full version with answers) for:
Android – iOS – Windows 10 – Amazon Android
How to Load balance EC2 Instances in an Autoscaling Group?
In this AWS tutorial, we are going to discuss how we can make the best use of AWS services to build a highly scalable and fault-tolerant configuration of EC2 instances. The use of Load Balancers and Auto Scaling Groups falls under a number of best practices in AWS, including Performance Efficiency, Reliability, and high availability.
Before we dive into this hands-on tutorial on how exactly we can build this solution, let’s have a brief recap on what an Auto Scaling group is, and what a Load balancer is.
Autoscaling group (ASG)
An Autoscaling group (ASG) is a logical grouping of instances which can scale up and scale down depending on pre-configured settings. By setting Scaling policies of your ASG, you can choose how many EC2 instances are launched and terminated based on your application’s load. You can do this based on manual, dynamic, scheduled or predictive scaling.
Elastic Load Balancer (ELB)
An Elastic Load Balancer (ELB) is a name describing a number of services within AWS designed to distribute traffic across multiple EC2 instances in order to provide enhanced scalability, availability, security and more. The particular type of Load Balancer we will be using today is an Application Load Balancer (ALB). The ALB is a Layer 7 Load Balancer designed to distribute HTTP/HTTPS traffic across multiple nodes – with added features such as TLS termination, Sticky Sessions and Complex routing configurations.
Getting Started
First of all, we open our AWS management console and head to the EC2 management console.
We scroll down on the left-hand side and select ‘Launch Templates’. A Launch Template is a configuration template which defines the settings for EC2 instances launched by the ASG.
Under Launch Templates, we will select “Create launch template”.
We specify the name ‘MyTestTemplate’ and use the same text in the description.
Under the ‘Auto Scaling guidance’ box, tick the box which says ‘Provide guidance to help me set up a template that I can use with EC2 Auto Scaling’ and scroll down to launch template contents.
When it comes to choosing our AMI (Amazon Machine Image) we can choose the Amazon Linux 2 under ‘Quick Start’.
The Amazon Linux 2 AMI is free tier eligible, and easy to use for our demonstration purposes.
Next, we select the ‘t2.micro’ under instance types, as this is also free tier eligible.
Under Network Settings, we create a new Security Group called ExampleSG in our default VPC, allowing HTTP access to everyone. It should look like this.
We can then add our IAM Role we created earlier. Under Advanced Details, select your IAM instance profile.
Then we need to include some user data which will load a simple web server and web page onto our Launch Template when the EC2 instance launches.
Under ‘advanced details’, and in ‘User data’ paste the following code in the box.
#!/bin/bash
yum update -y
yum install -y httpd.x86_64
systemctl start httpd.service
systemctl enable httpd.service
echo "Hello World from $(hostname -f)" > /var/www/html/index.html
Then simply click ‘Create Launch Template’ and we are done!
We are now able to build an Auto Scaling Group from our launch template.
On the same console page, select ‘Auto Scaling Groups’, and Create Auto Scaling Group.
We will call our Auto Scaling Group ‘ExampleASG’, and select the Launch Template we just created, then select next.
On the next page, keep the default VPC and select any default AZ and Subnet from the list and click next.
Under ‘Configure Advanced Options’ select ‘Attach to a new load balancer’ .
You will notice the settings below will change and we will now build our load balancer directly on the same page.
Select the Application Load Balancer, and leave the default Load Balancer name.
Choose an ‘Internet Facing’ Load balancer, select another AZ and leave all of the other defaults the same. It should look something like the following.
Under ‘Listeners and routing’, select ‘Create a target group’ and select the target group which was just created. It will be called something like ‘ExampleASG-1’. Click next.
Now we get to Group Size. This is where we specify the desired, minimum and maximum capacity of our Auto Scaling Group.
Set the capacities as required; for this example, we use a desired capacity of 2.
Click ‘skip to review’, and click ‘Create Auto Scaling Group’.
You will now see the Auto Scaling Group building, and the capacity is updating.
After a short while, navigate to the EC2 Dashboard, and you will see that two EC2 instances have been launched!
To make sure our Auto Scaling group is working as it should – select any instance, and terminate the instance. After one instance has been terminated you should see another instance pending and go into a running state – bringing capacity back to 2 instances (as per our desired capacity).
If we also head over to the Load Balancer console, you will find our Application Load Balancer has been created.
If you select the load balancer and scroll down, you will find the DNS name of your ALB – it will look something like ‘ExampleASG-1-1435567571.us-east-1.elb.amazonaws.com’.
If you enter the DNS name into your browser, you should see the web page served by one of the instances.
The page will display a ‘Hello World’ message including the IP address of the EC2 instance which is serving up the webpage behind the load balancer.
If you refresh the page a few times, you should see that the IP address listed will change. This is because the load balancer is routing you to the other EC2 instance, validating that our simple webpage is being served from behind our ALB.
The final step is to make sure you delete all of the resources you configured! Start by deleting the Auto Scaling Group – and ensure you delete your load balancer also – this will ensure you don’t incur any charges.
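If you prefer the command line, here is a hedged AWS CLI sketch of the same Auto Scaling group setup and teardown. The launch template name matches the one above, but the minimum/maximum sizes, subnet IDs, target group ARN, and load balancer ARN are illustrative placeholders you would swap for your own values.

```bash
# Create the Auto Scaling group from the launch template and attach it to the ALB target group.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name ExampleASG \
  --launch-template LaunchTemplateName=MyTestTemplate,Version='$Latest' \
  --min-size 1 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-0123456789abcdef0,subnet-0fedcba9876543210" \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/ExampleASG-1/0123456789abcdef

# Tear everything down afterwards to avoid charges.
aws autoscaling delete-auto-scaling-group --auto-scaling-group-name ExampleASG --force-delete
aws elbv2 delete-load-balancer --load-balancer-arn <your-alb-arn>
```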
Architectural Diagram
Below, you’ll find the architectural diagram of what we have built.
Learn how to Master AWS Cloud
Ultimate Training Packages – Our popular training bundles (on-demand video course + practice exams + ebook) will maximize your chances of passing your AWS certification the first time.
Membership – For unlimited access to our cloud training catalog, enroll in our monthly or annual membership program.
Challenge Labs – Build hands-on cloud skills in a secure sandbox environment. Learn, build, test and fail forward without risking unexpected cloud bills.
This post originally appeared on: https://digitalcloud.training/load-balancing-ec2-instances-in-an-autoscaling-group/
Download AWS Solution Architect Associate Exam SAA-C03 Prep Quiz App for:
All Platforms (PWA) – Android – iOS – Windows 10 – Amazon Android
There are significant protections provided to you natively when you are building your networking stack on AWS. This wide range of services and features can become difficult to manage, and becoming knowledgeable about what tools to use in which area can be challenging.
The two main security components which can be confused within VPC networking are the Security Group and the Network Access Control List (NACL). When you compare a Security Group vs NACL, you will find that although they are fairly similar in general, there is a distinct difference in the use cases for each of these security features.
In this blog post, we are going to explain the main differences between Security Group vs NACL and talk about the use cases and some best practices.
First of all, what do they have in common?
The main thing a Security Group and a NACL have in common is that they are both firewalls. So, what is a firewall?
Firewalls in computing monitor and control incoming and outgoing network traffic based on predetermined security rules. Firewalls provide a barrier between trusted and untrusted networks. The network layer which we are talking about in this instance is an Amazon Virtual Private Cloud – aka a VPC.
In the AWS cloud, VPCs are on-demand pools of shared resources, designed to provide a certain degree of isolation between different organizations and different teams within an account.
First, let’s talk about the particulars of a Security Group.
Security Group Key Features
Where do they live?
Security groups are tied to an instance. This can be an EC2 instance, an ECS cluster, or an RDS database instance – the security group provides traffic-filtering rules and acts as a firewall for the resources it is attached to. You have to purposely assign a security group to an instance if you don’t want it to use the default security group.
The default security group allows all outbound traffic by default, and allows inbound traffic only from other resources assigned to the same security group.
Any instance launched without an explicitly assigned security group gets the default security group, and therefore these rules, applied.
Stateful or Stateless
Security groups are stateful in nature. As a result, response traffic for an allowed request is automatically permitted in the opposite direction. For example, if you allow incoming traffic on port 80, the return traffic for those connections is automatically allowed out – without you having to explicitly add an outbound rule.
Allow or Deny Rules
The only rule type that can be used in security groups is the allow rule. Thus, you cannot blacklist a certain IP address to prevent it from establishing a connection with the instances in your security group; this would have to be achieved using a different mechanism.
Limits
An instance can have multiple security groups. By default, AWS will let you apply up to five security groups to a virtual network interface, but it is possible to use up to 16 if you submit a limit increase request.
Additionally, you can have 60 inbound and 60 outbound rules per security group (for a total of 120 rules). IPv4 rules are enforced separately from IPv6 rules; a security group, for example, may have 60 IPv4 rules and 60 IPv6 rules.
Network Access Control Lists (NACLS)
Now let’s compare the Security Group vs NACLs using the same criteria.
Where do they live?
Network ACLs operate at the subnet level, so any instance in a subnet with an associated NACL will automatically have the rules of the NACL applied.
Stateful or Stateless
Network ACLs are stateless. Consequently, any changes made to an incoming rule will not be reflected in an outgoing rule. For example, if you allow an incoming port 80, you would also need to apply the rule for outgoing traffic.
Allow or Deny Rules
Unlike a Security Group, NACLs support both allow and deny rules. With deny rules, you can explicitly prevent a certain IP address from establishing a connection; for example, you can block a specific known malicious IP address from connecting to an EC2 instance.
Limits
A subnet can have only one NACL associated with it at a time. However, you can associate one network ACL with one or more subnets within a VPC. By default, you can have up to 200 unique NACLs within a VPC; however, this is a soft limit that is adjustable.
Secondly, you can have 20 inbound and 20 outbound rules per NACL (for a total of 40 rules). IPv4 rules are enforced separately from IPv6 rules. A NACL, for example, may have 20 IPv4 rules and 20 IPv6 rules.
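To make the distinction concrete, here is a hedged AWS CLI sketch (the group ID, ACL ID, and IP addresses are placeholders): a security group can only add allow rules, while a network ACL can also add an explicit, numbered deny rule.

```bash
# Security group: allow rules only – permit inbound HTTP from anywhere.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 \
  --cidr 0.0.0.0/0

# Network ACL: allow AND deny rules, evaluated in rule-number order.
# Explicitly deny SSH from a known bad address before any allow rules apply.
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress --rule-number 90 \
  --protocol tcp --port-range From=22,To=22 \
  --cidr-block 203.0.113.25/32 \
  --rule-action deny
```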
We hope that you now more keenly understand the difference between NACLs and security groups.
Download AWS Solution Architect Associate Exam SAA-C03 Prep Quiz App for:
All Platforms (PWA) – Android – iOS – Windows 10 – Amazon Android

A multi-account strategy in AWS can provide you with a secure and isolated platform from which to launch your resources. Whilst smaller organizations may only require a few AWS accounts, large corporations with many business units often require many accounts. These accounts may be organized hierarchically.
Building this account topology manually on the cloud requires a high degree of knowledge, and is rather error prone. If you want to set up a multi-account environment in AWS within a few clicks, you can use a service called AWS Control Tower.
AWS Control Tower allows your team to quickly set up and govern a secure, multi-account AWS environment, known as a landing zone. Built on top of AWS Organizations, it automatically creates accounts under the appropriate organizational units, with hardened service control policies attached. Provisioning new accounts happens at the click of a button, automating security configuration and ensuring you extend governance into new accounts without any manual intervention.
There are a number of key features which constitute AWS Control Tower, and in this article, we will explore each section and break down how it makes governing multiple accounts a lot easier.
The Landing Zone
A Landing Zone refers to the multi-account structure itself, which is configured to provide you with a compliant and secure set of accounts upon which to start building. A Landing Zone can include extended features like federated account access via SSO and centralized logging via AWS CloudTrail and AWS Config.
The Landing Zone’s accounts follow guardrails set by you to ensure you are compliant to your own security requirements. Guardrails are rules written in plain English, leveraging AWS CloudFormation in the background to establish a hardened account baseline.
Guardrails can fit into one of a number of categories:
Mandatory – These come pre-configured on the accounts and cannot be removed. Examples include “Enable AWS Config in All Available Regions” and “Disallow Deletion of Log Archive”.
Strongly recommended – These are useful but not always necessary depending on your use case, and it is up to your discretion whether to use them. Examples include “Detect Whether Public Read Access to Amazon S3 Buckets is Allowed” and “Detect Whether Amazon EBS Volumes are Attached to Amazon EC2 Instances”.
Elective – Elective guardrails allow you to lock down certain behaviors which are commonly restricted in an AWS environment. These guardrails are not enabled by default, and can be disabled at any time. Examples include “Detect Whether MFA is Enabled for AWS IAM Users” and “Detect Whether Versioning for Amazon S3 Buckets is Enabled”.
Guardrails provide immediate protection from any number of scenarios, without the need to be able to read or write complex security policies – a big upside compared to manual provisioning of permissions.
Account Factory
Account Factory is a component of Control Tower which allows you to automate the secure provisioning of new accounts, which exist according to defined security principles. Several pre-approved configurations are included as part of the launch of your new accounts including Networking information, and Region Selection. You also get seamless integration with AWS Service Catalog to allow your internal customers to configure and build new accounts. Third party Infrastructure as Code tooling like Terraform (Account Factory for Terraform) can be used also to provide your cloud teams the ability to benefit from a multiple account setup whilst using tools they are familiar with.
Architecture of Control Tower
Let’s now dive into how Control Tower looks, with an architectural overview.
As you can see, there are a number of OUs (Organizational Units) in which accounts are placed. These are provisioned for you using AWS Organizations.
- Security OU – The Security OU contains two accounts, the Log Archive Account and the Audit Account. The Log Archive Account serves as a central store for all CloudTrail and AWS Config logs across the Landing Zone, securely stored within an S3 Bucket.
- Sandbox OU – The Sandbox OU is setup to host testing accounts (Sandbox Accounts) which are safely isolated from any production workloads.
- Production OU – This OU is for hosting all of your production accounts, containing production workloads.
- Non-Production OU – This OU can serve as a pre-production environment, in which further testing and development can take place.
- Suspended OU – This is a secure OU where you can move any deleted, reused or breached accounts. Permissions in this OU are extremely locked down, ensuring it is a safe location.
- Shared Services OU – The Shared Services OU contains accounts in which services shared across multiple other accounts are hosted. This consists of three accounts:
- The Shared Services account (where the resources are directly shared)
- The Security Services Account (hosting services like Amazon Inspector, Amazon Macie, AWS Secrets Manager as well as any firewall solutions.)
- The Networking Account – This contains shared networking components, such as VPC endpoints and DNS endpoints.
Any organization can benefit from using AWS Control Tower. Whether you’re a multinational corporation with years of AWS experience, or a burgeoning start-up with little experience in the cloud, a Landing Zone can give you confidence that your architecture is being provisioned efficiently and securely.
This article originally appeared on: https://digitalcloud.training/
AWS Cloud Certifications Breaking News – Testimonials – AWS Top Stories – AWS solution architect associate preparation guide
Download AWS Solution Architect Associate Exam SAA-C03 Prep Quiz App for:
All Platforms (PWA) – Android – iOS – Windows 10 – Amazon Android

- AWS Weekly Roundup: AWS Pi Day, Amazon Bedrock multi-agent collaboration, Amazon SageMaker Unified Studio, Amazon S3 Tables, and more – by Prasad Rao (AWS News Blog) on March 17, 2025 at 4:36 pm
Thanks to everyone who joined us for the fifth annual AWS Pi Day on March 14. Since its inception in 2021, commemorating the Amazon Simple Storage Service (Amazon S3) 15th anniversary, AWS Pi Day has grown into a flagship event highlighting the transformative power of cloud technologies in data management, analytics, and AI. This year’s
- AWS Pi Day 2025: Data foundation for analytics and AI – by Sébastien Stormacq (AWS News Blog) on March 14, 2025 at 3:04 pm
AWS Pi Day, an annual event commemorating the launch of Amazon S3 in 2006, has evolved from celebrating cloud storage milestones to showcasing cutting-edge developments in data management, analytics, and AI. In 2025, we're focused on unified data foundation for analytics and AI through new capabilities like S3 Tables, SageMaker Unified Studio, and Amazon Bedrock IDE.
- Collaborate and build faster with Amazon SageMaker Unified Studio, now generally available – by Donnie Prakoso (AWS News Blog) on March 13, 2025 at 11:05 pm
Amazon SageMaker Unified Studio is a single data and AI development platform that brings data together with analytics and AI/ML tools, including Amazon Bedrock and Amazon Q Developer, to streamline analytics and AI application development across virtually any use case.
- Amazon S3 Tables integration with Amazon SageMaker Lakehouse is now generally available – by Channy Yun (윤석찬) (AWS News Blog) on March 13, 2025 at 10:03 pm
Amazon S3 Tables integration with SageMaker Lakehouse enables unified access to S3 Tables data from AWS analytics engines like Amazon Athena, Redshift, EMR, and third-party query engines, to build securely and manage centrally.
- DeepSeek-R1 now available as a fully managed serverless model in Amazon Bedrock – by Channy Yun (윤석찬) (AWS News Blog) on March 10, 2025 at 8:01 pm
DeepSeek-R1 is now available as a fully managed model in Amazon Bedrock, freeing up your teams to focus on strategic initiatives instead of managing infrastructure complexities.
Download AWS Solution Architect Associate Exam Prep Pro App (No Ads, Full version with answers) for:

Android – iOS – Windows 10 – Amazon Android

What are AWS STEP FUNCTIONS?
There are many trends within the current cloud computing industry that have a sway on the conversations which take place throughout the market. One of these key areas of discussion is ‘Serverless’.
Serverless application deployment is a way of provisioning infrastructure in a managed way, without having to worry about building or maintaining servers – you launch the service and it works. Scaling, high availability, and automated processes are looked after by the managed AWS serverless services. AWS Step Functions provides a useful way to coordinate the components of distributed applications and microservices using visual workflows.
What is AWS Step Functions?
AWS Step Functions let developers build distributed applications, automate IT and business processes, and build data and machine learning pipelines by using AWS services.
Using Step Functions workflows, developers can focus on higher-value business logic instead of worrying about failures, retries, parallelization, and service integrations. In other words, AWS Step Functions is a serverless workflow orchestration service which can make developers’ lives much easier.
Components and Integrations
AWS Step Functions consist of a few components, the first being a State Machine.
What is a state machine?
The State Machine model uses given states and transitions to complete the tasks at hand. It is an abstract machine (system) that can be in only one state at a time, but it can switch between states. As a result, it doesn’t allow infinite loops, which removes an often costly source of errors entirely.
With AWS Step Functions, you can define workflows as state machines, which simplify complex code into easy-to-understand statements and diagrams. The process of building applications and confirming they work as expected is actually much faster and easier.
State
In a state machine, a state is referred to by its name, which can be any string, but must be unique within the state machine. State instances exist until their execution is complete.
An individual component of your state machine can be in any of the following 8 types of states (a minimal example definition follows this list):
- Task state – Do some work in your state machine. From a Task state, AWS Step Functions can call Lambda functions directly
- Choice state – Make a choice between different branches of execution
- Fail state – Stops execution and marks it as failure
- Succeed state – Stops execution and marks it as a success
- Pass state – Simply pass its input to its output or inject some fixed data
- Wait state – Provide a delay for a certain amount of time or until a specified time/date
- Parallel state – Begin parallel branches of execution
- Map state – Adds a for-each loop condition
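To make these state types concrete, below is a minimal, illustrative state machine definition in Amazon States Language; the Lambda ARN, state names, and JSON paths are placeholders rather than anything taken from the original post. It chains a Task state into a Choice state that ends in either a Succeed or a Fail state.

```json
{
  "Comment": "Minimal illustrative workflow (ARNs and names are placeholders)",
  "StartAt": "ProcessOrder",
  "States": {
    "ProcessOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessOrder",
      "Next": "IsOrderValid"
    },
    "IsOrderValid": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.isValid", "BooleanEquals": true, "Next": "Done" }
      ],
      "Default": "Failed"
    },
    "Done": { "Type": "Succeed" },
    "Failed": { "Type": "Fail", "Error": "InvalidOrder", "Cause": "Order validation failed" }
  }
}
```

You could create such a state machine with aws stepfunctions create-state-machine, passing the JSON as the --definition along with an IAM role that allows Step Functions to invoke the Lambda function.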
Limits
There are some limits which you need to be aware of when you are using AWS Step Functions. This table will break down the limits:
Use Cases and Examples
If you need to build workflows across multiple Amazon services, then AWS Step Functions are a great tool for you. Serverless microservices can be orchestrated with Step Functions, data pipelines can be built, and security incidents can be handled with Step Functions. It is possible to use Step Functions both synchronously and asynchronously.
Instead of manually orchestrating long-running, multiple ETL jobs or maintaining a separate application, Step Functions can ensure that these jobs are executed in order and complete successfully.
Thirdly, Step Functions are a great way to automate recurring tasks, such as applying patch updates, selecting infrastructure, and synchronizing data; Step Functions will scale automatically, respond to timeouts, and retry failed tasks.
With Step Functions, you can create responsive serverless applications and microservices with multiple AWS Lambda functions without writing code for workflow logic, parallel processes, error handling, or timeouts.
Additionally, services and data can be orchestrated that run on Amazon EC2 instances, containers, or on-premises servers.
Pricing
Each time a step of your workflow is executed, Step Functions counts a state transition. You are charged for the total number of state transitions across all your state machines, including retries.
There is a Free Tier for AWS Step Functions of 4,000 state transitions per month. Beyond the Free Tier, state transitions cost a flat rate of $0.000025 per state transition.
For example, a workflow with 10 states that runs 100,000 times in a month generates 1,000,000 state transitions, which would cost roughly (1,000,000 − 4,000) × $0.000025 ≈ $24.90.
Summary
In summary, Step Functions is a powerful tool which you can use to improve application development and the productivity of your developers. By migrating your workflow logic into the cloud, you will benefit from lower cost and rapid deployment. As this is a serverless service, you will be able to remove undifferentiated heavy lifting from the application development process.
Interview Questions
Q: How does AWS Step Function create a State Machine?
A: A state machine is a collection of states which allows you to perform tasks in the form of lambda functions, or another service, in sequence, passing the output of one task to another. You can add branching logic based on the output of a task to determine the next state.
Q: How can we share data in AWS Step Functions without passing it between the steps?
A: You can make use of InputPath and ResultPath. In the ValidationWaiting step, you can set these properties in the state machine definition.
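The exact snippet from the original post is not reproduced here, but a minimal illustrative fragment (the function ARN and JSON paths are hypothetical) might set the properties like this:

```json
"ValidationWaiting": {
  "Type": "Task",
  "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ValidateOrder",
  "InputPath": "$.order",
  "ResultPath": "$.validationResult",
  "Next": "NextStep"
}
```

InputPath selects the part of the state input that is sent to the task, while ResultPath controls where the task’s output is merged back into the original input, so the remaining input data is preserved for later states.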
This way, you send the external service only the data it actually needs, and you won’t lose access to any data that was previously in the input.
Q: How can I diagnose an error or a failure within AWS Step Functions?
A: The following are some possible failure events that may occur
- State Machine Definition Issues.
- Task Failures due to exceptions thrown in a Lambda Function.
- Transient or Networking Issues.
- A task has surpassed its timeout threshold.
- Privileges are not set appropriately for a task to execute.
Source: This AWS Step Function post originally appeared on: https://digitalcloud.training/
Download AWS Solution Architect Associate Exam SAA-C03 Prep Quiz App for:
All Platforms (PWA) – Android – iOS – Windows 10 – Amazon Android

AWS Secrets Manager vs SSM Parameter Store
If you want to be an AWS cloud professional, you need to understand the differences between the myriad of services AWS offers. You also need an in-depth understanding of how to use the security services to ensure that your account infrastructure is highly secure and safe to use. This is job zero at AWS, and there is nothing that is taken more seriously than security. AWS makes it really easy to implement security best practices and provides you with many tools to do so.
AWS Secrets Manager and SSM Parameter Store sound like very similar services on the surface. However, when you dig deeper – comparing AWS Secrets Manager vs SSM Parameter Store – you will find some significant differences which help you understand exactly when to use each tool.
AWS Secrets Manager
AWS Secrets Manager is designed to provide encryption for confidential information (like database credentials and API keys) that needs to be stored securely. Encryption is automatically enabled when creating a secret entry, and there are a number of additional features we are going to explore in this article.
Through AWS Secrets Manager, you can manage a wide range of secrets: database credentials, API keys, and other self-defined secrets are all eligible for this service.
If you are responsible for storing and managing secrets within your team, as well as ensuring that your company follows regulatory requirements – this is possible through AWS Secrets Manager which securely and safely stores all secrets within one place. Secrets Manager also has a large degree of added functionality.
SSM Parameter store
SSM Parameter store is slightly different. The key differences become evident when you compare how AWS Secrets Manager vs SSM Parameter Store are used.
The SSM Parameter Store focuses on a slightly wider set of requirements. Based on your compliance requirements, SSM Parameter Store can be used to store secrets, encrypted or unencrypted, and reference them from your code base.
By storing environment configuration data and other parameters, Parameter Store simplifies and streamlines the application deployment process. AWS Secrets Manager, in contrast, adds secret rotation, cross-account access, and tighter integration with other AWS services.
Based on this explanation you may think that they both sound similar. Let’s break down the similarities and differences between these services.
Similarities
Managed Key/Value Store Services
Both services allow you to store values under a name or key. This is an extremely useful aspect of both services, as an application deployment can reference different parameters or different secrets based on the deployment environment, allowing customizable and highly integrated deployments of your applications.
Both Referenceable in CloudFormation
You can use the powerful Infrastructure as Code (IaC) tool AWS CloudFormation to build your applications programmatically. The effortless deployment of either product using CloudFormation allows a seamless developer experience, without using painful manual processes.
While SSM Parameter Store only allows one version of a parameter to be active at any given time, Secrets Manager allows multiple versions to exist at the same time when you are rotating a secret using staging labels.
Similar Encryption Options
They are both inherently very secure services – and you do not have to choose one over another based on the encryption offered by either service.
Through another AWS security service, KMS (the Key Management Service), IAM policies can be written to control which IAM users and roles have permission to decrypt a value. This restricts access to anyone who doesn’t need it, and it abides by the principle of least privilege, helping you meet compliance standards.
Versioning
Versioning is the ability to save multiple, iteratively developed versions of a value, so that you can quickly restore a previous version or maintain multiple copies of the same item.
Both services support versioning of secret values. This allows you to view multiple previous versions of your parameters. You can also optionally promote a former version to be the current version, which can be useful as your application changes.
Given that there are lots of similarities between the two services, it is now time to view and compare the differences, along with some use cases of either service.
Differences
Cost
The costs are different across the services: SSM Parameter Store tends to cost less than Secrets Manager. Standard parameters are free in SSM – you won’t be charged for the first 10,000 parameters you store – however, advanced parameters will cost you. AWS Secrets Manager, by contrast, bills a fixed fee per secret per month plus a fee per 10,000 API calls.
This may factor into how you use each service and how you define your cloud spending strategy, so this is valuable information.
Password generation
A useful feature within AWS Secrets Manager allows us to generate random data during the creation phase to allow for the secure and auditable creation of strong and unique passwords and subsequently reference it in the same CloudFormation stack. This allows our applications to be fully built using IaC, and gives us all the benefits which that entails.
AWS Systems Manager Parameter Store, on the other hand, doesn’t work this way and doesn’t generate random data for you — you need to do it yourself using the console or the AWS CLI, and this can’t happen during the creation phase.
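To illustrate the difference, here is a hedged CLI sketch (the secret and parameter names are placeholders): Secrets Manager can generate a strong random password for you, whereas with Parameter Store you supply the value yourself.

```bash
# Secrets Manager: generate a strong random password and store it as a secret.
PASSWORD=$(aws secretsmanager get-random-password \
  --password-length 32 --exclude-punctuation \
  --query RandomPassword --output text)

aws secretsmanager create-secret \
  --name prod/app/db-credentials \
  --secret-string "{\"username\":\"appuser\",\"password\":\"$PASSWORD\"}"

# SSM Parameter Store: you provide the value yourself and store it as a SecureString.
aws ssm put-parameter \
  --name /prod/app/db-password \
  --value "$PASSWORD" \
  --type SecureString
```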
Rotation of Secrets
A powerful feature of AWS Secrets Manager is the ability to automatically rotate credentials based on a pre-defined schedule, which you set. AWS Secrets Manager integrates this feature natively with many AWS services; automated rotation is simply not possible using AWS Systems Manager Parameter Store. You would have to refresh and update the data yourself, which involves a lot more manual setup to achieve the same functionality that is supported natively with Secrets Manager.
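For example, once you have a rotation Lambda function in place, enabling a 30-day rotation schedule in Secrets Manager is a single call (the secret name and ARNs below are placeholders):

```bash
aws secretsmanager rotate-secret \
  --secret-id prod/app/db-credentials \
  --rotation-lambda-arn arn:aws:lambda:us-east-1:123456789012:function:RotateDbSecret \
  --rotation-rules AutomaticallyAfterDays=30
```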
Cross-Account Access
Firstly, there is currently no way to attach a resource-based IAM policy to an AWS Systems Manager Parameter Store parameter (Standard type). This means that cross-account access is not possible with Parameter Store, and if you need this functionality you will have to configure an extensive workaround, or use AWS Secrets Manager.
Size of Secrets
Each of the options has a maximum size for a secret or parameter.
Secrets Manager can store secrets of up to 64 KB in size.
Standard parameters can use up to 4,096 characters (4 KB) for each entry, and advanced parameters can store entries of up to 8 KB.
Multi-Region Deployment
As with many other features of AWS Secrets Manager, AWS SSM Parameter Store does not offer the same functionality. In this case, you can’t easily replicate your secrets across multiple Regions for added resilience, and you will need to implement an extensive workaround for this to work.
In terms of use cases, you may want to use AWS Secrets Manager to store your encrypted secrets with easy rotation. If you require a feature rich solution for managing your secrets to stay compliant with your regulatory and compliance requirements, consider choosing AWS Secrets Manager.
On the other hand, you may want to choose SSM Parameter Store as a cheaper option to store your encrypted or unencrypted secrets. Parameter Store will provide some limited functionality to enable your application deployments by storing your parameters in a safe, cheap and secure way.
Source: This post originally appeared on https://digitalcloud.training/
Download AWS Solution Architect Associate Exam SAA-C03 Prep Quiz App for:
All Platforms (PWA) – Android – iOS – Windows 10 – Amazon Android

Disaster Recovery in the AWS Cloud
When you are building applications in the AWS cloud, you have to go to painstaking lengths to make your applications durable, resilient and highly available.
Whilst AWS can help you with this for the most part, it is nearly impossible to see a situation in which you will not need some kind of Disaster Recovery plan.
An organization’s Business Continuity and Disaster Recovery (BCDR) program is a set of approaches and processes that can be used to recover from a disaster and resume regular business operations after the disaster has ended. Examples of disasters include a natural calamity, a disruption caused by a power outage, an employee mistake, a hardware failure, or a cyberattack.
With the implementation of a BCDR plan, businesses can operate as close to normal as possible after an unexpected interruption, and with the least possible loss of data.
In this blog post, we will explore three notable disaster recovery strategies, each with different merits and drawbacks, and different ways of restoring your workloads once they’ve been lost. Before we can appreciate these different methods, however, we need to break down some key terminology in Disaster Recovery. We will examine all of these strategies through the lens of AWS infrastructure.
What is Disaster Recovery?
The following definition provides an excellent summary of disaster recovery – an extremely broad term.
“Disaster recovery involves a set of policies, tools, and procedures to enable the recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster.”
This definition emphasizes the necessity of recovering systems, tools, etc. after a disaster. Disaster Recovery depends on many factors, including:
• Financial plan
• Competence in technology
• Use of tools
• The Cloud Provider used
It is essential to understand some key terminology, including RPO and RTO, in order to evaluate disaster recovery efficacy:
How do RPOs and RTOs differ?
RPO (Recovery Point Objective)
The Recovery Point Objective (RPO) is the maximum acceptable amount of data loss after an unplanned data-loss incident, expressed as an amount of time. Because it is a maximum, achieving a low RPO means backing up or replicating your data frequently.
RTO (Recovery Time Objective)
The Recovery Time Objective (RTO) is the maximum tolerable length of time that a computer, system, network or application can be down after a failure or disaster occurs. This is measured in minutes or hours, and achieving as low an RTO as possible depends on how quickly you can get your application back online.
Disaster Recovery Methods
Now that we understand these key concepts, we can break down three popular disaster recovery methods, namely Backup and Restore, Pilot Light, and Multi-Site Active/Active.
Backup and Restore
Data loss or corruption can be mitigated by utilizing backup and restore. Replicating data to other data centers can also mitigate the effects of a disaster. In addition to restoring the data, you must redeploy the infrastructure, configuration, and application code in the recovery data center.
The recovery time objective (RTO) and recovery point objective (RPO) of backup and restoration are higher. The result is longer downtimes and greater data loss between the time of the disaster event and the time of recovery. Even so, backup and restore may still be the most cost-effective and easiest strategy for your workload. RTO and RPO in minutes or less are not required for all workloads.
RPO is dependent on how frequently you take snapshots, and RTO is dependent on how long it takes to restore snapshots.
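As a rough sketch (the volume ID is a placeholder), scheduling a snapshot of an EBS volume once a night via cron gives an RPO of roughly 24 hours; running the same command hourly would bring the RPO down to about an hour.

```bash
# Run from cron (e.g. "0 2 * * *" for a nightly 02:00 snapshot).
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "Nightly backup $(date +%F)" \
  --tag-specifications 'ResourceType=snapshot,Tags=[{Key=Purpose,Value=DR}]'
```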
Pilot Light
As far as affordability and reliability are concerned, Pilot Light strikes a perfect balance between the two. There is one key difference between Backup and Restore and Pilot Light: Pilot Light will always have its core functionality running somewhere, either in another region or in another account and region that separates it from Backup and Restore.
With Backup and Restore, for example, you have all of your data synced into an S3 bucket so that you can retrieve it in case of a disaster. With Pilot Light, by contrast, the data is synchronized with an always-on, always-available database replica.
Also, other core services, such as an EC2 instance with all of the necessary software already installed on it, will be available and ready to use at the touch of a button. There would be an Auto-Scaling Policy in place for each of these EC2 instances to ensure the instances would scale out in a timely manner in order to meet your production needs as soon as possible. This strategy focuses on a lower chance of overall downtime and is contingent on smaller aspects of your architecture running all of the time.
Multi-Site Active/Active
Having an exactly mirrored application across multiple AWS regions or data centers is the most resilient cloud disaster recovery strategy.
In the multi-site active/active strategy, you will be able to achieve the lowest RTO (recovery time objective) and RPO (recovery point objective). However, it is important to take into account the potential cost and complexity of operating active stacks in multiple locations.
A multi-AZ workload stack runs in every Region to ensure high availability. Data is live-replicated between the data stores in each Region, and this data is also backed up. Data backups remain crucially important to protect against disasters that may lead to the loss or corruption of data.
Only the most demanding applications should use this DR method, since it has the lowest RTOs and RPOs of any DR technique.
Conclusion
There is no “one size fits all” Disaster Recovery plan that suits every circumstance. Budget ahead of time and ensure that you don’t spend more than you can afford. It may seem like a lot of money is being spent on “what ifs?” – but if your applications CANNOT go down, this is what gives you the capability to ensure they stay up.
Download AWS Solution Architect Associate Exam SAA-C03 Prep Quiz App for:
All Platforms (PWA) – Android – iOS – Windows 10 – Amazon Android

S3 vs EBS vs EFS — Comparing AWS Storage Services
AWS offers many services, so many that it can often get pretty confusing for beginners and experts alike. This is especially true when it comes to the many storage options AWS provides its users. Knowing the benefits and use cases of AWS storage services will help you design the best solution. In this article, we’ll be looking at S3 vs EBS vs EFS.
So, what are these services and what do they do? Let’s start with S3.
Amazon S3 Benefits
The Amazon Simple Storage Service (Amazon S3) is AWS’s object storage solution. If you’ve ever used a service like Google Drive or Dropbox, you’ll know generally what S3 can do. At first glance, S3 is simply a place to store files, photos, videos, and other documents. However, after digging deeper, you’ll uncover the many functionalities of S3, making it much more than the average object storage service.
Some of these functionalities include scalable solutions, which essentially means that if your project gets bigger or smaller than originally expected, S3 can grow or shrink to easily meet your needs in a cost-effective manner. S3 also helps you to easily manage data, giving you the ability to control who accesses your content. With S3 you have data protection against all kinds of threats. It also replicates your data for increased durability and lets you choose between different storage classes to save you money.
S3 is incredibly powerful, so powerful, in fact, that even tech-giant Netflix uses S3 for its services. If you like Netflix, you have AWS S3 to thank for its convenience and efficiency! In fact, many of the websites you access on a daily basis either run off of S3 or use content stored in S3. Let’s look at a couple of use cases to get a better idea of how S3 is used in the real world.
Amazon S3 Use Cases
Have you ever accidentally deleted something important? S3 has backup and restore capabilities to make sure a user doesn’t lose data through versioning and deletion protection. Versioning means that AWS will save a new version of a file every time it’s updated and deletion protection makes sure a user has the right permissions before deleting a file.
What would a company do during an unexpected power outage or if their on-premises data center suddenly crashed? S3 data is protected in an Amazon managed data center, the same data centers Amazon uses to host their world-famous shopping website. By using S3, users get a second storage option without having to directly pay the rent and utilities of a physical site.
Some businesses need to store financial, medical, or other data mandated by industry standards. AWS allows users to archive this type of data with S3 Glacier, one of the many S3 storage classes to choose from. S3 Glacier is a cost-effective solution for archiving and one of the best in the market today.
Amazon EBS Benefits
Amazon Elastic Block Store (Amazon EBS) is an umbrella term for all of AWS's block storage services. EBS is different from S3 in that it provides a storage volume directly connected to Amazon EC2 (Elastic Compute Cloud). EBS allows you to store files directly on an EC2 instance, so the instance can access your files quickly and cheaply. So when you hear or read about EBS, think "EC2 storage."
You can customize your EBS volumes with the configuration best suited for the workload. For example, if you have a workload that requires greater throughput, then you could choose a Throughput Optimized HDD EBS volume. If you don’t have any specific needs for your workload then you could choose an EBS General Purpose SSD. If you need a high-performance volume then an EBS Provisioned IOPS SSD volume would do the trick. If you don’t understand yet, that’s okay! There’s a lot to learn about these volume types and we’ll cover that all in our video courses.
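As a rough sketch of what choosing a volume type looks like in practice, the boto3 calls below create a General Purpose SSD, a Throughput Optimized HDD, and a Provisioned IOPS SSD volume. The gp3, st1 and io1 API names, the Region/AZ, and the sizes are assumptions for illustration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# General Purpose SSD (gp3) for an everyday workload.
gp = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,                # GiB
    VolumeType="gp3",
)

# Throughput Optimized HDD (st1) for large, sequential workloads.
st = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,
    VolumeType="st1",
)

# Provisioned IOPS SSD (io1) when you need consistently high IOPS.
io = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=200,
    VolumeType="io1",
    Iops=10000,
)

print(gp["VolumeId"], st["VolumeId"], io["VolumeId"])
```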
Just remember that EBS works with EC2 in a similar way to how your hard drive works with your computer. An EBS lets you save files locally to an EC2 instance. This storage capacity allows your EC2 to do some pretty powerful stuff that would otherwise be impossible. Let’s look at a couple of examples.
Amazon EBS Use Cases
Many companies look for cheaper ways to run their databases. Amazon EBS provides both Relational and NoSQL Databases with scalable solutions that have low-latency performance. Slack, the messaging app, uses EBS to increase database performance to better serve customers around the world.
Another use case of EBS involves backing up your instances. Because EBS is an AWS native solution, the backups you create in EBS can easily be uploaded to S3 for convenient and cost-effective storage. This way you’ll always be able to recover to a certain point-in-time if needed.
Amazon EFS Benefits
Elastic File System (EFS) is Amazon's way of letting businesses share file data across multiple EC2 instances or on-premises servers simultaneously. EFS is an elastic, serverless service: it automatically grows and shrinks with your business's file storage needs without you having to provision or manage anything.
Some advantages include being able to divide up your content between frequently accessed or infrequently accessed storage classes, helping you save some serious cash. EFS is an AWS native solution, so it also works with containers and functions like Amazon Elastic Container Service (ECS) and AWS Lambda.
Imagine an international company has a hundred EC2 instances with each hosting a web application (a website like this one). Hundreds of thousands of people are accessing these servers on a regular basis — therefore producing HUGE amounts of data. EFS is the AWS tool that would allow you to connect the data gathered from hundreds, even thousands of instances so you can perform data analytics and gather key business insights.
Amazon EFS Use Cases
Amazon Elastic File System (EFS) provides an easy-to-use, high-performing, and consistent file system needed for machine learning and big data workloads. Tons of data scientists use EFS to create the perfect environment for their heavy workloads.
EFS provides an effective means of managing content and web applications. EFS mimics many of the file structures web developers often use, making it easy to learn and implement in web applications like websites or other online content.
When companies like Discover and Ancestry switched from legacy storage systems to Amazon EFS they saved huge amounts of money due to decreased costs in management and time.
S3 vs EBS vs EFS Comparison Table

| | Amazon S3 | Amazon EBS | Amazon EFS |
| --- | --- | --- | --- |
| Storage type | Object storage | Block storage | File storage |
| Attached to | Accessed via API/HTTP from anywhere | A single EC2 instance | Many EC2 instances (and on-premises servers) |
| Typical use | Files, photos, videos, backups, static content | Boot and data volumes, databases | Shared data across instances, analytics |

AWS Storage Summed Up
- S3 is for object storage. Think photos, videos, files, and simple web pages.
- EBS is for EC2 block storage. Think of a computer’s hard drive.
- EFS is a file system for many EC2 instances. Think multiple EC2 instances and lots of data.
I hope that clears up AWS storage options. Of course, we can only cover so much in an article but check out our AWS courses for video lectures and hands-on labs to really learn how these services work.
Thanks for reading!

Serverless computing has been on the rise the last few years, and whilst there is still a large number of customers who are not cloud-ready, there is a larger contingent of users who want to realize the benefits of serverless computing to maximize productivity and to enable newer and more powerful ways of building applications.
Serverless in cloud computing
Serverless is a cloud computing execution model in which the cloud provider allocates machine resources on demand and manages the servers on behalf of their customers. Cloud service providers still use servers to execute code for developers, which makes the term “serverless” a misnomer. There is always a server running in the background somewhere, and the cloud provider (AWS in this case) will run the infrastructure for you and leave you with the room to build your applications.
AWS Lambda
Within the AWS world, the principal Serverless service is AWS Lambda. Using AWS Lambda, you can run code for virtually any type of application or backend service without provisioning or managing servers. AWS Lambda functions can be triggered from many services, and you only pay for what you use.
So how does Lambda work? Using Lambda, you can run your code on high availability compute infrastructure and manage your compute resources. This includes server and operating system maintenance, capacity provisioning and automatic scaling, code and security patch deployment, and code monitoring and logging. All you need to do is supply the code.
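In practice, "supplying the code" means writing a handler function that Lambda invokes for you. Here is a minimal Python sketch; the event fields are placeholders for whatever payload your trigger delivers.

```python
import json

def lambda_handler(event, context):
    """Entry point Lambda invokes; 'event' carries the trigger's payload."""
    name = event.get("name", "world")
    # Lambda handles the servers, scaling, patching and logging around this code.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```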
Source: https://digitalcloud.training/aws-lambda-versions-and-aliases/
How different services trigger Lambda functions:
https://docs.aws.amazon.com/lambda/latest/dg/lambda-services.html
API Gateway APIs in front of Lambda functions:
https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html
The 10 Best AWS Lambda Use Cases
AWS Lambda is a powerful service that has in recent years elevated AWS to be the leader not only in serverless architecture development, but within the cloud industry in general. For those of you who don't know, Lambda is a serverless, event-driven compute service that lets you run code without provisioning or managing servers, and it can be used for virtually any type of application or backend service. Its serverless nature and its wide appeal across different use cases have made AWS Lambda a useful tool for running short-running compute operations in the cloud. What makes Lambda better than other options? Lambda handles the operational and administrative tasks on your behalf, such as provisioning capacity, monitoring fleet health, deploying and running your code, and monitoring and logging it, and these are its key selling points.
The use cases for AWS Lambda are varied and cannot be fully explored in one blog post. However, we have put together the top ten use cases in which Lambda shines.

1. Processing uploaded S3 objects
Once your files land in S3 buckets, you can start processing them immediately with Lambda using S3 object event notifications. Using AWS Lambda for thumbnail generation is a great example: the solution is cost-effective and you won't have to worry about scaling, since Lambda can handle the load you place on it. The alternative is an EC2 instance spinning up every time a photo needs converting to a thumbnail, or an EC2 instance left running 24/7 for the occasions when a thumbnail is needed. This use case calls for a low-latency, highly responsive, event-driven architecture that lets your application perform effectively at scale (a minimal handler for this pattern is sketched after this list).

2. Document editing and conversion in a hurry
When objects are uploaded to Amazon S3, you can use AWS Lambda to modify the material to meet whatever business goal you have, including converting document types and adding watermarks to important corporate documents. For example, you could expose a RESTful API that uses Amazon S3 Object Lambda to convert documents to PDF and apply a watermark based on the requesting user, or automatically convert a file from DOC to PDF when it is uploaded to a particular S3 bucket. The possibilities in this area are effectively unlimited.

3. Cleaning up the backend
Any consumer-oriented website needs a fast response time as one of its top priorities. Slow response times, or even a visible delay, can cause traffic to be lost: consumers will simply switch to another site if yours is too busy with background tasks to display the next page or search results promptly. While some sources of delay, such as slow ISPs, are beyond your control, there are things you can do to improve response time. Where does AWS Lambda come in? Backend tasks should not delay frontend requests. If you need to parse user input before storing it in a database, or perform other input-processing tasks that are not needed to render the next page, you can hand the data to an AWS Lambda process, which can clean it up and send it on to your database or application.

4. Creating and operating serverless websites
Maintaining a dedicated server, even a virtual one, is increasingly outdated; provisioning instances, updating the OS and so on takes time and distracts you from your core functions. You don't need to manage a single server or operating system when you use AWS Lambda and other AWS services to build a website. For a basic version of this architecture you could combine Amazon API Gateway, DynamoDB, Amazon S3 and Amazon Cognito User Pools to build a simple, low-effort, highly scalable website for almost any business use case.

5. Real-time processing of bulk data
It is not unusual for an application, or even a website, to handle a certain amount of real-time data. Depending on how it is produced, this data can come from communication devices, peripherals interacting with the physical world, or user input devices. Generally this data arrives in short bursts, or even a few bytes at a time, in formats that are easy to parse. Nevertheless, there are times when your application needs to handle large amounts of streaming input, and moving it to temporary storage for later processing may not be the best option. A common requirement is to identify specific values in a stream of data collected from a remote device, such as a telemetry device. By sending the stream to a Lambda application that can pull out and process the required information quickly, you can handle the real-time tasks without slowing down your main application.

6. Rendering pages in real time
Lambda can play a significant role if you use predictive page rendering to prepare webpages for display on your website. For example, you can use a Lambda-based application to retrieve documents and multimedia files that may be needed for the next requested page and perform the initial stages of rendering them for display.

7. Automated backups
When you operate an enterprise application in the cloud, manual tasks like backing up databases or other storage can fall by the wayside. Taking this undifferentiated heavy lifting out of your operations lets you focus on what delivers value. Lambda scheduled events are a great way to perform housekeeping within your account: using the boto3 Python libraries and AWS Lambda, you can create backups, check for idle resources, generate reports, and perform other common tasks quickly.

8. Email campaigns using AWS Lambda and SES
You can build simple email campaigns that send mass emails to potential customers to improve business outcomes. Any organization that does marketing needs mass mailing, and traditional solutions often require hardware expenditure, licence costs, and technical expertise. With AWS Lambda and Simple Email Service (SES), you can quite easily build an in-house serverless email platform that scales in line with your application.

9. Real-time log analysis
You can easily build a Lambda function to check log files from CloudTrail or CloudWatch. Amazon CloudWatch provides data and actionable insights to monitor your applications, respond to system-wide performance changes, and optimize resource utilization, while AWS CloudTrail tracks all API calls made within your account. You can search the logs for specific events or log entries as they occur and be notified via SNS, and you can just as easily implement custom notification hooks to Slack, Zendesk, or other systems by calling their API endpoints from Lambda.

10. Building a serverless chatbot
Building and running chatbots is time-consuming and expensive: developers must provision, run and scale the infrastructure that runs the chatbot code. With AWS Lambda, you can run a scalable chatbot architecture quite easily, without having to provision the hardware you would need outside the cloud.

This article originally appeared on: https://digitalcloud.training/
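Referring back to use case 1 above, here is a minimal sketch of an S3-triggered Lambda handler. Real thumbnail generation would resize the image (for example with Pillow); to keep the sketch self-contained it simply copies each uploaded object under a hypothetical "thumbnails/" prefix.

```python
import boto3
import urllib.parse

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # An S3 event notification can contain one or more records.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Placeholder for real processing (e.g. image resizing); here we
        # just copy the object under a 'thumbnails/' prefix for illustration.
        s3.copy_object(
            Bucket=bucket,
            Key=f"thumbnails/{key}",
            CopySource={"Bucket": bucket, "Key": key},
        )
    return {"processed": len(event["Records"])}
```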
Basics of Amazon Detective (Included in AWS SAA-C03 Exam)
Amazon Detective is integrated with Amazon GuardDuty, AWS Security Hub, and partner security products, so you can easily navigate into Detective from those services. You don't have to organize any data or develop, configure, or tune queries and algorithms. There are no upfront costs: customers pay only for the events analyzed, with no additional software to deploy or other feeds to subscribe to.
Testimonial: Passed SAA-C03!

Hi, just got the word, I passed the cert!
I mainly used Maarek's videos for the initial learning, did the Tutorials Dojo practice tests, and used Cantrill's course to touch up on areas where I lacked knowledge.
My next cert is probably going to be SysOps. This time I plan to just use Cantrill's videos, because I feel they helped me the most.
Source: r/awscertifications
Testimonial: Passed SAA-C03!

Today I got the notification that I am officially an AWS Certified Solutions Architect and I’m so happy!
I was nervous because I had been studying for the C02 version, but at the last minute I registered for the C03, thinking it was somehow "better" because it was more up to date. I didn't know how different it would be, and the announcement that Stephane had yet to release an updated version for this exam made me even more anxious. But it turned out well!
I used Stephane’s Udemy course and the practice exams from Tutorials Dojo to help me study. I think the practice exams were the most useful as they helped me understand better how the questions would be presented.
Looking back now, I don't think there was a major difference between C02 and C03, so if you're worried that you haven't studied specifically for C03, I wouldn't worry too much.
My experience with Practice exam –
I found Stephan’s practice exam to be more challenging and it really helped me in filling the gap. Options were very similar to each other so guessing was not an option in stephane’s exams.
With TD, questions were worded correctly but options were terrible. Like even if you don’t know the answer you can guess it. Some options were like ( Which one of the option is a planet – # sun – # Earth #cow – # AWS) like they were that easy to guess and that’s why I got 85% in the second test and I have to review all question because I don’t know the answer yet I was scoring.
Things of note:
Use the keyboard shortcuts (e.g. Alt+N for next question). Over 65 questions, this will save at least 1-2 minutes.
Attempt every question on first read; even if you flag it to come back to, make a go of it there and then. That way, if you run out of time, you've put in your first, gut-feel answer. More often than not you won't change it during review anyway.
Don’t get disheartened. There are 15 non-scoring questions so conceivably one could get 15 plus 12-14 more wrong and still hit 720+ and pass!
Look for the keywords and the obviously wrong answers. Most of the time it will come down to a choice of two answers, with a keyword that nails the right one. I found a lot of keywords/points that made me think "yep – it has to be that".
Read the entire question and all of the answers, even if sure on the right answer, just in case…
Discover what works best for you in terms of learning. Some people are more suited to books, some are hands on/projects, some are audio/video etc. Finding your way helps make learning something new a lot easier.
If testing at home, check your machine the week before and again the day before, and don't reboot after that. Remove as much stress from the event as possible.
Source: r/awscertifications
SAA-C03 prep – Doubts about Routing Policies

I’m preparing for SAA-C03, when I have questions where to choose the correct policy routing I always struggle with Latency, Geolocation and Geoproximity.
Especially with these kinds of scenarios:
1. Latency
I’ve users in the US and in Europe, those in Europe have perf issues, you set up your application also in Europe and you pick which policy routing?
Obviously ;-P I’ve selected Geolocation, because they are in Europe and I want they use the EU instances!!! It will boost the latency as well 🙁 , or at least to me is logical, while using a Latency based policy, I cannot be sure that they will use my servers in Europe.
2. Geolocation and Geoproximity
I don’t have a specific case to show up, but my understanding is that when I need to change the bias, I pick proximity based routing. The problem for me, it’s to understand when a simple geolocation policy is not enough (any tips). Is that Geolocation is used mainly to restrict content and internationalization? For country/compliance based restrictions, I understand that is better to use CloudFront, so using Routing is even not an option in such cases…
Comments:
#1: Geolocation isn’t about performance, that’s a secondary effect, but it’s not the primary function.
Latency based routing is there for a reason, to ensure the lowest latency .. and latency (generally) is a good indicator of performance.. especially for any applications which are latency sensitive.
Geo-location is more about delivering content from a localized server .. it might be about data location, language, local laws.
These are taken from my lessons on it, geolocation doesn’t return the ‘closest’ record… if you have a record tagged UK and one tagged France and you are in Germany .. it won’t return either of those… it would do Germany, Europe, default etc.
The different routing types are pretty easy to understand once you think about them in the right way.
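To make the difference concrete, here is a minimal boto3 sketch that creates one latency-based record and one geolocation record in the same hosted zone. The hosted zone ID, record names and IP address are placeholders for illustration.

```python
import boto3

r53 = boto3.client("route53")

# Hypothetical hosted zone and endpoint used for illustration only.
ZONE_ID = "Z0000000000EXAMPLE"

r53.change_resource_record_sets(
    HostedZoneId=ZONE_ID,
    ChangeBatch={"Changes": [
        # Latency-based: Route 53 answers with the region giving the lowest latency.
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "A",
            "SetIdentifier": "eu-latency", "Region": "eu-west-1",
            "TTL": 60, "ResourceRecords": [{"Value": "203.0.113.10"}]}},
        # Geolocation: users resolved as being in Europe always get this record.
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "geo.example.com", "Type": "A",
            "SetIdentifier": "eu-geo", "GeoLocation": {"ContinentCode": "EU"},
            "TTL": 60, "ResourceRecords": [{"Value": "203.0.113.10"}]}},
    ]},
)
```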
We know you like your hobbies, especially coding. We do too, but you should also find time to build the skills that will drive your career into six figures. Cloud skills and certifications can be just the thing you need to make the move into cloud or to level up and advance your career. 85% of hiring managers say cloud certifications make a candidate more attractive. Start your cloud journey with these excellent books below:

Testimonial: Pass My SAA-C03
I passed my solutions architect associate test yesterday.
For those looking for guidance, I took Stephane’s udemy course and took several practice exams.
In addition, I took the AWS Exam Readiness webinar and the official practice tests.
AWS WAF & Shield
AWS WAF and AWS Shield help protect your AWS resources from web exploits and DDoS attacks.
AWS WAF is a web application firewall service that helps protect your web apps from common exploits that could affect app availability, compromise security, or consume excessive resources.
AWS Shield provides expanded DDoS attack protection for your AWS resources, with 24/7 support from the AWS DDoS Response Team and detailed visibility into DDoS events.
We’ll now go into more detail on each service.
AWS Web Application Firewall (WAF)
AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.
AWS WAF helps protect web applications from attacks by allowing you to configure rules that allow, block, or monitor (count) web requests based on conditions that you define.
These conditions include IP addresses, HTTP headers, HTTP body, URI strings, SQL injection and cross-site scripting.
Can allow or block web requests based on strings that appear in the requests using string match conditions.
For example, AWS WAF can match values in the following request parts:
- Header – A specified request header, for example, the User-Agent or Referer header.
- HTTP method – The HTTP method, which indicates the type of operation that the request is asking the origin to perform. CloudFront supports the following methods: DELETE, GET, HEAD, OPTIONS, PATCH, POST, and PUT.
- Query string – The part of a URL that appears after a ? character, if any.
- URI – The URI path of the request, which identifies the resource, for example, /images/daily-ad.jpg.
- Body – The part of a request that contains any additional data that you want to send to your web server as the HTTP request body, such as data from a form.
- Single query parameter (value only) – Any parameter that you have defined as part of the query string.
- All query parameters (values only) – As above, but inspects all parameters within the query string.
New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns.
When AWS services receive requests for web sites, the requests are forwarded to AWS WAF for inspection against defined rules.
Once a request meets a condition defined in the rules, AWS WAF instructs the underlying service to either block or allow the request based on the action you define.
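As a rough sketch of how these conditions and actions fit together, the boto3 call below creates a regional web ACL with a single string-match rule that blocks requests whose URI path contains "/admin". The ACL name, rule name and path are assumptions for illustration.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Minimal web ACL with one string-match rule; names and the path are placeholders.
wafv2.create_web_acl(
    Name="example-web-acl",
    Scope="REGIONAL",                    # use "CLOUDFRONT" for CloudFront distributions
    DefaultAction={"Allow": {}},         # allow anything no rule blocks
    Rules=[{
        "Name": "block-admin-path",
        "Priority": 0,
        # Block requests whose URI path contains the string "/admin".
        "Statement": {"ByteMatchStatement": {
            "SearchString": b"/admin",
            "FieldToMatch": {"UriPath": {}},
            "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
            "PositionalConstraint": "CONTAINS",
        }},
        "Action": {"Block": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "BlockAdminPath"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "ExampleWebAcl"},
)
```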
With AWS WAF you pay only for what you use.
AWS WAF pricing is based on how many rules you deploy and how many web requests your web application receives.
There are no upfront commitments.
AWS WAF is tightly integrated with the Amazon CloudFront and Application Load Balancer (ALB) services.
When you use AWS WAF on Amazon CloudFront, rules run in all AWS Edge Locations, located around the world close to end users.
This means security doesn’t come at the expense of performance.
Blocked requests are stopped before they reach your web servers.
When you use AWS WAF on an Application Load Balancer, your rules run in the Region and can be used to protect internet-facing as well as internal load balancers.
Web Traffic Filtering
AWS WAF lets you create rules to filter web traffic based on conditions that include IP addresses, HTTP headers and body, or custom URIs.
This gives you an additional layer of protection from web attacks that attempt to exploit vulnerabilities in custom or third-party web applications.
In addition, AWS WAF makes it easy to create rules that block common web exploits like SQL injection and cross site scripting.
AWS WAF allows you to create a centralized set of rules that you can deploy across multiple websites.
This means that in an environment with many websites and web applications you can create a single set of rules that you can reuse across applications rather than recreating that rule on every application you want to protect.
Full feature API
AWS WAF can be completely administered via APIs.
This provides organizations with the ability to create and maintain rules automatically and incorporate them into the development and design process.
For example, a developer who has detailed knowledge of the web application could create a security rule as part of the deployment process.
This capability to incorporate security into your development process avoids the need for complex handoffs between application and security teams to make sure rules are kept up to date.
AWS WAF can also be deployed and provisioned automatically with AWS CloudFormation sample templates that allow you to describe all security rules you would like to deploy for your web applications delivered by Amazon CloudFront.
AWS WAF is integrated with Amazon CloudFront, which supports custom origins outside of AWS – this means you can protect web sites not hosted in AWS.
Support for IPv6 allows AWS WAF to inspect HTTP/S requests coming from both IPv6 and IPv4 addresses.
Real-time visibility
AWS WAF provides real-time metrics and captures raw requests that include details about IP addresses, geo locations, URIs, and User-Agent and Referer headers.
AWS WAF is fully integrated with Amazon CloudWatch, making it easy to set up custom alarms when thresholds are exceeded or attacks occur.
This information provides valuable intelligence that can be used to create new rules to better protect applications.
AWS Shield
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS.
AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection.
There are two tiers of AWS Shield – Standard and Advanced.
AWS Shield Standard
All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge.
AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target web sites or applications.
When using AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks.
AWS Shield Advanced
Provides higher levels of protection against attacks targeting applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator and Amazon Route 53 resources.
In addition to the network and transport layer protections that come with Standard, AWS Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF, a web application firewall.
AWS Shield Advanced also gives you 24×7 access to the AWS DDoS Response Team (DRT) and protection against DDoS related spikes in your Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator and Amazon Route 53 charges.
AWS Shield Advanced is available globally on all Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 edge locations.
Origin servers can be Amazon S3, Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), or a custom server outside of AWS.
AWS Shield Advanced includes DDoS cost protection, a safeguard from scaling charges because of a DDoS attack that causes usage spikes on protected Amazon EC2, Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, or Amazon Route 53.
If any of the AWS Shield Advanced protected resources scale up in response to a DDoS attack, you can request credits via the regular AWS Support channel.
Source: https://digitalcloud.training/aws-waf-shield/
AWS Simple Workflow vs AWS Step Function vs Apache Airflow
There are a number of different services and products on the market which support building logic and processes within your application flow. While these services have largely similar pricing, there are different use cases for each service.
AWS Simple Workflow Service (SWF), AWS Step Functions and Apache Airflow all seem very similar, and at times it may seem difficult to distinguish each service. This article highlights the similarities and differences, benefits, drawbacks, and use cases of these services that see a growing demand.
What is AWS Simple Workflow Service?
The AWS Simple Workflow Service (SWF) allows you to coordinate work between distributed applications.
A task is an invocation of a logical step in an Amazon SWF application. Amazon SWF interacts with workers which are programs that retrieve, process, and return tasks.
As part of the coordination of tasks, execution dependencies, scheduling, and concurrency are managed accordingly.
What are AWS Step Functions?
AWS Step Functions enables you to coordinate distributed applications and microservices through visual workflows.
Your workflow is modelled as a state machine that describes the steps, their relationships, and their inputs and outputs. A state machine contains a number of states, each of which represents an individual step in the workflow diagram.
The states in your workflow can perform work, make choices, pass parameters, initiate parallel execution, manage timeouts, or terminate your workflow.
What is Apache Airflow?
Firstly, Apache Airflow is a third party tool – and is not an AWS Service. Apache Airflow is an open-source workflow management platform for data engineering pipelines.
This powerful and widely-used open-source workflow management system (WMS) allows programmatic creation, scheduling, orchestration, and monitoring of data pipelines and workflows.
Using Airflow, you can author workflows as Directed Acyclic Graphs (DAGs) of tasks, and Apache Airflow can integrate with many AWS and non-AWS services such as: Amazon Glacier, Amazon CloudWatch Logs and Google Cloud Secret Manager.
Benefits and Drawbacks
Let’s have a closer look at the benefits and drawbacks of each service.
AWS Simple Workflow Service pros and cons:
AWS Step Functions pros and cons:
Apache Airflow pros and cons:
Use Cases
Here’s an overview of some use cases of each service.
Choose AWS Simple Workflow Service if you are building:
- Order management systems
- Multi-stage message processing systems
- Billing management systems
- Video encoding systems
- Image conversion systems
Choose AWS Step Functions for:
- Microservice Orchestration
- Security and IT Automation
- Data Processing and ETL Orchestration
- Media processing
Choose Apache Airflow for:
- ETL pipelines that extract data from multiple sources, and run Spark jobs or other data transformations
- Machine learning model training
- Automated generation of reports
- Backups and other DevOps tasks
Conclusion
Each of the services discussed has unique use cases and deployment considerations. It is always necessary to fully determine your solution requirements before you make a decision as to which service best fits your needs.
Source: https://www.linkedin.com/pulse/aws-simple-workflow-vs-step-functions-apache-airflow-neal-davis/
For further reading, visit: https://digitalcloud.training/aws-application-integration-services/
What does AWS mean by cost optimization?
There are many things that AWS actively tries to help you with, and cost optimization is one of them. Simply defined, cost optimization comes down to reducing your cloud spend in specific areas without impacting the effectiveness of your architecture and how it functions. Cost optimization is one of the pillars of the Well-Architected Framework, and we can use it to move towards a more streamlined, cost-efficient workload.
AWS Well-Architected Framework enables cloud architects to build fast, reliable, and secure infrastructures for a wide variety of workloads and applications. It is built around six pillars:
- Operational excellence
- Security
- Reliability
- Performance efficiency
- Cost optimization
- Sustainability
The Well-Architected Framework provides customers and partners with a consistent approach for evaluating architectures and implementing scalable designs on AWS. It is applicable for use whether you are a burgeoning start-up or an enterprise corporation using the AWS Cloud.
In this article however, we are going to focus on exactly what is cost optimization, explore some key principles of how it is defined and demonstrate some use cases as to how it could help you when architecting your own AWS Solutions.
What is Cost Optimization?
Besides being one of the pillars on the Well Architected framework, Cost Optimization is a broad, yet simple term and is defined by AWS as follows:
“The Cost Optimization pillar includes the ability to run systems to deliver business value at the lowest price point.”
It provides a comprehensive overview of the general design principles, best practices, and questions related to cost optimization. Once understood, it can have a massive impact on how you are launching your various applications on AWS.
As well as a definition of what the Cost Optimization is there are some key design principles which we’ll explore in order to make sure we are on the right track with enhancing our workloads:
Implement Cloud Financial Management
Cloud financial management is essential for achieving financial success and maximizing the value of your cloud investment. As your organization moves into this new era of technology and usage management, you need to devote resources and time to building capability in this area. As with security or operational excellence, becoming a cost-efficient organization means building capability through knowledge, programs, resources, and processes.
Adopt a consumption model
If you want to save money on computing resources, it is important to pay only for what you require, and to increase or decrease usage based on the needs of the business, without relying on elaborate forecasting.
Measure overall efficiency
It is important to measure the business output of a workload as well as the costs associated with delivering it. You can use this measure to understand the gains you make by increasing output or reducing costs. Efficiency isn't only about finances: it also helps prevent individual servers from becoming under- or over-utilized, which helps from a performance standpoint as well.
Stop spending money on unnecessary activities
When it comes to data center operations, AWS handles everything from racking and stacking servers to powering them. By using managed services, you can also remove the operational burden of managing operating systems and applications. The advantage of this approach is that you can focus on your customers and your business projects rather than on IT infrastructure.
Analyze and attribute expenditure
There is no doubt that the cloud allows for easy identification of the usage and cost of systems, which in turn allows for transparent attribution of IT costs to individual workload owners. Achieving this helps workload owners to measure the return on investment (ROI) of their investment as well as to reduce their costs and optimize their resources.
Now that we fully understand what we mean when we say ‘Cost Optimization on AWS’, we are going to show some ways that we can use cost optimization principles in order to improve the overall financial performance of our workloads on Amazon S3, and Amazon EC2:
Cost optimization on S3
Amazon S3 is an object-storage service which provides 11 Nines of Durability, and near infinite, low-cost object storage. There are a number of ways to even further optimize your costs, and ensure you are adhering to the Cost Optimization pillar of the Well Architected Framework.
S3 Intelligent Tiering
Amazon S3 Intelligent-Tiering is a storage class intended to optimize storage costs by automatically moving data to the most cost-effective access tier as usage patterns change over time. For a small monthly monitoring fee, Intelligent-Tiering watches access patterns and automatically moves objects that have not been accessed to lower-cost access tiers, while still providing low-latency, high-throughput access. The storage class can also automatically archive data that can be accessed asynchronously.
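As a small illustration, the boto3 sketch below adds a lifecycle rule that transitions objects under a hypothetical "logs/" prefix into S3 Intelligent-Tiering 30 days after creation. The bucket name, prefix and day count are assumptions.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name; the rule moves objects under 'logs/' into
# S3 Intelligent-Tiering 30 days after they are created.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={"Rules": [{
        "ID": "move-logs-to-intelligent-tiering",
        "Filter": {"Prefix": "logs/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}],
    }]},
)
```

New objects can alternatively be uploaded straight into the storage class by setting StorageClass="INTELLIGENT_TIERING" on the upload call.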
S3 Storage Class Analysis
Amazon S3 Storage Class Analysis analyses storage access patterns to help you decide when to transition the right data to the right storage class. This is a relatively new Amazon S3 analytics feature that monitors your data access patterns and tells you when data should be moved to a lower-cost storage class based on the frequency with which it is accessed.
Cost optimization on EC2
Amazon EC2 is simply a Virtual Machine in the cloud that can be scaled up, scaled down dynamically as your application grows. There are a number of ways you can optimize your spend on EC2 depending on your use case, whilst still delivering excellent performance.
Savings Plans
In exchange for a commitment to a specific instance family within an AWS Region (for example, C7 in us-west-2), EC2 Instance Savings Plans offer savings of up to 72 percent off On-Demand prices.
EC2 Instance Savings Plans allow you to switch between instance sizes within the family (for example, from c5.xlarge to c5.2xlarge) or operating systems (such as from Windows to Linux), or change from Dedicated to Default tenancy, while continuing to receive the discounted rate.
If you are using large amounts of particular EC2 instances, buying a Savings Plan allows you to flexibly save money on your compute spend.
Right-sizing EC2 Instances
Right-sizing is about matching instance types and sizes to your workload performance and capacity needs at the lowest possible cost. Furthermore, it involves analyzing deployed instances and identifying opportunities to eliminate or downsize them without compromising capacity or other requirements.
The Amazon EC2 service offers a variety of instance types tailored to fit the needs of different users. There are a number of instance types that offer different combinations of resources such as CPU, memory, storage, and networking, so that you can choose the right resource mix for your application.
You can use Trusted Advisor to give recommendations on which particular EC2 instances are running at low utilization. This takes a lot of undifferentiated heavy lifting out of your hands as AWS tell you the exact instances you need to re-size.
Using Spot Capacity where possible
Spot capacity is spare capacity within AWS data centers, which AWS offers to you at a large discount (up to 90%). The downside is that when AWS needs the capacity back, you are given a two-minute warning, after which your instances are terminated.
Applications that require constant availability are not well suited to Spot Instances. Spot is recommended for stateless, fault-tolerant, and flexible applications: big data, containerized workloads, continuous integration and delivery (CI/CD), stateless web servers, high performance computing (HPC), rendering, and anything else that can tolerate interruption and needs low cost.
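A minimal sketch of launching a Spot Instance with boto3 is shown below; the AMI ID and instance type are placeholders for illustration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# AMI ID and instance type are placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            # When EC2 reclaims the capacity you get a two-minute warning,
            # then the instance is terminated.
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```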
There are many considerations when it comes to optimizing cost on AWS, and the Cost Optimization pillar provides us all of the resources we need to be fully enabled in our AWS journey.
Source: This article originally appeared on: https://digitalcloud.training/what-does-aws-mean-by-cost-optimization/
AWS Amplify
AWS Amplify is a set of tools and services that enables mobile and front-end web developers to build secure, scalable full stack applications powered by AWS. Amplify includes an open-source framework with use-case-centric libraries and a powerful toolchain to create and add cloud-based features to your application, and a web-hosting service to deploy static web applications.
AWS SAM
AWS SAM is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. With just a few lines per resource, you can define the application you want and model it using YAML. During deployment, AWS SAM transforms and expands the AWS SAM syntax into AWS CloudFormation syntax, enabling you to build serverless applications faster.
Amazon Cognito
Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0.
Vue Javascript Framework
Vue JavaScript framework is a progressive framework for building user interfaces. Unlike other monolithic frameworks, Vue is designed to be incrementally adoptable. The core library focuses on the view layer only and is easy to pick up and integrate with other libraries or existing projects. Vue is also perfectly capable of powering sophisticated single-page applications when used in combination with modern tooling and supporting libraries.
AWS Cloud9
AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal. AWS Cloud9 makes it easy to write, run, and debug serverless applications. It pre-configures the development environment with all the SDKs, libraries, and plugins needed for serverless development.
Swagger API
Swagger API is an open-source software framework backed by a large ecosystem of tools that help developers design, build, document, and consume RESTful web services. Swagger also allows you to understand and test your backend API specifically.
Amazon DynamoDB
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-Region, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second.
Amazon EventBridge
Amazon EventBridge makes it easy to build event-driven applications because it takes care of event ingestion, delivery, security, authorization, and error handling for you. To achieve the promises of serverless technologies with event-driven architecture, such as being able to individually scale, operate, and evolve each service, the communication between the services must happen in a loosely coupled and reliable environment. Event-driven architecture is a fundamental approach for integrating independent systems or building up a set of loosely coupled systems that can operate, scale, and evolve independently and flexibly. In this lab, you use EventBridge to address the contest use case.
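A minimal sketch of publishing a custom event to EventBridge with boto3 follows; the bus name, source, detail-type and payload are hypothetical values for illustration.

```python
import boto3
import json

events = boto3.client("events")

# Publish a custom event that any matching rule on the bus can route onward.
events.put_events(Entries=[{
    "EventBusName": "default",
    "Source": "contest.app",
    "DetailType": "VoteSubmitted",
    "Detail": json.dumps({"contestId": "123", "userId": "abc"}),
}])
```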
Amazon DynamoDB Streams
Amazon DynamoDB Streams is an ordered flow of information about changes to items in a DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.
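When a Lambda function is subscribed to a stream, each invocation receives a batch of change records. A minimal handler sketch is shown below; the printed fields assume the stream view type includes new images.

```python
def lambda_handler(event, context):
    # Each record describes one change (INSERT, MODIFY or REMOVE) to an item.
    for record in event["Records"]:
        event_name = record["eventName"]
        keys = record["dynamodb"]["Keys"]
        # NewImage is present for INSERT/MODIFY when the stream captures new images.
        new_image = record["dynamodb"].get("NewImage")
        print(event_name, keys, new_image)
```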
AWS Step Functions
AWS Step Functions is a serverless function orchestrator that makes it easy to sequence Lambda functions and multiple AWS services into business-critical applications. Through its visual interface, you can create and run a series of checkpointed and event-driven workflows that maintain the application state. The output of one step acts as input to the next, and each step in your application runs in order, as defined by your business logic. Orchestrating a series of individual serverless applications, managing retries, and debugging failures can be challenging, and as your distributed applications become more complex, the complexity of managing them also grows. With its built-in operational controls, Step Functions manages sequencing, error handling, retry logic, and state, removing a significant operational burden from your team.
When your processing requires a series of steps, use Step Functions to build a state machine to orchestrate the workflow. This lets you keep your Lambda functions focused on business logic.
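Here is a minimal sketch of registering a two-step state machine with boto3; the workflow name, Lambda ARNs and IAM role are placeholders for illustration.

```python
import boto3
import json

sfn = boto3.client("stepfunctions")

# A two-step workflow: validate an order, then charge it. ARNs are placeholders.
definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:ValidateOrder",
            "Next": "ChargeOrder",
            # Built-in retry handling, so the Lambda code stays focused on business logic.
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
        },
        "ChargeOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:ChargeOrder",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="order-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsExecutionRole",
)
```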
Returning to the baker in our analogy, when an order to make a pie comes in, the order is actually a series of related but distinct steps. Some steps have to be done first or in sequence, and some can be done in parallel. Some take longer than others. Someone with expertise in each step performs that step. To make things go smoothly and let the experts stick to their expertise, you need a way to manage the flow of steps and keep whoever needs to know informed of the status.
- AWS Step Functions: https://aws.amazon.com/step-functions/
- AWS Step Functions Resources with links to documentation, whitepapers, tutorials, and webinars
- AWS Step Functions Developer Guide
- AWS Step Functions developer guide: States
- AWS Step Functions developer guide: Error Handling in Step Functions
- AWS Step Functions developer guide: Intrinsic Functions
- AWS Step Functions developer guide: Service Integrations
- States Language Specification
- AWS Lambda developer guide: Orchestration Examples with AWS Step Functions
Amazon Simple Notification Service (Amazon SNS)
Amazon Simple Notification Service (Amazon SNS) is a fully managed messaging service for both system-to-system and app-to-person (A2P) communication. The service enables you to communicate between systems through publish/subscribe (pub/sub) patterns that enable messaging between decoupled microservice applications or to communicate directly to users via SMS, mobile push, and email. The system-to-system pub/sub functionality provides topics for high-throughput, push-based, many-to-many messaging. Using Amazon SNS topics, your publisher systems can fan out messages to a large number of subscriber systems or customer endpoints including Amazon Simple Queue Service (Amazon SQS) queues, Lambda functions, and HTTP/S, for parallel processing. The A2P messaging functionality enables you to send messages to users at scale using either a pub/sub pattern or direct-publish messages using a single API.
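A minimal sketch of the pub/sub fan-out pattern with boto3 follows; the topic ARN and message fields are hypothetical. Every SQS queue, Lambda function or HTTPS endpoint subscribed to the topic receives its own copy of the message.

```python
import boto3
import json

sns = boto3.client("sns")

# Publish once; SNS fans the message out to all subscribers of the topic.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:111122223333:order-events",
    Message=json.dumps({"orderId": "123", "status": "PLACED"}),
    Subject="OrderPlaced",
)
```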
Three pillars of observability
Observability extends traditional monitoring with approaches that address the kinds of questions you want to answer about your applications. Business metrics are sometimes an afterthought, only coming into play when someone in the business asks the question, and you have to figure out how to get the answers from the data you have. If you build in these needs when you’re building the application, you’ll have much more visibility into what’s happening within your application.
Logs, metrics, and distributed tracing are often known as the three pillars of observability. These are powerful tools that, if well understood, can unlock the ability to build better systems.
Logs provide valuable insight into your application's health. Event logs are especially helpful in uncovering the emergent and unpredictable behaviors that components of a distributed system exhibit. Logs come in three forms: plaintext, structured, and binary.
Metrics are a numeric representation of data measured over intervals of time about the performance of your systems. You can configure and receive automatic alerts when certain metrics are met.
Tracing can provide visibility into both the path that a request traverses and the structure of a request. An event-driven or microservices architecture consists of many different distributed parts that must be monitored. Imagine a complex system consisting of multiple microservices, and an error occurs in one of the services in the call chain. Even if every microservice is logging properly and logs are consolidated in a central system, it can be difficult to find all relevant log messages.
AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application and shows a map of your application’s underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.
Amazon CloudWatch Logs Insights is a fully managed service that is designed to work at cloud scale with no setup or maintenance required. The service analyzes massive logs in seconds and gives you fast, interactive queries and visualizations. CloudWatch Logs Insights can handle any log format and autodiscovers fields from JSON logs.
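As a small sketch of running a Logs Insights query from code, the boto3 calls below start a query against a hypothetical Lambda log group and poll for the results; the log group name and query string are placeholders.

```python
import boto3
import time

logs = boto3.client("logs")

# Query the most recent log lines from the last hour.
query = logs.start_query(
    logGroupName="/aws/lambda/my-function",
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString="fields @timestamp, @message | sort @timestamp desc | limit 20",
)

# Poll until the query finishes, then print the matching rows.
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in results["results"]:
    print({f["field"]: f["value"] for f in row})
```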
Amazon CloudWatch ServiceLens is a feature that enables you to visualize and analyze the health, performance, and availability of your applications in a single place. CloudWatch ServiceLens ties together CloudWatch metrics and logs, as well as traces from X-Ray, to give you a complete view of your applications and their dependencies. This enables you to quickly pinpoint performance bottlenecks, isolate root causes of application issues, and determine impacted users.
Characteristics of modern applications that challenge traditional approaches
AWS services that address the three pillars of observability
CloudWatch Logs
- Amazon CloudWatch: https://aws.amazon.com/cloudwatch/
- Amazon CloudWatch Logs User Guide
- Amazon CloudWatch Logs user guide: Analyzing Log Data with CloudWatch Logs Insights
- AWS Lambda developer guide: Accessing Amazon CloudWatch Logs for AWS Lambda
AWS X-Ray
- AWS X-Ray: https://aws.amazon.com/xray/
- AWS X-Ray Resources with links to documentation, webinars, and blog posts
- AWS X-Ray Developer Guide
- AWS X-Ray developer guide: Integrating AWS X-Ray with Other AWS services
- AWS Lambda developer guide: Using AWS Lambda with AWS X-Ray
- Amazon API Gateway developer guide: Tracing User Requests to REST APIs Using X-Ray
CloudWatch metrics
Three types of API Gateway authorizers for HTTP APIs
- JWT authorizer
- Amazon Cognito user pools
- IAM permissions
Three types of JSON Web Tokens (JWTs) used by Amazon Cognito
- ID token
- Access token
- Refresh token
Three things Lambda does for you when polling a stream
- Polls the stream or queue on your behalf
- Batches records together
- Invokes your function synchronously with each batch
What is a Security Group?
In the world of Cloud Computing, Security is always job zero. This means that we design everything with Security in mind – at every single layer of our application! While you may have heard about AWS Security Groups – have you ever stopped to think about what a security group is, and what it actually does?
If, for example, you are launching a web server to host a brand new website on AWS, you will have to allow certain protocols to initiate communication with the server so users can interact with your website, and block others. If you give everyone access to your server on any protocol, you may leave sensitive information easily reachable by anyone on the internet, ruining your security posture.
The balance of allowing this kind of access is done using a specific technology in AWS, and today we are going to explore how Security Groups work, and what problems they help you solve.
What is a Security Group?
Security groups control traffic reaching and leaving the resources they are associated with according to the security group rules set by each group. After you associate a security group with an EC2 instance, it controls the instance’s inbound and outbound traffic.
Although VPCs come with a default security group when you create them, additional security groups can be created for any VPC within your account.
Security groups can only be associated with resources in the VPC for which they were created, and do not apply to resources in different VPCs.
Each security group has rules for controlling traffic based on protocols and ports. There are separate rules for inbound and outbound traffic.
Let’s have a look at what a security group looks like.
As stated earlier, Security Groups control inbound and outbound traffic in relation to resources placed in these security groups. Below are some example rules that you would see routinely when interacting with security groups for a Web Server.
Inbound: allow HTTP (TCP port 80) and HTTPS (TCP port 443) from 0.0.0.0/0 so users can reach the website.
Outbound: all traffic allowed (the default), so the server can respond to clients and reach the internet for updates.
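A minimal boto3 sketch of creating such a web server security group is shown below; the VPC ID and group name are placeholders for illustration.

```python
import boto3

ec2 = boto3.client("ec2")

# VPC ID is a placeholder.
sg = ec2.create_security_group(
    GroupName="web-server-sg",
    Description="Allow HTTP/HTTPS from the internet",
    VpcId="vpc-0123456789abcdef0",
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTP"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS"}]},
    ],
)
# Security groups are stateful and allow all outbound traffic by default,
# so no extra egress rule is needed for responses.
```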
Security Groups can also be used for the Relational Database Service, and for Amazon Elasticache to control traffic in a similar way.
Security Group Quotas
There is a limit on the number of Security Groups you can have within a Region, and a limit on the number of outbound and inbound rules you can have per security group.
For the number of Security Groups within a Region, you can have 2500 Security Groups per Region by default. This quota applies to individual AWS account VPCs and shared VPCs, and is adjustable through launching a support ticket with AWS Support.
Regarding the number of inbound and outbound rules per Security Group, you can have 60 inbound and 60 outbound rules per security group (making a total of 120 rules). The quota is enforced separately for IPv4 and IPv6 rules; for example, a security group can have 60 inbound IPv4 rules and 60 inbound IPv6 rules.
This quota can be increased for both inbound and outbound rules, but the quota for rules per security group multiplied by the quota for security groups per network interface cannot exceed 1,000.
Best Practices with Security Groups
When we are inevitably using Security Groups as part of our infrastructure, we can use some best practices to ensure that we are aligning ourselves with the highest security standards possible.
- Ensure your Security Groups do not have a large range of ports open
When large port ranges are open, instances are vulnerable to unwanted attacks, and it becomes very difficult to trace vulnerabilities. Web servers may only require ports 80 and 443 to be open, and nothing more.
- Create new security groups and restrict traffic appropriately
If you use the default AWS security group for your active resources, you unnecessarily expose your instances and applications and weaken your security posture.
- Where possible, restrict access to required IP address(es) and by port, even internally within your organization
If you allow access from anywhere (0.0.0.0/0 or ::/0) to your resources, you are asking for trouble. Where possible, restrict access to your resources to an individual IP address or range of addresses. This prevents bad actors from reaching your instances and strengthens your security posture.
- Chain Security Groups together
When chaining security groups, the inbound and outbound rules are set up so that traffic can only flow from the top tier to the bottom tier and back up again. The security groups act as firewalls so that a security breach in one tier does not automatically give the compromised client subnet-wide access to all resources. A minimal sketch of this pattern follows.
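In the sketch below, a hypothetical application-tier security group only accepts traffic from members of the web-tier security group, rather than from a whole CIDR range; the group IDs and port are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs: WEB_SG fronts the web tier, APP_SG the application tier.
WEB_SG = "sg-0aaaaaaaaaaaaaaaa"
APP_SG = "sg-0bbbbbbbbbbbbbbbb"

# The app tier only accepts traffic on port 8080 that comes from members of
# the web tier's security group, not from the whole subnet.
ec2.authorize_security_group_ingress(
    GroupId=APP_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": WEB_SG, "Description": "From web tier only"}],
    }],
)
```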
By Neal Davis
Download AWS Solution Architect Associate Exam SAA-C03 Prep Quiz App for:
All Platforms (PWA) – Android – iOS –
AWS Front-End Web and Mobile
The AWS Front-End Web and Mobile services support development workflows for native iOS/Android, React Native, and JavaScript developers. You can develop apps and deliver, test, and monitor them using managed AWS services.
AWS AppSync Features

AWS AppSync is a fully managed service that makes it easy to develop GraphQL APIs.
Securely connects to data sources like Amazon DynamoDB, AWS Lambda, and more.
Add caches to improve performance, subscriptions to support real-time updates, and client-side data stores that keep offline clients in sync.
AWS AppSync automatically scales your GraphQL API execution engine up and down to meet API request volumes.
GraphQL
AWS AppSync uses GraphQL, a data language that enables client apps to fetch, change and subscribe to data from servers.
In a GraphQL query, the client specifies how the data is to be structured when it is returned by the server.
This makes it possible for the client to query only for the data it needs, in the format that it needs it in.
GraphQL also includes a feature called “introspection” which lets new developers on a project discover the data available without requiring knowledge of the backend.
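As a rough sketch of what this looks like from a client, the snippet below posts a GraphQL query to a hypothetical AppSync endpoint protected by an API key; the endpoint URL, key, and the listPosts field are illustrative assumptions, not part of any real schema:

```python
import requests

# Hypothetical AppSync endpoint, API key, and schema field
APPSYNC_URL = "https://example123.appsync-api.us-east-1.amazonaws.com/graphql"
API_KEY = "da2-examplekey"

# The client asks only for the fields it needs (id and title),
# even if the backend stores many more attributes per item.
query = """
query ListRecentPosts {
  listPosts(limit: 5) {
    items { id title }
  }
}
"""

resp = requests.post(
    APPSYNC_URL,
    json={"query": query},
    headers={"x-api-key": API_KEY},
    timeout=10,
)
print(resp.json())
```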
Real-time data access and updates
AWS AppSync lets you specify which portions of your data should be available in a real-time manner using GraphQL Subscriptions.
GraphQL Subscriptions are simple statements in the application code that tell the service what data should be updated in real-time.
Offline data synchronization
The Amplify DataStore provides a queryable on-device DataStore for web, mobile and IoT developers.
When combined with AWS AppSync the DataStore can leverage advanced versioning, conflict detection and resolution in the cloud.
This allows automatic merging of data from different clients as well as providing data consistency and integrity.
Data querying, filtering, and search in apps
AWS AppSync gives client applications the ability to specify data requirements with GraphQL so that only the needed data is fetched, allowing for both server and client filtering.
AWS AppSync supports AWS Lambda, Amazon DynamoDB, and Amazon Elasticsearch.
GraphQL operations can be simple lookups, complex queries & mappings, full text searches, fuzzy/keyword searches, or geo lookups.
Server-Side Caching
AWS AppSync’s server-side data caching capabilities reduce the need to directly access data sources.
Data is delivered at low latency using high speed in-memory managed caches.
AppSync is fully managed and eliminates the operational overhead of managing cache clusters.
Provides the flexibility to selectively cache data fields and operations defined in the GraphQL schema with customizable expiration.
Security and Access Control
AWS AppSync allows several levels of data access and authorization depending on the needs of an application.
Simple access can be protected by a key.
AWS IAM roles can be used for more restrictive access control.
AWS AppSync also integrates with:
- Amazon Cognito User Pools for email and password functionality
- Social providers (Facebook, Google+, and Login with Amazon).
- Enterprise federation with SAML.
Customers can use the Group functionality for logical organization of users and roles as well as OAuth features for application access.
Custom Domain Names
AWS AppSync enables customers to use custom domain names with their AWS AppSync API to access their GraphQL endpoint and real-time endpoint.
Used with AWS Certificate Manager (ACM) certificates.
A custom domain name can be associated with any available AppSync API in your account.
When AppSync receives a request on the custom domain endpoint, it routes it to the associated API for handling.
Source: https://digitalcloud.training/aws-front-end-web-and-mobile/ (Neal Davis)
Serverless Application Security
Cloud security best practices are serverless best practices. These include applying the principle of least privilege, securing data in transit and at rest, writing code that is security-aware, and monitoring and auditing actively.
Apply a defense in depth approach to your serverless application security.
OWASP Top 10 Security Threats:
- Injection (code)
- Broken authentication (identity and access)
- Sensitive data exposure (data)
- XML external entities (XXE) (code)
- Broken access control (identity and access)
- Security misconfiguration (logging and monitoring)
- Cross-site scripting (XSS) (code)
- Insecure deserialization (code)
- Using components with known vulnerabilities (code and infrastructure)
- Insufficient logging and monitoring (logging and monitoring)
Security design principles in serverless applications:
- Apply security at all layers
- Implement strong identity and access controls
- Protect data in transit and at rest
- Protect against attacks
- Minimize attack surface area
- Mitigate distributed denial of service (DDoS) attack impacts
- Implement inspection and protection
- Enable auditing and traceability
- Automate security best practices
Three general approaches to protecting against attacks
Handling Scale in Serverless Applications
Thinking serverless at scale means knowing the quotas of the services you are using and focusing on scaling trade-offs and optimizations among those services to find the balance that makes the most sense for your workload.
As your solutions evolve and your usage patterns become clearer, you should continue to find ways to optimize performance and costs and make the trade-offs that best support the workload you need rather than trying to scale infinitely on all components. Don’t expect to get it perfect on the first deployment. Build in the kind of monitoring and observability that will help you understand what’s happening, and be prepared to tweak things that make sense for the access patterns that happen in production.
Lambda Power Tuning helps you understand the optimal memory to allocate to functions.
You can specify whether you want to optimize on cost, performance, or a balance of the two.
Under the hood, a Step Functions state machine invokes the function you’ve specified at different memory settings from 128 MB to 3 GB and captures both duration and cost values.
Let’s take a look at Lambda Power Tuning in action with a function I’ve written.
The function I have determines the hash value of a lot of numbers. Computationally, it’s expensive. I’d like to know whether I should be allocating 1 GB, 1.5 GB, or 3 GB of RAM to it.
I can specify the memory values to test in the file deploy.sh. In my example, I’m only using 1 GB, 1.5 GB, and 3 GB. The state machine takes the following parameters (you define these in sample-execution-input.json):
- Lambda ARN
- Number of invocations for each memory configuration
- Static payload to pass to the Lambda function for each invocation
- Parallel invocation: Whether all invocations should be in parallel or not. Depending on the value, you may experience throttling.
- Strategy: Can be cost, speed, or balanced. Default is cost.
If you specify Cost, it will report the cheapest option regardless of performance. Speed will suggest fastest regardless of cost. Balanced will choose a compromise according to balancedWeight. balancedWeight is a number between 0 and 1. Zero is speed strategy. One is cost strategy.
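For reference, a minimal sketch of what that execution input might look like, using the field names documented by the aws-lambda-power-tuning project; the function and state machine ARNs are hypothetical placeholders:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Field names follow the aws-lambda-power-tuning input format;
# both ARNs below are hypothetical placeholders.
execution_input = {
    "lambdaARN": "arn:aws:lambda:us-east-1:123456789012:function:hash-numbers",
    "powerValues": [1024, 1536, 3008],  # ~1 GB, 1.5 GB, ~3 GB
    "num": 20,                          # invocations per memory setting
    "payload": {},                      # static payload for each invocation
    "parallelInvocation": True,         # may cause throttling at high num
    "strategy": "speed",                # or "cost" / "balanced"
}

sfn.start_execution(
    stateMachineArn=("arn:aws:states:us-east-1:123456789012:"
                     "stateMachine:powerTuningStateMachine"),
    input=json.dumps(execution_input),
)
```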
Let’s take a look at the inputs I’ve specified and find out how much memory we should allocate.
In this configuration, I’m specifying that I want this function to execute as quickly as possible.
Results.power shows that 3 GB provides the best performance.
Let’s update my configuration to use the default strategy of cost and run again.
Results.power shows that 1 GB is the best option for price.
Use this tool to help you evaluate how to configure your Lambda functions.
How API Gateway responds to a burst of requests
Automating the Deployment Pipeline
Automation is especially important with serverless applications. Lots of distributed services that can be independently deployed mean more, smaller deployment pipelines that each build and test a service or set of services. With an automated pipeline, you can incorporate better detection of anomalies and more testing, halt your pipeline at a certain step, and automatically roll back a change if a deployment were to fail or if an alarm threshold is triggered.
Your pipeline may be a mix and match of AWS or third-party components that suit your needs, but the concepts apply generally to whatever tools your organization uses for each of these steps in the deployment tool chain. This module will reference the AWS tools that you can use in each step in your CI/CD pipeline.
CI/CD best practices
Configure testing using safe deployments in AWS SAM (a minimal template sketch follows the lists below):
- Declare an AutoPublishAlias
- Set safe deployment type
- Set a list of up to 10 alarms that will trigger a rollback
- Configure a Lambda function to run pre- and post-deployment tests
Use traffic shifting with pre- and post-deployment hooks
- PreTraffic: When the application is deployed, the PreTraffic Lambda function runs to determine if things should continue. If that function completes successfully (i.e., returns a 200 status code), the deployment continues. If the function does not complete successfully, the deployment rolls back.
- PostTraffic: If the traffic successfully completes the traffic shifting progression to 100 percent of traffic to the new alias, the PostTraffic Lambda function runs. If it returns a 200 status code, the deployment is complete. If the PostTraffic function is not successful, the deployment is rolled back.
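Pulled together, a minimal AWS SAM template sketch of these settings might look like the following; the resource names, alarm, and hook functions are hypothetical placeholders, and Canary10Percent5Minutes is just one of the available traffic-shifting types:

```yaml
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: python3.9
    AutoPublishAlias: live            # publish a new version behind an alias
    DeploymentPreference:
      Type: Canary10Percent5Minutes   # safe (gradual) deployment type
      Alarms:
        - !Ref MyFunctionErrorsAlarm  # rollback trigger (up to 10 alarms)
      Hooks:
        PreTraffic: !Ref PreTrafficHookFunction
        PostTraffic: !Ref PostTrafficHookFunction
```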
Use separate account per environment
It’s a best practice with serverless to use separate accounts for each stage or environment in your deployment. Each developer has an account, and the staging and deployment environments are each in their own accounts.
This approach limits the blast radius of issues that occur (for example, unexpectedly high concurrency) and allows you to secure each account with IAM credentials more effectively with less complexity in your IAM policies within a given account. It also makes it less complex to differentiate which resources are associated with each environment.
Because of the way costs are calculated with serverless, spinning up additional environments doesn’t add much to your cost. Other than where you are provisioning concurrency or database capacity, the cost of running tests in three environments is not much different from running them in one environment, because it’s mostly about the total number of transactions that occur, not about having three sets of infrastructure.
Use one AWS SAM template with parameters across environments
As noted earlier, AWS SAM supports CloudFormation syntax so that your AWS SAM template can be the same for each deployment environment with dynamic data for the environment provided when the stack is created or updated. This helps you ensure that you have parity between all testing environments and aren’t surprised by configurations or resources that are different or missing from one environment to the next.
AWS SAM lets you build out multiple environments using the same template, even across accounts:
- Use parameters and mappings when possible to build dynamic templates based on user inputs and pseudo parameters, such as AWS::Region
- Use the Globals section to simplify templates
- Use exported stack outputs and Fn::ImportValue to share resource information across stacks
Manage secrets across environments with Parameter Store:
AWS Systems Manager Parameter Store supports encrypted values and is account specific, accessible through AWS SAM templates at deployment, and accessible from code at runtime.
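At runtime, reading such a secret might look like the hedged boto3 sketch below; the parameter name is a hypothetical placeholder, and the caller's IAM role is assumed to have access to the parameter and its KMS key:

```python
import boto3

ssm = boto3.client("ssm")

# Hypothetical SecureString parameter; WithDecryption=True returns the
# plaintext value, decrypted with the KMS key that protects it.
param = ssm.get_parameter(
    Name="/myapp/dev/db-password",
    WithDecryption=True,
)
db_password = param["Parameter"]["Value"]
```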
Testing throughout the pipeline
Another best practice is to test throughout the pipeline. Assuming these steps in a pipeline – build, deploy to test environment, deploy to staging environment, and deploy to production – consider which types of tests belong at each pipeline step and run them before allowing the next step in the deployment to continue.
Automated deployments
- Serverless Developer Tools page
- Tutorial: Deploy an Updated Lambda Function with CodeDeploy and the AWS Serverless Application Model
- Whitepaper: Practicing Continuous Integration and Continuous Delivery on AWS: Accelerating Software Delivery with DevOps
- Quick Start: Serverless CI/CD for the Enterprise on AWS
- AWS re:Invent 2019: CI/CD for Serverless Applications
- AWS CodeDeploy user guide: AppSpec ‘hooks’ section for an AWS Lambda deployment
Deploying serverless applications
- AWS Serverless Application Model Developer Guide, Deploying Serverless Applications Gradually
Serverless Deployment Quiz1:
Which of the following are best practices you should implement into ongoing deployments of your application? (Select THREE.)
A. Test throughout the pipeline
B. Create account-specific AWS SAM templates for each environment
C. Use traffic shifting with pre- and post-deployment hooks
D. Use an AutoPublish alias
E. Use stage variables to manage secrets across environments
Serverless Deployment Quiz2:
You are reviewing the team’s plan for managing the application’s deployment. Which suggestions would you agree with? (Select TWO.)
A. Use IAM to control development and production access within one AWS account to separate development code from production code
B. Use AWS SAM CLI for local development testing
C. Use CloudFormation to write all of the infrastructure as code for deploying the application
D. Use Amplify to deploy the user interface and AWS SAM to deploy the serverless backend
Scaling considerations for serverless applications
The following statements are true:
- Using HTTP APIs and first-class service integrations can reduce end-to-end latency because it lets you connect the API call directly to a service API rather than requiring a Lambda function between API Gateway and the other AWS service.
- Provisioned concurrency may be less expensive than on-demand in some cases. If your provisioned concurrency is used more than 60 percent during a given time period, then it will probably be less expensive to use provisioned concurrency or a combination of on-demand and provisioned concurrency.
- With Amazon SQS as an event source, Lambda will manage concurrency. Lambda will increase concurrency when the queue depth is increasing, and decrease concurrency when errors are being returned.
- You can set a batch window to increase the time before Lambda polls a stream or queue. This lets you reduce costs by avoiding regularly invoking the function with a small number of records if you have a relatively low volume of incoming records.
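As an illustration of that batch window, a hedged boto3 sketch of an SQS event source mapping follows; the queue ARN and function name are hypothetical placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical queue ARN and function name
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:orders-queue",
    FunctionName="process-orders",
    BatchSize=10,                       # records per invocation
    MaximumBatchingWindowInSeconds=30,  # wait up to 30s to fill a batch
)
```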
The following statements are false:
- Setting reserved concurrency on a version: You cannot set reserved concurrency per function version. You set reserved concurrency on the function and can set provisioned concurrency on an alias. It’s important to keep the total provisioned concurrency for active aliases to less than the reserved concurrency for the function.
- Setting the number of shards on a DynamoDB table: You do not directly control the number of shards the table uses. You can directly add shards to a Kinesis Data Stream. With a DynamoDB table, the way you provision read/write capacity and your scaling decisions drive the number of shards. DynamoDB will automatically adjust the number of shards needed based on the way you’ve configured the table and the volume of data.
- Concurrency in synchronous invocations: Lambda will use concurrency equal to the request rate multiplied by function duration. As one function invocation ends, Lambda can reuse its environment rather than spinning up a new one, so function duration plays an important factor in concurrency for synchronous and asynchronous invocations.
- The impact of higher function memory: A higher memory configuration does have a higher price per millisecond, but because duration is also a factor of cost, your function may finish faster at higher memory configurations and that might mean an overall lower cost.
A shorter duration may reduce the concurrency Lambda needs, but depending on the nature of the function, higher memory may not have a measurable impact on duration. You can use tools like Lambda Power Tuning (https://github.com/alexcasalboni/aws-lambda-power-tuning) to find the best balance for your functions.
There is no stopping Amazon Web Services (AWS) from innovating, improving, and ensuring the customer gets the best experience possible as a result. Providing a seamless user experience is a constant commitment for AWS, and their ongoing innovation allows the customer’s applications to be more innovative – creating a better customer experience.
AWS makes managing networking in the cloud one of the easiest parts of the cloud service experience. When managing your infrastructure on premises, you would have had to devote a significant amount of time to understanding how your networking stack works. It is important to note that AWS does not have a magic bullet that will make all issues go away, but they are constantly providing new exciting features that will enhance your ability to scale in the cloud, and the key to this is elasticity.
Elasticity is defined as “The ability to acquire resources as you need them and release resources when you no longer need them” – this is one of the biggest selling points of the cloud. The three networking features we are going to talk about today are all elastic in nature, namely the Elastic Network Interface (ENI), the Elastic Fabric Adapter (EFA), and the Elastic Network Adapter (ENA). Let’s compare and contrast these AWS features to gain a greater understanding of how AWS can help with our managed networking requirements.
AWS ENI (Elastic Network Interface)
You may be wondering what an ENI is in AWS. The AWS ENI (Elastic Network Interface) is a virtual network card that can be attached to any Amazon Elastic Compute Cloud (EC2) instance. The purpose of these devices is to enable network connectivity for your instances. If you have more than one of these devices attached to your instance, it can communicate on multiple subnets – offering a whole host of advantages.
For example, using multiple ENIs per instance allows you to decouple the ENI from the EC2 instance, in turn allowing you far more flexibility to design an elastic network which can adapt to failure and change.
As stated, you can connect several ENIs to the same EC2 instance and attach your single EC2 instance to many different subnets. You could for example have one ENI connected to a public-facing subnet, and another ENI connected to another internal private subnet.
You could also, for example, attach an ENI to a running EC2 instance, or keep it alive after the EC2 instance is terminated.
Finally, an ENI can also be used as a crude form of high availability: attach an ENI to an EC2 instance; if that instance fails, launch another and attach the ENI to the new instance. Traffic flow is only affected for a short period of time.
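A hedged boto3 sketch of that failover pattern; the ENI and instance IDs are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs: a long-lived ENI that carries the service's IP address,
# and a freshly launched replacement instance.
ENI_ID = "eni-0123456789abcdef0"
NEW_INSTANCE_ID = "i-0fedcba9876543210"

# Re-attach the ENI to the replacement instance as a secondary interface
# (device index 1); clients keep talking to the same IP address.
ec2.attach_network_interface(
    NetworkInterfaceId=ENI_ID,
    InstanceId=NEW_INSTANCE_ID,
    DeviceIndex=1,
)
```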
AWS EFA (Elastic Fabric Adapter)
In Amazon EC2 instances, Elastic Fabric Adapters (EFAs) are network devices that accelerate high-performance computing (HPC) and machine learning.
EFAs are Elastic Network Adapters (ENAs) with additional OS-bypass capabilities.
AWS Elastic Fabric Adapter (EFA) is a specialized network interface for Amazon EC2 instances that allows customers to run applications requiring high levels of inter-instance communication, such as HPC applications, at scale on AWS.
Due to EFA’s support for libfabric APIs, applications using a supported MPI library can be easily migrated to AWS without having to make any changes to their existing code.
For this reason, AWS EFA is often used in conjunction with Cluster placement groups – which allow physical hosts to be placed much closer together within an AZ to decrease latency even more. Some use cases for EFA are in weather modelling, semiconductor design, streaming a live sporting event, oil and gas simulations, genomics, finance, and engineering, amongst others.
AWS ENA (Elastic Network Adapter)
Finally, let’s discuss the AWS ENA (Elastic Network Adapter).
The Elastic Network Adapter (ENA) is designed to provide Enhanced Networking to your EC2 instances.
With ENA, you can expect high throughput and packet per second (PPS) performance, as well as consistently low latencies on Amazon EC2 instances. Using ENA, you can utilize up to 20 Gbps of network bandwidth on certain EC2 instance types – massively improving your networking throughput compared to other EC2 instances, or on premises machines. ENA-based Enhanced Networking is currently supported on X1 instances.
Key Differences
There are a number of differences between these three networking options.
- Elastic Network Interface (ENI) is a logical networking component that represents a virtual networking card
- Elastic Network Adapter (ENA) is an enhanced networking device (alongside the Intel 82599 Virtual Function (VF) interface) that provides high-end performance on certain specified and supported EC2 instance types
- Elastic Fabric Adapter (EFA) is a network device which you can attach to your EC2 instance to accelerate High Performance Computing (HPC)
- Elastic Network Adapter (ENA) is only available on the X1 instance type, Elastic Network Interfaces (ENI) are ubiquitous across all EC2 instances and Elastic Fabric Adapters are available for only certain instance types.
- An ENA ENI provides the traditional IP networking features required to support VPC networking.
- EFA ENIs provide all the functionality of ENA ENIs plus hardware support to allow applications to communicate directly with the EFA ENI without involving the instance kernel (OS-bypass communication).
- Since the EFA ENI has advanced capabilities, it can only be attached to stopped instances or at launch.
Limitations
EFA has the following limitations:
- p4d.24xlarge and dl1.24xlarge instances support up to four EFAs. All other supported instance types support only one EFA per instance.
- EFA (OS-bypass) traffic cannot be sent from one subnet to another; normal IP traffic from the EFA, however, can cross subnets.
- EFA OS-bypass traffic cannot be routed. EFA IP traffic can be routed normally.
- An EFA must belong to a security group that allows inbound and outbound traffic to and from the group.
ENA has the following limitations:
- ENA is only used currently in the X1 instance type
ENI has the following limitations:
- You lack the visibility of a physical networking card, due to virtualization
- Only a few instance types support up to four network cards; the majority only support one
Pricing
- You are not charged per ENI with EC2; you are only limited by how many ENIs your instance type supports. There is, however, a charge for additional public IPs on the same instance.
- EFA is available as an optional EC2 networking feature that you can enable on any supported EC2 instance at no additional cost.
- ENA pricing is absorbed into the cost of running an X1 instance
This article originally appeared on: https://digitalcloud.training/aws-networking-eni-vs-efa-vs-ena/
I Passed SAA-C03 Testimonials
Passed AWS SAA C03!!

Thanks to all the people who posted their testing experience here. It gave me a lot of perspective from the exam point of view and on how to prepare for the new version.
Stephane Maarek’s Udemy course and his practice tests on Udemy were the key to my success in this test. I did not use any other resource for my preparation.
I am a consultant and have been working on AWS for the last 5+ years, though not much hands-on work. My initial cert expired last year so I wanted to renew.
Overall, the C03 version was very similar to the C02/C01 version. I did not get a single question about AI/ML services and the questions were majorly related to more fundamental services like VPC, SQS, Lambda, cloud watch, event bridge, Storage (S3, glacier, lifecycle policies). Source: r/awscertification
Passed SAP-C01 AWS Certified Solutions Architect Professional
Resources used were:
Adrian (for the labs),
Jon (For the Test Bank),
and Stephane for a quick overview played on double speed.
Total time spent studying was about a month. I don’t do much hands on as a security compliance guy, but do work with AWS based applications everyday. It helps to know things to a very low level.
So I am sharing how I passed my certification SAA C03 in less than 40 Days without any prior experience in AWS, (my org asked me to do it)
So the Materials I have used:
Neal Davis SAA C03: https://www.udemy.com/course-dashboard-redirect/?course_id=2469516 This was my primary resource and I built foundation using this.
Tutorial Dojo practice Tests: https://www.udemy.com/course-dashboard-redirect/?course_id=1520628 These will make you learn how you will implement your theory in questions and connect the dots.
Neal Davis Practice Tests: https://www.udemy.com/course-dashboard-redirect/?course_id=1878624 I highly recommend these, since Neal’s tests will give you less hints in questions and after doing these you now have absolute understanding how actual Exam questions will be.
Lastly I used Stephane Maarek SAA C03: https://www.udemy.com/course-dashboard-redirect/?course_id=2196488 to close out the final remaining gaps and for revision.
After doing tests just make sure you know why the particular answer is wrong.
I scheduled my exam for 26th September and took the test at a Pearson center. The exam was extremely lengthy; I used all my time just working through the questions and did not have time to look back at my flagged questions (actually, while I was clicking the End Review button, time ran out and the test ended itself). My results came 50 hours after completing the test, and those 50 hours were the most difficult part of the whole journey.
Today I received my result: I scored 914 and got the badge and certification.
So how do you know you are ready? Once you start getting 80+ consistently on 2-3 practice tests, just book your exam.
Passed SAP-C01!

Just found out I passed the Solutions Architect Pro exam. It was a tough one, took me almost the full 3 hours to answer and review every question. At the end of the exam, I felt that it could have gone either way. Had to wait about 20 painful hours to get my final result (857/1000). I’m honestly amazed, I felt so unprepared. What made it worse is that I suddenly felt ill on the night of the exam. Only got about three hours sleep, realized it was too late to reschedule and had to drag myself to the test center. Was very tempted to bail and pay the $300 to resit, very glad I didn’t!
No formal cloud background, but have worked in IT/software for about 10 years as a software engineer. Some of my roles included network setup/switch configuration/Linux and Windows server admin, which definitely comes in useful (but isn’t required). I got my first cert in January (CCP), and have since got the other three associate certs (SAA, DVA, SOA).
People are not joking when they say this is an endurance test. You need to try and stay focused for the full three hours. It took me about two hours to answer every question, and a further hour to review my answers.
In terms of prep, I used a combination of Stephane Maarek (Udemy) and Adrian Cantrill (learn.cantrill.io). Both courses worked well together I found (Adrian Cantrill for the theory/practical, and Stephane Maarek for the review/revision). I used Tutorials Dojo for practice exams and review (tutorialsdojo.com). The exam questions are very close to the real thing, and the question summary/explanations are extremely well written. My advice is to sit the practice exam, and then carefully review each question (regardless of whether you got it right/wrong) and read/understand the explanations as to why each answer is right/wrong. It takes time, but it will really prepare you for the real thing.
I’m particularly impressed with the Advanced Demos on the Adrian Cantrill course, some of those really helped out with having the knowledge to answer the exam questions. I particularly liked the Organizations, Active Directory, Hybrid DNS, Hybrid SSM, VPN and WordPress demos.
In terms of the exam, lots of questions on IAM (cross-account roles), Organizations (billing/SCP/RAM), Database performance issues, migrations, Transit Gateway, DX/VPN, containerisation (ECS/EKS), disaster recovery. Some of the scenario questions are quite tricky, all four answers appear valid but there will be subtle differences between them. So you have to work out what is different between each answer.
A tip I will leave you: a lot of the migration questions will get you to pick between using snow devices or uploading via the internet/DX. Quick way to work out if uploading is feasible is to multiply the line speed by 10,000 – this will give you the approximate number of bytes that can be transferred in a day. E.g. a line speed of 50Mbps will let you transfer 500GBytes in a day (assuming nothing else is using that link). So if you had to transfer 100TB, then you will need to use snow devices (unless you were happy waiting 200 days).
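A quick back-of-the-envelope check of that rule of thumb; the numbers below simply re-derive the example in the tip:

```python
# "Mbps x 10,000 ~= MB per day" rule of thumb vs. the exact figure
line_speed_mbps = 50

exact_gb_per_day = line_speed_mbps * 1_000_000 / 8 * 86_400 / 1e9   # ~540 GB
approx_gb_per_day = line_speed_mbps * 10_000 / 1_000                # ~500 GB

data_to_move_tb = 100
days_needed = data_to_move_tb * 1_000 / approx_gb_per_day           # ~200 days

print(exact_gb_per_day, approx_gb_per_day, days_needed)
```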
Just passed the SAA-C03 exam (864) and wanted to provide some feedback since that was helpful for me when I was browsing here before the exam.
I come from an IT background and have strong knowledge of the VPC portion, so that section was a breeze for me in the preparation process (I had never used AWS before this, so everything else was new, but the concepts were somewhat familiar considering my background). I started my preparation about a month ago and used the Maarek class on Udemy. Once I finished the class and reviewed my notes, I moved to Maarek’s 6 practice exams (on Udemy). I wasn’t doing extremely well on the PEs (I passed 4/6 of the exams with grades in the 70s); I reviewed the exam questions after each exam and moved on to the next. I also purchased Tutorials Dojo’s 6-exam set but only ended up taking one out of 6 (which I passed).
Overall the practice exams ended up being a lot harder than the real exam which had mostly the regular/base topics: a LOT of S3 stuff and storage in general, a decent amount of migration questions, only a couple questions on VPCs and no ML/AI stuff.
Sharing the study guide that I followed when I prepared for the AWS Certified Solutions Architect Associate SAA-C03 exam. I passed this test and thought of sharing a real exam experience in taking this challenging test.
First off, my background: I have 8 years of development experience and have been doing AWS for several projects, both personally and at work. I studied for a total of 2 months, focused on the official Exam Guide, and carefully studied the Task Statements and related AWS services.
For my exam prep, I bought the Adrian Cantrill video course and the tutorialsdojo (TD) video course and practice exams. Adrian’s course is just right and highly educational but, like others have said, the content is long and covers more than just the exam. I did all of the hands-on labs too and played around with some machine learning services in my AWS account.
TD’s video course is short and a good overall summary of the topics you’ve just learned. One TD lesson covers multiple topics, so the content is highly concise. After I completed Adrian’s video course, I used TD’s video course as a refresher, did a couple of their hands-on labs, then headed on to their practice exams.
For the TD practice exams, I took the exams chronologically and didn’t jump back and forth until I completed all tests. I first tried all of the 7 timed-mode tests and reviewed every wrong answer I got on every attempt, then the 6 review-mode tests and the section/topic-based tests. I took the final-test mode roughly 3 times, and this is by far one of the most helpful features of the website IMO. The final-test mode generates a unique set from the whole TD question bank, so every attempt was challenging for me. I also noticed that the course progress doesn’t move if I failed a specific test, so I would retake the test that I failed.
The actual AWS exam is almost the same with the ones in the TD tests where:
All of the questions are scenario-based
There are two (or more) valid solutions in the question, e.g:
Need SSL: options are ACM and self-signed URL
Need to store DB credentials: options are SSM Parameter Store and Secrets Manager
The scenarios are long-winded and ask for:
MOST Operationally efficient solution
MOST cost-effective
LEAST amount of overhead
Overall, I enjoyed the exam and felt fully prepared while taking the test, thanks to Adrian and TD, but it doesn’t mean the whole darn thing is easy. You really need to put some elbow grease and keep your head lights on when preparing for this exam. Good luck to all and I hope my study guide helped out anyone who is struggling.
Just another thread about passing the exam? I passed SAA-C03 yesterday and would like to share my experience of how I earned the certification.
Background:
– graduate with networking background
– working experience on on-premise infrastructure automation, mainly using ansible, python, zabbix and etc.
– cloud experience, short period like 3-6 months with practice
– provisioned cloud application using terraform in azure and aws
Course that I used fully:
– AWS Certified Solutions Architect – Associate (SAA-C03) | learn.cantrill.io
– AWS Certified Solutions Architect Associate Exam – SAA-C03 Study Path (tutorialsdojo.com)
Course that I used partially or little:
– Ultimate AWS Certified Solutions Architect Associate (SAA) | Udemy
– Practice Exams | AWS Certified Solutions Architect Associate | Udemy
Lab that I used:
– Free tier account with cantrill instruction
– Acloudguru lab and sandbox
– Percepio lab
Comment on course:
Cantrill’s course is in-depth with a lot of practical knowledge, like email aliases and so on – check it out to know more.
The TutorialsDojo practice exams helped me filter the answers and guided me to the correct one. If I was wrong on a specific topic, I rewatched the Cantrill video. There are some topics not covered by Cantrill, but the guideline/review in the practice exams provides pretty much all the detail. I did all the other modes before the timed-based one; after that I averaged 850 on the timed-based exams and scored 63/65 on the final practice exam. However, the real examination is harder than the practice exams in my opinion.
For the Udemy course and practice exams, I went through some of them, but I think the practice exams are quite hard compared to TutorialsDojo.
Labs – just get your hands dirty and they will make the knowledge stick. My advice is to not only do copy-and-paste labs, but to really read the description of each parameter in the AWS portal.
Advice:
you need to know some general exam topics like how to:
– s3 private access
– ec2 availability
– kinesis product including firehose, data stream, blabla
– iam
My next target will be AWS SAP and CKA, still searching suitable material for AWS SAP but proposed mainly using acloudguru sandbox and homelab to learn the subject, practice with acantrill lab in github.
Good luck anyone!
Passed SAA

I wanted to give my personal experience. I have a background in IT, but I had never worked with AWS prior to 5 weeks ago. I got my Cloud Practitioner in a week and the SAA after another 4 weeks of studying (2-4 hours a day). I used Cantrill’s course and Tutorials Dojo practice exams. I highly, highly recommend this combo. I don’t think I would have passed without the practice exams, as they are quite difficult. In my opinion, they are much more difficult than the actual exam. They really hit the mark on what kind of content you will see. I got a 777, and that’s with getting 70-80%s on the practice exams. I probably could have done better, but I had a really rough night of sleep and I came down with a cold. I was really on the struggle bus halfway through the test.
I only had a couple of questions on ML / AI, so make sure you know the differences between them all. Lots of S3 and EC2. You really need to know these in and out.
My company is offering stipends for each certification, so I’m going straight to Developer next.
Just passed my SAA-C03 yesterday with 961 points. My first time doing AWS certification. I used Cantrill’s course. Went through the course materials twice, and took around 6 months to study, but that’s mostly due to my busy schedule. I found his materials very detailed and probably go beyond what you’d need for the actual exam.
I also used Stephane’s practice exams on Udemy. I’d say doing these was instrumental in my passing – getting used to the type of questions in the actual exam and reviewing missing knowledge. Would not have passed otherwise.
Just a heads-up, a few things popped up that I did not see in the course materials or practice exams:
* Lake Formation: question about pooling data from RDS and S3, as well as controlling access.
* S3 Requester Pays: question about minimizing S3 data cost when sharing with a partner.
* Pinpoint journey: question about customer replying to SMS sent-out and then storing their feedback.
Not sure if they are graded or Amazon testing out new parts.
Cheers.
Passed Solutions Architect Professional (SAP-C01)

I’ve spent the last 2 months of my life focusing on this exam and now it’s over! I wanted to write down some thoughts that I hope are informative to others. I’m also happy to answer any other questions.
APPROACH
I used Stephane’s courses to pass CCP, SAA, DVA… however I heard such great things about Adrian’s course that I purchased it and started there.
The detail and clarity that Adrian employs is amazing, and I was blown away by the informative diagrams that he includes with his lessons. His UDP joke made me lol. The course took a month to get through with many daily hours, and I made over 100 pages of study notes in a Google document. After finishing his course, I went through Stephane’s for redundancy.
As many have mentioned here, Stephane does a great job of summarizing concepts, and for me, I really value the slides that he provides with his courses. It helps to memorize and solidify concepts for the actual exam.
After I went through the courses, I bought TutorialsDojo practice exams and started practicing. As everyone says, these are almost a must-use resource before an AWS exam. I recognized three questions on the real exam, and the thought exercise of taking the mocks came in handy during the real exam.
Total preparation: 10 weeks
DIFFICULTY
I heard on this Subreddit that if this exam is a 10, then the associate-level exams are a 3. I was a bit skeptical, but I found the exam a bit harder than the practice exam questions. I just found a few obscure things referred to during the real exam, and some concepts combined in single questions. The Pro-level exams are *at least* 2 times as hard, in my opinion. You need to have Stephane’s slides (or the exam “power-ups” that Adrian points out)/the bolded parts down cold and really understand the fundamentals.
WHILE STUDYING
As my studying progressed, I found myself on this sub almost every day reading others’ experiences and questions. Very few people in my circle truly understand the dedication and hard work that is required to pass any AWS exam, so observing and occasionally interacting here with like-minded people was great. We’re all in this together!
POST-EXAM
I was waiting anxiously for my exam result. When I took the associate exams, I got a binary PASS/FAIL immediately… I got my Credly email 17 hours after finishing the exam, and when I heard from AWS, my score was higher than expected, which feels great.
WHAT’S NEXT
I’m a developer and have to admit I’ve caught the AWS bug. I want to pursue more… I heard Adrian mention in another thread that some of his students take the Security specialty exam right after SAP, and I think I will do the same after some practice exams. Or DevOps Pro… Then I’m taking a break 🙂
I had a lot on S3, CloudFront, DBs, and a lot on Lambda and containers. Lots of “which is the most cost-effective solution” questions.
I think I did OK, but my online proctoring experience kinda messed with my mind a little bit (specifics in a separate thread). At one point I even got yelled at for thinking out loud to myself, which kinda sucked, as that’s one way I talk myself through situations :-/
For two weeks I used MANY practice exams on YouTube, Tutorials Dojo and A Cloud Guru – and a shout-out to Cloud Guru Amit (YouTube), who has a keyword method that worked well for me – and I just read up on various white papers on stuff I wasn’t clear on/got wrong.
On to AWS Security Specialty and CompTIA Sec+ for me.
I passed my SAA! Here’s some tips and thoughts.

Shoutout to Adrian – his course was great at preparing me with all the knowledge needed for the exam (with the exception of a question on Polly and Textract, which none of the resources – Adrian, Stephane for test review, and the Dojo practice exams – covered).
I got a 78 and went in person to a testing site close by to avoid potential hiccups with online testing. I studied over the course of 4 months but did the bulk of the course in 2 months.
I want to reiterate a common theme in these posts that should not be overlooked, in case you are deep in your journey and plan on taking the test soon – say 4 weeks out or 75% through the videos: BUY THE TUTORIALSDOJO PRACTICE EXAMS AND TAKE THEM, EVEN BEFORE YOU ARE DONE WITH THE COURSE.
I thought it would be smarter to finish the course and then do the tests to get a higher score, BUT you will inevitably strengthen your skills and knowledge through 1) doing the tests to get used to the format, and 2) REVIEW REVIEW REVIEW – the questions fall into 4 categories, and afterwards you will see all the questions and why they are the right answer or almost the right answer. Knowing your weaknesses is crucial for intentional, intelligent, and efficient reviewing.
I took screenshots of all the questions I got wrong or wasn’t completely sure of why I got them right.
Got a lot of questions based on CloudFront, S3, Secrets Manager, KMS, databases, containers (ECS), and an ML question based on Amazon Transcribe.
Just passed the AWS Certified Solutions Architect Associate exam SAA-C03 and thank God I allocated some time improving my core networking knowledge. In my point of view, the exam is filled with networking and security questions, so make sure that you really focus on these two domains.
If you don’t know that the port number of MySQL is 3306 and the one for MS SQL is 1433, then you might get overwhelmed by the content of the SAA-C03 exam. Knowing how big or how small a particular VPC (or network) would be based on a given CIDR notation would help too. Integrating SSL / HTTPS into your services like ALB, CloudFront, etc. is also present in the exam.
On the top of my head, these are the related networking stuff I encountered. Most of the things in this list are somewhat mentioned in the official exam guide:
Ports (e.g. 3306 = MySQL, 1433 = Microsoft SQL)
Regional API Gateway
DNS Resolution between On-Premises networks and AWS
Internal vs external IPs
EKS – Kubernetes Pod Networking
Ephemeral Ports
CIDR blocks
VPC Peering
Lots of Endpoint types (e.g. S3 File Gateway endpoints, Interface Endpoint, Gateway Endpoint)
As far as I know, AWS shuffles the content of their exam so you probably could get these topics too. Some feature questions could range from basic to advanced, so make sure you know each feature of all the AWS services mentioned in the exam guide. Here’s what I could remember:
Amazon MQ with active/standby
S3 Features (Requester Pays, Object Lock etc)
Data Lakes
Amazon Rekognition
Amazon Comprehend
For my exam prep, I started my study with Jon Bonso/TD’s SAA video course, then moved to Adrian Cantrill’s course. Both are very solid resources and each instructor has a different style of teaching. Jon’s course is more of a minimalist, modern YouTube style of teaching. He starts with an overview before going into the nitty-gritty tech details, with fancy video montages to drive home the fundamental concepts in AWS. I recommend his stuff as a crash course to learn the majority of SAA-related content. There’s also a bunch of PlayCloud/hands-on labs included in his course which I find very helpful too.
Adrian’s course is much longer and includes the necessary networking/tech fundamentals. Like what other people are saying in this sub, the quality of his stuff is superb and very well delivered. If you are not in a rush and really want to learn the ropes of being a solutions architect, then his course is definitely a must-have. He also has good videos on YouTube and mini-projects on GitHub that you can check out.
About halfway through Adrian’s course, I started doing mock exams from TutorialsDojo (TD) and AWS Skill Builder just to reinforce my knowledge. I take a practice test first, then review my correct and incorrect answers. If I notice that I make a lot of mistakes on a particular service, I go back to Adrian’s course to make those concepts stick better.
I also recommend trying out the demo/sample/preview lessons before you buy any SAA course. From there, you can decide which teaching style would work best for you:
Adrian Cantrill course: https://learn.cantrill.io/courses/aws-certified-solutions-architect-associate-saa-c03/lectures/41301631
TD mock exams: https://portal.tutorialsdojo.com/courses/aws-certified-solutions-architect-associate-practice-exams/
AWS Skill Builder practice questions set https://explore.skillbuilder.aws/learn/course/external/view/elearning/13266/aws-certified-solutions-architect-associate-official-practice-question-set-saa-c03-english
Thank you to all the helpful guys and gals in this community who shared tips!
About me:
My overall objective was to pivot more towards cloud security role from a traditional cybersecurity role. I am a security professional and have 10+ years of experience with certifications like CCIE Security, CISSP, OSCP and others. Mostly I have worked in consulting environments doing deployment and pre-sales work.
My cloud Journey:
I started studying for AWS certification in January 2022 and did SA Associate in March, SA Professional in August and Security Specialty in September. I used a mix of Adrian’s, Stephane’s, and Neal’s videos. I used Tutorials Dojo for practice tests.
Preparation Material:
For videos, Adrian’s stood out with the level of effort this guy has put in. Had this been 6-8 years back, this kind of on-site bootcamp for 1 candidate would sell for at minimum 5000 USD. I used them at 1.25x speed, but it was difficult to come back to Adrian’s content due to its length if I needed to recall/revise something. That’s why I had Stephane’s and Neal’s stuff in my pocket; they usually go on sale for 12-13 USD so there is no harm in having them. Neal did a better job than Stephane for SA Pro as his slides were much more visually appealing, but I felt Stephane covered more concepts. Topics like VPC and Transit Gateway can be better understood if the visuals are better. I never made any notes; I purchased Tutorials Dojo’s notes but I don’t think they were of much use. You can always find notes made by other people on GitHub, and I felt they were more helpful. You can also download video slides from Udemy, and I did cut a few slides from there and pasted them in my Google Docs for revision. For the practice tests, I felt Dojo’s wordings were complex compared to the real exam, but they do give a very good idea of the difficulty of the exam. The real exam had more crisp content.
About Exam:
The exams itself were interesting because it helped me learn the new datacenter’s architecture. Concepts and technologies like lambda, step function, AWS organization, SCP were very interesting and I feel way more confident now compared to what I was 1 year back. Because I target security roles I want to point out that not everything is covered in AWS certifications for these roles. I had gone through CSA 4.0 guide back in December 2021 before starting AWS journey and I think thats helped me visualize many scenarios. Concepts like shadow IT, legal hold, vendor lock in, SOC2/3 reports , portability and interoperability problems in cloud environments were very new to me. I wish AWS can include these stuff in the security exam. These concepts are more towards compliance and governance but its important to know if you are going to interview for cloud security architect roles. I also feel concepts included in DevSecOps should be included more in the security specialty exam.
A bit of criticism here. The exam is very much product specific and many people coming from deployment/research backgrounds will even call it a marketing exam. In fact one L7 Principal Security SA from AWS told me that he considers this a marketing exam. On this forum, there are often discussions on how difficult the AWS SA Pro is but I disagree on that. These exams were no way near the difficulty level of CCIE , CISSP or OSCP which I did in past. The difficulty of exam is high because of its long length of questions, the reading fatigue it can cause, and lack of Visio diagrams. All of these things are not relevant to the real world if you are working as a Solution Architect/Security Architect. Especially for SA Pro almost all questions goes like this – Example scenario -‘A customer plans to migrate to AWS cloud where the application are to be resided on EC2 with auto-scaling enabled in private subnet, those EC2 are behind an ALB which is in public subnet. A replica of this should be created in EU region and Route53 should be doing geolocation routing’ . In the real world, these kinds of issues are always communicated using Visio diagrams i.e. “Current state architecture diagram” and “Future state architecture diagram”. In almost every question I had to read this and draw on the provided sheet which created extra work and reading fatigue. I bet non-english speakers who are experienced architects will find it irritating even though they are given 30 minutes extra. If AWS would change these long sentences into diagrams that can make things easier and more aligned to real world, not sure if they would want to do it because then the difficulty goes down. Also because SMEs are often paid per question they make they don’t want to put more effort in creating diagrams. That’s the problem when you outsource question creation to 3rd party SMEs, the payment is on the number of questions made and I don’t think companies even pay for this. Often this is voluntary work against which the company grants some sort of free recertification or exam voucher.
There seems to be quite a lot of noise around the Advanced Networking exam, which is considered the most difficult. While I haven’t looked into that exam, I would say if it doesn’t have diagrams in each question then the exam is not aligned to the real world. Networking challenges should never be communicated without diagrams. Again, the difficulty is high because it causes reading fatigue, which doesn’t happen in the life of a security architect.
Tips to be a successful consultant:
If you were to become a cloud security architect, I would still highly recommend AWS SA Pro, Security specialty not so much because there was more KMS here and a little bit here and there but the Security specialty was not an eye-opener for me as SA Pro was. Even AWS job description for L6 Security Arch ( Proserv ) role says that the candidate must be able to complete AWS SA Pro in 3 months of hiring which means this is more relevant than the Security specialty even for security roles. But these are all products and you need knowledge beyond that for security roles. The driving force of security has mostly been compliance, you should be really good in things like PCI DSS , ISO 27001 , Cloud Control Matrix because the end of the day you need to map these controls to the product so understanding product is not even 50% of the job. Terraform/Pulumi if you were to communicate your ideas/PoC as IaC. Some python/boto3 SDK which will help you in creating use cases ( need for ProServ roles but not for SA roles ) . If you are looking to do threat modeling of cloud native applications you again need AWS knowledge plus , securing SDLC process, SAST/DAST and then MITRE ATT&CK/Cloud controls matrix etc.
Similarly, if you want to be in networking roles, don’t think AWS Advance Networking will help you be a good consultant. Its a very complex topic and I would recommend look beyond by following courses by Ivan Pepelnjak who himself is a networking veteran. https://www.ipspace.net/Courses . This kind of stuff will help you be a much confident consultant.
I am starting my python journey now which will help me automate use cases. Feel free to ping me if you have any questions.
So I finally got my score and I scored 886, which is definitely more than I expected. I have been working on AWS for about a year, but my company is slowly moving there so I don’t have a ton of hands-on experience yet.
I got a lot of helpful information from so many people on this subreddit. It is now my turn to share my experience.
Study plan
Started with Cantrill’s SAA-C02 course and later switched to his SAA-C03 course. He does a great job of explaining everything in detail. He really covers every topic in great detail, and the demos are well structured and detailed. Worth every penny. It does take a long time to finish his course, so plan accordingly.
Tutorials Dojo study guide & cheat sheets – I liked this 300-odd-page PDF where all the crucial topics are summarized. Bonso does a great job of comparing similar services and highlighting things that may get you confused during the exam. I took notes within the PDF and used the highlighter tool a lot. It helped me revise a couple of days before the exam.
Tutorials DoJo practice tests – These tests are the BEST. The questions are similar to what they ask in the exam. The explanation under every question is very helpful. Read thru these for every question that you got wrong and even on the questions that you got right but weren’t 100% sure.
Official exam guide – I used this at the end to check if I have an understanding of knowledge and skill items. The consolidated list of services is really helpful. I took notes against each service and especially focused on services that look similar.
Labs – While Cantrill’s labs are great, if you are just following along then you may be going too fast and missing a few things. If you are new to a particular service, then you should absolutely go back and go through every screen at your own pace. I did spend time doing labs, but not nearly as much as I had hoped for.
Exam experience
First few questions were easy. Lot of short questions which definitely helped me with my nerves.
Questions started getting longer and answers were confusing too. I flagged about 20 odd questions for review but could only review half of them before the timer was done.
Remember that 15 questions are not scored. No point spending a lot of time on a question that may not even count against your final score. Use the flag for review feature and come back to a question later if time permits.
Watch out for exactly what they are asking for. You as an architect might want to solve the problem in another way than what the question is asking you to do.
I will edit/add if I remember more things.
2022 – 2023 AWS Solutions Architect Associate SAA-C03 Practice Exam
Lots of the comments here about networking / VPC questions being prevalent are true. Also so many damn Aurora questions, it was like a presales chat.
The questions are actually quite detailed, as some have already mentioned, so pay close attention to the minute details. Some questions you definitely have to flag for re-review.
It is by far harder than the Developer Associate exam, despite it having a broader scope. The DVA-C02 exam was like doing a speedrun but this felt like finishing off Sigrun on GoW. Ya gotta take your time.
I took the TD practice exams. They somewhat helped, but having intimate knowledge of VPC and DB concepts would help more.
Passed AWS SAP-C01

Just passed the SAP this past weekend, and it was for sure a challenge. I had some familiarity with AWS already, having the Cloud Practitioner and having passed the SAA back in 2019. I originally wanted to pass the professional version to keep my certs active, so I decided to cram to pass this before they made changes in November. Overall I was able to pass on my first attempt after studying heavily for about 6 weeks. This consisted of, on average, about 4 hours a day of studying.
I used the following for studying:
A Cloud Guru video course and labs (this was OK in my opinion, but it didn’t really go into as much detail as I think it should have)
Stephane Maarek’s video course was really awesome and hit on everything I really needed on the test. I also took his practice tests a bunch of times.
Tutorials Dojo’s practice tests were worth every penny, and the review mode on there was perfect for practicing and going over material rapidly.
Overall I would focus at first on going through the full video course with Stephane and then tackling some practice tests. I would then revisit his videos often on subjects I needed to review. On the day of the test I took it remotely, which I honestly think added a little more stress, with the proctor all over me on any movement. I ended up passing with a score of 811. Not the best score, but I honestly thought I did worse on the test overall, as it was challenging and time flew by.
Passed with 819.
Approach:
Took the Ultimate AWS Certified Solutions Architect Associate SAA-C03 course by Stephane Maarek on Udemy. Sat through all lectures and labs. I think Maarek’s course provides a good overview of all necessary services, including hands-on labs that prepare you for real-world tasks.
Finished all Practice Exams by Tutorial Dojo. Did half of the tests first in review mode and the rest in timed mode.
For last-minute summary preparation, I used the Tutorials Dojo Study Guide eBook. It was around $4 and summarizes all services in about 280 pages – a good ebook to go through before your exam. I only went through the summaries of services that I was struggling with.
Exam Day and Details:
I opted for the in-person exam with Pearson since I live close to their testing centers and I had heard about people running into issues with online exams. If you have a testing center nearby, I highly recommend going there. Unlike online exams, you are free to use the bathroom and blank sheets of paper. I just felt there was more freedom during the in-person exam.
The exam questions were harder than TD’s. They were more detailed and the correct answers usually involved a combination of multiple services. Read the questions very carefully and flag them for review if you aren’t sure.
Around 5-10 questions were exactly the same from TD which was very helpful.
There were a lot of questions related to S3, EBS, EFS, RDS and DynamoDB. So focus on those.
I saw ~5 questions on AWS services I had never heard of before. I believe those were part of the 15 ungraded questions, so if you see services you haven’t heard of, I wouldn’t worry much about them.
It took me around 1.5 hours to finish the exam, including the review. I finished around 4 PM and got my results the next morning around 5 AM. I only got an email from Credly; however, I was able to download my exam report from https://www.aws.training/Certification immediately.
Tips:
Try to get at least 80% on a few TD tests before you take your exam.
Take half of the TD practice exams in review mode and go through the answers in detail (even the right ones).
Opt in for in person exam if possible.
If you see AWS services you haven’t seen before, don’t panic. They are likely part of the 15 ungraded questions.
Read questions very carefully.
Relax. It’s just a certification exam, and you can retake it in 14 days if you fail. But if you follow all of the above, there is very little chance that you will.
Next:
AWS Certified Developer – Associate Certification
Good luck everyone!
AWS Certified Solutions Architect Associate
I passed the SAA-C03 AWS Certified Solutions Architect Assoc. exam this week, all thanks to this helpful Reddit sub! Thank you to everyone who shares tips and inspiration on a regular basis. Sharing my exam experience here:
Topics I encountered in the exam:
Lots of S3 features (ex: Object Lock, S3 Access Points)
Lots of advanced cloud designs. I remember the following:
AWS cloud only with 1 VPC
AWS cloud only with 3 VPCs connected using a Transit Gateway
AWS cloud only with 3 VPCs with a shared VPC which contains shared resources that the other 2 VPCs can use.
AWS cloud + on-prem via VPN
AWS cloud + on-prem via Direct Connect
AWS cloud + on-prem with SD WAN connection
Lots of networking ( multicasting via Transit Gateway, Container Networking, Route 53 resolvers )
Lots of Containers – EKS, EKS Anywhere, EKS Distro, ECS Anywhere
Lots of new AWS services – Compute Optimizer, License Manager, Proton, Managed Grafana etc.
Reviewers used:
Tutorials Dojo (TD) SAA-C03 video course and practice exams
Adrian Cantrill: AWS Certified Solutions Architect – Associate (SAA-C03)
Official SAA-C03 Exam Guide: https://d1.awsstatic.com/training-and-certification/docs-sa-assoc/AWS-Certified-Solutions-Architect-Associate_Exam-Guide.pdf
Exam Tips
If you’re not a newbie anymore, I recommend skipping the basic lessons in Adrian Cantrill’s course and focusing on the SAA-C03-specific content.
Do labs, labs, labs! Adrian has a collection of labs on his GitHub. TD has hands-on labs too, with a real AWS console. I found the TD labs helpful for testing my actual knowledge of certain topics.
Take the TD practice exams at least twice and aim to get 90% on all tests.
Review the suitable use cases for each AWS service. The TD and Adrian video courses usually cover the use cases for every AWS service. Familiarize yourself with them and make notes.
Make sure that whenever you watch the videos, you create your own notes that you can review later on.
source: r/AWSCertifications
Passed SAA-C03
Hi guys,
I’ve successfully passed the SAA-C03 exam on Saturday with a score of 832. I felt like the exam was pretty difficult and was wondering if I would pass… Maybe I got a harder test set.
What I did to prepare:
- Tom Carpenter’s course on LinkedIn Learning for SAA-C02. I started preparing for the exam last year but took a break in between. Meanwhile AWS released the new version, so this course is not that relevant anymore; they will probably update it for SAA-C03 in the future.
- Tutorials Dojo practice tests and materials: Now these were great! I did a couple of their practice tests in review mode and a couple in timed mode. Overall (unpopular opinion) I felt like the exam was harder than the practice tests, but the practice tests and explanations prepared me pretty well for it.
- Whizlabs SAA-C03 course: They have some practice tests which were fine, but they also have Labs which are great if you want to explore the AWS services in a guided environment.
- Skillcertpro practice tests: The first 5 were fine, but the others were horrible. Stay away from them! They are full of typos and incorrect answers (S3 was ‘eventually consistent’ in one of the questions).
- 1.5 years experience with AWS
Amazon AWS Outages 2022 – 2023
Is AWS Down? Is Amazon down? AWS Outage today?
Are too many companies dependent on ONE cloud company? Amazon’s AWS outage impacted thousands of companies and the products they offer to consumers including doorbells, security cameras, refrigerators, 911 services, and productivity software.
Amazon AWS Outages Highlight Promise And Peril Of Public Clouds For 5G – Forbes
AWS Skills Builder
What’s better, AWS Skill Builder or AWS Workshops?
Workshops help you practice various labs/scenarios in your AWS account. It’s more like learning by doing.
Skill builder is more structured – like being taught a formal class either through text or video.
https://awesome-aws-workshops.com/
AWS Glue is a pay-as-you-go service from Amazon that helps you with your ETL (extract, transform and load) needs. It automates time-consuming steps of data preparation for analytics. It extracts the data from different data sources, transforms it, and then saves it in the data warehouse. Today, we will explore AWS Glue in detail. Let’s start with the components of AWS Glue.
AWS Glue Components
Below, you’ll find some of the core components of AWS Glue.
Data Catalog
The Data Catalog is the persistent metadata store in AWS Glue. You have one Data Catalog per AWS account per Region. It contains the metadata for all your data sources, table definitions, and job definitions used to manage the ETL process in AWS Glue.
Crawler
A crawler connects to your data sources and data targets, infers their schema, and creates metadata tables in your AWS Glue Data Catalog.
Classifier
A classifier determines the schema of a data store. AWS Glue has built-in classifiers for common formats like CSV, JSON, and XML, and also provides default classifiers for common relational database systems.
Data store
A data store holds the actual data in a persistent storage system such as S3 or a relational database management system.
Database
In AWS Glue terminology, a database is a collection of associated Data Catalog table definitions organized into a logical group.
AWS Glue Architecture
How AWS Glue Works
- You identify the data sources you will use.
- You define a crawler to point at each data source and populate the AWS Glue Data Catalog with metadata table definitions. This metadata is used when data is transformed during the ETL process.
- Once your data is catalogued, it is available for instant searching, querying, and ETL processing.
- You provide a script through the console or API so the data can be transformed; AWS Glue can also generate this script for you.
- You run the job on demand or schedule it to run based on a trigger. A trigger can be a schedule or the occurrence of an event.
- When a job runs, the script extracts data from the data source(s), transforms it, and loads the result into the data target. The script runs in the Apache Spark environment in AWS Glue. (A minimal boto3 sketch of these steps follows this list.)
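As a rough illustration of that workflow, here is a minimal boto3 sketch that creates a crawler over an S3 prefix, runs it to populate the Data Catalog, and then starts an ETL job on demand. The bucket, database, role, and job names are placeholders you would replace with your own.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# 1. Define a crawler that points at a raw-data prefix in S3 (names are hypothetical).
glue.create_crawler(
    Name="raw-sales-crawler",
    Role="arn:aws:iam::123456789012:role/GlueServiceRole",  # placeholder IAM role
    DatabaseName="sales_catalog",
    Targets={"S3Targets": [{"Path": "s3://example-bucket/raw/sales/"}]},
)

# 2. Run the crawler to populate the Data Catalog with table definitions.
glue.start_crawler(Name="raw-sales-crawler")

# 3. Once the catalog is populated, start the ETL job (created separately
#    in the console or via create_job) on demand.
run = glue.start_job_run(JobName="sales-etl-job")
print("Started job run:", run["JobRunId"])
```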
When To Use AWS Glue
Below are some of the top use cases for AWS Glue.
Build a data warehouse
If you want to build a data warehouse that collects data from different sources, then cleanses, validates, and transforms it, AWS Glue is an ideal fit. You can also transform and move data from other AWS services into your data warehouse.
Use AWS S3 as data lake
You can turn your S3 data into a data lake by cataloguing it in the AWS Glue Data Catalog. The transformed data is then available to Amazon Redshift and Amazon Athena for querying; both can query your S3 data directly using the Glue Data Catalog.
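For instance, once a crawler has registered your S3 data as a table, you can query it from Athena against the Glue catalog. A minimal boto3 sketch, with hypothetical database, table, and bucket names:

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Query a Glue Data Catalog table directly over S3 data (names are placeholders).
response = athena.start_query_execution(
    QueryString="SELECT * FROM sales_catalog.raw_sales LIMIT 10",
    QueryExecutionContext={"Database": "sales_catalog"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
print("Query execution ID:", response["QueryExecutionId"])
```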
Create event-driven ETL pipeline
AWS Glue is a perfect fit if you want to launch an ETL job as soon as fresh data is available in S3. You can use AWS Lambda along with AWS Glue to orchestrate the ETL process.
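One common pattern is an S3 event notification that invokes a Lambda function, which in turn starts the Glue job. The sketch below assumes a hypothetical job name and argument; it is not a complete pipeline.

```python
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Triggered by an S3 ObjectCreated event; starts the Glue ETL job
    and passes the new object's location as a job argument."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    run = glue.start_job_run(
        JobName="sales-etl-job",  # hypothetical job name
        Arguments={"--source_path": f"s3://{bucket}/{key}"},
    )
    return {"jobRunId": run["JobRunId"]}
```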
Features of AWS Glue
Below are some of the top features of AWS Glue.
Automatic schema recognition
The crawler is a powerful component of AWS Glue that automatically recognizes the schema of your data. You do not need to define the schema of each data source manually; crawlers identify the schema and parse the data for you.
Automatic ETL code generation
AWS Glue is capable of creating the ETL code automatically. You just need to specify the source of the data and its target data store; AWS Glue will automatically create the relevant code in Scala or Python for the entire ETL pipeline.
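The generated code follows a fairly standard shape. The sketch below shows roughly what a generated PySpark job looks like: read a DynamicFrame from the catalog, apply a column mapping, and write the result out as Parquet. The database, table, column, and path names are placeholders, not output from a real generation run.

```python
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.transforms import ApplyMapping

glue_context = GlueContext(SparkContext.getOrCreate())

# Read the source table that a crawler registered in the Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_catalog", table_name="raw_sales"
)

# Rename/cast columns; each tuple is (source col, source type, target col, target type).
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[("order_id", "string", "order_id", "string"),
              ("amount", "string", "amount", "double")],
)

# Write the transformed data to the target location in Parquet format.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/processed/sales/"},
    format="parquet",
)
```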
Job scheduler
ETL jobs are very flexible in AWS Glue. You can run jobs on demand or have them triggered on a schedule or by an event. Multiple jobs can run in parallel, and you can even specify job dependencies.
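For example, a scheduled trigger can be attached to a job with a cron expression. A minimal sketch, assuming a hypothetical job name:

```python
import boto3

glue = boto3.client("glue")

# Run the (hypothetical) job every day at 02:00 UTC.
glue.create_trigger(
    Name="nightly-sales-etl",
    Type="SCHEDULED",
    Schedule="cron(0 2 * * ? *)",
    Actions=[{"JobName": "sales-etl-job"}],
    StartOnCreation=True,
)
```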
Developer endpoints
Developers can use developer endpoints to debug their AWS Glue ETL code and to develop custom readers, writers, and transformations, which can later be imported into custom libraries.
Integrated data catalog
The Data Catalog is the most powerful component of AWS Glue. It is the central metadata store for all the diverse data sources in your pipeline, and you only have to maintain one Data Catalog per AWS account.
Benefits of Using AWS Glue
Strong integrations
AWS Glue has strong integrations with other AWS services. It provides native support for Amazon RDS and Aurora, and it also supports Amazon Redshift, S3, and common database engines running on your EC2 instances. AWS Glue even supports NoSQL data sources like DynamoDB.
Built-in orchestration
You do not need to set up or maintain ETL pipeline infrastructure; AWS Glue handles the low-level complexities for you. The crawlers automate schema identification and parsing, freeing you from manually evaluating and parsing different complex data sources. AWS Glue also creates the ETL pipeline code automatically, and it has built-in features for logging, monitoring, alerting, and restarting jobs after failures.
Serverless
AWS Glue is serverless, which means you do not need to worry about maintaining the underlying infrastructure. AWS Glue has built-in scaling capabilities, so it can automatically handle extra load. It automatically handles the setup, configuration, and scaling of underlying resources.
Cost-effective
You only pay for what you use. You will only be charged for the time when your jobs are running. This is especially beneficial if your workload is unpredictable and you are not sure about the infrastructure to provision for your ETL jobs.
Drawbacks of Using AWS Glue
Here are some of the drawbacks of using AWS Glue.
Reliance on Apache Spark
Since AWS Glue jobs run on Apache Spark, your team must have Spark expertise in order to customize the generated ETL jobs. AWS Glue also generates code in Python or Scala, so your engineers must know these languages too.
Complexity of some use cases
Apache Spark is not very efficient in use cases like advertising, gaming, and fraud detection because these workloads need high-cardinality joins, which Spark does not handle well. You can handle these scenarios by implementing additional components, although that will make your ETL pipeline more complex.
Similarly, combining stream and batch jobs is complex to handle in AWS Glue, because it requires batch and stream processes to be kept separate. As a result, you need to maintain extra code to coordinate the two.
AWS Glue Pricing
For ETL jobs, you are charged only for the time the job is running. AWS charges at an hourly rate based on the number of DPUs (Data Processing Units) needed to run your job; one DPU provides approximately 4 vCPUs with 16 GB of memory. You also pay for storing metadata in the AWS Glue Data Catalog, although the first million objects stored and the first million accesses are free. Crawlers and development endpoints are also charged at an hourly rate that depends on the number of DPUs.
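As a rough back-of-the-envelope example, the cost of a run is the number of DPUs multiplied by the runtime in hours and the per-DPU-hour rate. The rate below is an assumption for illustration; check current AWS Glue pricing for your Region.

```python
# Rough cost estimate for a single Glue job run.
# The rate below is an assumed example; actual DPU-hour pricing varies by Region.
dpu_hour_rate = 0.44   # assumed USD per DPU-hour
dpus = 10              # DPUs allocated to the job
runtime_hours = 0.5    # a 30-minute run

cost = dpus * runtime_hours * dpu_hour_rate
print(f"Estimated job cost: ${cost:.2f}")  # ~ $2.20 at the assumed rate
```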
Frequently Asked Questions
How is AWS Glue different from AWS Lake Formation?
Lake Formation’s main focus is governance and data management, whereas AWS Glue is strong in ETL and data processing. They complement each other: Lake Formation is primarily a permission management layer that uses the AWS Glue Data Catalog under the hood.
Can AWS Glue write to DynamoDB?
Yes, AWS Glue can write to DynamoDB. However, the write option is not exposed in the console wizard, so you will need to customize the generated script to achieve it.
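Inside the job script, the customization typically amounts to writing a DynamicFrame with the DynamoDB connection type, roughly as in this sketch (database, source table, and target table names are placeholders):

```python
from pyspark.context import SparkContext
from awsglue.context import GlueContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read the source table from the Data Catalog (placeholder names).
frame = glue_context.create_dynamic_frame.from_catalog(
    database="sales_catalog", table_name="raw_sales"
)

# Write the DynamicFrame to DynamoDB (placeholder table name).
glue_context.write_dynamic_frame.from_options(
    frame=frame,
    connection_type="dynamodb",
    connection_options={
        "dynamodb.output.tableName": "sales_summary",
        "dynamodb.throughput.write.percent": "1.0",  # share of write capacity to use
    },
)
```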
Can AWS Glue write to RDS?
Yes, AWS Glue can write to any RDS engine. When using the ETL job wizard, you can select “JDBC” as the target option and then create a connection to the RDS database.
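In the job script itself, the write can go through a Glue connection that points at the RDS instance. A sketch, with hypothetical connection, database, and table names:

```python
from pyspark.context import SparkContext
from awsglue.context import GlueContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read the source table from the Data Catalog (placeholder names).
frame = glue_context.create_dynamic_frame.from_catalog(
    database="sales_catalog", table_name="raw_sales"
)

# Write to RDS via a pre-defined Glue JDBC connection (placeholder names).
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=frame,
    catalog_connection="my-rds-connection",          # Glue connection to the RDS instance
    connection_options={"dbtable": "sales_summary",  # target table
                        "database": "salesdb"},      # target database
)
```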
Is AWS Glue in real-time?
AWS Glue can process data from Amazon Kinesis Data Streams in near-real time using micro-batches, although there may be some delay for very large data sets. It can process petabytes of data both in batches and as streams.
Does AWS Glue Auto Scale?
AWS Glue provides Auto Scaling starting from version 3.0. It automatically adds or removes workers based on the workload.
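Auto Scaling is enabled per job. One way to do it on Glue 3.0+ is to set the --enable-auto-scaling job parameter when creating the job, with the worker count acting as the upper bound. A sketch with placeholder names:

```python
import boto3

glue = boto3.client("glue")

# Create a job with Auto Scaling enabled; NumberOfWorkers is the maximum it can scale to.
glue.create_job(
    Name="sales-etl-job",
    Role="arn:aws:iam::123456789012:role/GlueServiceRole",   # placeholder IAM role
    Command={"Name": "glueetl",
             "ScriptLocation": "s3://example-bucket/scripts/sales_etl.py"},
    GlueVersion="3.0",
    WorkerType="G.1X",
    NumberOfWorkers=10,
    DefaultArguments={"--enable-auto-scaling": "true"},
)
```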
Where is AWS Glue Data Catalog Stored?
Since AWS Glue acts as a drop-in replacement for the Hive metastore, the data is most likely stored in a MySQL-compatible database. However, this is not confirmed, as AWS does not publish official information about the underlying storage.
How Fast is AWS Glue?
AWS Glue 3.0 has improved a lot in terms of speed: it is up to 2.4 times faster than version 2. This is because it uses vectorized readers and SIMD CPU instructions for faster data parsing, tokenization, and indexing.
Is AWS Glue Expensive?
No, AWS Glue is not particularly expensive. Because it is serverless, you are charged only when it is actually used, and there is no permanent infrastructure cost.
Is AWS Glue a Database?
No. AWS Glue is a fully managed cloud service from Amazon through which you can prepare data for analysis through an automated ETL process.
Is AWS Glue difficult to learn?
AWS Glue is not really difficult to learn, because it provides a GUI-based interface through which you can easily author, run, and monitor ETL jobs.
What is The Difference Between AWS Glue and EMR?
AWS Glue and EMR are both AWS solutions for ETL processing. EMR is a slightly faster and cheaper platform, especially if you already have the required infrastructure available. However, if you want a serverless solution and expect your workload to be inconsistent, then AWS Glue is the better option.