The Cloud Education Certification App is an EduFlix-style app for AWS, Azure, and Google Cloud certification prep [Android, iOS].
Technology is changing and moving toward the cloud. The cloud will power most businesses in the coming years, yet it is not taught in schools. How do we ensure that our kids, our youth, and we ourselves are best prepared for this challenge?
Building mobile educational apps that work offline and on any device can help greatly.
The ability to tap a button, learn cloud fundamentals, and take quizzes is a great opportunity to help our children and youth boost their job prospects and become more productive at work.
The App covers the following certifications: AWS Cloud Practitioner (CLF-C01) Exam Prep, Azure Fundamentals (AZ-900) Exam Prep, AWS Certified Solutions Architect Associate (SAA-C02) Exam Prep, AWS Certified Developer Associate (DVA-C01) Exam Prep, Azure Administrator (AZ-104) Exam Prep, Google Associate Cloud Engineer Exam Prep, AWS Data Analytics (DAS-C01), Machine Learning for AWS and Google, AWS Certified Security – Specialty (SCS-C01), AWS Certified Machine Learning – Specialty (MLS-C01), Google Cloud Professional Machine Learning Engineer, and more.
The App covers the following cloud categories:
AWS Technology, AWS Security and Compliance, AWS Cloud Concepts, AWS Billing and Pricing, AWS Design High Performing Architectures, AWS Design Cost Optimized Architectures, AWS Specify Secure Applications and Architectures, AWS Design Resilient Architecture, Development with AWS, AWS Deployment, AWS Security, AWS Monitoring, AWS Troubleshooting, AWS Refactoring, Azure Pricing and Support, Azure Cloud Concepts, Azure Identity, Governance, and Compliance, Azure Services, Implement and Manage Azure Storage, Deploy and Manage Azure Compute Resources, Configure and Manage Azure Networking Services, Monitor and Backup Azure Resources, GCP Plan and Configure a Cloud Solution, GCP Deploy and Implement a Cloud Solution, GCP Ensure Successful Operation of a Cloud Solution, GCP Configure Access and Security, GCP Set Up a Cloud Solution Environment, AWS Incident Response, AWS Logging and Monitoring, AWS Infrastructure Security, AWS Identity and Access Management, AWS Data Protection, AWS Data Engineering, AWS Exploratory Data Analysis, AWS Modeling, AWS Machine Learning Implementation and Operations, GCP Frame ML Problems, GCP Architect ML Solutions, GCP Prepare and Process Data, GCP Develop ML Models, GCP Automate and Orchestrate ML Pipelines, GCP Monitor, Optimize, and Maintain ML Solutions, and more.
The App covers the following cloud services, frameworks, and technologies:
Features:
– Practice exams: 1000+ questions and answers, updated frequently
– 3+ practice exams per certification
– Scorecard / scoreboard to track your progress
– Quizzes with score tracking, progress bar, and countdown timer
– Scoreboard visible only after completing the quiz
– FAQs for the most popular cloud services
– Cheat sheets
– Flashcards
– Works offline
Note and disclaimer: We are not affiliated with AWS, Azure, Microsoft, or Google. The questions are put together based on the certification study guides and materials available online. The questions in this app should help you pass the exam, but passing is not guaranteed, and we are not responsible for any exam you do not pass.
Important: To succeed with the real exam, do not memorize the answers in this app. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
Almost 4.57 billion people were active internet users as of July 2020, or 59 percent of the global population. 94% of enterprises use the cloud, and 77% of organizations worldwide have at least one application running in the cloud. This growth has been accompanied by an exponential rise in cyber attacks, making cybersecurity one of the biggest challenges facing individuals and organizations worldwide: 158,727 cyber attacks per hour, 2,645 per minute, and 44 every second of every day.
The AWS Certified Security – Specialty (SCS-C01) examination is intended for individuals who perform a security role. This exam validates an examinee’s ability to effectively demonstrate knowledge about securing the AWS platform.
It validates an examinee’s ability to demonstrate:
An understanding of specialized data classifications and AWS data protection mechanisms.
An understanding of data-encryption methods and AWS mechanisms to implement them.
An understanding of secure Internet protocols and AWS mechanisms to implement them.
A working knowledge of AWS security services and features of services to provide a secure production environment.
Competency gained from two or more years of production deployment experience using AWS security services and features.
The ability to make tradeoff decisions with regard to cost, security, and deployment complexity given a set of application requirements.
An understanding of security operations and risks.
Below are the Top 25 AWS Certified Security – Specialty questions and answers, including notes, hints, and references:
Question 1: When requested through an STS API call, credentials are returned with what three components?
A) Security Token, Access Key ID, Signed URL
B) Security Token, Access Key ID, Secret Access Key
C) Signed URL, Security Token, Username
D) Security Token, Secret Access Key, Personal Pin Code

Answer: B. STS calls such as GetSessionToken and AssumeRole return temporary credentials consisting of an access key ID, a secret access key, and a session (security) token.
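A sketch of the Credentials structure such an STS call returns; all values below are illustrative placeholders, not real credentials:

```python
# Shape of the Credentials object returned by STS calls such as
# GetSessionToken or AssumeRole (values are illustrative, not real).
sts_response = {
    "Credentials": {
        "AccessKeyId": "ASIAEXAMPLEKEYID",
        "SecretAccessKey": "exampleSecretAccessKey",
        "SessionToken": "exampleSessionTokenString",
        "Expiration": "2021-01-01T00:00:00Z",
    }
}

creds = sts_response["Credentials"]
# The three components a client needs to sign requests with temporary credentials:
components = ("AccessKeyId", "SecretAccessKey", "SessionToken")
assert all(k in creds for k in components)
```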
Question 2: A company has AWS workloads in multiple geographical locations. A Developer has created an Amazon Aurora database in the us-west-1 Region. The database is encrypted using a customer-managed AWS KMS key. Now the Developer wants to create the same encrypted database in the us-east-1 Region. Which approach should the Developer take to accomplish this task?
A) Create a snapshot of the database in the us-west-1 Region. Copy the snapshot to the us-east-1 Region and specify a KMS key in the us-east-1 Region. Restore the database from the copied snapshot.
B) Create an unencrypted snapshot of the database in the us-west-1 Region. Copy the snapshot to the us-east-1 Region. Restore the database from the copied snapshot and enable encryption using the KMS key from the us-east-1 Region.
C) Disable encryption on the database. Create a snapshot of the database in the us-west-1 Region. Copy the snapshot to the us-east-1 Region. Restore the database from the copied snapshot.
D) In the us-east-1 Region, choose to restore the latest automated backup of the database from the us-west-1 Region. Enable encryption using a KMS key in the us-east-1 Region.
Answer: A. If a user copies an encrypted snapshot, the copy of the snapshot must also be encrypted. If a user copies an encrypted snapshot across Regions, the copy cannot use the same AWS KMS encryption key as the source snapshot, because KMS keys are Region-specific. Instead, the user must specify a KMS key that is valid in the destination Region.
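A hedged boto3 sketch of that copy; the snapshot identifiers and KMS key ARN are hypothetical, and the actual API call is left commented out so the snippet runs without AWS credentials:

```python
# Parameters for copying an encrypted Aurora cluster snapshot from
# us-west-1 to us-east-1. All identifiers and ARNs are hypothetical.
copy_params = {
    "SourceDBClusterSnapshotIdentifier": (
        "arn:aws:rds:us-west-1:111122223333:cluster-snapshot:my-snapshot"
    ),
    "TargetDBClusterSnapshotIdentifier": "my-snapshot-us-east-1",
    # Must name a key that exists in the DESTINATION Region: KMS keys
    # are Region-specific, so the source key cannot be reused.
    "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
    "SourceRegion": "us-west-1",
}

# import boto3
# rds = boto3.client("rds", region_name="us-east-1")
# rds.copy_db_cluster_snapshot(**copy_params)

assert copy_params["KmsKeyId"].startswith("arn:aws:kms:us-east-1")
```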
Question 3: A corporate cloud security policy states that communication between the company’s VPC and KMS must travel entirely within the AWS network and not use public service endpoints. Which combination of the following actions MOST satisfies this requirement? (Select TWO.)
A) Add the aws:sourceVpce condition to the AWS KMS key policy referencing the company’s VPC endpoint ID.
B) Remove the VPC internet gateway from the VPC and add a virtual private gateway to the VPC to prevent direct, public internet connectivity.
C) Create a VPC endpoint for AWS KMS with private DNS enabled.
D) Use the KMS Import Key feature to securely transfer the AWS KMS key over a VPN.
E) Add the following condition to the AWS KMS key policy: "aws:SourceIp": "10.0.0.0/16".
Answer: A and C.
An IAM or key policy can deny access to AWS KMS except through your VPC endpoint by using the aws:sourceVpce condition key. If you select the Enable Private DNS Name option when creating the endpoint, the standard AWS KMS DNS hostname resolves to your VPC endpoint.
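As a sketch, a key-policy statement built around the aws:sourceVpce condition key might look like this (the VPC endpoint ID is hypothetical):

```python
import json

# Key-policy statement that denies use of the key unless the request
# arrives through a specific VPC endpoint. Endpoint ID is hypothetical.
statement = {
    "Sid": "DenyAccessOutsideVpcEndpoint",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "kms:*",
    "Resource": "*",
    "Condition": {
        "StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}
    },
}
print(json.dumps(statement, indent=2))
```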
Question 4: An application team is designing a solution with two applications. The security team wants the applications’ logs to be captured in two different places, because one of the applications produces logs with sensitive data. Which solution meets the requirement with the LEAST risk and effort?
A) Use Amazon CloudWatch Logs to capture all logs, write an AWS Lambda function that parses the log file, and move sensitive data to a different log.
B) Use Amazon CloudWatch Logs with two log groups, with one for each application, and use an AWS IAM policy to control access to the log groups, as required.
C) Aggregate logs into one file, then use Amazon CloudWatch Logs, and then design two CloudWatch metric filters to filter sensitive data from the logs.
D) Add logic to the application that saves sensitive data logs on the Amazon EC2 instances’ local storage, and write a batch script that logs into the Amazon EC2 instances and moves sensitive logs to a secure location.

Answer: B. Two separate log groups, with access controlled by IAM policies, isolate the sensitive logs without custom parsing code or fragile batch scripts.
In an n-tier architecture, each tier’s security group allows traffic only from the security group of the tier that sends it traffic. The presentation tier opens traffic for HTTP and HTTPS from the internet. Since security groups are stateful, only inbound rules are required.
Question 6: A security engineer is working with a product team building a web application on AWS. The application uses Amazon S3 to host the static content, Amazon API Gateway to provide RESTful services, and Amazon DynamoDB as the backend data store. The users already exist in a directory that is exposed through a SAML identity provider. Which combination of the following actions should the engineer take to enable users to be authenticated into the web application and call APIs? (Select THREE).
A) Create a custom authorization service using AWS Lambda.
B) Configure a SAML identity provider in Amazon Cognito to map attributes to the Amazon Cognito user pool attributes.
C) Configure the SAML identity provider to add the Amazon Cognito user pool as a relying party.
D) Configure an Amazon Cognito identity pool to integrate with social login providers.
E) Update DynamoDB to store the user email addresses and passwords.
F) Update API Gateway to use an Amazon Cognito user pool authorizer.
B, C and F
When Amazon Cognito receives a SAML assertion, it needs to be able to map SAML attributes to user pool attributes. When configuring Amazon Cognito to receive SAML assertions from an identity provider, you need to ensure that the identity provider is configured to have Amazon Cognito as a relying party. Amazon API Gateway must also be configured to understand the authorization being passed from Amazon Cognito, which is done with a user pool authorizer.
Question 7: A company is hosting a web application on AWS and is using an Amazon S3 bucket to store images. Users should have the ability to read objects in the bucket. A security engineer has written the following bucket policy to grant public read access:
Attempts to read an object, however, receive the error: “Action does not apply to any resource(s) in statement.” What should the engineer do to fix the error?
A) Change the IAM permissions by applying PutBucketPolicy permissions.
B) Verify that the policy has the same name as the bucket name. If not, make it the same.
C) Change the resource section to “arn:aws:s3:::appbucket/*”.
D) Add an s3:ListBucket action.
Answer: C. The Resource section must match the type of operation. Since s3:GetObject is an object-level operation, change the ARN to include /* at the end.
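The corrected policy can be sketched as follows (the bucket name comes from the answer choice; the Sid is illustrative):

```python
import json

# Corrected Allow statement: s3:GetObject is an object-level action, so
# the Resource must target objects ("/*" suffix), not the bucket ARN.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::appbucket/*",  # trailing /* targets objects
        }
    ],
}
print(json.dumps(policy, indent=2))
```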
Question 8: A company decides to place database hosts in its own VPC, and to set up VPC peering to different VPCs containing the application and web tiers. The application servers are unable to connect to the database. Which network troubleshooting steps should be taken to resolve the issue? (Select TWO.)
A) Check to see if the application servers are in a private subnet or public subnet.
B) Check the route tables for the application server subnets for routes to the VPC peering connection.
C) Check the NACLs for the database subnets for rules that allow traffic from the internet.
D) Check the database security groups for rules that allow traffic from the application servers.
E) Check to see if the database VPC has an internet gateway.

Answer: B and D. For traffic to flow over a VPC peering connection, the application subnets’ route tables must contain routes to the peering connection, and the database security groups must allow traffic from the application servers. Internet gateways and internet-facing NACL rules are irrelevant to private peered traffic.
Question 9: A company is building a data lake on Amazon S3. The data consists of millions of small files containing sensitive information. The security team has the following requirements for the architecture:
Data must be encrypted in transit.
Data must be encrypted at rest.
The bucket must be private, but if the bucket is accidentally made public, the data must remain confidential.
Which combination of steps would meet the requirements? (Select TWO.)
A) Enable AES-256 encryption using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) on the S3 bucket.
B) Enable default encryption with server-side encryption with AWS KMS-managed keys (SSE-KMS) on the S3 bucket.
C) Add a bucket policy that includes a deny if a PutObject request does not include aws:SecureTransport.
D) Add a bucket policy with aws:SourceIp to allow uploads and downloads from the corporate intranet only.
E) Enable Amazon Macie to monitor and act on changes to the data lake’s S3 bucket.

Answer: B and C. SSE-KMS keeps the data confidential even if the bucket is accidentally made public, because readers also need kms:Decrypt permission on the key, while the aws:SecureTransport condition enforces encryption in transit.
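A deny-on-insecure-transport statement like the one option C describes can be sketched as follows; the bucket name is hypothetical, and this sketch applies the condition to all S3 actions rather than only PutObject:

```python
import json

# Deny any request made over plain HTTP: aws:SecureTransport is "false"
# when the request did not use TLS. Bucket name is hypothetical.
statement = {
    "Sid": "DenyInsecureTransport",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
        "arn:aws:s3:::datalake-bucket",
        "arn:aws:s3:::datalake-bucket/*",
    ],
    "Condition": {"Bool": {"aws:SecureTransport": "false"}},
}
print(json.dumps(statement, indent=2))
```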
Question 10: A security engineer must ensure that all API calls are collected across all company accounts, and that they are preserved online and are instantly available for analysis for 90 days. For compliance reasons, this data must be restorable for 7 years. Which steps must be taken to meet the retention needs in a scalable, cost-effective way?
A) Enable AWS CloudTrail logging across all accounts to a centralized Amazon S3 bucket with versioning enabled. Set a lifecycle policy to move the data to Amazon Glacier daily, and expire the data after 90 days.
B) Enable AWS CloudTrail logging across all accounts to S3 buckets. Set a lifecycle policy to expire the data in each bucket after 7 years.
C) Enable AWS CloudTrail logging across all accounts to Amazon Glacier. Set a lifecycle policy to expire the data after 7 years.
D) Enable AWS CloudTrail logging across all accounts to a centralized Amazon S3 bucket. Set a lifecycle policy to move the data to Amazon Glacier after 90 days, and expire the data after 7 years.
Answer: D. This meets all requirements and is cost effective: a lifecycle policy keeps the logs online in S3 for the 90-day analysis window, transitions them to Amazon Glacier afterwards, and expires them after 7 years.
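As a sketch, the lifecycle configuration behind that approach might look like the following (the rule ID is hypothetical); the dict matches the shape accepted by S3's put_bucket_lifecycle_configuration API:

```python
# Lifecycle configuration matching option D: objects stay in S3 for the
# first 90 days (instantly available), then transition to Glacier, and
# expire after roughly 7 years (~2555 days). Rule ID is hypothetical.
lifecycle = {
    "Rules": [
        {
            "ID": "cloudtrail-retention",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},
        }
    ]
}

rule = lifecycle["Rules"][0]
# Expiration must come after the Glacier transition.
assert rule["Expiration"]["Days"] > rule["Transitions"][0]["Days"]
```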
Question 11: A security engineer has been informed that a user’s access key has been found on GitHub. The engineer must ensure that this access key cannot continue to be used, and must assess whether the access key was used to perform any unauthorized activities. Which steps must be taken to perform these tasks?
A) Review the user’s IAM permissions and delete any unrecognized or unauthorized resources.
B) Delete the user, review Amazon CloudWatch Logs in all regions, and report the abuse.
C) Delete or rotate the user’s key, review the AWS CloudTrail logs in all regions, and delete any unrecognized or unauthorized resources.
D) Instruct the user to remove the key from the GitHub submission, rotate keys, and re-deploy any instances that were launched.

Answer: C. Deleting or rotating the key immediately stops further use, and reviewing AWS CloudTrail logs in all regions reveals what API calls the key made, so unrecognized or unauthorized resources can be found and removed.
Question 12: You have a CloudFront distribution configured with the following path patterns: When users request objects that start with ‘static2/’, they are receiving 404 response codes. What might be the problem?
A) CloudFront distributions cannot have multiple different origin types
B) The ‘*’ path pattern must appear after the ‘static2/*’ path
C) CloudFront distributions cannot have origins in different AWS regions
D) The ‘*’ path pattern must appear before ‘static1/*’ path
Answer: B. CloudFront evaluates path patterns in the order they are listed, and the catch-all ‘*’ pattern matches every request, so it must appear after ‘static2/*’; otherwise requests for ‘static2/’ objects are routed to the default origin and return 404. Note that CloudFront distributions can in fact have origins of different types and in different AWS regions.
Question 13: An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk. Which solution will resolve the security concern?
A) Access the data through an Internet Gateway.
B) Access the data through a VPN connection.
C) Access the data through a NAT Gateway.
D) Access the data through a VPC endpoint for Amazon S3.
Answer: D. VPC endpoints for Amazon S3 provide secure connections to S3 buckets that do not require a gateway or NAT instances. NAT Gateways and Internet Gateways still route traffic over the Internet to the public endpoint for Amazon S3, and there is no way to connect directly to Amazon S3 via VPN.
Question 14: An organization is building an Amazon Redshift cluster in their shared services VPC. The cluster will host sensitive data. How can the organization control which networks can access the cluster?
A) Run the cluster in a different VPC and connect through VPC peering
B) Create a database user inside the Amazon Redshift cluster only for users on the network
C) Define a cluster security group for the cluster that allows access from the allowed networks
D) Only allow access to networks that connect with the shared services network via VPN
Answer: C. A security group can grant access to traffic from the allowed networks via the CIDR range for each network. VPC peering and VPN are connectivity services and cannot control traffic for security. Amazon Redshift user accounts address authentication and authorization at the user level and have no control over network traffic.
Question 15: From a security perspective, what is a principal?
A) An identity
B) An anonymous user
C) An authenticated user
D) A resource
B and C
Both an anonymous user and an authenticated user fall under the definition of a principal: a principal is any entity, anonymous or authenticated, acting on a system.
Question 16: A company is storing an access key (access key ID and secret access key) in a text file on a custom AMI. The company uses the access key to access DynamoDB tables from instances created from the AMI. The security team has mandated a more secure solution. Which solution will meet the security team’s mandate?
A) Put the access key in an S3 bucket, and retrieve the access key on boot from the instance.
B) Pass the access key to the instances through instance user data.
C) Obtain the access key from a key server launched in a private subnet
D) Create an IAM role with permissions to access the table, and launch all instances with the new role
Answer: D. IAM roles for EC2 instances allow applications running on the instance to access AWS resources without having to create and store any access keys. Any solution involving the creation of an access key introduces the complexity of managing that secret.
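As a sketch, the two policy documents behind such a role could look like this (the account ID, Region, and table name are hypothetical):

```python
import json

# Trust policy: lets EC2 instances assume the role. The permissions
# policy granting DynamoDB access is attached separately.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Permissions policy: read access to one table (all names hypothetical).
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/ExampleTable",
        }
    ],
}
print(json.dumps(trust_policy, indent=2))
```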
Question 18: You are using AWS envelope encryption for all sensitive data. Which of the following is true of envelope encryption?
A) Data is encrypted by an encrypted Data key which is further encrypted using an encrypted Master Key.
B) Data is encrypted by plaintext Data key which is further encrypted using encrypted Master Key.
C) Data is encrypted by encrypted Data key which is further encrypted using plaintext Master Key.
D) Data is encrypted by plaintext Data key which is further encrypted using plaintext Master Key.
Answer: D. With envelope encryption, unencrypted data is encrypted using a plaintext data key. The data key is then encrypted using a plaintext master key, which is securely stored in AWS KMS and known as a Customer Master Key (CMK).
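The two-layer structure can be illustrated with a runnable toy sketch. The XOR keystream here stands in for a real cipher and is NOT secure, and the in-memory master key stands in for the CMK that KMS would actually hold:

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher (NOT secure): XOR with a BLAKE2b-derived keystream.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.blake2b(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

master_key = os.urandom(32)   # stands in for the KMS-held CMK
data_key = os.urandom(32)     # fresh plaintext data key

# 1. Encrypt the payload with the *plaintext* data key.
ciphertext = keystream_xor(data_key, b"sensitive payload")

# 2. Encrypt (wrap) the data key under the master key and store it
#    alongside the ciphertext; discard the plaintext data key.
wrapped_key = keystream_xor(master_key, data_key)

# Decrypt path: unwrap the data key, then decrypt the payload.
recovered_key = keystream_xor(master_key, wrapped_key)
assert keystream_xor(recovered_key, ciphertext) == b"sensitive payload"
```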
Question 19: Your web application hosted on Amazon S3 needs to access specific DynamoDB tables on behalf of its users. What is the most secure way for the application to obtain credentials?
A) Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website.
B) Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access.
C) Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
D) Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.
Answer: C. With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary AWS security credentials that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application. Option A is invalid because roles cannot be assigned to S3 buckets. Options B and D are invalid because AWS access keys should never be embedded or exposed in the application.
Question 20: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?
A) Cognito Data
B) Cognito Events
C) Cognito Streams
D) Cognito Callbacks
Answer: C. Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can configure a Kinesis stream to receive events as data is updated and synchronized; Amazon Cognito can push each dataset change to a Kinesis stream you own in real time.
Question 23: Which of the following is an encrypted key used by KMS to encrypt your data?
A) Customer Managed Key
B) Encryption Key
C) Envelope Key
D) Customer Master Key
Answer: C. Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption: the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.
Cryptography: Practice and study of techniques for secure communication in the presence of third parties called adversaries.
Hacking: catch-all term for any type of misuse of a computer to break the security of another computing system to steal data, corrupt systems or files, commandeer the environment or disrupt data-related activities in any way.
Cyberwarfare: Use of technology to attack a nation, causing harm comparable to actual warfare. There is significant debate among experts regarding the definition of cyberwarfare, and even whether such a thing exists.
Penetration testing: Colloquially known as a pen test, pentest or ethical hacking, is an authorized simulated cyberattack on a computer system, performed to evaluate the security of the system. Not to be confused with a vulnerability assessment.
Malware: Any software intentionally designed to cause damage to a computer, server, client, or computer network. A wide variety of malware types exist, including computer viruses, worms, Trojan horses, ransomware, spyware, adware, rogue software, and scareware.
Malware analysis tools: Any.Run – malware hunting with live access to the heart of an incident: https://any.run/ ; VirusTotal – analyze suspicious files and URLs to detect types of malware, and automatically share them with the security community: https://www.virustotal.com/gui/
VPN: A virtual private network (VPN) extends a private network across a public network and enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network. Applications running across a VPN may therefore benefit from the functionality, security, and management of the private network. Encryption is a common, although not an inherent, part of a VPN connection.
Antivirus: Antivirus software, or anti-virus software (abbreviated to AV software), also known as anti-malware, is a computer program used to prevent, detect, and remove malware.
DDoS: A distributed denial-of-service (DDoS) attack is one of the most powerful weapons on the internet. When you hear about a website being “brought down by hackers,” it generally means it has become a victim of a DDoS attack.
Fraud Detection: Set of activities undertaken to prevent money or property from being obtained through false pretenses. Fraud detection is applied to many industries such as banking or insurance. In banking, fraud may include forging checks or using stolen credit cards.
Spyware: Software with malicious behavior that aims to gather information about a person or organization and send it to another entity in a way that harms the user, for example by violating their privacy or endangering their device’s security.
Spoofing: Disguising a communication from an unknown source as being from a known, trusted source
Pharming: Malicious websites that look legitimate and are used to gather usernames and passwords.
Catfishing: Creating a fake profile for fraudulent or deceptive purposes
SSL: Stands for secure sockets layer. Protocol for web browsers and servers that allows for the authentication, encryption and decryption of data sent over the Internet.
Phishing emails: Disguised as trustworthy entity to lure someone into providing sensitive information
Intrusion detection System: Device or software application that monitors a network or systems for malicious activity or policy violations. Any intrusion activity or violation is typically reported either to an administrator or collected centrally using a security information and event management system.
Encryption: Encryption is the method by which information is converted into secret code that hides the information’s true meaning. The science of encrypting and decrypting information is called cryptography. In computing, unencrypted data is also known as plaintext, and encrypted data is called ciphertext.
MFA: Multi-factor authentication (MFA) is defined as a security mechanism that requires an individual to provide two or more credentials in order to authenticate their identity. In IT, these credentials take the form of passwords, hardware tokens, numerical codes, biometrics, time, and location.
Vulnerabilities: A vulnerability is a hole or a weakness in the application, which can be a design flaw or an implementation bug, that allows an attacker to cause harm to the stakeholders of an application. Stakeholders include the application owner, application users, and other entities that rely on the application.
SQL injections: SQL injection is a code injection technique, used to attack data-driven applications, in which malicious SQL statements are inserted into an entry field for execution.
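A minimal, runnable illustration using Python's built-in sqlite3 module shows both the vulnerable string-splicing pattern and the parameterized fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR '1'='1"

# Vulnerable: attacker input is spliced into the SQL string, so the
# injected OR '1'='1' clause matches every row and leaks all secrets.
leaked = conn.execute(
    "SELECT * FROM users WHERE name = '" + malicious + "'"
).fetchall()
assert leaked  # the injection succeeded

# Safe: a parameterized query treats the input as a literal value.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()
assert safe == []  # no user is literally named "nobody' OR '1'='1"
```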
Cyber attacks: In computers and computer networks an attack is any attempt to expose, alter, disable, destroy, steal or gain unauthorized access to or make unauthorized use of an asset.
Confidentiality: Confidentiality involves a set of rules or a promise usually executed through confidentiality agreements that limits access or places restrictions on certain types of information.
Secure channel: In cryptography, a secure channel is a way of transferring data that is resistant to overhearing and tampering. A confidential channel is a way of transferring data that is resistant to overhearing, but not necessarily resistant to tampering.
Tunneling: Communications protocol that allows for the movement of data from one network to another. It involves allowing private network communications to be sent across a public network through a process called encapsulation.
SSH: Secure Shell is a cryptographic network protocol for operating network services securely over an unsecured network. Typical applications include remote command-line, login, and remote command execution, but any network service can be secured with SSH.
SSL Certificates: SSL certificates are what enable websites to move from HTTP to HTTPS, which is more secure. An SSL certificate is a data file hosted in a website’s origin server. SSL certificates make SSL/TLS encryption possible, and they contain the website’s public key and the website’s identity, along with related information.
Phishing: Phishing is a cybercrime in which a target or targets are contacted by email, telephone or text message by someone posing as a legitimate institution to lure individuals into providing sensitive data such as personally identifiable information, banking and credit card details, and passwords.
Cybercrime: Cybercrime, or computer-oriented crime, is a crime that involves a computer and a network. The computer may have been used in the commission of a crime, or it may be the target. Cybercrime may threaten a person, company or a nation’s security and financial health.
Backdoor: A backdoor is a means to access a computer system or encrypted data that bypasses the system’s customary security mechanisms. A developer may create a backdoor so that an application or operating system can be accessed for troubleshooting or other purposes.
Salt and Hash: A cryptographic salt is made up of random bits added to each password instance before its hashing. Salts create unique passwords even in the instance of two users choosing the same passwords. Salts help us mitigate rainbow table attacks by forcing attackers to re-compute them using the salts.
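A short sketch of salted password hashing using Python's standard-library PBKDF2:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A fresh random salt per password makes identical passwords hash to
    # different values and defeats precomputed rainbow tables.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

salt_a, hash_a = hash_password("hunter2")
salt_b, hash_b = hash_password("hunter2")
assert hash_a != hash_b          # same password, different salts
assert verify("hunter2", salt_a, hash_a)
assert not verify("wrong", salt_a, hash_a)
```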
Password: A password, sometimes called a passcode, is a memorized secret, typically a string of characters, usually used to confirm the identity of a user. Using the terminology of the NIST Digital Identity Guidelines, the secret is memorized by a party called the claimant while the party verifying the identity of the claimant is called the verifier. When the claimant successfully demonstrates knowledge of the password to the verifier through an established authentication protocol, the verifier is able to infer the claimant’s identity.
Fingerprint: A fingerprint is an impression left by the friction ridges of a human finger. The recovery of partial fingerprints from a crime scene is an important method of forensic science. Moisture and grease on a finger result in fingerprints on surfaces such as glass or metal.
Facial recognition: Facial recognition works better for a person as compared to fingerprint detection. It releases the person from the hassle of moving their thumb or index finger to a particular place on their mobile phone. A user would just have to bring their phone in level with their eye.
Asymmetric key ciphers versus symmetric key ciphers (the difference between symmetric and asymmetric encryption): The basic difference between these two types of encryption is that symmetric encryption uses one key for both encryption and decryption, while asymmetric encryption uses a public key for encryption and a private key for decryption.
Decryption: The conversion of encrypted data into its original form. It is generally the reverse process of encryption: it decodes the encrypted information so that only an authorized user can recover the data, because decryption requires a secret key or password.
Algorithms: Finite sequence of well-defined, computer-implementable instructions, typically to solve a class of problems or to perform a computation.
DFIR (Digital Forensics and Incident Response): A multidisciplinary profession that focuses on identifying, investigating, and remediating computer network exploitation. This work takes varied forms and involves a wide variety of skills, kinds of attackers, and kinds of targets.
OTP (One-Time Password): A one-time password, also known as a one-time PIN or dynamic password, is a password that is valid for only one login session or transaction on a computer system or other digital device.
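The counter-based variant (HOTP, RFC 4226) that underlies most OTP schemes can be implemented in a few lines of standard-library Python:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic
    # truncation to a 31-bit integer, reduced modulo 10^digits.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Test vectors from RFC 4226, Appendix D.
assert hotp(b"12345678901234567890", 0) == "755224"
assert hotp(b"12345678901234567890", 1) == "287082"
```

TOTP (the time-based codes in authenticator apps) is the same construction with the counter derived from the current Unix time.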
Proxy server and reverse proxy server: A proxy server is a go-between or intermediary server that forwards requests for content from multiple clients to different servers across the Internet. A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server.
Exploit Database – The Exploit Database is maintained by Offensive Security, an information security training company that provides various information security certifications as well as high-end penetration testing services. https://www.exploit-db.com/
The Hacker News – The Hacker News (THN) is a leading, trusted, widely-acknowledged dedicated cybersecurity news platform, attracting over 8 million monthly readers including IT professionals, researchers, hackers, technologists, and enthusiasts. https://thehackernews.com
SANS NewsBites – “A semiweekly high-level executive summary of the most important news articles that have been published on computer security during the last week. Each news item is very briefly summarized and includes a reference on the web for detailed information, if possible.” Published for free on Tuesdays and Fridays. https://www.sans.org/newsletters/newsbites
SimplyCyber – Weekly videos. Simply Cyber brings information security content to help IT or information security professionals take their careers further, faster. Current cybersecurity industry topics and techniques are explored to promote a career in the field. Topics cover offense, defense, governance, risk, compliance, privacy, education, certification, and conferences, all with the intent of professional development. https://www.youtube.com/c/GeraldAuger
TheCyberMentor – Heath Adams uploads regular videos related to various facets of cyber security, from bug bounty hunts to specific pentest methodologies like API, buffer overflows, networking. https://www.youtube.com/c/TheCyberMentor/
Risky Business Published weekly, the Risky Business podcast features news and in-depth commentary from security industry luminaries. Hosted by award-winning journalist Patrick Gray, Risky Business has become a must-listen digest for information security professionals. https://risky.biz/
Paul’s Security Weekly – This show features interviews with folks in the security community; technical segments, which are just that, very technical; and security news, which is an open discussion forum for the hosts to express their opinions about the latest security headlines, breaches, new exploits and vulnerabilities, “not” politics, “cyber” policies and more. https://securityweekly.com/category-shows/paul-security-weekly/
Security Now – Steve Gibson, the man who coined the term spyware and created the first anti-spyware program, creator of Spinrite and ShieldsUP, discusses the hot topics in security today with Leo Laporte. https://twit.tv/shows/security-now
Daily Information Security Podcast (“StormCast”) Stormcasts are daily 5-10 minute information security threat updates. The podcast is produced each work day, and typically released late in the day to be ready for your morning commute. https://isc.sans.edu/podcast.html
Don’t Panic – The Unit 42 Podcast Don’t Panic! is the official podcast from Unit 42 at Palo Alto Networks. We find the big issues that are frustrating cyber security practitioners and help simplify them so they don’t need to panic. https://unit42.libsyn.com/
Recorded Future Recorded Future takes you inside the world of cyber threat intelligence. We’re sharing stories from the trenches and the operations floor as well as giving you the skinny on established and emerging adversaries. We also talk current events, technical tradecraft, and offer up insights on the big picture issues in our industry. https://www.recordedfuture.com/resources/podcast/
The Cybrary Podcast – Listen in to the Cybrary Podcast where we discuss a range of topics from DevSecOps and ransomware attacks to diversity and how to retain talent. Entrepreneurs at all stages of their startup companies join us to share their stories and experience, including how to get funding, hiring the best talent, driving sales, and choosing where to base your business. https://www.cybrary.it/info/cybrary-podcast/
Cyber Life The Cyber Life podcast is for cyber security (InfoSec) professionals, people trying to break into the industry, or business owners looking to learn how to secure their data. We will talk about many things, like how to get jobs, cover breakdowns of hot topics, and have special guest interviews with the men and women “in the trenches” of the industry. https://redcircle.com/shows/cyber-life
Down the Security Rabbithole – http://podcast.wh1t3rabbit.net/ – Down the Security Rabbithole is hosted by Rafal Los and James Jardine, who discuss, by means of interviews or news analysis, everything about cybersecurity, including cybercrime, cyber law, cyber risk, enterprise risk & security, and more. If you want to hear issues that are relevant to your organization, subscribe and tune in to this podcast.
The Privacy, Security, & OSINT Show – https://podcasts.apple.com/us/podcast/the-privacy-security-osint-show/id1165843330 – The Privacy, Security, & OSINT Show, hosted by Michael Bazzell, is your weekly dose of digital security, privacy, and Open Source Intelligence (OSINT) opinion and news. This podcast will help listeners learn ideas on how to stay secure from cyber attacks and help them become “digitally invisible”.
Defensive Security Podcast – https://defensivesecurity.org/ – Hosted by Andrew Kalat (@lerg) and Jerry Bell (@maliciouslink), the Defensive Security Podcast looks at the latest security news from around the world and picks out the lessons that can be applied to keeping organizations secure. As of today, they have more than 200 episodes, and topics discussed include forensics, penetration testing, incident response, malware analysis, vulnerabilities, and many more.
Darknet Diaries – https://darknetdiaries.com/episode/ – The Darknet Diaries podcast is hosted and produced by Jack Rhysider and discusses topics related to information security. It also features true stories from hackers who attacked or have been attacked. If you’re a fan of the show, you might consider buying some souvenirs here (https://shop.darknetdiaries.com/).
Brakeing Down Security – https://www.brakeingsecurity.com/ – Brakeing Down Security started in 2014 and is hosted by Bryan Brake, Brian Boettcher, and Amanda Berlin. This podcast discusses everything about the cybersecurity world, compliance, privacy, and regulatory issues that arise in today’s organizations. The hosts teach concepts that information security professionals need to know and discuss topics that will refresh the memories of seasoned veterans.
Open Source Security Podcast – https://www.opensourcesecuritypodcast.com/ – A podcast that discusses security with an open-source slant. The show started in 2016 and is hosted by Josh Bressers and Kurt Seifried. As of this writing, they have posted more than 190 episodes.
Cyber (Motherboard) – https://podcasts.apple.com/us/podcast/cyber/id1441708044 – Ben Makuch hosts the podcast CYBER and talks weekly to Motherboard reporters Lorenzo Franceschi-Bicchierai and Joseph Cox. They tackle topics about famous hackers and researchers and the biggest news in cybersecurity. The cyber stuff gets complicated really fast, but Motherboard spends its time fixed in the infosec world so we don’t have to.
Hak5 – https://shop.hak5.org/pages/videos – Hak5 is a brand created by a group of security professionals, hardcore gamers, and “IT ninjas”. Their podcast, mostly uploaded to YouTube, discusses everything from open-source software to penetration testing and network infrastructure. Their channel currently has 590,000 subscribers and is one of the most-viewed shows for learning about network security.
Threatpost Podcast Series – https://threatpost.com/category/podcasts/ – Threatpost is an independent news site and a leading source of information about IT and business security for hundreds of thousands of professionals worldwide. Its award-winning editorial team produces unique and high-impact content including security news, videos, and feature reports, with global editorial activities driven by industry-leading journalist Tom Spring, editor-in-chief.
CISO/Security Vendor Relationship Podcast – https://cisoseries.com – Co-hosted by David Spark, creator of the CISO/Security Vendor Relationship Series, and Mike Johnson. In 30 minutes, this weekly program challenges the co-hosts, guests, and listeners to critique and share true stories. The podcast aims to enlighten and educate listeners on improving security buyer and seller relationships.
Getting Into Infosec Podcast Stories of how Infosec and Cybersecurity pros got jobs in the field so you can be inspired, motivated, and educated on your journey. – https://gettingintoinfosec.com/
Unsupervised Learning Weekly podcasts and biweekly newsletters as a curated summary intersection of security, technology, and humans, or a standalone idea to provoke thought, by Daniel Miessler. https://danielmiessler.com/podcast/
Building Secure & Reliable Systems Best Practices for Designing, Implementing and Maintaining Systems (O’Reilly) By Heather Adkins, Betsy Beyer, Paul Blankinship, Ana Oprea, Piotr Lewandowski, Adam Stubblefield https://landing.google.com/sre/books/
The Cyber Skill Gap By Vagner Nunes – The Cyber Skill Gap: How To Become A Highly Paid And Sought After Information Security Specialist! (Use COUPON CODE: W4VSPTW8G7 to make it free) https://payhip.com/b/PdkW
Texas A&M Security Courses The web-based courses are designed to ensure that the privacy, reliability, and integrity of the information systems that power the global economy remain intact and secure. The web-based courses are offered through three discipline-specific tracks: general, non-technical computer users; technical IT professionals; and business managers and professionals. https://teex.org/program/dhs-cybersecurity/
Chief Information Security Officer (CISO) Workshop Training – The Chief Information Security Office (CISO) workshop contains a collection of security learnings, principles, and recommendations for modernizing security in your organization. This training workshop is a combination of experiences from Microsoft security teams and learnings from customers. – https://docs.microsoft.com/en-us/security/ciso-workshop/ciso-workshop
CLARK Center Plan C – Free cybersecurity curriculum that is primarily video-based or provides online assignments that can be easily integrated into a virtual learning environment. https://clark.center/home
Hack.me is a FREE, community based project powered by eLearnSecurity. The community can build, host and share vulnerable web application code for educational and research purposes. It aims to be the largest collection of “runnable” vulnerable web applications, code samples and CMS’s online. The platform is available without any restriction to any party interested in Web Application Security. https://hack.me/
Enroll Now Free: PCAP Programming Essentials in Python – https://www.netacad.com/courses/programming/pcap-programming-essentials-python – Python is a very versatile, object-oriented programming language used by startups and tech giants such as Google, Facebook, Dropbox, and IBM. Python is also recommended for aspiring young developers who are interested in pursuing careers in security, networking, and the Internet of Things. Once you complete this course, you are ready to take the PCAP – Certified Associate in Python Programming exam. No prior knowledge of programming is required.
Stanford University Webinar – Hacked! Security Lessons from Big Name Breaches – 50-minute cyber lecture from Stanford. You will learn: the root cause of key breaches and how to prevent them; how to measure your organization’s external security posture; and how the attacker lifecycle should influence the way you allocate resources. https://www.youtube.com/watch?v=V9agUAz0DwI
Stanford University Webinar – Hash, Hack, Code: Emerging Trends in Cyber Security Join Professor Dan Boneh as he shares new approaches to these emerging trends and dives deeper into how you can protect networks and prevent harmful viruses and threats. 50 minute cyber lecture from Stanford. https://www.youtube.com/watch?v=544rhbcDtc8
Kill Chain: The Cyber War on America’s Elections (Documentary) (Referenced at GRIMMCON), In advance of the 2020 Presidential Election, Kill Chain: The Cyber War on America’s Elections takes a deep dive into the weaknesses of today’s election technology, an issue that is little understood by the public or even lawmakers. https://www.hbo.com/documentaries/kill-chain-the-cyber-war-on-americas-elections
Pluralsight and Microsoft Partnership to help you become an expert in Azure. With skill assessments, more than 200 courses, 40+ Skill IQs, and 8 Role IQs, you can focus your time on understanding your strengths and skill gaps and learn Azure as quickly as possible. https://www.pluralsight.com/partners/microsoft/azure
Blackhat Webcast Series Monthly webcast of varying cyber topics. I will post specific ones in the training section below sometimes, but this is worth bookmarking and checking back. They always have top tier speakers on relevant, current topics. https://www.blackhat.com/html/webcast/webcast-home.html
Federal Virtual Training Environment – US Govt sponsored free courses. There are 6 available, no login required. They are 101 Coding for the Public, 101 Critical Infrastructure Protection for the Public, Cryptocurrency for Law Enforcement for the Public, Cyber Supply Chain Risk Management for the Public, 101 Reverse Engineering for the Public, Fundamentals of Cyber Risk Management. https://fedvte.usalearning.gov/public_fedvte.php
Harrisburg University CyberSecurity Collection of 18 curated talks. Scroll down to CYBER SECURITY section. You will see there are 4 categories Resource Sharing, Tools & Techniques, Red Team (Offensive Security) and Blue Teaming (Defensive Security). Lot of content in here; something for everyone. https://professionaled.harrisburgu.edu/online-content/
OnRamp 101-Level ICS Security Workshop Starts this 4/28. 10 videos, Q&A / discussion, bonus audio, great links. Get up to speed fast on ICS security. It runs for 5 weeks. 2 videos per week. Then we keep it open for another 3 weeks for 8 in total. https://onramp-3.s4xevents.com
HackXOR WebApp CTF Hackxor is a realistic web application hacking game, designed to help players of all abilities develop their skills. All the missions are based on real vulnerabilities I’ve personally found while doing pentests, bug bounty hunting, and research. https://hackxor.net/
flAWS System Through a series of levels you’ll learn about common mistakes and gotchas when using Amazon Web Services (AWS). Multiple levels, “Buckets” of fun. http://flaws.cloud/
Stanford CS 253 Web Security A free course from Stanford providing a comprehensive overview of web security. The course begins with an introduction to the fundamentals of web security and proceeds to discuss the most common methods for web attacks and their countermeasures. The course includes video lectures, slides, and links to online reading assignments. https://web.stanford.edu/class/cs253
Linux Journey A free, handy guide for learning Linux. Coverage begins with the fundamentals of command line navigation and basic text manipulation. It then extends to more advanced topics, such as file systems and networking. The site is well organized and includes many examples along with code snippets. Exercises and quizzes are provided as well. https://linuxjourney.com
Ryan’s Tutorials A collection of free, introductory tutorials on several technology topics including: Linux command line, Bash scripting, creating and styling webpages with HTML and CSS, counting and converting between different number systems, and writing regular expressions. https://ryanstutorials.net
CYBER INTELLIGENCE ANALYTICS AND OPERATIONS – Learn: the ins and outs of all stages of the intelligence cycle, from collection to analysis, from seasoned intel professionals; how to employ threat intelligence to conduct comprehensive defense strategies to mitigate potential compromise; how to use threat intelligence to respond to and minimize the impact of cyber incidents; and how to generate comprehensive and actionable reports to communicate gaps in defenses and intelligence findings to decision makers. https://www.shadowscape.io/cyber-intelligence-analytics-operat
Linux Command Line for Beginners – 25 hours of training – In this course, you’ll learn from one of Fullstack’s top instructors, Corey Greenwald, as he guides you through learning the basics of the command line through short, digestible video lectures. Then you’ll use Fullstack’s CyberLab platform to hone your new technical skills while working through a Capture the Flag game, a special kind of cybersecurity game designed to challenge participants to solve computer security problems by solving puzzles. Finally, through a series of carefully curated resources, we’ll introduce you to some important cybersecurity topics so that you can understand some of the common language, concepts, and tools used in the industry. https://prep.fullstackacademy.com/
Hacking 101 6 hours of free training – First, you’ll take a tour of the world and watch videos of hackers in action across various platforms (including computers, smartphones, and the power grid). You may be shocked to learn what techniques the good guys are using to fight the bad guys (and which side is winning). Then you’ll learn what it’s like to work in this world, as we show you the different career paths open to you and the (significant) income you could make as a cybersecurity professional. https://cyber.fullstackacademy.com/prepare/hacking-101
Choose Your Own Cyber Adventure Series: Entry Level Cyber Jobs Explained YouTube Playlist (videos from my channel #simplyCyber) This playlist is a collection of various roles within the information security field, mostly entry level, so folks can understand what different opportunities are out there. https://www.youtube.com/playlist?list=PL4Q-ttyNIRAqog96mt8C8lKWzTjW6f38F
NETINSTRUCT.COM Free Cybersecurity, IT and Leadership Courses – Includes OS and networking basics. Critical to any Cyber job. https://netinstruct.com/courses
HackerSploit – HackerSploit is the leading provider of free and open-source Infosec and cybersecurity training. https://hackersploit.org/
Computer Science courses with video lectures – The intent of this list is to act as online bookmarks/a lookup table for freely available online video courses. The focus is to keep the list concise so that it is easy to browse: it is easier to skim a 15-page list, find the course, and start learning than to read 60 pages of text. If you are a student or from a non-CS background, please try a few courses to decide for yourself which course suits your learning curve best. https://github.com/Developer-Y/cs-video-courses
Cryptography I -offered by Stanford University – Rolling enrollment – Cryptography is an indispensable tool for protecting information in computer systems. In this course you will learn the inner workings of cryptographic systems and how to correctly use them in real-world applications. The course begins with a detailed discussion of how two parties who have a shared secret key can communicate securely when a powerful adversary eavesdrops and tampers with traffic. We will examine many deployed protocols and analyze mistakes in existing systems. The second half of the course discusses public-key techniques that let two parties generate a shared secret key. https://www.coursera.org/learn/crypto
Software Security – Rolling enrollment – offered by University of Maryland, College Park via Coursera – In this course we will explore the foundations of software security. We will consider important software vulnerabilities and attacks that exploit them — such as buffer overflows, SQL injection, and session hijacking — and we will consider defenses that prevent or mitigate these attacks, including advanced testing and program analysis techniques. Importantly, we take a “build security in” mentality, considering techniques at each phase of the development cycle that can be used to strengthen the security of software systems. https://www.coursera.org/learn/software-security
Intro to Information Security Georgia Institute of Technology via Udacity – Rolling Enrollment. This course provides a one-semester overview of information security. It is designed to help students with prior computer and programming knowledge — both undergraduate and graduate — understand this important priority in society today. Offered at Georgia Tech as CS 6035 https://www.udacity.com/course/intro-to-information-security–ud459
Cyber-Physical Systems Security Georgia Institute of Technology via Udacity – This course provides an introduction to security issues relating to various cyber-physical systems including industrial control systems and those considered critical infrastructure systems. 16 week course – Offered at Georgia Tech as CS 8803 https://www.udacity.com/course/cyber-physical-systems-security–ud279
Finding Your Cybersecurity Career Path – University of Washington via edX – 4 weeks long – self paced – In this course, you will focus on the pathways to cybersecurity career success. You will determine your own incoming skills, talent, and deep interests to apply toward a meaningful and informed exploration of 32 Digital Pathways of Cybersecurity. https://www.edx.org/course/finding-your-cybersecurity-career-path
Building a Cybersecurity Toolkit – University of Washington via edX – 4 weeks self-paced The purpose of this course is to give learners insight into these type of characteristics and skills needed for cybersecurity jobs and to provide a realistic outlook on what they really need to add to their “toolkits” – a set of skills that is constantly evolving, not all technical, but fundamentally rooted in problem-solving. https://www.edx.org/course/building-a-cybersecurity-toolkit
Cybersecurity: The CISO’s View – University of Washington via edX – 4 weeks long, self-paced – This course delves into the role that the CISO plays in cybersecurity operations. Throughout the lessons, learners will explore answers to the following questions: How does cybersecurity work across industries? What is the professionals’ point of view? How do we keep information secure? https://www.edx.org/course/cybersecurity-the-cisos-view
Introduction to Cybersecurity – University of Washington via edX – In this course, you will gain an overview of the cybersecurity landscape as well as national (USA) and international perspectives on the field. We will cover the legal environment that impacts cybersecurity as well as predominant threat actors. – https://www.edx.org/course/introduction-to-cybersecurity
Cyber Attack Countermeasures New York University (NYU) via Coursera – This course introduces the basics of cyber defense starting with foundational models such as Bell-LaPadula and information flow frameworks. These underlying policy enforcements mechanisms help introduce basic functional protections, starting with authentication methods. Learners will be introduced to a series of different authentication solutions and protocols, including RSA SecureID and Kerberos, in the context of a canonical schema. – https://www.coursera.org/learn/cyber-attack-countermeasures
Introduction to Cyber Attacks New York University (NYU) via Coursera – This course provides learners with a baseline understanding of common cyber security threats, vulnerabilities, and risks. An overview of how basic cyber attacks are constructed and applied to real systems is also included. Examples include simple Unix kernel hacks, Internet worms, and Trojan horses in software utilities. Network attacks such as distributed denial of service (DDoS) and botnet attacks are also described and illustrated using real examples from the past couple of decades. https://www.coursera.org/learn/intro-cyber-attacks
Enterprise and Infrastructure Security New York University (NYU) via Coursera – This course introduces a series of advanced and current topics in cyber security, many of which are especially relevant in modern enterprise and infrastructure settings. The basics of enterprise compliance frameworks are provided with introduction to NIST and PCI. Hybrid cloud architectures are shown to provide an opportunity to fix many of the security weaknesses in modern perimeter local area networks. https://www.coursera.org/learn/enterprise-infrastructure-security
Network Security Georgia Institute of Technology via Udacity – This course provides an introduction to computer and network security. Students successfully completing this class will be able to evaluate works in academic and commercial security, and will have rudimentary skills in security research. The course begins with a tutorial of the basic elements of cryptography, cryptanalysis, and systems security, and continues by covering a number of seminal papers and monographs in a wide range of security areas. – https://www.udacity.com/course/network-security–ud199
Real-Time Cyber Threat Detection and Mitigation – New York University (NYU) via Coursera This course introduces real-time cyber security techniques and methods in the context of the TCP/IP protocol suites. Explanation of some basic TCP/IP security hacks is used to introduce the need for network security solutions such as stateless and stateful firewalls. Learners will be introduced to the techniques used to design and configure firewall solutions such as packet filters and proxies to protect enterprise assets. https://www.coursera.org/learn/real-time-cyber-threat-detection
Authentication — The process of checking if a user is allowed to gain access to a system. eg. Login forms with username and password.
Authorization — Checking if the authenticated user has access to perform an action. eg. user, admin, super admin roles.
Audit — Conduct a complete inspection of an organization’s network to find vulnerable endpoints or malicious software.
Access Control List — A list that contains users and their level of access to a system.
Aircrack-ng — Wifi penetration testing software suite. Contains sniffing, password cracking, and general wireless attacking tools.
Backdoor — A piece of code that lets hackers get into the system easily after it has been compromised.
Burp Suite — Web application security software, helps test web apps for vulnerabilities. Used in bug bounty hunting.
Banner Grabbing — Capturing basic information about a server like the type of web server software (eg. apache) and services running on it.
Botnet — A network of computers controlled by a hacker to perform attacks such as Distributed Denial of Service.
Brute-Force Attack — An attack where the hacker tries different login combinations to gain access. eg. trying to crack a 9-digit numeric password by trying all the numbers from 000000000 to 999999999
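The idea can be sketched in a few lines: exhaustively try every candidate until one matches. This toy example (the function name `brute_force_pin` and the 4-digit PIN are our illustrative assumptions) recovers a PIN from its SHA-256 hash; the same logic scales, which is why long passwords and rate limiting matter:

```python
import hashlib

def brute_force_pin(target_hash, digits=4):
    # try every possible numeric PIN of the given length until one hashes to the target
    for n in range(10 ** digits):
        guess = str(n).zfill(digits)
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess
    return None

stolen_hash = hashlib.sha256(b"0042").hexdigest()
recovered = brute_force_pin(stolen_hash)  # -> "0042"
```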
Buffer Overflow — When a program tries to store more information than it is allowed to, it overflows into other buffers (memory partitions) corrupting existing data.
Cache — Storing the response to a particular operation in temporary high-speed storage to serve other incoming requests better. eg. you can store a database response in a cache until the data is updated, to avoid calling the database again for the same query.
Cipher — Cryptographic algorithm for encrypting and decrypting data.
Code Injection — Injecting malicious code into a system by exploiting a bug or vulnerability.
Cross-Site Scripting — Executing a script on the client-side through a legitimate website. This can be prevented if the website sanitizes user input.
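Sanitizing user input before rendering it is the standard defense. As a minimal sketch, Python's standard-library `html.escape` converts markup characters into harmless entities so a script payload is displayed as text rather than executed:

```python
from html import escape

user_input = '<script>alert("xss")</script>'
# escaping turns markup characters into entities before the value reaches the page
sanitized = escape(user_input)
print(sanitized)  # -> &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Web frameworks typically apply this kind of escaping automatically in their template engines.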
Compliance — A set of rules defined by the government or other authorities on how to protect your customer’s data. Common ones include HIPAA, PCI-DSS, and FISMA.
Dictionary Attack — Attacking a system with a pre-defined list of usernames and passwords. eg. admin/admin is a common username/password combination used by amateur sysadmins.
Dumpster Diving — Looking into a company’s trash cans for useful information.
Denial of Service & Distributed Denial of Service — Exhausting a server’s resources by sending too many requests is Denial of Service. If a botnet is used to do the same, it’s called Distributed Denial of Service.
DevSecOps — Combination of development and operations by considering security as a key ingredient from the initial system design.
Directory Traversal — Vulnerability that lets attackers list all the files and folders within a server. This can include system configuration and password files.
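A typical attack appends `../` sequences to a file parameter to climb out of the web root. One common mitigation, sketched here with names of our own choosing (`safe_join`, `WEB_ROOT`), is to resolve the requested path and refuse anything that escapes the base directory:

```python
import os.path

WEB_ROOT = "/var/www/html"  # hypothetical web root for illustration

def safe_join(base, user_path):
    # resolve the requested path and refuse anything that escapes the base dir
    base = os.path.abspath(base)
    full = os.path.abspath(os.path.join(base, user_path))
    if not full.startswith(base + os.sep):
        raise ValueError("path traversal blocked: " + user_path)
    return full

safe_join(WEB_ROOT, "index.html")        # allowed
# safe_join(WEB_ROOT, "../../etc/passwd")  would raise ValueError
```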
Domain Name System (DNS) — Helps convert domain names into server IP addresses. eg. Google.com -> 188.8.131.52
DNS Spoofing — Tricking a system’s DNS to point to a malicious server. eg. when you enter ‘facebook.com’, you might be redirected to the attacker’s website that looks like Facebook.
Encryption — Encoding a message with a key so that only the parties with the key can read the message.
Exploit — A piece of code that takes advantage of a vulnerability in the target system. eg. Buffer overflow exploits can get you to root access to a system.
Enumeration — Mapping out all the components of a network by gaining access to a single system.
Footprinting — Gathering information about a target using active methods such as scanning and enumeration.
Flooding — Sending too many packets of data to a target system to exhaust its resources and cause a Denial of Service or similar attacks.
Firewall — A software or hardware filter that can be configured to prevent common types of attacks.
Fork Bomb — Forking a process indefinitely to exhaust system resources. Related to a Denial of Service attack.
Fuzzing — Sending automated random input to a software program to test its exception handling capacity.
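A minimal fuzzer needs only a payload generator and a crash recorder. In this sketch (the names `fuzz` and `naive_parser` are ours, and the target is a deliberately fragile toy), random printable strings are thrown at a parser and every exception is logged for later triage:

```python
import random
import string

def naive_parser(text):
    # toy target: assumes its input is a number and crashes otherwise
    return int(text)

def fuzz(target, runs=200):
    # feed random printable input to the target and record what makes it crash
    random.seed(1)  # fixed seed so the run is reproducible
    crashes = []
    for _ in range(runs):
        payload = "".join(random.choices(string.printable, k=8))
        try:
            target(payload)
        except Exception as exc:
            crashes.append((payload, type(exc).__name__))
    return crashes

results = fuzz(naive_parser)
```

Real fuzzers such as AFL add coverage feedback and input mutation, but the core loop is the same.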
Hardening — Securing a system from attacks like closing unused ports. Usually done using scripts for servers.
Hash Function — Mapping a piece of data into a fixed value string. Hashes are used to confirm data integrity.
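The two properties that make hashes useful for integrity checks are easy to see with Python's standard-library `hashlib`: the digest length is fixed regardless of input size, and any change to the data produces a completely different digest (the message strings are illustrative):

```python
import hashlib

message = b"wire transfer: $100 to account 1234"
digest = hashlib.sha256(message).hexdigest()

# any change to the data, however small, yields a completely different hash
tampered = b"wire transfer: $900 to account 1234"
assert hashlib.sha256(tampered).hexdigest() != digest

# SHA-256 output is always 256 bits (64 hex characters), whatever the input size
assert len(digest) == 64
```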
Honey Pot — An intentionally vulnerable system used to lure attackers. This is then used to understand the attacker’s strategies.
HIPAA — The Health Insurance Portability and Accountability Act. If you are working with healthcare data, you need to make sure you are HIPAA compliant. This is to protect the customer’s privacy.
Input Validation — Checking user inputs before sending them to the database. eg. sanitizing form input to prevent SQL injection attacks.
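For SQL specifically, the standard defense is parameterized queries rather than string concatenation. A minimal sketch with Python's built-in `sqlite3` module (table and payload are illustrative): the placeholder binds the input purely as data, so a classic injection payload matches nothing instead of rewriting the query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("alice",))

# the ? placeholder binds the value as data, never as SQL text
malicious = "alice' OR '1'='1"
rows = conn.execute("SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
print(rows)  # -> [] : the injection payload matched no row

legit = conn.execute("SELECT * FROM users WHERE name = ?", ("alice",)).fetchall()
```

Had the query been built by string concatenation, the `OR '1'='1'` clause would have matched every row.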
Integrity — Making sure the data that was sent from the server is the same that was received by the client. This ensures there was no tampering; integrity is usually achieved by hashing and encryption.
Intrusion Detection System — A software similar to a firewall but with advanced features. Helps in defending against Nmap scans, DDoS attacks, etc.
IP Spoofing — Changing the source IP address of a packet to fool the target into thinking a request is coming from a legitimate server.
John The Ripper — Brilliant password cracking tool, runs on all major platforms.
Kerberos — Default authentication protocol used by Microsoft Windows; uses a stronger encryption system.
KeyLogger — A software program that captures all keystrokes that a user performs on the system.
Logic Bombs — A piece of code (usually malicious) that runs when a condition is satisfied.
Lightweight Directory Access Protocol (LDAP) — Lightweight client-server directory protocol, used on Windows as a central place for authentication. Stores usernames and passwords to validate users on a network.
Malware — Short for “Malicious Software”. Everything from viruses to backdoors is malware.
MAC Address — Unique address assigned to a Network Interface Card and is used as an identifier for local area networks. Easy to spoof.
Multi-factor Authentication — Using more than one method of authentication to access a service. eg. username/password with mobile OTP to access a bank account (two-factor authentication)
MD5 — A widely used hashing algorithm. Once a favorite, it is now considered broken due to its many vulnerabilities.
Meterpreter — An advanced Metasploit payload that lives in memory and is hard to trace.
Null-Byte Injection — An older exploit that appends null bytes (i.e. %00, or 0x00 in hexadecimal) to URLs. This makes web servers return random/unwanted data, which might be useful to the attacker. Easily prevented with sanity checks on input.
Network Interface Card (NIC) — Hardware that helps a device connect to a network.
Network Address Translation (NAT) — Translates your local IP address into a global IP address, e.g. your local IP might be 192.168.1.4, but to access the internet you need a global IP address (from your router).
Netcat — Simple but powerful tool that can view and record data on TCP or UDP network connections. Since it is not actively maintained, Ncat is preferred.
Nikto — A popular web application scanner that checks for over 6,700 vulnerabilities, including dangerous server configurations and outdated web server software.
Nessus — A commercial vulnerability scanner that provides a detailed list of vulnerabilities based on scan results.
Packet — A unit of data sent and received by systems over a network. Contains information like the source IP, destination IP, protocol, and more.
Password Cracking — Cracking an encrypted password using tools like John the Ripper when you don’t have access to the key.
Password Sniffing — Performing man-in-the-middle attacks using tools like Wireshark to find password hashes.
Patch — A software update released by a vendor to fix a bug or vulnerability in a software system.
Phishing — Building fake websites that look remarkably similar to legitimate websites (like Facebook) in order to capture sensitive information.
Ping Sweep — A technique that tries to ping a system to see if it is alive on the network.
Public Key Cryptography — Encryption mechanism that uses a pair of keys, one private and one public. The sender encrypts a message using your public key, which you can then decrypt using your private key.
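A toy Python sketch of the idea, using tiny textbook primes (illustration only — real keys are 2048+ bits, and you should never roll your own crypto):

```python
p, q = 61, 53                 # tiny primes, for illustration only
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent = modular inverse of e

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # only the private key holder can decrypt
print(recovered == message)        # True
```

The asymmetry is the point: the public key (e, n) can be shared with anyone, while recovering d from it is infeasible for large keys.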
Public Key Infrastructure — A public key infrastructure (PKI) is a system to create, store, and distribute digital certificates. This helps sysadmins verify that a particular public key belongs to a certain authorized entity.
Personally Identifiable Information (PII) — Any information that identifies a user, e.g. address, phone number, etc.
Payload — A piece of code (usually malicious) that performs a specific function, e.g. a keylogger.
PCI-DSS — Payment Card Industry Data Security Standard. If you are working with customer credit cards, you should be PCI-DSS compliant.
Ransomware — Malware that locks your system using encryption and demands payment for the key to unlock it.
Rainbow Table — A table of precalculated password hashes that helps you crack the target’s password hashes easily.
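A minimal Python sketch of the precomputation idea (a real rainbow table uses hash chains to trade time for space, but the lookup principle is the same):

```python
import hashlib

common_passwords = ["password", "123456", "qwerty", "letmein"]

# Precomputed lookup: hash -> plaintext
table = {hashlib.sha256(p.encode()).hexdigest(): p for p in common_passwords}

leaked_hash = hashlib.sha256(b"letmein").hexdigest()  # from a breached database
print(table.get(leaked_hash))  # recovers "letmein" instantly, no brute force
```

This is why passwords should be salted before hashing: a unique salt per user makes every precomputed table useless.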
Reconnaissance — Finding data about the target using methods such as Google searches, social media, and other publicly available information.
Reverse Engineering — Analyzing a finished piece of software to work out how it functions, often in order to rebuild or modify it.
Role-Based Access — Granting a set of authorizations to a role rather than to individual users, e.g. the “managers” role will have one set of permissions while the “developers” role will have a different set.
Rootkit — Malware that provides unauthorized users with admin privileges. Rootkits often include keyloggers, password sniffers, etc.
Scanning — Sending packets to a system and gaining information about the target from the packets received. This typically involves the TCP 3-way handshake.
Secure Shell (SSH) — Protocol that establishes an encrypted communication channel between a client and a server. You can use SSH to log in to remote servers and perform system administration.
Session — A session is a duration in which a communication channel is open between a client and a server. eg. the time between logging into a website and logging out is a session.
Session Hijacking — Taking over someone else’s session by pretending to be the client. This is achieved by stealing cookies and session tokens, e.g. after you authenticate with your bank, an attacker can steal your session to perform financial transactions on your behalf.
Social Engineering — The art of tricking people into doing something that is not in their best interest, e.g. convincing someone to reveal their password over the phone.
Secure Hashing Algorithm (SHA) — Widely used family of hashing algorithms. SHA-256 is considered highly secure compared to earlier versions like SHA-1. Hashing is also one-way, unlike encryption: once you hash a message, you can only compare it with another hash; you cannot reverse it to recover the original.
Sniffing — Performing man-in-the-middle attacks on networks. Includes wired and wireless networks.
Spam — Unwanted digital communication, including email, social media messages, etc. Usually tries to get you into a malicious website.
Syslog — System logging protocol, used by system administrators to capture all activity on a server. Usually stored on a separate server to retain logs in the event of an attack.
Secure Sockets Layer (SSL) — Establishes an encrypted tunnel between the client and server (now superseded by TLS), e.g. when you submit a password to Facebook, only the encrypted text is visible to sniffers, not your original password.
Snort — Lightweight open-source Intrusion Detection System for Windows and Linux.
SQL Injection — A type of attack that can be performed on web applications using SQL databases. Happens when the site does not validate user input.
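A self-contained Python sketch with sqlite3 showing both the vulnerable pattern and the parameterized fix (table and payload are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: string interpolation lets the payload rewrite the query
leaked = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()
print(len(leaked))  # 2 -- every user is returned, not just alice

# Safe: a parameterized query treats the whole input as a literal value
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(safe))    # 0 -- no user is literally named "alice' OR '1'='1"
```

The fix is not cleverer escaping but parameterization: the driver sends the input as data, so it can never become part of the SQL itself.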
Trojan — A malware hidden within useful software. eg. a pirated version of MS office can contain trojans that will execute when you install and run the software.
Traceroute — Tool that maps the route a packet takes between the source and destination.
Tunnel — Creating a private encrypted channel between two or more computers. Only allowed devices on the network can communicate through this tunnel.
Virtual Private Network — A subnetwork created within a network, mainly to encrypt traffic. eg. connecting to a VPN to access a blocked third-party site.
Virus — A piece of code created to perform a specific action on target systems. A virus has to be triggered to execute, e.g. by autorun when a USB drive is inserted.
Vulnerability — A point of attack caused by a bug or poor system design, e.g. lack of input validation allows attackers to perform SQL injection attacks on a website.
War Driving — Driving through a neighborhood looking for unprotected Wi-Fi networks to attack.
WHOIS — Helps to find information about domains and IP addresses: their owners, DNS records, etc.
Wireshark — Open source program to analyze network traffic and filter requests and responses for network debugging.
Worm — A malware program capable of replicating itself and spreading to other connected systems, e.g. a worm used to build a botnet. Unlike viruses, worms don’t need a trigger.
Wireless Application Protocol (WAP) — Protocol that helps mobile devices connect to the internet.
Web Application Firewall (WAF) — A firewall for web applications that helps protect against cross-site scripting, denial-of-service attacks, etc.
Zero-Day — A newly discovered vulnerability in a system for which there is no patch yet. Zero-day vulnerabilities are the most dangerous kind, since there is not yet a fix to protect against them.
Zombie — A compromised computer, controlled by an attacker. A group of zombies is called a Botnet.
Increased distributed working: With organizations embracing work from home, incremental risks have been observed due to a surge in Bring Your Own Device (BYOD), Virtual Private Network (VPN), Software as a Service (SaaS), O365, and shadow IT usage, all of which can be exploited by various Man-in-the-Middle (MITM) attack vectors.
Reimagined business models: Organizations are envisioning new business opportunities, modes of working, and renewed investment priorities. With reduced workforce capacity, compounded by skill shortages, staff focused on business-as-usual tasks can be victimized via social engineering.
Digital transformation and new digital infrastructure: As organizations across the industrial and supply-chain sectors change how they operate, security is often deprioritized. Hardening industrial systems and cloud-based infrastructure is crucial, as cyber threats exploit these challenges via vulnerabilities in unpatched systems.
With an extreme volume of digital communication, security awareness drops and susceptibility rises. Malicious actors use phishing techniques to exploit such situations.
Re-evaluate your approach to cyber
Organizations should, at a minimum, reflect on the following scenarios and consider:
Which cyber scenarios is your organization preparing for, or already prepared for?
Is there a security scenario that your organization is currently ignoring but shouldn’t be?
What would your organization need to do differently in order to win in each of the identified cyber scenarios?
What capabilities, cybersecurity partnerships, and workforce strategies do you need to strengthen?
To address the outcomes of the above scenarios, the following measures are key:
Inoculation through education: Educate and/or remind your employees about:
Your organization’s defenses: remote-work cybersecurity policies and best practices
Potential threats to your organization and how they attack, with a specific focus on social engineering scams and identifying COVID-19 phishing campaigns
Assisting remote employees with enabling MFA across the organization’s assets
Adjust your defenses: Gather cyber threat intelligence and execute a patching sprint:
Set intelligence collection priorities
Share threat intelligence with other organizations
Use intelligence to move at the speed of the threat
Focus on known tactics, such as phishing and C-suite fraud.
Prioritize unpatched critical systems and common vulnerabilities.
Enterprise recovery: If the worst happens and an attack is successful, follow a staged approach to recovering critical business operations which may include tactical items such as:
Protect key systems through isolation
Fully understand and contain the incident
Eradicate any malware
Implement appropriate protection measures to improve overall system posture
Identify and prioritize the recovery of key business processes to deliver operations
Implement a prioritized recovery plan
Cyber Preparedness and Response: It is critical to optimize detection capability; re-evaluating the detection strategy against the changing threat landscape is crucial. Some key trends include:
Secure and monitor your cloud environments and remote working applications
Increase monitoring to identify threats from shadow IT
Analyze behavior patterns to improve detection content
Finding the right cybersecurity partner: To be ready to respond, identify a partner with experience and skills in social engineering, cyber response, cloud security, and data security.
Critical actions to address
At this point, as organizations set their direction towards the social enterprise, there is an unprecedented opportunity to lead with cyber discussions and initiatives. Organizations should immediately gain an understanding of newly introduced risks and the relevant controls by:
Getting a seat at the table
Understanding the risk prioritization:
Remote workforce/technology performance
Operational and financial implications
Emerging insider and external threats
Business continuity capabilities
Assessing cyber governance and security awareness in the new operating environment
Assessing the highest areas of risk and recommending practical mitigation strategies that minimize the impact on constrained resources
Keeping leadership and the Board apprised of the ever-changing risk profile
Given the complexity of the pandemic and associated cyber challenges, there is reason to believe that the recovery phase post-COVID-19 will require unprecedented levels of cyber orchestration, communication, and changing of existing configurations across the organization.
CyberSecurity: Protect Yourself on Internet
Use two-factor authentication when possible. If not possible, use strong, unique passwords that are difficult to guess or crack. This means avoiding passwords that use common words, your birthdate, your SSN, or the names and birthdays of close associates.
Make sure the devices you are using are up-to-date and have some form of reputable anti-virus/malware software installed.
Never open emails, attachments, or programs unless they are from a trusted source (i.e., a source that can be verified). Also disregard email or web requests that ask you to share your personal or account information unless you are sure the request and requester are legitimate.
Try to use only websites that are encrypted. To check, look for the trusted security lock symbol before the website address and/or the extra “s” at the end of “http” (i.e., https) in the URL bar.
Avoid using an administrator level account when using the internet.
Only enable cookies when absolutely required by a website.
Make social media accounts private or don’t use social media at all.
Consider using VPNs and encrypting any folders/data that contains sensitive data.
Stay away from using unprotected public Wi-Fi networks.
Social media is genetically engineered in Area 51 to harvest as much data from you as possible. Far beyond just having your name and age and photograph.
Never use the same username twice anywhere, or the same password twice anywhere.
Use Tor/Tor Browser whenever possible. It’s not perfect, but it is a decent default attempt at anonymity.
Use a VPN. Using VPN and Tor can be even better.
Search engines like DuckDuckGo offer better privacy (assuming they’re honest, which you can never be certain of) than Google which, like social media, works extremely hard to harvest every bit of data from you that they can.
Never give your real details anywhere. Certainly not things like your name or pictures of yourself, but even less obvious things like your age or country of origin. Even things like how you spell words and grammatical quirks can reveal where you’re from.
Erase your comments from websites after a few days/weeks. It might not erase them from the website’s servers, but it will at least remove them from public view. If you don’t, you can forget they exist and you never know how or when they can and will be used against you.
With Reddit, you can create an account fairly easily over Tor using no real information. Also, regularly nuke your accounts in case Reddit or some crazy stalker is monitoring your posts to build a profile of who you might be. Source: Reddit
Adrian Lamo – gained media attention for breaking into several high-profile computer networks, including those of The New York Times, Yahoo!, and Microsoft, culminating in his 2003 arrest. Lamo was best known for reporting U.S. soldier Chelsea Manning to Army criminal investigators in 2010 for leaking hundreds of thousands of sensitive U.S. government documents to WikiLeaks.
Albert Gonzales – an American computer hacker and computer criminal who is accused of masterminding the combined credit card theft and subsequent reselling of more than 170 million card and ATM numbers from 2005 to 2007: the biggest such fraud in history.
Barnaby Jack – was a New Zealand hacker, programmer and computer security expert. He was known for his presentation at the Black Hat computer security conference in 2010, during which he exploited two ATMs and made them dispense fake paper currency on the stage. Among his other most notable works were the exploitation of various medical devices, including pacemakers and insulin pumps.
Gary McKinnon – a Scottish systems administrator and hacker who was accused in 2002 of perpetrating the “biggest military computer hack of all time,” although McKinnon himself states that he was merely looking for evidence of free energy suppression and a cover-up of UFO activity and other technologies potentially useful to the public. 👽🛸
George Hotz aka geohot – “The former Facebook engineer took on the giants of the tech world by developing the first iPhone carrier-unlock techniques,” says Mark Greenwood, head of data science at Netacea, “followed a few years later by reverse engineering Sony’s PlayStation 3, clearing the way for users to run their own code on locked-down hardware. George sparked an interest in a younger generation frustrated with hardware and software restrictions being imposed on them and led to a new scene of opening up devices, ultimately leading to better security and more openness.”
Guccifer 2.0 – a persona which claimed to be the hacker(s) that hacked into the Democratic National Committee (DNC) computer network and then leaked its documents to the media, the website WikiLeaks, and a conference event.
Hector Monsegur (known as Sabu) – an American computer hacker and co-founder of the hacking group LulzSec. Monsegur became an informant for the FBI, working with the agency for over ten months to help identify other hackers from LulzSec and related groups.
Jacob Appelbaum – an American independent journalist, computer security researcher, artist, and hacker. He has been employed by the University of Washington, and was a core member of the Tor project, a free software network designed to provide online anonymity.
Jeanson James Ancheta – On May 9, 2006, Ancheta (born 1985) became the first person to be charged with controlling large numbers of hijacked computers, or botnets.
Jeremy Hammond – He was convicted of computer fraud in 2013 for hacking the private intelligence firm Stratfor and releasing data to the whistle-blowing website WikiLeaks, and sentenced to 10 years in prison.
John Draper – also known as Captain Crunch, Crunch or Crunchman (after the Cap’n Crunch breakfast cereal mascot), is an American computer programmer and former legendary phone phreak.
Kimberley Vanvaeck (known as Gigabyte) – a virus writer from Belgium known for a long-standing dispute with the internet security firm Sophos and one of its employees, Graham Cluley. Vanvaeck wrote several viruses, including Quis, Coconut and YahaSux (also called Sahay), as well as the Sharp virus (also called “Sharpei”), credited as being the first virus written in C#.
Lauri Love – a British activist charged with stealing data from United States Government computers including the United States Army, Missile Defense Agency, and NASA via computer intrusion.
Michael Calce (known as MafiaBoy) – a security expert from Île Bizard, Quebec who launched a series of highly publicized denial-of-service attacks in February 2000 against large commercial websites, including Yahoo!, Fifa.com, Amazon.com, Dell, Inc., E*TRADE, eBay, and CNN.
Mudge – Peiter C. Zatko, better known as Mudge, is a network security expert, open source programmer, writer, and a hacker. He was the most prominent member of the high-profile hacker think tank the L0pht as well as the long-lived computer and culture hacking cooperative the Cult of the Dead Cow.
PRAGMA – Also known as Impragma or PHOENiX, PRAGMA is the author of Snipr, one of the most prolific credential stuffing tools available online.
The 414s – The 414s were a group of computer hackers who broke into dozens of high-profile computer systems, including ones at Los Alamos National Laboratory, Sloan-Kettering Cancer Center, and Security Pacific Bank, in 1982 and 1983.
The Shadow Brokers – is a hacker group who first appeared in the summer of 2016. They published several leaks containing hacking tools from the National Security Agency (NSA), including several zero-day exploits. Specifically, these exploits and vulnerabilities targeted enterprise firewalls, antivirus software, and Microsoft products. The Shadow Brokers originally attributed the leaks to the Equation Group threat actor, who have been tied to the NSA’s Tailored Access Operations unit.
The Strange History of Ransomware
The first ransomware virus predates e-mail, even the Internet as we know it, and was distributed on floppy disk by the postal service. It sounds quaint, but in some ways this horse-and-buggy version was even more insidious than its modern descendants. Contemporary ransomware tends to bait victims using legitimate-looking email attachments — a fake invoice from UPS, or a receipt from Delta airlines. But the 20,000 disks dispatched to 90 countries in December of 1989 were masquerading as something far more evil: AIDS education software.
How to protect sensitive data for its entire lifecycle in AWS
You can protect data in-transit over individual communications channels using transport layer security (TLS), and at-rest in individual storage silos using volume encryption, object encryption or database table encryption. However, if you have sensitive workloads, you might need additional protection that can follow the data as it moves through the application stack. Fine-grained data protection techniques such as field-level encryption allow for the protection of sensitive data fields in larger application payloads while leaving non-sensitive fields in plaintext. This approach lets an application perform business functions on non-sensitive fields without the overhead of encryption, and allows fine-grained control over what fields can be accessed by what parts of the application. Read more here…
In this blog, we talk about big data and data analytics; we also give you the latest top 20 AWS Certified Data Analytics – Specialty questions and answers.
The AWS Certified Data Analytics – Specialty (DAS-C01) examination is intended for individuals who perform in a data analytics-focused role. This exam validates an examinee’s comprehensive understanding of using AWS services to design, build, secure, and maintain analytics solutions that provide insight from data.
The AWS Certified Data Analytics – Specialty (DAS-C01) covers the following domains:
Domain 1: Collection 18%
Domain 2: Storage and Data Management 22%
Domain 3: Processing 24%
Domain 4: Analysis and Visualization 18%
Domain 5: Security 18%
Below are the top 20 AWS Certified Data Analytics – Specialty questions, answers, and references.
Question 1: What combination of services do you need for the following requirements: accelerating petabyte-scale data transfers, loading streaming data, and creating scalable, private connections? Select the correct answer order.
A) Snowball, Kinesis Firehose, Direct Connect
B) Data Migration Services, Kinesis Firehose, Direct Connect
C) Snowball, Data Migration Services, Direct Connect
D) Snowball, Direct Connection, Kinesis Firehose
AWS has many options to help get data into the cloud, including secure devices like AWS Import/Export Snowball to accelerate petabyte-scale data transfers, Amazon Kinesis Firehose to load streaming data, and scalable private connections through AWS Direct Connect.
Question 3: There is a five-day car rally race across Europe. The race coordinators are using a Kinesis stream and IoT sensors to monitor the movement of the cars. Each car has a sensor and data is getting back to the stream with the default stream settings. On the last day of the rally, data is sent to S3. When you go to interpret the data in S3, there is only data for the last day and nothing for the first 4 days. Which of the following is the most probable cause of this?
A) You did not have versioning enabled and would need to create individual buckets to prevent the data from being overwritten.
B) Data records are only accessible for a default of 24 hours from the time they are added to a stream.
C) One of the sensors failed, so there was no data to record.
D) You needed to use EMR to send the data to S3; Kinesis Streams are only compatible with DynamoDB.
Streams support changes to the data record retention period of your stream. An Amazon Kinesis stream is an ordered sequence of data records, meant to be written to and read from in real-time. Data records are therefore stored in shards in your stream temporarily. The period from when a record is added to when it is no longer accessible is called the retention period. An Amazon Kinesis stream stores records for 24 hours by default, up to 168 hours.
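As a hedged sketch of the fix (assumes boto3 is installed and AWS credentials are configured; the stream name is a placeholder), the race coordinators could have avoided losing the first four days by raising the retention period up front:

```python
def extend_retention(stream_name: str, hours: int = 168) -> None:
    """Raise a Kinesis stream's retention period from the 24-hour default."""
    import boto3  # assumed installed, with AWS credentials configured
    kinesis = boto3.client("kinesis")
    kinesis.increase_stream_retention_period(
        StreamName=stream_name,
        RetentionPeriodHours=hours,  # 168 hours = the 7-day maximum cited above
    )

# extend_retention("rally-telemetry")  # hypothetical stream name
```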
Question 4: A publisher website captures user activity and sends clickstream data to Amazon Kinesis Data Streams. The publisher wants to design a cost-effective solution to process the data to create a timeline of user activity within a session. The solution must be able to scale depending on the number of active sessions. Which solution meets these requirements?
A) Include a variable in the clickstream data from the publisher website to maintain a counter for the number of active user sessions. Use a timestamp for the partition key for the stream. Configure the consumer application to read the data from the stream and change the number of processor threads based upon the counter. Deploy the consumer application on Amazon EC2 instances in an EC2 Auto Scaling group.
B) Include a variable in the clickstream to maintain a counter for each user action during their session. Use the action type as the partition key for the stream. Use the Kinesis Client Library (KCL) in the consumer application to retrieve the data from the stream and perform the processing. Configure the consumer application to read the data from the stream and change the number of processor threads based upon the counter. Deploy the consumer application on AWS Lambda.
C) Include a session identifier in the clickstream data from the publisher website and use as the partition key for the stream. Use the Kinesis Client Library (KCL) in the consumer application to retrieve the data from the stream and perform the processing. Deploy the consumer application on Amazon EC2 instances in an EC2 Auto Scaling group. Use an AWS Lambda function to reshard the stream based upon Amazon CloudWatch alarms.
D) Include a variable in the clickstream data from the publisher website to maintain a counter for the number of active user sessions. Use a timestamp for the partition key for the stream. Configure the consumer application to read the data from the stream and change the number of processor threads based upon the counter. Deploy the consumer application on AWS Lambda.
Partitioning by the session ID will allow a single processor to process all the actions for a user session in order. An AWS Lambda function can call the UpdateShardCount API action to change the number of shards in the stream. The KCL will automatically manage the number of processors to match the number of shards. Amazon EC2 Auto Scaling will assure the correct number of instances are running to meet the processing load.
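A hedged sketch of the resharding piece (names are placeholders; assumes boto3 is installed with AWS credentials configured). A Lambda function triggered by a CloudWatch alarm could call:

```python
def reshard_stream(stream_name: str, target_shards: int) -> None:
    """Change a Kinesis stream's shard count; KCL consumers adapt automatically."""
    import boto3  # assumed installed, with AWS credentials configured
    kinesis = boto3.client("kinesis")
    kinesis.update_shard_count(
        StreamName=stream_name,
        TargetShardCount=target_shards,
        ScalingType="UNIFORM_SCALING",  # the only supported scaling type
    )
```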
Question 5: Your company has two batch processing applications that consume financial data about the day’s stock transactions. Each transaction needs to be stored durably and guarantee that a record of each application is delivered so the audit and billing batch processing applications can process the data. However, the two applications run separately and several hours apart and need access to the same transaction information. After reviewing the transaction information for the day, the information no longer needs to be stored. What is the best way to architect this application?
A) Use SQS for storing the transaction messages; when the billing batch process performs first and consumes the message, write the code in a way that does not remove the message after consumed, so it is available for the audit application several hours later. The audit application can consume the SQS message and remove it from the queue when completed.
B) Use Kinesis to store the transaction information. The billing application will consume data from the stream and the audit application can consume the same data several hours later.
C) Store the transaction information in a DynamoDB table. The billing application can read the rows while the audit application will read the rows then remove the data.
D) Use SQS for storing the transaction messages. When the billing batch process consumes each message, have the application create an identical message and place it in a different SQS for the audit application to use several hours later.
SQS would make this more difficult: each message is intended for a single consumer and is deleted once processed, while here two separate applications need to read the same data several hours apart.
Kinesis is the best solution, as it allows multiple consumers to easily read the same records during the retention period, and the data does not need to persist beyond a full day.
Question 6: A company is currently using Amazon DynamoDB as the database for a user support application. The company is developing a new version of the application that will store a PDF file for each support case ranging in size from 1–10 MB. The file should be retrievable whenever the case is accessed in the application. How can the company store the file in the MOST cost-effective manner?
A) Store the file in Amazon DocumentDB and the document ID as an attribute in the DynamoDB table.
B) Store the file in Amazon S3 and the object key as an attribute in the DynamoDB table.
C) Split the file into smaller parts and store the parts as multiple items in a separate DynamoDB table.
D) Store the file as an attribute in the DynamoDB table using Base64 encoding.
Use Amazon S3 to store large attribute values that cannot fit in an Amazon DynamoDB item. Store each file as an object in Amazon S3 and then store the object path in the DynamoDB item.
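A hedged Python sketch of this pointer pattern (bucket, table, and attribute names are made up; assumes boto3 is installed with AWS credentials configured):

```python
def store_case_pdf(case_id: str, pdf_bytes: bytes) -> str:
    """Store the PDF in S3; keep only its object key in the DynamoDB item."""
    import boto3  # assumed installed, with AWS credentials configured
    key = f"support-cases/{case_id}.pdf"
    boto3.client("s3").put_object(
        Bucket="support-case-files", Key=key, Body=pdf_bytes)
    boto3.resource("dynamodb").Table("SupportCases").update_item(
        Key={"case_id": case_id},
        UpdateExpression="SET pdf_key = :k",
        ExpressionAttributeValues={":k": key},
    )
    return key
```

S3 storage is far cheaper per GB than DynamoDB, and DynamoDB items are capped at 400 KB anyway, so storing the 1–10 MB files directly is not an option.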
Question 7: Your client has a web app that emits multiple events to Amazon Kinesis Streams for reporting purposes. Critical events need to be immediately captured before processing can continue, but informational events do not need to delay processing. What solution should your client use to record these types of events without unnecessarily slowing the application?
A) Log all events using the Kinesis Producer Library.
B) Log critical events using the Kinesis Producer Library, and log informational events using the PutRecords API method.
C) Log critical events using the PutRecords API method, and log informational events using the Kinesis Producer Library.
D) Log all events using the PutRecords API method.
The PutRecords API call is synchronous: the application waits for the request to complete before continuing, so use it when critical events must finish logging before processing continues. The Kinesis Producer Library is asynchronous and can send many messages without slowing your application, which makes the KPL ideal for sending many non-critical events asynchronously.
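A hedged sketch of the synchronous side (stream and field names are placeholders; assumes boto3 is installed with AWS credentials configured):

```python
import json

def to_records(events):
    # Pure helper: shape events into the PutRecords request format
    return [
        {"Data": json.dumps(e).encode(), "PartitionKey": e["session_id"]}
        for e in events
    ]

def log_critical(events) -> None:
    """Blocks until Kinesis acknowledges the records (unlike the async KPL)."""
    import boto3  # assumed installed, with AWS credentials configured
    boto3.client("kinesis").put_records(
        StreamName="app-events",  # placeholder stream name
        Records=to_records(events),
    )
```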
Question 8: You work for a start-up that tracks commercial delivery trucks via GPS. You receive coordinates that are transmitted from each delivery truck once every 6 seconds. You need to process these coordinates in near real-time from multiple sources and load them into Elasticsearch without significant technical overhead to maintain. Which tool should you use to digest the data?
A) Amazon SQS
B) Amazon EMR
C) AWS Data Pipeline
D) Amazon Kinesis Firehose
Amazon Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service, enabling near real-time analytics with existing business intelligence tools and dashboards.
Question 9: A company needs to implement a near-real-time fraud prevention feature for its ecommerce site. User and order details need to be delivered to an Amazon SageMaker endpoint to flag suspected fraud. The amount of input data needed for the inference could be as much as 1.5 MB. Which solution meets the requirements with the LOWEST overall latency?
A) Create an Amazon Managed Streaming for Kafka cluster and ingest the data for each order into a topic. Use a Kafka consumer running on Amazon EC2 instances to read these messages and invoke the Amazon SageMaker endpoint.
B) Create an Amazon Kinesis Data Streams stream and ingest the data for each order into the stream. Create an AWS Lambda function to read these messages and invoke the Amazon SageMaker endpoint.
C) Create an Amazon Kinesis Data Firehose delivery stream and ingest the data for each order into the stream. Configure Kinesis Data Firehose to deliver the data to an Amazon S3 bucket. Trigger an AWS Lambda function with an S3 event notification to read the data and invoke the Amazon SageMaker endpoint.
D) Create an Amazon SNS topic and publish the data for each order to the topic. Subscribe the Amazon SageMaker endpoint to the SNS topic.
An Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster can deliver the messages with very low latency, and its maximum message size is configurable, so it can handle the 1.5 MB payload. Kinesis Data Streams is limited to 1 MB per record, Amazon SNS has an even smaller message size limit, and the Kinesis Data Firehose option adds buffering and S3 delivery steps that increase latency.
Question 10: You need to filter and transform incoming messages coming from a smart sensor you have connected with AWS. Once messages are received, you need to store them as time series data in DynamoDB. Which AWS service can you use?
A) IoT Device Shadow Service
D) IoT Rules Engine
The IoT Rules Engine can filter and transform incoming messages and route sensor data to AWS services such as DynamoDB.
Question 11: A media company is migrating its on-premises legacy Hadoop cluster with its associated data processing scripts and workflow to an Amazon EMR environment running the latest Hadoop release. The developers want to reuse the Java code that was written for data processing jobs for the on-premises cluster. Which approach meets these requirements?
A) Deploy the existing Oracle Java Archive as a custom bootstrap action and run the job on the EMR cluster.
B) Compile the Java program for the desired Hadoop version and run it using a CUSTOM_JAR step on the EMR cluster.
C) Submit the Java program as an Apache Hive or Apache Spark step for the EMR cluster.
D) Use SSH to connect the master node of the EMR cluster and submit the Java program using the AWS CLI.
A CUSTOM_JAR step can be configured to download a JAR file from an Amazon S3 bucket and execute it. Since the Hadoop versions are different, the Java application has to be recompiled.
Question 12: You currently have databases running on-site and in another data center off-site. What service allows you to consolidate to one database in Amazon?
A) AWS Kinesis
B) AWS Database Migration Service
C) AWS Data Pipeline
D) AWS RDS Aurora
AWS Database Migration Service can migrate your data to and from most of the widely used commercial and open source databases. It supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora. Migrations can be from on-premises databases to Amazon RDS or Amazon EC2, databases running on EC2 to RDS, or vice versa, as well as from one RDS database to another RDS database.
Question 13: An online retail company wants to perform analytics on data in large Amazon S3 objects using Amazon EMR. An Apache Spark job repeatedly queries the same data to populate an analytics dashboard. The analytics team wants to minimize the time to load the data and create the dashboard. Which approaches could improve the performance? (Select TWO.)
A) Copy the source data into Amazon Redshift and rewrite the Apache Spark code to create analytical reports by querying Amazon Redshift.
B) Copy the source data from Amazon S3 into Hadoop Distributed File System (HDFS) using s3distcp.
C) Load the data into Spark DataFrames.
D) Stream the data into Amazon Kinesis and use the Kinesis Connector Library (KCL) in multiple Spark jobs to perform analytical jobs.
E) Use Amazon S3 Select to retrieve the data necessary for the dashboards from the S3 objects.
C and E
One of the speed advantages of Apache Spark comes from loading data into immutable DataFrames, which can be accessed repeatedly in memory. Spark DataFrames organize distributed data into columns, which makes summaries and aggregates much quicker to calculate. Also, instead of loading an entire large Amazon S3 object, load only what is needed using Amazon S3 Select. Keeping the data in Amazon S3 avoids loading the large dataset into HDFS.
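As a hedged sketch of the S3 Select idea: through the AWS SDK for Python, `select_object_content` takes a SQL expression that S3 evaluates server-side, so only matching rows and columns leave the bucket. The bucket, key, and column names below are hypothetical.

```python
# Parameters for boto3's s3.select_object_content; only the matching rows
# and columns are returned, instead of the whole object.
select_params = {
    "Bucket": "example-analytics-bucket",   # hypothetical bucket name
    "Key": "orders/2021/orders.csv",        # hypothetical object key
    "ExpressionType": "SQL",
    "Expression": (
        "SELECT s.order_id, s.total FROM s3object s WHERE s.region = 'EU'"
    ),
    "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
    "OutputSerialization": {"JSON": {}},
}
# A real call would then be:
#   s3 = boto3.client("s3")
#   response = s3.select_object_content(**select_params)
```

The Spark job would then parse the returned subset rather than scanning the full object.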
Question 14: You have been hired as a consultant to provide a solution to integrate a client’s on-premises data center to AWS. The customer requires a 300 Mbps dedicated, private connection to their VPC. Which AWS tool do you need?
A) VPC peering
B) Data Pipeline
C) Direct Connect
Direct Connect will provide a dedicated and private connection to an AWS VPC.
Question 15: Your organization has a variety of services deployed on EC2 and needs to efficiently send application logs to a central system for processing and analysis. The team has determined it is best to use a managed AWS service to transfer the data from the EC2 instances into Amazon S3. Which solution should they use?
A) Install the AWS Direct Connect client on all EC2 instances and use it to stream the data directly to S3.
B) Leverage the Kinesis Agent to send data to Kinesis Data Streams and output that data to S3.
C) Ingest the data directly from S3 by configuring regular Amazon Snowball transactions.
D) Leverage the Kinesis Agent to send data to Kinesis Firehose and output that data to S3.
Kinesis Firehose is a managed solution; log files can be sent from EC2 to S3 by running the Kinesis Agent on the instances and pointing it at a Firehose delivery stream.
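A minimal Kinesis Agent configuration (`/etc/aws-kinesis/agent.json`) sketching this setup might look like the following; the log path and delivery stream name are placeholders, not values from the question:

```json
{
  "cloudwatch.emitMetrics": true,
  "flows": [
    {
      "filePattern": "/var/log/app/*.log",
      "deliveryStream": "my-firehose-stream"
    }
  ]
}
```

Each flow tails the files matching `filePattern` and ships new lines to the named Firehose delivery stream, which in turn writes batches to S3.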
Question 16: A data engineer needs to create a dashboard to display social media trends during the last hour of a large company event. The dashboard needs to display the associated metrics with a latency of less than 1 minute. Which solution meets these requirements?
A) Publish the raw social media data to an Amazon Kinesis Data Firehose delivery stream. Use Kinesis Data Analytics for SQL Applications to perform a sliding window analysis to compute the metrics and output the results to a Kinesis Data Streams data stream. Configure an AWS Lambda function to save the stream data to an Amazon DynamoDB table. Deploy a real-time dashboard hosted in an Amazon S3 bucket to read and display the metrics data stored in the DynamoDB table.
B) Publish the raw social media data to an Amazon Kinesis Data Firehose delivery stream. Configure the stream to deliver the data to an Amazon Elasticsearch Service cluster with a buffer interval of 0 seconds. Use Kibana to perform the analysis and display the results.
C) Publish the raw social media data to an Amazon Kinesis Data Streams data stream. Configure an AWS Lambda function to compute the metrics on the stream data and save the results in an Amazon S3 bucket. Configure a dashboard in Amazon QuickSight to query the data using Amazon Athena and display the results.
D) Publish the raw social media data to an Amazon SNS topic. Subscribe an Amazon SQS queue to the topic. Configure Amazon EC2 instances as workers to poll the queue, compute the metrics, and save the results to an Amazon Aurora MySQL database. Configure a dashboard in Amazon QuickSight to query the data in Aurora and display the results.
Question 17: A real estate company is receiving new property listing data from its agents through .csv files every day and storing these files in Amazon S3. The data analytics team created an Amazon QuickSight visualization report that uses a dataset imported from the S3 files. The data analytics team wants the visualization report to reflect the current data up to the previous day. How can a data analyst meet these requirements?
A) Schedule an AWS Lambda function to drop and re-create the dataset daily.
B) Configure the visualization to query the data in Amazon S3 directly without loading the data into SPICE.
C) Schedule the dataset to refresh daily.
D) Close and open the Amazon QuickSight visualization.
Datasets created using Amazon S3 as the data source are automatically imported into SPICE. The Amazon QuickSight console allows for the refresh of SPICE data on a schedule.
Question 18: You need to migrate data to AWS. It is estimated that the data transfer will take over a month via the current AWS Direct Connect connection your company has set up. Which AWS tool should you use?
A) Establish additional Direct Connect connections.
B) Use Data Pipeline to migrate the data in bulk to S3.
C) Use Kinesis Firehose to stream all new and existing data into S3.
D) Use AWS Snowball appliances to transfer the data in bulk to S3.
As a general rule, if it takes more than one week to upload your data to AWS using the spare capacity of your existing Internet connection, then you should consider using Snowball. For example, if you have a 100 Mbps connection that you can solely dedicate to transferring your data and need to transfer 100 TB of data, it takes more than 100 days to complete a data transfer over that connection. You can make the same transfer by using multiple Snowballs in about a week.
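The arithmetic behind that rule of thumb is worth making explicit; a quick back-of-the-envelope calculation:

```python
# Transfer time for 100 TB over a dedicated 100 Mbps link.
data_bits = 100 * 10**12 * 8   # 100 TB expressed in bits
link_bps = 100 * 10**6         # 100 Mbps
seconds = data_bits / link_bps
days = seconds / 86400
# ~92.6 days at perfect utilization; protocol overhead and contention
# push this past 100 days in practice.
print(f"{days:.1f} days")
```

At that rate, a fleet of Snowball devices (delivered, loaded locally, and shipped back in about a week) is clearly faster.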
Question 19: You currently have an on-premises Oracle database and have decided to leverage AWS and use Aurora. You need to do this as quickly as possible. How do you achieve this?
A) It is not possible to migrate an on-premises database to AWS at this time.
B) Use AWS Data Pipeline to create a target database, migrate the database schema, set up the data replication process, initiate the full load and a subsequent change data capture and apply, and conclude with a switchover of your production environment to the new database once the target database is caught up with the source database.
C) Use AWS Database Migration Services and create a target database, migrate the database schema, set up the data replication process, initiate the full load and a subsequent change data capture and apply, and conclude with a switch-over of your production environment to the new database once the target database is caught up with the source database.
D) Use AWS Glue to crawl the on-premises database schemas and then migrate them into AWS with Data Pipeline jobs.
DMS can efficiently support this sort of migration using the steps outlined. While AWS Glue can help you crawl schemas and store metadata on them inside of Glue for later use, it isn't the best tool for actually transitioning a database over to AWS itself. Similarly, while Data Pipeline is great for ETL and ELT jobs, it isn't the best option to migrate a database over to AWS.
Question 20: A financial company uses Amazon EMR for its analytics workloads. During the company’s annual security audit, the security team determined that none of the EMR clusters’ root volumes are encrypted. The security team recommends the company encrypt its EMR clusters’ root volume as soon as possible. Which solution would meet these requirements?
A) Enable at-rest encryption for EMR File System (EMRFS) data in Amazon S3 in a security configuration. Re-create the cluster using the newly created security configuration.
B) Specify local disk encryption in a security configuration. Re-create the cluster using the newly created security configuration.
C) Detach the Amazon EBS volumes from the master node. Encrypt the EBS volume and attach it back to the master node.
D) Re-create the EMR cluster with LZO encryption enabled on all volumes.
Local disk encryption can be enabled as part of a security configuration to encrypt root and storage volumes.
Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician. – Josh Wills
Data scientists apply sophisticated quantitative and computer science skills to both structure and analyze massive stores or continuous streams of unstructured data, with the intent to derive insights and prescribe action. – Burtch Works Data Science Salary Survey, May 2018
Data scientists are highly educated. With exceedingly rare exception, every data scientist holds at least an undergraduate degree. 91% of data scientists in 2018 held advanced degrees. The remaining 9% all held undergraduate degrees. Furthermore,
25% of data scientists hold a degree in statistics or mathematics,
20% have a computer science degree,
an additional 20% hold a degree in the natural sciences, and
18% hold an engineering degree.
The remaining 17% of surveyed data scientists held degrees in business, social science, or economics.
How Are Data Scientists Different From Data Analysts?
Broadly speaking, the roles differ in scope: data analysts build reports with narrow, well-defined KPIs. Data scientists often work on broader business problems without clear solutions. Data scientists live on the edge of the known and unknown.
We’ll leave you with a concrete example: A data analyst cares about profit margins. A data scientist at the same company cares about market share.
How Is Data Science Used in Medicine?
Data science in healthcare best translates to biostatistics. It can be quite different from data science in other industries as it usually focuses on small samples with several confounding variables.
How Is Data Science Used in Manufacturing?
Data science in manufacturing is vast; it includes everything from supply chain optimization to the assembly line.
What are data scientists paid?
Most people are attracted to data science for the salary, and it's true that data scientists garner high salaries compared to their peers. There is data to support this: the May 2018 edition of the Burtch Works Data Science Salary Survey reports annual salary statistics.
Note the above numbers do not reflect total compensation which often includes standard benefits and may include company ownership at high levels.
How will data science evolve in the next 5 years?
Will AI replace data scientists?
What is the workday like for a data scientist?
It’s common for data scientists across the US to work 40 hours weekly. While company culture does dictate different levels of work life balance, it’s rare to see data scientists who work more than they want. That’s the virtue of being an expensive resource in a competitive job market.
How do I become a Data Scientist?
The roadmap given to aspiring data scientists can be boiled down to three steps:
Earning an undergraduate and/or advanced degree in computer science, statistics, or mathematics,
Building their portfolio of SQL, Python, and R skills, and
Getting related work experience through technical internships.
All three require a significant time and financial commitment.
There used to be a saying around data science: the road into data science starts with two years of university-level math.
What Should I Learn? What Order Do I Learn Them?
This answer assumes your academic background ends with a US high school diploma.
Some follow up questions and answers:
Why Python first?
Python is a general-purpose language; R is used primarily by statisticians. In the likely scenario that you decide data science requires too much time, effort, and money, Python skills will be more valuable than R skills would have been. It's preparing you to fail, sure, but in the same way a savings account is preparing you to fail.
When do I start working with data?
You’ll start working with data when you’ve learned enough Python to do so. Whether you’ll have the tools to have any fun is a much more open-ended question.
How long will this take me?
Assuming self-study and average intelligence, 3-5 years from start to finish.
How Do I Learn Python?
If you don’t know the first thing about programming, start with MIT’s course in the curated list.
These modules are the standard tools for data analysis in Python: NumPy, pandas, and Matplotlib.
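As a minimal, hedged illustration of the kind of grouping-and-aggregating work these tools make easy, here using only the standard library and invented numbers so it runs anywhere:

```python
import statistics
from collections import defaultdict

# Toy dataset: (city, temperature) pairs standing in for a real data file.
readings = [("Austin", 31), ("Austin", 35), ("Boston", 22), ("Boston", 18)]

# Group readings by city, then compute a summary statistic per group --
# the same shape of analysis a pandas groupby/mean would express in one line.
by_city = defaultdict(list)
for city, temp in readings:
    by_city[city].append(temp)

means = {city: statistics.mean(temps) for city, temps in by_city.items()}
print(means)
```

With pandas the grouping, aggregation, and tabular display collapse into a few method calls, which is why it is the default tool for this work.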
R Inferno: learners with a CS background will appreciate this free handbook explaining how and why R behaves the way that it does.
How Do I Learn SQL?
Prioritize the basics of SQL: when to use functions like POW, SUM, and RANK, and the computational complexity of the different kinds of joins.
Concepts like relational algebra, when to use clustered/non-clustered indexes, etc. are useful, but (almost) never come up in interviews.
You absolutely do not need to understand administrative concepts like managing permissions.
Finally, there are numerous query engines and therefore numerous dialects of SQL. Use whichever dialect is supported in your chosen resource. There’s not much difference between them, so it’s easy to learn another dialect after you’ve learned one.
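One practical way to drill those basics is an in-memory SQLite database, since it ships with Python and its dialect is close enough to the others. The tables and rows here are invented for the exercise:

```python
import sqlite3

# In-memory toy database: enough to practice joins and GROUP BY.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER, name TEXT, dept_id INTEGER);
    CREATE TABLE departments (id INTEGER, name TEXT);
    INSERT INTO employees VALUES (1, 'Ada', 10), (2, 'Grace', 10), (3, 'Alan', 20);
    INSERT INTO departments VALUES (10, 'Research'), (20, 'Ops');
""")

# An inner join plus an aggregate: head count per department.
rows = conn.execute("""
    SELECT d.name, COUNT(*) AS headcount
    FROM employees e
    JOIN departments d ON d.id = e.dept_id
    GROUP BY d.name
    ORDER BY d.name
""").fetchall()
print(rows)  # [('Ops', 1), ('Research', 2)]
```

Swapping the `JOIN` for a `LEFT JOIN`, or adding a `HAVING` clause, is exactly the kind of variation worth practicing until it feels automatic.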
Fortunately (or unfortunately), calculus is the lament of many students, and so resources for it are plentiful. Khan Academy mimics lectures very well, and Paul’s Online Math Notes are a terrific reference full of practice problems and solutions.
Calculus, however, is not just calculus. For those unfamiliar with US terminology,
Calculus I is differential calculus.
Calculus II is integral calculus.
Calculus III is multivariable calculus.
Calculus IV is differential equations.
Differential and integral calculus are both necessary for probability and statistics, and should be completed first.
Multivariable calculus can be paired with linear algebra, but is also required.
Differential equations is where consensus falls apart. The short of it is: differential equations are all but necessary for mathematical modeling, but not everyone does mathematical modeling. It's another tool in the toolbox.
Probability is not friendly to beginners. Definitions are rooted in higher mathematics, notation varies from source to source, and solutions are frequently unintuitive. Probability may present the biggest barrier to entry in data science.
It’s best to pick a single primary source and a community for help. If you can spend the money, register for a university or community college course and attend in person.
Before you start coding, read through all the questions. This allows your unconscious mind to start working on problems in the background.
Start with the hardest problem first; when you hit a snag, move to a simpler problem before returning to the harder one.
Focus on passing all the test cases first, then worry about improving complexity and readability.
If you’re done and have a few minutes left, go get a drink and try to clear your head. Read through your solutions one last time, then submit.
It’s okay to not finish a coding challenge. Sometimes companies will create unreasonably tedious coding challenges with one-week time limits that require 5–10 hours to complete. Unless you’re desperate, you can always walk away and spend your time preparing for the next interview.
Remember, interviewing is a skill that can be learned, just like anything else. Hopefully, this article has given you some insight on what to expect in a data science interview loop.
The process also isn’t perfect and there will be times that you fail to impress an interviewer because you don’t possess some obscure piece of knowledge. However, with repeated persistence and adequate preparation, you’ll be able to land a data science job in no time!
What does the Airbnb data science interview process look like? [Coming soon]
What does the Facebook data science interview process look like? [Coming soon]
What does the Uber data science interview process look like? [Coming soon]
What does the Microsoft data science interview process look like? [Coming soon]
What does the Google data science interview process look like? [Coming soon]
What does the Netflix data science interview process look like? [Coming soon]
What does the Apple data science interview process look like? [Coming soon]
Real-life enterprise databases are orders of magnitude more complex than the “customers, products, orders” examples used as teaching tools. SQL as a language is actually, IMO, relatively simple (the DB administration component can get complex, but mostly data scientists aren’t doing that anyway). SQL is an incredibly important skill for any DS role, though.

I think when people emphasize SQL, what they really mean is the ability to write queries that interrogate the data and discover the nuances behind how it is collected and/or manipulated by an application before it is written to the DB. For example, is the employee’s phone number their current phone number, or does the database store a history of all previous phone numbers? These are critically important questions for understanding the nature of your data, and they don’t necessarily deal with statistics!

The level of syntax required to do this is not that sophisticated; you can get pretty far with knowledge of all the joins, group by/analytic functions, filtering, and nested queries. In many cases, the data is too large to just select * and dump into a CSV to load into pandas, so you start with SQL against the source.

In my mind it’s more important for “SQL skills” to know how to generate hypotheses (that will build up to answering your business question) that can be investigated via a query than it is to be a master of SQL’s syntax. Just my two cents, though!
The AWS Certified Advanced Networking – Specialty (ANS-C00) examination is intended for individuals who perform complex networking tasks. It validates advanced technical skills and experience in designing and implementing AWS and hybrid IT network architectures at scale.
The exam covers the following domains:
Domain 1: Design and Implement Hybrid IT Network Architectures at Scale – 23%
Domain 2: Design and Implement AWS Networks – 29%
Domain 3: Automate AWS Tasks – 8%
Domain 4: Configure Network Integration with Application Services – 15%
Domain 5: Design and Implement for Security and Compliance – 12%
Domain 6: Manage, Optimize, and Troubleshoot the Network – 13%
Below are the top 20 AWS Certified Advanced Networking – Specialty practice quiz questions, with answers and references.
Question 1: What is the relationship between private IPv4 addresses and Elastic IP addresses?
The relationship between private IPv4 addresses and Elastic IP addresses is one-to-one.
Question 2: A company’s on-premises network has an IP address range of 184.108.40.206/16. Only IPs within this network range can be used for inter-server communication. The IP address range 220.127.116.11/24 has been allocated for the cloud. A network engineer needs to design a VPC on AWS. The servers within the VPC should be able to communicate with hosts both on the internet and on-premises through a VPN connection. Which combination of configuration steps meet these requirements? (Select TWO.)
A) Set up the VPC with an IP address range of 18.104.22.168/24.
B) Set up the VPC with an RFC 1918 private IP address range (for example, 10.10.10.0/24). Set up a NAT gateway to do translation between 10.10.10.0/24 and 22.214.171.124/24 for all outbound traffic.
C) Set up a VPN connection between a virtual private gateway and an on-premises router. Set the virtual private gateway as the default gateway for all traffic. Configure the on-premises router to forward traffic to the internet.
D) Set up a VPN connection between a virtual private gateway and an on-premises router. Set the virtual private gateway as the default gateway for traffic destined to 126.96.36.199/24. Add a VPC subnet route to point the default gateway to an internet gateway for internet traffic.
E) Set up the VPC with an RFC 1918 private IP address range (for example, 10.10.10.0/24). Set the virtual private gateway to do a source IP translation of all outbound packets to 188.8.131.52/16.
A and C
The VPC needs to use a CIDR block in the assigned range (and be non-overlapping with the data center). All traffic not destined for the VPC is routed to the virtual private gateway (that route is assumed) and must then be forwarded to the internet when it arrives on-premises. B and E are incorrect because they are not in the assigned range (non-RFC 1918 addresses can be used in a VPC). D is incorrect because it directs traffic to the internet through the internet gateway.
Question 3: Tasks running on Amazon EC2 Container Service (Amazon ECS) can use which mode for container networking (allocating an elastic networking interface to each running task, providing a dynamic private IP address and internal DNS name)?
Tasks running on Amazon ECS can use the awsvpc mode for container networking.
Question 4: A network engineer needs to design a solution for an application running on an Amazon EC2 instance to connect to a publicly accessible Amazon RDS Multi-AZ DB instance in a different VPC and Region. Security requirements mandate that the traffic not traverse the internet. Which configuration will ensure that the instances communicate privately without routing traffic over the internet?
A) Create a peering connection between the VPCs and update the routing tables to route traffic between the VPCs. Enable DNS resolution support for the VPC peering connection. Configure the application to connect to the DNS endpoint of the DB instance.
B) Create a gateway endpoint to the DB instance. Update the routing tables in the application VPC to route traffic to the gateway endpoint.
C) Configure a transit VPC to route traffic between the VPCs privately. Configure the application to connect to the DNS endpoint of the DB instance.
D) Create a NAT gateway in the same subnet as the EC2 instances. Update the routing tables in the application VPC to route traffic through the NAT gateway to the DNS endpoint of the DB instance.
Configuring DNS resolution on the VPC peering connection will allow queries from the application VPC to resolve to the private IP of the DB instance and prevent routing over the internet. B is incorrect because Amazon RDS is not supported by gateway endpoints. C and D are incorrect because the database endpoint will resolve to a public IP and the traffic will go over the internet.
Question 5: Management has decided that your firm will implement an AWS hybrid architecture. Given that decision, which of the following is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS Cloud?
AWS Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS Cloud.
Question 6: A company has implemented a critical environment on AWS. For compliance purposes, a network engineer needs to verify that the Amazon EC2 instances are using a specific approved security group and belong to a specific VPC. The configuration history of the instances should be recorded and, in the event of any compliance issues, the instances should be automatically stopped. What should be done to meet these requirements?
A) Enable AWS CloudTrail and create a custom Amazon CloudWatch alarm to perform the required checks. When the CloudWatch alarm is in a failed state, trigger the stop this instance action to stop the noncompliant EC2 instance.
B) Configure a scheduled event with AWS CloudWatch Events to invoke an AWS Lambda function to perform the required checks. In the event of a noncompliant resource, invoke another Lambda function to stop the EC2 instance.
C) Configure an event with AWS CloudWatch Events for an EC2 instance state-change notification that triggers an AWS Lambda function to perform the required checks. In the event of a noncompliant resource, invoke another Lambda function to stop the EC2 instance.
D) Enable AWS Config and create custom AWS Config rules to perform the required checks. In the event of a noncompliant resource, use a remediation action to execute an AWS Systems Manager document to stop the EC2 instance.
AWS Config provides a detailed view of the configuration of AWS resources in a user’s AWS account. Using AWS Config rules with AWS Systems Manager Automation documents can automatically remediate noncompliant resources.
Question 8: A company is extending its on-premises data center to AWS. Peak traffic is expected to range between 1 Gbps and 2 Gbps. A network engineer must ensure that there is sufficient bandwidth between AWS and the data center to handle peak traffic. The solution should be highly available and cost effective. What should be implemented to address these needs?
A) Deploy a 10 Gbps AWS Direct Connect connection with an IPsec VPN backup.
B) Deploy two 1 Gbps AWS Direct Connect connections in a link aggregation group.
C) Deploy two 1 Gbps AWS Direct Connect connections in a link aggregation group to two different Direct Connect locations.
D) Deploy a 10 Gbps AWS Direct Connect connection to two different Direct Connect locations.
Two AWS Direct Connect connections with link aggregation groups in two different Direct Connect locations are required to provide sufficient bandwidth with high availability. If one Direct Connect location experiences a failure, the two Direct Connect connections in the second Direct Connect location will provide backup. All of the other options would be unable to handle the peak traffic if a connection was lost.
Question 10: A network engineer needs to limit access to the company’s Amazon S3 bucket to specific source networks. What should the network engineer do to accomplish this?
A) Create an ACL on the S3 bucket, limiting access to the CIDR blocks of the specified networks.
B) Create a bucket policy on the S3 bucket, limiting access to the CIDR blocks of the specified networks using a condition statement.
C) Create a security group allowing inbound access to the CIDR blocks of the specified networks and apply the security group to the S3 bucket.
D) Create a security group allowing inbound access to the CIDR blocks of the specified networks, create a S3 VPC endpoint, and apply the security group to the VPC endpoint.
An Amazon S3 bucket policy that uses a condition statement will support restricting access if the request originates from a specific range of IP addresses. A is incorrect because an S3 ACL does not support IP restrictions. C is incorrect because security groups cannot be applied to S3 buckets. D is incorrect because security groups cannot be applied to an S3 VPC endpoint.
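A bucket policy of that shape might look like the following sketch; the bucket name is hypothetical and the CIDR block is a documentation range standing in for the company's real networks:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideCorpNetwork",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "NotIpAddress": { "aws:SourceIp": ["203.0.113.0/24"] }
      }
    }
  ]
}
```

The `NotIpAddress` condition on `aws:SourceIp` denies any request that does not originate from the approved range, which is the standard pattern for source-network restrictions on S3.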
Question 12: A company’s compliance requirements specify that web application logs must be collected and analyzed to identify any malicious activity. A network engineer also needs to monitor for remote attempts to change the network interface of web instances. Which services and configurations will meet these requirements?
A) Install the Amazon CloudWatch Logs agent on the web instances to collect application logs. Use VPC Flow Logs to send data to CloudWatch Logs. Use CloudWatch Logs metric filters to define the patterns to look for in the log data.
B) Configure AWS CloudTrail to log all management and data events to a custom Amazon S3 bucket and Amazon CloudWatch Logs. Use VPC Flow Logs to send data to CloudWatch Logs. Use CloudWatch Logs metric filters to define the patterns to look for in the log data.
C) Configure AWS CloudTrail to log all management events to a custom Amazon S3 bucket and Amazon CloudWatch Logs. Install the Amazon CloudWatch Logs agent on the web instances to collect application logs. Use CloudWatch Logs Insights to define the patterns to look for in the log data.
D) Enable AWS Config to record all configuration changes to the web instances. Configure AWS CloudTrail to log all management and data events to a custom Amazon S3 bucket. Use Amazon Athena to define the patterns to look for in the log data stored in Amazon S3.
Web application logs are internal to the operating system, and Amazon CloudWatch Logs Insights can be used to collect and analyze the logs using the CloudWatch agent. AWS CloudTrail monitors all AWS API activity and can be used to monitor particular API calls to identify remote attempts to change the network interface of web instances.
Question 14: A company has an application that processes confidential data. The data is currently stored in an on-premises data center. A network engineer is moving workloads to AWS, and needs to ensure confidentiality and integrity of the data in transit to AWS. The company has an existing AWS Direct Connect connection. Which combination of steps should the network engineer perform to set up the most cost-effective connection between the on-premises data center and AWS? (Select TWO.)
A) Attach an internet gateway to the VPC.
B) Configure a public virtual interface on the AWS Direct Connect connection.
C) Configure a private virtual interface to the virtual private gateway.
D) Set up an IPsec tunnel between the customer gateway and a software VPN on Amazon EC2.
E) Set up a Site-to-Site VPN between the customer gateway and the virtual private gateway.
B and E
Setting up a VPN over an AWS Direct Connect connection will secure the data in transit. The steps to do so are: set up a public virtual interface and create the Site-to-Site VPN between the data center and the virtual private gateway using the public virtual interface. A is incorrect because it would send traffic over the public internet. C is not possible because a public virtual interface is needed to announce the VPN tunnel IPs. D is incorrect because it would not take advantage of the already existing Direct Connect connection.
Question 15: A site you are helping create must use Adobe Media Server and the Adobe Real-Time Messaging Protocol (RTMP) to stream media files. When it comes to AWS, an RTMP distribution must use which of the following as the origin?
An RTMP distribution must use an Amazon S3 bucket as the origin.
Question 16: A company is creating new features for its ecommerce website. These features will be deployed as microservices using different domain names for each service. The company requires the use of HTTPS for all its public-facing websites. The application requires the client’s source IP. Which combination of actions should be taken to accomplish this? (Select TWO.)
A) Use a Network Load Balancer to distribute traffic to each service.
B) Use an Application Load Balancer to distribute traffic to each service.
C) Configure the application to retrieve client IPs using the X-Forwarded-For header.
D) Configure the application to retrieve client IPs using the X-Forwarded-Host header.
E) Configure the application to retrieve client IPs using the PROXY protocol header.
B and C
An Application Load Balancer supports host-based routing, which is required to route traffic to different microservices based on the domain name. X-Forwarded-For is the correct request header to identify the client’s source IP address.
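For readers implementing this, pulling the client IP out of X-Forwarded-For is straightforward. A minimal Python sketch (the header value below is an illustrative documentation address; because the ALB appends entries, the left-most value is the original client):

```python
def client_ip(x_forwarded_for: str) -> str:
    """The ALB appends to X-Forwarded-For as the request passes through,
    so the original client is the left-most address in the list."""
    return x_forwarded_for.split(",")[0].strip()

# 203.0.113.0/24 is a reserved documentation range, used here as a stand-in.
print(client_ip("203.0.113.7, 10.0.1.25"))  # 203.0.113.7
```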
Question 18: A network engineer is architecting a high performance computing solution on AWS. The system consists of a cluster of Amazon EC2 instances that require low-latency communications between them. Which method will meet these requirements?
A) Launch instances into a single subnet with a size equal to the number of instances required for the cluster.
B) Create a cluster placement group. Launch Elastic Fabric Adapter (EFA)-enabled instances into the placement group.
C) Launch Amazon EC2 instances with the largest available number of cores and RAM. Attach Amazon EBS Provisioned IOPS (PIOPS) volumes. Implement a shared memory system across all instances in the cluster.
D) Choose an Amazon EC2 instance type that offers enhanced networking. Attach a 10 Gbps non-blocking elastic network interface to the instances.
B
Cluster placement groups and Elastic Fabric Adapters (EFAs) are recommended for high performance computing applications that benefit from low network latency, high network throughput, or both. A is incorrect because the size of a subnet has no impact on network performance. C is incorrect because an Amazon EBS volume cannot be shared between Amazon EC2 instances. D is only half the solution because enhanced networking improves the network performance of an individual EC2 instance but not the network path between instances.
Question 20: A company’s internal security team receives a request to allow Amazon S3 access from inside the corporate network. All external traffic must be explicitly allowed through the corporate firewalls. How can the security team grant this access?
A) Schedule a script to download the Amazon S3 IP prefixes from AWS developer forum announcements. Update the firewall rules accordingly.
B) Schedule a script to download and parse the Amazon S3 IP prefixes from the ip-ranges.json file. Update the firewall rules accordingly.
C) Schedule a script to perform a DNS lookup on Amazon S3 endpoints. Update the firewall rules accordingly.
D) Connect the data center to a VPC using AWS Direct Connect. Create routes that forward traffic from the data center to an Amazon S3 VPC endpoint.
B
The ip-ranges.json file contains the latest list of IP addresses used by AWS. AWS no longer posts IP prefixes in developer forum announcements. DNS lookups would not provide an exhaustive list of possible IP prefixes. D would require transitive routing, which is not possible.
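As a sketch of option B, here is how a script might filter the S3 prefixes out of the documented ip-ranges.json structure. The sample data below is illustrative and embedded inline; a real script would first download the file from https://ip-ranges.amazonaws.com/ip-ranges.json:

```python
import json

# Illustrative sample in the documented ip-ranges.json format.
sample = json.dumps({
    "syncToken": "1694000000",
    "prefixes": [
        {"ip_prefix": "52.216.0.0/15", "region": "us-east-1", "service": "S3"},
        {"ip_prefix": "3.5.0.0/20",    "region": "us-east-1", "service": "S3"},
        {"ip_prefix": "52.94.0.0/22",  "region": "us-east-1", "service": "EC2"},
    ],
})

def s3_prefixes(ip_ranges_json: str) -> list:
    """Return only the IP prefixes tagged with the S3 service."""
    data = json.loads(ip_ranges_json)
    return [p["ip_prefix"] for p in data["prefixes"] if p["service"] == "S3"]

print(s3_prefixes(sample))  # ['52.216.0.0/15', '3.5.0.0/20']
```

The resulting prefix list is what the scheduled script would push into the corporate firewall rules.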
Wi-Fi is a brand name for wireless networking standards. Wi-Fi lets devices communicate by sending and receiving radio waves.
In 1971, the University of Hawaii demonstrated the first wireless data network, known as ALOHAnet. In 1985, the US FCC opened the ISM radio bands for unlicensed transmissions. After 1985, other countries followed, and more people started experimenting. In 1997 and 1999, the IEEE ratified the first international wireless networking standards. They were called 802.11-1997, 802.11b, and 802.11a. The technology was amazing, but the names were not.
In 1999, the brand-consulting firm Interbrand created the logo and suggested Wi-Fi as the name. Wi-Fi was a pun on hi-fi, referring to high-fidelity audio. Wi-Fi was easier to remember than 802.11, and we’ve been stuck with the name since. The official name is Wi-Fi, but most people don’t capitalize it or include the hyphen. Wi-Fi, WiFi, Wifi, wifi, and 802.11 all refer to the same thing. In the early days, Wi-Fi was used as shorthand for Wireless Fidelity, but it isn’t officially short for anything. According to the Wi-Fi Alliance, Wi-Fi is Wi-Fi.
What does Wi-Fi do? How does Wi-Fi work?
Wi-Fi transmits data using microwaves, which are high-frequency radio waves. Wi-Fi is more complicated than FM radio, but the basic underlying technology is the same. They both encode information into radio waves, which are received and decoded. FM radio does this for sound; Wi-Fi does it for computer data. So how can we use radio waves to send sound or other information?
At a basic level, you can think of two people holding a jump rope. One person raises and lowers their arm quickly, creating a wave. With Wi-Fi, this person would represent your Wi-Fi router, or wireless access point. Keeping the same up and down motion is known as a carrier wave. The person on the other end is the client device, such as a laptop or cell phone. When a wireless client joins the network and senses the carrier wave, it starts listening and waits for small differences in the signal.
In our example, you can imagine feeling the jump rope going up and down, and then receiving a single motion to the right. That single motion to the right can be interpreted as a binary number 1. A motion to the left would be a binary 0. Chain enough 1’s and 0’s together and you can represent complicated things, like all the data on this webpage.
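Chaining 1's and 0's into meaningful data is easy to sketch in a few lines of Python. This toy uses 8 bits per ASCII character, which is how plain text is commonly represented:

```python
def to_bits(text: str) -> str:
    """Encode text as a string of 1s and 0s, 8 bits per ASCII character."""
    return "".join(format(ord(ch), "08b") for ch in text)

def from_bits(bits: str) -> str:
    """Decode a string of 1s and 0s back into text."""
    chunks = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return "".join(chr(int(b, 2)) for b in chunks)

bits = to_bits("Hi")
print(bits)             # 0100100001101001
print(from_bits(bits))  # Hi
```

Every left/right jump-rope motion in the analogy corresponds to one of those digits.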
It sounds like magic, but it’s not only Wi-Fi that works this way. Bluetooth, 4G, 5G, and most wireless transmissions work by manipulating waves to transfer electrical signals through the air. A deeper, better question than “How does Wi-Fi work?” is “How do wireless transmissions work?”
If you want a better answer, you need to have a basic understanding of a few things:
Fundamental physics of electricity and magnetism
Electromagnetic radiation, radio waves, and antennas
How wired networks transmit data
I tried my best to keep this understandable, and laid out in a way that makes sense. This stuff is complicated, and hard to explain. That is why there are so many bad explanations of how Wi-Fi works out there.
This isn’t going to be a light and breezy discussion. Each of these topics could be an entire college course, so forgive me for simplifying where possible. Use Wikipedia and other resources to fill in the gaps, or to clarify something I glossed over. As always, corrections and feedback are welcomed.
Let’s dive in the deep end and cover the physics first. If you’re not familiar with fundamental physics, Wikipedia is an amazing resource. The key terms highlighted in blue are links to Wikipedia articles which explain further.
Because visible light is an electromagnetic wave, we can see the sun and distant stars.
This is also how we heard Neil Armstrong say “One small step for man…” live from the moon.
The warmth you feel from sunlight is due to the radiant energy sunlight contains. All electromagnetic waves have radiant energy.
Examples of electromagnetic waves: Visible light, radio waves, microwaves, infrared, ultraviolet, X-rays, and gamma rays.
Wi-Fi is an example of a radio wave, specifically a microwave. Microwaves are high-frequency, higher-energy radio waves.
Electromagnetic waves come in a wide range of forms. The type of wave is categorized by wavelength and frequency.
Wavelength is a measure of the distance over which the wave’s shape repeats. In a typical continuous sine wave like Wi-Fi, every time a wave goes from peak to valley to peak, we call that a cycle. The distance it takes to complete one cycle is its wavelength.
Frequency is a measure of how many cycles the wave makes per second. We use Hertz (Hz) as the measure of frequency, 1 Hz is one cycle per second. The more common MHz and GHz are for millions, or billions, of cycles per second.
Imagine waves on a beach. On calm days the waves are small, and come in slowly. On a windy day the waves have more energy, come in faster, and have less distance between them. Higher energy, higher frequency, shorter wavelength. Unlike ocean waves, electromagnetic waves move at the speed of light. Since their speed is constant, their wavelength and frequency are inversely related: as wavelength goes up, frequency goes down. If you multiply the wavelength and frequency, you will always get the same value: the speed of light, the speed limit of the universe.
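You can check that relationship with a quick calculation: divide the speed of light by a frequency and you get the wavelength. A small Python sketch using common Wi-Fi and FM frequencies:

```python
C = 299_792_458  # speed of light in m/s

def wavelength(frequency_hz: float) -> float:
    """Wavelength in meters: wavelength * frequency = speed of light."""
    return C / frequency_hz

print(round(wavelength(2.4e9), 4))  # 2.4 GHz Wi-Fi: about 12.5 cm
print(round(wavelength(5.0e9), 4))  # 5 GHz Wi-Fi: about 6 cm
print(round(wavelength(100e6), 2))  # FM radio at 100 MHz: about 3 m
```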
You can graph all the various kinds of electromagnetic waves, with the lowest energy on the left, and the highest energy on the right. We call this the electromagnetic spectrum. I’m not going to cover the entire electromagnetic spectrum, since we are mainly interested in Wi-Fi’s microwaves, and how we can use them to send data wirelessly.
Starting from the left, we have the low-energy waves we call radio. Opinions vary, but I’m going with Wikipedia’s broad definition that radio waves cover from 30 Hz up to 300 GHz. Compared to the rest of the spectrum, radio’s wavelengths are long, its frequencies are low, and its energy is low. Within radio waves, there is a separate category we call microwaves.
Microwaves fall within the broader radio wave range. At a minimum, microwaves cover 3 GHz to 30 GHz, but some people say microwaves extend further than that. The specific range depends on who you ask, but generally you can think of microwaves as high-frequency radio waves.
Microwaves are used in microwave ovens, Bluetooth, Wi-Fi, your cell phone’s 4G or 5G connection, and lots of other wireless data transmissions. Their higher energy, shorter wavelength, and other properties make them better for high-bandwidth transfers than traditional, lower-powered radio waves.
All waves can be modulated by varying either the amplitude (strength), frequency or phase of the wave. This is what allows Wi-Fi, and any other wireless technology, to encode data in a wireless signal.
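As a toy illustration of phase modulation, here is a minimal binary phase-shift keying (BPSK) sketch in Python. The carrier frequency and sample counts are made-up demo values, not real Wi-Fi parameters; the point is that a 0 bit is just the carrier with its phase flipped:

```python
import math

CARRIER_HZ = 10        # toy carrier frequency; real Wi-Fi carriers are in the GHz
SAMPLES_PER_BIT = 100  # samples generated for each bit

def bpsk(bits):
    """Binary phase-shift keying: a 1 bit keeps the carrier phase,
    a 0 bit shifts it by pi, flipping the wave upside down."""
    signal = []
    for i, bit in enumerate(bits):
        phase = 0.0 if bit else math.pi
        for s in range(SAMPLES_PER_BIT):
            t = (i * SAMPLES_PER_BIT + s) / (SAMPLES_PER_BIT * CARRIER_HZ)
            signal.append(math.sin(2 * math.pi * CARRIER_HZ * t + phase))
    return signal

wave = bpsk([1, 0, 1])
# The 0 bit's segment is the 1 bit's segment inverted:
flipped = all(abs(a + b) < 1e-9
              for a, b in zip(wave[:SAMPLES_PER_BIT],
                              wave[SAMPLES_PER_BIT:2 * SAMPLES_PER_BIT]))
print(flipped)  # True
```

Amplitude and frequency modulation work the same way, except the receiver watches the wave's strength or its cycle rate instead of its phase.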
Wired Networking Transmissions
Before we cover how wireless data transmission works, we need to understand how wired data transmission works. In wired Ethernet networks, we use the copper inside Ethernet cables to transmit electrical signals. The conductive copper transfers the electrical current applied at one end, through the wire, to the other side.
A typical example would be a PC plugged into an Ethernet switch. If the PC wants to transfer information, it converts binary digits to electrical impulses. On, off, on, off. It sends a specific pattern of 1’s and 0’s across the wire, which is received on the other end. Ethernet is the neighborhood street of the networking world. It’s great for getting around the local area, but you’ll need to jump on the highway if you want to go further.
The highway of the networking world is fiber optic cabling. Just like how Ethernet transfers electrical current, we can do the same thing with lasers and fiber optic cables. Fiber optic cables are made of bendable glass, and they provide a path for light to be transmitted. Since fiber optics require lasers, special transceivers are required at each end. Compared to Ethernet, fiber optic cables have the advantage of longer range and generally higher capacity.
Fiber optic cabling carries a big portion of global Internet traffic. We have a wide array of fiber optic cabling over land, and sea. Those connections are what allow you to communicate with someone on the other side of the country, or the other side of the world. This is possible because these transmissions happen at the speed of light.
Here’s where things get fun. Just like how Ethernet and fiber optic cabling take an electrical impulse or beam of light from A to B, we can do the same thing with radios, antennas, and radio waves.
Radios, Antennas, and Wireless Networking
Now that we have a rough common understanding of electromagnetic waves and wired data transmission, how can we transmit data wirelessly? The key is an antenna. Antennas convert electricity into radio waves, and radio waves into electricity. A basic antenna consists of two metal rods connected to a receiver or transmitter.
When transmitting, a radio supplies an alternating electric current to the antenna, and the antenna radiates the energy as electromagnetic waves. When receiving, an antenna reverses this process. It intercepts some of the power of a radio wave to produce an electrical current, which is applied to a receiver, and amplified. Receiving antennas capture a fraction of the original signal, which is why distance, antenna design, and amplification are important for a successful wireless transmission.
If you have a properly tuned, powerful antenna, you can send a signal thousands of kilometers away, or even into space. It’s not just Wi-Fi; this is what makes satellite, radar, radio, and broadcast TV transmissions work too. Pretty cool, right?
How Wi-Fi Works: From Electricity to Information
An intricate pattern of electrons representing computer data flow into your Wi-Fi router, or wireless access point.
The access point sends that pattern of electrons to an antenna, generating an electromagnetic wave.
By alternating between a positive and a negative charge, the wire inside of an antenna creates an oscillating electric and magnetic field. These oscillating fields propagate out into space as electromagnetic waves, and can be received by anyone in range.
Typical Wi-Fi access points have omnidirectional antennas, which make the wave propagate in all horizontal directions.
This wave travels through the air and hits a receiving antenna which reverses the process, converting the radiant energy in the radio wave back into electricity.
The electric field of the incoming wave pushes electrons back and forth in the antenna, creating an alternating positive and negative charge. The oscillating field induces voltage and current, which flows to the receiver.
The signal is amplified and passed on, either to the client device or to an Ethernet connection for further routing.
A lot of the wave’s energy is lost along the way.
If the transmission was successful, the electrical impulses should be a good copy of what was sent.
If the transmission wasn’t successful, the data is resent.
When the information is received on the other end, it is treated the same as any other data on the network.
More Fun Wi-Fi Facts
Wi-Fi has redundancy built in. If you wanted to send “Hello”, your access point wouldn’t just send an H, an E, an L, an L, and an O. It sends multiple characters for each one, just like you would on a static-filled radio or phone call. It will use its equivalent of the phonetic alphabet to send “Hotel”, “Echo”, “Lima”, “Lima”, “Oscar”.
That way, even if you didn’t hear the entire transmission, you are still likely to be able to tell that “Hello” was being sent. The level of redundancy varies with signal strength and interference on the channel.
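The phonetic-alphabet analogy is easy to sketch in code. This toy encoder uses a hand-picked subset of the NATO alphabet, just the letters needed for the demo; real Wi-Fi redundancy uses error-correcting codes rather than code words, but the idea is the same:

```python
# A tiny subset of the NATO phonetic alphabet, enough for this demo.
PHONETIC = {"H": "Hotel", "E": "Echo", "L": "Lima", "O": "Oscar"}
REVERSE = {v: k for k, v in PHONETIC.items()}

def encode(word):
    """Spell a word out phonetically, adding redundancy to every letter."""
    return [PHONETIC[ch] for ch in word.upper()]

def decode(words):
    """Recover the original letters from the code words. Even a garbled
    'Ho-el' is closer to 'Hotel' than to any other code word, which is
    exactly the point of the redundancy."""
    return "".join(REVERSE[w] for w in words)

sent = encode("Hello")
print(sent)          # ['Hotel', 'Echo', 'Lima', 'Lima', 'Oscar']
print(decode(sent))  # HELLO
```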
If the signal strength is high, the access point and receiver are able to use a complicated modulation scheme, and encode a lot of data.
If you think about our jump rope analogy from earlier, rather than just left and right, the motion can be divided into quarters, eighths, or finer steps. The direction of the modulation can also be combined with its strength (amplitude) or its phase.
The most complex modulation in Wi-Fi 6 is 1024-QAM, which has 1024 unique combinations of amplitude and phase. This results in high throughput, but requires a very strong wireless signal and minimal interference to work effectively.
As your wireless signal weakens, complex modulation can no longer be decoded reliably. Both devices will step down to a less complex modulation scheme. This is why Wi-Fi slows down as you move away from the access point.
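The trade-off is easy to quantify: a modulation scheme with N constellation points carries log2(N) bits per symbol, which is why stepping down the ladder costs throughput. A quick sketch:

```python
import math

# Bits carried per symbol for common Wi-Fi modulation schemes.
# Higher orders need a cleaner signal; devices step down this
# ladder as signal quality drops.
schemes = {"BPSK": 2, "QPSK": 4, "16-QAM": 16, "64-QAM": 64,
           "256-QAM": 256, "1024-QAM": 1024}

for name, points in schemes.items():
    print(f"{name}: {int(math.log2(points))} bits per symbol")
```

1024-QAM's 10 bits per symbol versus BPSK's 1 is a 10x difference before channel width and other factors even come into play.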
First In a Series: Wi-Fi 101
I plan on writing a whole series of posts about Wi-Fi fundamentals which will cover various topics about Wi-Fi, how to improve your home network, and related issues. If there is something you want me to cover, leave a comment below.
The IEEE, an international standards body, defines the 802.11 standards that underlie Wi-Fi. They’re the reason we have standards with names like 802.11n, 802.11ac, or 802.11ax. The Wi-Fi Alliance has since given the major generations friendlier names: 802.11n is Wi-Fi 4, 802.11ac is Wi-Fi 5, and 802.11ax is Wi-Fi 6. With each generation, Wi-Fi gets better, and there are a lot of details to cover. I’ll cover that in a future post.
Heinrich Hertz, the first person to prove the existence of electromagnetic waves, did not realize the practical importance of his experiments. “It’s of no use whatsoever. This is just an experiment that proves Maestro Maxwell was right—we just have these mysterious electromagnetic waves that we cannot see with the naked eye. But they are there.” When asked about the applications of his discoveries, Hertz replied, “Nothing, I guess.” You can pay your respects to this legend by always capitalizing the H in MHz and GHz.
It takes about one second for a radio wave to travel from the Earth to the moon. It’s pretty amazing that over 50 years ago we had the technology to capture sound and images on the moon, turn them into electromagnetic waves, beam them back to Earth, and transmit them around the globe. I guess it’s pretty cool we put a human on the moon, too.
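You can verify the one-second figure with the average Earth-moon distance:

```python
C_KM_S = 299_792.458        # speed of light in km/s
MOON_DISTANCE_KM = 384_400  # average Earth-moon distance

one_way = MOON_DISTANCE_KM / C_KM_S
print(round(one_way, 2))      # about 1.28 seconds each way
print(round(2 * one_way, 2))  # about 2.56 seconds round trip
```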
If you keep adding energy to microwaves, you can end up in a unique part of the EM spectrum, visible light. Visible light’s wavelengths are measured in nanometers, and nanometers are really small: a human hair is around 75,000 nanometers wide. Visible light has a wavelength between 380 and 740 nanometers and a frequency between 405 and 790 THz (trillions of cycles per second). It’s hard to wrap your head around, but a lot of foundational physics is, too.
Your eye is reading this page because your computer screen is sending out electromagnetic radiation in the visible light portion of the electromagnetic spectrum. Differences in the wavelength cause your eye to interpret different areas of the page as different colors. A whole lot of brain magic and pattern recognition lets you interpret those color variations as letters and words. If I did my job as a writer, there should also be some meaning behind those words. All from some waves shooting out of your screen. Physics is amazing, Wi-Fi isn’t magic, and writing is telepathy.
Every once in a while I go onto the Deep Space Network site to check on Voyager 1 and 2, and just to see what’s going on in general. Currently the round-trip time to V1 is about 1.69 days with a data rate of 150 bits/second, although I’ve seen it as low as 6 bits/sec. V2 is a bit closer at a mere 11 billion miles or so. It’s amazing to me that the entire spacecraft runs on 4 watts. V1 and V2 have both departed the solar system.
NOTES/HINT1: The Shared Responsibility Model is the security model under which AWS provides secure infrastructure and services, while the customer is responsible for secure operating systems, platforms, and data.
Question 2: Which type of testing method is used to compare a control system to a test system, with the goal of assessing whether changes applied to the test system improve a particular metric compared to the control system?
NOTES/HINT2: The side-by-side testing method is used to compare a control system to a test system, with the goal of assessing whether changes applied to the test system improve a particular metric compared to the control system.
Question 4: Which pillar of the AWS Well-Architected Framework includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies?
NOTES/HINT4: Security is the pillar of the AWS Well-Architected Framework that includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
Question 8: Which pillar of the AWS Well-Architected Framework includes the ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve?
NOTES/HINT8: Performance efficiency is the pillar of the AWS Well-Architected Framework that includes the ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
Question 11: A company is migrating a legacy web application from a single server to multiple Amazon EC2 instances behind an Application Load Balancer (ALB). After the migration, users report that they are frequently losing their sessions and are being prompted to log in again. Which action should be taken to resolve the issue reported by users?
NOTES/HINT11: Legacy applications designed to run on a single server frequently store session data locally. When these applications are deployed on multiple instances behind a load balancer, user requests are routed to instances using the round robin routing algorithm. Session data stored on one instance would not be present on the others. By enabling sticky sessions on the ALB, cookies are used to track user requests and keep subsequent requests going to the same instance.
Question 12: An ecommerce company wants to lower costs on its nightly jobs that aggregate the current day’s sales and store the results in Amazon S3. The jobs run on multiple On-Demand Instances, and the jobs take just under 2 hours to complete. The jobs can run at any time during the night. If the job fails for any reason, it needs to be started from the beginning. Which solution is the MOST cost-effective based on these requirements?
A) Purchase Reserved Instances. B) Submit a request for a Spot block. C) Submit a request for all Spot Instances. D) Use a mixture of On-Demand and Spot Instances.
Question 13: A sysops team checks their AWS Personal Health Dashboard every week for upcoming AWS hardware maintenance events. Recently, a team member was on vacation and the team missed an event, which resulted in an outage. The team wants a simple method to ensure that everyone is aware of upcoming events without depending on an individual team member checking the dashboard. What should be done to address this?
A) Build a web scraper to monitor the Personal Health Dashboard. When new health events are detected, send a notification to an Amazon SNS topic monitored by the entire team.
B) Create an Amazon CloudWatch Events event based off the AWS Health service and send a notification to an Amazon SNS topic monitored by the entire team.
C) Create an Amazon CloudWatch Events event that sends a notification to an Amazon SNS topic monitored by the entire team to remind the team to view the maintenance events on the Personal Health Dashboard.
D) Create an AWS Lambda function that continuously pings all EC2 instances to confirm their health. Alert the team if this check fails.
NOTES/HINT13: The AWS Health service publishes Amazon CloudWatch Events. CloudWatch Events can trigger Amazon SNS notifications. This method requires neither additional coding nor infrastructure. It automatically notifies the team of upcoming events, and does not depend upon brittle solutions like web scraping.
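As a sketch of option B, the rule boils down to an event pattern that matches on the event source aws.health. The matcher and sample event below are simplified illustrations of the idea; a real rule is created through the CloudWatch Events/EventBridge console or API, with the team's SNS topic as the target:

```python
# The event pattern a CloudWatch Events (EventBridge) rule would use
# to match AWS Health events.
HEALTH_RULE_PATTERN = {"source": ["aws.health"]}

# Illustrative event in the general EventBridge envelope shape.
sample_event = {
    "source": "aws.health",
    "detail-type": "AWS Health Event",
    "detail": {"service": "EC2", "eventTypeCategory": "scheduledChange"},
}

def matches(pattern: dict, event: dict) -> bool:
    """True if every pattern key lists an allowed value for that event field."""
    return all(event.get(key) in allowed for key, allowed in pattern.items())

print(matches(HEALTH_RULE_PATTERN, sample_event))           # True
print(matches(HEALTH_RULE_PATTERN, {"source": "aws.ec2"}))  # False
```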
Question 14: An application running in a VPC needs to access instances owned by a different account and running in a VPC in a different AWS Region. For compliance purposes, the traffic must not traverse the public internet.
How should a sysops administrator configure network routing to meet these requirements?
A) Within each account, create a custom routing table containing routes that point to the other account’s virtual private gateway.
B) Within each account, set up a NAT gateway in a public subnet in its respective VPC. Then, using the public IP address from the NAT gateway, enable routing between the two VPCs.
C) From one account, configure a Site-to-Site VPN connection between the VPCs. Within each account, add routes in the VPC route tables that point to the CIDR block of the remote VPC.
D) From one account, create a VPC peering request. After an administrator from the other account accepts the request, add routes in the route tables for each VPC that point to the CIDR block of the peered VPC.
NOTES/HINT14: A VPC peering connection enables routing using each VPC’s private IP addresses as if they were in the same network. Traffic using inter-Region VPC peering always stays on the global AWS backbone and never traverses the public internet.
Question 16: A third-party service uploads objects to Amazon S3 every night. Occasionally, the service uploads an incorrectly formatted version of an object. In these cases, the sysops administrator needs to recover an older version of the object.
What is the MOST efficient way to recover the object without having to retrieve it from the remote service?
A) Configure an Amazon CloudWatch Events scheduled event that triggers an AWS Lambda function that backs up the S3 bucket prior to the nightly job. When bad objects are discovered, restore the backed up version.
B) Create an S3 event on object creation that copies the object to an Amazon Elasticsearch Service (Amazon ES) cluster. When bad objects are discovered, retrieve the previous version from Amazon ES.
C) Create an AWS Lambda function that copies the object to an S3 bucket owned by a different account. Trigger the function when new objects are created in Amazon S3. When bad objects are discovered, retrieve the previous version from the other account.
D) Enable versioning on the S3 bucket. When bad objects are discovered, access previous versions with the AWS CLI or AWS Management Console.
NOTES/HINT16: Enabling versioning is a simple solution. (A) involves writing custom code; (C) has no versioning, so the replication will overwrite the old version with the bad version if the error is not discovered quickly; and (B) involves expensive storage that is not well suited for storing objects.
Question 17: According to the AWS shared responsibility model, for which of the following Amazon EC2 activities is AWS responsible? (Select TWO.) A) Configuring network ACLs B) Maintaining network infrastructure C) Monitoring memory utilization D) Patching the guest operating system E) Patching the hypervisor
NOTES/HINT17: AWS provides security of the cloud, including maintenance of the hardware and hypervisor software supporting Amazon EC2. Customers are responsible for any maintenance or monitoring within an EC2 instance, and for configuring their VPC infrastructure.
Question 18: A security and compliance team requires that all Amazon EC2 workloads use approved Amazon Machine Images (AMIs). A sysops administrator must implement a process to find EC2 instances launched from unapproved AMIs.
Which solution will meet these requirements? A) Create a custom report using AWS Systems Manager inventory to identify unapproved AMIs. B) Run Amazon Inspector on each EC2 instance and flag the instance if it is using unapproved AMIs. C) Use an AWS Config rule to identify unapproved AMIs. D) Use AWS Trusted Advisor to identify the EC2 workloads using unapproved AMIs.
NOTES/HINT18: An AWS Config rule can continuously evaluate running EC2 instances against a list of approved AMIs and flag any noncompliant instances; the managed rule approved-amis-by-id does exactly this.
Question 19: A sysops administrator observes a large number of rogue HTTP requests on an Application Load Balancer. The requests originate from various IP addresses. These requests cause increased server load and costs.
What should the administrator do to block this traffic? A) Install Amazon Inspector on Amazon EC2 instances to block the traffic. B) Use Amazon GuardDuty to protect the web servers from bots and scrapers. C) Use AWS Lambda to analyze the web server logs, detect bot traffic, and block the IP addresses in the security groups. D) Use an AWS WAF rate-based rule to block the traffic when it exceeds a threshold.
NOTES/HINT19: An AWS WAF rate-based rule counts requests from each originating IP address and automatically blocks an IP while its request rate exceeds the configured threshold.
Question 20: A sysops administrator is implementing security group policies for a web application running on AWS.
An Elastic Load Balancer connects to a fleet of Amazon EC2 instances that connect to an Amazon RDS database over port 1521. The security groups are named elbSG, ec2SG, and rdsSG, respectively.
How should these security groups be implemented? A) elbSG: allow port 80 and 443 from 0.0.0.0/0; ec2SG: allow port 443 from elbSG; rdsSG: allow port 1521 from ec2SG.
B) elbSG: allow port 80 and 443 from 0.0.0.0/0; ec2SG: allow port 80 and 443 from elbSG and rdsSG; rdsSG: allow port 1521 from ec2SG.
C) elbSG: allow port 80 and 443 from ec2SG; ec2SG: allow port 80 and 443 from elbSG and rdsSG; rdsSG: allow port 1521 from ec2SG.
D) elbSG: allow port 80 and 443 from ec2SG; ec2SG: allow port 443 from elbSG; rdsSG: allow port 1521 from elbSG.
NOTES/HINT20: elbSG must allow all web traffic (HTTP and HTTPS) from the internet. ec2SG must allow traffic from the load balancer only, in this case identified as traffic from elbSG. The database must allow traffic from the EC2 instances only, in this case identified as traffic from ec2SG.