How does using a VPN, Proxy, TOR, or private browsing protect your online activity?


VPNs are used to provide remote corporate employees, gig-economy freelancers, and business travelers with access to software applications hosted on proprietary networks. To gain access to a restricted resource through a VPN, the user must be authorized to use the VPN app and must provide one or more authentication factors, such as a password, security token, or biometric data.

A VPN extends a private network across a public network and enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network. Applications running across a VPN on a computing device (e.g., a laptop, desktop, or smartphone) may therefore benefit from the functionality, security, and management of the private network. Encryption is a common, though not an inherent, part of a VPN connection.

To ensure security, the private network connection is established using an encrypted layered tunneling protocol, and VPN users must authenticate, with passwords or certificates, to gain access to the VPN. In other applications, Internet users may secure their connections with a VPN to circumvent geo-restrictions and censorship, or connect to proxy servers to protect personal identity and location and stay anonymous on the Internet. However, some websites block access to known VPN technology to prevent the circumvention of their geo-restrictions, and many VPN providers have been developing strategies to get around these roadblocks.

Private browsing (an Incognito window in Chrome, an InPrivate window in Edge) is a privacy feature in some web browsers (Chrome, Firefox, Internet Explorer, Edge). When operating in such a mode, the browser creates a temporary session that is isolated from the browser’s main session and user data. Browsing history is not saved, and local data associated with the session, such as cookies, are cleared when the session is closed.

These modes are designed primarily to prevent data and history associated with a particular browsing session from persisting on the device, or from being discovered by another user of the same device. Private browsing modes do not necessarily protect users from being tracked by other websites or by their internet service provider (ISP). Furthermore, identifiable traces of activity can leak from private browsing sessions through the operating system, security flaws in the browser, or malicious browser extensions, and it has been found that certain HTML5 APIs can be used to detect the presence of private browsing modes due to differences in behaviour.

The questions are:

How does using a VPN, Proxy, TOR, or private browsing protect your online activity?

What are the pros and cons of VPN vs Proxy?

How does using a VPN, Proxy, TOR, private browsing, or an incognito window protect your online activity?

  • A VPN masks your real IP address by replacing it with the address of one of its servers. As a result, no third party can link your online activity to your physical location. On top of that, you avoid annoying ads and stay off marketers’ radar.
  • A VPN encrypts your internet traffic, making it practically impossible for anybody to decode your sensitive information and steal your identity. You can also look into what a VPN provider’s development team says about how it protects its users against data theft.
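
As a quick sanity check of the IP-masking claim, you can compare your public IP before and after connecting the VPN. Below is a minimal sketch, assuming Python with the requests library and the public echo service api.ipify.org (any similar "what is my IP" endpoint would do):

    import requests

    def public_ip() -> str:
        # Ask a public echo service which address our traffic appears to come from
        return requests.get("https://api.ipify.org", timeout=10).text

    ip_before = public_ip()                        # run with the VPN disconnected
    input("Connect your VPN, then press Enter...")
    ip_after = public_ip()                         # run with the VPN connected

    if ip_before == ip_after:
        print("Warning: public IP unchanged - the VPN may not be masking you.")
    else:
        print(f"OK: traffic now appears to originate from {ip_after}.")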

If your VPN doesn’t protect your online activities, there is a problem with one of the aforementioned protection measures. Possible causes include:

  1. VPN connection disruption. Unfortunately, a sudden disruption of your connection can deanonymize you if, at that moment, your device is sending or receiving IP-related requests. To avoid this situation, the kill switch option should always be ON.
  2. DNS/IP address leakage. This problem can be caused by anything from configuration mistakes to a conflict between the VPN app and some other installed software. Regardless of the cause, you end up with an otherwise perfectly working security app that is, in fact, leaking your IP address (a simple leak-monitor sketch follows this list).
  3. Outdated protocol. In a nutshell, the protocol is the technology that manages the creation of your secured connection. If your current protocol becomes obsolete, the app will no longer protect you properly.
  4. Free apps. Some free software makes money on your privacy, engaging in practices that are unethical and even illegal, such as stealing your private data and selling it to third parties.
  5. User carelessness. For instance, turn on your virtual private network before you visit any website or enter your credentials, and don’t use the app only sporadically.
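
To catch disruptions and leaks like those described in points 1 and 2, you can poll your public IP while the VPN is up. This is a minimal monitor sketch, assuming Python with requests, the api.ipify.org echo service, and a placeholder REAL_IP that you would fill in with your actual home address:

    import time
    import requests

    REAL_IP = "203.0.113.7"  # hypothetical placeholder: your actual home IP

    while True:
        try:
            current = requests.get("https://api.ipify.org", timeout=10).text
            if current == REAL_IP:
                print("LEAK: real IP exposed - stop sensitive traffic now!")
        except requests.RequestException:
            print("Check failed; the VPN connection may have dropped.")
        time.sleep(30)  # re-check every 30 seconds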

How is a VPN different from a proxy server?

On top of serving as a proxy server, a VPN provides encryption; a proxy server only hides your IP address.

Proxies are good for low-stakes tasks like watching regionally restricted videos on YouTube, creating another Gmail account when your IP has hit its limit, accessing region-restricted websites, bypassing content filters, and getting around per-IP request restrictions.

On the other hand, proxies are not so great for high-stakes tasks. A proxy only acts as a middleman in your Internet traffic: it serves only the web page you request through it.
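
This per-application behaviour is easy to see in code. In the minimal Python sketch below (the proxy address is a hypothetical placeholder), only the one request made through the requests library uses the proxy:

    import requests

    proxies = {
        "http":  "http://proxy.example.com:8080",  # placeholder proxy server
        "https": "http://proxy.example.com:8080",
    }

    # Only this request is routed through the proxy; everything else on the
    # machine (other apps, system updates, torrents) goes out directly.
    resp = requests.get("https://api.ipify.org", proxies=proxies, timeout=10)
    print("The web server sees this address:", resp.text)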

Just like a proxy, a VPN makes your traffic appear to come from a remote IP address that is not yours. But that is where the similarities end.

Unlike a proxy, a VPN is set up at the operating system level, so it captures all the traffic coming from the device it is configured on. Whether it is your web traffic, a BitTorrent client, a game, or a Windows Update, it captures traffic from every application on your device.

Another difference between a proxy and a VPN is that a VPN tunnels all your traffic through a heavily encrypted, secure connection to the VPN server.

This makes a VPN an ideal solution for high-stakes tasks where security and privacy are of paramount importance. With a VPN, neither your ISP, nor your government, nor a guy snooping over an open Wi-Fi connection can access your traffic.

What are the daily uses of a VPN?

There are many uses of a Virtual Private Network (VPN) for normal users and company employees. Here is a list of the most common ones:

Accessing Business Networks From Anywhere in the World:

This is one of the best uses of a VPN. It is very helpful when you are travelling and have to complete some work. You can connect any computer to your business network from anywhere and get your work done easily. Local resources that need security can be kept VPN-only to ensure their safety.

To Hide Your Browsing Data From Your ISP & Local Users:

Internet Service Providers (ISPs) log the traffic coming from your IP address. If you use a VPN, they can only see the connection to your VPN server, so no one can spy on your browsing history.

Moreover, a VPN secures your connection when you use a public Wi-Fi network. As you may or may not know, other users on these networks can snoop on which sites you visit, even when you are surfing HTTPS websites (the content is encrypted, but the destinations are visible). Virtual Private Networks protect your privacy on public, unsecured Wi-Fi connections.

To Access Geographically Blocked Sites :

Have you ever faced a problem like “This content is not available in your country”? VPNs are the best solution to bypass these restrictions.

Some videos on YouTube will also show this restriction. VPNs are a quick fix for all these restrictions.

What about TOR and VPN? What are the Pros and Cons?

The Tor network is similar to a VPN. Messages to and from your computer pass through the Tor network rather than connecting directly to resources on the Internet. But where VPNs provide privacy, Tor provides anonymity.

Tor is free and open-source software for enabling anonymous communication. The name is derived from an acronym for the original software project name, “The Onion Router”. Tor directs Internet traffic through a free, worldwide, volunteer overlay network consisting of more than seven thousand relays to conceal a user’s location and usage from anyone conducting network surveillance or traffic analysis. Using Tor makes it more difficult to trace Internet activity to the user: this includes “visits to Web sites, online posts, instant messages, and other communication forms”. Tor’s intended use is to protect the personal privacy of its users, as well as their freedom and ability to conduct confidential communication by keeping their Internet activities from being monitored.

Tor does not prevent an online service from determining when it is being accessed through Tor. Tor protects a user’s privacy, but it does not hide the fact that someone is using Tor. Some websites restrict access through Tor; for example, Wikipedia blocks attempts by Tor users to edit articles unless special permission is sought. Although a VPN is generally faster than Tor, using them together will slow down your internet connection and should be avoided. More is not necessarily better in this situation.
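
To route one application’s requests through Tor, you point it at the Tor client’s local SOCKS proxy. A minimal sketch, assuming a Tor client listening on its default port 9050 and requests installed with SOCKS support (pip install requests[socks]):

    import requests

    proxies = {
        "http":  "socks5h://127.0.0.1:9050",  # socks5h: DNS is also resolved via Tor
        "https": "socks5h://127.0.0.1:9050",
    }

    # check.torproject.org reports whether the request arrived via a Tor exit relay
    resp = requests.get("https://check.torproject.org/api/ip",
                        proxies=proxies, timeout=30)
    print(resp.json())  # e.g. {"IsTor": true, "IP": "<exit relay address>"}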

Is VPN necessary when using the deep web?

The deep web is the part of the web that cannot be indexed by search engines: internal company login pages, school portals, private Google sites, or government pages.

The dark web is the more sinister corner of the deep web, associated with illegal activity (e.g., child pornography, drug dealing, hitmen, etc.).
A VPN is not necessary when connecting to the DEEP WEB. Please do not confuse the DEEP WEB with the DARK WEB.

Are there any good free VPN services?

It is not recommended to use a free VPN, for the following reasons:

1- Security: Free VPNs don’t necessarily ensure that your privacy is protected.

2- Tracking – Free VPNs have no obligation to keep your details safe, so at any point, your details could be passed on.

3- Speed / bandwidth – Some free VPN services are capped at a lower bandwidth; that is, you will get less browsing or download speed than with a paid VPN.

4- Protocols supported – A free VPN may not support all necessary protocols. PPTP, OpenVPN and L2TP are generally provided only on paid VPN services.

If you are ok with the risks of using Free VPN, here are some you can try:

  1. TunnelBear: Secure VPN Service
  2. Hide.me VPN
  3. SurfEasy | Ultra fast, no-log private network VPN for Android, iOS, Mac & Windows
  4. CyberGhost Fast and Secure VPN Service
  5. Windscribe Free VPN and Ad Block
  6. OpenVPN – Open Source VPN
  7. SoftEther VPN Open Source
  8. Zenmate
  9. HotSpot Shield

Paid VPNs are better and give you:

  • great customer support
  • lightning-fast internet speed
  • user friendly design
  • minimum 256-bit security
  • advanced features such as P2P, double encryption, VPN over Onion etc.

Below are the top paid VPNs:

1- NordVPN – cost-effective, provides Netflix in 5 countries (US, CAN, UK, JP, NL) and does not log your info.

2- ExpressVPN – nearly 3x NordVPN’s price but guarantees Netflix in the US. Excellent customer service and claims to not log your info.

3- Private Internet Access – a U.S.-based VPN that has proven its no-log policy in a court of law. This is a unique selling point that 99.99% of VPNs don’t have.

4- OpenVPN provides flexible VPN solutions to secure your data communications, whether it’s for Internet privacy, remote access for employees, securing IoT, or for networking Cloud data centers. 

Other Questions about VPN and security:

Why might certain web sites not load with VPN?

Some corporations, like banks, often block IP addresses used by major VPN companies because it is thought to improve security.

Can a VPN bypass being flagged as a suspicious log-in on Facebook & Instagram?

You probably need a VPN that allows you to use a dedicated IP address. Otherwise, the server IPs switch every time you reconnect to your VPN, and shared IPs are often flagged as suspicious logins because many people log in from the same IP address (which makes the site think the activity might be bots or mass-hacked accounts).

How is a hacker traced when server logs show his or her IP is from a VPN?

  • Start looking for IP address leaks. Even hackers are terrible at not leaking their IPs.
  • Look for times the attacker forgot to enable their VPN. It happens all the time.
  • Look at other things related to the attack, like domains. They might have registered a domain using something you can trace, or left a string in the malware that can help identify them.
  • Silently, and legally, take control of the command-and-control server.

What is the most secure VPN protocol?

  • OpenVPN technology uses the highest levels (military standards) of encryption algorithms, i.e., 256-bit keys, to secure your data transfers.
  • OpenVPN is also known to have the fastest speeds even in the case of long distance connections that have latency. The protocol is highly recommended for streaming, downloading files and watching live TV. In addition to speeds, the protocol is stable and known to have fewer disconnections compared to its many counterparts.
  • OpenVPN comes equipped with solid military-grade encryption and is far better, security-wise, than PPTP, L2TP/IPSec, and SSTP.

What are some alternatives for VPN?

  • Tor network: it is anonymous, free and, well, rather slow – certainly fast enough to access your private email, but not fast enough to stream a movie.
  • Proxies are remote computers that individuals or organizations use to restrict Internet access, filter content, and make Internet browsing more secure. A proxy acts as a middleman between the end user and the web server, since all connection requests pass through it. It filters the request first, then sends it to the web server; once the web server responds, the proxy filters the response and sends it back to the end user.
  • IPSec (Cisco, Netgear, etc.): secure network protocol suite that authenticates and encrypts the packets of data sent over an Internet Protocol network.
  • SSL (Full) like OpenVPN
  • SSL (Partial) like SSL-Explorer and most appliances
  • SSH Tunneling is a method of transporting arbitrary networking data over an encrypted SSH connection. It can be used to add encryption to legacy applications. It also provides a way to secure the data traffic of any given application using port forwarding, basically tunneling any TCP/IP port over SSH (see the sketch after this list).
  • PPTP
  • L2TP (old Cisco, pre IPSec)
  • DirectAccess 
  • Hamachi
  • You can also create your own VPN using any encryption or simple tunneling technology.
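
Here is the SSH tunneling sketch mentioned above: a minimal example that shells out to the system ssh client from Python. The host names and ports are placeholders:

    import subprocess

    # Forward local port 8080 to port 80 on internal.example.com, as seen
    # from gateway.example.com. -N: open the tunnel, run no remote command.
    tunnel = subprocess.Popen([
        "ssh", "-N",
        "-L", "8080:internal.example.com:80",
        "user@gateway.example.com",
    ])

    # While the process runs, traffic to http://localhost:8080 travels inside
    # the encrypted SSH connection. Call tunnel.terminate() when done.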

How does private browsing or incognito window work?

When you are in private browsing mode, your browser doesn’t store any of this session data (history, cookies, cached files) at all. It functions as a completely isolated browser session.

For most web browsers, their optional private mode, often also called InPrivate or incognito, is like normal browsing except for a few things.

  1. it uses separate temporary cookies that are deleted once the browser is closed (leaving your existing cookies unaffected)
  2. no private activity is logged to the browser’s history
  3. it often uses a separate temporary cache
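
You can observe the isolated session programmatically. A hypothetical sketch using Selenium, assuming Chrome and a matching chromedriver are installed:

    from selenium import webdriver

    options = webdriver.ChromeOptions()
    options.add_argument("--incognito")  # temporary session: separate cookies/cache

    driver = webdriver.Chrome(options=options)
    driver.get("https://example.com")
    print(driver.get_cookies())  # cookies set here vanish when the browser closes
    driver.quit()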

What are the advantages of Google Chrome’s private browsing?

  • simultaneously log into a website using different account names
  • access websites without extensions (all extensions are disabled by default in Incognito)
  • limit cookie-based tracking by Google, Facebook, and other online advertising companies to the current session
  • visit a website as an anonymous first-time visitor, or see how a personalized webpage looks from a third-party perspective

Firefox private browsing or Chrome Incognito?

Mozilla doesn’t really have an incentive to spy on its users; spying wouldn’t get them anything, because they’re not a data broker and don’t sell ads. Couple this with the fact that Firefox is open-source, and I would argue that Firefox is the clear winner here.

Chrome now prevents sites from checking for private browsing mode

Mozilla Private Network VPN gives Firefox another privacy boost

Adding a VPN to Firefox is clever because it means the privacy protection is integrated into one application rather than being spread across different services. That integration probably makes it more likely to be used by people who wouldn’t otherwise use one.

Pros and Cons of Adding VPN to browsers like Firefox and Opera:

Turning on the VPN gives users a secure connection to a trusted server when using a device connected to public Wi-Fi (and running the gauntlet of rogue Wi-Fi hotspots and unknown intermediaries). Many travellers use subscription VPNs when away from a home network – the Mozilla Private Network is just a simpler, zero-cost alternative.

However, like Opera’s offering, it’s not a true VPN – it only encrypts traffic from one browser, Firefox. Traffic from all other applications on the same computer won’t be secured in the same way.

As with any VPN, it won’t keep you completely anonymous. Websites you visit will see a Cloudflare IP address instead of your own, but you will still get advertising cookies and if you log in to a website your identity will be known to that site.

Additional reading:

Resources:

1- Wikipedia

2- Quora

3- SearchExpress

4- Reddit

5- VPN’s for Remote Workers: A Beginners Guide for 2019

AWS Developer and Deployment Theory: Facts and Summaries and Questions/Answers

AWS Certification Exam Preparation

AWS Developer – Deployment Theory Facts and summaries, Top 80 AWS Developer Theory Questions and Answers Dump

Definition 1: An AWS Developer is responsible for designing, deploying, and developing cloud applications on the AWS platform.

Definition 2: The AWS Developer Tools is a set of services designed to enable developers and IT operations professionals practicing DevOps to rapidly and safely deliver software.




AWS Developer and Deployment Theory Facts and summaries

    1. Continuous Integration is about integrating or merging the code changes frequently, at least once per day. It enables multiple devs to work on the same application.
    2. Continuous delivery is all about automating the build, test, and deployment functions.
    3. Continuous Deployment fully automates the entire release process, code is deployed into Production as soon as it has successfully passed through the release pipeline.
    4. AWS CodePipeline is a continuous integration/Continuous delivery service:
      • It automates your end-to-end software release process based on a user-defined workflow
      • It can be configured to automatically trigger your pipeline as soon as a change is detected in your source code repository
      • It integrates with other AWS services like CodeBuild and CodeDeploy, as well as third-party custom plug-ins.
    5. AWS CodeBuild is a fully managed build service. It can build source code, run tests and produce software packages based on commands that you define yourself.
    6. By default, the buildspec.yml file defines the build commands and settings used by CodeBuild to run your build.
    7. AWS CodeDeploy is a fully managed automated deployment service and can be used as part of a Continuous Delivery or Continuous Deployment process.
    8. There are 2 types of deployment approach:
      • In-place or rolling update: you stop the application on each host and deploy the latest code. EC2 and on-premises systems only. To roll back, you must re-deploy the previous version of the application.
      • Blue/Green: new instances are provisioned and the new application is deployed to these new instances. Traffic is routed to the new instances according to your own schedule. Supported for EC2, on-premises systems, and Lambda functions. Rollback is easy – just route the traffic back to the original instances. Blue is the active deployment; green is the new release.
    9. Docker allows you to package your software into Containers which you can run in Elastic Container Service (ECS)
    10.  A Docker container includes everything the software needs to run, including code, libraries, runtime, and environment variables.
    11.  A special file called Dockerfile is used to specify the instructions needed to assemble your Docker image.
    12.  Once built, Docker images can be stored in Elastic Container Registry (ECR) and ECS can then use the image to launch Docker Containers.
    13. AWS CodeCommit is based on Git. It provides centralized repositories for all your code, binaries, images, and libraries.
    14. CodeCommit tracks and manages code changes. It maintains version history.
    15. CodeCommit manages updates from multiple sources and enables collaboration.
    16. To support CORS, an API resource needs to implement an OPTIONS method that can respond to the OPTIONS preflight request with the following headers:

      • Access-Control-Allow-Headers
      • Access-Control-Allow-Origin
      • Access-Control-Allow-Methods
    17. You have a legacy application that works via XML messages. You need to place the application behind the API gateway in order for customers to make API calls. Which of the following would you need to configure?
      You will need to work with the Request and Response Data mapping.
    18. Your application currently points to several Lambda functions in AWS. A change is being made to one of the Lambda functions. You need to ensure that application traffic is shifted slowly from one Lambda function to the other. Which of the following steps would you carry out?
      • Create an ALIAS with the --routing-config parameter
      • Update the ALIAS with the --routing-config parameter

      By default, an alias points to a single Lambda function version. When the alias is updated to point to a different function version, incoming request traffic instantly shifts to the updated version, exposing the alias to any potential instabilities introduced by the new version. To minimize this impact, you can use the routing-config parameter of the Lambda alias, which allows you to point to two different versions of the Lambda function and dictate what percentage of incoming traffic is sent to each version (see the boto3 sketch after this list).

    19. AWS CodeDeploy: The AppSpec file defines all the parameters needed for the deployment e.g. location of application files and pre/post deployment validation tests to run.
    20. For EC2 / on-premises systems, the appspec.yml file must be placed in the root directory of your revision (the same folder that contains your application code). It is written in YAML.
    21. For Lambda and ECS deployment, the AppSpec file can be YAML or JSON
    22. Visual workflows are automatically created when working with AWS Step Functions.
    23. API Gateway stages store configuration for deployment. An API Gateway stage is a snapshot of your API.
    24. AWS SWF guarantees the delivery order of messages/tasks.
    25. Blue/Green deployments with CodeDeploy on AWS Lambda can happen in multiple ways: Linear, All at once, or Canary.
    26. X-Ray Filter Expressions allow you to search through request information using characteristics like URL Paths, Trace ID, Annotations
    27. S3 has eventual consistency for overwrite PUTS and DELETES.
    28. What can you do to ensure the most recent version of your Lambda functions is in CodeDeploy?
      Specify the version to be deployed in AppSpec file.

      https://docs.aws.amazon.com/codedeploy/latest/userguide/application-specification-files.html

      AppSpec Files on an Amazon ECS Compute Platform:

      If your application uses the Amazon ECS compute platform, the AppSpec file can be formatted with either YAML or JSON. It can also be typed directly into an editor in the console. The AppSpec file is used to specify:

      • The name of the Amazon ECS service, and the container name and port used to direct traffic to the new task set.
      • The functions to be used as validation tests. You can run validation Lambda functions after deployment lifecycle events. For more information, see AppSpec ‘hooks’ Section for an Amazon ECS Deployment, AppSpec File Structure for Amazon ECS Deployments, and AppSpec File Example for an Amazon ECS Deployment.
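
      As referenced in point 18 above, here is a minimal boto3 sketch of weighted alias routing between two Lambda versions; the function and alias names are placeholders:

        import boto3

        lam = boto3.client("lambda")

        # Keep the "live" alias on version 1, but send 10% of invocations to
        # version 2 so the new release can be validated gradually.
        lam.update_alias(
            FunctionName="my-function",  # hypothetical function name
            Name="live",
            FunctionVersion="1",
            RoutingConfig={"AdditionalVersionWeights": {"2": 0.10}},
        )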

    Reference: AWS Developer Tools




    AWS Developer and Deployment Theory: Top 80 Questions and Answers Dump

    Q0: Which AWS service can be used to compile source code, run tests and package code?

    • A. CodePipeline
    • B. CodeCommit
    • C. CodeBuild
    • D. CodeDeploy

    Answer: C.

    Reference: AWS CodeBuild


    Q1: How can you prevent CloudFormation from deleting your entire stack on failure? (Choose 2)

    • A. Set the Rollback on failure radio button to No in the CloudFormation console
    • B. Set Termination Protection to Enabled in the CloudFormation console
    • C. Use the –disable-rollback flag with the AWS CLI
    • D. Use the –enable-termination-protection protection flag with the AWS CLI

    Answer: A. and C.

    Reference: Protecting a Stack From Being Deleted


    Q2: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?

    • A. Continuous Integration
    • B. Continuous Deployment
    • C. Continuous Delivery
    • D. Continuous Development

    Answer: A

    Reference: What is Continuous Integration?


    Q3: When deploying application code to EC2, the AppSpec file can be written in which language?

    • A. JSON
    • B. JSON or YAML
    • C. XML
    • D. YAML

    Answer: D.
    For EC2/on-premises deployments, the AppSpec file is written in YAML (see fact 20 in the summary above).


    Q4: Part of your CloudFormation deployment fails due to a misconfiguration. By default, what will happen?

    • A. CloudFormation will rollback only the failed components
    • B. CloudFormation will rollback the entire stack
    • C. Failed component will remain available for debugging purposes
    • D. CloudFormation will ask you if you want to continue with the deployment

    Answer: B

    Reference: Troubleshooting AWS CloudFormation


    Q5: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?

    • A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
    • B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
    • C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
    • D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.

    Answer: C

    Reference: Getting Started with Amazon SNS


    Q6: Which AWS service can be used to centrally store and version control your application source code, binaries and libraries

    • A. CodeCommit
    • B. CodeBuild
    • C. CodePipeline
    • D. ElasticFileSystem

    Answer: A

    Reference: AWS CodeCommit


    Q7: You are using CloudFormation to create a new S3 bucket,
    which of the following sections would you use to define the properties of your bucket?

    • A. Conditions
    • B. Parameters
    • C. Outputs
    • D. Resources

    Answer: D

    Reference: Resources


    Q8: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template
    would you use to define these?

    • A. Transforms
    • B. Outputs
    • C. Resources
    • D. Instances

    Answer: C.
    The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3.

    Reference: Resources


    Q9: Which AWS service can be used to fully automate your entire release process?

    • A. CodeDeploy
    • B. CodePipeline
    • C. CodeCommit
    • D. CodeBuild

    Answer: B.
    AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates

    Reference: AWS CodePipeline


    Q10: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?

    • A. Outputs
    • B. Transforms
    • C. Resources
    • D. Exports

    Answer: A.
    Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack.

    Reference: CloudFormation Outputs


    Q11: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?

    • A. Inputs
    • B. Resources
    • C. Transforms
    • D. Files

    Answer: C.
    Transforms is used to reference code located in S3, and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments. For example:

    Transform:
      Name: 'AWS::Include'
      Parameters:
        Location: 's3://MyAmazonS3BucketName/MyFileName.yaml'

    Reference: Transforms


    Q12: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?

    • A. buildspec.yml
    • B. appspec.json
    • C. appspec.yml
    • D. buildspec.json

    Answer: C.

    Reference: CodeDeploy AppSpec File Reference


    Q13: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?

    • A. Share the code using an EBS volume
    • B. Copy and paste the code into the template each time you need to use it
    • C. Use a CloudFormation nested stack
    • D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.

    Answer: C.

    Reference: Working with Nested Stacks


    Q14: In the CodeDeploy AppSpec file, what are hooks used for?

    • A. To reference AWS resources that will be used during the deployment
    • B. Hooks are reserved for future use
    • C. To specify files you want to copy during the deployment.
    • D. To specify, scripts or function that you want to run at set points in the deployment lifecycle

    Answer: D.
    The ‘hooks’ section for an EC2/On-Premises deployment contains mappings that link deployment lifecycle event hooks to one or more scripts.

    Reference: AppSpec ‘hooks’ Section


    Q15: You need to set up a RESTful API service in AWS that would be serviced via the following URL: https://democompany.com/customers. Which of the following combinations of services can be used for development and hosting of the RESTful service? Choose 2 answers from the options below.

    • A. AWS Lambda and AWS API gateway
    • B. AWS S3 and Cloudfront
    • C. AWS EC2 and AWS Elastic Load Balancer
    • D. AWS SQS and Cloudfront

    Answer: A and C
    AWS Lambda can be used to host the code, and API Gateway can be used to access the APIs that point to AWS Lambda. Alternatively, you can create your own API service, host it on an EC2 instance, and then use an AWS Application Load Balancer to do path-based routing.
    Reference: Build a Serverless Web Application with AWS Lambda, Amazon API Gateway, Amazon S3, Amazon DynamoDB, and Amazon Cognito


    Q16: As a developer, you have created a Lambda function that is used to work with a bucket in Amazon S3. The Lambda function is not working as expected. You need to debug the issue and understand what the underlying problem is. How can you accomplish this in an easily understandable way?

    • A. Use AWS Cloudwatch metrics
    • B. Put logging statements in your code
    • C. Set the Lambda function debugging level to verbose
    • D. Use AWS Cloudtrail logs

    Answer: B
    You can insert logging statements into your code to help you validate that your code is working as expected. Lambda automatically integrates with Amazon CloudWatch Logs and pushes all logs from your code to a CloudWatch Logs group associated with the Lambda function (/aws/lambda/<function-name>).
    Reference: Using Amazon CloudWatch
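
    A minimal sketch of such logging statements in a Python Lambda handler (the handler body is hypothetical); anything printed or logged lands in the function's CloudWatch Logs group:

        import json
        import logging

        logger = logging.getLogger()
        logger.setLevel(logging.INFO)

        def lambda_handler(event, context):
            # These lines end up in /aws/lambda/<function-name> automatically
            logger.info("Received event: %s", json.dumps(event))
            # ... work with the S3 bucket here ...
            logger.info("Finished processing")
            return {"status": "ok"}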


    Q17: You have a Lambda function that is invoked asynchronously, and you need a way to check and debug issues if the function fails. How could you accomplish this?

    • A. Use AWS Cloudwatch metrics
    • B. Assign a dead letter queue
    • C. Configure SNS notifications
    • D. Use AWS Cloudtrail logs

    Answer: B
    Any Lambda function invoked asynchronously is retried twice before the event is discarded. If the retries fail and you’re unsure why, use Dead Letter Queues (DLQ) to direct unprocessed events to an Amazon SQS queue or an Amazon SNS topic to analyze the failure.
    Reference: AWS Lambda Function Dead Letter Queues
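
    A minimal boto3 sketch of attaching an SQS dead-letter queue to a function for failed asynchronous invocations; the function name and queue ARN are placeholders:

        import boto3

        lam = boto3.client("lambda")

        # Events from failed async invocations (after the two automatic
        # retries) are sent to this SQS queue for later inspection.
        lam.update_function_configuration(
            FunctionName="my-function",  # hypothetical
            DeadLetterConfig={
                "TargetArn": "arn:aws:sqs:us-east-1:123456789012:my-function-dlq"
            },
        )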


    Q18: You are developing an application that is going to make use of Amazon Kinesis. Due to the high throughput, you decide to have multiple shards for the streams. Which of the following is TRUE when it comes to processing data across multiple shards?

    • A. You cannot guarantee the order of data across multiple shards. It's possible only within a shard
    • B. Order of data is possible across all shards in a streams
    • C. Order of data is not possible at all in Kinesis streams
    • D. You need to use Kinesis firehose to guarantee the order of data

    Answer: A
    Kinesis Data Streams lets you order records and read and replay records in the same order to many Kinesis Data Streams applications. To enable write ordering, Kinesis Data Streams expects you to call the PutRecord API to write serially to a shard while using the sequenceNumberForOrdering parameter. Setting this parameter guarantees strictly increasing sequence numbers for puts from the same client and to the same partition key.
    Option A is correct: ordering cannot be guaranteed across multiple shards, only within a shard.
    Reference: How to perform ordered data replication between applications by using Amazon DynamoDB Streams
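
    A minimal boto3 sketch of ordered writes within one shard, using a fixed partition key and SequenceNumberForOrdering; the stream name and data are placeholders:

        import boto3

        kinesis = boto3.client("kinesis")

        last_seq = None
        for i in range(3):
            kwargs = {
                "StreamName": "my-stream",    # hypothetical stream
                "Data": f"record-{i}".encode(),
                "PartitionKey": "device-42",  # same key -> same shard
            }
            if last_seq:
                # Guarantees strictly increasing sequence numbers on this shard
                kwargs["SequenceNumberForOrdering"] = last_seq
            last_seq = kinesis.put_record(**kwargs)["SequenceNumber"]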


    Q19: You’ve developed a Lambda function and are now in the process of debugging it. You add the necessary print statements in the code to assist in the debugging. You go to CloudWatch Logs, but you see no logs for the Lambda function. Which of the following could be the underlying issue?

    • A. You’ve not enabled versioning for the Lambda function
    • B. The IAM Role assigned to the Lambda function does not have the necessary permission to create Logs
    • C. There is not enough memory assigned to the function
    • D. There is not enough time assigned to the function

    Answer: B
    “If your Lambda function code is executing, but you don’t see any log data being generated after several minutes, this could mean your execution role for the Lambda function did not grant permissions to write log data to CloudWatch Logs. For information about how to make sure that you have set up the execution role correctly to grant these permissions, see Manage Permissions: Using an IAM Role (Execution Role)”.

    Reference: Using Amazon CloudWatch


    Q20: Your application is developed to pick up metrics from several servers and push them off to CloudWatch. At times, the application gets client 429 errors. Which of the following can be done from the programming side to resolve such errors?

    • A. Use the AWS CLI instead of the SDK to push the metrics
    • B. Ensure that all metrics have a timestamp before sending them across
    • C. Use exponential backoff in your request
    • D. Enable encryption for the requests

    Answer: C.
    The main reason for such errors is throttling: too many requests are being sent via API calls. The best way to mitigate this is to stagger the rate at which you make the API calls.
    In addition to simple retries, each AWS SDK implements an exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values and should be set based on the operation being performed, as well as other local factors, such as network latency.
    Reference: Error Retries and Exponential Backoff in AWS
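
    A minimal sketch of exponential backoff with jitter; the helper name and retry limits are hypothetical, and real code would catch the SDK's specific throttling exception rather than a bare Exception:

        import random
        import time

        def call_with_backoff(fn, max_retries=5, base_delay=0.5, max_delay=20.0):
            for attempt in range(max_retries):
                try:
                    return fn()
                except Exception:  # in practice: the SDK's throttling (429) error
                    delay = min(max_delay, base_delay * (2 ** attempt))
                    time.sleep(delay * random.uniform(0.5, 1.0))  # add jitter
            return fn()  # final attempt; let the error propagate if it still fails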

    Q21: You have been instructed to use the CodePipeline service for the CI/CD automation in your company. For security reasons, the resources that will be part of the deployment are placed in another account. Which of the following steps need to be carried out to accomplish this deployment? Choose 2 answers from the options given below.

    • A. Define a customer master key in KMS
    • B. Create a reference Code Pipeline instance in the other account
    • C. Add a cross account role
    • D. Embed the access keys in the CodePipeline process

    Answer: A. and C.
    You might want to create a pipeline that uses resources created or managed by another AWS account. For example, you might want to use one account for your pipeline and another for your AWS CodeDeploy resources. To do so, you must create a AWS Key Management Service (AWS KMS) key to use, add the key to the pipeline, and set up account policies and roles to enable cross-account access.
    Reference: Create a Pipeline in CodePipeline That Uses Resources from Another AWS Account


    Q22: You are planning on deploying an application to the worker role in Elastic Beanstalk. Moreover, this worker application is going to run periodic tasks. Which of the following is a must-have as part of the deployment?

    • A. An appspec.yaml file
    • B. A cron.yaml file
    • C. A cron.config file
    • D. An appspec.json file

    Answer: B.
    Create an Application Source Bundle
    When you use the AWS Elastic Beanstalk console to deploy a new application or an application version, you’ll need to upload a source bundle. Your source bundle must meet the following requirements:
    • Consist of a single ZIP file or WAR file (you can include multiple WAR files inside your ZIP file)
    • Not exceed 512 MB
    • Not include a parent folder or top-level directory (subdirectories are fine)
    If you want to deploy a worker application that processes periodic background tasks, your application source bundle must also include a cron.yaml file. For more information, see Periodic Tasks.

    Reference: Create an Application Source Bundle


    Q23: An application needs to make use of an SQS queue for working with messages. The SQS queue has been created with the default settings. The application needs 60 seconds to process each message. Which of the following steps needs to be carried out by the application?

    • A. Change the VisibilityTimeout for each message and then delete the message after processing is completed
    • B. Delete the message and change the visibility timeout.
    • C. Process the message, change the visibility timeout. Delete the message
    • D. Process the message and delete the message

    Answer: A
    If the SQS queue is created with the default settings, the default visibility timeout is 30 seconds. Since the application needs more time for processing, you first need to extend the visibility timeout, then delete the message after it has been processed.
    Reference: Amazon SQS Visibility Timeout
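
    A minimal boto3 sketch of that sequence; the queue URL is a placeholder:

        import boto3

        sqs = boto3.client("sqs")
        queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
        for msg in resp.get("Messages", []):
            # Default visibility timeout is 30s; extend it to cover 60s of work
            sqs.change_message_visibility(
                QueueUrl=queue_url,
                ReceiptHandle=msg["ReceiptHandle"],
                VisibilityTimeout=60,
            )
            # ... process the message (takes up to 60 seconds) ...
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=msg["ReceiptHandle"])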


    Q24: An AWS CodeDeploy deployment fails to start and generates the error code ”HEALTH_CONSTRAINTS_INVALID”. Which of the following can be used to eliminate this error?

    • A. Make sure the minimum number of healthy instances is equal to the total number of instances in the deployment group.
    • B. Increase the number of healthy instances required during deployment
    • C. Reduce number of healthy instances required during deployment
    • D. Make sure the number of healthy instances is equal to the specified minimum number of healthy instances.

    Answer: C
    AWS CodeDeploy generates the ”HEALTH_CONSTRAINTS_INVALID” error when the minimum number of healthy instances defined in the deployment group is not available during deployment. To mitigate this error, make sure the required number of healthy instances will be available during deployments.
    Reference: Error Codes for AWS CodeDeploy


    Q25: How are the state machines in AWS Step Functions defined?

    • A. SAML
    • B. XML
    • C. YAML
    • D. JSON

    Answer: D. JSON
    AWS Step Functions state machines are defined in JSON.
    Reference: What Is AWS Step Functions?


    Q26: How can API Gateway methods be configured to respond to requests?

    • A. Forwarded to method handlers
    • B. AWS Lambda
    • C. Integrated with other AWS Services
    • D. Existing HTTP endpoints

    Answer: B. C. D.

    Reference: Set up REST API Methods in API Gateway


    Q27: Which of the following could be an example of an API Gateway Resource URL for a trucks resource?

    • A. https://1a2sb3c4.execute-api.us-east-1.awsapigateway.com/trucks
    • B. https://trucks.1a2sb3c4.execute-api.us-east-1.amazonaws.com
    • C. https://1a2sb3c4.execute-api.amazonaws.com/trucks
    • D. https://1a2sb3c4.execute-api.us-east-1.amazonaws.com/cars

    Answer: C

    Reference: Amazon API Gateway Concepts


    Q28: API Gateway Deployments are:

    • A. A specific snapshot of your API’s methods
    • B. A specific snapshot of all of your API’s settings, resources, and methods
    • C. A specific snapshot of your API’s resources
    • D. A specific snapshot of your API’s resources and methods

    Answer: D.
    AWS API Gateway Deployments are a snapshot of all the resources and methods of your API and their configuration.
    Reference: Deploying a REST API in Amazon API Gateway


    Q29: A SWF workflow task or task execution can live up to how long?

    • A. 1 Year
    • B. 14 days
    • C. 24 hours
    • D. 3 days

    Answer: A. 1 Year
    Each workflow execution can run for a maximum of 1 year. Each workflow execution history can grow up to 25,000 events. If your use case requires you to go beyond these limits, you can use features Amazon SWF provides to continue executions and structure your applications using child workflow executions.
    Reference: Amazon SWF FAQs


    Q30: With AWS Step Functions, all the work in your state machine is done by tasks. These tasks perform work by using what types of things? (Choose the best 3 answers)

    • A. An AWS Lambda Function Integration
    • B. Passing parameters to API actions of other services
    • C. Activities
    • D. An EC2 Integration

    Answer: A. B. C.



    Q31: How does SWF make decisions?

    • A. A decider program that is written in the language of the developer’s choice
    • B. A visual workflow created in the SWF visual workflow editor
    • C. A JSON-defined state machine that contains states within it to select the next step to take
    • D. SWF outsources all decisions to human deciders through the AWS Mechanical Turk service.

    Answer: A.
    SWF allows the developer to write their own application logic to make decisions and determine how to evaluate incoming data.
    From the Amazon SWF FAQ – “What programming conveniences does Amazon SWF provide to write applications?”: Like other AWS services, Amazon SWF provides a core SDK for the web service APIs. Additionally, Amazon SWF offers an SDK called the AWS Flow Framework that enables you to develop Amazon SWF-based applications quickly and easily. AWS Flow Framework abstracts the details of task-level coordination with familiar programming constructs. While running your program, the framework makes calls to Amazon SWF, tracks your program’s execution state using the execution history kept by Amazon SWF, and invokes the relevant portions of your code at the right times. By offering an intuitive programming framework to access Amazon SWF, AWS Flow Framework enables developers to write entire applications as asynchronous interactions structured in a workflow. For more details, please see What is the AWS Flow Framework?


    Q32: In order to effectively build and test your code, AWS CodeBuild allows you to:

    • A. Select and use some 3rd party providers to run tests against your code
    • B. Select a pre-configured environment
    • C. Provide your own custom AMI
    • D. Provide your own custom container image

    Answer: A. B. and D.

    Reference: AWS CodeBuild FAQs


    Q33: X-Ray Filter Expressions allow you to search through request information using characteristics like:

    • A. URL Paths
    • B. Metadata
    • C. Trace ID
    • D. Annotations

    Answer: A. C. and D. (see fact 26 in the summary above)


    Q34: CodePipeline pipelines are workflows that deal with stages, actions, transitions, and artifacts. Which of the following statements is true about these concepts?

    • A. Stages contain at least two actions
    • B. Artifacts are never modified or iterated on when used inside of CodePipeline
    • C. Stages contain at least one action
    • D. Actions will have a deployment artifact as either an input, an output, or both

    Answer: B. C. D.



    Q35: When deploying a simple Python web application with Elastic Beanstalk which of the following AWS resources will be created and managed for you by Elastic Beanstalk?

    • A. An Elastic Load Balancer
    • B. An S3 Bucket
    • C. A Lambda Function
    • D. An EC2 instance

    Answer: A. B. and D.
    AWS Elastic Beanstalk uses proven AWS features and services, such as Amazon EC2, Amazon RDS, Elastic Load Balancing, Auto Scaling, Amazon S3, and Amazon SNS, to create an environment that runs your application. The current version of AWS Elastic Beanstalk uses the Amazon Linux AMI or the Windows Server 2012 R2 AMI.
    Reference: AWS Elastic Beanstalk FAQs


    Q36: Elastic Beanstalk is used to:

    • A. Deploy and scale web applications and services developed with a supported platform
    • B. Deploy and scale serverless applications
    • C. Deploy and scale applications based purely on EC2 instances
    • D. Manage the deployment of all AWS infrastructure resources of your AWS applications

    Answer: A.
    Who should use AWS Elastic Beanstalk?
    Those who want to deploy and manage their applications within minutes in the AWS Cloud. You don’t need experience with cloud computing to get started. AWS Elastic Beanstalk supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker web applications.


    Q35: How can AWS X-Ray determine what data to collect?

    • A. X-Ray applies a sampling algorithm by default
    • B. X-Ray collects data on all requests by default
    • C. You can implement your own sampling frequencies for data collection
    • D. X-Ray collects data on all requests for services enabled with it

    Answer: A. and C.

    Reference: AWS X-Ray FAQs


    Q37: Which API call is used to list all resources that belong to a CloudFormation Stack?

    • A. DescribeStacks
    • B. GetTemplate
    • C. DescribeStackResources
    • D. ListStackResources

    Answer: D.

    Reference: ListStackResources


    Q38: What is the default behaviour of a CloudFormation stack if the creation of one resource fails?

    • A. Rollback
    • B. The stack continues creating and the failed resource is ignored
    • C. Delete
    • D. Undo

    Answer: A. Rollback

    Reference: AWS CloudFormation FAQs


    Q39: Which AWS CLI command lists all current stacks in your CloudFormation service?

    • A. aws cloudformation describe-stacks
    • B. aws cloudformation list-stacks
    • C. aws cloudformation create-stack
    • D. aws cloudformation describe-stack-resources

    Answer: A. and B.

    Reference: list-stacks


    Q40:
    Which API call is used to list all resources that belong to a CloudFormation Stack?

    • A. DescribeStacks
    • B. GetTemplate
    • C. ListStackResources
    • D. DescribeStackResources

    Answer: C.

    Reference: list-stack-resources


    Q41: How does using ElastiCache help to improve database performance?

    • A. It can store petabytes of data
    • B. It provides faster internet speeds
    • C. It can store the results of frequent or highly-taxing queries
    • D. It uses read replicas

    Answer: C.
    With ElastiCache, customers get all of the benefits of a high-performance, in-memory cache with less of the administrative burden involved in launching and managing a distributed cache. The service makes setup, scaling, and cluster failure handling much simpler than in a self-managed cache deployment.
    Reference: Amazon ElastiCache


    Q42: Which of the following best describes the Lazy Loading caching strategy?

    • A. Every time the underlying database is written to or updated the cache is updated with the new information.
    • B. Every miss to the cache is counted and when a specific number is reached a full copy of the database is migrated to the cache
    • C. A specific amount of time is set before the data in the cache is marked as expired. After expiration, a request for expired data will be made through to the backing database.
    • D. Data is added to the cache when a cache miss occurs (when there is no data in the cache and the request must go to the database for that data)

    Answer: D.
    Amazon ElastiCache is an in-memory key/value store that sits between your application and the data store (database) that it accesses. Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests the data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested.
    Reference: Lazy Loading
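
    A minimal lazy-loading sketch in plain Python; get_from_db is a stand-in for the real backing database query:

        cache = {}

        def get_from_db(user_id):
            # stand-in for a real database query
            return {"id": user_id, "name": f"user-{user_id}"}

        def get_user(user_id):
            if user_id in cache:           # cache hit: return immediately
                return cache[user_id]
            record = get_from_db(user_id)  # cache miss: go to the database
            cache[user_id] = record        # populate so the next read is fast
            return record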


    Q43: What are two benefits of using RDS read replicas?

    • A. You can add/remove read replicas based on demand, so it creates elasticity for RDS.
    • B. Improves performance of the primary database by taking workload from it
    • C. Automatic failover in the case of Availability Zone service failures
    • D. Allows both reads and writes

    Answer: A. and B.

    Reference: Amazon RDS Read Replicas


    Q44: What is the simplest way to enable an S3 bucket to be able to send messages to your SNS topic?

    • A. Attach an IAM role to the S3 bucket to send messages to SNS.
    • B. Activate the S3 pipeline feature to send notifications to another AWS service – in this case select SNS.
    • C. Add a resource-based access control policy on the SNS topic.
    • D. Use AWS Lambda to receive events from the S3 bucket and then use the Publish API action to send them to the SNS topic.

    Answer: C.

    Reference: Access Control List (ACL) Overview


    Q45: You have just set up a push notification service to send a message to an app installed on a device with the Apple Push Notification Service. It seems to work fine. You now want to send a message to an app installed on devices for multiple platforms, those being the Apple Push Notification Service(APNS) and Google Cloud Messaging for Android (GCM). What do you need to do first for this to be successful?

    • A. Request Credentials from Mobile Platforms, so that each device has the correct access control policies to access the SNS publisher
    • B. Create a Platform Application Object which will connect all of the mobile devices with your app to the correct SNS topic.
    • C. Request a Token from Mobile Platforms, so that each device has the correct access control policies to access the SNS publisher.
    • D. Get a set of credentials in order to be able to connect to the push notification service you are trying to setup.

    Answer: D.
    To use Amazon SNS mobile push notifications, you need to establish a connection with a supported push notification service. This connection is established using a set of credentials.
    Reference: Add Device Tokens or Registration IDs


    Q46: SNS message can be sent to different kinds of endpoints. Which of these is NOT currently a supported endpoint?

    • A. Slack Messages
    • B. SMS (text message)
    • C. HTTP/HTTPS
    • D. AWS Lambda

    Answer: A.
    Slack messages are not directly integrated with SNS, though theoretically you could write a service to push messages from SNS to Slack.


    Q47: Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?

    • A. Set the imaging queue VisibilityTimeout attribute to 20 seconds
    • B. Set the imaging queue MessageRetentionPeriod attribute to 20 seconds
    • C. Set the imaging queue ReceiveMessageWaitTimeSeconds Attribute to 20 seconds
    • D. Set the DelaySeconds parameter of a message to 20 seconds

    Answer: C.
    Enabling long polling reduces the amount of false and empty responses from SQS service. It also reduces the number of calls that need to be made to a queue by staying connected to the queue until all messages have been received or until timeout. In order to enable long polling the ReceiveMessageWaitTimeSeconds attribute needs to be set to a number greater than 0. If it is set to 0 then short polling is enabled.
    Reference: Amazon SQS Long Polling
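
    A minimal boto3 sketch of long polling on the receive side; WaitTimeSeconds is the per-call equivalent of the queue's ReceiveMessageWaitTimeSeconds attribute, and the queue URL is a placeholder:

        import boto3

        sqs = boto3.client("sqs")
        resp = sqs.receive_message(
            QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/imaging-queue",
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,  # wait up to 20s instead of returning empty
        )
        print(resp.get("Messages", []))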


    Q48: Which of the following statements about SQS standard queues are true?

    • A. Message order can be indeterminate – you’re not guaranteed to get messages in the same order they were sent in
    • B. Messages will be delivered exactly once and messages will be delivered in First in, First out order
    • C. Messages will be delivered exactly once and message delivery order is indeterminate
    • D. Messages can be delivered one or more times

    Answer: A. and D.
    A standard queue makes a best effort to preserve the order of messages, but more than one copy of a message might be delivered out of order. If your system requires that order be preserved, we recommend using a FIFO (First-In-First-Out) queue or adding sequencing information in each message so you can reorder the messages when they’re received.
    Reference: Amazon SQS Standard Queues


    Q49: Which of the following is true if long polling is enabled?

    • A. If long polling is enabled, then each poll only polls a subset of SQS servers; in order for all messages to be received, polling must continuously occur
    • B. The reader will listen to the queue until timeout
    • C. Increases costs because each request lasts longer
    • D. The reader will listen to the queue until a message is available or until timeout

    Answer: D.

    Reference: Amazon SQS Long Polling


    Q50: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?

    • A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
    • B. Permanently assigning users to specific instances and always routing their traffic to those instances
    • C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
    • D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance

    Answer: A.

    Reference: Distributed Session Management


    Q51: When requested through an STS API call, credentials are returned with what three components?

    • A. Security Token, Access Key ID, Signed URL
    • B. Security Token, Access Key ID, Secret Access Key
    • C. Signed URL, Security Token, Username
    • D. Security Token, Secret Access Key, Personal Pin Code

    Answer: B.
    Security Token, Access Key ID, Secret Access Key
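    For illustration, a minimal boto3 sketch showing the three components coming back from an STS call (GetSessionToken is used here as the example API):

    import boto3

    sts = boto3.client("sts")
    credentials = sts.get_session_token()["Credentials"]

    # The three components returned by STS:
    print(credentials["AccessKeyId"])
    print(credentials["SecretAccessKey"])
    print(credentials["SessionToken"])  # the security token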

    Top

    Q52: Your application must write to an SQS queue. Your corporate security policies require that AWS credentials are always encrypted and are rotated at least once a week.
    How can you securely provide credentials that allow your application to write to the queue?

    • A. Have the application fetch an access key from an Amazon S3 bucket at run time.
    • B. Launch the application’s Amazon EC2 instance with an IAM role.
    • C. Encrypt an access key in the application source code.
    • D. Enroll the instance in an Active Directory domain and use AD authentication.

    Answer: B.
    IAM roles are based on temporary security tokens, so they are rotated automatically. Keys in the source code cannot be rotated (and are a very bad idea). It’s impossible to retrieve credentials from an S3 bucket if you don’t already have credentials for that bucket. Active Directory authorization will not grant access to AWS resources.
    Reference: AWS IAM FAQs

    Top

    Q53: Your web application reads an item from your DynamoDB table, changes an attribute, and then writes the item back to the table. You need to ensure that one process doesn’t overwrite a simultaneous change from another process.
    How can you ensure concurrency?

    • A. Implement optimistic concurrency by using a conditional write.
    • B. Implement pessimistic concurrency by using a conditional write.
    • C. Implement optimistic concurrency by locking the item upon read.
    • D. Implement pessimistic concurrency by locking the item upon read.

    Answer: A.
    Optimistic concurrency depends on checking a value upon save to ensure that it has not changed. Pessimistic concurrency prevents a value from changing by locking the item or row in the database. DynamoDB does not support item locking, and conditional writes are perfect for implementing optimistic concurrency.
    Reference: Optimistic Locking With Version Number
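    A minimal boto3 sketch of optimistic concurrency with a conditional write (the table name, key, and version attribute are hypothetical):

    import boto3
    from botocore.exceptions import ClientError

    table = boto3.resource("dynamodb").Table("my-table")  # hypothetical table

    item = table.get_item(Key={"id": "item-1"})["Item"]
    current_version = item["version"]

    try:
        # The write succeeds only if no other process bumped the version in the meantime.
        table.update_item(
            Key={"id": "item-1"},
            UpdateExpression="SET #v = :new, payload = :p",
            ConditionExpression="#v = :cur",
            ExpressionAttributeNames={"#v": "version"},
            ExpressionAttributeValues={
                ":p": "new-value",
                ":new": current_version + 1,
                ":cur": current_version,
            },
        )
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            pass  # another writer got there first: re-read and retry
        else:
            raise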

    Top

    Q54: Which statements about DynamoDB are true? Choose 2 answers

    • A. DynamoDB uses optimistic concurrency control
    • B. DynamoDB restricts item access during writes
    • C. DynamoDB uses a pessimistic locking model
    • D. DynamoDB restricts item access during reads
    • E. DynamoDB uses conditional writes for consistency

    Answer: A. and E.
    DynamoDB uses optimistic concurrency control and conditional writes; it does not lock items or restrict access during reads or writes.

    Top

    Q55: Your CloudFormation template has the following Mappings section:

    Which JSON snippet will result in the value “ami-6411e20d” when a stack is launched in us-east-1?

    • A. { “Fn::FindInMap” : [ “Mappings”, { “RegionMap” : [“us-east-1”, “us-west-1”] }, “32”]}
    • B. { “Fn::FindInMap” : [ “Mappings”, { “Ref” : “AWS::Region” }, “32”]}
    • C. { “Fn::FindInMap” : [ “RegionMap”, { “Ref” : “AWS::Region” }, “32”]}
    • D. { “Fn::FindInMap” : [ “RegionMap”, { “RegionMap” : “AWS::Region” }, “32”]}

    Answer: C.
    The intrinsic function Fn::FindInMap returns the value corresponding to keys in a two-level map that is declared in the Mappings section.
    You can use the Fn::FindInMap function to return a named value based on a specified key. The following example template contains an Amazon EC2 resource whose ImageId property is assigned by the FindInMap function. The FindInMap function specifies key as the region where the stack is created (using the AWS::Region pseudo parameter) and HVM64 as the name of the value to map to.
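    For illustration, a hypothetical Mappings section consistent with answer C (the AMI IDs other than ami-6411e20d are made-up placeholders):

    {
      "Mappings" : {
        "RegionMap" : {
          "us-east-1" : { "32" : "ami-6411e20d", "64" : "ami-7a11e213" },
          "us-west-1" : { "32" : "ami-c9c7978c", "64" : "ami-cfc7978a" }
        }
      }
    }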

    Top

    Q56: Your application triggers events that must be delivered to all your partners. The exact partner list is constantly changing: some partners run a highly available endpoint, and other partners’ endpoints are online only a few hours each night. Your application is mission-critical, and communication with your partners must not introduce delay in its operation. A delay in delivering the event to one partner cannot delay delivery to other partners.

    What is an appropriate way to code this?

    • A. Implement an Amazon SWF task to deliver the message to each partner. Initiate an Amazon SWF workflow execution.
    • B. Send the event as an Amazon SNS message. Instruct your partners to create an HTTP endpoint, and subscribe their HTTP endpoint to the Amazon SNS topic.
    • C. Create one SQS queue per partner. Iterate through the queues and write the event to each one. Partners retrieve messages from their queue.
    • D. Send the event as an Amazon SNS message. Create one SQS queue per partner that subscribes to the Amazon SNS topic. Partners retrieve messages from their queue.

    Answer: D.
    There are two challenges here: the command must be “fanned out” to a variable pool of partners, and your app must be decoupled from the partners because they are not highly available.
    Sending the command as an SNS message achieves the fan-out via its publication/subscribe model, and using an SQS queue for each partner decouples your app from the partners. Writing the message to each queue directly would cause more latency for your app and would require your app to monitor which partners were active. It would be difficult to write an Amazon SWF workflow for a rapidly changing set of partners.

    Reference: AWS SNS Faqs
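    For illustration, a minimal boto3 sketch of this fan-out pattern (the topic and queue names are hypothetical; in practice each queue also needs a queue policy that allows the topic to send to it):

    import boto3

    sns = boto3.client("sns")
    sqs = boto3.client("sqs")

    topic_arn = sns.create_topic(Name="partner-events")["TopicArn"]

    # One queue per partner, each subscribed to the topic.
    for partner in ["partner-a", "partner-b"]:
        queue_url = sqs.create_queue(QueueName=f"{partner}-events")["QueueUrl"]
        queue_arn = sqs.get_queue_attributes(
            QueueUrl=queue_url, AttributeNames=["QueueArn"]
        )["Attributes"]["QueueArn"]
        sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

    # A single publish is fanned out to every subscribed queue.
    sns.publish(TopicArn=topic_arn, Message="new event")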

    Top

    Q57: You have a three-tier web application (web, app, and data) in a single Amazon VPC. The web and app tiers each span two Availability Zones, are in separate subnets, and sit behind ELB Classic Load Balancers. The data tier is a Multi-AZ Amazon RDS MySQL database instance in database subnets.
    When you call the database tier from your app tier instances, you receive a timeout error. What could be causing this?

    • A. The IAM role associated with the app tier instances does not have rights to the MySQL database.
    • B. The security group for the Amazon RDS instance does not allow traffic on port 3306 from the app
      instances.
    • C. The Amazon RDS database instance does not have a public IP address.
    • D. There is no route defined between the app tier and the database tier in the Amazon VPC.

    Answer: B.
    Security groups block all network traffic by default, so if a group is not correctly configured, it can lead to a timeout error. MySQL, not IAM, controls access to the MySQL database. All subnets in an Amazon VPC have routes to all other subnets. Internal traffic within an Amazon VPC does not require public IP addresses.

    Reference: Security Groups for Your VPC

    Top

    Q58: What type of block cipher does Amazon S3 offer for server side encryption?

    • A. RC5
    • B. Blowfish
    • C. Triple DES
    • D. Advanced Encryption Standard

    Answer: D
    Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.

    Reference: Protecting Data Using Server-Side Encryption

    Top

    Q59: You have written an application that uses the Elastic Load Balancing service to spread traffic to several web servers. Your users complain that they are sometimes forced to login again in the middle of using your application, after they have already logged in. This is not behaviour you have designed. What is a possible solution to prevent this happening?

    • A. Use instance memory to save session state.
    • B. Use instance storage to save session state.
    • C. Use EBS to save session state
    • D. Use ElastiCache to save session state.
    • E. Use Glacier to save session state.

    Answer: D.
    You can cache a variety of objects using the service, from the content in persistent data stores (such as Amazon RDS, DynamoDB, or self-managed databases hosted on EC2) to dynamically generated web pages (with Nginx for example), or transient session data that may not require a persistent backing store. You can also use it to implement high-frequency counters to deploy admission control in high volume web applications.

    Reference: Amazon ElastiCache FAQs

    Top

    Q60: You are writing to a DynamoDB table and receive the following exception: “ProvisionedThroughputExceededException”. However, according to your CloudWatch metrics for the table, you are not exceeding your provisioned throughput. What could be an explanation for this?

    • A. You haven’t provisioned enough DynamoDB storage instances
    • B. You’re exceeding your capacity on a particular Range Key
    • C. You’re exceeding your capacity on a particular Hash Key
    • D. You’re exceeding your capacity on a particular Sort Key
    • E. You haven’t configured DynamoDB Auto Scaling triggers

    Answer: C.
    The primary key that uniquely identifies each item in a DynamoDB table can be simple (a partition key only) or composite (a partition key combined with a sort key).
    Generally speaking, you should design your application for uniform activity across all logical partition keys in the Table and its secondary indexes.
    You can determine the access patterns that your application requires, and estimate the total read capacity units and write capacity units that each table and secondary Index requires.

    As traffic starts to flow, DynamoDB automatically supports your access patterns using the throughput you have provisioned, as long as the traffic against a given partition key does not exceed 3000 read capacity units or 1000 write capacity units.

    Reference: Best Practices for Designing and Using Partition Keys Effectively
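    A minimal sketch of write sharding, one common mitigation for a hot partition key: spread writes across N suffixed keys (the table and attribute names are hypothetical):

    import random
    import boto3

    table = boto3.resource("dynamodb").Table("events")  # hypothetical table
    N_SHARDS = 10

    def put_event(customer_id, payload):
        # e.g. "cust1234.7" - writes are spread over 10 logical partition keys
        sharded_key = f"{customer_id}.{random.randint(0, N_SHARDS - 1)}"
        table.put_item(Item={"pk": sharded_key, "payload": payload})

    # Readers must then query all N suffixes and merge the results.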

    Top

    Q61: Which DynamoDB limits can be raised by contacting AWS support?

    • A. The number of hash keys per account
    • B. The maximum storage used per account
    • C. The number of tables per account
    • D. The number of local secondary indexes per account
    • E. The number of provisioned throughput units per account

    Answer: C. and E.

    For any AWS account, there is an initial limit of 256 tables per region.
    AWS places some default limits on the throughput you can provision.
    These are the limits unless you request a higher amount.
    To request a service limit increase, see https://aws.amazon.com/support.
    Reference: Limits in DynamoDB

    Top

    Q62: AWS CodeBuild allows you to compile your source code, run unit tests, and produce deployment artifacts by:

    • A. Allowing you to provide an Amazon Machine Image to take these actions within
    • B. Allowing you to select an Amazon Machine Image and provide a User Data bootstrapping script to prepare an instance to take these actions within
    • C. Allowing you to provide a container image to take these actions within
    • D. Allowing you to select from pre-configured environments to take these actions within

    Answer: C. and D.
    You can provide your own custom container image in which to build your deployment artifacts.
    You never actually pass a specific AMI to CodeBuild, though you can provide a custom Docker image, which you can effectively ‘bootstrap’ for the purposes of your build.
    Reference: AWS CodeBuild Faqs

    Top

    Q63: Which of the following will not cause a CloudFormation stack deployment to rollback?

    • A. The template contains invalid JSON syntax
    • B. An AMI specified in the template exists in a different region than the one in which the stack is being deployed.
    • C. A subnet specified in the template does not exist
    • D. The template specifies an instance-store backed AMI and an incompatible EC2 instance type.

    Answer: A.
    Invalid JSON syntax will cause an error message during template validation. Until the syntax is fixed, the template will not be able to deploy resources, so there will not be a need to or opportunity to rollback.
    Reference: AWS CloudFormation FAQs
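    For illustration, a minimal boto3 sketch showing that a syntactically invalid template is rejected at validation time, before any resources are created or rolled back:

    import boto3
    from botocore.exceptions import ClientError

    cfn = boto3.client("cloudformation")
    try:
        cfn.validate_template(TemplateBody='{"Resources": ')  # truncated, invalid JSON
    except ClientError as e:
        print(e.response["Error"]["Message"])  # ValidationError - nothing was deployed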

    Top

    Q64: Your team is using CodeDeploy to deploy an application which uses secure parameters that are stored in the AWS Systems Manager Parameter Store. What two options below must be completed so CodeDeploy can deploy the application?

    • A. Use ssm get-parameters with the --with-decryption option
    • B. Add permissions using AWS access keys
    • C. Add permissions using AWS IAM role
    • D. Use ssm get-parameters with the --with-no-decryption option

    Answer: A. and C.

    Reference: Add permission using IAM role
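    For illustration, a minimal boto3 sketch of option A (the parameter name is hypothetical); the instance itself would get its permissions from the IAM role in option C:

    import boto3

    ssm = boto3.client("ssm")
    response = ssm.get_parameters(
        Names=["/myapp/prod/db-password"],  # hypothetical parameter name
        WithDecryption=True,  # the CLI equivalent is --with-decryption
    )
    for parameter in response["Parameters"]:
        print(parameter["Name"], parameter["Value"])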

    Top

    Q65: A corporate web application is deployed within an Amazon VPC, and is connected to the corporate data center via IPSec VPN. The application must authenticate against the on-premise LDAP server. Once authenticated, logged-in users can only access an S3 keyspace specific to the user. Which of the solutions below meet these requirements? (Choose 2)

    • A. The application authenticates against LDAP, and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM Role. The application can use the temporary credentials to access the S3 keyspace.
    • B. Develop an identity broker which authenticates against LDAP, and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 keyspace
    • C. Develop an identity broker which authenticates against IAM Security Token Service to assume an IAM Role to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the app
    • D. The application authenticates against LDAP. The application then calls the IAM Security Service to login to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate S3 bucket.

    Answer: A. and B.
    The question clearly says “authenticate against LDAP”. Temporary credentials come from STS. Federated user credentials come from the identity broker.
    Reference: IAM faqs
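    For illustration, a minimal boto3 sketch of the identity-broker half of option B: after the LDAP check succeeds, the broker requests federated credentials scoped to the user’s S3 keyspace (the bucket name and policy are hypothetical):

    import json
    import boto3

    sts = boto3.client("sts")

    def credentials_for(username):
        policy = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                # Hypothetical bucket; each user only sees their own prefix.
                "Resource": f"arn:aws:s3:::my-app-bucket/{username}/*",
            }],
        }
        return sts.get_federation_token(
            Name=username, Policy=json.dumps(policy)
        )["Credentials"]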

    Top


    Q67: When users are signing in to your application using Cognito, what do you need to do to make sure that users with compromised credentials must enter a new password?

    • A. Create a user pool in Cognito
    • B. Block use for “Compromised credential” in the Basic security section
    • C. Block use for “Compromised credential” in the Advanced security section
    • D. Use secure remote password

    Answer: A. and C.
    Amazon Cognito can detect if a user’s credentials (user name and password) have been compromised elsewhere. This can happen when users reuse credentials at more than one site, or when they use passwords that are easy to guess.

    From the Advanced security page in the Amazon Cognito console, you can choose whether to allow, or block the user if compromised credentials are detected. Blocking requires users to choose another password. Choosing Allow publishes all attempted uses of compromised credentials to Amazon CloudWatch. For more information, see Viewing Advanced Security Metrics.

    You can also choose whether Amazon Cognito checks for compromised credentials during sign-in, sign-up, and password changes.

    Note: Currently, Amazon Cognito doesn’t check for compromised credentials for sign-in operations with Secure Remote Password (SRP) flow, which doesn’t send the password during sign-in. Sign-ins that use the AdminInitiateAuth API with ADMIN_NO_SRP_AUTH flow and the InitiateAuth API with USER_PASSWORD_AUTH flow are checked for compromised credentials.

    Reference: AWS Cognito

    Top

    Q68: You work in a large enterprise that is currently evaluating options to migrate your 27 GB Subversion code base. Which of the following options is the best choice for your organization?

    • A. AWS CodeHost
    • B. AWS CodeCommit
    • C. AWS CodeStart
    • D. None of these

    Answer: D.
    None of these. While CodeCommit is a good option for Git repositories, it is not able to host Subversion source control.

    Reference: Migration to CodeCommit

    Top

    Q69: You are on a development team and you need to migrate your Spring Application over to AWS. Your team is looking to build, modify, and test new versions of the application. What AWS services could help you migrate your app?

    • A. Elastic Beanstalk
    • B. SQS
    • C. EC2
    • D. AWS CodeDeploy

    Answer: A. C. and D.
    Amazon EC2 can be used to deploy various applications to your AWS Infrastructure.
    AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, or serverless Lambda functions.

    Reference: AWS Deployment Faqs

    Top

    Q70:
    You are a developer responsible for managing a high-volume API running in your company’s datacenter. You have been asked to implement a similar API, but one that has potentially higher volume, and you must do it in the most cost-effective way, using as few services and components as possible. The API stores and fetches data from a key-value store. Which services could you utilize in AWS?

    • A. DynamoDB
    • B. Lambda
    • C. API Gateway
    • D. EC2

    Answer: A. and C.
    NoSQL databases like DynamoDB are designed for key value usage. DynamoDB can also handle incredible volumes and is cost effective. AWS API Gateway makes it easy for developers to create, publish, maintain, monitor, and secure APIs.

    Reference: API Gateway Faqs

    Top

    Q71: By default, what event occurs if your CloudFormation receives an error during creation?

    • A. DELETE_IN_PROGRESS
    • B. CREATION_IN_PROGRESS
    • C. DELETE_COMPLETE
    • D. ROLLBACK_IN_PROGRESS

    Answer: D.

    Reference: Check Status Code

    Top

    Q72:
    AWS X-Ray was recently implemented inside a service that you work on. Several weeks later, after a new marketing push, that service started seeing a large spike in traffic, and you’ve been tasked with investigating a few issues that have started coming up. When you review the X-Ray data, you can’t find enough information to draw conclusions, so you decide to:

    • A. Start passing in the X-Amzn-Trace-Id: True HTTP header from your upstream requests
    • B. Refactor the service to include additional calls to the X-Ray API using an AWS SDK
    • C. Update the sampling algorithm to increase the sample rate and instrument X-Ray to collect more pertinent information
    • D. Update your application to use the custom API Gateway TRACE method to send in data

    Answer: C.
    This is a good way to solve the problem – by customizing the sampling so that you can get more relevant information.

    Reference: AWS X-Ray facts

    Top

    Q74: X-Ray metadata:

    • A. Associates request data with a particular Trace-ID
    • B. Stores key-value pairs of any type that are not searchable
    • C. Collects at the service layer to provide information on the overall health of the system
    • D. Stores key-value pairs of searchable information

    Answer: A. and B.
    X-Ray metadata stores key-value pairs of any type that are not searchable.
    Reference: AWS X-Rays faqs

    Top

    Q75: Which of the following is the right sequence that gets called in CodeDeploy when you use Lambda hooks in an EC2/On-Premise Deployment?

    • A. BeforeInstall – AfterInstall – ValidateService – ApplicationStart
    • B. BeforeInstall – AfterInstall – ApplicationStop – ApplicationStart
    • C. BeforeInstall – ApplicationStop – ValidateService – ApplicationStart
    • D. ApplicationStop – BeforeInstall – AfterInstall – ApplicationStart

    Answer: D.
    In an in-place deployment, including the rollback of an in-place deployment, event hooks are run in the following order: ApplicationStop – DownloadBundle – BeforeInstall – Install – AfterInstall – ApplicationStart – ValidateService.

    Note: An AWS Lambda hook is one Lambda function specified with a string on a new line after the name of the lifecycle event. Each hook is executed once per deployment. The following are descriptions of the lifecycle events where you can run a hook during an Amazon ECS deployment:

    • BeforeInstall – Use to run tasks before the replacement task set is created. One target group is associated with the original task set. If an optional test listener is specified, it is associated with the original task set. A rollback is not possible at this point.
    • AfterInstall – Use to run tasks after the replacement task set is created and one of the target groups is associated with it. If an optional test listener is specified, it is associated with the original task set. The results of a hook function at this lifecycle event can trigger a rollback.
    • AfterAllowTestTraffic – Use to run tasks after the test listener serves traffic to the replacement task set. The results of a hook function at this point can trigger a rollback.
    • BeforeAllowTraffic – Use to run tasks after the second target group is associated with the replacement task set, but before traffic is shifted to the replacement task set. The results of a hook function at this lifecycle event can trigger a rollback.
    • AfterAllowTraffic – Use to run tasks after the second target group serves traffic to the replacement task set. The results of a hook function at this lifecycle event can trigger a rollback.

    In an Amazon ECS deployment, event hooks run in the order listed above.

    For in-place deployments, the six hooks related to blocking and allowing traffic apply only if you specify a Classic Load Balancer, Application Load Balancer, or Network Load Balancer from Elastic Load Balancing in the deployment group.

    Note: The Start, DownloadBundle, Install, and End events in the deployment cannot be scripted, which is why they appear in gray in the documentation’s diagram. However, you can edit the ‘files’ section of the AppSpec file to specify what’s installed during the Install event.

    Reference: Appspec.yml specs

    Top

    Q76:
    Describe the process of registering a mobile device with SNS push notification service using GCM.

    • A. Receive Registration ID and token for each mobile device. Then, register the mobile application with Amazon SNS, and pass the GCM token credentials to Amazon SNS
    • B. Pass device token to SNS to create mobile subscription endpoint for each mobile device, then request the device token from each mobile device. SNS then communicates on your behalf to the GCM service
    • C. None of these are correct
    • D. Submit GCM notification credentials to Amazon SNS, then receive the Registration ID for each mobile device. After that, pass the device token to SNS, and SNS then creates a mobile subscription endpoint for each device and communicates with the GCM service on your behalf

    Answer: D.
    When you first register an app and mobile device with a notification service, such as Apple Push Notification Service (APNS) and Google Cloud Messaging for Android (GCM), device tokens or registration IDs are returned from the notification service. When you add the device tokens or registration IDs to Amazon SNS, they are used with the PlatformApplicationArn API to create an endpoint for the app and device. When Amazon SNS creates the endpoint, an EndpointArn is returned. The EndpointArn is how Amazon SNS knows which app and mobile device to send the notification message to.

    Reference: AWS Mobile Push Send device token
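    For illustration, a minimal boto3 sketch of this registration flow (the application name, credentials, and device token are hypothetical placeholders):

    import boto3

    sns = boto3.client("sns")

    # Step 1: submit the GCM notification credentials to SNS.
    app_arn = sns.create_platform_application(
        Name="my-android-app",  # hypothetical
        Platform="GCM",
        Attributes={"PlatformCredential": "<GCM-server-api-key>"},
    )["PlatformApplicationArn"]

    # Step 2: pass each device's registration token to SNS to create an endpoint.
    endpoint_arn = sns.create_platform_endpoint(
        PlatformApplicationArn=app_arn,
        Token="<device-registration-token>",
    )["EndpointArn"]

    # SNS now communicates with GCM on your behalf when you publish to the endpoint.
    sns.publish(TargetArn=endpoint_arn, Message="hello device")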

    Top

    Q77:
    You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site. At some point you find out that other sites have been linking to the photos on your site, causing loss to your business. What is an effective method to mitigate this?

    • A. Store photos on an EBS volume of the web server.
    • B. Block the IPs of the offending websites in Security Groups.
    • C. Remove public read access and use signed URLs with expiry dates.
    • D. Use CloudFront distributions for static content.

    Answer: C.
    This solves the issue; it does require you to modify your website, but since your website already uses S3, it doesn’t require a lot of changes. See the docs for details: http://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html

    Reference: AWS S3 shared objects presigned urls

    CloudFront on its own doesn’t prevent unauthorized access and requires you to add a whole new layer to your stack (which may make sense anyway). You can serve private content, but you’d have to use signed URLs or similar mechanism. Here are the docs: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
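    For illustration, a minimal boto3 sketch of generating an expiring signed URL for a photo (the bucket and key are hypothetical):

    import boto3

    s3 = boto3.client("s3")
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-photo-bucket", "Key": "photos/cat.jpg"},  # hypothetical
        ExpiresIn=3600,  # the link stops working after one hour
    )
    print(url)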

    Top

    Q78: How can you control access to the API Gateway in your environment?

    • A. Cognito User Pools
    • B. Lambda Authorizers
    • C. API Methods
    • D. API Stages

    Answer: A. and B.
    Access to a REST API Using Amazon Cognito User Pools as Authorizer
    As an alternative to using IAM roles and policies or Lambda authorizers (formerly known as custom authorizers), you can use an Amazon Cognito user pool to control who can access your API in Amazon API Gateway.

    To use an Amazon Cognito user pool with your API, you must first create an authorizer of the COGNITO_USER_POOLS type and then configure an API method to use that authorizer. After the API is deployed, the client must first sign the user in to the user pool, obtain an identity or access token for the user, and then call the API method with one of the tokens, which are typically set in the request’s Authorization header. The API call succeeds only if the required token is supplied and the supplied token is valid; otherwise, the client isn’t authorized to make the call because the client did not have credentials that could be authorized.

    The identity token is used to authorize API calls based on identity claims of the signed-in user. The access token is used to authorize API calls based on the custom scopes of specified access-protected resources. For more information, see Using Tokens with User Pools and Resource Server and Custom Scopes.

    Reference: AWS API Gateway integrate with Cognito
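    For illustration, a minimal boto3 sketch that attaches a Cognito user pool authorizer to a REST API (the API ID and user pool ARN are hypothetical):

    import boto3

    apigw = boto3.client("apigateway")
    authorizer = apigw.create_authorizer(
        restApiId="a1b2c3d4e5",  # hypothetical API id
        name="cognito-authorizer",
        type="COGNITO_USER_POOLS",
        providerARNs=["arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_ABC123"],
        identitySource="method.request.header.Authorization",
    )
    # Each protected method is then configured to use authorizer["id"].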

    Top

    Q79: What kind of message does SNS send to endpoints?

    • A. An XML document with parameters like Message, Source, Destination, Type
    • B. A JSON document with parameters like Message, Signature, Subject, Type.
    • C. An XML document with parameters like Message, Signature, Subject, Type
    • D. A JSON document with parameters like Message, Source, Destination, Type

    Answer: B.
    Amazon SNS messages do not publish the source/destination

    Reference: AWS SNS Faqs

    Top

    Q80: Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?

    • A. Set the imaging queue MessageRetentionPeriod attribute to 20 seconds.
    • B. Set the imaging queue ReceiveMessageWaitTimeSeconds attribute to 20 seconds.
    • C. Set the imaging queue VisibilityTimeout attribute to 20 seconds.
    • D. Set the DelaySeconds parameter of a message to 20 seconds.

    Answer: B.
    ReceiveMessageWaitTimeSeconds, when set to greater than zero, enables long polling. Long polling allows the Amazon SQS service to wait until a message is available in the queue before sending a response. Short polling continuously polls a queue and can return false empty responses. Enabling long polling reduces the number of poll requests, false empty responses, and wasted CPU cycles.
    Reference: AWS SQS Long Polling

    Top

    Q81: You’re using CloudFormation templates to build out staging environments. What section of the CloudFormation template would you edit in order to allow the user to specify the PEM key-name at start time?

    • A. Resources Section
    • B. Parameters Section
    • C. Mappings Section
    • D. Declaration Section

    Answer: B.

    The Parameters section in CloudFormation allows you to accept user input when starting the CloudFormation template, and lets you reference that input as a variable throughout the template. Other examples might include asking the user starting the template to provide Domain admin passwords, instance size, PEM key, region, and other dynamic options.

    Reference: AWS CloudFormation Parameters
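    For illustration, a hypothetical Parameters section for this use case; the value is then referenced elsewhere in the template with { "Ref" : "KeyName" }:

    {
      "Parameters" : {
        "KeyName" : {
          "Type" : "AWS::EC2::KeyPair::KeyName",
          "Description" : "Name of an existing EC2 key pair to enable SSH access"
        }
      }
    }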

    Top

    Q82: You are writing an AWS CloudFormation template and you want to assign values to properties that will not be available until runtime. You know that you can use intrinsic functions to do this but are unsure as to which part of the template they can be used in. Which of the following is correct in describing how you can currently use intrinsic functions in an AWS CloudFormation template?

    • A. You can use intrinsic functions in any part of a template, except AWSTemplateFormatVersion and Description
    • B. You can use intrinsic functions in any part of a template.
    • C. You can use intrinsic functions only in the resource properties part of a template.
    • D. You can only use intrinsic functions in specific parts of a template. You can use intrinsic functions in resource properties, metadata attributes, and update policy attributes.

    Answer: D.

    You can use intrinsic functions only in specific parts of a template. Currently, you can use intrinsic functions in resource properties, outputs, metadata attributes, and update policy attributes. You can also use intrinsic functions to conditionally create stack resources.
    Reference: AWS Intrinsic Functions

    Top




    Other AWS Facts and Summaries and Questions/Answers Dump

AWS Certification Exam Prep: S3 Facts, Summaries, Questions and Answers

AWS S3 Facts and summaries, AWS S3 Top 10 Questions and Answers Dump

Definition 1: Amazon S3 or Amazon Simple Storage Service is a “simple storage service” offered by Amazon Web Services that provides object storage through a web service interface. Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to run its global e-commerce network.

Definition 2: Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.




AWS S3 Facts and summaries

  1. S3 is a universal namespace, meaning each S3 bucket you create must have a unique name that is not being used by anyone else in the world.
  2. S3 is object based: i.e. it allows you to upload files.
  3. Files can be from 0 Bytes to 5 TB
  4. What is the maximum length, in bytes, of a DynamoDB range primary key attribute value?
    The maximum length of a DynamoDB range primary key attribute value is 2048 bytes (NOT 256 bytes).
  5. S3 has unlimited storage.
  6. Files are stored in Buckets.
  7. Read after write consistency for PUTS of new Objects
  8. Eventual Consistency for overwrite PUTS and DELETES (can take some time to propagate)
  9. S3 Storage Classes/Tiers:
    • S3 Standard (durable, immediately available, frequently accesses)
    • Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering): It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access.
    • S3 Standard-Infrequent Access – S3 Standard-IA (durable, immediately available, infrequently accessed)
    • S3 – One Zone-Infrequent Access – S3 One Zone IA: Same as Standard-IA; however, data is stored in a single Availability Zone only
    • S3 – Reduced Redundancy Storage (data that is easily reproducible, such as thumbnails, etc.)
    • Glacier – Archived data, where you can wait 3-5 hours before accessing

    You can have a bucket that has different objects stored in S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA.

  10. The default URL for S3 hosted websites lists the bucket name first followed by s3-website-region.amazonaws.com . Example: enoumen.com.s3-website-us-east-1.amazonaws.com
  11. Core fundamentals of an S3 object
    • Key (name)
    • Value (data)
    • Version (ID)
    • Metadata
    • Sub-resources (used to manage bucket-specific configuration)
      • Bucket Policies, ACLs,
      • CORS
      • Transfer Acceleration
  12. Object-based storage only for files
  13. Not suitable to install OS on.
  14. Successful uploads will generate an HTTP 200 status code.
  15. S3 Security – Summary
    • By default, all newly created buckets are PRIVATE.
    • You can set up access control to your buckets using:
      • Bucket Policies – Applied at the bucket level
      • Access Control Lists – Applied at an object level.
    • S3 buckets can be configured to create access logs, which log all requests made to the S3 bucket. These logs can be written to another bucket.
  16. S3 Encryption
    • Encryption In-Transit (SSL/TLS)
    • Encryption At Rest:
      • Server side Encryption (SSE-S3, SSE-KMS, SSE-C)
      • Client Side Encryption
    • Remember that we can use a Bucket policy to prevent unencrypted files from being uploaded by creating a policy which only allows requests which include the x-amz-server-side-encryption parameter in the request header.
  17. S3 CORS (Cross Origin Resource Sharing):
    CORS defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.  

    • Used to enable cross origin access for your AWS resources, e.g. S3 hosted website accessing javascript or image files located in another bucket. By default, resources in one bucket cannot access resources located in another. To allow this we need to configure CORS on the bucket being accessed and enable access for the origin (bucket) attempting to access.
    • Always use the S3 website URL, not the regular bucket URL. E.g. use http://acloudguru.s3-website.eu-west-2.amazonaws.com, not https://s3-eu-west-2.amazonaws.com/acloudguru
  18. S3 CloudFront:
    • Edge locations are not just READ only – you can WRITE to them too (i.e put an object on to them.)
    • Objects are cached for the life of the TTL (Time to Live)
    • You can clear cached objects, but you will be charged. (Invalidation)
  19. S3 Performance optimization – 2 main approaches to Performance Optimization for S3:
    • GET-Intensive Workloads – Use Cloudfront
    • Mixed Workload – Avoid sequential key names for your S3 objects. Instead, add a random prefix like a hex hash to the key name to prevent multiple objects from being stored on the same partition.
      • mybucket/7eh4-2019-03-04-15-00-00/cust1234234/photo1.jpg
      • mybucket/h35d-2019-03-04-15-00-00/cust1234234/photo2.jpg
      • mybucket/o3n6-2019-03-04-15-00-00/cust1234234/photo3.jpg
  20. The best way to handle large objects uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts.
  21. You can enable versioning on a bucket, even if that bucket already has objects in it. The already existing objects, though, will show their versions as null. All new objects will have version IDs.
  22. Bucket names cannot start with a . or - character. S3 bucket names can contain both the . and - characters, but there can only be one . or one - between labels. E.g. mybucket-com and mybucket.com are valid names, but mybucket--com and mybucket..com are not valid bucket names.
  23. What is the maximum number of S3 buckets allowed per AWS account (by default)? 100
  24. You successfully upload an item to the us-east-1 region. You then immediately make another API call and attempt to read the object. What will happen?
    All AWS regions now have read-after-write consistency for PUT operations of new objects. Read-after-write consistency allows you to retrieve objects immediately after creation in Amazon S3. Other actions still follow the eventual consistency model (where you will sometimes get stale results if you have recently made changes)
  25. S3 bucket policies require a Principal be defined. Review the access policy elements here
  26. What checksums does Amazon S3 employ to detect data corruption?

    Amazon S3 uses a combination of Content-MD5 checksums and cyclic redundancy checks (CRCs) to detect data corruption. Amazon S3 performs these checksums on data at rest and repairs any corruption using redundant data. In addition, the service calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.

Top
Reference: AWS S3




AWS S3 Top 10 Questions and Answers Dump

Q0: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes longer than expected to upload the object. You want to improve the performance of the application. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket

Answer: C.
All other options are invalid, since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
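For illustration, a minimal boto3 sketch: the high-level transfer manager switches to the Multipart Upload API automatically above a configurable threshold (the bucket and file names are hypothetical):

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Use multipart for anything over 100 MB, uploading 4 parts in parallel.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024, max_concurrency=4)

s3.upload_file("photo-archive.zip", "my-bucket", "uploads/photo-archive.zip", Config=config)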

Top

Q2: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from Amazon S3 buckets?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::Layerversion
  • D. AWS::Serverless::Function

Answer: B.
The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from Amazon S3 buckets.
Reference: Declaring Serverless Resources

Top

Q3: A static web site has been hosted on a bucket and is now being accessed by users. The JavaScript section of one of the web pages has been changed to access data which is hosted in another S3 bucket. Now that same web page is no longer loading in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket

Answer: C.

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

Cross-Origin Resource Sharing: Use-case Scenarios. The following are example scenarios for using CORS:

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.

Reference: Cross-Origin Resource Sharing (CORS)
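For illustration, a minimal boto3 sketch of enabling CORS on the bucket being accessed (the bucket name and origin are hypothetical):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_cors(
    Bucket="data-bucket",  # hypothetical: the bucket holding the data
    CORSConfiguration={
        "CORSRules": [{
            # Hypothetical origin: the bucket hosting the static website
            "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
            "AllowedMethods": ["GET", "PUT"],
            "AllowedHeaders": ["*"],
        }]
    },
)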

Top

Q4: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below.

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.

Answer: C.
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on the key values generated via the user id.

Reference: The AWS Security Token Service (STS)

Top

Q5: Both ACLs and Bucket Policies can be used to grant access to S3 buckets. Which of the following statements is true about ACLs and Bucket policies?

  • A. Bucket Policies are written in JSON and ACLs are written in XML
  • B. ACLs can be attached to S3 objects or S3 Buckets
  • C. Bucket Policies and ACLs are written in JSON
  • D. Bucket policies are only attached to s3 buckets, ACLs are only attached to s3 objects

Answer: A. and B.
Only Bucket Policies are written in JSON, ACLs are written in XML.
While Bucket policies are indeed only attached to S3 buckets, ACLs can be attached to S3 Buckets OR S3 Objects.

Top

Q6: What are good options to improve S3 performance when you have significantly high numbers of GET requests?

  • A. Introduce random prefixes to S3 objects
  • B. Introduce random suffixes to S3 objects
  • C. Setup CloudFront for S3 objects
  • D. Migrate commonly used objects to Amazon Glacier

Answer: C
CloudFront caching is an excellent way to avoid putting extra strain on the S3 service and to improve the response times of requests by caching data closer to users at CloudFront locations.
S3 Transfer Acceleration optimizes the TCP protocol and adds additional intelligence between the client and the S3 bucket, making S3 Transfer Acceleration a better choice if a higher throughput is desired. If you have objects that are smaller than 1GB or if the data set is less than 1GB in size, you should consider using Amazon CloudFront’s PUT/POST commands for optimal performance.
Reference: Amazon S3 Transfer Acceleration

Top

Q7: If an application is storing hourly log files from thousands of instances from a high traffic web site, which naming scheme would give optimal performance on S3?

  • A. Sequential
  • B. HH-DD-MM-YYYY-log_instanceID
  • C. YYYY-MM-DD-HH-log_instanceID
  • D. instanceID_log-HH-DD-MM-YYYY
  • E. instanceID_log-YYYY-MM-DD-HH

Answer: A. B. C. D. and E.
Amazon S3 now provides increased performance to support at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data, which can save significant processing time for no additional charge. Each S3 prefix can support these request rates, making it simple to increase performance significantly.
This S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance. That means you can now use logical or sequential naming patterns in S3 object naming without any performance implications.

Reference: Amazon S3 Announces Increased Request Rate Performance

Top

Q8: You are working with the S3 API and receive an error message: 409 Conflict. What is the possible cause of this error?

  • A. You’re attempting to remove a bucket without emptying the contents of the bucket first.
  • B. You’re attempting to upload an object to the bucket that is greater than 5TB in size.
  • C. Your request does not contain the proper metadata.
  • D. Amazon S3 is having internal issues.

Answer: A.
A 409 Conflict (BucketNotEmpty) is returned when you attempt to delete a bucket that still contains objects.

Reference: S3 Error codes

Top

Q9: You created three S3 buckets – “mywebsite.com”, “downloads.mywebsite.com”, and “www.mywebsite.com”. You uploaded your files and enabled static website hosting. You specified both of the default documents under the “enable static website hosting” header. You also set the “Make Public” permission for the objects in each of the three buckets. You create the Route 53 Aliases for the three buckets. You are going to have your end users test your websites by browsing to http://mywebsite.com/error.html, http://downloads.mywebsite.com/index.html, and http://www.mywebsite.com. What problems will your testers encounter?

  • A. http://mywebsite.com/error.html will not work because you did not set a value for the error.html file
  • B. There will be no problems, all three sites should work.
  • C. http://www.mywebsite.com will not work because the URL does not include a file name at the end of it.
  • D. http://downloads.mywebsite.com/index.html will not work because the “downloads” prefix is not a supported prefix for S3 websites using Route 53 aliases

Answer: B.
It used to be that the only allowed domain prefix when creating Route 53 aliases for S3 static websites was the “www” prefix. However, this is no longer the case: you can now use other subdomains.

Reference: Hosting a Static Website on Amazon S3

Top

Q10: Which of the following is NOT a common S3 API call?

  • A. UploadPart
  • B. ReadObject
  • C. PutObject
  • D. DownloadBucket

Answer: D.

Reference: s3api

Top




Other AWS Facts and Summaries

Binary Search Algorithm Implementation with Python

Binary Search  is an algorithm that finds the position of a target in a sorted array. Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array. Even though the idea is simple, implementing binary search correctly requires attention to some subtleties about its exit conditions and midpoint calculation, particularly if the values in the array are not all of the whole numbers in the range.

Source: https://en.wikipedia.org/wiki/Binary_search_algorithm

Below is the binary search algorithm implementation with Python:





# Test_Binary_Search.py
# Iterative binary search: returns the index of the target in a sorted array,
# or None if the target is not present.
def binary_search(array, target):
    left, right = 0, len(array) - 1
    while left <= right:
        middle = (left + right) // 2   # Python ints don't overflow, so this is safe
        if array[middle] < target:
            left = middle + 1          # target can only be in the right half
        elif array[middle] > target:
            right = middle - 1         # target can only be in the left half
        else:
            print("Target is at index %s" % middle)
            return middle
    print("Target not found")
    return None

# The input array must already be sorted for binary search to work.
array = [-2, 0, 1, 4, 7, 8, 10, 13, 16, 17, 21, 30, 45, 100, 150, 160, 191, 200]
binary_search(array, 201)   # prints "Target not found"
binary_search(array, 30)    # prints "Target is at index 11"

Binary Search Algorithm implementation with python

Budget to start a web app built on the MEAN stack

I want to start a web app built on the MEAN stack (mongoDB, express.js, angular, and node.js). How much would it cost me to host this site? What resources are there for hosting websites built on the MEAN stack?

I went through the same questions and concerns and I actually tried a couple of different cloud providers for similar environments and machines.


  1. At Digital Ocean, you can get a fully loaded machine to develop and host at $5 per month (512 MB RAM, 20 GB disk). You can even get a $10 credit by using this link of mine.[1] It is very easy to sign up and start. Just don’t use their web console to connect to your host. It is slow. I recommend using an ssh client to connect and it is very fast.
  2. GoDaddy will charge you around $8 per month for a similar MEAN stack host (512 MB RAM, 1 core processor, 20 GB disk) for your MEAN Stack development.
  3. Azure uses Bitnami’s MEAN stack on a minimum DS1_V2 machine (1 core, 3.5 GB RAM) and your average cost will be $52 per month if you never shut down the machine. The setup is a little bit more complicated than Digital Ocean, but very doable. I also recommend ssh to connect to the server and develop.
  4. AWS also offers Bitnami’s MEAN stack on EC2 instances similar to the Azure DS1_V2 described above and it is around $55 per month.
  5. Other suggestions

All those solutions will work fine and it all depends on your budget. If you are cheap like me and don’t have a big budget, go with Digital Ocean and start with $10 off with this code.

Basic Gotcha Linux Questions for IT DevOps and SysAdmin Interviews

Some IT DevOps, SysAdmin, and Developer positions require knowledge of the basic Linux operating system. Most of the time, we know the answers but forget them when we don’t practice very often. This refresher will help you prepare for the Linux portion of your IT interview by answering some gotcha Linux questions for IT DevOps and SysAdmin interviews.



I- Networking:

  1. How many bytes are there in a MAC address?
    48.
    MAC, Media Access Control, address is a globally unique identifier assigned to network devices, and therefore it is often referred to as hardware or physical address. MAC addresses are 6-byte (48-bits) in length, and are written in MM:MM:MM:SS:SS:SS format.
  2. What are the different parts of a TCP packet?
    The term TCP packet appears in both informal and formal usage, whereas in more precise terminology segment refers to the TCP protocol data unit (PDU), datagram to the IP PDU, and frame to the data link layer PDU: … A TCP segment consists of a segment header and a data section.
  3. Networking: Which command is used to initialize an interface, assign IP address, etc.
    ifconfig (interface configuration). The equivalent command on Windows is ipconfig.
    Other useful networking commands are: Ping, traceroute, netstat, dig, nslookup, route, lsof
  4. What’s the difference between TCP and UDP; Between DNS TCP and UDP?
    There are two types of Internet Protocol (IP) traffic. They are TCP or Transmission Control Protocol and UDP or User Datagram Protocol. TCP is connection oriented – once a connection is established, data can be sent bidirectional. UDP is a simpler, connectionless Internet protocol.
    The reality is that DNS queries can also use TCP port 53 if UDP port 53 is not accepted.
    DNS uses TCP for Zone Transfer over port :53.
    DNS uses UDP for DNS Queries over port :53.

  5. What are the default ports used by http, telnet, ftp, smtp, dns, snmp, and squid?
    All those services are part of the Application level of the TCP/IP protocol.
    http => 80
    telnet => 23
    ftp => 20 (data transfer), 21 (Connection established)
    smtp => 25
    dns => 53
    snmp => 161
    dhcp => 67 (server), 68 (Client)
    ssh => 22
    squid => 3128
  6. How many hosts are available in a subnet (Class B and C networks)?
    A Class B network (/16) provides 2^16 − 2 = 65,534 usable host addresses; a Class C network (/24) provides 2^8 − 2 = 254. Two addresses in each subnet are reserved for the network and broadcast addresses.
  7. How DNS works?
    When you enter a URL into your Web browser, your DNS server uses its resources to resolve the name into the IP address for the appropriate Web server.
  8. What is the difference between class A, class B and class C IP addresses?
    Class A Networks (/8 Prefixes)
    This network has an 8-bit network prefix. IP addresses range from 0.0.0.0 to 127.255.255.255
    Class B Networks (/16 Prefixes)
    This network has a 16-bit network prefix. IP addresses range from 128.0.0.0 to 191.255.255.255
    Class C Networks (/24 Prefixes)
    This network has a 24-bit network prefix. IP addresses range from 192.0.0.0 to 223.255.255.255
  9. Difference between ospf and bgp?
    The first reason is that BGP is more scalable than OSPF. BGP is an EGP (Exterior Gateway Protocol) used to exchange routing information between autonomous systems, which a normal IGP like OSPF cannot do. Generally speaking, OSPF and BGP are routing protocols for two different things: OSPF is an IGP (Interior Gateway Protocol) and is used internally within a company’s network to provide routing.

II- Operating System

  1. How to find the Operating System version?
    $ uname -a
    To check the distribution, for Red Hat for example: $ cat /etc/redhat-release
  2. How to list all running processes?
    top
    To list java processes: ps -ef | grep java
    To list processes listening on a specific port:
    lsof -i:80
    (the Windows equivalent is netstat -aon | findstr :port_number)
  3. How to check disk space?
    df shows the amount of disk space used and available.
    du displays the amount of disk used by the specified files and for each subdirectories.
    To drill down and find out which file is filling up a drive: du -ks /drive_name/* | sort -nr | head
  4. How to check memory usage?
    free or cat /proc/meminfo
  5. What is the load average?
    It is the average sum of the number of processes waiting in the queue and the number of processes currently executing, over periods of 1, 5 and 15 minutes. Use top to find the load average.
  6. What is a load balancer?
    A load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase capacity (concurrent users) and reliability of applications.
  7. What is the Linux Kernel?
    The Linux Kernel is a low-level systems software whose main role is to manage hardware resources for the user. It is also used to provide an interface for user-level interaction.
  8. What is the default kill signal?
    There are many different signals that can be sent (see signal for a full list), although the signals in which users are generally most interested are SIGTERM (“terminate”) and SIGKILL (“kill”). The default signal sent is SIGTERM.
    kill 1234
    kill -s TERM 1234
    kill -TERM 1234
    kill -15 1234
  9. Describe Linux boot process
    BIOS => MBR => GRUB => KERNEL => INIT => RUN LEVEL
    As power comes up, the BIOS (Basic Input/Output System) is given control and executes MBR (Master Boot Record). The MBR executes GRUB (Grand Unified Boot Loader). GRUB executes Kernel. Kernel executes /sbin/init. Init executes run level programs. Run level programs are executed from /etc/rc.d/rc*.d
    Mac OS X Boot Process:

    Boot ROM (firmware, part of the hardware): the BootROM firmware is activated.
    POST (Power-On Self Test): initializes some hardware interfaces and verifies that sufficient memory is available and in a good state.
    EFI (Extensible Firmware Interface): does basic hardware initialization and selects which operating system to use.
    BOOTX (the boot.efi boot loader): loads the kernel environment.
    Rooting/Kernel: the boot loader starts the kernel's initialization procedure; various Mach/BSD data structures are initialized by the kernel; the I/O Kit is initialized; the kernel starts /sbin/mach_init.
    Run Level: mach_init starts /sbin/init; init determines the runlevel and runs /etc/rc.boot, which sets up the machine enough to run single-user; rc.boot figures out the type of boot (Multi-User, Safe, CD-ROM, Network, etc.).
  10. List services enabled at a particular run level
    chkconfig --list | grep 5:on
    Enable or disable a service at a specific run level: chkconfig --level 5 service_name on|off
  11. How do you stop a bash fork bomb?
    Limit the damage in advance by capping the number of processes per user in /etc/security/limits.conf:
    user1 hard nproc 512
    A classic bash fork bomb looks like this (do not run it on a machine you care about):
    :(){ :|:& };:
    Assuming you still have access to a shell, first freeze the offending user's processes (killing alone races against new forks), then kill them:
    killall -STOP -u user1
    killall -KILL -u user1
  12. What is a fork?
    fork is an operation whereby a process creates a copy of itself. It is usually a system call, implemented in the kernel. Fork is the primary (and historically, only) method of process creation on Unix-like operating systems.
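    A minimal, Unix-only Python sketch of fork semantics (fork returns 0 in the child and the child's PID in the parent):
    import os
    pid = os.fork()
    if pid == 0:
        print("child, pid:", os.getpid())
    else:
        print("parent, child pid:", pid)
        os.waitpid(pid, 0)  # reap the child to avoid a zombie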
  13. What is the D state?
    The D state code means the process is in uninterruptible sleep, which can mean different things but usually indicates the process is waiting on I/O.

III- File System

  1. What is umask?
    umask is “User File Creation Mask”, which determines the settings of a mask that controls which file permissions are set for files and directories when they are created.
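    A minimal Python sketch (the 0o022 mask is just a common example value):
    import os
    old = os.umask(0o022)  # new files: 666 & ~022 = 644; new dirs: 777 & ~022 = 755
    print(oct(old))        # os.umask returns the previous mask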
  2. What is the role of the swap space?
    A swap space is a certain amount of space used by Linux to temporarily hold some programs that are running concurrently. This happens when RAM does not have enough memory to hold all programs that are executing.
  • What is the null device in Linux?
    The null device is typically used for disposing of unwanted output streams of a process, or as a convenient empty file for input streams. This is usually done by redirection. The /dev/null device is a special file, not a directory, so one cannot move a whole file or directory into it with the Unix mv command. You might receive the "Bad file descriptor" error message if /dev/null has been deleted or overwritten. You can infer this cause when the file system is reported as read-only at boot time through error messages such as "/dev/null: Read-only filesystem" and "dup2: bad file descriptor".
    In Unix and related computer operating systems, a file descriptor (FD, less frequently fildes) is an abstract indicator (handle) used to access a file or other input/output resource, such as a pipe or network socket.
  • What is an inode?
    The inode is a data structure in a Unix-style file system that describes a filesystem object such as a file or a directory. Each inode stores the attributes and disk block location(s) of the object’s data.
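    A minimal Python sketch showing a file's inode attributes via os.stat (/etc/hosts is just an example path):
    import os
    st = os.stat("/etc/hosts")
    print(st.st_ino, st.st_nlink, st.st_size)  # inode number, hard-link count, size in bytes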

IV- Databases

  1. What is the difference between a document store and a relational database?
    In a relational database system you must define a schema before adding records. The schema is a structure described in a formal language supported by the database; it provides a blueprint for the tables and the relationships between them. Within a table, you define constraints in terms of rows and named columns, as well as the type of data that can be stored in each column.
    In contrast, a document-oriented database contains documents: records that describe their own structure as well as the actual data. Documents can be as complex as you choose; you can use nested data to provide additional sub-categories of information about your object, and you can use one or more documents to represent a real-world object.
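    A minimal Python sketch of the contrast, with the standard-library sqlite3 standing in for the relational side and a JSON document for the document side (the table and field names are invented for illustration):
    import json, sqlite3
    # Relational: define the schema first, then insert rows that must match it
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
    conn.execute("INSERT INTO users (name, age) VALUES (?, ?)", ("Zara", 7))
    # Document: the record carries its own structure and can nest freely, no up-front schema
    doc = {"name": "Zara", "age": 7, "classes": [{"subject": "Math", "room": 12}]}
    print(json.dumps(doc))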
  2. How to optimise a slow DB?
    • Rewrite the queries
    • Change indexing strategy
    • Change schema
    • Use an external cache
    • Server tuning and beyond
  3. How would you build a 1 Petabyte storage with commodity hardware?
    Use JBODs with large-capacity disks running Linux in a distributed storage system, stacking nodes until 1 PB is reached. For example, one hundred 10 TB disks give 1 PB of raw capacity, before replication overhead.
    JBOD (which stands for "just a bunch of disks") generally refers to a collection of hard disks that have not been configured to act as a redundant array of independent disks (RAID).

V- Scripting

  1. What is @INC in Perl?
    The @INC Array. @INC is a special Perl variable that is the equivalent to the shell’s PATH variable. Whereas PATH contains a list of directories to search for executables, @INC contains a list of directories from which Perl modules and libraries can be loaded.
  2. How do you compare strings, use operators, and write a for loop and an if statement? (See the sketch below.)
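    Since the question names no particular language, here is a minimal Python illustration:
    a, b = "apple", "banana"
    if a == b:
        print("equal")
    elif a < b:              # strings compare lexicographically
        print(a, "sorts before", b)
    for fruit in [a, b]:     # for loop over a list
        print(fruit.upper())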
  3. Sort access log file by http Response Codes
    Via Shell using linux commands
    cat sample_log.log | cut -d '"' -f3 | cut -d ' ' -f2 | sort | uniq -c | sort -rn
  4. Sort access log file by http Response Codes Using awk
    awk '{print $9}' sample_log.log | sort | uniq -c | sort -rn
  5. Find broken links from access log file
    awk '($9 ~ /404/) {print $7}' sample_log.log | sort | uniq -c | sort -rn
  6. Most requested page:
    awk -F\" '{print $2}' sample_log.log | awk '{print $2}' | sort | uniq -c | sort -rn
  7. Count all occurrences of a word in a file
    grep -o "user" sample_log.log | wc -w
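    The same log analysis can be done in Python; a minimal sketch counting response codes with collections.Counter (assumes the combined log format, with the status code in field 9):
    from collections import Counter
    codes = Counter()
    with open("sample_log.log") as f:
        for line in f:
            fields = line.split()
            if len(fields) > 8:
                codes[fields[8]] += 1
    for code, count in codes.most_common():
        print(count, code)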

Learn more at http://career.guru99.com/top-50-linux-interview-questions/


Script with hash tables on Windows and Linux

How to declare and write a script with hash tables on Windows and Linux

  • Hash tables with powershell on windows

    Declaration:
    $states = @{"Alberta" = "Calgary"; "British Columbia" = "Vancouver"; "Ontario" = "Toronto"; "Quebec" = "Montreal"}

    Name                          Value
    ----                          -----
    Alberta                       Calgary
    British Columbia              Vancouver
    Ontario                       Toronto
    Quebec                        Montreal

    Add a new key-value pair to the hashtable:
    $states.Add("Manitoba","Winnipeg")

    Remove a key-value pair from the hashtable (Remove takes only the key):
    $states.Remove("Manitoba")
    Change a value in the hashtable:
    $states.Set_Item("Ontario","Ottawa")
    Retrieve a value from the hashtable:
    $states.Get_Item("Alberta")
    Find a key in the hashtable:
    $states.ContainsKey("Alberta")
    Find a value in the hashtable:
    $states.ContainsValue("Calgary")
    Count the items in the hashtable:
    $states.Count
    Sort items by name:
    $states.GetEnumerator() | Sort-Object Name -Descending
    Sort items by value:
    $states.GetEnumerator() | Sort-Object Value -Descending

  • Hash tables with perl on linux or windows

     
    Declaration:
    my %hash = ();     # initialize a hash
    my $href = {};     # initialize a hash reference; ref($href) returns HASH
    Clear (or empty) a hash:
    for (keys %hash) {
        delete $hash{$_};
    }
    Clear (or empty) a hash reference:
    for (keys %$href) {
        delete $href->{$_};
    }
    Add a key/value pair to a hash:
    $hash{'key'} = 'value';    # hash
    $hash{$key} = $value;      # hash, using variables
    Using a hash reference:
    $href->{'key'} = 'value';  # hash ref
    $href->{$key} = $value;    # hash ref, using variables
    Add several key/value pairs to a hash:
    %hash = ('key1', 'value1', 'key2', 'value2', 'key3', 'value3');
    %hash = (
        key1 => 'value1',
        key2 => 'value2',
        key3 => 'value3',
    );

    Copy a hash:
    my %hash_copy = %hash;    # copy a hash
    my $href_copy = $href;    # copy a hash ref (both now point to the same hash)
    Delete a single key/value pair:
    delete $hash{$key};
    delete $href->{$key};

  • Hash tables with python on linux or windows

    Hash tables are called dictionaries in Python.
    Declaration:
    person = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}   # avoid the name dict, which shadows the built-in type
    Accessing values:
    print("person['Name']:", person['Name'])
    print("person['Age']:", person['Age'])
    Output:
    person['Name']: Zara
    person['Age']: 7
    Updating a dictionary:
    person['Age'] = 8                  # update existing entry
    person['School'] = 'DPS School'    # add new entry
    Deleting dictionary elements:
    del person['Name']   # remove entry with key 'Name'
    person.clear()       # remove all entries
    del person           # delete the entire dictionary

Source:

  1. https://technet.microsoft.com/en-us/library/ee692803.aspx
  2. http://www.cs.mcgill.ca/~abatko/computers/programming/perl/howto/hash/
  3. http://www.tutorialspoint.com/python/python_dictionary.htm

Things to do to build more diverse technical teams?

What are specific things to do to build more diverse technical teams?

  • 1. Start at the junior high school level in poor neighborhoods by helping schools teach minority kids how to code.
  • 2. Help set up computer labs in junior high schools in poor neighborhoods so that kids who cannot afford the equipment can use the lab.
  • 3. Start coding challenges with prizes in those neighborhoods' high schools and make the kids feel confident about their abilities.
  • 4. Get successful minority entrepreneurs to host the kids and give them tours of their companies, offices, and projects to inspire them.
  • 5. Give internships to minority kids starting in high school.
  • 6. For the kids in senior high school who missed early exposure to coding, speed them up with after-hours coding classes to prepare them for college.
  • 7. In college, make minority kids feel welcome. Try to make them part of all activities (computer related or not).
  • 8. Help find internships at top IT firms (Google, Apple, IBM, etc.) for minorities.
  • 9. When minorities start working after college, welcome them to the team and trust them. Don't invite only certain colleagues to dinner and golf; it makes minorities feel unappreciated and unwanted.
  • 10. Give minorities the same opportunities for advancement at work as other employees.
  • 11. Inclusion, inclusion, inclusion in every aspect of the company is the key.
  • 12. Assess the progress of your efforts and adjust. Make sure that team leads and managers are open-minded people and make it a priority for them to build diverse teams, because in the long run it will be a win-win for the company.

Read more here on Quora.

How to stay healthy as a software engineer?

Tricks and Tips: How to stay healthy as a software engineer or IT professional?

I am a software engineer like you, and by my second year I started feeling the effects of the unhealthy habit of sitting and coding for long hours.


Below are the steps that I took:

  1. Avoid sitting down for more than 1 hour without getting up for a walk.
  2. Stand up for 15 minutes every hour to code.
  3. Take multiple short walks outdoors during working hours.
  4. Avoid elevators unless you have no choice; use the stairs to go up and down if your office is below the fifth floor.
  5. Avoid drinking sweet drinks or too much coffee during work hours.
  6. Avoid eating chips or almost anything while working.
  7. Instead of spending long hours reading manuals and documents on your computer, print them out, then take a walk and read them somewhere quiet while standing.
  8. Stretch often while working (extend your legs, arms, rotate your neck).
  9. Take short breaks of 2 to 5 minutes every 2 hours to read something different from your main topic. It can be news, sports, entertainment, or anything else you like. I read or write on Quora during my breaks.
  10. Change your position frequently and don’t hesitate to stand up at your desk from time to time while working.
  11. Make sure that your chair is always comfortable. Don’t hesitate to upgrade or get a better chair if necessary.

Here are the steps that I took to stay active and healthy:

  1. I am committed, no matter what, to playing at least 2 competitive games of soccer or basketball a week, either in an amateur team league or at drop-in sports leagues. Check out one of my drop-in league chapters in your city at ShowUpAndPlaySports chapters – Djamga – ShowUpAndPlaySports
  2. I volunteer to organize soccer and basketball games every week via Home – Djamga – ShowUpAndPlaySports
  3. I walk regularly at lunch time, and try to get as much sunshine as possible.
  4. I visit a chiropractor once a month to adjust my back and neck.
  5. I visit a certified massage therapist regularly to work on my neck, back, hamstrings, and feet.
  6. I visit a pedicure clinic once a month for a good pedicure and foot massage.
  7. I eat a healthy diet of mostly vegetables and fish (mostly salmon).
  8. I drastically reduced the carbs in my diet. Every morning, I take one cup of coffee or tea with no sugar or milk and a small cake. Then, I am covered until dinner time. In the evening, I have a large meal of vegetables and fish, usually salmon.
  9. I drink plenty of water.
  10. I don’t drink alcohol or smoke.
  11. It is very important to sleep well; sleep at least 6 hours per night.
    You spend about 25% of your life in bed, so invest in your mattress, pillows, and bed furniture, and upgrade them regularly.

After adopting these habits, my efficiency came back. I was able to work as hard as when I was a student. I even lost weight!

Now, I can go toe-to-toe with young players and students in their twenties on the soccer field. I easily work more than 60 hours per week and still have enough time to play with my kids and enjoy a fulfilling life with my family.

I highly recommend these life-changing habits to all IT professionals and engineers so they can remain healthy and effective as they get older and busier.