Your home’s safety can hinge on the subtlest of electronics. It’s an electric world out there, and with great home convenience comes the somewhat daunting task of managing power efficiently and safely. But here’s an electric truth you might not have plugged into yet—there’s a world of difference between a power strip and its more resilient sibling, the surge protector. Read on to learn more.
Power Strips: Not All Outlets Are Equal
A power strip is the standard go-to when you need a few extra outlets in a pinch. Picture the power strip as an adaptable traffic cop for your electrical items—it directs power traffic and keeps all your devices organized.
The run-of-the-mill strip comes with multiple outlets, possibly with a fuse to avoid overloading, but when it comes to sudden voltage spikes, they’re just as shocked as your electronics. While power strips are handy for plugging in your everyday devices like lamps, laptops, and toasters, they’re not fit for the challenges of more sensitive electronics.
Surge Protectors: Grounded Defense for Electronics
Surge protectors take the defensive line against unexpected jolts of power. Beyond the quintessential role of offering additional outlets, these bruisers can detect and suppress voltage spikes, ensuring that your electronics remain unscathed.
What might seem like a flicker in your lights can wreak havoc on a computer, TV, or home entertainment system, but a good surge protector stands in the way like a protective barrier. This level of protection is one of the top reasons to use surge protectors at home as opposed to typical power strips.
Making the Switch
Choosing between power strips and surge protectors isn’t just about preference—it’s a judgment call for safeguarding your home’s tech and electronics. A decent surge protector might be pricier than a simple power strip, but the investment pales next to the potential cost of replacing your fried laptop or smart home hub. When in doubt, opt for the surge—it’s the wise route to powering up your gadgets safely and securely.
Electrical safety isn’t a lightning-quick decision; it’s a current of constants, small choices that spark a larger assurance in your home. By recognizing the stark differences between power strips and surge protectors, you’re not just playing it safe but conducting a symphony of safeguarded living.
What are The Benefits and Drawbacks of Working Remotely in Africa?
Has Africa fully embraced hybrid teams, digital workspaces, and the use of remote workers?
The COVID-19 pandemic has forced many businesses to reevaluate the way they operate. For some, this has meant a shift to hybrid teams, with employees working remotely part of the time. For others, it’s meant a move to digital workspaces and an embrace of remote workers. But what does this mean for Africa? Has the continent fully embraced these changes? Let’s take a look.
The Pros of Working Remotely in Africa
There are a number of advantages to working remotely in Africa. First, it allows businesses to tap into a larger pool of talent. With more people working remotely, businesses can hire the best employees, regardless of location. Second, it can help reduce costs. With no need for office space or equipment, businesses can save money by having employees work remotely. Finally, it can promote a better work-life balance. With no need to commute, employees can have more time for family and hobbies.
The Cons of Working Remotely in Africa
However, there are also some drawbacks to working remotely in Africa. First, there is the issue of internet connectivity. While high-speed internet is available in many African countries, coverage is uneven and some areas still have no reliable access. This can make it difficult for remote workers to stay connected and productive. Second, there is the issue of time zones. With workers in different time zones, it can be difficult to schedule meetings and conference calls. Finally, there is the issue of culture and isolation: working remotely can be isolating, and it can be difficult to build relationships with coworkers when you're not in the same place.
The Benefits of Hybrid Teams
A hybrid team is a mix of full-time employees and freelancers or contractors who work together to achieve a common goal. This model offers a number of benefits for businesses, including increased flexibility, reduced costs, and improved access to skills and talent.
One of the biggest advantages of hybrid teams is that they offer businesses increased flexibility. With a hybrid team, businesses can scale up or down as needed, which is ideal in today’s ever-changing business landscape. Additionally, hybrid teams allow businesses to tap into a wider pool of skills and talent. And because freelancers and contractors are typically paid by the project, businesses can save money by only paying for the work that is completed.
The Digital Workspace
The digital workspace is a new way of working that enables employees to be productive from anywhere at any time. It includes cloud-based applications and services that allow employees to access their files and applications from any device with an internet connection.
The digital workspace offers a number of benefits for businesses, including increased productivity, reduced costs, and improved collaboration. Perhaps most importantly, it gives employees the freedom to work from anywhere at any time. This is especially beneficial for employees in Africa who may not have reliable access to electricity or internet connectivity.
Remote Workers in Africa
The COVID-19 pandemic has forced many businesses around the world to embrace remote work. In Africa, we are seeing a similar trend, with more and more businesses allowing employees to work from home or other remote locations. There are many reasons for this, but chief among them are increased productivity and reduced costs.
When done correctly, remote work can lead to increased productivity as employees are free to design their own schedules and work in environments that suit their needs. Additionally, remote work can help reduce costs by eliminating the need for office space and associated overhead costs.
The benefits of hybrid, diverse teams are well documented. A study reported by Harvard Business Review found that companies with diverse teams are 35% more likely to outperform their peers, and McKinsey & Company found that businesses with gender-diverse leadership teams are 21% more likely to generate above-average profits. In Africa, where teams can draw on exceptional cultural and linguistic diversity, these benefits are especially pronounced.
The African continent is home to a wide variety of cultures and languages. This diversity is an asset that can be leveraged by businesses to gain a competitive edge. By tapping into the talents of people from all corners of the continent, businesses can create products and services that appeal to a global market.
In addition, the use of remote workers allows businesses to tap into a wider pool of talent. By eliminating the need for employees to be physically present in an office, businesses can hire the best person for the job regardless of location. This has led to increased productivity and efficiency in the workplace.
Overall, working remotely in Africa has its pros and cons. However, with the right infrastructure and support in place, remote work can be a great option for businesses and employees alike.
The COVID-19 pandemic has changed the way we live and work. In Africa, we are seeing a trend towards hybrid teams, the digital workspace, and remote workers. This new way of working offers a number of benefits for businesses, including increased flexibility, reduced costs, and improved access to skills and talent. As we continue to adapt to the new normal brought on by the pandemic, it is clear that these trends are here to stay.
How well do you know Africa? Test your knowledge with this Africa history and geography quiz. Africa is the world’s second largest continent, and it is home to a stunning diversity of cultures, languages, and landscapes. From the Sahara Desert to the rainforests of the Congo Basin, Africa boasts a huge variety of geography. And its history is just as rich, from ancient civilizations like Egypt and Ethiopia to European colonization and the struggle for independence. So whether you’re an Africa expert or just getting started, this quiz will help you test your knowledge of this amazing continent.
Africa is a vast and fascinating continent with a rich history and diverse culture. To test your knowledge of Africa, take this Africa History and Geography Quiz. See how much you know about the people, places, and events that have shaped Africa over the centuries.
This book contains hundreds of quizzes with illustrations and answers about African History, Geography, Wildlife, Economics, Culture, Cuisine, Languages, Music and People, and a lot more…
What are some of the words in your countries and respective languages which have now been accepted and often get used in spoken speech? I'm from Zambia, and the word 'pin', meaning thousand, now even gets used in boardrooms, e.g. 'that house is going at 800 pin'. It's quite interesting because over here we have one official language, which is British English, but when someone says 'pin' to mean thousand in a Zambian context, we locals don't realise it, but it would totally throw off someone foreign. submitted by /u/ck3thou
What are the Top 5 things that can say a lot about a software engineer or programmer’s quality?
When it comes to the quality of a software engineer or programmer, there are a few key things that can give you a good indication. First, take a look at their code quality. A good software engineer will take pride in their work and produce clean, well-organized code. They will also be able to explain their code concisely and confidently. Another thing to look for is whether they are up-to-date on the latest coding technologies and trends. A good programmer will always be learning and keeping up with the latest industry developments. Finally, pay attention to how they handle difficult problems. A good software engineer will be able to think creatively and come up with innovative solutions to complex issues. If you see these qualities in a software engineer or programmer, chances are they are of high quality.
Below are the top 5 things that can say a lot about a software engineer or programmer's quality:
The number of possible paths through the code (branch points) is minimized. Top-quality code tends to be much more straight-line than poor code. As a result, the author can design, code and test very quickly and is often looked at as a programming guru. In addition, this code is far more resilient in Production. A minimal sketch of the idea follows.
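To make the branch-point idea concrete, here is a minimal Python sketch; the shipping-fee scenario and its rates are invented for illustration. Collapsing per-region special cases into a lookup table removes branches and leaves one straight-line path to design and test.

```python
# Branch-heavy version: four paths to design, test, and maintain.
def shipping_cost_branchy(region, weight_kg):
    if region == "US":
        if weight_kg > 20:
            return 25.0
        return 10.0
    elif region == "EU":
        if weight_kg > 20:
            return 30.0
        return 12.0
    raise ValueError(f"unknown region: {region}")

# Straight-line version: one lookup and one comparison, a single path.
RATES = {"US": (10.0, 25.0), "EU": (12.0, 30.0)}  # (base, heavy) fees

def shipping_cost(region, weight_kg):
    base, heavy = RATES[region]  # KeyError flags an unknown region
    return heavy if weight_kg > 20 else base
```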
The code clearly matches the underlying business requirements and can therefore be understood very quickly by new resources. As a result there is much less tendency for a maintenance programmer to break the basic design as opposed to spaghetti code where small changes can have catastrophic effects.
There is an overall sense of pride in the source code itself. If the enterprise has clear written standards, these are followed to the letter. If not, the code is internally consistent in terms of procedure/object, function/method or variable/attribute naming. Indentation and continuations are likewise consistent throughout. Last but not least, the majority of code blocks are self-evident from the requirements, and where that is not the case, adequate purpose-focused documentation is provided.
In general, I have seen two types of programs provided for initial Production deployment. One looks like it was just written moments ago and the other looks like it has had 20 years of maintenance performed on it. Unfortunately, the authors of the second type cannot generally see the difference so it is a lost cause and we just have to continue to deal with the problems.
In today's programming environment, a project may span many platforms, languages, etc. A simple web page may invoke an API which in turn accesses a database; for this example let's say JavaScript – REST API – C# – SQL – RDBMS. The programmer can basically embed logic anywhere in this chain, but needs to be aware of reuse, performance and maintenance issues. For instance, if a part of the process requires access to three database tables, it is both faster and clearer to let the DBMS engine return a single query result than to compare the tables in the API code (see the sketch below). Similarly, every business rule coded on the client side reduces reusability potential. Top-quality developers understand these issues and can optimize their designs to take advantage of the strengths of the component technologies.
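A minimal sketch of that trade-off, using Python's built-in sqlite3 module; the three tables and the "unpaid orders" query are invented for illustration. One JOIN pushed down to the database engine replaces three separate fetches plus a hand-written comparison loop in the API layer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders    (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    CREATE TABLE payments  (id INTEGER PRIMARY KEY, order_id INTEGER, paid REAL);
""")

# One query: the database engine joins the three tables and returns only
# unpaid orders. Doing this in API code would mean three fetches and a
# hand-written nested loop -- slower and harder to read.
UNPAID = """
    SELECT c.name, o.id, o.total - COALESCE(SUM(p.paid), 0) AS balance
    FROM customers c
    JOIN orders o        ON o.customer_id = c.id
    LEFT JOIN payments p ON p.order_id = o.id
    GROUP BY o.id
    HAVING balance > 0
"""
for name, order_id, balance in conn.execute(UNPAID):
    print(name, order_id, balance)
```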
The ability to stay current with new trends and technologies. Technology is constantly evolving, and a good software engineer or programmer should be able to stay up-to-date on the latest trends and technologies in order to be able to create the best possible products.
To conclude, below are other things to consider when hiring good software engineers or programmers:
The ability to write clean, well-organized code. This is a key indicator of a good software engineer or programmer. The ability to write code that is easy to read and understand is essential for creating high-quality software.
The ability to test and debug code. A good coder should be able to test their code thoroughly and identify and fix any errors that may exist.
The ability to write efficient code. Software engineering is all about creating efficient solutions to problems. A good software engineer or programmer will be able to write code that is efficient and effective.
The ability to work well with others. Software engineering is typically a team-based effort. A good software engineer or programmer should be able to work well with others in order to create the best possible product.
The ability to stay current with new trends and technologies.
Why is there a lack of diversity among software engineers in tech companies in the USA and Canada?
One of the most significant problems facing the tech industry is a lack of diversity among software engineers. In the United States and Canada, women and visible minorities are grossly underrepresented in the field. This problem is often attributed to the pipelines that feed into the industry. For example, women are less likely than men to study computer science in college. However, this does not fully explain the disparity. Hiring practices at tech companies are also to blame. Studies have shown that companies are biased against female and minority candidates. They are more likely to hire white men with similar educational backgrounds and work experience. This lack of diversity has negative consequences for both the individuals affected and the companies themselves. Immersive technologies, such as virtual reality, are being designed and developed primarily by white men. This creates a product that does not reflect the needs or perspectives of a diverse population. In addition, companies with diverse teams have been shown to perform better than those without diversity. They are more innovative and adaptable to change. The lack of diversity among software engineers is a problem that needs to be addressed by both educators and tech companies if the industry is to thrive in the future.
A prominent Engineer from Silicon Valley said this about the topic: Women and minorities are systematically discouraged from taking STEM subjects starting around the third grade. Fewer of them excel in math and science in high school. Fewer go to college. Women and minorities don’t see women and minorities in tech roles in the media. Some hiring managers are racist, misogynist scumbags. Some co-workers are dismissive of them. They have trouble finding mentors. The women, at least, are encouraged continuously to drop out and make fat babies.
This all forms a pretty effective filter. By Kurt Guntheroth.
The software industry has mostly adopted and promoted a conformist culture, thanks to leaders who have mostly been power-mongers and Machiavellian in approach, with a few exceptions here and there in a few organizations. A conforming culture will inherently repel diversity, since one is more assured of conforming staff in a known culture than in a more diverse one. Most of these so-called pseudo-leaders don't really seek diverse ideas, which inherently stem from diverse cultures. And so such organizations never end up becoming diverse.
There aren’t enough women in the industry — in individual contributor roles and in leadership roles. There also aren’t enough African-Americans and Latinos.
Why does this matter? Many reasons. One big one is that exclusion leads to more exclusion. When one gender/ethnic group is significantly underrepresented in a workforce, strong biases bake in: the people in the affected group feel they don't belong at those companies, and the people at the companies have an insular bias to pick people from their networks and people who share their backgrounds.
Silicon Valley is the hottest growth sector in the US and will continue to create the best career and wealth opportunities over the next 20–30 years. It’s really not good that the industry isn’t absorbing more women, African-Americans, and Latinos.
To conclude:
Technology is a rapidly growing industry with a huge demand for qualified software engineers. However, there is a lack of diversity among software engineers in tech companies in the USA and Canada. Women and minorities are greatly underrepresented in the field of software engineering. This is due to several factors, including the misogynistic and racist culture of many tech companies. The lack of diversity among software engineers has a negative impact on the quality of products and services offered by tech companies. It also limits the ability of these companies to innovate and serve a wide range of customers. In order to increase diversity among software engineers, tech companies need to change their hiring practices and create an inclusive environment that values all types of people.
What is the single most influential book every programmer should read?
There are a lot of books that can be influential to programmers. But, what is the one book that every programmer should read? This is a question that has been asked by many, and it is still up for debate. However, there are some great contenders for this title. In this blog post, we will discuss three possible books that could be called the most influential book for programmers. So, what are you waiting for? Keep reading to find out more!
What are the concepts every Java, C#, C++, Python, or Rust programmer must know?
OK… I think this is one of the most important questions to answer. Based on my personal experience as a programmer, I would say you must learn the following 5 universal core concepts of programming to become a successful Java programmer.
(1) Mastering the fundamentals of the Java programming language – This is the most important skill that you must learn to become a successful Java programmer. You must master the fundamentals of the language, especially areas like OOP, Collections, Generics, Concurrency, I/O, Strings, Exception handling, Inner Classes and the JVM architecture.
(2) Data Structures and Algorithms – Programming languages are basically just a tool to solve problems. Problems generally have data to process in order to make decisions, and we have to build a procedure to solve each specific problem domain. In real life, the complexity of the problem domain and the amount of data we have to handle can be very large. That's why it is essential to know basic data structures like Arrays, Linked Lists, Stacks, Queues, Trees, Heaps, Dictionaries, Hash Tables and Graphs, and also basic algorithms like Searching, Sorting, Hashing, Graph algorithms, Greedy algorithms and Dynamic Programming.
(3) Design Patterns – Design patterns are general, reusable solutions to commonly occurring problems within a given context in software design, and they are absolutely crucial for a hard-core Java programmer. If you don't use design patterns you will write much more code, it will be buggy and hard to understand and refactor, not to mention untestable, and patterns are a really great way to communicate your intent very quickly to other programmers.
(4) Programming Best Practices – Programming is not only about learning and writing code. Code readability is a universal subject in the world of computer programming. It helps standardize products and helps reduce future maintenance costs. Best practices help you, as a programmer, to think differently and improve your problem-solving attitude. A simple program can be written in many ways if given to multiple developers. Thus the need for best practices comes into the picture, and every programmer must be aware of them.
(5) Testing and Debugging (T&D) – Beyond writing the code for a specific problem domain, you have to learn how to test that code and debug it when needed. Some programmers skip unit testing or other testing methodologies and leave it to the QA team; that delivers the 80% of bugs hiding in your code straight to QA, reduces productivity, and pushes your project toward failure. When a misbehavior or bug occurs in your code during the testing phase, it is essential to know debugging techniques to identify the bug and its root cause. (A minimal test sketch follows this list.)
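As referenced in item (5), here is a minimal unit-testing sketch. The section targets Java, but the idea is language-agnostic; this version uses Python's standard unittest module, and the apply_discount function is invented for illustration.

```python
import unittest

def apply_discount(price, percent):
    """Return the price after a percentage discount, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(80.0, 25), 60.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_is_rejected(self):
        # The error path is part of the contract, so it gets a test too.
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

if __name__ == "__main__":
    unittest.main()
```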
I hope these instructions will help you become a successful Java programmer. Here I have explained only the universal core concepts that you must learn as a programmer. I am not mentioning technologies that a Java programmer must know, such as Spring, Hibernate, Microservices and build tools, because those can change according to the problem domain or the environment you are currently working in… Happy coding!
Summary: There’s no doubt that books have had a profound influence on society and the advancement of human knowledge. But which book is the most influential for programmers? Some might say it’s The Art of Computer Programming, or The Pragmatic Programmer. But I would argue that the most influential book for programmers is CODE: The Hidden Language of Computer Hardware and Software. In CODE, author Charles Petzold takes you on a journey from the basics of computer hardware to the intricate workings of software. Along the way, you learn how to write code in Assembly language, and gain an understanding of how computers work at a fundamental level. If you’re serious about becoming a programmer, then CODE should be at the top of your reading list!
Follow Neal K Davis on LinkedIn and read his updates about DVA-C01. #AWS Services
What is the AWS Certified Developer Associate Exam?
The AWS Certified Developer – Associate examination is intended for individuals who perform a development role and have one or more years of hands-on experience developing and maintaining an AWS-based application. It validates an examinee’s ability to:
Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
Demonstrate proficiency in developing, deploying, and debugging cloud-based applications using AWS
There are two types of questions on the examination:
Multiple-choice: Has one correct response and three incorrect responses (distractors).
Multiple-response: Has two correct responses out of five options.
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that an examinee with incomplete knowledge or skill would likely choose. However, they are generally plausible responses that fit in the content area defined by the test objective. Unanswered questions are scored as incorrect; there is no penalty for guessing.
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
Understand bastion hosts, and which subnet one might live on. Bastion hosts are instances that sit within your public subnet and are typically accessed using SSH or RDP. Once remote connectivity has been established with the bastion host, it then acts as a 'jump' server, allowing you to use SSH or RDP to log in to other instances (within private subnets) deeper within your network. When properly configured through the use of security groups and Network ACLs, the bastion essentially acts as a bridge to your private instances via the Internet. Bastion Hosts
3. Know the difference between Directory Service's AD Connector and Simple AD. Use Simple AD if you need an inexpensive Active Directory–compatible service with the common directory features. AD Connector lets you simply connect your existing on-premises Active Directory to AWS. AD Connector and Simple AD
4. Know how to enable cross-account access with IAM: To delegate permission to access a resource, you create an IAM role that has two policies attached. The permissions policy grants the user of the role the needed permissions to carry out the desired tasks on the resource. The trust policy specifies which trusted accounts are allowed to grant its users permissions to assume the role. The trust policy on the role in the trusting account is one half of the permissions. The other half is a permissions policy attached to the user in the trusted account that allows that user to switch to, or assume, the role. Enable cross-account access with IAM
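As a sketch of the two halves described above, assuming the boto3 SDK and placeholder names (the CrossAccountS3Reader role and the 111122223333 account ID are invented): one call creates the role with its trust policy, another attaches the permissions policy.

```python
import json
import boto3

iam = boto3.client("iam")
TRUSTED_ACCOUNT = "111122223333"  # placeholder account ID

# Trust policy: which account may assume the role (one half of the setup).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{TRUSTED_ACCOUNT}:root"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="CrossAccountS3Reader",
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Permissions policy: what the role may do once assumed (the other half).
permissions = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": "*",
    }],
}
iam.put_role_policy(RoleName="CrossAccountS3Reader",
                    PolicyName="S3ReadOnly",
                    PolicyDocument=json.dumps(permissions))
```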
Know which services allow you to retain full admin privileges of the underlying EC2 instances. EC2 Full admin privilege
8. Know when Elastic IPs are free or not: If you associate additional EIPs with an instance, you will be charged for each additional EIP associated with that instance per hour on a pro rata basis. Additional EIPs are only available in Amazon VPC. To ensure efficient use of Elastic IP addresses, AWS imposes a small hourly charge when these IP addresses are not associated with a running instance, or when they are associated with a stopped instance or unattached network interface. When are AWS Elastic IPs Free or not?
9. Know the four high-level categories of information Trusted Advisor supplies. #AWS Trusted advisor
10. Know how to troubleshoot a connection time-out error when trying to connect to an instance in your VPC. You need a security group rule that allows inbound traffic from your public IP address on the proper port, you need a route that sends all traffic destined outside the VPC (0.0.0.0/0) to the Internet gateway for the VPC, the network ACLs must allow inbound and outbound traffic from your public IP address on the proper port, etc. #AWS Connection time out error
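For instance, the missing security-group rule can be added with boto3; the group ID, port, and CIDR below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound SSH from one public IP -- placeholder IDs throughout.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,          # SSH
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.10/32",
                      "Description": "my public IP"}],
    }],
)
```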
11. Be able to identify multiple possible use cases and eliminate non-use cases for SWF. #AWS
12. Understand how you might set up consolidated billing and cross-account access such that individual divisions' resources are isolated from each other, but corporate IT can oversee all of it. #AWS Set up consolidated billing
13. Know how you would go about making changes to an Auto Scaling group, fully understanding what you can and can't change. You can only specify one launch configuration for an Auto Scaling group at a time, and you can't modify a launch configuration after you've created it. Therefore, if you want to change the launch configuration for your Auto Scaling group, you must create a new launch configuration and then update your Auto Scaling group with it. When you change the launch configuration for your Auto Scaling group, any new instances are launched using the new configuration parameters, but existing instances are not affected. #AWS Make Change to Auto Scaling group
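A minimal boto3 sketch of that create-then-update flow; the configuration and group names and the AMI ID are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Launch configurations are immutable, so create a new one...
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v2",          # placeholder names
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.small",
)

# ...then point the Auto Scaling group at it. Existing instances keep
# running on the old configuration; only new launches use v2.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-v2",
)
```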
15. Know which field you use to run a script upon launching your instance. #AWS User data script
16. Know how DynamoDB (durable, and you can pay for strong consistency), ElastiCache (great for speed, not so durable), and S3 (eventual consistency results in lower latency) compare to each other in terms of durability and low latency. #AWS DynamoDB consistency
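For DynamoDB specifically, strong consistency is a per-request opt-in. A small boto3 sketch, with a hypothetical Sessions table and key:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Default reads are eventually consistent (cheaper, lower latency).
# Passing ConsistentRead=True pays extra for a strongly consistent read.
item = dynamodb.get_item(
    TableName="Sessions",                      # placeholder table/key
    Key={"session_id": {"S": "abc-123"}},
    ConsistentRead=True,
)
print(item.get("Item"))
```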
Know the difference between bucket policies, IAM policies, and ACLs for use with S3, and examples of when you would use each. With IAM policies, companies can grant IAM users fine-grained control over their Amazon S3 bucket or objects while also retaining full control over everything the users do. With bucket policies, companies can define rules which apply broadly across all requests to their Amazon S3 resources, such as granting write privileges to a subset of Amazon S3 resources. Customers can also restrict access based on an aspect of the request, such as HTTP referrer and IP address. With ACLs, customers can grant specific permissions (i.e. READ, WRITE, FULL_CONTROL) to specific users for an individual bucket or object. #AWS Difference between bucket policies
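As a sketch of the bucket-policy case, assuming boto3 and placeholder names (example-bucket and the partner account ID are invented): one rule applied broadly to every request for a prefix.

```python
import json
import boto3

s3 = boto3.client("s3")

# Bucket policy: grants read access on one prefix to a hypothetical
# partner account, applied to every matching request.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PartnerRead",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/shared/*",
    }],
}
s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))
```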
Understand how you can use ELB cross-zone load balancing to ensure even distribution of traffic to EC2 instances in multiple AZs registered with a load balancer. #AWS ELB cross-zone load balancing
Spot instances are good for cost optimization, even if it seems you might need to fall back to On-Demand instances if you wind up getting kicked off them and the timeline grows tighter. The primary (but still not only) factor seems to be whether you can gracefully handle instances that die on you–which is pretty much how you should always design everything, anyway! #AWS Spot instances
22. The term "use case" is not the same as "function" or "capability". A use case is something that your app/system will need to accomplish, not just behaviour that you will get from that service. In particular, a use case doesn't require that the service be a 100% turnkey solution for that situation, just that the service plays a valuable role in enabling it. #AWS use case
23. There might be extra, unnecessary information in some of the questions (red herrings), so try not to get thrown off by them. Understand what services can and can't do, but don't ignore "obvious"-but-still-correct answers in favour of super-tricky ones. #AWS Exam Answers: Distractors
24. If you don't know what they're trying to ask in a question, just move on and come back to it later (using the helpful "mark this question" feature in the exam tool). You could easily spend way more time than you should on a single confusing question if you don't triage and move on. #AWS Exam: Skip questions that are vague and come back to them later
25. Some exam questions require you to understand features and use cases of: VPC peering, cross-account access, Direct Connect, snapshotting EBS RAID arrays, DynamoDB, spot instances, Glacier, AWS/user security responsibilities, etc. #AWS
26. Know the 30-day constraint in the S3 lifecycle policy before transitioning to the S3-IA and S3 One Zone-IA storage classes. #AWS S3 lifecycle policy
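A boto3 sketch of such a rule, assuming a placeholder bucket and a logs/ prefix; note the transition uses exactly the 30-day minimum.

```python
import boto3

s3 = boto3.client("s3")

# Objects must sit in S3 Standard for at least 30 days before a lifecycle
# rule may transition them to STANDARD_IA or ONEZONE_IA.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",                   # placeholder bucket
    LifecycleConfiguration={"Rules": [{
        "ID": "to-ia-after-30-days",
        "Filter": {"Prefix": "logs/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
    }]},
)
```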
Watch A Cloud Guru video lectures while commuting or on a lunch break – reschedule the exam if you are not yet ready. #AWS ACloud Guru
36. Watch Linux Academy video lectures while commuting or on a lunch break – reschedule the exam if you are not yet ready. #AWS Linux Academy
37. Watch Udemy video lectures while commuting or on a lunch break – reschedule the exam if you are not yet ready. #AWS Udemy
38. The Udemy practice test interface is good in that it pinpoints your weak areas, so what I did was re-watch all the videos for the questions I got wrong. Since I was able to gauge my exam readiness, I decided to reschedule my exam for 2 more weeks to help me focus on completing the practice tests. #AWS Udemy
39. Use AWS cheat sheets – I also found the cheat sheets provided by Tutorials Dojo very helpful. In my opinion, they are better than Jayendrapatil Patil's blog since they contain more updated information that complements your review notes. #AWS Cheat Sheet
40. Watch this 3-hour exam readiness video; it is a very recent webinar that covers what is expected in the exam. #AWS Exam Prep Video
41. Start off watching Ryan's videos. Try to completely focus on the hands-on labs. Take your time to understand what you are trying to learn and achieve in those lab sessions. #AWS Exam Prep Video
42. Do not rush into completing the videos. Take your time and hone the basics. Focus and spend a lot of time on the backbone of AWS infrastructure – Compute/EC2, Storage (S3/EBS/EFS), Networking (Route 53/Load Balancers), RDS, and VPC. These sections are vast, with lots of concepts to go over and loads to learn. Trust me, you will need to thoroughly understand each one of them to pass the certification comfortably. #AWS Exam Prep Video
43. Make sure you go through the resources section and the AWS documentation for each component. Go over the FAQs. If you have a question, post it in the community; trust me, each answer helps you understand more about AWS. #AWS Faqs
44. Like any other product or service, each AWS offering has different flavors. Take EC2 as an example (Spot/Reserved/Dedicated/On-Demand, etc.). Make sure you understand what they are and the pros/cons of each flavor. The same applies to all other offerings. #AWS Services
45. Make sure to attempt all quizzes after each section, but do not treat these quizzes as your practice exams; they are designed mostly to test your knowledge of the section you just finished. The exam itself tests you with scenarios and questions where you need to recall and apply your knowledge of the different AWS technologies/services you learn over multiple lectures. #AWS Services
46. I personally do not recommend attempting a practice or simulator exam until you have done all of the above; it was a little overwhelming for me. I had thoroughly gone over the videos and understood the concepts pretty well, but once I opened the exam simulator I felt the questions were pretty difficult, and I had a feeling the videos did not cover a lot of topics. Later I realized that, given the vastness of AWS services and offerings, it is really difficult to encompass all these services and their details in the course content. The fact that these services change so often does not help. #AWS Services
47. Go back and make a note of all topics that felt unfamiliar to you. Go through the resources section and find links to the AWS documentation. After going over them, you should gain at least 5-10% more knowledge of AWS. Treat the online courses as a way to get a thorough understanding of the basics and a strong foundation for your AWS knowledge, but once you are done with the videos, make sure you spend a lot of time on the AWS documentation and FAQs. There are many topics and sub-topics that may not be covered in the course, and you will need to know at least their basic functionality to do well in the exam. #AWS Services
48. Once you start taking practice exams, they may seem really difficult at the beginning, so please do not panic if you find the questions complicated or difficult. In my opinion they are put in a way to sound complicated, but they are not. Be calm and read each question very carefully; in my observation, many questions contain a lot of information that is not relevant to the solution you are expected to provide. Read the question slowly, and read it again until you understand what is expected. #AWS Services
49. With each practice exam you will come across topics that you need to brush up on or learn from scratch. #AWS Services
50. With each test and the subsequent revision, you will surely feel more confident. There are 130 minutes for the questions – 2 minutes per question, which is plenty of time. Take at least 8-10 practice tests; the ones on Udemy and Tutorials Dojo are really good, and if you are an acloudguru member, the exam simulator is really good too. Manage your time well and keep patience: someone mentioned in one of the discussions not to underestimate the mental focus and strength needed to sit through 130 minutes solving these questions, and it is really true. Do not give away or waste any of those precious 130 minutes. While answering, flag/mark questions you are not completely sure about. Even if you finish early, spend your time reviewing your answers: I could review 40 of my answers at the end of the test and rectified at least 3 of them (which is 4-5% of the total score, I think). So in short: put a lot of focus on making your foundations strong, go through the AWS documentation and FAQs, try to envision how all of the AWS components can fit together to provide an optimal solution, and keep calm. This video gives an outline of the exam – a must-watch before or after Ryan's course. #AWS Services
51. Walking you through how to best prepare for the AWS Certified Solutions Architect Associate SAA-C02 exam in 5 steps:
1. Understand the exam blueprint.
2. Learn about the new topics included in the SAA-C02 version of the exam.
3. Use the many FREE resources available to gain and deepen your knowledge.
4. Enroll in our hands-on video course to learn AWS in depth.
5. Use practice tests to fully prepare yourself for the exam and assess your exam readiness.
AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02: HOW TO BEST PREPARE IN 5 STEPS
52. Storage:
1. Know your different Amazon S3 storage tiers! You need to know the use cases, features and limitations, and relative costs, e.g. retrieval costs.
2. Amazon S3 lifecycle policies are also required knowledge – there are minimum storage times in certain tiers that you need to know.
3. For Glacier, you need to understand what it is, what it's used for, and what the options are for retrieval times and fees.
4. For the Amazon Elastic File System (EFS), make sure you're clear which operating systems you can use with it (just Linux).
5. For the Amazon Elastic Block Store (EBS), make sure you know when to use the different tiers, including instance stores; e.g. what would you use for a datastore that requires the highest IO and whose data is distributed across multiple instances? (A good instance store use case.)
6. Learn about Amazon FSx. You'll need to know about FSx for Windows and Lustre.
7. Know how to improve Amazon S3 performance, including using CloudFront and byte-range fetches – check out this whitepaper.
8. Make sure you understand Amazon S3 object deletion protection options, including versioning and MFA delete.
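Point 7's byte-range fetch looks like this as a boto3 sketch; the bucket and key are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Byte-range fetch: download only the first 1 MiB of a large object.
# Issuing several ranged GETs in parallel is a common way to speed up
# large downloads.
resp = s3.get_object(
    Bucket="example-bucket",          # placeholder bucket/key
    Key="big/dataset.bin",
    Range="bytes=0-1048575",
)
chunk = resp["Body"].read()
print(len(chunk))
```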
53. Compute:
1. You need to have a good understanding of the options for how to scale an Auto Scaling group using metrics such as SQS queue depth or the number of SNS messages.
2. Know your different Auto Scaling policies, including Target Tracking Policies.
3. Read up on High Performance Computing (HPC) with AWS. You'll need to know about Amazon FSx with HPC use cases.
4. Know your placement groups. Make sure you can differentiate between spread, cluster and partition; e.g. what would you use for lowest latency? What about if you need to support an app that's tightly coupled? Within an AZ or across AZs?
5. Make sure you know the difference between Elastic Network Adapters (ENAs), Elastic Network Interfaces (ENIs) and Elastic Fabric Adapters (EFAs).
6. For the Amazon Elastic Container Service (ECS), make sure you understand how to assign IAM policies to ECS for providing S3 access. How can you decouple an ECS data processing process – Kinesis Firehose or SQS?
7. Make sure you're clear on the different EC2 pricing models, including Reserved Instances (RI) and the different RI options such as scheduled RIs.
8. Make sure you know the maximum execution time for AWS Lambda (it's currently 900 seconds, or 15 minutes).
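Point 2's target tracking policy can be created like this with boto3; the group and policy names are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking: Auto Scaling adds or removes instances to keep the
# group's average CPU at roughly 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",            # placeholder group name
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,
    },
)
```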
54. Network:
1. Understand what AWS Global Accelerator is and its use cases.
2. Understand when to use CloudFront and when to use AWS Global Accelerator.
3. Make sure you understand the different types of VPC endpoint, and which require an Elastic Network Interface (ENI) and which require a route table entry.
4. You need to know how to connect multiple accounts; e.g. should you use VPC peering or a VPC endpoint?
5. Know the difference between PrivateLink and ClassicLink.
6. Know the patterns for extending a secure on-premises environment into AWS.
7. Know how to encrypt AWS Direct Connect (you can use a Virtual Private Gateway / AWS VPN).
8. Understand when to use Direct Connect vs Snowball to migrate data – lead time can be an issue with Direct Connect if you're in a hurry.
9. Know how to prevent circumvention of Amazon CloudFront, e.g. Origin Access Identity (OAI) or signed URLs / signed cookies.
55. Databases:
1. Make sure you understand Amazon Aurora and Amazon Aurora Serverless.
2. Know which RDS databases can have Read Replicas and whether you can read from a Multi-AZ standby.
3. Know the options for encrypting an existing RDS database; e.g. encryption is only possible at creation time, otherwise you must encrypt a snapshot and create a new instance from the snapshot.
4. Know which databases are key-value stores, e.g. Amazon DynamoDB.
56. Application Integration:
1. Make sure you know the use cases for the Amazon Simple Queue Service (SQS) and the Simple Notification Service (SNS).
2. Understand the differences between Amazon Kinesis Firehose and SQS, and when you would use each service.
3. Know how to use Amazon S3 event notifications to publish events to SQS – here's a good "How To" article.
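Point 3's S3-to-SQS wiring, sketched with boto3; the bucket name and queue ARN are placeholders, and the queue's access policy must separately allow S3 to send messages.

```python
import boto3

s3 = boto3.client("s3")

# Publish an event to an SQS queue whenever an object is created under
# uploads/.
s3.put_bucket_notification_configuration(
    Bucket="example-bucket",                   # placeholder names
    NotificationConfiguration={"QueueConfigurations": [{
        "QueueArn": "arn:aws:sqs:us-east-1:111122223333:uploads-queue",
        "Events": ["s3:ObjectCreated:*"],
        "Filter": {"Key": {"FilterRules": [
            {"Name": "prefix", "Value": "uploads/"},
        ]}},
    }]},
)
```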
57. Management and Governance:
1. You'll need to know about AWS Organizations, e.g. how to migrate an account between organizations.
2. For AWS Organizations, you also need to know how to restrict actions using service control policies attached to OUs.
3. Understand what AWS Resource Access Manager is.
The AWS Certified Solution Architect Associate Examination Preparation and Readiness Quiz App (SAA-C01, SAA-C02, SAA) helps you prepare and train for the AWS Certified Solution Architect Associate exam with various questions and answers dumps.
This app provides updated questions and answers and an intuitive, responsive interface, allowing you to browse questions horizontally and browse tips and resources vertically after completing a quiz.
Features:
100+ Questions and Answers updated frequently to get you AWS certified.
Quiz with score tracker, countdown timer, and highest-score saving. View answers after completing the quiz for each category.
Ability to navigate through questions for each category using next and previous buttons.
Resource info page about the answer for each category and Top 60 Tips to succeed in the exam.
Prominent Cloud Evangelist latest tweets and Technology Latest News Feed
The app helps you study and practice from your mobile device with an intuitive interface.
SAA-C01 and SAA-C02 compatible
The questions and answers are divided into 4 categories:
Design High Performing Architectures
Design Cost Optimized Architectures
Design Secure Applications And Architectures
Design Resilient Architectures
The questions and answers cover the following topics: AWS VPC, S3, DynamoDB, EC2, ECS, Lambda, API Gateway, CloudWatch, CloudTrail, Code Pipeline, Code Deploy, TCO Calculator, AWS SES, Amazon Lex, AWS EBS, AWS ELB, AWS Autoscaling, RDS, Aurora, Route 53, Amazon CodeGuru, Amazon Braket, AWS Billing and Pricing, AWS Simple Monthly Calculator, AWS cost calculator, EC2 pricing on-demand, AWS Pricing, AWS Pay As You Go, AWS No Upfront Cost, Cost Explorer, AWS Organizations, Consolidated billing, Instance Scheduler, on-demand instances, Reserved instances, Spot Instances, CloudFront, Web hosting on S3, S3 storage classes, AWS Regions, AWS Availability Zones, Trusted Advisor, various architectural questions and answers about AWS, AWS SDK, AWS EBS Volumes, Containers, KMS, AWS read replicas, AWS Snapshots, Auto shutdown EC2 instances, High Availability, Elasticity, AWS Virtual Machines, AWS Caching, AWS Containers, AWS Architecture, AWS Security, AWS Lambda, Bastion Hosts, S3 lifecycle policy, Kinesis sharding, AWS KMS, Design High Performing Architectures, Design Cost Optimized Architectures, Design Secure Applications And Architectures, Design Resilient Architectures, AWS vs Azure vs Google Cloud, Load Balancing, Multi-AZ RDS, EFS, NLB, ALB, Auto Scaling, DynamoDB (latency), Aurora (performance), Multi-AZ RDS (high availability), Throughput Optimized EBS (highly sequential), SAA-C01, SAA-C02, Elastic Beanstalk, OpsWorks, RPO vs RTO, HA vs FT, Undifferentiated Heavy Lifting, Access Management Basics, Shared Responsibility Model, Cloud Service Models, etc.
The resources sections cover the following areas: certification, AWS training, mock exam preparation tips, cloud architect training, cloud architect knowledge, cloud technology, cloud certification, cloud exam preparation tips, the cloud solution architect associate exam, certification practice exams, learning AWS for free, Amazon cloud solution architect, question dumps, A Cloud Guru links, Tutorials Dojo links, Linux Academy links, the latest AWS certification tweets, posts from Reddit, Quora, LinkedIn and Medium, cloud exam questions, Amazon cloud certified solution architect associate exam questions, AWS certification dumps, Google Cloud, Azure cloud, learning Google Cloud, learning Azure cloud, cloud comparison, etc.
Abilities Validated by the Certification:
Effectively demonstrate knowledge of how to architect and deploy secure and robust applications on AWS technologies
Define a solution using architectural design principles based on customer requirements
Provide implementation guidance based on best practices to the organization throughout the life cycle of the project
Recommended Knowledge for the Certification:
One year of hands-on experience designing available, cost-effective, fault-tolerant, and scalable distributed systems on AWS.
Hands-on experience using compute, networking, storage, and database AWS services.
Hands-on experience with AWS deployment and management services.
Ability to identify and define technical requirements for an AWS-based application.
Ability to identify which AWS services meet a given technical requirement.
Knowledge of recommended best practices for building secure and reliable applications on the AWS platform.
An understanding of the basic architectural principles of building in the AWS Cloud.
An understanding of the AWS global infrastructure.
An understanding of network technologies as they relate to AWS.
An understanding of security features and tools that AWS provides and how they relate to traditional services.
Note and disclaimer: We are not affiliated with AWS, Amazon, Microsoft or Google. The questions are put together based on the certification study guide and materials available online. We also receive questions and answers from anonymous users, and we vet them to make sure they are legitimate. The questions in this app should help you pass the exam, but passing is not guaranteed, and we are not responsible for any exam you do not pass.
Important: To succeed with the real exam, do not memorize the answers in this app. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
What is the AWS Certified Solution Architect Associate Exam?
This exam validates an examinee’s ability to effectively demonstrate knowledge of how to architect and deploy secure and robust applications on AWS technologies. It validates an examinee’s ability to:
Define a solution using architectural design principles based on customer requirements.
Provide implementation guidance based on best practices to the organization throughout the lifecycle of the project.
There are two types of questions on the examination:
Multiple-choice: Has one correct response and three incorrect responses (distractors).
Multiple-response: Has two correct responses out of five options.
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that an examinee with incomplete knowledge or skill would likely choose. However, they are generally plausible responses that fit in the content area defined by the test objective. Unanswered questions are scored as incorrect; there is no penalty for guessing.
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
How does using a VPN, proxy, TOR, or private browsing protect your online activity?
There are several ways that using a virtual private network (VPN), proxy, TOR, or private browsing can protect your online activity:
VPN: A VPN encrypts your internet connection and routes your traffic through a secure server, making it harder for others to track your online activity. This can protect you from hackers, government surveillance, and other types of online threats.
Proxy: A proxy acts as an intermediary between your device and the internet. When you use a proxy, your internet traffic is routed through the proxy server, which can mask your IP address and make it harder for others to track your online activity. (A short Python sketch after this list shows proxy and TOR routing in practice.)
TOR: The TOR network is a decentralized network of servers that routes your internet traffic through multiple servers to obscure your IP address and location. This can make it more difficult for others to track your online activity.
Private browsing: Private browsing mode, also known as “incognito mode,” is a feature that is available in most modern web browsers. When you use private browsing, your web browser does not store any information about your browsing activity, including cookies, history, or cache. This can make it harder for others to track your online activity.
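To make the proxy and TOR routing above concrete, here is a minimal Python sketch using the requests library; the proxy address is a placeholder, and the TOR example assumes a locally running TOR client plus the requests[socks] extra.

```python
import requests

# Route a request through an ordinary HTTP proxy (placeholder address).
via_proxy = requests.get(
    "https://httpbin.org/ip",
    proxies={"https": "http://proxy.example.com:8080"},
    timeout=10,
)
print(via_proxy.json())

# Route a request through TOR's local SOCKS listener (default port 9050).
# Requires: pip install requests[socks], and a running TOR client.
# The socks5h scheme resolves DNS through the proxy too, avoiding leaks.
via_tor = requests.get(
    "https://httpbin.org/ip",
    proxies={"https": "socks5h://127.0.0.1:9050"},
    timeout=30,
)
print(via_tor.json())
```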
Overall, using a VPN, proxy, TOR, or private browsing can help protect your online activity by making it harder for others to track your internet usage and by providing an additional layer of security. However, it is important to note that these tools are not foolproof and cannot completely guarantee your online privacy. It is always a good idea to be aware of your online activity and take steps to protect your personal information.
VPNs are used to provide remote corporate employees, gig economy freelance workers and business travelers with access to software applications hosted on proprietary networks. To gain access to a restricted resource through a VPN, the user must be authorized to use the VPN app and provide one or more authentication factors, such as a password, security token or biometric data.
A VPN extends a private network across a public network, and enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network. Applications running on a computing device (e.g. a laptop, desktop or smartphone) across a VPN may therefore benefit from the functionality, security, and management of the private network. Encryption is a common, though not an inherent, part of a VPN connection.
To ensure security, the private network connection is established using an encrypted layered tunneling protocol and VPN users use authentication methods, including passwords or certificates, to gain access to the VPN. In other applications, Internet users may secure their connections with a VPN, to circumvent geo restrictions and censorship, or to connect to proxy servers to protect personal identity and location to stay anonymous on the Internet. However, some websites block access to known VPN technology to prevent the circumvention of their geo-restrictions, and many VPN providers have been developing strategies to get around these roadblocks.
Private browsing (an "incognito" or "InPrivate" window) is a privacy feature in some web browsers (Chrome, Firefox, Internet Explorer, Edge). When operating in such a mode, the browser creates a temporary session that is isolated from the browser's main session and user data. Browsing history is not saved, and local data associated with the session, such as cookies, are cleared when the session is closed.
These modes are designed primarily to prevent data and history associated with a particular browsing session from persisting on the device, or being discovered by another user of the same device. Private browsing modes do not necessarily protect users from being tracked by other websites or by their internet service provider (ISP). Furthermore, identifiable traces of activity can still be leaked from private browsing sessions by means of the operating system, security flaws in the browser, or malicious browser extensions, and certain HTML5 APIs have been found to detect the presence of private browsing modes due to differences in behaviour.
The questions are:
How does using a VPN, proxy, TOR, private browsing, or an incognito window protect your online activity?
What are the pros and cons of a VPN vs. a proxy?
A VPN masks your real IP address by hiding it behind one of its servers. As a result, no third party can link your online activity to your physical location. To top it off, you avoid annoying ads and stay off marketers' radars.
A VPN also encrypts your internet traffic, making it much harder for anybody to decode your sensitive information and steal your identity. Reputable providers publish details of how they protect their users against data theft.
If your VPN doesn't protect your online activities, something is wrong with one of the protection measures above. Possible causes include:
VPN connection disruption. A sudden drop of your connection can deanonymize you if, at that moment, your device is sending or receiving IP-related requests. To avoid this situation, the kill switch option should always be ON.
DNS/IP address leakage. This can be caused by anything from configuration mistakes to a conflict between the VPN app and some other installed software. Whatever the reason, you end up with an otherwise perfectly working security app that is, in fact, leaking your IP address.
Outdated protocol. In a nutshell, the protocol is the technology that manages the creation of your secured connection. If your current protocol becomes obsolete, the app will no longer protect you reliably.
Free apps. Some free VPN software makes money from your privacy, for example by collecting your private data and selling it to third parties. Such behaviour is widely considered unethical, and in many places it is illegal.
User carelessness. For instance, always turn on your virtual private network before you visit a website or enter your credentials; don't use the app sporadically.
How is a VPN different from a proxy server?
On top of serving as a proxy server, a VPN provides encryption; a proxy server only hides your IP address.
Proxies are good for low-stakes tasks like watching regionally restricted videos on YouTube, creating another Gmail account when you have hit the per-IP limit, accessing region-restricted websites, and bypassing content filters or per-IP request restrictions.
On the other hand, proxies are not so great for high-stakes tasks. A proxy only acts as a middleman in your Internet traffic: it simply serves the webpage you request it to serve.
Just like a proxy, a VPN makes your traffic appear to come from a remote IP address that is not yours. But that is where the similarities end.
Unlike a proxy, a VPN is set up at the operating system level, so it captures all the traffic coming from the device it is set up on. Whether it is your web traffic, a BitTorrent client, a game, or a Windows update, it captures traffic from every application on your device.
Another difference is that a VPN tunnels all your traffic through a heavily encrypted, secure connection to the VPN server.
This makes a VPN an ideal solution for high-stakes tasks where security and privacy are of paramount importance. With a VPN, neither your ISP, your government, nor a guy snooping over an open Wi-Fi connection can access your traffic.
What are the daily uses of a VPN?
There are many uses of a Virtual Private Network (VPN) for ordinary users and company employees. Here are the most common ones:
Accessing Business Networks from Anywhere in the World:
This is one of the best uses of a VPN. It is very helpful when you are travelling and have work to complete. You can connect any computer to your business network from anywhere and carry on working. Local resources need some security, so they should be kept VPN-only to ensure their safety.
To Hide Your Browsing Data from Your ISP and Local Users:
Internet Service Providers (ISPs) log the activity coming from your IP address. If you use a VPN, they can only see the connection to your VPN server, so no one can spy on your browsing history.
Moreover, a VPN secures your connection when you use a public Wi-Fi network. Other users on these networks can snoop on your traffic and see which sites you visit, even when you stick to HTTPS websites (the domain names remain visible). A VPN protects your privacy on public, unsecured Wi-Fi connections.
To Access Geographically Blocked Sites :
Have you ever faced a problem like “This content is not available in your country”? VPNs are the best solution to bypass these restrictions.
Some videos on YouTube will also show this restriction. VPNs are a quick fix for all these restrictions.
What about TOR and VPN? What are the Pros and Cons?
The Tor network is similar to a VPN. Messages to and from your computer pass through the Tor network rather than connecting directly to resources on the Internet. But where VPNs provide privacy, Tor provides anonymity.
Tor is free and open-source software for enabling anonymous communication. The name is derived from an acronym for the original software project name, "The Onion Router". Tor directs Internet traffic through a free, worldwide, volunteer overlay network consisting of more than seven thousand relays to conceal a user's location and usage from anyone conducting network surveillance or traffic analysis. Using Tor makes it more difficult to trace Internet activity to the user: this includes "visits to Web sites, online posts, instant messages, and other communication forms". Tor's intended use is to protect the personal privacy of its users, as well as their freedom and ability to conduct confidential communication by keeping their Internet activities from being monitored.
Tor does not prevent an online service from determining when it is being accessed through Tor. Tor protects a user's privacy, but it does not hide the fact that someone is using Tor. Some websites restrict access through Tor; for example, Wikipedia blocks attempts by Tor users to edit articles unless special permission is sought. Although a VPN is generally faster than Tor, using them together will slow down your internet connection and should be avoided; more is not necessarily better in this situation.
The deep web is the part of the web that cannot be indexed by search engines: internal company login pages, school portals, private Google Sites, or government pages.
The dark web is the more sinister corner of the deep web, associated with illegal activity (e.g., drug markets, stolen data, and other criminal services). A VPN is not necessary when connecting to the deep web. Please do not confuse the deep web with the dark web.
Are there any good free VPN services?
It is not recommended to use a free VPN, for the following reasons:
1- Security: Free VPNs are under no obligation to ensure your privacy is protected.
2- Tracking: Free VPNs have no obligation to keep your details safe, so at any point your details could be passed on.
3- Speed/bandwidth: Some free VPN services are capped at a lower bandwidth, so you will get lower browsing and download speeds than with a paid VPN.
4- Protocols supported: A free VPN may not support all the necessary protocols. PPTP, OpenVPN and L2TP are generally provided only on paid VPN services.
If you do decide to pay, well-regarded options include:
ExpressVPN – nearly 3x NordVPN's price, but guarantees Netflix access in the US, offers excellent customer service, and claims not to log your info.
Private Internet Access – a U.S.-based VPN that has proven its no-log policy in a court of law. This is a unique selling point that 99.99% of VPNs don't have.
OpenVPN – provides flexible VPN solutions to secure your data communications, whether for Internet privacy, remote access for employees, securing IoT, or networking cloud data centers.
Other Questions about VPN and security:
Why might certain websites not load when using a VPN?
Some organizations, such as banks, often block IP addresses known to belong to major VPN companies, because doing so is thought to improve security.
You probably need a VPN that allows you to use a dedicated IP address. Otherwise the server IPs switch every time you reconnect to your VPN, and shared IPs are often flagged as suspicious logins because many people log in from the same IP address (which makes the site think it might be bots or mass-hacked accounts).
How is a hacker traced when server logs show his or her IP is from a VPN?
Start looking for IP address leaks. Even hackers are terrible at not leaking their IPs.
Look for times the attacker forgot to enable their VPN. It happens all the time.
Look at other things related to the attack, such as domains. The attacker might have registered a domain using something you can trace, or left a string in the malware that can help identify them.
Silently, and legally, take control of the command-and-control server.
What is the most secure VPN protocol?
OpenVPN technology uses the highest levels (military-grade) of encryption algorithms, i.e., 256-bit keys, to secure your data transfers.
OpenVPN is also known to have the fastest speeds even in the case of long distance connections that have latency. The protocol is highly recommended for streaming, downloading files and watching live TV. In addition to speeds, the protocol is stable and known to have fewer disconnections compared to its many counterparts.
OpenVPN comes equipped with solid military grade encryption and is way better, security wise, than PPTP, L2TP/IPSec and SSTP.
What are some alternatives to a VPN?
The Tor network: it is anonymous, free and, well, rather slow; certainly fast enough to access your private email, but not fast enough to stream a movie.
Proxies are remote computers that individuals or organizations use to restrict Internet access, filter content, and make Internet browsing more secure. It acts as a middleman between the end user and the web server, since all connection requests pass through it. It filters the request first then sends it to the web server. Once the web server responds, the proxy filters the response then sends it to the end user.
IPSec (Cisco, Netgear, etc.): secure network protocol suite that authenticates and encrypts the packets of data sent over an Internet Protocol network.
SSL (Full) like OpenVPN
SSL (Partial) like SSL-Explorer and most appliances
SSH Tunneling is a method of transporting arbitrary networking data over an encrypted SSH connection. It can be used to add encryption to legacy applications. It also provides a way to secure the data traffic of any given application using port forwarding, basically tunneling any TCP/IP port over SSH.
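As an illustration, here is a minimal Python sketch of SSH local port forwarding, assuming an OpenSSH client is installed; the host names, user, and ports are placeholders, not values from this article:

import subprocess

# Forward local port 8080 to port 80 on an internal host, tunnelled
# through a bastion over an encrypted SSH connection. Equivalent to:
#   ssh -N -L 8080:intranet.local:80 user@bastion.example.com
tunnel = subprocess.Popen([
    "ssh",
    "-N",                            # no remote command, just forward ports
    "-L", "8080:intranet.local:80",  # local_port:remote_host:remote_port
    "user@bastion.example.com",
])

# While the tunnel is running, traffic sent to http://localhost:8080 is
# encrypted between this machine and the bastion.
# Call tunnel.terminate() when finished.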
You can also create your own VPN using any encryption or simple tunneling technology.
How does private browsing or incognito window work?
When you are in private browsing mode, your browser doesn't store your history, cookies, or site data at all. It functions as a completely isolated browser session.
For most web browsers, their optional private mode, often also called InPrivate or incognito, is like normal browsing except for a few things.
it uses separate temporary cookies that are deleted once the browser is closed (leaving your existing cookies unaffected)
no private activity is logged to the browser’s history
it often uses a separate temporary cache
What are the advantages of Google Chrome’s private browsing?
simultaneously log into a website using different account names
access websites without extensions (all extensions are disabled by default when in Incognito)
limit persistent tracking by Google, Facebook and other online advertising companies, since the session's cookies are discarded when it is closed
visit a website as an anonymous visitor, or see how a personalized webpage looks from a third-party (logged-out) perspective
Firefox private browsing or chrome incognito?
Mozilla doesn’t really have an incentive to spy on their users. It’s not really going to get them anything because they’re not a data broker and don’t sell ads. Couple this with the fact that Firefox is open-source and I would argue that Firefox is the clear winner here.
Adding a VPN to Firefox is clever because it means the privacy protection is integrated into one application rather than being spread across different services. That integration probably makes it more likely to be used by people who wouldn’t otherwise use one.
Pros and Cons of Adding VPN to browsers like Firefox and Opera:
Turning on the VPN will give users a secure connection to a trusted server when using a device connected to public Wi-Fi (and running the gamut of rogue Wi-Fi hotspots and unknown intermediaries). Many travellers use subscription VPNs when away from a home network – the Mozilla Private Network is just a simpler, zero-cost alternative.
However, like Opera’s offering, it’s not a true VPN – that is, it only encrypts traffic while using one browser, Firefox. Traffic from all other applications on the same computer won’t be secured in the same way.
As with any VPN, it won’t keep you completely anonymous. Websites you visit will see a Cloudflare IP address instead of your own, but you will still get advertising cookies and if you log in to a website your identity will be known to that site.
Definition 1: An AWS Developer is responsible for designing, deploying, and developing cloud applications on the AWS platform.
Definition 2: AWS Developer Tools is a set of services designed to enable developers and IT operations professionals practicing DevOps to rapidly and safely deliver software.
The AWS Certified Developer Associate certification is a widely recognized certification that validates a candidate’s expertise in developing and maintaining applications on the Amazon Web Services (AWS) platform.
The certification is about to undergo a major change with the introduction of the new exam version DVA-C02, replacing the current DVA-C01. In this article, we will discuss the differences between the two exams and what candidates should consider in terms of preparation for the new DVA-C02 exam.
Quick facts
What’s happening?
The DVA-C01 exam is being replaced by the DVA-C02 exam.
When is this taking place?
The last day to take the current exam is February 27th, 2023 and the first day to take the new exam is February 28th, 2023.
What’s the difference?
The new exam features some new AWS services and features.
Main differences between DVA-C01 and DVA-C02
In terms of the exam content weightings, the DVA-C02 exam places a greater emphasis on deployment and management, with a slightly reduced emphasis on development and refactoring. This shift reflects the increased importance of operations and management in cloud computing, as well as the need for developers to have a strong understanding of how to deploy and maintain applications on the AWS platform.
One major difference between the two exams is the focus on the latest AWS services and features. The DVA-C02 exam covers around 57 services vs only 33 services in the DVA-C01. This reflects the rapidly evolving AWS ecosystem and the need for developers to be up-to-date with the latest services and features in order to effectively build and maintain applications on the platform.
In terms of preparation for the DVA-C02 exam, we strongly recommend enrolling in our on-demand training courses for the AWS Developer Associate certification. It is important for candidates to familiarize themselves with the latest AWS services and features, as well as the updated exam content weightings. Practical experience working with AWS services and hands-on experimentation with new services and features will be key to success on the exam. Candidates should also focus on their understanding of security best practices, access control, and compliance, as these topics will carry a greater weight in the new exam.
In conclusion, the change from the DVA-C01 to the DVA-C02 exam represents a major shift in the focus and content of the AWS Certified Developer Associate certification. Candidates preparing for the new exam should focus on familiarizing themselves with the latest AWS services and features, as well as the updated exam content weightings, and placing a strong emphasis on security, governance, and compliance.
With the right preparation and focus, candidates can successfully navigate the changes in the DVA-C02 exam and maintain their status as a certified AWS Developer Associate.
AWS Developer and Deployment Theory Facts and summaries
Continuous Integration is about integrating or merging the code changes frequently, at least once per day. It enables multiple devs to work on the same application.
Continuous delivery is all about automating the build, test, and deployment functions.
Continuous Deployment fully automates the entire release process, code is deployed into Production as soon as it has successfully passed through the release pipeline.
AWS CodePipeline is a continuous integration/continuous delivery (CI/CD) service:
It automates your end-to-end software release process based on a user-defined workflow
It can be configured to automatically trigger your pipeline as soon as a change is detected in your source code repository
It integrates with other services from AWS like CodeBuild and CodeDeploy, as well as third party custom plug-ins.
AWS CodeBuild is a fully managed build service. It can build source code, run tests and produce software packages based on commands that you define yourself.
By default, the buildspec.yml file defines the build commands and settings used by CodeBuild to run your build.
AWS CodeDeploy is a fully managed automated deployment service and can be used as part of a Continuous Delivery or Continuous Deployment process.
There are 2 types of deployment approach:
In-place (rolling update): you stop the application on each host and deploy the latest code. EC2 and on-premises systems only. To roll back, you must re-deploy the previous version of the application.
Blue/Green: new instances are provisioned and the new application version is deployed to these new instances. Traffic is routed to the new instances according to your own schedule. Supported for EC2, on-premises systems, and Lambda functions. Rollback is easy: just route the traffic back to the original instances. Blue is the active deployment, green is the new release.
Docker allows you to package your software into Containers which you can run in Elastic Container Service (ECS)
A Docker container includes everything the software needs to run, including code, libraries, runtime, and environment variables.
A special file called Dockerfile is used to specify the instructions needed to assemble your Docker image.
Once built, Docker images can be stored in Elastic Container Registry (ECR) and ECS can then use the image to launch Docker Containers.
AWS CodeCommit is based on Git. It provides centralized repositories for all your code, binaries, images, and libraries.
CodeCommit tracks and manages code changes. It maintains version history.
CodeCommit manages updates from multiple sources and enables collaboration.
To support CORS, an API resource needs to implement an OPTIONS method that can respond to the OPTIONS preflight request with the following headers (a minimal handler sketch follows the list):
Access-Control-Allow-Headers
Access-Control-Allow-Origin
Access-Control-Allow-Methods
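As a minimal sketch, a Lambda-backed OPTIONS method could return the preflight headers like this; the origin and method values are placeholders you would adapt:

def lambda_handler(event, context):
    # Respond to the browser's OPTIONS preflight request with CORS headers.
    return {
        "statusCode": 200,
        "headers": {
            "Access-Control-Allow-Headers": "Content-Type,Authorization",
            "Access-Control-Allow-Origin": "https://www.example.com",
            "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
        },
        "body": "",
    }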
You have a legacy application that works via XML messages. You need to place the application behind an API gateway so customers can make API calls. Which of the following would you need to configure? Answer: you will need to work with the Request and Response Data mappings.
Your application currently points to several Lambda functions in AWS. A change is being made to one of the Lambda functions. You need to ensure that application traffic is shifted slowly from one Lambda function to the other. Which of the following steps would you carry out?
Create an alias with the --routing-config parameter
Update the alias with the --routing-config parameter
By default, an alias points to a single Lambda function version. When the alias is updated to point to a different function version, incoming request traffic in turn instantly points to the updated version. This exposes that alias to any potential instabilities introduced by the new version. To minimize this impact, you can implement the routing-config parameter of the Lambda alias that allows you to point to two different versions of the Lambda function and dictate what percentage of incoming traffic is sent to each version.
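A minimal boto3 sketch of the same idea; the function name, alias name, and version numbers are placeholders:

import boto3

lam = boto3.client("lambda")

# Keep the alias on version 1, but send 10% of invocations to version 2.
lam.update_alias(
    FunctionName="my-function",
    Name="live",
    FunctionVersion="1",
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.10}},
)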
AWS CodeDeploy: The AppSpec file defines all the parameters needed for the deployment e.g. location of application files and pre/post deployment validation tests to run.
For EC2/on-premises systems, the appspec.yml file must be placed in the root directory of your revision (the same folder that contains your application code). It is written in YAML.
For Lambda and ECS deployment, the AppSpec file can be YAML or JSON
Visual workflows are automatically created when working with AWS Step Functions.
API Gateway stages store configuration for deployment. An API Gateway stage refers to a snapshot of your API.
AWS Simple Workflow Service (SWF) guarantees the delivery order of messages/tasks.
Blue/Green Deployments with CodeDeploy on AWS Lambda can happen in multiple ways. Which of these is a potential option? Linear, All at once, Canary
X-Ray Filter Expressions allow you to search through request information using characteristics like URL Paths, Trace ID, Annotations
S3 has eventual consistency for overwrite PUTS and DELETES.
What can you do to ensure the most recent version of your Lambda functions is in CodeDeploy? Specify the version to be deployed in AppSpec file.
AppSpec Files on an Amazon ECS Compute Platform (Reference: https://docs.aws.amazon.com/codedeploy/latest/userguide/application-specification-files.html)
If your application uses the Amazon ECS compute platform, the AppSpec file can be formatted with either YAML or JSON. It can also be typed directly into an editor in the console. The AppSpec file is used to specify:
The name of the Amazon ECS service, and the container name and port used to direct traffic to the new task set.
The Lambda functions to be used as validation tests; you can run validation Lambda functions after deployment lifecycle events. For more information, see AppSpec 'hooks' Section for an Amazon ECS Deployment, AppSpec File Structure for Amazon ECS Deployments, and AppSpec File Example for an Amazon ECS Deployment.
Q2: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other, and enables the identification of bugs early on in the release process?
Answer: Continuous Integration (see the facts and summaries above).
Q5: You want to receive an email whenever a user pushes code to a CodeCommit repository. How can you configure this?
A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.
Answer: C. Configuring notifications in the console creates a CloudWatch Events rule that publishes to an SNS topic, which in turn emails the subscribed users.
Q8: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?
A. Transforms
B. Outputs
C. Resources
D. Instances
Answer: C. The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3.
Q9: Which AWS service can be used to fully automate your entire release process?
A. CodeDeploy
B. CodePipeline
C. CodeCommit
D. CodeBuild
Answer: B. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates
Q10: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?
A. Outputs
B. Transforms
C. Resources
D. Exports
Answer: A. Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack.
Q11: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?
A. Inputs
B. Resources
C. Transforms
D. Files
Answer: C. Transforms is used to reference code located in S3 and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments. For example:
Transform:
  Name: 'AWS::Include'
  Parameters:
    Location: 's3://MyAmazonS3BucketName/MyFileName.yaml'
Q12: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?
Answer: appspec.yml (see the CodeDeploy facts above).
Q13: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?
A. Share the code using an EBS volume
B. Copy and paste the code into the template each time you need to use it
C. Use a CloudFormation nested stack
D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.
Answer: C. Nested stacks let you re-use common template code (such as a load balancer or web server definition) stored in S3 across multiple parent templates.
Q15: You need to set up a RESTful API service in AWS that would be served via the URL https://democompany.com/customers. Which combination of services can be used for development and hosting of the RESTful service? (Choose 2)
Answer: AWS Lambda and API Gateway.
Q16: As a developer, you have created a Lambda function that is used to work with a bucket in Amazon S3. The Lambda function is not working as expected. You need to debug the issue and understand what the underlying issue is. How can you accomplish this in an easily understandable way?
A. Use AWS Cloudwatch metrics
B. Put logging statements in your code
C. Set the Lambda function debugging level to verbose
D. Use AWS Cloudtrail logs
Answer: B You can insert logging statements into your code to help you validate that your code is working as expected. Lambda automatically integrates with Amazon CloudWatch Logs and pushes all logs from your code to a CloudWatch Logs group associated with a Lambda function (/aws/lambda/). Reference: Using Amazon CloudWatch
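A minimal sketch of such logging statements in a Python Lambda function; the event field is a hypothetical input used only for illustration:

import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Anything logged here is pushed to the function's CloudWatch Logs
    # group, provided the execution role can write logs.
    logger.info("Received event: %s", event)
    bucket = event.get("bucket")  # hypothetical input field
    logger.info("Processing bucket: %s", bucket)
    return {"status": "ok"}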
Q17: You have a Lambda function that is invoked asynchronously. You need a way to check and debug issues if the function fails. How could you accomplish this?
A. Use AWS Cloudwatch metrics
B. Assign a dead letter queue
C. Configure SNS notifications
D. Use AWS Cloudtrail logs
Answer: B Any Lambda function invoked asynchronously is retried twice before the event is discarded. If the retries fail and you’re unsure why, use Dead Letter Queues (DLQ) to direct unprocessed events to an Amazon SQS queue or an Amazon SNS topic to analyze the failure. Reference: AWS Lambda Function Dead Letter Queues
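A sketch of attaching a dead letter queue with boto3; the function name and queue ARN are placeholders:

import boto3

lam = boto3.client("lambda")

# After the built-in retries fail, unprocessed asynchronous events are
# sent to this SQS queue so the failure can be analyzed later.
lam.update_function_configuration(
    FunctionName="my-function",
    DeadLetterConfig={"TargetArn": "arn:aws:sqs:us-east-1:123456789012:my-dlq"},
)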
Q18: You are developing an application that is going to make use of Amazon Kinesis. Due to the high throughput , you decide to have multiple shards for the streams. Which of the following is TRUE when it comes to processing data across multiple shards?
A. You cannot guarantee the order of data across multiple shards. It's possible only within a shard
B. Order of data is possible across all shards in a streams
C. Order of data is not possible at all in Kinesis streams
D. You need to use Kinesis firehose to guarantee the order of data
Answer: A Kinesis Data Streams lets you order records and read and replay records in the same order to many Kinesis Data Streams applications. To enable write ordering, Kinesis Data Streams expects you to call the PutRecord API to write serially to a shard while using the sequenceNumberForOrdering parameter. Setting this parameter guarantees strictly increasing sequence numbers for puts from the same client and to the same partition key. Option A is correct as it cannot guarantee the ordering of records across multiple shards. Reference: How to perform ordered data replication between applications by using Amazon DynamoDB Streams
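A sketch of ordered writes to a single shard using sequenceNumberForOrdering; the stream name and partition key are placeholders:

import boto3

kinesis = boto3.client("kinesis")

last_sequence = None
for i in range(3):
    kwargs = {
        "StreamName": "my-stream",
        "Data": f"record-{i}".encode(),
        "PartitionKey": "device-42",  # same key -> same shard
    }
    if last_sequence:
        # Guarantees strictly increasing sequence numbers for this client
        # and partition key; ordering across different shards is still
        # not guaranteed.
        kwargs["SequenceNumberForOrdering"] = last_sequence
    resp = kinesis.put_record(**kwargs)
    last_sequence = resp["SequenceNumber"]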
Q19: You've developed a Lambda function and are now in the process of debugging it. You add the necessary print statements in the code to assist in the debugging. You go to CloudWatch Logs, but you see no logs for the Lambda function. Which of the following could be the underlying issue?
A. You’ve not enabled versioning for the Lambda function
B. The IAM Role assigned to the Lambda function does not have the necessary permission to create Logs
C. There is not enough memory assigned to the function
D. There is not enough time assigned to the function
Answer: B “If your Lambda function code is executing, but you don’t see any log data being generated after several minutes, this could mean your execution role for the Lambda function did not grant permissions to write log data to CloudWatch Logs. For information about how to make sure that you have set up the execution role correctly to grant these permissions, see Manage Permissions: Using an IAM Role (Execution Role)”.
Q20: Your application is developed to pick up metrics from several servers and push them to CloudWatch. At times, the application gets client-side 429 errors. Which of the following can be done on the programming side to resolve such errors?
A. Use the AWS CLI instead of the SDK to push the metrics
B. Ensure that all metrics have a timestamp before sending them across
C. Use exponential backoff in your request
D. Enable encryption for the requests
Answer: C. The main reason for such errors is that throttling is occurring when many requests are sent via API calls. The best way to mitigate this is to stagger the rate at which you make the API calls. In addition to simple retries, each AWS SDK implements exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values and should be set based on the operation being performed, as well as other local factors, such as network latency. Reference: Error Retries and Exponential Backoff in AWS
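The AWS SDKs already retry with exponential backoff for you, but a hand-rolled sketch of the idea looks like this; the namespace and metric data arguments are placeholders:

import random
import time

import boto3
from botocore.exceptions import ClientError

cloudwatch = boto3.client("cloudwatch")

def put_metric_with_backoff(namespace, metric_data, max_retries=5):
    for attempt in range(max_retries):
        try:
            return cloudwatch.put_metric_data(
                Namespace=namespace, MetricData=metric_data
            )
        except ClientError as err:
            if err.response["Error"]["Code"] not in (
                "Throttling", "ThrottlingException", "RequestLimitExceeded"
            ):
                raise
            # Progressively longer waits (~1s, 2s, 4s, ...) plus jitter.
            time.sleep((2 ** attempt) + random.random())
    raise RuntimeError("still throttled after retries")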
Q21: You have been instructed to use the CodePipeline service for the CI/CD automation in your company. For security reasons, the resources that would be part of the deployment are placed in another account. Which of the following steps need to be carried out to accomplish this deployment? Choose 2 answers from the options given below
A. Define a customer master key in KMS
B. Create a reference Code Pipeline instance in the other account
C. Add a cross account role
D. Embed the access keys in the CodePipeline process
Answer: A. and C. You might want to create a pipeline that uses resources created or managed by another AWS account. For example, you might want to use one account for your pipeline and another for your AWS CodeDeploy resources. To do so, you must create an AWS Key Management Service (AWS KMS) key, add the key to the pipeline, and set up account policies and roles to enable cross-account access. Reference: Create a Pipeline in CodePipeline That Uses Resources from Another AWS Account
Q22: You are planning on deploying an application to the worker role in Elastic Beanstalk. Moreover, this worker application is going to run periodic tasks. Which of the following is a must-have as part of the deployment?
A. An appspec.yaml file
B. A cron.yaml file
C. A cron.config file
D. An appspec.json file
Answer: B. When you use the AWS Elastic Beanstalk console to deploy a new application or an application version, you will need to upload a source bundle. Your source bundle must: consist of a single ZIP file or WAR file (you can include multiple WAR files inside your ZIP file); not exceed 512 MB; and not include a parent folder or top-level directory (subdirectories are fine). If you want to deploy a worker application that processes periodic background tasks, your application source bundle must also include a cron.yaml file. For more information, see Periodic Tasks.
Q23: An application needs to make use of an SQS queue for working with messages. The SQS queue has been created with the default settings. The application needs 60 seconds to process each message. Which of the following steps need to be carried out by the application?
A. Change the VisibilityTimeout for each message and then delete the message after processing is completed
B. Delete the message and change the visibility timeout.
C. Process the message, change the visibility timeout. Delete the message
D. Process the message and delete the message
Answer: A. If the SQS queue is created with the default settings, the default visibility timeout is 30 seconds. Since the application needs more time for processing, you first need to change the visibility timeout, and delete the message after it is processed. Reference: Amazon SQS Visibility Timeout
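A sketch of that sequence with boto3; the queue URL and the process() helper are placeholders:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

def process(body):
    pass  # hypothetical ~60-second processing step

resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    # The default visibility timeout is 30s; extend it so the message is
    # not redelivered while we are still working on it.
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=90,
    )
    process(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])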
Q24: An AWS CodeDeploy deployment fails to start and generates the error code "HEALTH_CONSTRAINTS_INVALID". Which of the following can be used to eliminate this error?
A. Make sure the minimum number of healthy instances is equal to the total number of instances in the deployment group.
B. Increase the number of healthy instances required during deployment
C. Reduce number of healthy instances required during deployment
D. Make sure the number of healthy instances is equal to the specified minimum number of healthy instances.
Answer: C. AWS CodeDeploy generates the "HEALTH_CONSTRAINTS_INVALID" error when the minimum number of healthy instances defined in the deployment group is not available during deployment. To mitigate this error, make sure the required number of healthy instances is available during deployments. Reference: Error Codes for AWS CodeDeploy
Q28: An API Gateway deployment is best described as which of the following?
B. A specific snapshot of all of your API's settings, resources, and methods
C. A specific snapshot of your API’s resources
D. A specific snapshot of your API’s resources and methods
Answer: D. AWS API Gateway Deployments are a snapshot of all the resources and methods of your API and their configuration. Reference: Deploying a REST API in Amazon API Gateway
Q29: A SWF workflow task or task execution can live up to how long?
A. 1 Year
B. 14 days
C. 24 hours
D. 3 days
Answer: A. 1 Year Each workflow execution can run for a maximum of 1 year. Each workflow execution history can grow up to 25,000 events. If your use case requires you to go beyond these limits, you can use features Amazon SWF provides to continue executions and structure your applications using child workflow executions. Reference: Amazon SWF FAQs
Q30: With AWS Step Functions, all the work in your state machine is done by tasks. Tasks perform work by using what types of things? (Choose the best 3 answers)
A. An AWS Lambda Function Integration
B. Passing parameters to API actions of other services
Q: In Amazon SWF, how is the coordination logic that decides what to do next implemented?
A. A decider program that is written in the language of the developer's choice
B. A visual workflow created in the SWF visual workflow editor
C. A JSON-defined state machine that contains states within it to select the next step to take
D. SWF outsources all decisions to human deciders through the AWS Mechanical Turk service.
Answer: A. SWF allows the developer to write their own application logic to make decisions and determine how to evaluate incoming data.
Q: What programming conveniences does Amazon SWF provide to write applications? Like other AWS services, Amazon SWF provides a core SDK for the web service APIs. Additionally, Amazon SWF offers an SDK called the AWS Flow Framework that enables you to develop Amazon SWF-based applications quickly and easily. The AWS Flow Framework abstracts the details of task-level coordination with familiar programming constructs. While running your program, the framework makes calls to Amazon SWF, tracks your program's execution state using the execution history kept by Amazon SWF, and invokes the relevant portions of your code at the right times. By offering an intuitive programming framework to access Amazon SWF, the AWS Flow Framework enables developers to write entire applications as asynchronous interactions structured in a workflow. For more details, see What is the AWS Flow Framework?
Q34: CodePipeline pipelines are workflows that deal with stages, actions, transitions, and artifacts. Which of the following statements is true about these concepts?
A. Stages contain at least two actions
B. Artifacts are never modified or iterated on when used inside of CodePipeline
C. Stages contain at least one action
D. Actions will have a deployment artifact as either an input, an output, or both
Answer: C. Every stage must contain at least one action.
Q35: When deploying a simple Python web application with Elastic Beanstalk which of the following AWS resources will be created and managed for you by Elastic Beanstalk?
A. An Elastic Load Balancer
B. An S3 Bucket
C. A Lambda Function
D. An EC2 instance
Answer: A. B. and D. AWS Elastic Beanstalk uses proven AWS features and services, such as Amazon EC2, Amazon RDS, Elastic Load Balancing, Auto Scaling, Amazon S3, and Amazon SNS, to create an environment that runs your application. The current version of AWS Elastic Beanstalk uses the Amazon Linux AMI or the Windows Server 2012 R2 AMI. Reference: AWS Elastic Beanstalk FAQs
Q: What is AWS Elastic Beanstalk designed to do?
A. Deploy and scale web applications and services developed with a supported platform
B. Deploy and scale serverless applications
C. Deploy and scale applications based purely on EC2 instances
D. Manage the deployment of all AWS infrastructure resources of your AWS applications
Answer: A. Who should use AWS Elastic Beanstalk? Those who want to deploy and manage their applications within minutes in the AWS Cloud, without needing experience with cloud computing to get started. AWS Elastic Beanstalk supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker web applications. Reference: AWS Elastic Beanstalk FAQs
Q41: How does using ElastiCache help to improve database performance?
A. It can store petabytes of data
B. It provides faster internet speeds
C. It can store the results of frequent or highly-taxing queries
D. It uses read replicas
Answer: C. With ElastiCache, customers get all of the benefits of a high-performance, in-memory cache with less of the administrative burden involved in launching and managing a distributed cache. The service makes setup, scaling, and cluster failure handling much simpler than in a self-managed cache deployment. Reference: Amazon ElastiCache
Q42: Which of the following best describes the Lazy Loading caching strategy?
A. Every time the underlying database is written to or updated the cache is updated with the new information.
B. Every miss to the cache is counted and when a specific number is reached a full copy of the database is migrated to the cache
C. A specific amount of time is set before the data in the cache is marked as expired. After expiration, a request for expired data will be made through to the backing database.
D. Data is added to the cache when a cache miss occurs (when there is no data in the cache and the request must go to the database for that data)
Answer: D. Amazon ElastiCache is an in-memory key/value store that sits between your application and the data store (database) that it accesses. Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests the data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. Reference: Lazy Loading
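A language-agnostic sketch of the lazy loading pattern, using a plain dict as a stand-in for ElastiCache; get_user_from_db is a hypothetical query:

import time

CACHE = {}          # key -> (value, expiry timestamp)
TTL_SECONDS = 300

def get_user_from_db(user_id):
    return {"id": user_id, "name": "example"}  # hypothetical expensive query

def get_user(user_id):
    entry = CACHE.get(user_id)
    if entry and entry[1] > time.time():
        return entry[0]                      # cache hit
    value = get_user_from_db(user_id)        # cache miss: go to the database
    CACHE[user_id] = (value, time.time() + TTL_SECONDS)  # populate the cache
    return value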
Q45: You have just set up a push notification service to send a message to an app installed on a device with the Apple Push Notification Service. It seems to work fine. You now want to send a message to an app installed on devices for multiple platforms, those being the Apple Push Notification Service(APNS) and Google Cloud Messaging for Android (GCM). What do you need to do first for this to be successful?
A. Request Credentials from Mobile Platforms, so that each device has the correct access control policies to access the SNS publisher
B. Create a Platform Application Object which will connect all of the mobile devices with your app to the correct SNS topic.
C. Request a Token from Mobile Platforms, so that each device has the correct access control policies to access the SNS publisher.
D. Get a set of credentials in order to be able to connect to the push notification service you are trying to setup.
Answer: D. To use Amazon SNS mobile push notifications, you need to establish a connection with a supported push notification service. This connection is established using a set of credentials. Reference: Add Device Tokens or Registration IDs
Q46: SNS message can be sent to different kinds of endpoints. Which of these is NOT currently a supported endpoint?
A. Slack Messages
B. SMS (text message)
C. HTTP/HTTPS
D. AWS Lambda
Answer: A. Slack messages are not directly integrated with SNS, though theoretically you could write a service to push messages to Slack from SNS.
Q47: Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?
A. Set the imaging queue VisibilityTimeout attribute to 20 seconds
B. Set the imaging queue MessageRetentionPeriod attribute to 20 seconds
C. Set the imaging queue ReceiveMessageWaitTimeSeconds Attribute to 20 seconds
D. Set the DelaySeconds parameter of a message to 20 seconds
Answer: C. Enabling long polling reduces the amount of false and empty responses from SQS service. It also reduces the number of calls that need to be made to a queue by staying connected to the queue until all messages have been received or until timeout. In order to enable long polling the ReceiveMessageWaitTimeSeconds attribute needs to be set to a number greater than 0. If it is set to 0 then short polling is enabled. Reference: Amazon SQS Long Polling
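A sketch of long polling with boto3 (the queue URL is a placeholder); WaitTimeSeconds=20 keeps the connection open until a message arrives or 20 seconds pass:

import boto3

sqs = boto3.client("sqs")

resp = sqs.receive_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/imaging-queue",
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,   # 0 would mean short polling
)
for msg in resp.get("Messages", []):
    print(msg["Body"])

The same behaviour can also be set queue-wide via the queue's ReceiveMessageWaitTimeSeconds attribute, as the answer above describes.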
Q48: Which of the following statements about SQS standard queues are true?
A. Message order can be indeterminate – you’re not guaranteed to get messages in the same order they were sent in
B. Messages will be delivered exactly once and messages will be delivered in First in, First out order
C. Messages will be delivered exactly once and message delivery order is indeterminate
D. Messages can be delivered one or more times
Answer: A. and D. A standard queue makes a best effort to preserve the order of messages, but more than one copy of a message might be delivered out of order. If your system requires that order be preserved, we recommend using a FIFO (First-In-First-Out) queue or adding sequencing information in each message so you can reorder the messages when they’re received. Reference: Amazon SQS Standard Queues
Q49: Which of the following is true if long polling is enabled?
A. If long polling is enabled, then each poll only polls a subset of SQS servers; in order for all messages to be received, polling must continuously occur
B. The reader will listen to the queue until timeout
C. Increases costs because each request lasts longer
D. The reader will listen to the queue until a message is available or until timeout
Answer: D. With long polling, a ReceiveMessage call queries all of the SQS servers and waits until a message is available or the wait time expires, which reduces (not increases) cost by cutting down on empty responses.
Q50: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?
A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
B. Permanently assigning users to specific instances and always routing their traffic to those instances
C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance
Answer: A. Keeping session state in a distributed cache such as ElastiCache (Redis or Memcached) keeps the web tier stateless, so any instance can serve any request; the sticky-session approaches in B, C, and D lose sessions when an instance is scaled in or fails.
Q52: Your application must write to an SQS queue. Your corporate security policies require that AWS credentials are always encrypted and are rotated at least once a week. How can you securely provide credentials that allow your application to write to the queue?
A. Have the application fetch an access key from an Amazon S3 bucket at run time.
B. Launch the application’s Amazon EC2 instance with an IAM role.
C. Encrypt an access key in the application source code.
D. Enroll the instance in an Active Directory domain and use AD authentication.
Answer: B. IAM roles are based on temporary security tokens, so they are rotated automatically. Keys in the source code cannot be rotated (and are a very bad idea). It’s impossible to retrieve credentials from an S3 bucket if you don’t already have credentials for that bucket. Active Directory authorization will not grant access to AWS resources. Reference: AWS IAM FAQs
Q53: Your web application reads an item from your DynamoDB table, changes an attribute, and then writes the item back to the table. You need to ensure that one process doesn’t overwrite a simultaneous change from another process. How can you ensure concurrency?
A. Implement optimistic concurrency by using a conditional write.
B. Implement pessimistic concurrency by using a conditional write.
C. Implement optimistic concurrency by locking the item upon read.
D. Implement pessimistic concurrency by locking the item upon read.
Answer: A. Optimistic concurrency depends on checking a value upon save to ensure that it has not changed. Pessimistic concurrency prevents a value from changing by locking the item or row in the database. DynamoDB does not support item locking, and conditional writes are perfect for implementing optimistic concurrency. Reference: Optimistic Locking With Version Number
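A sketch of optimistic locking with a version-number attribute and a conditional write; the table name, key, and attribute names are placeholders:

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("my-table")

item = table.get_item(Key={"pk": "user#1"})["Item"]
item["email"] = "new@example.com"

try:
    table.put_item(
        Item={**item, "version": item["version"] + 1},
        # Succeeds only if nobody else bumped the version since we read it.
        ConditionExpression="#v = :expected",
        ExpressionAttributeNames={"#v": "version"},
        ExpressionAttributeValues={":expected": item["version"]},
    )
except ClientError as err:
    if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
        pass  # someone else won the race: re-read and retry
    else:
        raise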
Q: In a CloudFormation template, which intrinsic function returns a named value from a two-level map declared in the Mappings section (for example, to select an AMI ID by region)?
Answer: Fn::FindInMap. The intrinsic function Fn::FindInMap returns the value corresponding to keys in a two-level map that is declared in the Mappings section. You can use Fn::FindInMap to return a named value based on a specified key. For example, a template can contain an Amazon EC2 resource whose ImageId property is assigned by the FindInMap function, with the region where the stack is created (via the AWS::Region pseudo parameter) as the key and HVM64 as the name of the value to map to.
Q56: Your application triggers events that must be delivered to all your partners. The exact partner list is constantly changing: some partners run a highly available endpoint, and other partners’ endpoints are online only a few hours each night. Your application is mission-critical, and communication with your partners must not introduce delay in its operation. A delay in delivering the event to one partner cannot delay delivery to other partners.
What is an appropriate way to code this?
A. Implement an Amazon SWF task to deliver the message to each partner. Initiate an Amazon SWF workflow execution.
B. Send the event as an Amazon SNS message. Instruct your partners to create an HTTP endpoint and subscribe that endpoint to the Amazon SNS topic.
C. Create one SQS queue per partner. Iterate through the queues and write the event to each one. Partners retrieve messages from their queue.
D. Send the event as an Amazon SNS message. Create one SQS queue per partner that subscribes to the Amazon SNS topic. Partners retrieve messages from their queue.
Answer: D. There are two challenges here: the command must be “fanned out” to a variable pool of partners, and your app must be decoupled from the partners because they are not highly available. Sending the command as an SNS message achieves the fan-out via its publication/subscribe model, and using an SQS queue for each partner decouples your app from the partners. Writing the message to each queue directly would cause more latency for your app and would require your app to monitor which partners were active. It would be difficult to write an Amazon SWF workflow for a rapidly changing set of partners.
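A sketch of the fan-out wiring with boto3; the topic name and queue ARNs are placeholders, and the SQS queue policies that allow SNS to deliver messages are omitted for brevity:

import boto3

sns = boto3.client("sns")

topic_arn = sns.create_topic(Name="partner-events")["TopicArn"]

partner_queue_arns = [
    "arn:aws:sqs:us-east-1:123456789012:partner-a",
    "arn:aws:sqs:us-east-1:123456789012:partner-b",
]
for queue_arn in partner_queue_arns:
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# One publish reaches every partner's queue; a slow partner cannot delay
# delivery to the others.
sns.publish(TopicArn=topic_arn, Message='{"event": "order_created"}')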
Q57: You have a three-tier web application (web, app, and data) in a single Amazon VPC. The web and app tiers each span two Availability Zones, are in separate subnets, and sit behind ELB Classic Load Balancers. The data tier is a Multi-AZ Amazon RDS MySQL database instance in database subnets. When you call the database tier from your app tier instances, you receive a timeout error. What could be causing this?
A. The IAM role associated with the app tier instances does not have rights to the MySQL database.
B. The security group for the Amazon RDS instance does not allow traffic on port 3306 from the app instances.
C. The Amazon RDS database instance does not have a public IP address.
D. There is no route defined between the app tier and the database tier in the Amazon VPC.
Answer: B. Security groups block all network traffic by default, so if a group is not correctly configured, it can lead to a timeout error. MySQL credentials, not IAM, control access to the MySQL database. All subnets in an Amazon VPC have routes to all other subnets. Internal traffic within an Amazon VPC does not require public IP addresses.
Q58: What type of block cipher does Amazon S3 offer for server side encryption?
A. RC5
B. Blowfish
C. Triple DES
D. Advanced Encryption Standard
Answer: D Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.
Q59: You have written an application that uses the Elastic Load Balancing service to spread traffic to several web servers. Your users complain that they are sometimes forced to log in again in the middle of using your application, after they have already logged in. This is not behaviour you have designed. What is a possible solution to prevent this from happening?
A. Use instance memory to save session state.
B. Use instance storage to save session state.
C. Use EBS to save session state
D. Use ElastiCache to save session state.
E. Use Glacier to save session state.
Answer: D. You can cache a variety of objects using the service, from the content in persistent data stores (such as Amazon RDS, DynamoDB, or self-managed databases hosted on EC2) to dynamically generated web pages (with Nginx for example), or transient session data that may not require a persistent backing store. You can also use it to implement high-frequency counters to deploy admission control in high volume web applications.
Q60: You are writing to a DynamoDB table and receive the following exception: "ProvisionedThroughputExceededException". However, according to your CloudWatch metrics for the table, you are not exceeding your provisioned throughput. What could be an explanation for this?
A. You haven’t provisioned enough DynamoDB storage instances
B. You’re exceeding your capacity on a particular Range Key
C. You’re exceeding your capacity on a particular Hash Key
D. You’re exceeding your capacity on a particular Sort Key
E. You haven’t configured DynamoDB Auto Scaling triggers
Answer: C. The primary key that uniquely identifies each item in a DynamoDB table can be simple (a partition key only) or composite (a partition key combined with a sort key). Generally speaking, you should design your application for uniform activity across all logical partition keys in the Table and its secondary indexes. You can determine the access patterns that your application requires, and estimate the total read capacity units and write capacity units that each table and secondary Index requires.
As traffic starts to flow, DynamoDB automatically supports your access patterns using the throughput you have provisioned, as long as the traffic against a given partition key does not exceed 3000 read capacity units or 1000 write capacity units.
Q61: Which DynamoDB limits can be raised by contacting AWS support?
A. The number of hash keys per account
B. The maximum storage used per account
C. The number of tables per account
D. The number of local secondary indexes per account
E. The number of provisioned throughput units per account
Answer: C. and E.
For any AWS account, there is an initial limit of 256 tables per region. AWS also places default limits on the throughput you can provision; these apply unless you request a higher amount. To request a service limit increase, see https://aws.amazon.com/support. Reference: Limits in DynamoDB
Q62: AWS CodeBuild allows you to compile your source code, run unit tests, and produce deployment artifacts by:
A. Allowing you to provide an Amazon Machine Image to take these actions within
B. Allowing you to select an Amazon Machine Image and provide a User Data bootstrapping script to prepare an instance to take these actions within
C. Allowing you to provide a container image to take these actions within
D. Allowing you to select from pre-configured environments to take these actions within
Answer: C. and D. You can provide your own custom container image to build your deployment artifacts, or select from pre-configured build environments. You never pass a specific AMI to CodeBuild, though you can provide a custom Docker image which you could effectively 'bootstrap' for the purposes of your build. Reference: AWS CodeBuild FAQs
Q63: Which of the following will not cause a CloudFormation stack deployment to rollback?
A. The template contains invalid JSON syntax
B. An AMI specified in the template exists in a different region than the one in which the stack is being deployed.
C. A subnet specified in the template does not exist
D. The template specifies an instance-store backed AMI and an incompatible EC2 instance type.
Answer: A. Invalid JSON syntax will cause an error message during template validation. Until the syntax is fixed, the template cannot deploy any resources, so there is no need for, or opportunity to, rollback. Reference: AWS CloudFormation FAQs
Q64: Your team is using CodeDeploy to deploy an application which uses secure parameters that are stored in the AWS Systems Manager Parameter Store. Which two options below must be completed so CodeDeploy can deploy the application?
A. Use ssm get-parameters with the --with-decryption option
B. Add permissions using AWS access keys
C. Add permissions using AWS IAM role
D. Use ssm get-parameters with the --with-no-decryption option
Answer: A. and C. The deployment scripts must retrieve the SecureString values decrypted (--with-decryption), and the permissions to read and decrypt them should be granted through an IAM role rather than embedded access keys.
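A sketch of reading a SecureString parameter the way a deployment script would, assuming the instance's IAM role grants ssm:GetParameters and kms:Decrypt; the parameter name is a placeholder:

import boto3

ssm = boto3.client("ssm")

# Equivalent to: aws ssm get-parameters --names ... --with-decryption
resp = ssm.get_parameters(
    Names=["/myapp/prod/db_password"],
    WithDecryption=True,
)
db_password = resp["Parameters"][0]["Value"]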
Q65: A corporate web application is deployed within an Amazon VPC and is connected to the corporate data center via an IPsec VPN. The application must authenticate against the on-premises LDAP server. Once authenticated, logged-in users can only access an S3 keyspace specific to that user. Which of the solutions below meet these requirements? (Choose 2)
A. The application authenticates against LDAP, and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM Role. The application can use the temporary credentials to access the S3 keyspace.
B. Develop an identity broker which authenticates against LDAP, and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 keyspace
C. Develop an identity broker which authenticates against IAM Security Token Service to assume an IAM Role to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the app
D. The application authenticates against LDAP. The application then calls the IAM Security Service to login to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate S3 bucket.
Answer: A. and B. The question clearly says "authenticate against LDAP". Temporary credentials come from STS; federated user credentials come from the identity broker. Reference: IAM FAQs
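A minimal sketch of the pattern in option A, assuming the application has already validated the user against LDAP and looked up a role ARN (all names here are hypothetical):

```python
import boto3

sts = boto3.client("sts")

# Assume the IAM role mapped to the LDAP user to get temporary credentials.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ldap-user-role",  # hypothetical
    RoleSessionName="jdoe-session",
)["Credentials"]

# Use the temporary credentials to access only this user's S3 keyspace.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.list_objects_v2(Bucket="corp-app-data", Prefix="users/jdoe/")  # hypothetical keyspace
```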
Q67: When users are signing in to your application using Cognito, what do you need to do to make sure if the user has compromised credentials, they must enter a new password?
A. Create a user pool in Cognito
B. Block use for “Compromised credential” in the Basic security section
C. Block use for “Compromised credential” in the Advanced security section
D. Use secure remote password
Answer: A. and C. Amazon Cognito can detect if a user’s credentials (user name and password) have been compromised elsewhere. This can happen when users reuse credentials at more than one site, or when they use passwords that are easy to guess.
From the Advanced security page in the Amazon Cognito console, you can choose whether to allow, or block the user if compromised credentials are detected. Blocking requires users to choose another password. Choosing Allow publishes all attempted uses of compromised credentials to Amazon CloudWatch. For more information, see Viewing Advanced Security Metrics.
You can also choose whether Amazon Cognito checks for compromised credentials during sign-in, sign-up, and password changes.
Note: Currently, Amazon Cognito doesn't check for compromised credentials for sign-in operations that use the Secure Remote Password (SRP) flow, which doesn't send the password during sign-in. Sign-ins that use the AdminInitiateAuth API with the ADMIN_NO_SRP_AUTH flow and the InitiateAuth API with the USER_PASSWORD_AUTH flow are checked for compromised credentials.
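As a rough sketch, the same blocking behavior can be configured programmatically via the SetRiskConfiguration API, assuming advanced security is already enabled on the pool (the pool ID is hypothetical):

```python
import boto3

cognito = boto3.client("cognito-idp")

cognito.set_risk_configuration(
    UserPoolId="us-east-1_EXAMPLE",  # hypothetical user pool ID
    CompromisedCredentialsRiskConfiguration={
        "EventFilter": ["SIGN_IN", "SIGN_UP", "PASSWORD_CHANGE"],
        "Actions": {"EventAction": "BLOCK"},  # force the user to pick a new password
    },
)
```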
Q68: You work in a large enterprise that is currently evaluating options to migrate your 27 GB Subversion code base. Which of the following options is the best choice for your organization?
A. AWS CodeHost
B. AWS CodeCommit
C. AWS CodeStart
D. None of these
Answer: D. None of these. While CodeCommit is a good option for Git repositories, it is not able to host Subversion source control.
Q69: You are on a development team and you need to migrate your Spring Application over to AWS. Your team is looking to build, modify, and test new versions of the application. What AWS services could help you migrate your app?
A. Elastic Beanstalk
B. SQS
C. EC2
D. AWS CodeDeploy
Answer: A. C. and D. Amazon EC2 can be used to deploy various applications to your AWS Infrastructure. AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, or serverless Lambda functions.
Q70: You are a developer responsible for managing a high volume API running in your company’s datacenter. You have been asked to implement a similar API, but one that has potentially higher volume. And you must do it in the most cost effective way, using as few services and components as possible. The API stores and fetches data from a key value store. Which services could you utilize in AWS?
A. DynamoDB
B. Lambda
C. API Gateway
D. EC2
Answer: A. and C. NoSQL databases like DynamoDB are designed for key value usage. DynamoDB can also handle incredible volumes and is cost effective. AWS API Gateway makes it easy for developers to create, publish, maintain, monitor, and secure APIs.
Q72: AWS X-Ray was recently implemented inside a service that you work on. Several weeks later, after a new marketing push, that service started seeing a large spike in traffic, and you've been tasked with investigating a few issues that have started coming up. When you review the X-Ray data, you can't find enough information to draw conclusions, so you decide to:
A. Start passing in the X-Amzn-Trace-Id: True HTTP header from your upstream requests
B. Refactor the service to include additional calls to the X-Ray API using an AWS SDK
C. Update the sampling algorithm to increase the sample rate and instrument X-Ray to collect more pertinent information
D. Update your application to use the custom API Gateway TRACE method to send in data
Answer: C. This is a good way to solve the problem – by customizing the sampling so that you can get more relevant information.
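For illustration, a centralized sampling rule can be created with the X-Ray CreateSamplingRule API; this is a hedged sketch, and the rule name, service name, and rates are hypothetical:

```python
import boto3

xray = boto3.client("xray")

# Trace the first 5 requests each second, then sample 50% of the rest,
# but only for the service seeing the traffic spike.
xray.create_sampling_rule(
    SamplingRule={
        "RuleName": "spike-investigation",  # hypothetical
        "Priority": 10,
        "ReservoirSize": 5,
        "FixedRate": 0.5,
        "ServiceName": "photo-api",  # hypothetical
        "ServiceType": "*",
        "Host": "*",
        "HTTPMethod": "*",
        "URLPath": "*",
        "ResourceARN": "*",
        "Version": 1,
    }
)
```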
Q75: Which of the following is the right sequence that gets called in CodeDeploy when you use Lambda hooks in an EC2/On-Premise Deployment?
A. Before Install-AfterInstall-Validate Service-Application Start
B. Before Install-After-Install-Application Stop-Application Start
C. Before Install-Application Stop-Validate Service-Application Start
D. Application Stop-Before Install-After Install-Application Start
Answer: D. In an in-place deployment, including the rollback of an in-place deployment, event hooks are run in the following order: ApplicationStop, DownloadBundle, BeforeInstall, Install, AfterInstall, ApplicationStart, ValidateService. The Start, DownloadBundle, Install, and End events in the deployment cannot be scripted; however, you can edit the 'files' section of the AppSpec file to specify what's installed during the Install event. For in-place deployments, the six hooks related to blocking and allowing traffic apply only if you specify a Classic Load Balancer, Application Load Balancer, or Network Load Balancer from Elastic Load Balancing in the deployment group.
Note: An AWS Lambda hook is one Lambda function specified with a string on a new line after the name of the lifecycle event. Each hook is executed once per deployment. The following are descriptions of the lifecycle events where you can run a hook during an Amazon ECS deployment:
BeforeInstall – Use to run tasks before the replacement task set is created. One target group is associated with the original task set. If an optional test listener is specified, it is associated with the original task set. A rollback is not possible at this point.
AfterInstall – Use to run tasks after the replacement task set is created and one of the target groups is associated with it. If an optional test listener is specified, it is associated with the original task set. The results of a hook function at this lifecycle event can trigger a rollback.
AfterAllowTestTraffic – Use to run tasks after the test listener serves traffic to the replacement task set. The results of a hook function at this point can trigger a rollback.
BeforeAllowTraffic – Use to run tasks after the second target group is associated with the replacement task set, but before traffic is shifted to the replacement task set. The results of a hook function at this lifecycle event can trigger a rollback.
AfterAllowTraffic – Use to run tasks after the second target group serves traffic to the replacement task set. The results of a hook function at this lifecycle event can trigger a rollback.
In an Amazon ECS deployment, the event hooks run in that same order: BeforeInstall, AfterInstall, AfterAllowTestTraffic, BeforeAllowTraffic, AfterAllowTraffic.
Q76: Describe the process of registering a mobile device with SNS push notification service using GCM.
A. Receive Registration ID and token for each mobile device. Then, register the mobile application with Amazon SNS, and pass the GCM token credentials to Amazon SNS
B. Pass device token to SNS to create mobile subscription endpoint for each mobile device, then request the device token from each mobile device. SNS then communicates on your behalf to the GCM service
C. None of these are correct
D. Submit GCM notification credentials to Amazon SNS, then receive the Registration ID for each mobile device. After that, pass the device token to SNS, and SNS then creates a mobile subscription endpoint for each device and communicates with the GCM service on your behalf
Answer: D. When you first register an app and mobile device with a notification service, such as the Apple Push Notification Service (APNS) or Google Cloud Messaging for Android (GCM), device tokens or registration IDs are returned from the notification service. When you add the device tokens or registration IDs to Amazon SNS, they are used (together with the PlatformApplicationArn) to create an endpoint for the app and device. When Amazon SNS creates the endpoint, an EndpointArn is returned. The EndpointArn is how Amazon SNS knows which app and mobile device to send the notification message to.
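A minimal boto3 sketch of the flow in option D, with a hypothetical platform application ARN (created earlier by submitting the GCM credentials to SNS) and device token:

```python
import boto3

sns = boto3.client("sns")

# Pass the device token to SNS to create the mobile subscription endpoint.
endpoint = sns.create_platform_endpoint(
    PlatformApplicationArn="arn:aws:sns:us-east-1:123456789012:app/GCM/my-app",
    Token="device-registration-token-from-gcm",  # hypothetical token
)

# SNS then communicates with GCM on your behalf when you publish.
sns.publish(TargetArn=endpoint["EndpointArn"], Message="Hello from SNS")
```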
Q77: You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site. At some point you find out that other sites have been linking to the photos on your site, causing loss to your business. What is an effective method to mitigate this?
A. Store photos on an EBS volume of the web server.
B. Block the IPs of the offending websites in Security Groups.
C. Remove public read access and use signed URLs with expiry dates.
D. Use CloudFront distributions for static content.
Answer: C. This solves the issue, but it does require you to modify your website. Your website already uses S3, so it doesn't require many changes. See the docs for details: http://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
CloudFront on its own doesn't prevent unauthorized access, and it requires you to add a whole new layer to your stack (which may make sense anyway). You can serve private content with it, but you'd still have to use signed URLs or a similar mechanism. Here are the docs: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
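A minimal sketch of generating a signed (pre-signed) URL with boto3; the bucket, key, and expiry are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# The URL grants temporary GET access and stops working after 5 minutes,
# so third-party sites cannot hotlink the photo indefinitely.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-photo-bucket", "Key": "photos/cat.jpg"},  # hypothetical
    ExpiresIn=300,
)
```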
Q78: How can you control access to the API Gateway in your environment?
A. Cognito User Pools
B. Lambda Authorizers
C. API Methods
D. API Stages
Answer: A. and B. Access to a REST API Using Amazon Cognito User Pools as Authorizer As an alternative to using IAM roles and policies or Lambda authorizers (formerly known as custom authorizers), you can use an Amazon Cognito user pool to control who can access your API in Amazon API Gateway.
To use an Amazon Cognito user pool with your API, you must first create an authorizer of the COGNITO_USER_POOLS type and then configure an API method to use that authorizer. After the API is deployed, the client must first sign the user in to the user pool, obtain an identity or access token for the user, and then call the API method with one of the tokens, which are typically set in the request's Authorization header. The API call succeeds only if the required token is supplied and valid; otherwise, the client isn't authorized to make the call because it did not have credentials that could be authorized.
The identity token is used to authorize API calls based on identity claims of the signed-in user. The access token is used to authorize API calls based on the custom scopes of specified access-protected resources. For more information, see Using Tokens with User Pools and Resource Server and Custom Scopes.
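As a small illustration of the client side (using the third-party requests library; the URL and token here are hypothetical placeholders):

```python
import requests

# JWT obtained after signing the user in to the Cognito user pool.
id_token = "eyJraWQiOi..."  # hypothetical, truncated for illustration

resp = requests.get(
    "https://abc123.execute-api.us-east-1.amazonaws.com/prod/orders",  # hypothetical
    headers={"Authorization": id_token},
)
print(resp.status_code)  # 401/403 without a valid token
```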
Q80: Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?
A. Set the imaging queue MessageRetentionPeriod attribute to 20 seconds.
B. Set the imaging queue ReceiveMessageWaitTimeSeconds attribute to 20 seconds.
C. Set the imaging queue VisibilityTimeout attribute to 20 seconds.
D. Set the DelaySeconds parameter of a message to 20 seconds.
Answer: B. ReceiveMessageWaitTimeSeconds, when set to greater than zero, enables long polling. Long polling allows the Amazon SQS service to wait until a message is available in the queue before sending a response. Short polling continuously polls a queue and can return empty responses even when messages exist. Enabling long polling reduces the number of poll requests and empty responses. Reference: AWS SQS Long Polling
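A minimal boto3 sketch of both ways to enable long polling (the queue URL is hypothetical):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/imaging-queue"  # hypothetical

# Queue-level default: every ReceiveMessage call long-polls for up to 20 s.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
)

# The same effect for a single call, via the WaitTimeSeconds parameter.
messages = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)
```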
Q81: You're using CloudFormation templates to build out staging environments. What section of the CloudFormation template would you edit in order to allow the user to specify the PEM key-name at start time?
A. Resources Section
B. Parameters Section
C. Mappings Section
D. Declaration Section
Answer: B.
The Parameters section of a CloudFormation template allows you to accept user input when launching the stack, and to reference that input as a variable throughout the template. Other examples might include asking the user starting the template to provide domain admin passwords, instance sizes, PEM key names, regions, and other dynamic options.
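For example, a minimal boto3 sketch of passing the PEM key name in at launch time (the stack, template, and key names are hypothetical):

```python
import boto3

cfn = boto3.client("cloudformation")

cfn.create_stack(
    StackName="staging-env",  # hypothetical
    TemplateURL="https://s3.amazonaws.com/my-templates/staging.yaml",  # hypothetical
    Parameters=[
        # Supplies a value for a KeyName entry in the template's Parameters section.
        {"ParameterKey": "KeyName", "ParameterValue": "my-staging-pem-key"},
    ],
)
```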
Q82: You are writing an AWS CloudFormation template and you want to assign values to properties that will not be available until runtime. You know that you can use intrinsic functions to do this but are unsure as to which part of the template they can be used in. Which of the following is correct in describing how you can currently use intrinsic functions in an AWS CloudFormation template?
A. You can use intrinsic functions in any part of a template, except AWSTemplateFormatVersion and Description
B. You can use intrinsic functions in any part of a template.
C. You can use intrinsic functions only in the resource properties part of a template.
D. You can only use intrinsic functions in specific parts of a template. You can use intrinsic functions in resource properties, metadata attributes, and update policy attributes.
Answer: D.
You can use intrinsic functions only in specific parts of a template. Currently, you can use intrinsic functions in resource properties, outputs, metadata attributes, and update policy attributes. You can also use intrinsic functions to conditionally create stack resources.
Definition 1: Amazon S3 or Amazon Simple Storage Service is a “simple storage service” offered by Amazon Web Services that provides object storage through a web service interface. Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to run its global e-commerce network.
Definition 2: Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
S3 is a universal namespace, meaning each S3 bucket you create must have a unique name that is not being used by anyone else in the world.
S3 is object based: i.e., it allows you to upload files.
Files can be from 0 Bytes to 5 TB
What is the maximum length, in bytes, of a DynamoDB range primary key attribute value? The maximum length of a DynamoDB range primary key attribute value is 2048 bytes (NOT 256 bytes).
S3 has unlimited storage.
Files are stored in Buckets.
Read after write consistency for PUTS of new Objects
Eventual Consistency for overwrite PUTS and DELETES (can take some time to propagate)
S3 Standard (durable, immediately available, frequently accesses)
Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering): It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access.
S3 – One Zone-Infrequent Access – S3 One Zone IA: Same as IA; however, data is stored in a single Availability Zone only
S3 – Reduced Redundancy Storage (data that is easily reproducible, such as thumbnails, etc.)
Glacier – Archived data, where you can wait 3-5 hours before accessing
You can have a bucket that has different objects stored in S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA.
The default URL for S3 hosted websites lists the bucket name first followed by s3-website-region.amazonaws.com . Example: enoumen.com.s3-website-us-east-1.amazonaws.com
Core fundamentals of an S3 object
Key (name)
Value (data)
Version (ID)
Metadata
Sub-resources (used to manage bucket-specific configuration): Bucket Policies, ACLs, CORS, Transfer Acceleration
Object-based storage only for files
Not suitable for installing an operating system on.
Successful uploads will generate an HTTP 200 status code.
S3 Security – Summary
By default, all newly created buckets are PRIVATE.
You can set up access control to your buckets using:
Bucket Policies – Applied at the bucket level
Access Control Lists – Applied at an object level.
S3 buckets can be configured to create access logs, which log all requests made to the S3 bucket. These logs can be written to another bucket.
S3 Encryption
Encryption In-Transit (SSL/TLS)
Encryption At Rest:
Server side Encryption (SSE-S3, SSE-KMS, SSE-C)
Client Side Encryption
Remember that we can use a bucket policy to prevent unencrypted files from being uploaded, by creating a policy which only allows requests that include the x-amz-server-side-encryption header.
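A minimal sketch of such a policy applied with boto3 (the bucket name is hypothetical); the Null condition denies any PutObject request that omits the x-amz-server-side-encryption header:

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-secure-bucket/*",  # hypothetical bucket
            # Deny when the encryption header is absent from the request.
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket="my-secure-bucket", Policy=json.dumps(policy)
)
```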
S3 CORS (Cross Origin Resource Sharing): CORS defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.
Used to enable cross origin access for your AWS resources, e.g. S3 hosted website accessing javascript or image files located in another bucket. By default, resources in one bucket cannot access resources located in another. To allow this we need to configure CORS on the bucket being accessed and enable access for the origin (bucket) attempting to access.
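A minimal boto3 sketch of enabling CORS on the bucket being accessed (the bucket name and origin are hypothetical):

```python
import boto3

boto3.client("s3").put_bucket_cors(
    Bucket="my-asset-bucket",  # the bucket being accessed
    CORSConfiguration={
        "CORSRules": [
            {
                # The website origin that is allowed to fetch these resources.
                "AllowedOrigins": ["http://mysite.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```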
For static website access, always use the S3 website URL, not the regular bucket URL (the regular bucket URL has the form https://s3-eu-west-2.amazonaws.com/acloudguru).
S3 CloudFront:
Edge locations are not just READ only – you can WRITE to them too (i.e., put an object onto them).
Objects are cached for the life of the TTL (Time to Live)
You can clear cached objects, but you will be charged. (Invalidation)
S3 Performance optimization – 2 main approaches to Performance Optimization for S3:
GET-Intensive Workloads – Use CloudFront
Mixed Workloads – Avoid sequential key names for your S3 objects. Instead, add a random prefix, like a hex hash, to the key name to prevent multiple objects from being stored on the same partition.
The best way to handle large objects uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts.
You can enable versioning on a bucket, even if that bucket already has objects in it. The already existing objects, though, will show their versions as null. All new objects will have version IDs.
Bucket names cannot start with a . or - character. S3 bucket names can contain both . and - characters, but there can only be one . or one - between labels. E.g., mybucket-com and mybucket.com are valid names, but mybucket--com and mybucket..com are not valid bucket names.
What is the maximum number of S3 buckets allowed per AWS account (by default)? 100
You successfully upload an item to the us-east-1 region. You then immediately make another API call and attempt to read the object. What will happen? All AWS regions now have read-after-write consistency for PUT operations of new objects. Read-after-write consistency allows you to retrieve objects immediately after creation in Amazon S3. Other actions still follow the eventual consistency model (where you will sometimes get stale results if you have recently made changes)
S3 bucket policies require a Principal to be defined. Review the access policy elements in the S3 access policy documentation.
What checksums does Amazon S3 employ to detect data corruption? Amazon S3 uses a combination of Content-MD5 checksums and cyclic redundancy checks (CRCs) to detect data corruption. Amazon S3 performs these checksums on data at rest and repairs any corruption using redundant data. In addition, the service calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.
Q0: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes a longer than expected time to upload the object. You want to improve the performance of the application. Which of the following would you consider?
A. Create multiple threads and upload the objects in the multiple threads
B. Write the items in batches for better performance
C. Use the Multipart upload API
D. Enable versioning on the Bucket
Answer: C. All other options are invalid, since the best way to handle large object uploads to the S3 service is to use the Multipart Upload API. The Multipart Upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: you initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
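In practice, boto3's transfer manager can handle the multipart mechanics (initiate, upload parts, complete) automatically; a minimal sketch with hypothetical file and bucket names:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Switch to multipart above 100 MB and upload parts on 8 parallel threads.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    max_concurrency=8,
)

boto3.client("s3").upload_file(
    "backup-300mb.bin",   # hypothetical local file
    "my-upload-bucket",   # hypothetical bucket
    "backups/backup-300mb.bin",
    Config=config,
)
```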
Q2: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from Amazon S3 buckets?
A. AWS::Serverless::Api
B. AWS::Serverless::Application
C. AWS::Serverless::Layerversion
D. AWS::Serverless::Function
Answer: B. The AWS::Serverless::Application resource in an AWS SAM template is used to embed applications from Amazon S3 buckets. Reference: Declaring Serverless Resources
Q3: A static web site has been hosted on a bucket and is now being accessed by users. One of the web pages' JavaScript sections has been changed to access data which is hosted in another S3 bucket. Now that same web page is no longer loading in the browser. Which of the following can help alleviate the error?
A. Enable versioning for the underlying S3 bucket.
B. Enable Replication so that the objects get replicated to the other bucket
C. Enable CORS for the bucket
D. Change the Bucket policy for the bucket to allow access from the other bucket
Answer – C
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:
Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.
Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.
Q4: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user Images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below
A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.
Answer: C. The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3, and you can then provide access to the objects based on the key values generated via the user ID.
Q5: Both ACLs and Bucket Policies can be used to grant access to S3 buckets. Which of the following statements is true about ACLs and Bucket policies?
A. Bucket Policies are Written in JSON and ACLs are written in XML
B. ACLs can be attached to S3 objects or S3 Buckets
C. Bucket Policies and ACLs are written in JSON
D. Bucket policies are only attached to s3 buckets, ACLs are only attached to s3 objects
Answer: A. and B. Only bucket policies are written in JSON; ACLs are written in XML. And while bucket policies are indeed attached only to S3 buckets, ACLs can be attached to S3 buckets OR S3 objects.
Q6: What are good options to improve S3 performance when you have significantly high numbers of GET requests?
A. Introduce random prefixes to S3 objects
B. Introduce random suffixes to S3 objects
C. Setup CloudFront for S3 objects
D. Migrate commonly used objects to Amazon Glacier
Answer: C. CloudFront caching is an excellent way to avoid putting extra strain on the S3 service and to improve the response times of requests by caching data closer to users at CloudFront edge locations. S3 Transfer Acceleration optimizes the TCP protocol and adds additional intelligence between the client and the S3 bucket, making it a better choice when higher upload throughput is desired. If you have objects that are smaller than 1 GB, or if the data set is less than 1 GB in size, you should consider using Amazon CloudFront's PUT/POST commands for optimal performance. Reference: Amazon S3 Transfer Acceleration
Q7: If an application is storing hourly log files from thousands of instances from a high traffic web site, which naming scheme would give optimal performance on S3?
A. Sequential
B. HH-DD-MM-YYYY-log_instanceID
C. YYYY-MM-DD-HH-log_instanceID
D. instanceID_log-HH-DD-MM-YYYY
E. instanceID_log-YYYY-MM-DD-HH
Answer: A. B. C. D. and E. Amazon S3 now provides increased performance to support at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data, which can save significant processing time for no additional charge. Each S3 prefix can support these request rates, making it simple to increase performance significantly. This S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance. That means you can now use logical or sequential naming patterns in S3 object naming without any performance implications.
Q9: You created three S3 buckets – "mywebsite.com", "downloads.mywebsite.com", and "www.mywebsite.com". You uploaded your files and enabled static website hosting. You specified both of the default documents under the "enable static website hosting" header. You also set the "Make Public" permission for the objects in each of the three buckets. You created the Route 53 aliases for the three buckets. You are going to have your end users test your websites by browsing to http://mywebsite.com/error.html, http://downloads.mywebsite.com/index.html, and http://www.mywebsite.com. What problems will your testers encounter?
A. http://mywebsite.com/error.html will not work because you did not set a value for the error.html file
B. There will be no problems, all three sites should work.
C. http://www.mywebsite.com will not work because the URL does not include a file name at the end of it.
D. http://downloads.mywebsite.com/index.html will not work because the “downloads” prefix is not a supported prefix for S3 websites using Route 53 aliases
Answer: B. It used to be that the only allowed domain prefix when creating Route 53 aliases for S3 static websites was the "www" prefix. However, this is no longer the case; you can now use other subdomains.