This blog is about Clever Questions, Answers, Resources, Links, Discussions, and Tips about jobs and careers at FAANGM companies: Facebook, Apple, Amazon, AWS, Netflix, Google, Microsoft, LinkedIn.
2. How to prepare for FAANGM job interviews
You must be able to write code. It is as simple as that. Prepare for the interview by practicing coding exercises in different categories. You'll solve one or more coding problems focused on CS fundamentals like algorithms, data structures, recursion, and binary trees.
Coding Interview Tips
These tips from FAANGM engineers can help you do your best.
Make sure you understand the question. Read it back to your interviewer. Be sure to ask any clarifying questions.
An interview is a two-way conversation; feel free to be the one to ask questions, too.
Don’t rush. Take some time to consider your approach. For example, on a tree question, you’ll need to choose between an iterative or a recursive approach. It’s OK to first use a working, unoptimized solution that you can iterate on later.
Talk through your thinking and processes out loud. This can feel unnatural; be sure to practice it before the interview.
Test your code by running through your problem with a few test cases and edge cases. Again, talk through your logic out loud when you walk through your test cases.
Think of how your solution could be better, and try to improve it. When you’ve finished, your interviewer will ask you to analyze the complexity of the code in Big O notation.
Walk through your code line by line and assign a complexity to each line.
Remember how to analyze how “good” your solution is: how long does it take for your solution to complete? Watch this video to get familiar with Big O Notation.
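For example, a trivial duplicate check can be annotated line by line like this (an illustrative sketch, not tied to any particular interview question):

# Illustration: assigning a cost to each line of a simple duplicate check.
def contains_duplicate(values):
    seen = set()          # O(1)
    for v in values:      # the loop body runs n times
        if v in seen:     # O(1) average-case set lookup
            return True
        seen.add(v)       # O(1) average-case insert
    return False          # overall: O(n) time, O(n) extra space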
How to Approach Problems During Your Interview
Before you code
• Ask clarifying questions. Talk through the problem and ask follow-up questions to make sure you understand the exact problem you're trying to solve before you jump into building the solution.
• Let the interviewer know if you've seen the problem previously. That will help them understand your context.
• Present multiple potential solutions, if possible. Talk through which solution you're choosing and why.
While you code
• Don't forget to talk! While your tech screen will focus heavily on coding, the engineer you're interviewing with will also be evaluating your thought process. Explaining your decisions and actions as you go will help the interviewer understand your choices.
• Be flexible. Some problems have elegant solutions, and some must be brute forced. If you get stuck, describe your best approach and ask the interviewer whether you should go that route. It's much better to have non-optimal but working code than just an idea with nothing written down.
• Iterate rather than immediately trying to jump to the clever solution. If you can't explain your concept clearly in five minutes, it's probably too complex.
• Consider (and be prepared to talk about):
  • Different algorithms and algorithmic techniques, such as sorting, divide-and-conquer, and recursion.
  • Data structures, particularly those used most often (array, stack/queue, hashset/hashmap/hashtable/dictionary, tree/binary tree, heap, graph, etc.).
  • Memory constraints on the algorithm you're writing and its running time, as expressed in big-O notation.
• Generally, avoid solutions with lots of edge cases or huge if/else if/else blocks. Deciding between iteration and recursion can be an important step.
After you code
• Expect questions. The interviewer may tweak the problem a bit to test your knowledge and see if you can come up with another answer and/or further optimize your solution.
• Take the interviewer's hints to improve your code. If the interviewer makes a suggestion or asks a question, listen fully so you can incorporate any hints they may provide.
• Ask yourself if you would approve your solution as part of your codebase, and explain your answer to your interviewer. Make sure your solution is correct and efficient, that you've taken into account edge cases, and that it clearly reflects the ideas you're trying to express in your code.
Reverse to Make Equal: Given two arrays A and B of length N, determine if there is a way to make A equal to B by reversing any subarrays from array B any number of times. Solution here
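The linked solution isn't reproduced here, but a useful observation is that reversing a length-2 subarray swaps two adjacent elements of B, so repeated reversals can rearrange B into any order; A and B just need to contain the same multiset of values. A minimal Python sketch (function name is mine):

# Sketch: A can be matched by reversals of B iff both contain the same values.
from collections import Counter

def are_they_equal(array_a, array_b):
    return Counter(array_a) == Counter(array_b)

print(are_they_equal([1, 2, 3, 4], [1, 4, 3, 2]))  # True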
Contiguous Subarrays: You are given an array arr of N integers. For each index i, you are required to determine the number of contiguous subarrays that fulfill the following conditions:
The value at index i must be the maximum element in the contiguous subarrays, and these contiguous subarrays must either start from or end on index i. Solution here
Add two long integers (Example: “1001202033933333093737373737” + “934019393939122727099000000”) Solution here
# Python3 program to find the sum of two large numbers
# represented as strings.
def findSum(str1, str2):
    # Make sure str2 is the longer of the two strings.
    if len(str1) > len(str2):
        str1, str2 = str2, str1

    # Collect the result digits here.
    result = []

    # Calculate the length of both strings.
    n1 = len(str1)
    n2 = len(str2)

    # Reverse both strings so we add from the least significant digit.
    str1 = str1[::-1]
    str2 = str2[::-1]

    carry = 0
    for i in range(n1):
        # Do school mathematics: sum of the current digits plus carry.
        digit_sum = int(str1[i]) + int(str2[i]) + carry
        result.append(str(digit_sum % 10))
        # Calculate the carry for the next step.
        carry = digit_sum // 10

    # Add the remaining digits of the larger number.
    for i in range(n1, n2):
        digit_sum = int(str2[i]) + carry
        result.append(str(digit_sum % 10))
        carry = digit_sum // 10

    # Append any final carry, then restore the original digit order.
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(findSum("1001202033933333093737373737", "934019393939122727099000000"))
Rotational Cipher: One simple way to encrypt a string is to “rotate” every alphanumeric character by a certain amount. Rotating a character means replacing it with another character that is a certain number of steps away in normal alphabetic or numerical order. For example, if the string “Zebra-493?” is rotated 3 places, the resulting string is “Cheud-726?”. Every alphabetic character is replaced with the character 3 letters higher (wrapping around from Z to A), and every numeric character is replaced with the character 3 digits higher (wrapping around from 9 to 0). Note that the non-alphanumeric characters remain unchanged. Given a string and a rotation factor, return an encrypted string. Solution here
Matching Pairs: Given two strings s and t of length N, find the maximum number of possible matching pairs in strings s and t after swapping exactly two characters within s. A swap is switching s[i] and s[j], where s[i] and s[j] denote the characters present at the ith and jth index of s, respectively. The matching pairs of the two strings are defined as the number of indices for which s[i] and t[i] are equal. Note: This means you must swap two characters at different indices. Solution here
Minimum Length Substrings: You are given two strings s and t. You can select any substring of string s and rearrange the characters of the selected substring. Determine the minimum length of the substring of s such that string t is a substring of the selected substring. Solution here
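The linked solution isn't reproduced here; one possible approach is the classic minimum-window-substring sliding window, sketched below in Python (the function name and example are mine):

# Sliding-window sketch: smallest window of s containing all characters of t.
from collections import Counter

def min_length_substring(s, t):
    need = Counter(t)        # characters of t still missing from the window
    missing = len(t)
    best = float("inf")
    left = 0
    for right, ch in enumerate(s):
        if need[ch] > 0:
            missing -= 1
        need[ch] -= 1
        while missing == 0:  # window covers t; try to shrink it from the left
            best = min(best, right - left + 1)
            need[s[left]] += 1
            if need[s[left]] > 0:
                missing += 1
            left += 1
    return best if best != float("inf") else -1

print(min_length_substring("dcbefebce", "fd"))  # 5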
Recursion
Encrypted Words: You’ve devised a simple encryption method for alphabetic strings that shuffles the characters in such a way that the resulting string is hard to quickly read, but is easy to convert back into the original string. When you encrypt a string S, you start with an initially-empty resulting string R and append characters to it as follows:
• Append the middle character of S (if S has even length, then we define the middle character as the left-most of the two central characters).
• Append the encrypted version of the substring of S that’s to the left of the middle character (if non-empty).
• Append the encrypted version of the substring of S that’s to the right of the middle character (if non-empty).
For example, to encrypt the string “abc”, we first take “b”, and then append the encrypted version of “a” (which is just “a”) and the encrypted version of “c” (which is just “c”) to get “bac”. If we encrypt “abcxcba” we’ll get “xbacbca”. That is, we take “x”, then append the encrypted version of “abc”, and then append the encrypted version of “cba”.
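A minimal recursive Python sketch of the rule described above (the function name is my own, not from the original post):

def find_encrypted_word(s):
    if not s:
        return ""
    # Left-most of the two central characters when the length is even.
    mid = (len(s) - 1) // 2
    return s[mid] + find_encrypted_word(s[:mid]) + find_encrypted_word(s[mid + 1:])

print(find_encrypted_word("abc"))      # bac
print(find_encrypted_word("abcxcba"))  # xbacbca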
Slow Sums: Suppose we have a list of N numbers, and we repeat the following operation until we’re left with only a single number: choose any two numbers and replace them with their sum. We associate a penalty with each operation equal to the value of the new number, and define the penalty for the entire list as the sum of the penalties of each operation. For example, given the list [1, 2, 3, 4, 5], we could choose 2 and 3 for the first operation, which would transform the list into [1, 5, 4, 5] and incur a penalty of 5. The goal in this problem is to find the worst possible penalty for a given input.
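One common approach (not reproduced from the original post) is to always merge the two largest remaining numbers, which keeps every running total, and therefore every penalty, as large as possible. A Python sketch, assuming the list has at least one element:

# Greedy sketch: always merge the two largest remaining numbers.
def get_total_time(arr):
    arr = sorted(arr, reverse=True)
    running = arr[0]
    penalty = 0
    for x in arr[1:]:
        running += x      # the merged value stays the largest element
        penalty += running
    return penalty

print(get_total_time([1, 2, 3, 4, 5]))  # 50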
Reverse Operations: You are given a singly-linked list that contains N integers. A subpart of the list is a contiguous set of even elements, bordered by either end of the list or by an odd element. For example, if the list is [1, 2, 8, 9, 12, 16], the subparts of the list are [2, 8] and [12, 16]. Then, for each subpart, the order of the elements is reversed. In the example, this would result in the new list [1, 8, 2, 9, 16, 12]. The goal of this question is: given a resulting list, determine the original order of the elements. Solution Here.
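The key observation is that reversing each even run is its own inverse, so applying the same operation to the resulting list restores the original order. A Python sketch with a simple node class (names and driver are mine, not from the original post):

# Sketch: re-apply the "reverse each even run" operation to undo it.
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def reverse_even_subparts(head):
    dummy = Node(0, head)
    prev, curr = dummy, head
    while curr:
        if curr.data % 2 == 0:
            # Collect the maximal run of even-valued nodes.
            run = []
            while curr and curr.data % 2 == 0:
                run.append(curr)
                curr = curr.next
            # Relink the run back-to-front.
            for node in reversed(run):
                prev.next = node
                prev = node
            prev.next = curr
        else:
            prev, curr = curr, curr.next
    return dummy.next

# Undo [1, 8, 2, 9, 16, 12] back to [1, 2, 8, 9, 12, 16].
head = None
for value in reversed([1, 8, 2, 9, 16, 12]):
    head = Node(value, head)
node = reverse_even_subparts(head)
restored = []
while node:
    restored.append(node.data)
    node = node.next
print(restored)  # [1, 2, 8, 9, 12, 16]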
Hash Tables
Pair Sums: Given a list of n integers arr[0..(n-1)], determine the number of different pairs of elements within it which sum to k. If an integer appears in the list multiple times, each copy is considered to be different; that is, two pairs are considered different if one pair includes at least one array index which the other doesn’t, even if they include the same values. Solution here.
Note: These exercises assume you have coding knowledge but not necessarily knowledge of binary trees, sorting algorithms, or related concepts.
• Topic 1 | Arrays & Strings
• A Very Big Sum
• Designer PDF Viewer
• Left Rotation
import java.util.Scanner;

public class Solution {

    /**
     * Sum into a long accumulator so the total does not overflow an int.
     */
    static long aVeryBigSum(int n, long[] ar) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += ar[i];
        }
        return sum;
    }

    /**
     * HackerRank provides this input-reading boilerplate.
     */
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();
        long[] ar = new long[n];
        for (int ar_i = 0; ar_i < n; ar_i++) {
            ar[ar_i] = in.nextLong();
        }
        long result = aVeryBigSum(n, ar);
        System.out.println(result);
        in.close();
    }
}
import java.util.Scanner;

public class Solution {

    static int[] leftRotation(int[] a, int d) {
        // The problem constraints say these inputs won't occur,
        // but guarding against them costs nothing.
        if (d == 0 || a.length == 0) {
            return a;
        }
        int rotation = d % a.length;
        if (rotation == 0) return a;
        // A circular-array (in-place) implementation is possible, but since
        // we don't need to optimize for memory, a second array keeps it simple.
        int[] b = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            b[i] = a[indexHelper(i + rotation, a.length)];
        }
        return b;
    }

    /**
     * Wraps an index that has run past the end of the array. Rotating the
     * array left by d positions means element i of b comes from index
     * (i + rotation) of a, wrapped around the array length.
     */
    private static int indexHelper(int index, int length) {
        if (index >= length) {
            return index - length;
        } else {
            return index;
        }
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();
        int d = in.nextInt();
        int[] a = new int[n];
        for (int a_i = 0; a_i < n; a_i++) {
            a[a_i] = in.nextInt();
        }
        int[] result = leftRotation(a, d);
        for (int i = 0; i < result.length; i++) {
            System.out.print(result[i] + (i != result.length - 1 ? " " : ""));
        }
        System.out.println("");
        in.close();
    }
}
Sparse Array in Java
import java.io.*;
import java.util.*;

public class Solution {

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        final int totalN = Integer.parseInt(scanner.nextLine());
        final Map<String, Integer> mapWords = buildCollectionOfStrings(scanner, totalN);
        final int numberQueries = Integer.parseInt(scanner.nextLine());
        printOccurrenceOfQueries(scanner, numberQueries, mapWords);
        scanner.close();
    }

    /**
     * Builds a map from each input string to the number of times it occurs.
     */
    private static Map<String, Integer> buildCollectionOfStrings(Scanner scanner, int n) {
        final Map<String, Integer> map = new HashMap<>();
        for (int i = 0; i < n; i++) {
            final String line = scanner.nextLine();
            if (map.containsKey(line)) {
                map.put(line, map.get(line) + 1);
            } else {
                map.put(line, 1);
            }
        }
        return map;
    }

    /**
     * For each query, looks up how many times it occurs and prints the value.
     */
    private static void printOccurrenceOfQueries(Scanner scanner, int numberQueries,
                                                 Map<String, Integer> mapWords) {
        for (int i = 0; i < numberQueries; i++) {
            final String line = scanner.nextLine();
            if (mapWords.containsKey(line)) {
                System.out.println(mapWords.get(line));
            } else {
                System.out.println(0);
            }
        }
    }
}
The interviewer will be thinking about how your skills and experience might help their teams.
Help them understand the value you could bring by focusing on these traits and abilities.
• Communication: Are you asking for requirements and clarity when necessary, or are you just diving into the code? Your initial tech screen should be a conversation, so don't forget to ask questions.
• Problem solving: They are evaluating how you comprehend and explain complex ideas. Are you providing the reasoning behind a particular solution? Developing and comparing multiple solutions? Using appropriate data structures? Speaking about space and time complexity? Optimizing your solution?
• Coding: Can you convert solutions to executable code? Is the code organized, and does it capture the right logical structure?
• Verification: Are you considering a reasonable number of test cases or coming up with a good argument for why your code is correct? If your solution has bugs, are you able to walk through your own logic to find them and explain what the code is doing?
- Review the Amazon Leadership Principles to help you understand Amazon's culture and assess whether it's the right environment for you (for Amazon).
- Review the STAR interview technique (all companies).
5- FAANGM Compensation
Legend - Base / Stocks (Total over 4 years) / Sign On
I am not a lawyer, but I believe there are some legal restrictions on what an internship can be (in California, at least). At my previous company, we had discussed whether someone who was out of school could be an intern, and we got an unequivocal verdict: you have to be in school, or about to start school (usually college or graduate school, though high school can work too), in order to be considered for an internship.
I suspect the reason for this is to protect a potential intern from exploitation by companies who would offer temporary jobs to people while labeling them as learning opportunities in order to pay less.
Personally, I feel for people like you; in an ideal world you would have the same opportunities as formally schooled people, and occasionally such opportunities exist. For example, some companies offer residency programs which are a bit similar to internships. In many cases, though, these are designed to help underrepresented groups. Some examples include:
This is a common bug in people's thinking: doing the wrong thing harder in the hope that it works this time. Asking tougher questions is like hitting a key harder when the broken search function in LinkedIn fails. No matter how hard you hit the key, it won't work.
Given the low quality of the LinkedIn platform from a technical perspective, it seems hard to imagine that they hire the best or even the mediocre. But it may be that their interviews are too tough to hire good programmers.
Just because so few people can get a question right does not mean it is a good question, merely that it is tough. I am also a professional software developer and know a bunch of algorithms, but I am wholly ignorant of many others. Thus it is easy to ask me (or anyone else) questions they can't answer. I am (for instance) an expert on sort/merging and partial text matching: really useful for big data, wholly useless for UX.
So if you ask about an obscure algorithm or a brain teaser that is too hard, you don't measure their ability, but their luck in happening to know that algorithm or having seen the answer to the puzzle. More importantly, how likely are they to need it?
This is a tricky problem for interviewers: if you know that a certain class of algorithms is necessary for a job, and you're a competent software dev manager, odds are you've already put them in, and new hires just have to integrate or debug them. Ah, so I've now given away one of my interview techniques. We both know that debugging takes more of our development effort than writing code. Quicksort (for instance) is conceptually quite easy, being taught to 16-year-old Brits or undergraduate Americans, but it turns out that some implementations can go quadratic in space and/or time complexity, that comparison of floating point numbers is a sometimes thing, and of course some idiot may have put = where he should have put ==, or >= when > is appropriate.
Resolving that is a better test, not complete of course, but better.
For instance, I know more about integrating C++ with Excel than 99.9% of programmers. I can ask you why there are a whole bunch of INT 3's lying around a disassembly of Excel; yes, I have disassembled Excel, and yes, my team was at one point asked to debug the damned thing for Microsoft. I can ask about LSTRs, and why SafeArrays are in fact really dangerous. I can even ask you how to template them. That's not easy, trust me on this.
Are you impressed by my knowledge of this?
I sincerely hope not.
Do you think it would help me build a competent search engine, something that the coders at LinkedIn are simply unable to do?
No.
I also know a whole bunch of numerical methods for solving PDEs. Do you even know what a PDE is? This can be really hard as well. Do you care if you can't do this? Again, not relevant to fixing the formless hell of LinkedIn code. Fire up the developer mode of your browser and see what it thinks of LinkedIn's HTML; I've never seen a debugger actually vomit in my face before.
A good interview is a measure not just of ability but of the precise set of skills you bring to the team. A good interviewer is not looking for the best, but the best fit.
Sadly, some interviewers see it as an ego thing that they can ask hard questions. So can I, but it's not my job. My job is identifying those who can deliver the most; hard questions that you can't answer are far less illuminating than questions you struggle with, because I get to see the quality of your thinking in terms of complexity, insight, working with incomplete and misleading information, and determination not to give up because it is going badly.
Do you, as a candidate, ask questions well? If I say something wrong, do you a) notice, b) have the soft skills to put it to me politely, c) have the courage to do so?
Courage is an under-rated attribute that superior programmers have and failed projects have too little of.
Yes, some people have too much courage, which is a deeper point: for many things there is an optimum amount and a right mix for your team at this time. I once had to make real-time irreversible changes to a really important database whilst something very very bad happened on TV news screens above my head. Too much or too little bravery would have had consequences. Most coding ain't that dramatic, but the superior programmer has judgement: when to try the cool new language/framework feature in production code, when to optimise for speed, or for space, and when for never ever crashing even when memory is corrupted by external effects. When does portability matter or not? Some code will only be used once, and we both know some of it will literally never be executed live; do we obsess about its quality in terms of performance and maintainability?
The right answer is it depends, and that is a lot more important to hiring the best than curious problems in number theory or O(N Log(N)) for very specific code paths that rarely execute.
Also programming is a marathon, not a sprint, or perhaps more like a long distance obstacle course, stretches of plodding along with occasional walls. Writing a better search engine than LinkedIn “programmers” manage is a wall, I know this because my (then) 15 year old son took several weeks, at 15, he was only 2 or 3 times better than the best at LinkedIn, but he’s 18 now and as a professional grade programmer, him working for LinkedIn would be like putting the head of the Vulcan Science Academy in among a room of Gwyneth Paltrow clones.
And that ultimately may be the problem.
Good people have more options and if you have a bad recruitment process then they often will reject your offer. We spend more of our lives with the people we work with than sleep with and if at interview management is seen as pompous or arrogant then they won’t get the best people.
There's now a Careers Advice space on Quora; you might find it interesting.
This blog explores Clever Questions and Answers about Electric Vehicles, Autonomous Cars, Self driving cars, Tesla, Volt, Wayne, Nissan Leaf, Electric Bikes, e-bikes, i-cars, smart cars, Cyber Trucks, etc…
BNEF projects that electric vehicles (EVs) will hit 10% of global passenger vehicle sales in 2025, with that number rising to 28% in 2030 and 58% in 2040. According to the study, EVs currently make up 3% of global car sales.
The 6 Levels of Autonomous Vehicles (Levels 0-5)
Level 0 – No Automation. This describes your everyday car.
Level 1 – Driver Assistance. Here we can find your adaptive cruise control and lane keep assist to help with driving fatigue.
Level 2 – Partial Automation.
Level 3 – Conditional Automation.
Level 4 – High Automation.
Level 5 – Full Automation.
ASTON MARTIN RAPIDE E
Secret agent James Bond's favorite British automaker will take the wraps off its first battery-powered ride by year's end, and it's a true exotic sports car. Based on the low-slung Rapide coupe, it will be limited to a production run of 155 units worldwide, with a sky-high sticker price. It's expected to run for around 200 miles on a charge and register a 0-60 mph time of less than four seconds.
BOLLINGER B1
Fledgling EV maker Bollinger Motors is ramping up to launch its first model, the B1 for 2020. It’s a decidedly boxy SUV and it looks a lot like a classic Land Rover. It’s built on an aluminum frame and comes with a dual-motor electric all-wheel-drive system. The B1 promises a 200-mile range with 613 horsepower and a strong 668 pound-feet of torque, and is said to tow as much as 7,500 pounds.
Now entering a new class of strength, speed and versatility—only possible with an all-electric design. The powerful drivetrain and low center of gravity provide extraordinary traction control and torque—enabling acceleration from 0-60 mph in as little as 2.9 seconds and up to 500 miles of range.
Cybertruck is a vehicle that has better utility than an F-150, while beating out a Porsche 911 in performance.
Cybertruck is designed with 30X Stainless Steel, which is also used on Starship. Cybertruck uses this material for maximum durability, function, and design.
Cybertruck is a beautiful platform for a wildly futuristic design that delivers insane performance, on-road or off-road.
Cybertruck has a beautiful full-width "unibrow" LED bar for a headlight, evidence of form and function packed into one package. With this headlight, maximum visibility is always present, whether at night or by day, with its Always-On LEDs.
With its ability to sprint from 0-60 in under 2.9s and be virtually bulletproof, Cybertruck is the best platform for an advanced, beautiful, technology-reliant future.
Absolutely! But I’m looking forward to an even better one — Cybertruck. I fell in love with that beast the moment I saw it, and put down a reservation as quickly as possible. I find myself near the head of a long, long waiting list for this revolutionary vehicle. I hope to take delivery of a tri-motor within the first 5000 off the assembly line.
Even before seeing it up close and in person, I know it will be the best vehicle I’ll ever purchase. It goes beyond what I love about my Model 3 AWD. I don’t think of it as a pickup truck. I would never buy a traditional pickup for as little as I would use it as such. Cybertruck is an all-in-one vehicle. It’s a pickup truck, sure, but it’s also an SUV that seats six and has 100 cu ft of secure, weather protected storage. I plan to use it for wilderness camping in hard to get to places by virtue of its exemplary off-roading capabilities.
I will happily take my Cybertruck on cross country road trips. The self-driving capabilities of Tesla vehicles make long distance cruising an enjoyable experience, devoid of the typical driving fatigue that I’ve always endured traveling in other cars I’ve owned, even my Class B motor-home, which I recently sold. I’m looking forward to spending time in the back country of Alaska with the grizzlies and the moose (safely tucked inside CT, of course).
Cybertruck will be the most durable vehicle I’ve ever owned, as well. That 3mm cold rolled stainless steel exoskeleton is dent proof, bullet proof, and rust proof. The windows are almost impossible to break, and the rolling tonneau cover is strong enough to support the weight of a 200+ lb man.
Cybertruck comes without paint of any kind which is great for squeezing through brush on abandoned logging roads. No need to hold back to avoid scratching the finish. I may have the truck painted, though, just to give it a personalized touch.
My Cybertruck won’t be left unused. It will be my daily driver. Sure, it’s large, but it won’t be like driving one of those behemoths from Detroit. It’s fast and responsive. The air suspension can be lowered to make it easier to get in and out of, improve handling, and reduce aerodynamic drag.
Add to that its 3500 lbs load capacity, its 14,000 lb towing capacity, 500+ mile range, and fast charging at the ever-expanding Tesla Supercharger network, and it’s easy to understand how this vehicle will be the best, and probably the last, vehicle I’ll ever own. Unless I deploy it to the Tesla Network as a robotaxi in a couple of years. Now there’s a money making idea!
KIA SOUL EV
Kia is redesigning its funky/boxy compact full-electric hatchback for 2020 with fresh styling and myriad improvements. A new 64 kWh liquid-cooled lithium ion polymer battery pack should deliver well in excess of 200 miles on a charge. Power will be bumped up to 200 horsepower with 291 pound-feet of torque. It will come with four drive modes and four levels of regenerative braking, including a setting for one-pedal driving.
MERCEDES-BENZ EQC
The EQC is the first in what will be a series of luxury EVs coming from Mercedes-Benz. It’s a boldly styled SUV with two electric motors that combine for an output of 402 horsepower with 564 pound-feet of torque. All-wheel-drive will be standard, along with a long list of convenience, connectivity, and safety features. In Europe it’s rated to run for 279 miles on a full charge, though that number may be somewhat lower when evaluated by U.S. standards.
MINI ELECTRIC
BMW's Mini brand is developing a new full-electric version of the comely Cooper coupe, likely for later in 2020. Details, however, remain sketchy, and only scant visual tidbits are available. Reports say it will share technology with the BMW i3, and could run for as many as 200 miles on a full charge. Expect it to deliver Mini's famed go-kart-like handling.
POLESTAR 2
Volvo is launching a new high-tech sub-brand this year called Polestar. While its first model, the Polestar 1, will be a plug-in hybrid, the Polestar 2 is a sleekly cast full-electric luxury four-door hatchback. Intended to compete with the Tesla Model 3, it is targeting a range of 275 miles on a charge, with its two electric motors expected to put a combined 485 pound-feet of torque to the pavement. All-wheel drive will come standard.
PORSCHE TAYCAN
Porsche’s first full-electric model will be an ultra-exotic battery-powered four-door sports car. It’s said to leap off the line and reach 62 mph (100 km/h) in a sudden 3.5 seconds. The automaker claims around 300 miles of range with a full battery, with the ability to recharge about 60 miles worth of energy in just four minutes.
RIVIAN R1T
Yet another startup EV builder, Rivian plans to introduce a futuristic-looking pickup truck for 2020 to be built in the former Mitsubishi factory in Normal, IL. No mere poseur, the R1T is said to deliver a 400-mile range, with its quad-motor system enabling off-road adventures and a 0-60 mph time of just three seconds on paved roads.
TESLA MODEL Y
Expected sometime during 2020, assuming the automaker incurs no production delays or other corporate calamities, the Tesla Model Y will essentially be a crossover SUV version of the Model 3 sedan. Smaller and less expensive than the Model X, it’s sure to become the company’s best selling model. It will initially come in performance, long-range, and dual motor all-wheel drive variants with specs similar to the Model 3.
TESLA ROADSTER
Tesla’s original Roadster was its first model and it broke new ground in terms of performance and operating range. It’s coming back for 2020 with a freshly curvy profile and uncanny performance. Tesla claims it will fly to 60 mph in a rocket-like 1.9 seconds, reach a felonious top speed of 250 mph, and run for a seemingly impossible 620 miles with a full charge.
When going at a constant speed downhill, the car uses regenerative braking to maintain that speed without going faster and faster.
In effect, the electric motor(s) in the car are turned into generators and charge up the battery as they go.
Here is an actual screenshot from my Tesla Model 3, taken shortly after driving over the Franklin Mountains in El Paso. It's a graph of the energy consumed per mile driven over the last 30 minutes (kinda like the "mpg" number for a gasoline car):
I bought a Model 3 Standard Range Plus with Full-Self-Driving – a little over a year ago.
I went online – did all of the options selection, all of the financing, taxing and insuring in about 40 minutes – and without ever speaking to an actual human. The deposit money was taken from my credit card.
The car was on a 14-day delivery back then, but it took a little longer, more like three weeks.
During which time, I had to put up with just a model Model 3:
The full-sized model 3 was delivered on a large covered car transporter – theoretically to my front door – but in fact the driver phoned me to say he couldn’t get through the twisty streets in my neighborhood in his gigantic truck – so we met him in a nearby street. He offloaded the car – gave me time to inspect it – handed me the “credit card” car keys – and that was that. So I drove it the last 100 yards home.
This is by FAR the most pleasant way to buy a car.
SNAFUS AND MINOR GLITCHES:
There were some SNAFUs and complications, mostly because I live in Texas, where it's illegal for a car company to sell direct to a customer. This is true in about 50% of US states.
So what happened was that the car was sold to me in Arizona – where it’s legal. The car was delivered by Tesla to their distribution center in Phoenix Arizona – where I actually purchased it. Then Tesla did the work to re-title the car in Texas at their expense.
The car was not shipped in a Tesla transporter but by some 3rd party (whom Tesla also paid).
This complicated little legal dance would not have been noticeable to me EXCEPT that there was some confusion about my insurance. They couldn’t re-title the car to me without it being insured – and I couldn’t insure it without a VIN – and somewhere along the line someone dropped the ball.
So the car arrived with no temporary licence plates and I had to go back to Tesla and have them do that after the insurance SNAFU got ironed out. I’m still not sure whether it was their fault, my fault, my insurance company’s fault – or the DMV here in El Paso’s fault…I’m betting the latter because we’re pretty sure ours was the first Tesla ever sold here.
Although companies like Uber and Tesla are not very successful in that aspect, fully self-driving cars will have to be able to avoid collisions by all means. They simply will have to be designed in a way that they do not crash into anything.
Okay, so what will a self-driving car do when another driver deliberately cuts into its lane? What if some people make a game out of throwing garbage cans at self-driving cars? What if some unemployed taxicab drivers try to make a ride in a self-driving taxicab as unpleasant as possible?
This is unlikely? Tell me why people spend extra money in order to make their trucks pollute the air as much as possible:
CyberTruck will cost half the competition, and here’s why the math says it works
You are given an array arr of N integers. For each index i, you are required to determine the number of contiguous subarrays that fulfill the following conditions:
The value at index i must be the maximum element in the contiguous subarrays, and
These contiguous subarrays must either start from or end on index i.
Signature
int[] countSubarrays(int[] arr)
Input
Array arr is a non-empty list of unique integers that range between 1 and 1,000,000,000
Size N is between 1 and 1,000,000
Output
An array where each index i contains an integer denoting the number of contiguous subarrays for index i
Example
arr = [3, 4, 1, 6, 2]
output = [1, 3, 1, 5, 1]
Explanation:
For index 0 – [3] is the only contiguous subarray that starts (or ends) with 3, and the maximum value in this subarray is 3.
For index 1 – [4], [3, 4], [4, 1]
For index 2 – [1]
For index 3 – [6], [6, 2], [1, 6], [4, 1, 6], [3, 4, 1, 6]
For index 4 – [2]
So, the answer for the above input is [1, 3, 1, 5, 1]
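One standard way to compute this (not necessarily the solution behind the original link) uses a monotonic stack to count, for each index i, the subarrays ending at i and the subarrays starting at i in which arr[i] is the maximum:

# Monotonic-stack sketch for the Contiguous Subarrays problem above.
def count_subarrays(arr):
    n = len(arr)
    left = [0] * n   # subarrays ending at i in which arr[i] is the maximum
    right = [0] * n  # subarrays starting at i in which arr[i] is the maximum

    stack = []
    for i in range(n):
        while stack and arr[stack[-1]] < arr[i]:
            stack.pop()
        left[i] = i - (stack[-1] if stack else -1)
        stack.append(i)

    stack = []
    for i in range(n - 1, -1, -1):
        while stack and arr[stack[-1]] < arr[i]:
            stack.pop()
        right[i] = (stack[-1] if stack else n) - i
        stack.append(i)

    # The single-element subarray [arr[i]] is counted once in each direction.
    return [left[i] + right[i] - 1 for i in range(n)]

print(count_subarrays([3, 4, 1, 6, 2]))  # [1, 3, 1, 5, 1]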
Rotational Cipher: One simple way to encrypt a string is to “rotate” every alphanumeric character by a certain amount. Rotating a character means replacing it with another character that is a certain number of steps away in normal alphabetic or numerical order. For example, if the string “Zebra-493?” is rotated 3 places, the resulting string is “Cheud-726?”. Every alphabetic character is replaced with the character 3 letters higher (wrapping around from Z to A), and every numeric character replaced with the character 3 digits higher (wrapping around from 9 to 0). Note that the non-alphanumeric characters remain unchanged. Given a string and a rotation factor, return an encrypted string.
Signature
string rotationalCipher(string input, int rotationFactor)
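A possible Python implementation of the cipher described above (naming adapted from the given signature):

# Rotate letters within the alphabet and digits within 0-9;
# leave all other characters unchanged.
def rotational_cipher(text, rotation_factor):
    result = []
    for ch in text:
        if ch.isupper():
            result.append(chr((ord(ch) - ord('A') + rotation_factor) % 26 + ord('A')))
        elif ch.islower():
            result.append(chr((ord(ch) - ord('a') + rotation_factor) % 26 + ord('a')))
        elif ch.isdigit():
            result.append(chr((ord(ch) - ord('0') + rotation_factor) % 10 + ord('0')))
        else:
            result.append(ch)
    return "".join(result)

print(rotational_cipher("Zebra-493?", 3))  # Cheud-726?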
Given a list of n integers arr[0..(n-1)], determine the number of different pairs of elements within it which sum to k. If an integer appears in the list multiple times, each copy is considered to be different; that is, two pairs are considered different if one pair includes at least one array index which the other doesn’t, even if they include the same values.
Signature
int numberOfWays(int[] arr, int k)
Input
n is in the range [1, 100,000]. Each value arr[i] is in the range [1, 1,000,000,000]. k is in the range [1, 1,000,000,000].
Output
Return the number of different pairs of elements which sum to k.
Example 1
n = 5
k = 6
arr = [1, 2, 3, 4, 3]
output = 2
The valid pairs are 2+4 and 3+3.
Example 2
n = 5
k = 6
arr = [1, 5, 3, 3, 3]
output = 4
There’s one valid pair 1+5, and three different valid pairs 3+3 (the 3rd and 4th elements, 3rd and 5th elements, and 4th and 5th elements).
Solution using Python:
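The original code block appears to have been lost in formatting; below is a sketch, based on the complexity notes and execution trace that follow, which counts value occurrences in a hash and then counts pairs:

# Hash-based sketch reconstructed from the notes and trace below.
def number_of_ways(arr, k):
    counts = {}
    for x in arr:
        counts[x] = counts.get(x, 0) + 1
    print(arr, counts)

    total = 0
    for value, count in counts.items():
        complement = k - value
        if complement == value:
            # Pairs built from two copies of the same value: C(count, 2).
            total += count * (count - 1) / 2
        elif complement in counts and complement < value:
            # Count each cross pair of values exactly once.
            total += count * counts[complement]
    print("Total Pairs is:", total)
    return int(total)

number_of_ways([1, 2, 3, 4, 3], 6)  # Total Pairs is: 2.0
number_of_ways([1, 5, 3, 3, 3], 6)  # Total Pairs is: 4.0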
Complexity:
Time: O(n)
Space: O(n) for the input array and the hash of counts (who cares about space, really?)
Execution:
Case 1:
[1, 2, 3, 4, 3] {1: 1, 2: 1, 3: 2, 4: 1} Total Pairs is: 2.0
Microsoft Certified: Azure Administrator Associate Average Salary — $125,993
Candidates for the Azure Administrator Associate certification should have subject matter expertise implementing, managing, and monitoring an organization’s Microsoft Azure environment.
Responsibilities for this role include implementing, managing, and monitoring identity, governance, storage, compute, and virtual networks in a cloud environment, plus provisioning, sizing, monitoring, and adjusting resources when needed.
AZ-104 Microsoft Azure Administrator Exam Breakdown:
Manage Azure identities and governance (15-20%)
• Manage Azure AD objects
• Manage role-based access control (RBAC)
• Manage subscriptions and governance
Implement and manage storage (10-15%)
• Manage storage accounts
• Manage data in Azure Storage
• Configure Azure files and Azure blob storage
Deploy and manage Azure compute resources (25-30%)
• Configure VMs for high availability and scalability
• Automate deployment and configuration of VMs
• Create and configure VMs
• Create and configure containers
• Create and configure Web Apps
Configure and manage virtual networking (30-35%)
• Implement and manage virtual networking
• Configure name resolution
• Secure access to virtual networks
• Configure load balancing
• Monitor and troubleshoot virtual networking
• Integrate an on-premises network with an Azure virtual network
Monitor and back up Azure resources (10-15%)
• Monitor resources by using Azure Monitor
• Implement backup and recovery
Below are the top 50 Microsoft Azure Administrator Certification Questions and Answers Dumps.
Question 1: In our subscription, we have four different resource groups. They are RG1, RG2, RG3, RG4. RG2 has a Read-only lock at the resource group scope. RG3 has a Delete lock at the resource group scope. RG1 and RG4 do not have locks. We need to determine how we could move resources between resource groups during the lifecycle of these resources. Assume all resources provisioned support moving between resource groups regardless of region. Which of the following statements are plausible?
A. We can move resources from RG1 to RG4.
B. We can move resources between any of these resource groups.
Notes: We can effectively move resources from RG1 to RG4 because RG1 does not have a lock. We can move resources from RG4 to RG3 because RG4 does not have a lock. Also, while RG3 does have a Delete lock, this does not stop resources from being moved into this resource group.
Question 2: Your company has recently added a few new users to your Azure Active Directory. You have already added them to an active directory group, and now you have asked them to add their devices to the domain. When they add their devices, you have to ensure they are prompted to use a mobile phone to verify their identity. How do you configure this?
A. Require multi-factor authentication to join devices
Notes: This setting in Azure Active Directory will require multi-factor authentication whenever a device is joined, under any conditions.
Question 3: Under your Azure Subscription, you are trying to identify VMs that are underutilized in order to shut down all VMs with CPU utilization under 5%. Which blade should you use?
Notes: Advisor helps you follow best practices to optimize your Azure deployments. It analyzes your resource configuration and usage telemetry and then recommends solutions that can help you improve the cost-effectiveness, performance, high availability, and security of your Azure resources.
Question 4: You have just purchased the domain name arseemagroup.com from a third-party registrar. Using your Azure Active Directory domain, you'd like to create new users with the suffix @arseemagroup.com. Which three things must you do?
A. Access the custom domain names blade from Azure AD
B. Create an MX or TXT record in arseemagroup.com DNS
C. Verify that you own the domain name
D. Access the App registrations blade from Azure AD
ANSWER4:
A B and C
Notes: In order to add the domain “arseemagroup.com” to Azure AD, you must add the domain from the custom domain names blade.
When you add your custom domain to Azure AD, you must create an MX or TXT record with a destination address (provided) in order to verify that the domain does indeed belong to you.
When you add your custom domain to Azure AD, you must verify that this domain belongs to you by going through a verification process. Azure AD will provide the verification information.
Question 5: You have two subscriptions named Subscription1 and Subscription2. You are logged into Azure using Azure PowerShell from Computer1. How can you identify which subscription you are currently viewing and then switch from one subscription to the other for the current session at Computer1?
A. Set-AzContext -SubscriptionName
B. Get-AzContext
C. Select-AzContext
D. AzShow-Context
ANSWER5:
A and B
Notes: In Az PowerShell 3.7.0, Set-AzContext sets the tenant, subscription, and environment for cmdlets to use in the current session.
In Az PowerShell 3.7.0, 'Get-AzContext' gets the metadata used to authenticate Azure Resource Manager requests.
Question 6: You have two subscriptions named Subscription1 and Subscription2. You are currently managing resources in Subscription1 from Computer1 that has the Azure CLI installed. You need to switch to Subscription2. Which command should you run?
A. az set account --subscription "Subscription2"
B. az account set --subscription "Subscription2"
C. az subscription set "Subscription2"
D. Select-AzureSubscription -SubscriptionName "Subscription2"
ANSWER6:
B
Notes: You are accessing Azure from Computer1 with the Azure CLI installed; therefore, this command is the correct command.
Question 7: You work at the IT help desk for Consilium Corporation. You have been getting an influx of calls into the help desk about resetting users’ passwords. They keep reporting that they can’t seem to figure out how to reset their password in order to gain access to their Customer Relationship Management (CRM) software. What do you do?
A. Ensure that the users who are having problems are within the correct AD group
B. Make sure you have Azure Active Directory Free
C. Make sure they have their verification device (mobile app or access to email)
D. Verify that self-service password reset is enabled in Azure Active Directory
ANSWER7:
A C and D
Notes: Self-service password may not apply to those not in a specific Active Directory group. If the user is not in the group, they will not be able to reset their password.
In order to reset their password, the user will have to verify their identity using a mobile phone, mobile app, office phone or email.
Self-service password reset is an optional feature in Azure Active Directory, which may not apply to any and all users in the organization.
Question 8: In this scenario, we are working for Cloud Chase Support. We are the active administrator, and we have been tasked with ensuring we do not incur costs for virtual machine resources in either our Prod-Subscription or our Dev-Subscription. We have a CloudChase management group where both subscriptions are nested. We decide to use Azure Policy to enforce compliance on virtual machines. Our policy definition states that virtual machines are not an allowed resource type at the scope of our CloudChase management group. There are some existing virtual machines in our Prod-Subscription at the time this policy is created. After the enforcement of our new policy, which of the statements below is true?
A. We cannot create virtual machines in any subscription under the scope of our management group and our existing virtual machines will be deallocated.
B. Virtual machines can be created in our Prod-Subscription if they are compliant.
C. Virtual machines can be created in our Dev-Subscription.
D. We cannot create virtual machines in any subscription under the scope of our management group.
ANSWER8:
D
Notes: We created a policy that has a definition that defines that virtual machines are not a supported resource type at the scope of our management group. Any subscription under the scope of this management group will not support the provisioning of virtual machine resources.
Question 9: You recently signed up for Azure Active Directory Premium and need users to be able to reset their passwords if they are unable to login. What should you configure in Azure Active Directory?
A. Set “block sign-in” to off when creating the user
B. User password reset
C. User password change
D. Add user to sign-in group in Azure AD
ANSWER9:
B
Notes: With the password reset capability, the user will be able to click “forgot password” when trying to log in to the portal and reset their password on their own.
Question 10: You have an Azure Pay-as-you-go Subscription named Subscription1. You have some concerns about cost for Subscription1, and you would like to spend less than $100.00 US per month on all resources in this subscription. If you spend more than $90.00 US, you would like to get an alert in the form of a text message. What should you do?
A. Shutdown VMs when you are not using them
B. Create an alert in Azure Monitor
C. Create a budget alert condition tied to an action group
D. Create a budget in the subscriptions blade
ANSWER10:
C
Notes: An alert condition can be created when setting your budget. It is not always required that you create an action group; however, in this case, where we want to be notified via SMS (text message), we must tie an action group to our budget alert.
Question 11: We want to provide an Azure AD B2B guest user the ability to manage all resources inside of our DevRG resource group, and nothing more. What role would we assign to the user to accomplish this goal? Assume we are assigning the role at the DevRG scope.
A. User Access Administrator
B. Owner
C. Contributor
D. Global Admin
ANSWER11:
C
Notes: This role will allow us to give this guest user the ability to manage all resources inside of the DevRG resource group, and nothing more, such as managing role assignments. This is exactly what we need for our scenario. When assigning permissions, we need to think about the principle of least privilege.
Question 12: You have just created a General-purpose V2 storage account in Azure. From a VM located in your on-prem environment, you've logged into your Azure subscription using the Connect-AzAccount cmdlet from the PowerShell command line. Next, you need to retrieve the key in order to access your storage account. Which PowerShell cmdlet will you use to retrieve the access key?
A. Get-AzStorageAccount
B. Get-AzStorageContainerKey
C. Get-AzStorageContainerStoredAccessPolicy
D. Get-AzStorageAccountKey
ANSWER12:
D
Notes: The Get-AzStorageAccountKey cmdlet gets the access keys for an Azure Storage account.
Question 13: You have been directed to copy all data from one storage account to another using the AzCopy tool. You need to report which storage services you can copy. Which of those services would it be?
A. Only Azure File Shares
B. Azure Queues and Blobs
C. Azure Blob and File Shares
D. Azure Table and File Shares
ANSWER13:
C
Notes: AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account.
Question 14: You have a general purpose v1 storage account named consiliumstore that has a private container named container2. You need to allow read access to the data inside container2, but only within a 14-day window. How do you accomplish this using the Azure Portal?
A. Upgrade the storage account to general purpose v2
B. Create a shared access signature
C. Create a service SAS
D. Create a stored access policy
ANSWER14:
B and D
Notes: A Shared Access Signature (SAS) allows you to have granular control over your storage account, including access to only certain services (i.e. Azure Blobs) and permitting only read, write, delete, list, add, or create access. A Stored Access Policy allows granular control over a single storage container using a Shared Access Signature (SAS).
Question 15: You have an existing Microsoft Enterprise Agreement (EA) Subscription. You need to ship 34TB of data from an on-premise Windows 2016 server to your Azure storage account. You need to ensure that the data transfer has zero impact on the network, preserves your existing drives and is the fastest and most secure method. What should be your first step to starting the import job?
A. Open a ticket with Microsoft Support
B. Order an Azure Databox via the Azure Portal
C. Start an Import Job via the Azure Portal
D. Prepare your hard drives using the WAImportExport tool
ANSWER15:
B
Notes: This option would be the best, as Azure Data Box supports Windows 2016 servers, and is secure and reliable.
Question 16: You have data in an AWS S3 Bucket named myS3Bucket and you need to copy all of its contents to a container named container1 in an Azure storage account named companydata. Which command would be most efficient use of getting the data from the S3 bucket to the Azure storage container?
A. azcopy copy 'https://s3.amazonaws.com/myS3Bucket' 'https://companydata.blob.core.windows.net/container1' --recursive=true
B. aws s3 cp s3://mybucket/test.txt https://companydata.blob.core.windows.net/container1
C. azcopy blob copy 'https://s3.amazonaws.com/myS3Bucket' 'https://companydata.blob.core.windows.net/container1'
D. azcopy copy sync 'https://s3.amazonaws.com/myS3Bucket' 'https://companydata.blob.core.windows.net/container1'
ANSWER16:
A
Notes: The AzCopy tool can copy directly from an AWS S3 bucket to an Azure Storage Account. source
Question 17: You have the following Azure Storage Accounts in your Subscription:
stor1 (BlockBlobStorage)
stor2 (FileStorage)
stor3 (StorageV2)
Which of these storage accounts can be converted to Read-Access Geo-Redundant Storage (RA-GRS) based on their storage account kind? Please select the most appropriate answer.
Question 18: You create an Azure storage account named companystore with a publicly accessible container named container1. You upload a file to container1 named pic1.png. What will be the URL in order to access this blob?
Notes: The URL of the blob, by default, will be the storage account name, followed by blob.core.windows.net, the container name, then the name of the blob: https://companystore.blob.core.windows.net/container1/pic1.png
Question 19: You have an Azure subscription named Subscription1. In Subscription1, you have an Azure virtual machine named VM1. Attached to VM1 are two network interface cards. You require a third network interface card with a network bandwidth above 1000 Mbps for your storage area network. What should you do?
A. Create an additional VM in the same subnet and connect to VM1 over the LAN
B. Create a new subnet with a sufficient number of available IP addresses
C. Create a new storage account to store data for VM1
Question 20: You are trying to create a new Azure Kubernetes Service (AKS) cluster from your local workstation. The AKS cluster must contain three nodes and ensure access to the worker nodes in order to troubleshoot the kubelet. You have authenticated to Azure from your local workstation with the Azure CLI. What command will you use to create an AKS cluster named AKS1 with the necessary components inside of the resource group named RG1?
A. az aks create -g RG1 -n AKS1 --generate-ssh-keys --node-count 3
B. az kubernetes create --name AKS1 --group RG1 --nodes 3 --generate-keys
C. az aks create --name AKS1 --resource-group RG1 --nodes 3 --ssh-key-value ~/.ssh/id_rsa.pub
D. az kubernetes create --name AKS1 --resource-group RG1 --nodes 3 --generate-keys
ANSWER20:
A
Notes: The correct command to use for creating an AKS cluster is az aks create, and the -g and -n values are abbreviated syntax for resource group and name respectively. The --generate-ssh-keys flag will create the SSH keys in order to access the worker nodes. The --node-count flag will ensure that there are three worker nodes in the cluster.
Question 21: VM1 is located in the West US region, and the OS disk is Premium SSD. The size of VM1 is currently Standard_D2s_v3, but you need to change the size to Standard_D2. You are able to select the size from the size blade, but you receive an error message. Why can’t you change the VM size?
A. You need to provide the username and password for the OS to upgrade
B. Standard_D2 does not support premium SSD disks
C. The size Standard_D2 is not available in the West US region
D. You did not shut down (deallocate) VM1 before you changed the size
ANSWER21:
B
Notes: Standard_D2 does not support premium disks; therefore, you are unable to change VM1 to this size. A good way to remember which sizes support premium storage is the 's' in the size name, as the 's' indicates Premium SSD support. See more here: dsv3-series
Question 22: You have an Azure Kubernetes Service (AKS) cluster named AKS1 within the resource group named RG1. You are trying to run the command kubectl get all from the Azure Cloud Shell (https://shell.azure.com) to view your cluster resources. You received the error Error from server (BadRequest): the server rejected our request for an unknown reason. You've verified that the resources exist and the command is correct. What do you need to do in order to view your cluster resources from the Azure Cloud Shell?
A. Retrieve the access credentials using the command az aks get-credentials --name AKS1 --resource-group RG1
B. Log into the cluster GUI from the Azure Portal
C. Install the kubectl tool
D. Access the Kubernetes Dashboard using the command az aks browse --name AKS1 --resource-group RG1
ANSWER22:
A
Notes: AKS does not have a cluster GUI that is accessible from the Azure Portal. You must use a machine with kubectl installed, or the Azure Cloud Shell.
The kubeconfig is required in order to access the Kubernetes API. You can retrieve the kubeconfig using the az aks get-credentials command.
Question 23: You have a subscription named Subscription1. You create a new Azure VM in your subscription named VM5 running Windows 2012 R2. You try to connect and login to VM5, but you get an error that says “We couldn’t connect to the remote PC. Make sure the PC is turned on and connected to the network, and that remote access is enabled.” You have verified that VM5 is running and has been assigned a public IP address. What change do you need to make in order to successfully connect and login to VM5?
A. Add a rule to the Network Security Group that will allow port 3389
B. Select Reset password from the VM blade
C. Use Network Watcher for detailed connection tracing
D. You need to access the VM from a computer that’s in the same subnet
ANSWER23:
A
Notes: A Network Security Group (NSG) is designed to filter traffic to and from Azure resources, including Azure VMs. Allowing port 3389 from your machine to the Azure VM will address the connection issue. You could reset the password, but since you received the error before attempting to enter your credentials, this is a connectivity problem, not a credentials problem.
Question 24: Subscription1 contains an Azure VM named VM1 with the following configuration:
VM Size: Standard_D2s_v3
Public IP Address: 52.173.36.55
Resource Group: RG1
Availability Zone: None
Location: Japan East
Disk Type: Standard HDD
What are two things you can do to reduce data loss and achieve a 99.9% SLA?
A. Create a recovery services vault and enable replication for VM1
B. Move VM1 to a paired region
C. Place the VM in an availability zone
D. Change the disk type to Premium SSD
ANSWER24:
A and D
Notes: Creating a recovery services vault will allow you to back up the VM to a different region and location. You will enable replication to ensure that VM data and settings are continually replicated to the backup location for simple recovery.
Virtual machines with Premium SSD disks qualify for the 99.9% connectivity SLA.
Question 25: You have created an application that is to be run on Linux containers named ContainerApp1. You’ve created an Azure container instance with an FQDN, but you notice that when the container restarts, all application data is lost. What is the best solution to preserve the data associated with your application?
A. Create a public blob storage container and share the URI with the application
B. Create a storage account and share the SAS with the application
C. Mount an Azure file share as a volume in Azure Container Instances
D. Run the container on a VM, and use the managed disk attached to the VM
ANSWER25:
C
Notes: Azure Container Instances can mount an Azure file share created with Azure Files. Azure Files offers fully managed file shares hosted in Azure Storage that are accessible via the Server Message Block (SMB) protocol. Using an Azure file share with Azure Container Instances provides file-sharing features similar to using an Azure file share with Azure virtual machines.
Question 26: You’ve created a Dockerfile that contains the necessary steps to build an image that you plan to use for your application running as a Web App in App Services named APP1. You have created an Azure Container Registry, which is where you plan to store your images to be used for APP1. What should your next step be?
A. Run the az acr build command
B. Create the App Service Plan
C. Run the docker push command
D. Run the docker login command
ANSWER26:
A
Notes: The az acr build command will build and push your image to an Azure Container Registry all in one command. You should use this if you don't have docker installed, and/or if you don't have the compute resources to build images on your local machine.
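For example (registry and image names are hypothetical), run from the directory that contains your Dockerfile:
az acr build --registry myregistry --image app1:v1 .
# builds the image in Azure and pushes it to myregistry.azurecr.io/app1:v1 in one step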
Question 27: You have an application that runs on instances in a Virtual Machine Scale Set. The VMSS starts with three instances on Monday. The minimum number of instances is one, and the maximum is five. There are two scaling rules for this VMSS:
Rule1: CPU > 75%, action: +1 instance
Rule2: CPU < 25%, action: -1 instance
Based on the rules above and the chart below, how many instances will there be in the VMSS on Wednesday?
Notes: We start with 3 instances on Monday. Based on the chart, we are still at 3 instances on Tuesday at 12:01 because no scaling condition has been met, but at 13:36 on Tuesday we scale in by one instance because the CPU% drops below 25%. Now we have 2 instances. Then on Wednesday at 12:10 we scale out by one instance because the CPU% goes above 75%. This gives us three instances on Wednesday.
Question 28: Subscription1 contains an Azure VM named VM1. You have added a data disk to VM1, as well as a new network interface card. You need to create two more Azure VMs just like this one named VM2 and VM3. What is the most efficient way to create VM2 and VM3 that will minimize cost?
A. Backup the VM and recover to a different region
B. Redeploy VM1 with the new disk and NIC and deploy the template to VM2 and VM3
C. Select Export template from VM1 blade, then deploy VM2 and VM3 with that template
D. Create an image from VM1 and use the image to deploy VM2 and VM3
ANSWER28:
C
Notes: Exporting the template from a VM is a quick and easy way to take the existing VM settings and automate future deployments.
Question 29: You have an Azure subscription named Subscription1. You have created a web app named App1 in Subscription1 that is sourced from a git repository named Git1. You need to ensure that every commit to the master branch in Git1 triggers a deployment to a test version of the application before releasing it to production. What are two changes that you must make to App1 to fulfill this requirement?
A. Create a build server with the master branch of Git1 as the trigger
B. Configure custom domains for test and production versions of App1
C. Add a new deployment slot to App1 to release the test version of App1
D. Create a new web app and configure failover settings from test to production
ANSWER29:
A and C
Notes: You have the option of creating a build server natively in App Services by selecting Deployment Center in the App1 blade. This will trigger a build every time a commit is made to the master branch of Git1.
Deployment Slots allow greater flexibility within app services, providing a built-in staging environment for your app, allowing you access to your application without deploying it to production.
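As an illustrative sketch (App1 and RG1 as in the question, the slot name "staging" is hypothetical), creating a slot and later swapping it into production with the Azure CLI could look like:
az webapp deployment slot create --resource-group RG1 --name App1 --slot staging
# deploys and tests happen against the staging slot first
az webapp deployment slot swap --resource-group RG1 --name App1 --slot staging --target-slot production
# swaps the tested build into production with no redeployment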
Question 30: You plan to create an Azure Web App in the East US region. You need to ensure that this web app scales out with demand, to prevent downtime. You also need to ensure that the data that resides inside of the application will remain secure and never become exposed to anyone outside of the organization. Which App Service plan SKU will you chose that will meet these requirements and also save on cost?
A. FREE
B. B1
C. SHARED
D. I1
ANSWER30:
D
Notes: The I1 SKU allows your app to run on dedicated hardware, and also provides network isolation on top of compute isolation to protect your app. It also provides the maximum scale-out capabilities.
Question 31: VM1 is located in the East US region. You have added a premium SSD data disk to VM1, but the IOPS are not satisfying the needs of your application, how can you change the speed of the disk?
A. Select the disk configuration and increase the size
B. Shut down (Deallocate) the VM
C. Export the disk and convert to VHD
D. Create a new disk and migrate the data
ANSWER31:
A and B
Notes: Premium disk performance increases with the size of the disk, while standard disks have consistent performance for all disk sizes. Disks can be resized only when they are unattached or the owner VM is deallocated.
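A minimal Azure CLI sketch of that sequence (the data disk name VM1-DataDisk and the new size are hypothetical):
az vm deallocate --resource-group RG1 --name VM1
# resize the premium data disk; a larger premium disk tier delivers higher IOPS
az disk update --resource-group RG1 --name VM1-DataDisk --size-gb 1024
az vm start --resource-group RG1 --name VM1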
Question 32: The NoName Company has just deployed a number of Azure VMs into a specific subnet in an Azure virtual network. They have also implemented a network security plan which includes the use of Azure Firewall. From those newly deployed VMs, the company wants to deny access to the website https://www.microsoft.com. How can you achieve this using their current Azure resources?
A. A network rule
B. Create a route via Route Table to the firewall (as a virtual appliance hop)
C. Configure an application rule on the Azure Firewall that blocks FQDNS www.microsoft.com
D. An Application Gateway
E. A Subnet named AzureFirewallSubnet
F. A VPN Gateway
ANSWER32:
A B C
Notes: A network rule would allow access to an external public DNS service to look up the microsoft.com domain name. Creating a route via Route Table to the firewall (as a virtual appliance hop) is required so that traffic leaving the VMs' subnet is sent through the Azure Firewall, where the rules can be applied.
An application rule allows or blocks an address by URL. This is necessary in order to block https://www.microsoft.com according to the requirements of the company.
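A minimal sketch of such an application rule with the Azure CLI (the firewall name NoNameFirewall, collection name, and source range are hypothetical; the az network firewall commands may require the azure-firewall CLI extension):
az network firewall application-rule create \
  --resource-group RG1 \
  --firewall-name NoNameFirewall \
  --collection-name BlockSites \
  --name BlockMicrosoftCom \
  --priority 200 \
  --action Deny \
  --protocols Https=443 \
  --source-addresses 10.0.1.0/24 \
  --target-fqdns www.microsoft.com
# denies HTTPS traffic from the subnet to www.microsoft.com by FQDN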
Question 33: You need to create an Azure virtual machine named VM1 that requires a static private IP address configured inside the IP address space for the VNet in which the VM resides. How do you configure a static IP address for this Azure VM?
A. After the VM has been created, create a new network interface and configure a static IP address for that network interface
B. After the VM has been created, go to the network interface attached to the VM and change the IP configuration to static assignment
C. When creating a VM in the portal, select New next to Private IP address and choose Static after assigning the correct IP address
D. When creating the VM in the portal, change the setting from dynamic to static on the networking tab under private IP address
ANSWER33:
B
Notes: Changing the IP configuration on the network interface will achieve this goal.
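As a rough sketch (NIC and IP config names vm1-nic and ipconfig1 are hypothetical), the same change can be made with the Azure CLI; supplying an explicit private IP switches the allocation to static:
az network nic ip-config update --resource-group RG1 --nic-name vm1-nic --name ipconfig1 --private-ip-address 10.0.0.10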
Question 34: You have an Azure subscription named Subscription1. In Subscription1, you have a web server that has the IP address 10.1.0.83 and a database server that has the IP address 10.1.0.142. Instead of remembering the IP addresses of the servers, you’d like to connect to these servers using a DNS name. With no DNS server currently, and without having to create a new DNS server, how can you access your database server from your web server by the DNS name db.yourcompany.com?
A. Public DNS Zone
B. Promote Server to Domain Controller
C. Access the Domain Controller
D. Private DNS Zone
ANSWER34:
D
Notes: A private DNS zone is an easy way to register servers with a DNS name instead of having to access them by their IP address.
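For illustration, a minimal Azure CLI sketch that creates the zone, links it to the VNet, and adds an A record for the database server (zone, link, and VNet names are hypothetical; the IP comes from the question):
az network private-dns zone create --resource-group RG1 --name yourcompany.com
az network private-dns link vnet create --resource-group RG1 --zone-name yourcompany.com --name dns-link --virtual-network VNet1 --registration-enabled false
az network private-dns record-set a add-record --resource-group RG1 --zone-name yourcompany.com --record-set-name db --ipv4-address 10.1.0.142
# the web server can now reach the database server as db.yourcompany.com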
Question 35: You have an Azure subscription named Subscription1. In Subscription1 you have two VNets, one named VNet-Hub and one named VNet-Spoke. Within VNet-Hub, there is an Azure Firewall with a public IP address, configured as a Standard SKU. In VNet-Spoke, there is a Windows Server 2016 with no public IP address and no Network Security Group (NSG). Using which three items can you utilize the public IP address of the Azure firewall to connect to the Windows Server, without exposing the server to the public internet directly?
A. NAT Rule for the Firewall
B. Route Table
C. Virtual Network Gateway
D. Virtual Network Peering
E. ExpressRoute Gateway
ANSWER35:
A B D
Notes: You can configure a NAT rule on the firewall to translate and filter inbound Internet traffic to your subnets. You will need a route table to route ingress traffic to the firewall virtual appliance. In order for traffic to flow from the VNet-Spoke to VNet-Hub, you will need a peer connection between the virtual networks (Virtual Network Peering).
Question 36: You have an on-premises environment as well as your Azure environment with a subscription named Subscription1. Subscription1 has a virtual network named VNET1 and you need to connect to the on-premises network securely using an ExpressRoute link and Site-to-site VPN. What Azure resources do you need in order to establish the connection while minimizing cost?
A. Azure VPN Gateway
B. Network virtual appliance
C. No resources needed, ExpressRoute is encrypted by default
D. A route table
ANSWER36:
B and D
Notes: VPN tunnels over Microsoft peering can be terminated either using VPN gateway, or using an appropriate Network Virtual Appliance (NVA) available through Azure Marketplace. We choose to use NVA because it accomplishes our goal, but for a lesser cost than Azure VPN Gateway. A route table is required to specify the next hop for traffic coming and going from the on-premises network.
Question 37: You have a Network Security Group (NSG) that is associated with a network interface that is attached to an Azure virtual machine named VM1 running Windows Server 2019. VM1 is in subnet named subnet1, in a virtual network named VNet1. A different NSG is attached to subnet1, but you notice that there is an inbound rule to allow port 3389. When you try to connect to VM1, you cannot connect. You reviewed the NSG and the source IP address and the protocol are correct. How can you connect to VM1 using best practices for NSGs in Azure?
A. The protocol on the NSG rule is set to UDP
B. The NSG attached to the network interface needs to be removed
C. The source IP address on the NSG rule is incorrect
D. You need to add an inbound rule for the NSG attached to the network interface
ANSWER37:
B
Notes: Removing the NSG from the network interface would allow the VM to use the NSG associated with the subnet, which is best practice.
Question 38: You have an Azure subscription named Subscription1. In Subscription1 you have an Azure VM named VM1 with Windows Server 2019 as the operating system. VM1 does not have a public IP address assigned to it. VM1 is located in a virtual network named VNet1, in subnet1. Attached to subnet1 is a Network Security Group (NSG) that has port 3389 open inbound. On your local machine, you do not have an RDP client installed, but you need to login into the VM. Without assigning a public IP address to the VM, what three things in combination can we use to log into VM1?
A. HTML5 supported Web Browser
B. Azure VPN Gateway
C. A subnet named AzureBastionSubnet
D. A Gateway Subnet
E. Azure Bastion Host
F. Inbound security rule to open port 443
ANSWER38:
A C E
Notes: The RDP connection to the virtual machine happens via Bastion host using the Azure portal (over HTML5) using port 443 and the Bastion service.
The subnet inside your virtual network to which the Bastion resource will be deployed must have the name AzureBastionSubnet. The name lets Azure know which subnet to deploy the Bastion resource to. This is different than a Gateway Subnet.
The Azure Bastion service is a new fully platform-managed PaaS service that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly in the Azure portal over TLS. When you connect via Azure Bastion, your virtual machines do not need a public IP address.
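A minimal sketch of provisioning Bastion with the Azure CLI (subnet prefix, public IP name, Bastion name, and region are hypothetical; the subnet must be named AzureBastionSubnet):
az network vnet subnet create --resource-group RG1 --vnet-name VNet1 --name AzureBastionSubnet --address-prefixes 10.0.2.0/26
az network public-ip create --resource-group RG1 --name bastion-pip --sku Standard
az network bastion create --resource-group RG1 --name bastion1 --public-ip-address bastion-pip --vnet-name VNet1 --location eastus
# after deployment, connect to VM1 via Bastion from the portal over HTTPS (port 443) in an HTML5 browser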
Question 39: You have a subscription named Subscription1. Subscription1 has two virtual networks named VNet1 and VNet2 in two different resource groups. VNet1 is located in the West US region and VNet2 is located in the East US region. You need to apply a network security group named NSG1 to a subnet in VNet1. NSG1 is located in the East US region. How do you attach NSG1 to the subnet in VNet1?
A. You can’t. Create a new network security group in the west us region
B. Move VNet1 into a resource group located in the east us region
C. Select the subnet and choose NSG1 from the network security group drop-down
D. Move NSG1 into the VNet1 resource group
ANSWER39:
A
Notes: In order for you to associate a network security group to a subnet, both the virtual network and the network security group must be in the same region.
Question 40: You have a subscription named Subscription1. Subscription1 has one Azure virtual machine named VM1 which is an Ubuntu server. You can’t seem to login to the server via SSH. What tool should you use to verify if the problem is the network security group?
A. IP flow verify tool in Azure Network Watcher
B. Azure Monitor VM metrics
C. Azure Traffic Manager traffic view
D. Azure Virtual Network logs
ANSWER40:
A
Notes: The IP Flow Verify tool checks if a packet is allowed or denied to or from a virtual machine. The information consists of direction, protocol, local IP, remote IP, local port, and a remote port. If the packet is denied by a security group, the name of the rule that denied the packet is returned.
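An illustrative Azure CLI sketch of the same check (the local and remote IP:port values are hypothetical; port 22 is the SSH port from the question):
az network watcher test-ip-flow --resource-group RG1 --vm VM1 --direction Inbound --protocol TCP --local 10.0.0.4:22 --remote 203.0.113.10:60000
# returns Allow or Deny, and the name of the NSG rule that denied the packet if applicable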
Question 41: You have two Azure virtual machines named VM1 and VM2. VM1 is using the Red Hat Enterprise Linux 8.1 (LVM) operating system and is located in VNet1, within subnet1. VM2 is using the Windows Server 2019 operating system, and is located in VNet1, within subnet2. VNet1 has custom DNS configured, pointing to a DNS server with the IP address 172.168.0.6. VM2 has 10.0.1.15 configured as the DNS server on its network interface. Which DNS server will VM2 use for DNS queries?
A. 8.8.8.8
B. 10.0.1.15 for primary, 172.168.0.6 as secondary
C. 10.0.1.15
D. 172.168.0.6
ANSWER41:
C
Notes: Since the network interface attached to VM2 is assigned to a specific DNS server, it takes precedence over the DNS configured on the VNet.
Question 42: You have created a new Azure virtual machine in a subnet named Subnet1 with an attached network interface card named NIC1. The NIC1, attached to Subnet1, has the following effective routes:
Question 43: You have a standard load balancer that directs traffic from port 80 externally to three different virtual machines. You need to direct all incoming TCP traffic on port 5000 to port 22 internally for connecting to Linux VMs. What do you need in order to connect to the VM via SSH?
A. A public IP address for all three VMs
B. A Route Table with at least one rule
C. A Network Security Group (NSG)
D. A Network Address Translation (NAT) Rule
ANSWER43:
C and D
Notes: The NSG rules work alongside the NAT rules to provide a connection to a VM that's behind a load balancer.
Question 44: You have a web application that serves video and images to those visiting the site. You start to notice that your web server is overloaded, and often crashes because the requests have consumed all of its resources. To combat this, you’ve added an additional web server and you plan to load balance these servers by serving images from the first server only and serving video from the second server only. Which Azure resource can you implement that will properly load balance (at OSI layer 7) with URL-based routing and secure with SSL at the lowest cost?
A. Azure Load Balancer
B. Azure Front Door
C. Azure Application Gateway
D. Web Application Firewall
ANSWER44:
C
Notes: Azure Application Gateway operates at layer 7 (the application layer), and is a web traffic load balancer that enables you to manage traffic to your web applications. Application Gateway can make routing decisions based on URI path and secure with SSL.
Question 45: You manage a virtual network named VNet1 that is hosted in the West US region. Two virtual machines named VM1 and VM2, both running Windows Server, are on VNet1. You need to monitor traffic between VM1 and VM2 for a period of five hours. As a solution, you propose to create a connection monitor in Azure Network Watcher. Does this solution meet the goal?
A. Yes
B. –
C. –
D. No
ANSWER45:
A
Notes: The connection monitor capability in Azure Network Watcher monitors communication at a regular interval and informs you of reachability, latency, and network topology changes between the VM and the endpoint.
Question 46: You have an Azure subscription named Subscription1. You would like to connect your on-premises environment to Subscription1. You have to meet three requirements from the business. The first requirement is that the connection from the on-premises office and Azure must be a private connection. No network traffic is allowed to go over the public internet. The second requirement is that all traffic from the on-premises office and Azure must happen at layer 3 (network layer). The third requirement is that this connection from on-premises to Azure must be redundant to minimize the opportunity for failure. What type of connection fulfills these three requirements?
A. ExpressRoute with premium add-on
B. ExpressRoute
C. Site-to-Site VPN
D. Virtual WAN
ANSWER46:
B
Notes: ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a private connection facilitated by a connectivity provider. ExpressRoute connections do not go over the public Internet. An ExpressRoute connection is a layer 3 connection between your on-premises network and Azure through a connectivity provider (e.g. Verizon).
Question 47: You have an Azure subscription as well as an on-premises environment that is connected via ExpressRoute circuit. You have two additional branch offices that you need to connect to the network, as well as ten remote employees that change locations frequently but still need access to Azure resources. What is the solution that will provide the quickest setup at the lowest cost?
A. Site-to-Site VPN
B. Point-to-Site VPN
C. Virtual WAN
D. Hub-and-Spoke Network Topology
ANSWER47:
C
Notes: The Virtual WAN architecture is a hub and spoke architecture for branches and users. It enables global transit network architecture, where the cloud-hosted network 'hub' enables transitive connectivity between endpoints that may be distributed across different types of 'spokes'. All hubs are connected in full mesh in a Standard Virtual WAN making it easy for the user to use the Microsoft backbone for any-to-any (any spoke) connectivity. This satisfies the requirement to provide the quickest set up at the lowest cost.
Question 48: You have a small number of servers running a microservice, and you want to make sure that all the servers have connectivity to each other. You also need to calculate network performance metrics like packet loss and link latency. Which two Azure resources do you need to meet this requirement?
A. Log Analytics Workspace
B. Network Performance Monitor
C. Azure Monitor
D. Azure Traffic Manager
ANSWER48:
A and B
Notes: A Log Analytics workspace is a data repository for Azure Monitor log data and is a prerequisite for using Network Performance Monitor. Network Performance Monitor helps you monitor network performance between various points in your network infrastructure. It also helps you monitor network connectivity to service and application endpoints and monitor the performance of Azure ExpressRoute.
Question 49: You have two virtual networks named VNet1 and VNet2. VNet1 is located in the West US region, whereas VNet2 is located in the East US region. You need to configure a virtual machine that's located in VNet1 to also communicate with VMs in VNet2. From the choices available, how can you enable communication between resources in VNet1 and VNet2?
A. Migrate the VNet1 VM to VNet2 and leave the other VM components on VNet1
B. Migrate the network interface card (NIC), the network security group (NSG) and the VM disks to VNet2
C. Just the VM disks will need to be migrated to VNet2
D. Configure a VNet-to-VNet VPN gateway connection to allow communication between VNets in different regions
Question 50: You have two subscriptions, one named Subscription1 and the other named Subscription2. Both subscriptions are located within the same tenant. You have one Azure virtual machine located within Subscription1 and another Azure virtual machine within Subscription2 and you’d like to view CPU utilization metrics on both virtual machines. How can you achieve this while maintaining the minimum number of Azure resources and minimizing cost?
A. Create a Log Analytics Workspace for both VMs
B. Turn on VM Insights in Azure Monitor
C. Install the Log Analytics (OMS) Agent on the VMs
D. Enable guest-level monitoring on each VM
ANSWER50:
A and B
Notes: You can view metrics data (such as CPU utilization %) over time by sending your metrics data to a log analytics workspace. This workspace can collect metrics data from multiple VMs, no matter if they are located in the same or different subscriptions.
VM integration with Azure Monitor Logs delivers powerful aggregation and filtering, allowing Azure Monitor for VMs to analyze data trends over time. You can view this data in a single VM from the virtual machine directly, or you can use Azure Monitor to deliver an aggregated view of your VMs where the view supports Azure resource-context or workspace-context modes.
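As a rough sketch, a single shared workspace that both VMs report to can be created with the Azure CLI (the workspace name SharedWorkspace and the region are hypothetical):
az monitor log-analytics workspace create --resource-group RG1 --workspace-name SharedWorkspace --location eastus
# both VM1 and VM2 can then send metrics and logs to this one workspace, even from different subscriptions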
Question 51: You have created a new Azure virtual machine named VM1. You plan to use VM1 as a web server, which will require the VM to be accessible using HTTP/S (HTTP and HTTPS) protocol. A Network Security Group (NSG) is attached to the NIC of VM1 with the following rules:
What changes do you have to make to the NSG in order to meet the requirements for VM1?
A. Change the priority of Rule3 to 200
B. Change the action of Rule1 to Allow
C. Change the priority of Rule4 to 200
D. Change the port of Rule5 to 443
ANSWER51:
C
Notes: Rules with lower priority numbers take precedence over rules with higher priority numbers. Changing Rule4 to a lower number will override the other rules of lesser priority, therefore allowing traffic on ports 60-500, which includes 80 and 443, the ports necessary for allowing traffic over HTTP/S. Remember: the lower the priority number, the higher the priority when the rules are evaluated.
Question 52: You have an Azure virtual machine running Windows Server 2016. You need to collect OS level metrics on this virtual machine, including Windows event logs and performance counters. Which of the following items do you need in order to collect this metrics data?
A. Enable guest-level monitoring
B. Windows Diagnostics Extension
C. Log Analytics Agent
D. InfluxData Telegraf Agent
E. Storage Account for Diagnostic Data
ANSWER52:
A B E
Notes: In order to install the diagnostics extension on an Azure VM, you must enable guest-level monitoring from the VM settings in the portal. Windows Diagnostic Extension is an agent in Azure Monitor that collects monitoring data from the guest operating system and workloads of Azure virtual machines and other compute resources. In order to enable guest-level monitoring, you need to create a storage account for storing the metrics data.
Question 53: You have an Azure subscription with a virtual machine named VM1. You are using Recovery Services Vault (RSV) to backup VM1 with soft delete enabled. The backup policy is set to backup daily at 11 PM UTC, retain an instant recovery snapshot for 2 days, and retain the daily backup point for 14 days. After the initial backup of VM1, you are instructed to delete the vault and all of the backup data. What should you do?
A. Turn off soft delete in the vault security settings
B. Wait 14 days
C. Stop the backup of VM1 and delete backup data
D. Delete the backup policy
E. Delete Backup Jobs Workload
F. Wait 15 days
ANSWER53:
A and C
Notes: Because soft delete is enabled on the vault, the backup data is still kept when you stop the backup and delete the backup data. Turn off soft delete in the vault security settings, then stop the backup of VM1 and choose to delete the backup data; this stops future backups and permanently removes the existing backup data so the vault can be deleted.
Question 54: You have a number of virtual machines and web applications running in your Azure environment. These Azure resources are critical for business operations, so you’ve locked the resources in order to prevent deletion. In addition, how can you alert on these actions in the portal, and notify your team via email and SMS when a user is trying to delete or create a new resource from within your Azure subscription?
A. Pin the activity log to your dashboard
B. Create a new alert rule
C. Query Administrative Events and Copy Link to Query
D. Create a new action group
ANSWER54:
B and D
Notes: Alert rules specify the conditions for which the alert is triggered. Activity log alerts are the alerts that get activated when a new activity log event occurs that matches the conditions specified in the alert. An action group is a collection of notification preferences defined by the owner of an Azure subscription. Azure Monitor and Service Health alerts use action groups to notify users that an alert has been triggered.
Question 55: You have a .NET Core application running in Azure App Services. You are expecting a huge influx of traffic to your application in the coming days. When your application experiences this spike in traffic, you want to detect any anomalies such as request errors or failed queries immediately. What service can you use to assure that you know about these types of errors related to your .NET application immediately?
A. Client-side monitoring
B. Live Metrics Stream in Application Insights
C. Application Insights Search
D. Log analytics workspace
ANSWER55:
B
Notes: Live metrics stream includes such information as the number of incoming requests, the duration of those requests, and any failures that occur. You can also inspect critical performance metrics such as processor and memory.
Question 56: You have an Azure subscription named Subscription1. In Subscription1 you have two Azure VMs named VM1 and VM2, both running Windows Server 2016. VM1 is backed up using Recovery Services Vault, with a backup policy of producing a daily backup and keeping that daily backup for seven days. Also, a snapshot is kept for 2 days. VM1 is compromised by a virus that infects the entire system, including the files. You need to restore the files from yesterday’s backup of VM1. Where can you restore the files to in the quickest manner?
A. A new Azure VM
B. Restore the VM1 snapshot
C. VM2
D. In-place
ANSWER56:
B
Notes: Using snapshots for VM backups, you speed up the recovery time considerably. The snapshots are stored with the disks in Azure, so the transfer speeds are optimal.
Question 57: You have a subscription named Subscription1. You would like to be alerted upon certain administrative events within Subscription1 to detect unauthorized access. Which of the following is the quickest method to setup these types of alerts?
A. Monitor > Alerts > New Alert Rule
B. Log Analytics Workspace > myWorkspace > Advanced Settings
C. Policy > Assignments > Assign Policy
D. Subscriptions > mySubscription > Activity Log > New Alert
ANSWER57:
A
Notes: Alerts can be created from within Azure Monitor.
Microsoft Azure Administrator Certification Q&A:
1- The az vmss deallocate command deallocates the VM instances within a VMSS, stopping them and releasing their compute resources. Azure Doc
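For illustration (the resource group and scale set names are hypothetical):
az vmss deallocate --resource-group RG1 --name MyScaleSet
# or deallocate only specific instances within the scale set
az vmss deallocate --resource-group RG1 --name MyScaleSet --instance-ids 1 3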
COVID-19 has pushed the need for quality videoconferencing tools to the top of most businesses' and organizations' success factors.
American business people hold approximately 11 million meetings a day, which equals 55 million meetings per week and 220 million meetings per year. Zoom usage shot up in March 2020. Over the course of that month, Zoom was seeing 200 million daily meeting participants. The following month, this figure had risen to 300 million.
In this blog, we are going to help you choose the best videoconferencing and web collaboration/webinars tools for your business or organization.
I- Class A Videoconferencing Apps and Tools
The comparison below pits Zoom, Google Meet, and Microsoft Teams against each other feature by feature, with a winner picked for each category.
Pricing/Cost
ZOOM Zoom is free for up to 3 people for an unlimited amount of time, and for groups of up to 100 participants for 40-minute meetings. The Pro plan starts at US$14.99/month per host and adds user management, unlimited meeting length, and reporting.
GOOGLE MEET G Suite, now called Google Workspace, starts at US$6 per user per month, but access to Google Meet itself is free. Get 20% off with these Promo Codes. Google Workspace for Business starts at US$12 per month (Promo Codes).
MICROSOFT TEAMS For unlimited use, it comes with all Microsoft 365 plans (starting at US$5 per user per month). The Office 365 Business Premium plan starts at US$12.50/user/month with additional features.
WINNER 1- Zoom wins for individuals, families, not-for-profits, startups, small businesses, large enterprises interacting with external vendors or international clients, mom-and-pop shops, etc.
2- Google Meet wins for education, research, and companies that are already using GCP or Google Workspace. Google doesn't charge an additional call-in fee. 3- Teams wins for large organizations already using Office 365.
Offerings
ZOOM Zoom offers online meetings, video webinars, conference rooms, and a phone system. Zoom offers a much more stable connection than all of its counterparts, with crystal clear audio and video even on a less-than-great internet connection.
GOOGLE MEET Unlimited number of meetings. Live captioning during meetings. Compatible across devices. Video and audio preview screen. Adjustable layouts and screen settings. Controls for meeting hosts. Screen sharing with participants. Messaging with participants.
Google Meet allows anyone with a Google Account to create an online meeting with up to 100 participants. By utilizing Google’s advanced features, it can host up to 250 people per call for 60 minutes and record those calls to save them automatically to Google Drive. Furthermore, Google Meet also supports live streaming for a hundred thousand viewers within a domain.
MICROSOFT TEAMS Online meetings & webinars. Calling. Video conferencing. Screen sharing. File sharing. Custom backgrounds. Instant messaging.
Microsoft Teams offers business VoIP, video, and collaboration features all-in-one. Additionally, it's integrated with all the Microsoft apps, which makes collaborative work a lot easier if you are a Microsoft Office user.
WINNER 1- Zoom 2- Google Meet 3- Microsoft Teams
Zoom is super easy to set up, use, and manage. Zoom works on low-quality internet. Zoom gives the best quality video output.
Capacity and Compatibility
ZOOM
A special feature of Zoom for large audiences is the Zoom webinar, which can host up to 3,000 participants. Although it's not free and a Zoom Premium account needs to be purchased for it, it is one of the best platforms for hosting events or seminars with a huge audience that can participate as well. Unlimited meetings for up to 100 participants. HD audio and video.
GOOGLE MEET By utilizing Google’s advanced features, it can host up to 250 people per call for 60 minutes and record those calls to save them automatically to Google Drive. Furthermore, Google Meet also supports live streaming for a hundred thousand viewers within a domain. If you have a google account, you’ll be able to have 24-hour meetings with up to 100 people.
MICROSOFT TEAMS Teams can host up to 250 members per call and even provide transcripts of the call after it ends. This feature is enabled in both desktop and web applications and regardless of participants being guests or users. The Microsoft Teams Live Events has an audience capacity of 10,000 attendees with a duration limit of 4 hours. An Office 365 Organization can run up to 15 live events at a time.
WINNER: 1- Zoom 2- Google Meet 3- Microsoft Teams You don't need a Google Account to participate in Meet video meetings. However, if you don't have a Google Account, the meeting organizer or someone from the organization must grant you access to the meeting. Tip: If you are not signed into a Google or Gmail account, you cannot join using your mobile device. Microsoft Teams is excellent for internal collaboration, whereas Zoom is often preferred for working externally, whether that's with customers or guest vendors.
Security
GOOGLE MEET All data in Google Meet is encrypted in transit by default between the client and Google for video meetings on a web browser, on the Meet Android and Apple® iOS® apps, and in meeting rooms with Google meeting room hardware. Meet recordings stored in Google Drive are encrypted at rest by default. Google Meet has an anti-abuse feature that keeps meetings safe with anti-hijacking protections and by letting the host keep control of the meeting. It also supports multiple 2-step verification processes and security keys.
MICROSOFT TEAMS Teams enforces team-wide and organization-wide two-factor authentication, single sign-on through Active Directory, and encryption of data in transit and at rest. Files are stored in SharePoint and are backed by SharePoint encryption. Notes are stored in OneNote and are backed by OneNote encryption.
WINNER: 1- Google Meet 2- Microsoft Teams 3- Zoom
Extra Unique Features
ZOOM Zoom has all the basic features such as screen sharing, file sharing, and a message box. But the distinguishing feature of Zoom is it allows you to control all settings, from enabling waiting rooms, allowing participants to rename themselves and give reactions, controlling the chatbox options, and much, much more.
GOOGLE MEET Google Meet allows live captioning during meetings with its automated live captions powered by Google Speech. It also allows you to check video and audio controls before you enter a meeting. The Google Meet layout automatically adjusts to display the most active content and participants. It also has screen sharing features, chat features, file sharing features, and is integrated with Microsoft and Google Apps.
MICROSOFT TEAMS Additional features that Microsoft Teams has, are the customizable backgrounds, the ‘together mode’ in which all members sit together in the same background making it feel like they are all at one place, and file sharing and co-authoring files in real-time.
Winner: 1- Google Meet 2- Zoom 3- Microsoft Teams Google Meet is a far stronger and more intuitive product than the video calling feature in Microsoft Teams and Zoom.
Device compatibilities
ZOOM Zoom is compatible with all devices whether it is a desktop/laptop, Android, iPhone, or iPad.
GOOGLE MEET Google Meet is compatible across many devices, whether it is a desktop/laptop, Android, iPhone, or iPad.
TEAMS Microsoft Teams is compatible with iOS, Android, macOS, and Windows and is mostly free for up to 300 people and 5GB worth of files.
WINNER: Tie
REVIEWS: PROS
PRO ZOOM REVIEWS We have had great success and increased mobility by using this product. Extremely intuitive, so video conference meetings are now becoming more popular than face-to-face. Gary A. Read the full review This is a great platform to have a video conversation. It is easy to use and for the most part the video quality is good. Fonda C. Read the full review We love being able to hold meetings and the ease of inviting and logging on is worth the money spent on this service. Jeana L. Read the full review
PRO GOOGLE MEET REVIEWS It's a good tool for what it does, but there are other tools that do it better; those cost, though. Anonymous Reviewer Read the full review I like that it's super intuitive and easy to use and set up. Since it's Google, it also has good tech support. Anonymous Reviewer Read the full review Google Hangouts is a popular tool for video-chatting your friends, clients, family members and the sort. The platform has been gaining popularity daily since its establishment in 2013. Anonymous Reviewer Read the full review
PRO TEAMS REVIEWS I like that you have the capability to hold a meeting and have the camera on or off-it is great when collaborating with team members who may not be in the office. Anonymous Reviewer Read the full review Very good, it increased productivity, good online meeting experience, some flaws like file storage and creating new channels but overall it is a great product. Akashbir S. Read the full review With the current pandemic situation, Teams has been a big lifesaver to us. It is easy to use, all integrated application with good customer support. Lakshika G. Read the full review
WINNER: 1- Zoom 2- Google Meet 3- Microsoft Teams
REVIEWS: CONS
CONS ZOOM REVIEWS It might be people’s internet connection that causes these problems, but sometimes the video lags and is unclear. People’s audio sometimes cuts out too. Sally L. Read the full review The recent discovery of the forced video/automatic video without your warning due to the install on your machine is very concerning, and I avoid this software now because of this. Diana B. Read the full review Meeting setup can be a bit confusing – this is a pretty common problem among most platforms, but generating conference ID can get a little confusing depending on who is navigating the meeting. Anonymous Reviewer Read the full review
CONS GOOGLE MEET REVIEWS It’ll switch off audio or drop someone at random during larger group chats. It also doesn’t add back group members, nor does it have any feature that tells you if your microphone setting is wrong. Anonymous Reviewer Read the full review It is a downside that you need a gmail account to use this. Some people have email accounts specific to their internet provider or they have yahoo. Victoria K. Read the full review The hardest part is actually finding it on the email interface – I have a google phone number too and that takes priority in the side bar which can be confusing. Sarah F. Read the full review
CONS TEAMS REVIEWS The teams arrangement is not straightforward and the UI is very busy. Often times, notifications are lost when there are multiple channels within a team and those channels are in the collapsed view. Anonymous Reviewer Read the full review I sometimes have trouble opening the software, but I think it might be more of a problem with our internet connection. It is very difficult to figure out how to use the call feature. Christina B. Read the full review Notifications for teams are not automatically enabled so you have to enable it yourself, but you might run into problems because it is not very user-friendly. Anonymous Reviewer Read the full review
LOSER: TIE
As of December 2020, Google Meet has 30 million daily users, Microsoft Teams has 75 Million daily active users, and Zoom takes the lead with 300 Million daily active users.
GOTOMEETING GoToMeeting Professional allows you to host meetings with up to 150 participants and costs $12 monthly (billed annually at $144).
GoToMeeting Business costs $16 per month (billed annually at $192).
WEBEX Free: $0 per host per month, 1 host maximum. Starter: $13.50 per host per month, 1-9 hosts. Business: $26.95 per host per month, 5-100 hosts.
SLACK Free: for small teams trying out Slack for an unlimited period of time. Standard: $6.67 USD/month, for small and medium-sized businesses. Plus: $12.50 USD/month, for larger businesses or those with additional administration needs.
WINNER 1- GoToMeeting 2- Slack 3- Webex
Offerings
GOTOMEETING Virtual Whiteboard. Built-In Audio. Meeting Scheduler. Hand Over Control. One-Click Recording. Join via Mobile Options. Desktop/Application Sharing. Personal Meeting Room.
WEBEX 20 million reliable video conferences a month. Free video calls and screen sharing with Webex. Free screen share. Webex webinars are delivered reliably. Easily present online. Collaborate with your team. Get more from your conference call.
SLACK Create open channels. Support for private groups and 1:1 direct messaging. File sharing. Deep, contextual search. Always in sync. Chat functionality. Tags, keywords & @mentions. 1:1 and group calls. Screen sharing. Activity logging. API availability. Activity tracking. Open API to build your own integrations.
WINNER 1- GoToMeeting 2- Slack 3- WeBex
Capacity compatibility
GOTOMEETING Join from Mac, PC, iPad®, iPhone® or Android
WEBEX WebEx mobile. Mobile home screen widget. Video call recording. “Call Me” alternative to dial-in. Remote desktop control. iOS and Android support.
SLACK Single sign-on. All data transfer is encrypted. SSL security.
WINNER 1- WeBex 2- Slack 3- GoToMeeting
Other Videoconferencing Tools and Apps:
1- LiveStorm: Livestorm is a browser-based online web conferencing software used to share real-time live streams. It can be used to power remote live meetings, product demos, sales webinars, online lessons, onboarding sessions, and more. Cost: Starting from €89/month – Reviews
2- ClickMeeting: Video conferencing, online meetings, and webinar software to bring your students, customers, and team members together. Cost: Starting from $25/month – Reviews
3- AirMeet: Airmeet is a platform for virtual summits, meetups & workshops with a social lounge to deliver a rich networking experience. Virtual events, real connections. Starting at $99 per month.
4- GoToWebinar: Video conferencing and webinar hosting for large events. Cost: Starting from $89 per month. Reviews
Videoconferencing Q&A:
The Cloud is the future: The AWS Certified Solutions Architect – Associate average salary is $149,446/year. Get Certified Now with the apps below:
I prefer Zoom and I have used all three of these mediums.
I prefer Zoom because it's streamlined and easy for domestic and international clients. I like Zoom because you can utilize breakout rooms to easily divide people into groups for short periods of time and have people raise their hand so you can toggle over audio during a busy call with a lot of people on. It's great for managing webinars as well as internal team meets or client correspondences.
Yes, it’s possible. All you need is to create a free account at SpeechText AI audio transcription service and upload recorded audio files.
Here is the step-by-step guide:
1. In Google Meet, start recording the meeting. Click “More” (3 vertical dots on the lower right-hand side) and choose the “Record Meeting” option.
2. Recorded Google Meet video will be automatically stored in your Google Drive. Check “My Drive” -> “Meet recordings” folder. An email with a link to the Google Meet recording will also be sent to the meeting administrator. You need to download it to your computer.
3. Create a trial SpeechText AI account. It’s free.
4. Locate your downloaded recording on your computer and upload it to the cloud transcription service.
5. SpeechText AI is the first multilingual and industry-specific audio transcription engine. To start transcription, select the transcription language (the service supports more than 30 languages and accents), the industry (so domain-specific terms are transcribed accurately), and the audio type (in your case it will be the ‘Meeting record’).
6. Hit the “Transcribe” button and SpeechText AI transcribes your Google Meet meeting in seconds.
I’ve attached the example of transcription results you can get using SpeechText AI service. It’s the interview of Elon Musk at The Late Show with Stephen Colbert.
Google Classroom and Zoom are completely different types of programs. Classroom is used for assigning and tracking student work, while Zoom is a platform for video conferencing. Google does have an equivalent to Zoom in its Google Meet app, but Zoom does not have an equivalent to Google Classroom.
As teachers at my school had already started using Zoom back when lockdown started, it was easier to continue using the same format than to transition to a different platform, plus there was a *ridiculously* long wait to get approval for Google Classroom back when every school in the country was scrambling to become a provider of online education all at the same time.
From the reports I’ve heard, both Google Meet and Zoom had security issues when they got all of those new customers all at once. Both have increased security over the past few months and added extra layers of security, to the point that security issues now seem to be few and far between, or at least you hear fewer reports of them. Now it seems to be more an issue of them both working on improving system stability, so there are fewer system-wide crashes and fewer dropped participants.
You can use a third-party screen recorder to meet that goal. The tool I most often use to capture meeting calls on a PC is RecMaster. It comes with multiple recording modes and provides versatile recording tools. Here is how to record Google Meet in full screen.
Step 1: Download and install RecMaster on your computer, then launch it. After that, choose the Full screen mode to capture the Google Meet call.
Step 2: Choose the system sound button so that the speaker’s voice will be recorded. Here you can also configure the video format, video quality, and frame rate one by one. If you are planning a fixed-time recording, you can preset the beginning and ending times and the recording will start automatically.
Step 3: Press the REC button to start. When it’s time to end the task, tap the red button again to stop the recording.
Now you’ve got the desired video. You can upload it to Google Drive, YouTube or share with ease.
Here are some additional online resources to help you most effectively use Zoom for virtual education:
Live Zoom training daily: These include sessions specifically highlighting Zoom Meetings for Education (Students & Educators), focusing on using Zoom Meetings as your classroom setting. Zoom Webinar training is also available.
Recorded Zoom training: Watch previously recorded sessions on demand and at your convenience. Several are in German, Japanese, and Korean, in addition to English.
Tips for instructors: Check out this Twitter thread from USC Ph.D. student and online instructor Alana Kennedy on some of the most useful features and best practices for teaching over Zoom.
Zoom has a wealth of experience helping educational institutions optimize the Zoom platform for virtual classrooms and online learning. It’s our goal to make Zoom easy to use and accessible for everyone, and we’re committed to streamlining the experience for our educational users amid the global coronavirus (COVID-19) outbreak.
Zoom’s teams are working to provide teachers, administrators, and students around the world with the resources they need to quickly spin up virtual classrooms, participate in online classes, and continue their studies online. It’s our intention that everyone, from seasoned Zoom users to those who’ve never interacted with our product, can easily download the client, start and schedule meetings, set students up with Zoom, and start using Zoom for virtual instruction with ease.
This post is designed to help our education users:
Sign up for a Zoom account
Pick the best account option
Understand best practices for using Zoom in education
Help for schools
To ensure all of our K-12 districts and other institutions can most effectively leverage Zoom for virtual education during this time, Zoom is:
Temporarily removing the 40-minute limit on free Basic accounts for schools in Japan and Italy, and by request for K-12 schools in the United States
Providing multi-language resources specifically designed for principals, vice principals, teachers, students, and parents to set up and use Zoom
Expanding live trainings, webinars, and recorded offerings to share best practices for using the platform
How to enable your free Zoom account
To have the 40-minute time limit temporarily removed for your organization’s free Basic accounts:
Have your administrators, staff, and teachers sign up for a free Zoom account.
Have a member of your school fill out this form to request the temporary removal.
Upon verification, all free Basic accounts using your school’s email domain will have the time restriction lifted.
Now teachers will be able to log in, schedule their classes, and send out invites to students. Students are not required to have a Zoom account and can join classes using the links sent from the teacher. For the best experience, we do recommend every user download the Zoom application on their preferred Mac, Windows, Linux, iOS, or Android device.
We have numerous short videos on support.zoom.us to help you get started.
Zoom account features & benefits
Zoom offers robust collaboration and engagement tools as part of its standard free license, including the ability to connect using VoIP or via traditional phone when internet is not available. Administrators, teachers, parents, and students also have access to:
For organizations requiring a more robust feature set and administrative control, Zoom’s Education plan provides the above capabilities and more at a low cost, including:
Unlimited meetings for up to 300 participants
Single sign-on (SSO)
LTI integration to support most LMS platforms
Enhanced user management to add, delete, and assign add-on features
Advanced admin controls for enabling/disabling recording, chat, and notifications
500 MB of cloud recording
Cloud recording transcription
Usage reports to track participation
Need help deciding whether a Basic or Education plan is right for you? Connect with a Zoom education specialist for assistance.
Resources for Zoom’s education users
Here are some guides to help school administrators, staff, teachers, students, and parents leverage Zoom for virtual learning:
We’re also providing multi-language resources specifically designed for principals, vice principals, teachers, students, and parents to set up and use Zoom.
Additional measures
Zoom is also proactively monitoring our global infrastructure to ensure reliability and uptime for your online learning programs. Our proven infrastructure regularly supports over 8 billion meeting minutes a month, and we are confident that our architecture can handle spiking levels of activity and support educational institutions around the world during this time.
Google Certified Cloud Professional Architect is the highest-paying certification in the world: Google Certified Professional Cloud Architect Average Salary – $175,761
The Google Certified Cloud Professional Architect Exam assesses your ability to:
Design and plan a cloud solution architecture
Manage and provision the cloud solution infrastructure
Design for security and compliance
Analyze and optimize technical and business processes
Manage implementations of cloud architecture
Ensure solution and operations reliability
Designing and planning a cloud solution architecture: 36%
This domain tests your ability to design a solution infrastructure that meets business and technical requirements and considers network, storage and compute resources. It will test your ability to create a migration plan, and that you can envision future solution improvements.
Managing and provisioning a solution Infrastructure: 20%
This domain will test your ability to configure network topologies, individual storage systems and design solutions using Google Cloud networking, storage and compute services.
Designing for security and compliance: 12%
This domain assesses your ability to design for security and compliance by considering IAM policies, separation of duties, encryption of data and that you can design your solutions while considering any compliance requirements such as those for healthcare and financial information.
Managing implementation: 10%
This domain tests your ability to advise the development/operations team(s) to make sure your solution is deployed successfully. It also tests your ability to interact with Google Cloud using the GCP SDK (gcloud, gsutil, and bq).
Ensuring solution and operations reliability: This domain tests your ability to run your solutions reliably in Google Cloud by building monitoring and logging solutions, quality control measures, and release management processes.
Analyzing and optimizing technical and business processes: 16%
This domain will test how you analyze and define technical processes, business processes and develop procedures to ensure resilience of your solutions in production.
Below are the Top 50 Google Certified Cloud Professional Architect Exam Questions and Answers Dumps: You will need to have the three case studies referred to in the exam open in separate tabs in order to complete the exam: Company A, Company B, Company C
Question 1: Because you do not know every possible future use for the data Company A collects, you have decided to build a system that captures and stores all raw data in case you need it later. How can you most cost-effectively accomplish this goal?
A. Have the vehicles in the field stream the data directly into BigQuery.
B. Have the vehicles in the field pass the data to Cloud Pub/Sub and dump it into a Cloud Dataproc cluster that stores data in Apache Hadoop Distributed File System (HDFS) on persistent disks.
C. Have the vehicles in the field continue to dump data via FTP, adjust the existing Linux machines, and use a collector to upload them into Cloud Dataproc HDFS for storage.
D. Have the vehicles in the field continue to dump data via FTP, and adjust the existing Linux machines to immediately upload it to Cloud Storage with gsutil.
ANSWER1:
D
Notes/References1:
D is correct because several load-balanced Compute Engine VMs would suffice to ingest 9 TB per day, and Cloud Storage is the cheapest per-byte storage offered by Google. Depending on the format, the data could be available via BigQuery immediately, or shortly after running through an ETL job. Thus, this solution meets business and technical requirements while optimizing for cost.
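For illustration, a minimal sketch of the gsutil upload step from one of the Linux collection machines (the local path and bucket name are hypothetical):
gsutil -m cp -r /var/ftp/vehicle-data gs://company-a-raw-data/
# -m parallelizes the copy, which helps when ingesting on the order of terabytes per day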
Question 2: Today, Company A maintenance workers receive interactive performance graphs for the last 24 hours (86,400 events) by plugging their maintenance tablets into the vehicle. The support group wants support technicians to view this data remotely to help troubleshoot problems. You want to minimize the latency of graph loads. How should you provide this functionality?
A. Execute queries against data stored in a Cloud SQL.
B. Execute queries against data indexed by vehicle_id.timestamp in Cloud Bigtable.
C. Execute queries against data stored on daily partitioned BigQuery tables.
D. Execute queries against BigQuery with data stored in Cloud Storage via BigQuery federation.
ANSWER2:
B
Notes/References2:
B is correct because Cloud Bigtable is optimized for time-series data. It is cost-efficient, highly available, and low-latency. It scales well. Best of all, it is a managed service that does not require significant operations work to keep running.
Question 3: Your agricultural division is experimenting with fully autonomous vehicles. You want your architecture to promote strong security during vehicle operation. Which two architecture characteristics should you consider?
A. Use multiple connectivity subsystems for redundancy.
B. Require IPv6 for connectivity to ensure a secure address space.
C. Enclose the vehicle’s drive electronics in a Faraday cage to isolate chips.
D. Use a functional programming language to isolate code execution cycles.
E. Treat every microservice call between modules on the vehicle as untrusted.
F. Use a Trusted Platform Module (TPM) and verify firmware and binaries on boot.
ANSWER3:
E and F
Notes/References3:
E is correct because this improves system security by making it more resistant to hacking, especially through man-in-the-middle attacks between modules.
F is correct because this improves system security by making it more resistant to hacking, especially rootkits or other kinds of corruption by malicious actors.
Question 4: For this question, refer to the Company A case study.
Which of Company A’s legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?
A. OpEx/CapEx allocation, LAN change management, capacity planning
B. Capacity planning, TCO calculations, OpEx/CapEx allocation
C. Capacity planning, utilization measurement, data center expansion
D. Data center expansion, TCO calculations, utilization measurement
ANSWER4:
B
Notes/References4:
B is correct because all of these tasks are big changes when moving to the cloud. Capacity planning for cloud is different than for on-premises data centers; TCO calculations are adjusted because Company A is using services, not leasing/buying servers; OpEx/CapEx allocation is adjusted as services are consumed vs. using capital expenditures.
Question 5: For this question, refer to the Company A case study.
You analyzed Company A’s business requirement to reduce downtime and found that they can achieve a majority of time saving by reducing customers’ wait time for parts. You decided to focus on reduction of the 3 weeks’ aggregate reporting time. Which modifications to the company’s processes should you recommend?
A. Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics.
B. Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics.
C. Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.
D. Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor.
ANSWER5:
C
Notes/References5:
C is correct because using cellular connectivity will greatly improve the freshness of data used for analysis from where it is now, collected when the machines are in for maintenance. Streaming transport instead of periodic FTP will tighten the feedback loop even more. Machine learning is ideal for predictive maintenance workloads.
Question 6: Your company wants to deploy several microservices to help their system handle elastic loads. Each microservice uses a different version of software libraries. You want to enable their developers to keep their development environment in sync with the various production services. Which technology should you choose?
A. RPM/DEB
B. Containers
C. Chef/Puppet
D. Virtual machines
ANSWER6:
B
Notes/References6:
B is correct because using containers for development, test, and production deployments abstracts away system OS environments, so that a single host OS image can be used for all environments. Changes that are made during development are captured using a copy-on-write filesystem, and teams can easily publish new versions of the microservices in a repository.
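As a minimal sketch (the image name and registry path are hypothetical), the same container image a developer runs locally is the one published for production, which is what keeps the environments in sync:
# build once, run the exact same image everywhere
docker build -t gcr.io/my-project/orders-service:1.4.2 .
docker run --rm -p 8080:8080 gcr.io/my-project/orders-service:1.4.2   # local development/testing
docker push gcr.io/my-project/orders-service:1.4.2                    # publish for staging/production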
Question 7: Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. You want to support the data upload and collection needs of this sensor network. The receiving infrastructure needs to account for the possibility that the devices may have inconsistent connectivity. Which solution should you design?
A. Have each device create a persistent connection to a Compute Engine instance and write messages to a custom application.
B. Have devices poll for connectivity to Cloud SQL and insert the latest messages on a regular interval to a device specific table.
C. Have devices poll for connectivity to Cloud Pub/Sub and publish the latest messages on a regular interval to a shared topic for all devices.
D. Have devices create a persistent connection to an App Engine application fronted by Cloud Endpoints, which ingest messages and write them to Cloud Datastore.
ANSWER7:
C
Notes/References7:
C is correct because Cloud Pub/Sub can handle the frequency of this data, and consumers of the data can pull from the shared topic for further processing.
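A minimal sketch of that setup (the topic, subscription, and message payload are hypothetical): devices publish whenever they regain connectivity, and downstream consumers pull from one shared topic:
gcloud pubsub topics create room-motion-events
gcloud pubsub subscriptions create room-motion-processor --topic=room-motion-events
# a device (or a test client) publishing its latest reading
gcloud pubsub topics publish room-motion-events --message='{"room":"A-101","occupied":true,"ts":1700000000}'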
Question 8: Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup. Which two steps should they take?
A. Load logs into BigQuery.
B. Load logs into Cloud SQL.
C. Import logs into Stackdriver.
D. Insert logs into Cloud Bigtable.
E. Upload log files into Cloud Storage.
ANSWER8:
A and E
Notes/References8:
A is correct because BigQuery is the fully managed cloud data warehouse for analytics and supports the analytics requirement.
E is correct because Cloud Storage provides the Coldline storage class to support long-term storage with infrequent access, which would support the long-term disaster recovery backup requirement.
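A hedged sketch of those two steps (the bucket and dataset names are hypothetical, and --autodetect assumes the logs are in a parseable CSV layout):
# archive the raw logs cheaply for disaster recovery
gsutil mb -c coldline -l us gs://example-logs-archive
gsutil -m cp -r ./logs gs://example-logs-archive/
# load the same files into BigQuery for analytics
bq mk logs_analytics
bq load --autodetect --source_format=CSV logs_analytics.app_logs "gs://example-logs-archive/logs/*.csv"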
Question 9: You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified that the appropriate web response is coming from each instance using the curl command. You want to ensure that the backend is configured correctly. What should you do?
A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer.
B. Assign a public IP to each instance, and configure a firewall rule to allow the load balancer to reach the instance public IP.
C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.
ANSWER9:
C
Notes/References9:
C is correct because health check failures lead to a VM being marked unhealthy and can result in termination if the health check continues to fail. Because you have already verified that the instances are functioning properly, the next step would be to determine why the health check is continuously failing.
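For reference, a rule like the following is what is typically missing in this scenario (the rule name, network, and port are hypothetical; the source ranges are Google's documented health-check ranges):
gcloud compute firewall-rules create allow-lb-health-checks \
    --network=default \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --allow=tcp:80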
Question 10: Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier, and then on to the database tier. Traffic should not flow between the web and the database tier. How should you configure the network?
A. Add each tier to a different subnetwork.
B. Set up software-based firewalls on individual VMs.
C. Add tags to each tier and set up routes to allow the desired traffic flow.
D. Add tags to each tier and set up firewall rules to allow the desired traffic flow.
ANSWER10:
D
Notes/References10:
D is correct because as instances scale, they will all have the same tag to identify the tier. These tags can then be leveraged in firewall rules to allow and restrict traffic as required, because tags can be used for both the target and source.
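A minimal sketch, assuming hypothetical tag names (web, api, db), network, and ports: traffic is allowed web-to-API and API-to-database, and the absence of a web-to-db rule means that path stays blocked by default:
gcloud compute firewall-rules create web-to-api \
    --network=prod-net --allow=tcp:8080 \
    --source-tags=web --target-tags=api
gcloud compute firewall-rules create api-to-db \
    --network=prod-net --allow=tcp:5432 \
    --source-tags=api --target-tags=db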
Question 11: Your organization has 5 TB of private data on premises. You need to migrate the data to Cloud Storage. You want to maximize the data transfer speed. How should you migrate the data?
A. Use gsutil.
B. Use gcloud.
C. Use GCS REST API.
D. Use Storage Transfer Service.
ANSWER11:
A
Notes/References11:
A is correct because gsutil can write data directly to Cloud Storage, and its multi-threaded/multi-processing copies (the -m flag) and parallel composite uploads let you maximize transfer speed for a dataset of this size.
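A hedged example of what that transfer might look like (the local path and bucket name are hypothetical):
# -m parallelizes the copy across threads/processes; the -o option turns on parallel composite uploads for large files
gsutil -o "GSUtil:parallel_composite_upload_threshold=150M" -m cp -r /data/private gs://example-migration-bucket/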
Question 12: You are designing a mobile chat application. You want to ensure that people cannot spoof chat messages by proving that a message was sent by a specific user. What should you do?
A. Encrypt the message client-side using block-based encryption with a shared key.
B. Tag messages client-side with the originating user identifier and the destination user.
C. Use a trusted certificate authority to enable SSL connectivity between the client application and the server.
D. Use public key infrastructure (PKI) to encrypt the message client-side using the originating user’s private key.
ANSWER12:
D
Notes/References12:
D is correct because PKI requires that both the server and the client have signed certificates, validating both the client and the server.
Question 13: You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database backend. You want to store the credentials securely. Where should you store the credentials?
A. In the source code
B. In an environment variable
C. In a key management system
D. In a config file that has restricted access through ACLs
ANSWER13:
C
Notes/References13:
C is correct because a key management system keeps credentials encrypted, centrally managed, access-controlled, and auditable. Credentials embedded in source code, environment variables, or ACL-protected config files are far easier to leak or copy.
Question 14: For this question, refer to the Company B case study.
Company B wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?
A. Kubernetes Engine, Cloud Pub/Sub, and Cloud SQL
B. Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery
C. Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow
D. Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc
ANSWER14:
B
Notes/References14:
B is correct because: – Cloud Dataflow dynamically scales up or down, can process data in real time, and is ideal for processing data that arrives late using Beam windows and triggers. – Cloud Storage can be the landing space for files that are regularly uploaded by users’ mobile devices. – Cloud Pub/Sub can ingest the streaming data from the mobile users. BigQuery can query more than 10 TB of historical data.
Question 15: For this question, refer to the Company B case study.
Company B has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?
A. Create a scalable environment in GCP for simulating production load.
B. Use the existing infrastructure to test the GCP-based backend at scale.
C. Build stress tests into each component of your application and use resources from the already deployed production backend to simulate load.
D. Create a set of static environments in GCP to test different levels of load—for example, high, medium, and low.
ANSWER15:
A
Notes/References15:
A is correct because simulating production load in GCP can scale in an economical way.
Question 16: For this question, refer to the Company B case study.
Company B wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Company B has the following requirements:
Services are deployed redundantly across multiple regions in the US and Europe
Only frontend services are exposed on the public internet.
They can reserve a single frontend IP for their fleet of services.
Deployment artifacts are immutable
Which set of products should they use?
A. Cloud Storage, Cloud Dataflow, Compute Engine
B. Cloud Storage, App Engine, Cloud Load Balancing
C. Container Registry, Google Kubernetes Engine, Cloud Load Balancing
D. Cloud Functions, Cloud Pub/Sub, Cloud Deployment Manager
ANSWER16:
C
Notes/References16:
C is correct because: –Google Kubernetes Engine is ideal for deploying small services that can be updated and rolled back quickly. It is a best practice to manage services using immutable containers. –Cloud Load Balancing supports globally distributed services across multiple regions. It provides a single global IP address that can be used in DNS records. Using URL Maps, the requests can be routed to only the services that Company B wants to expose. –Container Registry is a single place for a team to manage Docker images for the services.
Question 17: Your customer is moving their corporate applications to Google Cloud Platform. The security team wants detailed visibility of all resources in the organization. You use Resource Manager to set yourself up as the org admin. What Cloud Identity and Access Management (Cloud IAM) roles should you give to the security team?
A. Org viewer, Project owner
B. Org viewer, Project viewer
C. Org admin, Project browser
D. Project owner, Network admin
ANSWER17:
B
Notes/References17:
B is correct because: –Org viewer grants the security team permissions to view the organization's display name. –Project viewer grants the security team permissions to see the resources within projects.
Question 18: To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department. Which two steps should you take?
A. Use persistent disks to store the state. Start and stop the VM as needed.
B. Use the --auto-delete flag on all persistent disks before stopping the VM.
C. Apply VM CPU utilization label and include it in the BigQuery billing export.
D. Use BigQuery billing export and labels to relate cost to groups.
E. Store all state in local SSD, snapshot the persistent disks, and terminate the VM.
F. Store all state in Cloud Storage, snapshot the persistent disks, and terminate the VM.
ANSWER18:
A and D
Notes/References18:
A is correct because persistent disks will not be deleted when an instance is stopped.
D is correct because exporting daily usage and cost estimates automatically throughout the day to a BigQuery dataset is a good way of providing visibility to the finance department. Labels can then be used to group the costs based on team or cost center.
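A quick sketch of the day-to-day flow (the instance name and label values are hypothetical, and a default zone is assumed):
# label the VM so the BigQuery billing export can group its costs by team/cost center
gcloud compute instances update dev-box-01 --update-labels=team=mobile,env=dev,cost-center=cc-1234
gcloud compute instances stop dev-box-01    # persistent disk state survives the stop
gcloud compute instances start dev-box-01   # next morning: same state, billing resumes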
Question 19: Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs. What should they do?
A. Configure a new load balancer for the new version of the API.
B. Reconfigure old clients to use a new endpoint for the new API.
C. Have the old API forward traffic to the new API based on the path.
D. Use separate backend services for each API path behind the load balancer.
ANSWER19:
D
Notes/References19:
D is correct because an HTTP(S) load balancer can direct traffic reaching a single IP to different backends based on the incoming URL.
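As a rough sketch (the URL map, backend service names, and host are hypothetical), a path matcher added to the existing URL map keeps the same IP, certificate, and DNS record while sending /v2 traffic to the new backend:
gcloud compute url-maps add-path-matcher api-lb-map \
    --path-matcher-name=api-versions \
    --default-service=api-v1-backend \
    --path-rules="/v2/*=api-v2-backend" \
    --new-hosts=api.example.com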
Question 20: The database administration team has asked you to help them improve the performance of their new database server running on Compute Engine. The database is used for importing and normalizing the company’s performance statistics. It is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD zonal persistent disk. What should they change to get better performance from this system in a cost-effective manner?
A. Increase the virtual machine’s memory to 64 GB.
B. Create a new virtual machine running PostgreSQL.
C. Dynamically resize the SSD persistent disk to 500 GB.
D. Migrate their performance metrics warehouse to BigQuery.
ANSWER20:
C
Notes/References20:
C is correct because persistent disk performance is based on the total persistent disk capacity attached to an instance and the number of vCPUs that the instance has. Incrementing the persistent disk capacity will increment its throughput and IOPS, which in turn improve the performance of MySQL.
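The change itself is a one-liner (the disk name and zone are hypothetical); remember to also grow the partition and filesystem inside the VM afterwards:
gcloud compute disks resize mysql-data-disk --size=500GB --zone=us-central1-a
# then, inside the VM, extend the filesystem (e.g. resize2fs for ext4) so MySQL can use the new space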
Question 21: You need to ensure low-latency global access to data stored in a regional GCS bucket. Data access is uniform across many objects and relatively high. What should you do to address the latency concerns?
A. Use Google’s Cloud CDN.
B. Use Premium Tier routing and Cloud Functions to accelerate access at the edges.
C. Do nothing.
D. Use global BigTable storage.
E. Use a global Cloud Spanner instance.
F. Migrate the data to a new multi-regional GCS bucket.
G. Change the storage class to multi-regional.
ANSWER21:
A
Notes/References21:
Cloud Functions cannot be used to affect GCS data access, so that option is simply wrong. BigTable does not have any “global” mode, so that option is wrong, too. Cloud Spanner is not a good replacement for GCS data: the data use cases are different enough that it would probably not be a good fit. You cannot change a bucket’s location after it has been created–neither via the storage class nor any other way; you would have to migrate the data to a new bucket. Google’s Cloud CDN is very easy to turn on, but it only works for data served from within GCP and only when objects are accessed frequently enough to stay cached. Because access volume here is relatively high, the objects will cache well, so enabling Cloud CDN is the simplest way to provide low-latency global access without migrating the data.
Question 22: You are building a sign-up app for your local neighbourhood barbeque party and you would like to quickly throw together a low-cost application that tracks who will bring what. Which of the following options should you choose?
A. Python, Flask, App Engine Standard
B. Ruby, Nginx, GKE
C. HTML, CSS, Cloud Storage
D. Node.js, Express, Cloud Functions
E. Rust, Rocket, App Engine Flex
F. Perl, CGI, GCE
ANSWER22:
A
Notes/References22:
The Cloud Storage option doesn’t offer any way to coordinate the guest data. App Engine Flex would cost much more to run when no one is on the sign-up site. Cloud Functions could handle processing some API calls, but it would be more work to set up and that option doesn’t mention anything about storage. GKE is way overkill for such a small and simple application. Running Perl CGI scripts on GCE would also cost more than it needs (and probably make you very sad). App Engine Standard makes it super-easy to stand up a Python Flask app and includes easy data storage options, too.
Question 23: Your company has decided to migrate your AWS DynamoDB database to a multi-regional Cloud Spanner instance and you are designing the system to transfer and load all the data to synchronize the DBs and eventually allow for a quick cut-over. A member of your team has some previous experience working with Apache Hadoop. Which of the following options will you choose for the streamed updates that follow the initial import?
A. The DynamoDB table change is captured by Cloud Pub/Sub and written to Cloud Dataproc for processing into a Spanner-compatible format.
B. The DynamoDB table change is captured by Cloud Pub/Sub and written to Cloud Dataflow for processing into a Spanner-compatible format.
C. Changes to the DynamoDB table are captured by DynamoDB Streams. A Lambda function triggered by the stream writes the change to Cloud Pub/Sub. Cloud Dataflow processes the data from Cloud Pub/Sub and writes it to Cloud Spanner.
D. The DynamoDB table is rescanned by a GCE instance and written to a Cloud Storage bucket. Cloud Dataproc processes the data from Cloud Storage and writes it to Cloud Spanner.
E. The DynamoDB table is rescanned by an EC2 instance and written to an S3 bucket. Storage Transfer Service moves the data from S3 to a Cloud Storage bucket. Cloud Dataflow processes the data from Cloud Storage and writes it to Cloud Spanner.
ANSWER23:
C
Notes/References23:
Rescanning the DynamoDB table is not an appropriate approach to tracking data changes to keep the GCP side of this in sync. The fact that someone on your team has previous Hadoop experience is not a good enough reason to choose Cloud Dataproc; that’s a red herring. The options purporting to connect Cloud Pub/Sub directly to the DynamoDB table won’t work because there is no such functionality.
Question 24: Your client is a manufacturing company and they have informed you that they will be pausing all normal business activities during a five-week summer holiday period. They normally employ thousands of workers who constantly connect to their internal systems for day-to-day manufacturing data such as blueprints and machine imaging, but during this period the few on-site staff will primarily be re-tooling the factory for the next year’s production runs and will not be performing any manufacturing tasks that need to access these cloud-based systems. When the bulk of the staff return, they will primarily work on the new models but may spend about 20% of their time working with models from previous years. The company has asked you to reduce their GCP costs during this time, so which of the following options will you suggest?
A. Pause all Cloud Functions via the UI and unpause them when work starts back up.
B. Disable all Cloud Functions via the command line and re-enable them when work starts back up.
C. Delete all Cloud Functions and recreate them when work starts back up.
D. Convert all Cloud Functions to run as App Engine Standard applications during the break.
E. None of these options is a good suggestion.
ANSWER24:
E
Notes/References24:
Cloud Functions scale themselves down to zero when they’re not being used. There is no need to do anything with them.
Question 25: You need a place to store images before updating them by file-based render farm software running on a cluster of machines. Which of the following options will you choose?
A. Container Registry
B. Cloud Storage
C. Cloud Filestore
D. Persistent Disk
ANSWER25:
C
Notes/References25:
There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” refers to visual images, thus eliminating CI/CD products like Container Registry. The term “file-based” software means that it is unlikely to work well with object-based storage like Cloud Storage (or any of its storage classes). Persistent Disk cannot offer shared access across a cluster of machines when writes are involved; it only handles multiple readers. However, Cloud Filestore is made to provide shared, file-based storage for a cluster of machines as described in the question.
Question 26: Your company has decided to migrate your AWS DynamoDB database to a multi-regional Cloud Spanner instance and you are designing the system to transfer and load all the data to synchronize the DBs and eventually allow for a quick cut-over. A member of your team has some previous experience working with Apache Hadoop. Which of the following options will you choose for the initial data import?
A. The DynamoDB table is scanned by an EC2 instance and written to an S3 bucket. Storage Transfer Service moves the data from S3 to a Cloud Storage bucket. Cloud Dataflow processes the data from Cloud Storage and writes it to Cloud Spanner.
B. The DynamoDB table data is captured by DynamoDB Streams. A Lambda function triggered by the stream writes the data to Cloud Pub/Sub. Cloud Dataflow processes the data from Cloud Pub/Sub and writes it to Cloud Spanner.
C. The DynamoDB table data is captured by Cloud Pub/Sub and written to Cloud Dataproc for processing into a Spanner-compatible format.
D. The DynamoDB table is scanned by a GCE instance and written to a Cloud Storage bucket. Cloud Dataproc processes the data from Cloud Storage and writes it to Cloud Spanner.
ANSWER26:
A
Notes/References26:
The same data processing will have to happen for both the initial (batch) data load and the incremental (streamed) data changes that follow it. So if the solution built to handle the initial batch doesn't also work for the stream that follows it, then the processing code would have to be written twice. A Professional Cloud Architect should recognize this project-level issue and not over-focus on the (batch) portion called out in this particular question. This is why you don’t want to choose Cloud Dataproc. Instead, Cloud Dataflow will handle both the initial batch load and also the subsequent streamed data. The fact that someone on your team has previous Hadoop experience is not a good enough reason to choose Cloud Dataproc; that’s a red herring. The DynamoDB streams option would be great for the db synchronization that follows, but it can’t handle the initial data load because DynamoDB Streams only fire for data changes. The option purporting to connect Cloud Pub/Sub directly to the DynamoDB table won’t work because there is no such functionality.
Question 27: You need a managed service to handle logging data coming from applications running in GKE and App Engine Standard. Which option should you choose?
A. Cloud Storage
B. Logstash
C. Cloud Monitoring
D. Cloud Logging
E. BigQuery
F. BigTable
ANSWER27:
D
Notes/References27:
Cloud Monitoring is made to handle metrics, not logs. Logstash is not a managed service. And while you could store application logs in almost any storage service, the Cloud Logging service–aka Stackdriver Logging–is purpose-built to accept and process application logs from many different sources. Oh, and you should also be comfortable dealing with products and services by names other than their current official ones. For example, “GKE” used to be called “Container Engine”, “Cloud Build” used to be “Container Builder”, the “GCP Marketplace” used to be called “Cloud Launcher”, and so on.
Question 28: You need a place to store images before serving them from AppEngine Standard. Which of the following options will you choose?
A. Compute Engine
B. Cloud Filestore
C. Cloud Storage
D. Persistent Disk
E. Container Registry
F. Cloud Source Repositories
G. Cloud Build
H. Nearline
ANSWER28:
C
Notes/References28:
There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” refers to picture files, because that’s something that you would serve from a web server product like AppEngine Standard, so we eliminate Cloud Build (which isn’t actually for storage, at all) and the other two CI/CD products: Cloud Source Repositories and Container Registry. You definitely could store image files on Cloud Filestore or Persistent Disk, but you can’t hook those up to AppEngine Standard, so those options need to be eliminated, too. The only options left are both types of Cloud Storage, but since “Cloud Storage” sits next to “Nearline” as an option, we can confidently infer that the former refers to the “Standard” storage class. Since the question implies that these images will be served by AppEngine Standard, we would prefer to use the Standard storage class over the Nearline one–so there’s our answer.
Question 29: You need to ensure low-latency global access to data stored in a multi-regional GCS bucket. Data access is uniform across many objects and relatively low. What should you do to address the latency concerns?
A. Use a global Cloud Spanner instance.
B. Change the storage class to multi-regional.
C. Use Google’s Cloud CDN.
D. Migrate the data to a new regional GCS bucket.
E. Do nothing.
F. Use global BigTable storage.
ANSWER29:
E
Notes/References29:
BigTable does not have any “global” mode, so that option is wrong. Cloud Spanner is not a good replacement for GCS data: the data use cases are different enough that it would probably not be a good fit. You cannot change a bucket’s location after it has been created–neither via the storage class nor any other way; you would have to migrate the data to a new bucket. But migrating the data to a regional bucket only helps when the data access will primarily be from that region. Google’s Cloud CDN is very easy to turn on, but it only works for data served from within GCP and only if the objects are being accessed frequently enough to get cached based on previous requests. Because the access per object is so low, Cloud CDN won’t really help. This then brings us back to the question. Now, it may seem implied, but the question does not specifically state that there is currently a problem with latency, only that you need to ensure low latency–and we are already using what would be the best fit for this situation: a multi-regional GCS bucket.
Question 30: You need to ensure low-latency GCP access to a volume of historical data that is currently stored in an S3 bucket. Data access is uniform across many objects and relatively high. What should you do to address the latency concerns?
A. Use Premium Tier routing and Cloud Functions to accelerate access at the edges.
B. Use Google’s Cloud CDN.
C. Use global BigTable storage.
D. Do nothing.
E. Migrate the data to a new multi-regional GCS bucket.
F. Use a global Cloud Spanner instance.
ANSWER30:
E
Notes/References30:
Cloud Functions cannot be used to affect GCS data access, so that option is simply wrong. BigTable does not have any “global” mode, so that option is wrong, too. Cloud Spanner is not a good replacement for this data: the use cases are different enough that it would probably not be a good fit–and it would likely be unnecessarily expensive. You cannot change a bucket’s location after it has been created–neither via the storage class nor any other way; you would have to migrate the data to a new bucket. Google’s Cloud CDN is very easy to turn on, but it only works for data served from within GCP and only if the objects are accessed frequently enough. So even if you want to use Cloud CDN, you would have to migrate the data into a GCS bucket first, which makes migration the better option.
Question 31: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend and backend tiers and will be deployed in three regions. How many subnets will you need?
A. Six
B. One
C. Three
D. Four
E. Two
F. Nine
ANSWER31:
A
Notes/References31:
A single subnet spans and can be used across all zones in a single region, but you will need different subnets in different regions. Also, to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have two tiers which will each need their own subnet in each of the three regions in which you will deploy this system.
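A sketch of what that provisioning could look like, assuming hypothetical network, regions, and CIDR ranges (two tiers × three regions = six subnets):
gcloud compute networks subnets create frontend-us   --network=prod-net --region=us-central1  --range=10.0.1.0/24
gcloud compute networks subnets create backend-us    --network=prod-net --region=us-central1  --range=10.0.2.0/24
gcloud compute networks subnets create frontend-eu   --network=prod-net --region=europe-west1 --range=10.1.1.0/24
gcloud compute networks subnets create backend-eu    --network=prod-net --region=europe-west1 --range=10.1.2.0/24
gcloud compute networks subnets create frontend-asia --network=prod-net --region=asia-east1   --range=10.2.1.0/24
gcloud compute networks subnets create backend-asia  --network=prod-net --region=asia-east1   --range=10.2.2.0/24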
Question 32: You need a place to produce images before deploying them to AppEngine Flex. Which of the following options will you choose?
A. Container Registry
B. Cloud Storage
C. Persistent Disk
D. Nearline
E. Cloud Source Repositories
F. Cloud Build
G. Cloud Filestore
H. Compute Engine
ANSWER32:
F
Notes/References32:
There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “deploying [these images] to AppEngine Flex” lets us know that we are dealing with Docker container images, and thus although they would likely be stored in the Container Registry, after being built, this question asks us where that building might happen, which is Cloud Build. Cloud Build, which used to be called Container Builder, is ideal for building container images–though it can also be used to build almost any artifacts, really. You could also do this on Compute Engine, but that option requires much more work to manage and is therefore worse.
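A minimal sketch of that build-then-deploy flow (the project, image name, and tag are hypothetical):
# Cloud Build produces the container image and pushes it to Container Registry
gcloud builds submit --tag gcr.io/my-project/my-flex-app:v1 .
# App Engine Flex then deploys that prebuilt image
gcloud app deploy app.yaml --image-url=gcr.io/my-project/my-flex-app:v1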
Question 33: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend, app, and data tiers and will be deployed in three regions. How many subnets will you need?
A. Two
B. One
C. Three
D. Nine
E. Four
F. Six
ANSWER33:
D
Notes/References33:
A single subnet spans and can be used across all zones in a single region, but you will need different subnets in different regions. Also, to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have three tiers which will each need their own subnet in each of the three regions in which you will deploy this system.
Question 34: You need a place to store images in case any of them are needed as evidence for a tax audit over the next seven years. Which of the following options will you choose?
A. Cloud Filestore
B. Coldline
C. Nearline
D. Persistent Disk
E. Cloud Source Repositories
F. Cloud Storage
G. Container Registry
ANSWER34:
B
Notes/References34:
There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” probably refers to picture files, and so Cloud Storage seems like an interesting option. But even still, when “Cloud Storage” is used without any qualifier, it generally refers to the “Standard” storage class, and this question also offers other storage classes as response options. Because the images in this scenario are unlikely to be used more than once a year (we can assume that taxes are filed annually and there’s less than 100% chance of being audited), the right storage class is Coldline.
Question 35: You need a place to store images before deploying them to AppEngine Flex. Which of the following options will you choose?
A. Container Registry
B. Cloud Filestore
C. Cloud Source Repositories
D. Persistent Disk
E. Cloud Storage
F. Cloud Build
G. Nearline
ANSWER35:
A
Notes/References35:
There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “deploying [these images] to AppEngine Flex” lets us know that we are dealing with Docker container images, and thus they would likely have been stored in the Container Registry.
Question 36: You are configuring a SaaS security application that updates your network’s allowed traffic configuration to adhere to internal policies. How should you set this up?
A. Install the application on a new appropriately-sized GCE instance running in your host VPC, and apply a read-only service account to it.
B. Create a new service account for the app to use and grant it the compute.networkViewer role on the production VPC.
C. Create a new service account for the app to use and grant it the compute.securityAdmin role on the production VPC.
D. Run the application as a container in your system’s staging GKE cluster and grant it access to a read-only service account.
E. Install the application on a new appropriately-sized GCE instance running in your host VPC, and let it use the default service account.
ANSWER36:
C
Notes/References36:
You do not install a Software-as-a-Service application yourself; instead, it runs on the vendor's own hardware and you configure it for external access. Service accounts are great for this, as they can be used externally and you maintain full control over them (disabling them, rotating their keys, etc.). The principle of least privilege dictates that you should not give any application more ability than it needs, but this app does need to make changes, so you'll need to grant securityAdmin, not networkViewer.
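A hedged sketch of the setup (the service account and project names are hypothetical): create a dedicated service account for the SaaS app and bind only the role it actually needs:
gcloud iam service-accounts create saas-firewall-manager --display-name="SaaS security app"
gcloud projects add-iam-policy-binding prod-project-id \
    --member="serviceAccount:saas-firewall-manager@prod-project-id.iam.gserviceaccount.com" \
    --role="roles/compute.securityAdmin"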
Question 37: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend and backend tiers and will be deployed across three zones. How many subnets will you need?
A. One
B. Six
C. Four
D. Three
E. Nine
F. Two
ANSWER37:
F
Notes/References37:
A single subnet spans and can be used across all zones in a given region. But to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have two tiers, so you only need two subnets.
Question 38:You have been tasked with setting up a system to comply with corporate standards for container image approvals. Which of the following is your best choice for this project?
A. Binary Authorization
B. Cloud IAM
C. Security Key Enforcement
D. Cloud SCC
E. Cloud KMS
ANSWER38:
A
Notes/References38:
Cloud KMS is Google's product for managing encryption keys. Security Key Enforcement is about making sure that people's accounts do not get taken over by attackers, not about managing encryption keys. Cloud IAM is about managing what identities (both humans and services) can access in GCP. Cloud DLP–or Data Loss Prevention–is for preventing data loss by scanning for and redacting sensitive information. Cloud SCC–the Security Command Center–centralizes security information so you can manage it all in one place. Binary Authorization is about making sure that only properly-validated containers can run in your environments.
Question 39: For this question, refer to the Company B case study. Which of the following are most likely to impact the operations of Company B’s game backend and analytics systems?
A. PCI
B. PII
C. SOX
D. GDPR
E. HIPAA
ANSWER39:
B and D
Notes/References39:
There is no patient/health information, so HIPAA does not apply. It would be a very bad idea to put payment card information directly into these systems, so we should assume they’ve not done that–therefore the Payment Card Industry (PCI) standards/regulations should not affect normal operation of these systems. Besides, it’s entirely likely that they never deal with payments directly, anyway–choosing to offload that to the relevant app stores for each mobile platform. Sarbanes-Oxley (SOX) is about proper management of financial records for publicly traded companies and should therefore not apply to these systems. However, these systems are likely to contain some Personally-Identifying Information (PII) about the users who may reside in the European Union and therefore the EU’s General Data Protection Regulations (GDPR) will apply and may require ongoing operations to comply with the “Right to be Forgotten/Erased”.
Question 40:Your new client has advised you that their organization falls within the scope of HIPAA. What can you infer about their information systems?
A. Their customers located in the EU may require them to delete their user data and provide evidence of such.
B. They will also need to pass a SOX audit.
C. They handle money-linked information.
D. Their system deals with medical information.
ANSWER40:
D
Notes/References40:
SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals' (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others).
Question 41:Your new client has advised you that their organization needs to pass audits by ISO and PCI. What can you infer about their information systems?
A. They handle money-linked information.
B. Their customers located in the EU may require them to delete their user data and provide evidence of such.
C. Their system deals with medical information.
D. They will also need to pass a SOX audit.
ANSWER41:
A
Notes/References41:
SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals' (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others). ISO is the International Standards Organization, and since they have so many completely different certifications, this does not tell you much.
Question 43:Your new client has advised you that their organization deals with GDPR. What can you infer about their information systems?
A. Their system deals with medical information.
B. Their customers located in the EU may require them to delete their user data and provide evidence of such.
C. They will also need to pass a SOX audit.
D. They handle money-linked information.
ANSWER43:
B
Notes/References43:
SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals' (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others).
Question 44:For this question, refer to the Company C case study. Once Company C has completed their initial cloud migration as described in the case study, which option would represent the quickest way to migrate their production environment to GCP?
A. Apply the strangler pattern to their applications and reimplement one piece at a time in the cloud
B. Lift and shift all servers at one time
C. Lift and shift one application at a time
D. Lift and shift one server at a time
E. Set up cloud-based load balancing then divert traffic from the DC to the cloud system
F. Enact their disaster recovery plan and fail over
ANSWER44:
F
Notes/References44:
The proposed lift-and-shift options all describe situations different from the one Company C would be in at that point: by then they’d have automation to build a complete prod system in the cloud and would just need to migrate to it. “Just”, right? 🙂 The strangler pattern approach is similarly problematic (in this case), in that it proposes a completely different cloud migration strategy than the one they’ve almost completed. Now, if we purely consider the kicker’s key word “quickest”, using the DR plan to fail over definitely seems like it wins. Setting up an additional load balancer and migrating slowly/carefully would take more time.
Question 45: Which of the following commands is most likely to appear in an environment setup script?
A. gsutil mb -l asia gs://${project_id}-logs
B. gcloud compute instances create --zone --machine-type=n1-highmem-16 newvm
C. gcloud compute instances create --zone --machine-type=f1-micro newvm
D. gcloud compute ssh ${instance_id}
E. gsutil cp -r gs://${project_id}-setup ./install
F. gsutil cp -r logs/* gs://${project_id}-logs/${instance_id}/
ANSWER45:
A
Notes/References45:
The context here indicates that “environment” is an infrastructure environment like “staging” or “prod”, not just a particular command shell. In that sort of a situation, it is likely that you might create some core per-environment buckets that will store different kinds of data like configuration, communication, logging, etc. You're not likely to be creating, deleting, or connecting (sshing) to instances, nor copying files to or from any instances.
Question 46:Your developers are working to expose a RESTful API for your company’s physical dealer locations. Which of the following endpoints would you advise them to include in their design?
A. /dealerLocations/get
B. /dealerLocations
C. /dealerLocations/list
D. Source and destination
E. /getDealerLocations
ANSWER46:
B
Notes/References46:
It might not feel like it, but this is in scope and a fair question. Google expects Professional Cloud Architects to be able to advise on designing APIs according to best practices (check the exam guide!). In this case, it's important to know that RESTful interfaces (when properly designed) use nouns for the resources identified by a given endpoint. That, by itself, eliminates most of the listed options. In HTTP, verbs like GET, PUT, and POST are then used to interact with those endpoints to retrieve and act upon those resources. To choose between the two noun-named options, it helps to know that plural resources are generally already understood to be lists, so there should be no need to add another “/list” to the endpoint.
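A tiny illustration (the hostname and payload are hypothetical) of nouns-for-resources with HTTP verbs doing the work:
curl https://api.example.com/dealerLocations        # GET: list dealer locations
curl https://api.example.com/dealerLocations/42     # GET: fetch one location by id
curl -X POST https://api.example.com/dealerLocations \
     -H "Content-Type: application/json" -d '{"city":"Austin"}'   # POST: create a new location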
Question 47: Which of the following commands is most likely to appear in an instance shutdown script?
A. gsutil cp -r gs://${project_id}-setup ./install
B. gcloud compute instances create --zone --machine-type=n1-highmem-16 newvm
C. gcloud compute ssh ${instance_id}
D. gsutil mb -l asia gs://${project_id}-logs
E. gcloud compute instances delete ${instance_id}
F. gsutil cp -r logs/* gs://${project_id}-logs/${instance_id}/
G. gcloud compute instances create --zone --machine-type=f1-micro newvm
ANSWER47:
F
Notes/References47:
The startup and shutdown scripts run on an instance at the time when that instance is starting up or shutting down. Those situations do not generally call for any other instances to be created, deleted, or connected (sshed) to. Also, those would be a very unusual time to make a Cloud Storage bucket, since buckets are the overall and highly-scalable containers that would likely hold the data for all (or at least many) instances in a given project. That said, instance shutdown time may be a time when you'd want to copy some final logs from the instance into some project-wide bucket. (In general, though, you really want to be doing that kind of thing continuously and not just at shutdown time, in case the instance shuts down unexpectedly and not in an orderly fashion that runs your shutdown script.)
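A hedged sketch of that log-copying shutdown script (the instance name and bucket are hypothetical; $(hostname) expands on the instance when the script actually runs at shutdown):
cat > shutdown.sh <<'EOF'
#!/bin/bash
# copy any remaining application logs to the project-wide logs bucket
gsutil cp -r /var/log/myapp/* gs://example-project-logs/$(hostname)/
EOF
gcloud compute instances add-metadata my-instance --metadata-from-file shutdown-script=shutdown.sh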
Question 48:It is Saturday morning and you have been alerted to a serious issue in production that is both reducing availability to 95% and corrupting some data. Your monitoring tools noticed the issue 5 minutes ago and it was just escalated to you because the on-call tech in line before you did not respond to the page. Your system has an RPO of 10 minutes and an RTO of 120 minutes, with an SLA of 90% uptime. What should you do first?
A. Escalate the decision to the business manager responsible for the SLA
B. Take the system offline
C. Revert the system to the state it was in on Friday morning
D. Investigate the cause of the issue
ANSWER48:
B
Notes/References48:
The data corruption is your primary concern, as your Recovery Point Objective allows only 10 minutes of data loss and you may already have lost 5. (The data corruption means that you may well need to roll back the data to before that started happening.) It might seem crazy, but you should as quickly as possible stop the system so that you do not lose any more data. It would almost certainly take more time than you have left in your RPO to properly investigate and address the issue, but you should then do that next, during the disaster response clock set by your Recovery Time Objective. Escalating the issue to a business manager doesn't make any sense. And neither does it make sense to knee-jerk revert the system to an earlier state unless you have some good indication that doing so will address the issue. Plus, we'd better assume that “revert the system” refers only to the deployment and not the data, because rolling the data back that far would definitely violate the RPO.
Question 49:Which of the following are not processes or practices that you would associate with DevOps?
A. Raven-test the candidate
B. Obfuscate the code
C. Only one of the other options is made up
D. Run the code in your cardinal environment
E. Do a canary deploy
ANSWER49:
A and D
Notes/References49:
Testing your understanding of development and operations in DevOps. In particular, you need to know that a canary deploy is a real thing and it can be very useful to identify problems with a new change you're making before it is fully rolled out to and therefore impacts everyone. You should also understand that “obfuscating” code is a real part of a release process that seeks to protect an organization's source code from theft (by making it unreadable by humans) and usually happens in combination with “minification” (which improves the speed of downloading and interpreting/running the code). On the other hand, “raven-testing” isn't a thing, and neither is a “cardinal environment”. Those bird references are just homages to canary deployments.
Question 50:Your CTO is going into budget meetings with the board, next month, and has asked you to draw up plans to optimize your GCP-based systems for capex. Which of the following options will you prioritize in your proposal?
A. Object lifecycle management
B. BigQuery Slots
C. Committed use discounts
D. Sustained use discounts
E. Managed instance group autoscaling
F. Pub/Sub topic centralization
ANSWER50:
B and C
Notes/References50:
Pub/Sub usage is based on how much data you send through it, not any sort of “topic centralization” (which isn't really a thing). Sustained use discounts can reduce costs, but that's not really something you structure your system around. Now, most organizations prefer to turn Capital Expenditures into Operational Expenses, but since this question is instead asking you to prioritize CapEx, we need to consider the remaining options from the perspective of “spending” (or maybe reserving) defined amounts of money up-front for longer-term use. (Fair warning, though: You may still have some trouble classifying some cloud expenses as “capital” expenditures). With that in mind, GCE's Committed Use Discounts do fit: you “buy” (reserve/prepay) some instances ahead of time and then not have to pay (again) for them as you use them (or don't use them; you've already paid). BigQuery Slots are a similar flat-rate pricing model: you pre-purchase a certain amount of BigQuery processing capacity and your queries use that instead of the on-demand capacity. That means you won't pay more than you planned/purchased, but your queries may finish rather more slowly, too. Managed instance group autoscaling and object lifecycle management can help to reduce costs, but they are not really about capex.
Question 51:In your last retrospective, there was significant disagreement voiced by the members of your team about what part of your system should be built next. Your scrum master is currently away, but how should you proceed when she returns, on Monday?
A. The scrum master is the one who decides
B. The lead architect should get the final say
C. The product owner should get the final say
D. You should put it to a vote of key stakeholders
E. You should put it to a vote of all stakeholders
ANSWER51:
C
Notes/References51:
In Scrum, it is the Product Owner's role to define and prioritize (i.e. set order for) the product backlog items that the dev team will work on. If you haven't ever read it, the Scrum Guide is not too long and quite valuable to have read at least once, for context.
Question 52:Your development team needs to evaluate the behavior of a new version of your application for approximately two hours before committing to making it available to all users. Which of the following strategies will you suggest?
A. Split testing
B. Red-Black
C. A/B
D. Canary
E. Rolling
F. Blue-Green
G. Flex downtime
ANSWER52:
D and E
Notes/References52:
A Blue-Green deployment, also known as a Red-Black deployment, entails having two complete systems set up and cutting over from one of them to the other with the ability to cut back to the known-good old one if there’s any problem with the experimental new one. A canary deployment is where a new version of an app is deployed to only one (or a very small number) of the servers, to see whether it experiences or causes trouble before that version is rolled out to the rest of the servers. When the canary looks good, a Rolling deployment can be used to update the rest of the servers, in-place, one after another to keep the overall system running. “Flex downtime” is something I just made up, but it sounds bad, right? A/B testing–also known as Split testing–is not generally used for deployments but rather to evaluate two different application behaviours by showing both of them to different sets of users. Its purpose is to gather higher-level information about how users interact with the application.
Question 53:You are mentoring a Junior Cloud Architect on software projects. Which of the following “words of wisdom” will you pass along?
A. Identifying and fixing one issue late in the product cycle could cost the same as handling a hundred such issues earlier on
B. Hiring and retaining 10X developers is critical to project success
C. A key goal of a proper post-mortem is to identify what processes need to be changed
D. Adding 100% is a safe buffer for estimates made by skilled estimators at the beginning of a project
E. A key goal of a proper post-mortem is to determine who needs additional training
ANSWER53:
A and C
Notes/References53:
There really can be 10X (and even larger!) differences in productivity between individual contributors, but projects do not only succeed or fail because of their contributions. Bugs are crazily more expensive to find and fix once a system has gone into production, compared to identifying and addressing that issue right up front–yes, even 100x. A post-mortem should not focus on blaming an individual but rather on understanding the many underlying causes that led to a particular event, with an eye toward how such classes of problems can be systematically prevented in the future.
Question 54:Your team runs a service with an SLA to achieve p99 latency of 200ms. This month, your service achieved p95 latency of 250ms. What will happen now?
A. The next month’s SLA will be increased.
B. The next month’s SLO will be reduced.
C. Your client(s) will have to pay you extra.
D. You will have to pay your client(s).
E. There is no impact on payments.
F. There is not enough information to make a determination.
ANSWER54:
D
Notes/References54:
It would be highly unusual for clients to have to pay extra, even if the service performs better than agreed by the SLA. SLAs generally set out penalties (i.e. you pay the client) for below-standard performance. While SLAs are external-facing, SLOs are internal-facing and do not generally relate to performance penalties. Neither SLAs nor SLOs are adaptively changed just because of one month’s performance; such changes would have to happen through rather different processes. A p99 metric is a tougher measure than p95, and p95 is tougher than p90–so meeting the tougher measure would surpass a required SLA, but meeting a weaker measure would not give enough information to say.
Question 55:Your team runs a service with an SLO to achieve p90 latency of 200ms. This month, your service achieved p95 latency of 250ms. What will happen now?
A. The next month’s SLA will be increased.
B. There is no impact on payments.
C. There is not enough information to make a determination.
D. Your client(s) will have to pay you extra.
E. The next month’s SLO will be reduced.
F. You will have to pay your client(s).
ANSWER55:
B
Notes/References55:
It would be highly unusual for clients to have to pay extra, even if the service performs better than agreed by the SLA. SLAs generally set out penalties (i.e. you pay the client) for below-standard performance. While SLAs are external-facing, SLOs are internal-facing and do not generally relate to performance penalties. Neither SLAs nor SLOs are adaptively changed just because of one month’s performance; such changes would have to happen through rather different processes. A p99 metric is a tougher measure than p95, and p95 is tougher than p90–so meeting the tougher measure would surpass a required SLA, but meeting a weaker measure would not give enough information to say.
Question 56: For this question, refer to the Company C case study. How would you recommend Company C address their capacity and utilization concerns?
A. Configure the autoscaling thresholds to follow changing load
B. Provision enough servers to handle trough load and offload to Cloud Functions for higher demand
C. Run cron jobs on their application servers to scale down at night and up in the morning
D. Use Cloud Load Balancing to balance the traffic highs and lows
E. Run automated jobs in Cloud Scheduler to scale down at night and up in the morning
F. Provision enough servers to handle peak load and sell back excess on-demand capacity to the marketplace
ANSWER56:
A
Notes/References56:
The case study notes, “Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle.” Cloud Load Balancing could definitely scale itself to handle this type of load fluctuation, but it would not do anything to address the issue of having enough application server capacity. Provisioning servers to handle peak load is generally inefficient, but selling back excess on-demand capacity to the marketplace just isn’t a thing, so that option must be eliminated, too. Using Cloud Functions would require a different architectural approach for their application servers and it is generally not worth the extra work it would take to coordinate workloads across Cloud Functions and GCE–in practice, you’d just use one or the other. It is possible to manually effect scaling via automated jobs like in Cloud Scheduler or cron running somewhere (though cron running everywhere could create a coordination nightmare), but manual scaling based on predefined expected load levels is far from ideal, as capacity would only very crudely match demand. Rather, it is much better to configure the managed instance group’s autoscaling to follow demand curves–both expected and unexpected. A properly-architected system should rise to the occasion of unexpectedly going viral, and not fall over.
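A sketch of that recommendation in gcloud terms (the instance group name, region, and thresholds are hypothetical): let the managed instance group's autoscaler follow CPU load rather than a fixed day/night schedule:
gcloud compute instance-groups managed set-autoscaling app-servers-mig \
    --region=us-central1 \
    --min-num-replicas=4 --max-num-replicas=40 \
    --target-cpu-utilization=0.65 \
    --cool-down-period=90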
Google Cloud Latest News, Questions and Answers online:
Cloud Run vs App Engine: In a nutshell, you give Google’s Cloud Run a Docker container containing a webserver. Google will run this container and create an HTTP endpoint. All the scaling is automatically done for you by Google. Cloud Run depends on the fact that your application should be stateless. This is because Google will spin up multiple instances of your app to scale it dynamically. If you want to host a traditional web application this means that you should divide it up into a stateless API and a frontend app.
With Google’s App Engine, you tell Google how your app should be run. App Engine will create and run a container from these instructions. Deploying with App Engine is super easy: you simply fill out an app.yaml file and Google handles everything for you.
With Cloud Run, you have more control. You can go crazy and build a ridiculous custom Docker image, no problem! Cloud Run is made for DevOps engineers; App Engine is made for developers. Read more here…
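To make the "stateless container with an HTTP endpoint" contract concrete, here is a minimal Python server of the sort you could package into a Docker image for Cloud Run. The one detail taken from Cloud Run itself is that it injects the listening port through the PORT environment variable; the handler body and the 8080 fallback are just illustrative.

```python
# Minimal stateless HTTP server suitable for packaging into a container.
# Cloud Run passes the port to listen on via the PORT environment variable.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # No per-instance state is kept, so any instance can serve any request.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from a stateless container\n")

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))  # 8080 fallback for local testing
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```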
The best choice depends on what you want to optimize, your use-cases and your specific needs.
If your objective is the lowest latency, choose Cloud Run.
Indeed, Cloud Run always uses 1 vCPU (at least 2.4 GHz), and you can choose the memory size from 128 MB to 2 GB.
With Cloud Functions, if you want the best processing performance (a 2.4 GHz CPU), you have to pay for 2 GB of memory. If your memory footprint is low, a Cloud Functions instance with 2 GB of memory is overkill and needlessly expensive.
Cutting cost is not always the best strategy for customer satisfaction, but business reality may require it. Either way, it depends heavily on your use case.
Both Cloud Run and Cloud Functions round billed duration up to the nearest 100ms. As you can see by playing with the GSheet, Cloud Functions is cheaper when the processing time of one request is below the first 100ms. Indeed, you can choose a slower Cloud Functions vCPU, which increases the processing duration while still staying under 100ms if you tune it well. Fewer GHz-seconds are consumed, and thereby you pay less.
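Here is a quick sketch of that rounding effect. No real list prices are used; the point is only that a request finishing in 40ms is billed as 100ms either way, so throttling the CPU costs nothing extra as long as the request still completes within the same 100ms increment.

```python
# Illustrative only: both products (per the text above) bill duration rounded
# up to the next 100ms increment.
import math

def billed_ms(actual_ms, granularity_ms=100):
    return math.ceil(actual_ms / granularity_ms) * granularity_ms

for actual in (40, 90, 110, 250):
    print(f"actual={actual}ms -> billed={billed_ms(actual)}ms")
# 40ms and 90ms both bill as 100ms; 110ms jumps to 200ms.
```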
The cost comparison between Cloud Functions and Cloud Run goes further than simply comparing pricing lists. Moreover, on your projects you will often use both solutions to take advantage of their respective strengths and capabilities.
My first choice for development is Cloud Run. Its portability, its testability, and its openness regarding libraries, languages, and binaries give it too many advantages to ignore, for pricing that is at worst similar and often a real advantage in both cost and performance, in particular for concurrent requests. Even if you need the same level of isolation as Cloud Functions (one instance per request), simply set the concurrency parameter to 1!
In addition, Cloud Run’s general availability applies to all containers, whatever the languages and binaries used. Read more here…
Google Cloud Storage: Which bucket class gives the best performance? Multi-regional buckets perform significantly better for cross-the-ocean fetches; however, the details are a bit more nuanced than that. Performance is dominated by the latency implied by the physical distance between the client and the Cloud Storage bucket.
If caching is on, and your access volume is high enough to take advantage of caching, there’s not a huge difference between the two offerings (that I can see with the tests). This shows off the power of Google’s Awesome CDN environment.
If caching is off, or the access volume is low enough that you can’t take advantage of caching, then the performance overhead is dominated directly by physics. You should be trying to get the assets as close to the clients as possible, while also considering cost, and the types of redundancy and consistency you’ll need for your data needs.
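If you want to see that physics effect yourself, a crude probe like the sketch below can help. The bucket names and object path are hypothetical placeholders, not real resources; the idea is simply to time the same small public object fetched from buckets in different locations and compare medians when caching is not in play.

```python
# Crude latency probe. Replace the URLs with your own publicly readable objects;
# the bucket names below are hypothetical placeholders.
import time
import urllib.request

URLS = {
    "multi-regional": "https://storage.googleapis.com/my-multiregion-bucket/probe.bin",
    "single-region":  "https://storage.googleapis.com/my-asia-region-bucket/probe.bin",
}

def fetch_ms(url):
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000

for label, url in URLS.items():
    samples = sorted(fetch_ms(url) for _ in range(5))
    print(f"{label}: median ~{samples[len(samples) // 2]:.0f} ms")
```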
A video game is an electronic game that can be played on a computing device, such as a personal computer, gaming console or mobile phone. Depending on the platform, video games can be subcategorized into computer games and console games.
blood, last, lego, rise – 23 [BadBlood Battle Royal, Last Pirate: Survival Island Adventure, LEGO® Star Wars™: TFA, Rise of Empires: Ice and Fire, etc…]
brave, dawn, rivals, versus – 22 [Brave Frontier, Dawn of Zombies-Survival after the Last War Online, Rivals at War, Plants vs. Zombies FREE, Versus Video Games 3, etc…]
Instead of whining about our kids spending time playing video games, why not leverage video games as a powerful learning platform?
Various educational apps are now packaged as video games, and they have had tremendous success:
Prodigy Math Game: Prodigy delivers a unique learning experience through an interactive math game where success depends on correctly answering skill-building math questions. Players can earn rewards, go on quests and play with friends — all while learning new skills!
Monster Math 2: Fun Maths game for Kids: Monster Math 2 is your child’s personal homework and math trainer. Its fun learning games, engrossing story, and adaptive learning approach make it a superior alternative to homework or planned lessons. Lay a solid foundation for success in Algebra or Calculus.
GramMars Wars – English Grammar Game: GramMars Wars is an educational game where you can learn and improve your English Grammar.
League of Legends is a multiplayer online game similar to Mobile Legends.
As in other multiplayer online battle arena (MOBA) games, each player in League of Legends controls a character (“champion”) with a set of unique abilities. Most games involve two teams of five players, with each player using a different champion.
The two teams compete to be the first to destroy the Nexus structure within the opposing base. Over the course of each game, champions become stronger and gain additional abilities by earning experience and thereby levelling up. Experience is earned by killing enemies (or being nearby when a teammate does). Champions also build strength over the course of the game by buying progressively more powerful items using gold, which is earned by killing non-player enemies, killing or assisting in killing enemy players, destroying enemy structures, or selling other items.
In Valorant’s main game mode, players are assigned to either the attacking or defending team, with each team having five players. Agents have unique abilities, each requiring charges, as well as a unique ultimate ability that requires charging through kills, deaths, or spike actions. Every player starts each round with a “classic” pistol and one or more “signature ability” charges. Other weapons and ability charges can be purchased using an in-game economic system that awards money based on the outcome of the previous round, any kills the player is responsible for, and any actions taken with the spike. The game has an assortment of weapons including sidearms, submachine guns, shotguns, machine guns, assault rifles, and sniper rifles. There are automatic and semi-automatic weapons with shooting patterns that the player must control in order to shoot accurately.
Team Fortress 2 was first released in October 2007 as part of The Orange Box for Windows and the Xbox 360; a PlayStation 3 version followed in December 2007 when The Orange Box was ported to the system.
The game was later released as a standalone title for Windows in April 2008, and was updated to support Mac OS X in June 2010 and Linux in February 2013. It is distributed online through Valve’s digital retailer Steam, with Electronic Arts handling all physical and console ports of the game.
The player can join one of two teams, RED or BLU, and choose one of nine character classes to battle in game modes such as capture the flag and king of the hill. Development of the game was led by John Cook and Robin Walker, the developers of the original Team Fortress mod. Team Fortress 2 was first announced in 1998 under the name Team Fortress 2: Brotherhood of Arms. Initially, the game had more realistic, militaristic visuals and gameplay, but this changed over the protracted nine-year development. After Valve released no information for six years, Team Fortress 2 regularly featured in Wired News’ annual vaporware list, among other ignominies. The finished Team Fortress 2 has cartoon-like visuals influenced by the art of J. C. Leyendecker, Dean Cornwell, and Norman Rockwell, and uses Valve’s Source game engine.
Fortnite is distributed as three different game modes, using the same engine; each has similar graphics, art assets, and game mechanics.
Fortnite: Save the World is a player-versus-environment cooperative game, with four players collaborating towards a common objective on various missions. The game is set after a fluke storm appears across Earth, causing 98% of the population to disappear, and the survivors to be attacked by zombie-like “husks”. The players take the role of commanders of home base shelters, collecting resources, saving survivors, and defending equipment that helps to either collect data on the storm or to push back the storm. From missions, players are awarded a number of in-game items, which include hero characters, weapon and trap schematics, and survivors, all of which can be leveled up through gained experience to improve their attributes.
Fortnite Battle Royale is a player-versus-player game for up to 100 players, allowing one to play alone, in a duo, or in a squad (usually consisting of three or four players). Weaponless players airdrop from a “Battle Bus” that crosses the game’s map. When they land, they must scavenge for weapons, items, resources, and even vehicles while trying to stay alive and to attack and eliminate other players. Over the course of a round, the safe area of the map shrinks down in size due to an incoming toxic storm; players outside that threshold take damage and can be eliminated if they fail to quickly evacuate. This forces remaining players into tighter spaces and encourages player encounters. The last player, duo, or squad remaining is the winner.
Fortnite Creative is a sandbox game mode, similar to Minecraft in that players are given complete freedom to spawn everything that is within the game on an island, and can create games such as battle arenas, race courses, platforming challenges, and more.
Players can use their pickaxe to knock down existing structures on the map to collect basic resources that are wood, brick, and metal. Subsequently, in all modes, the player can use these materials to build fortifications, such as walls, floors, and stairs. Such fortification pieces can be edited to add things like windows or doors. The materials used have different durability properties and can be updated to stronger variants using more materials of the same type. Within Save the World this enables players to create defensive fortifications around an objective or trap-filled tunnels to lure husks through. In Battle Royale, this provides the means to quickly traverse the map, protect oneself from enemy fire, or to delay an advancing foe. Players are encouraged to be very inventive in designing their fortifications in Creative.
While Battle Royale and Creative are free-to-play, Save the World is pay-to-play. The games are monetized through the use of V-Bucks, in-game currency that can be purchased with real-world funds, but also earned through completing missions and other achievements in Save the World. V-Bucks in Save the World can be used to buy loot boxes, in the form of piñatas shaped like llamas, to gain a random selection of items. In Battle Royale, V-Bucks can be used to buy cosmetic items like character models, or can also be used to purchase the game’s battle pass, a tiered progression of customization rewards for gaining experience and completing certain objectives during the course of a Battle Royale season.
You can always play the Fortnite Android version on BlueStacks.
Warning: the Fortnite Android version is not available on the Play Store; please watch a YouTube video on how to download it. (Fortnite Mobile was banned from the Play Store because it started using its own payment system instead of the Google Play one, which gave Google a 30% cut.)
Call of Duty: Warzone, the only free game in the Call of Duty series, is a multiplayer online shooter game.
Warzone features two primary game modes: Battle Royale and Plunder. It is the second main battle royale installment in the Call of Duty franchise, following the “Blackout” mode of Call of Duty: Black Ops 4 (2018). Warzone differs from Black Ops 4 by reducing reliance on equipable gadgets and instead encouraging the accumulation of a new in-game currency called Cash.
Warzone supports up to 150 players in a single match, which exceeds the typical size of 100 players seen in other battle royale titles. Some limited-time modes support 200 players.
The Battle Royale mode is similar to other titles in the genre where players compete in a continuously shrinking map to be the last player remaining. Players parachute onto a large game map, where they encounter other players. As the game progresses and players are eliminated, the playable area shrinks forcing the remaining players into tighter spaces. In Warzone, the non-playable areas become contaminated with a green gas that depletes health and eventually kills the player if they do not return to the safe playable area.
Unlike other titles, Warzone introduces a new respawn mechanic, a greater emphasis on vehicles, and a new in-game currency mechanic. Parachuting is unrestricted, with the player being allowed to open and cut their parachute an unlimited number of times while in air. At launch, the game supported trios (squads of up to three players) with an option to disable squad filling. Infinity Ward has mentioned testing the number of squad members in future updates. Four-player squads and Solo BR modes were added in following updates, while Duos was added near the end of Season 3.
Character death in Battle Royale does not necessarily translate to player defeat like in other titles. Instead, the mode offers a respawn mechanic which players can take advantage of in various ways. Players who are killed are transported to the “Gulag”, where they engage in one-on-one combat with another defeated player, with both players being given the same weaponry. The guns that the players receive have little or no attachments. Players may only enter the gulag after their first death in a match. The winner of this combat is respawned into the game. Other methods of respawn are available using the in-game currency system. Players may use the in-game currency to purchase respawn tokens for other players should they not be revived by the Gulag mechanic.
In the Plunder mode, teams have to search for stacks of Cash scattered around the map to accumulate $1 million. Once a team reaches that amount, the game goes into overtime, multiplying all Cash sums by 1.5. The team that has gathered the most money when the clock runs out is declared the winner. Players respawn automatically in this game mode.
In addition to Battle Royale and Plunder, several limited-time modes have been introduced throughout the course of the game’s life cycle:
BR Buy Backs (originally called BR Stimulus) is a variation of Battle Royale in which players automatically respawn upon death if they have sufficient money, and the Gulag is disabled.
Blood Money is a variation of Plunder in which players gain more cash rewards from completing contracts and performing “finishing moves” (execution kills) on other players.
Warzone Rumble is a 50v50 deathmatch type game mode taking place in cut-off sections of the main Verdansk map.
Mini Royale is a 50-player mode in which players drop within a smaller circle than normal Battle Royale modes, for more squad engagements.
Juggernaut Royale features the Juggernaut killstreak dropping in random places throughout the map. Up to three Juggernauts can be active at once in the map. Once a Juggernaut is killed, another Juggernaut care package will spawn in.
Armored Royale features squads spawning in with armored trucks, which players can upgrade to be more powerful over time. Unlike normal modes, players can continue to respawn as long as their squad’s truck is intact.
Slither.io is a browser-based online game where you play as a worm/snake and have to grow bigger by eating glowing pellets, killing other players, and eating the pellets they leave behind. You get killed if your head bumps into another player’s body.
If video gaming is an addiction, there is a huge number of people with it because “The…number of video gamers worldwide in 2018, broken down by region, (indicates)…there were over 1.23 billion gamers in Asia Pacific in 2018, with the region generating 71.4 billion U.S. dollars of revenue in the same year.”
“There are approximately 2.2 billion gamers in the world. Out of the estimated 7.6 billion people living on earth, as of July 2018, that means almost a third of people on this planet are gamers.” Video gaming is a big business and enjoyed worldwide.
We’ve been warned about it: the white-collar apocalypse is on the way. Intellectual labour is under threat. Jobs in fields like medicine, banking, journalism and marketing are about to disappear in a puff of digital smoke. But… is it?
What the Future Holds
Let’s look at what the future has in store for mobile gaming. One of the main “problems” this industry has had is the lack of quality, high-end games. This is changing day by day now.
With the rise of cloud gaming, mobile has become a very valuable option for gaming on the go. With services like Google Stadia, GeForce Now, PlayStation Now, etc., you have a very big variety, so you can play all the games you want on the small screen of your phone. Also, 5G will make this process even smoother.
The video game industry has (not so) quietly undergone a big number of changes: microtransactions, development costs, and competition.
The idea of a “Netflix for video games” is quite simple — a service that allows all people to play high-quality video games on any device through a subscription offering. It still remains uncertain how game streaming will shape up in the end, but reviewing the first attempts to create such a solution, we can identify some patterns.
The first name that looks like a potential winner of a future platform war is Microsoft. The company has been in the video games market for a long time with its Xbox game consoles (more than 17 years since the original Xbox was launched), which means it has enough experience and capabilities in the field. Microsoft was also the first to offer a comprehensive subscription gaming service, namely Xbox Game Pass, which gives access to more than 200 titles for a monthly fee. Regarding its portfolio of games, it is interesting that Microsoft has acquired 7 game studios over the last year, suggesting the company is getting ready to create a ton of original content for its offering. In total, the corporation now has about 13 game studios under its management. (There is a detailed article on Microsoft and its potential video games service available here.)
Meta-Gaming is when you make in-game decisions based on out-of-game knowledge. This is mostly a bad thing.
Let’s consider several situations to illustrate the point.
Finding a Trap:
The Metagamer goes right to where the trap is located and spams “search” checks until he “finds” it, because he’s played this module before and remembers the trap.
The regular player searches the room once, fails, and blithely walks into a trap. Because while he knows it’s there, his character does not.
Do you see how one player made a decision based on what he knew, possibly ruined a good storytelling moment, and cheated? The other player was able to separate what he knew from what his character knew, and made a decision based on character knowledge only. Sure, he just got lanced by a foot-spike, but everyone is in the moment, committed to the story.
Fighting:
The Metagamer plans an L-shaped ambush per Chapter 3–17 b. (2), FM7–8 Infantry Rifle Platoon and Squad, adjusted to account for swords, spears, and bows instead of rifles, machineguns, and grenades.
The regular gamer remembers his character barely knows which end of a sword to hold, and either lets the fighter plan the ambush, or just waits in the bushes by the trail for the target to get close.
I’ve been guilty of this several times. In my last game, I was literally planning an ambush for some hobgoblins before cutting myself off. “Nope, Katrina doesn’t know any of this,” and shut up.
Monsters:
The Metagamer knows the weak spot of the monster and slams it right off the bat, ruining what could have been an epic fight. He’s memorized the Monster Manual, and despite his character never before even hearing of this monster, he knows its MO by heart.
The regular gamer may or may not know about the monster, but fights it as his character would, because his character doesn’t know that it’s vulnerable to, say, cold.
In my last game, we fought a middling-small red dragon. As it happened, Katrina had found a ring of fire resistance. Yay! And while I know that Red Dragons do not have a special vulnerability to cold, she assumed they did, and kept peppering it with Ray of Frost. And while it didn’t do extra damage, she did manage to distract it long enough for some teammates to get behind it, especially when it blasted her with fire and she just stood there and took it.
Leveling:
When it’s time to level up, the Metagamer makes decisions based on mechanical advantage. He may multiclass or pick up feats based on what he thinks the next adventure will be, or just try to get the biggest ACC, AC, Dam, or whatever he can get. He may multiclass his fighter into a Paladin to pick up Smite, because he thinks they’ll be dealing with undead soon.
The regular player levels up based on what makes the most sense for the character. He may also multiclass his fighter into a Paladin, but it’s because he found religion.
Now, for a counter-example. I was in a sci-fi game once, and our ship was damaged. The engines were non-responsive, but Engineering reported they were fully functional. I was playing the Engineer. I deduced that a micro-meteor hit had damaged the control lines, and that the cutout had failed to automatically re-route them to the backups, which I then went to go do manually.
I’m an electronics technician by trade, and I know a bit about naval architecture, so since I was playing the Engineer, it was totally fine to use Murphy’s player knowledge for my Engineer character. That was not bad metagaming.
Now, some forms of meta-gaming are worse than others. The leveling one doesn’t bother me too much. But other kinds can ruin other player’s fun, and that’s a problem. It cheats people out of the experience, and is goddamn frustrating as a GM.