Top 50 Google Certified Cloud Professional Architect Exam Questions and Answers Dumps

The Cloud is the future: Get Certified now.

Google Certified Professional Cloud Architect is one of the highest-paying certifications in the world, with an average salary of $175,761.

The Google Certified Cloud Professional Architect Exam assesses your ability to:

  • Design and plan a cloud solution architecture
  • Manage and provision the cloud solution infrastructure
  • Design for security and compliance
  • Analyze and optimize technical and business processes
  • Manage implementations of cloud architecture
  • Ensure solution and operations reliability

The Google Certified Cloud Professional Architect covers the following topics:

Designing and planning a cloud solution architecture: 36%

This domain tests your ability to design a solution infrastructure that meets business and technical requirements and accounts for network, storage, and compute resources. It also tests your ability to create a migration plan and to envision future solution improvements.

Managing and provisioning a solution Infrastructure: 20%

This domain will test your ability to configure network topologies, individual storage systems and design solutions using Google Cloud networking, storage and compute services.

Designing for security and compliance: 12%

This domain assesses your ability to design for security and compliance, considering IAM policies, separation of duties, and encryption of data, and to design solutions that satisfy compliance requirements such as those governing healthcare and financial information.

Managing implementation: 10%

This domain tests your ability to advise development and operations teams to ensure successful deployment of your solution. It also tests your ability to interact with Google Cloud using the Cloud SDK (gcloud, gsutil, and bq).

Ensuring solution and operations reliability: 6%

This domain tests your ability to run your solutions reliably in Google Cloud by building monitoring and logging solutions, quality control measures and by creating release management processes.

Analyzing and optimizing technical and business processes: 16%

This domain will test how you analyze and define technical processes, business processes and develop procedures to ensure resilience of your solutions in production.

Below are the Top 50 Google Certified Cloud Professional Architect Exam Questions and Answers Dumps. You will need to have the three case studies referred to in the exam open in separate tabs to complete the exam: Company A, Company B, Company C

Question 1:  Because you do not know every possible future use for the data Company A collects, you have decided to build a system that captures and stores all raw data in case you need it later. How can you most cost-effectively accomplish this goal?

A. Have the vehicles in the field stream the data directly into BigQuery.

B. Have the vehicles in the field pass the data to Cloud Pub/Sub and dump it into a Cloud Dataproc cluster that stores data in Apache Hadoop Distributed File System (HDFS) on persistent disks.

C. Have the vehicles in the field continue to dump data via FTP, adjust the existing Linux machines, and use a collector to upload them into Cloud Dataproc HDFS for storage.

D. Have the vehicles in the field continue to dump data via FTP, and adjust the existing Linux machines to immediately upload it to Cloud Storage with gsutil.

ANSWER1:

D

Notes/References1:

D is correct because several load-balanced Compute Engine VMs would suffice to ingest 9 TB per day, and Cloud Storage is the cheapest per-byte storage offered by Google. Depending on the format, the data could be available via BigQuery immediately, or shortly after running through an ETL job. Thus, this solution meets business and technical requirements while optimizing for cost.

Reference: Streaming inserts; Apache Hadoop and Spark; 10 tips for building long running clusters using Cloud Dataproc
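
As a rough sketch of what option D could look like on the existing Linux machines (the bucket name and paths here are hypothetical, not from the case study):

    # Copy each day's FTP drop directory into Cloud Storage; -m parallelizes
    # the transfer across the accumulated files.
    gsutil -m cp -r /var/ftp/incoming "gs://company-a-raw-telemetry/$(date +%F)/"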

Question 2: Today, Company A maintenance workers receive interactive performance graphs for the last 24 hours (86,400 events) by plugging their maintenance tablets into the vehicle. The support group wants support technicians to view this data remotely to help troubleshoot problems. You want to minimize the latency of graph loads. How should you provide this functionality?

A. Execute queries against data stored in a Cloud SQL.

B. Execute queries against data indexed by vehicle_id.timestamp in Cloud Bigtable.

C. Execute queries against data stored on daily partitioned BigQuery tables.

D. Execute queries against BigQuery with data stored in Cloud Storage via BigQuery federation.

ANSWER2:

B

Notes/References2:

B is correct because Cloud Bigtable is optimized for time-series data. It is cost-efficient, highly available, and low-latency. It scales well. Best of all, it is a managed service that does not require significant operations work to keep running.

Reference: Bigtable time series cluster; BigQuery
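
To illustrate the row-key pattern the answer depends on, here is a minimal sketch using the cbt CLI (project, instance, table, and column names are all hypothetical):

    # Create a table and column family for per-vehicle time-series data.
    cbt -project my-project -instance telemetry createtable vehicle_metrics
    cbt -project my-project -instance telemetry createfamily vehicle_metrics stats
    # Row keys of the form <vehicle_id>#<timestamp> keep one vehicle's readings
    # contiguous, so the last 24 hours come back as a single fast range scan.
    cbt -project my-project -instance telemetry set vehicle_metrics \
        vehicle-4711#2021-06-01T12:00:00 stats:oil_temp=92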

Question 3: Your agricultural division is experimenting with fully autonomous vehicles. You want your architecture to promote strong security during vehicle operation. Which two architecture characteristics should you consider?

A. Use multiple connectivity subsystems for redundancy. 

B. Require IPv6 for connectivity to ensure a secure address space. 

C. Enclose the vehicle’s drive electronics in a Faraday cage to isolate chips.

D. Use a functional programming language to isolate code execution cycles.

E. Treat every microservice call between modules on the vehicle as untrusted.

F. Use a Trusted Platform Module (TPM) and verify firmware and binaries on boot.

ANSWER3:

E and F

Notes/References3:

E is correct because this improves system security by making it more resistant to hacking, especially through man-in-the-middle attacks between modules.

F is correct because this improves system security by making it more resistant to hacking, especially rootkits or other kinds of corruption by malicious actors.

Reference 3: Trusted Platform Module

Question 4: For this question, refer to the Company A case study.

Which of Company A’s legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?

A. OpEx/CapEx allocation, LAN change management, capacity planning

B. Capacity planning, TCO calculations, OpEx/CapEx allocation 

C. Capacity planning, utilization measurement, data center expansion

D. Data center expansion, TCO calculations, utilization measurement

ANSWER4:

B

Notes/References4:

B is correct because all of these tasks are big changes when moving to the cloud. Capacity planning for cloud is different than for on-premises data centers; TCO calculations are adjusted because Company A is using services, not leasing/buying servers; OpEx/CapEx allocation is adjusted as services are consumed vs. using capital expenditures.

Reference: Cloud Economics

Question 5: For this question, refer to the Company A case study.

You analyzed Company A’s business requirement to reduce downtime and found that they can achieve a majority of time saving by reducing customers’ wait time for parts. You decided to focus on reduction of the 3 weeks’ aggregate reporting time. Which modifications to the company’s processes should you recommend?

A. Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics.

B. Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics.

C. Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.

D. Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor.

ANSWER5:

C

Notes/References5:

C is correct because using cellular connectivity will greatly improve the freshness of data used for analysis from where it is now, collected when the machines are in for maintenance. Streaming transport instead of periodic FTP will tighten the feedback loop even more. Machine learning is ideal for predictive maintenance workloads.

Question 6: Your company wants to deploy several microservices to help their system handle elastic loads. Each microservice uses a different version of software libraries. You want to enable their developers to keep their development environment in sync with the various production services. Which technology should you choose?

A. RPM/DEB

B. Containers 

C. Chef/Puppet

D. Virtual machines

ANSWER6:

B

Notes/References6:

B is correct because using containers for development, test, and production deployments abstracts away system OS environments, so that a single host OS image can be used for all environments. Changes that are made during development are captured using a copy-on-write filesystem, and teams can easily publish new versions of the microservices in a repository.

Question 7: Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. You want to support the data upload and collection needs of this sensor network. The receiving infrastructure needs to account for the possibility that the devices may have inconsistent connectivity. Which solution should you design?

A. Have each device create a persistent connection to a Compute Engine instance and write messages to a custom application.

B. Have devices poll for connectivity to Cloud SQL and insert the latest messages on a regular interval to a device specific table. 

C. Have devices poll for connectivity to Cloud Pub/Sub and publish the latest messages on a regular interval to a shared topic for all devices.

D. Have devices create a persistent connection to an App Engine application fronted by Cloud Endpoints, which ingest messages and write them to Cloud Datastore.

ANSWER7:

C

Notes/References7:

C is correct because Cloud Pub/Sub can handle the frequency of this data, and consumers of the data can pull from the shared topic for further processing.
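
A minimal sketch of the Pub/Sub side, assuming a hypothetical topic name and message shape:

    # One shared topic for every motion sensor.
    gcloud pubsub topics create room-motion-status
    # Each device publishes its buffered readings whenever it regains
    # connectivity.
    gcloud pubsub topics publish room-motion-status \
        --message='{"room":"berlin-3-101","occupied":true,"ts":1622548800}'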

Question 8: Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup. Which two steps should they take?

A. Load logs into BigQuery. 

B. Load logs into Cloud SQL.

C. Import logs into Stackdriver. 

D. Insert logs into Cloud Bigtable.

E. Upload log files into Cloud Storage.

ANSWER8:

A and E

Notes/References8:

A is correct because BigQuery is the fully managed cloud data warehouse for analytics and supports the analytics requirement.

E is correct because Cloud Storage provides the Coldline storage class to support long-term storage with infrequent access, which would support the long-term disaster recovery backup requirement.

References: BigQuery; Stackdriver; Bigtable; Storage Class: Coldline
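
A sketch of both steps together (bucket, dataset, and table names are hypothetical, and CSV-formatted logs are assumed):

    # Land the logs in a Coldline bucket for the long-term DR copy.
    gsutil mb -c coldline -l us gs://example-log-archive
    gsutil -m cp -r ./logs gs://example-log-archive/
    # Load a copy into BigQuery to try out the analytics features.
    bq load --autodetect --source_format=CSV \
        log_analytics.app_logs "gs://example-log-archive/logs/*.csv"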

Question 9: You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified that the appropriate web response is coming from each instance using the curl command. You want to ensure that the backend is configured correctly. What should you do?

A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer. 

B. Assign a public IP to each instance, and configure a firewall rule to allow the load balancer to reach the instance public IP.

C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.

D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.

ANSWER9:

C

Notes/References9:

C is correct because health check failures lead to a VM being marked unhealthy and can result in termination if the health check continues to fail. Because you have already verified that the instances are functioning properly, the next step would be to determine why the health check is continuously failing.

Reference: Load balancing; Load balancing health checking
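
A firewall rule along these lines would unblock the health checks; 130.211.0.0/22 and 35.191.0.0/16 are Google's documented health-check source ranges, while the network, port, and tag names are hypothetical:

    gcloud compute firewall-rules create allow-lb-health-checks \
        --network=default \
        --allow=tcp:80 \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --target-tags=web-backend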

Question 10: Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier, and then on to the database tier. Traffic should not flow between the web and the database tier. How should you configure the network?

A. Add each tier to a different subnetwork.

B. Set up software-based firewalls on individual VMs. 

C. Add tags to each tier and set up routes to allow the desired traffic flow.

D. Add tags to each tier and set up firewall rules to allow the desired traffic flow.

ANSWER10:

D

Notes/References10:

D is correct because as instances scale, they will all have the same tag to identify the tier. These tags can then be leveraged in firewall rules to allow and restrict traffic as required, because tags can be used for both the target and source.

Reference: Using VPC; Routes; Add/remove network tags
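
A sketch of the tag-based rules (network, tag, and port choices are hypothetical):

    # Web tier may reach the API tier, and the API tier may reach the
    # database tier; no rule allows web-to-db traffic.
    gcloud compute firewall-rules create allow-web-to-api \
        --network=prod-net --allow=tcp:8080 \
        --source-tags=web --target-tags=api
    gcloud compute firewall-rules create allow-api-to-db \
        --network=prod-net --allow=tcp:3306 \
        --source-tags=api --target-tags=db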

Question 11: Your organization has 5 TB of private data on premises. You need to migrate the data to Cloud Storage. You want to maximize the data transfer speed. How should you migrate the data?

A. Use gsutil.

B. Use gcloud.

C. Use GCS REST API. 

D. Use Storage Transfer Service.

ANSWER11:

A

Notes/References11:

A is correct because gsutil gives you access to write data to Cloud Storage.

Reference: gsutil; gcloud SDK; Cloud Storage JSON API; uploading objects; Storage Transfer Service
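
To maximize throughput on a transfer of this size, gsutil can parallelize both across files and within large files; a sketch (source path and bucket name are hypothetical):

    # -m runs the copy with parallel workers; the boto override enables
    # parallel composite uploads for files larger than 150 MB.
    gsutil -o "GSUtil:parallel_composite_upload_threshold=150M" \
        -m cp -r /mnt/private-data gs://example-migration-bucket/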

Question 12: You are designing a mobile chat application. You want to ensure that people cannot spoof chat messages by proving that a message was sent by a specific user. What should you do?

A. Encrypt the message client-side using block-based encryption with a shared key.

B. Tag messages client-side with the originating user identifier and the destination user.

C. Use a trusted certificate authority to enable SSL connectivity between the client application and the server. 

D. Use public key infrastructure (PKI) to encrypt the message client-side using the originating user’s private key.

ANSWER12:

D

Notes/References12:

D is correct because a message signed with the originating user’s private key, which only that user holds, can be verified by anyone with the user’s public key or certificate, proving the message actually came from that user. PKI provides the trusted infrastructure for issuing and validating those per-user certificates.
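
Outside of any particular product, the underlying idea can be sketched with OpenSSL (file names are hypothetical): the sender signs with a private key only they hold, and anyone can verify with the matching public key.

    # Sender: sign the message with the originating user's private key.
    openssl dgst -sha256 -sign sender_private.pem -out message.sig message.txt
    # Receiver: verify the signature with the sender's public key.
    openssl dgst -sha256 -verify sender_public.pem \
        -signature message.sig message.txt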

Question 13: You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database backend. You want to store the credentials securely. Where should you store the credentials?

A. In the source code

B. In an environment variable 

C. In a key management system

D. In a config file that has restricted access through ACLs

ANSWER13:

C

Notes/References13:

C is correct because a key management system is built to store secrets such as database credentials securely, control and audit access to them, and rotate them centrally, none of which source code, environment variables, or ACL-protected config files can guarantee.

Question 14: For this question, refer to the Company B case study.

Company B wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?

A. Kubernetes Engine, Cloud Pub/Sub, and Cloud SQL

B. Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery 

C. Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow

D. Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc

ANSWER14:

B

Notes/References14:

B is correct because:
Cloud Dataflow dynamically scales up or down, can process data in real time, and is ideal for processing data that arrives late using Beam windows and triggers.
Cloud Storage can be the landing space for files that are regularly uploaded by users’ mobile devices.
Cloud Pub/Sub can ingest the streaming data from the mobile users.
BigQuery can query more than 10 TB of historical data.

References: GCP Quotas; Apache Beam Windowing; Apache Beam Triggers; BigQuery External Data Solutions; Apache Hive on Cloud Dataproc

Question 15: For this question, refer to the Company B case study.

Company B has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?

A. Create a scalable environment in GCP for simulating production load.

B. Use the existing infrastructure to test the GCP-based backend at scale.

C. Build stress tests into each component of your application and use resources from the already deployed production backend to simulate load.

D. Create a set of static environments in GCP to test different levels of load, for example: high, medium, and low.

ANSWER15:

A

Notes/References15:

A is correct because simulating production load in GCP can scale in an economical way.

Reference: Load testing IoT using GCP and Locust; Distributed load testing using Kubernetes

Question 16: For this question, refer to the Company B case study.

Company B wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Company B has the following requirements:

  • Services are deployed redundantly across multiple regions in the US and Europe.
  • Only frontend services are exposed on the public internet.
  • They can reserve a single frontend IP for their fleet of services.
  • Deployment artifacts are immutable.

Which set of products should they use?

A. Cloud Storage, Cloud Dataflow, Compute Engine

B. Cloud Storage, App Engine, Cloud Load Balancing

C. Container Registry, Google Kubernetes Engine, Cloud Load Balancing

D. Cloud Functions, Cloud Pub/Sub, Cloud Deployment Manager

ANSWER16:

C

Notes/References16:

C is correct because:
Google Kubernetes Engine is ideal for deploying small services that can be updated and rolled back quickly. It is a best practice to manage services using immutable containers.
Cloud Load Balancing supports globally distributed services across multiple regions. It provides a single global IP address that can be used in DNS records. Using URL Maps, the requests can be routed to only the services that Company B wants to expose.
Container Registry is a single place for a team to manage Docker images for the services.

References: HTTPS load balancing overview; GCP lb global forwarding rules; reserve static external IP address; best practices for operating containers; Container Registry; Dataflow; calling HTTPS

Question 17: Your customer is moving their corporate applications to Google Cloud Platform. The security team wants detailed visibility of all resources in the organization. You use Resource Manager to set yourself up as the org admin. What Cloud Identity and Access Management (Cloud IAM) roles should you give to the security team?

A. Org viewer, Project owner

B. Org viewer, Project viewer 

C. Org admin, Project browser

D. Project owner, Network admin

ANSWER17:

B

Notes/References17:

B is correct because:
Org viewer grants the security team permissions to view the organization's display name.
Project viewer grants the security team permissions to see the resources within projects.

Reference: GCP Resource Manager – User Roles

Question 18: To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department. Which two steps should you take?

A. Use persistent disks to store the state. Start and stop the VM as needed. 

B. Use the –auto-delete flag on all persistent disks before stopping the VM. 

C. Apply VM CPU utilization label and include it in the BigQuery billing export.

D. Use BigQuery billing export and labels to relate cost to groups. 

E. Store all state in local SSD, snapshot the persistent disks, and terminate the VM.

F. Store all state in Cloud Storage, snapshot the persistent disks, and terminate the VM.

ANSWER18:

A and D

Notes/References18:

A is correct because persistent disks will not be deleted when an instance is stopped.

D is correct because exporting daily usage and cost estimates automatically throughout the day to a BigQuery dataset is a good way of providing visibility to the finance department. Labels can then be used to group the costs based on team or cost center.

References: GCP instances life cycle; GCP instances set disk auto-delete; GCP local data persistence; GCP export billing data to BigQuery; GCP creating and managing labels
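
A sketch of the labeling half, plus the kind of query finance might run against the billing export (instance, label, project, and dataset names are hypothetical; the billing export table suffix is your billing account ID):

    # Label each dev VM so its costs can be grouped.
    gcloud compute instances update dev-vm-1 --update-labels=team=mobile
    # Group exported costs by the "team" label in BigQuery.
    bq query --use_legacy_sql=false \
        'SELECT l.value AS team, SUM(cost) AS total_cost
         FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`,
              UNNEST(labels) AS l
         WHERE l.key = "team"
         GROUP BY team'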

Question 19: Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs. What should they do?

A. Configure a new load balancer for the new version of the API.

B. Reconfigure old clients to use a new endpoint for the new API. 

C. Have the old API forward traffic to the new API based on the path.

D. Use separate backend services for each API path behind the load balancer.

ANSWER19:

D

Notes/References19:

D is correct because an HTTP(S) load balancer can direct traffic reaching a single IP to different backends based on the incoming URL.

References: HTTPS load balancing; load balancing backends; GCP lb global forwarding rules
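
A sketch of the path-based routing (URL map, backend service, and host names are hypothetical):

    # Send /v2/* to the new API's backend service; everything else keeps
    # hitting the old API's default backend.
    gcloud compute url-maps add-path-matcher api-lb-map \
        --path-matcher-name=api-versions \
        --default-service=api-v1-backend \
        --path-rules="/v2/*=api-v2-backend" \
        --new-hosts=api.example.com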

Question 20: The database administration team has asked you to help them improve the performance of their new database server running on Compute Engine. The database is used for importing and normalizing the company’s performance statistics. It is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD zonal persistent disk. What should they change to get better performance from this system in a cost-effective manner?

A. Increase the virtual machine’s memory to 64 GB.

B. Create a new virtual machine running PostgreSQL. 

C. Dynamically resize the SSD persistent disk to 500 GB.

D. Migrate their performance metrics warehouse to BigQuery.

ANSWER20:

C

Notes/References20:

C is correct because persistent disk performance is based on the total persistent disk capacity attached to an instance and the number of vCPUs that the instance has. Incrementing the persistent disk capacity will increment its throughput and IOPS, which in turn improve the performance of MySQL.

References: GCP compute disks pd-ssd specs; GCP compute disks performance
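
The fix itself is a one-liner, and persistent disks can be resized while attached (disk name and zone are hypothetical):

    gcloud compute disks resize mysql-data-disk \
        --zone=us-central1-a --size=500GB
    # Then grow the filesystem inside the guest, e.g. for ext4:
    #   sudo resize2fs /dev/sdb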

Question 21: You need to ensure low-latency global access to data stored in a regional GCS bucket. Data access is uniform across many objects and relatively high. What should you do to address the latency concerns?

A. Use Google’s Cloud CDN.

B. Use Premium Tier routing and Cloud Functions to accelerate access at the edges.

C. Do nothing.

D. Use global BigTable storage.

E. Use a global Cloud Spanner instance.

F. Migrate the data to a new multi-regional GCS bucket.

G. Change the storage class to multi-regional.

ANSWER21:

A

Notes/References21:

Cloud Functions cannot be used to affect GCS data access, so that option is simply wrong. BigTable does not have any “global” mode, so that option is wrong, too. Cloud Spanner is not a good replacement for GCS data: the data use cases are different enough that we can assume it would probably not be a good fit. You cannot change a bucket’s location after it has been created–not via the storage class nor any other way; you would have to migrate the data to a new bucket. Google’s Cloud CDN is very easy to turn on, but it only works for data that comes from within GCP and only if the objects are being accessed frequently enough.

Reference: Google Cloud Storage : What bucket class for the best performance?

Question 22: You are building a sign-up app for your local neighbourhood barbeque party and you would like to quickly throw together a low-cost application that tracks who will bring what. Which of the following options should you choose?

A. Python, Flask, App Engine Standard

B. Ruby, Nginx, GKE

C. HTML, CSS, Cloud Storage

D. Node.js, Express, Cloud Functions

E. Rust, Rocket, App Engine Flex

F. Perl, CGI, GCE

ANSWER22:

A

Notes/References22:

The Cloud Storage option doesn’t offer any way to coordinate the guest data. App Engine Flex would cost much more to run when no one is on the sign-up site. Cloud Functions could handle processing some API calls, but it would be more work to set up and that option doesn’t mention anything about storage. GKE is way overkill for such a small and simple application. Running Perl CGI scripts on GCE would also cost more than it needs (and probably make you very sad). App Engine Standard makes it super-easy to stand up a Python Flask app and includes easy data storage options, too. 

Reference: Building a Python 3.7 App on App Engine
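
As a sketch of how little App Engine Standard asks for (the runtime choice and file layout here are assumptions, not from the source): with a Flask app object in main.py, a one-line app.yaml is the only configuration, and deployment is one command.

    # app.yaml contains just:
    #   runtime: python39
    # The python39 runtime's default entrypoint serves the `app` object
    # found in main.py.
    gcloud app deploy app.yaml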

Question 23: Your company has decided to migrate your AWS DynamoDB database to a multi-regional Cloud Spanner instance and you are designing the system to transfer and load all the data to synchronize the DBs and eventually allow for a quick cut-over. A member of your team has some previous experience working with Apache Hadoop. Which of the following options will you choose for the streamed updates that follow the initial import?

A. The DynamoDB table change is captured by Cloud Pub/Sub and written to Cloud Dataproc for processing into a Spanner-compatible format.

B. The DynamoDB table change is captured by Cloud Pub/Sub and written to Cloud Dataflow for processing into a Spanner-compatible format.

C. Changes to the DynamoDB table are captured by DynamoDB Streams. A Lambda function triggered by the stream writes the change to Cloud Pub/Sub. Cloud Dataflow processes the data from Cloud Pub/Sub and writes it to Cloud Spanner.

D. The DynamoDB table is rescanned by a GCE instance and written to a Cloud Storage bucket. Cloud Dataproc processes the data from Cloud Storage and writes it to Cloud Spanner.

E. The DynamoDB table is rescanned by an EC2 instance and written to an S3 bucket. Storage Transfer Service moves the data from S3 to a Cloud Storage bucket. Cloud Dataflow processes the data from Cloud Storage and writes it to Cloud Spanner.

ANSWER23:

C

Notes/References23:

Rescanning the DynamoDB table is not an appropriate approach to tracking data changes to keep the GCP-side of this in synch. The fact that someone on your team has previous Hadoop experience is not a good enough reason to choose Cloud Dataproc; that’s a red herring. The options purporting to connect Cloud Pub/Sub directly to the DynamoDB table won’t work because there is no such functionality. 

References: Cloud Solutions Architecture Reference

Question 24: Your client is a manufacturing company and they have informed you that they will be pausing all normal business activities during a five-week summer holiday period. They normally employ thousands of workers who constantly connect to their internal systems for day-to-day manufacturing data such as blueprints and machine imaging, but during this period the few on-site staff will primarily be re-tooling the factory for the next year’s production runs and will not be performing any manufacturing tasks that need to access these cloud-based systems. When the bulk of the staff return, they will primarily work on the new models but may spend about 20% of their time working with models from previous years. The company has asked you to reduce their GCP costs during this time, so which of the following options will you suggest?

A. Pause all Cloud Functions via the UI and unpause them when work starts back up.

B. Disable all Cloud Functions via the command line and re-enable them when work starts back up.

C. Delete all Cloud Functions and recreate them when work starts back up.

D. Convert all Cloud Functions to run as App Engine Standard applications during the break.

E. None of these options is a good suggestion.

ANSWER24:

E

Notes/References24:

Cloud Functions scale themselves down to zero when they’re not being used. There is no need to do anything with them.

Question 25: You need a place to store images before updating them by file-based render farm software running on a cluster of machines. Which of the following options will you choose?

A. Container Registry

B. Cloud Storage

C. Cloud Filestore

D. Persistent Disk

ANSWER25:

C

Notes/References25:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” refers to visual images, thus eliminating CI/CD products like Container Registry. The term “file-based” software means that it is unlikely to work well with object-based storage like Cloud Storage (or any of its storage classes). Persistent Disk cannot offer shared access across a cluster of machines when writes are involved; it only handles multiple readers. However, Cloud Filestore is made to provide shared, file-based storage for a cluster of machines as described in the question.

Reference: Cloud Filestore | Google Cloud

Question 26: Your company has decided to migrate your AWS DynamoDB database to a multi-regional Cloud Spanner instance and you are designing the system to transfer and load all the data to synchronize the DBs and eventually allow for a quick cut-over. A member of your team has some previous experience working with Apache Hadoop. Which of the following options will you choose for the initial data import?

A. The DynamoDB table is scanned by an EC2 instance and written to an S3 bucket. Storage Transfer Service moves the data from S3 to a Cloud Storage bucket. Cloud Dataflow processes the data from Cloud Storage and writes it to Cloud Spanner.

B. The DynamoDB table data is captured by DynamoDB Streams. A Lambda function triggered by the stream writes the data to Cloud Pub/Sub. Cloud Dataflow processes the data from Cloud Pub/Sub and writes it to Cloud Spanner.

C. The DynamoDB table data is captured by Cloud Pub/Sub and written to Cloud Dataproc for processing into a Spanner-compatible format.

D. The DynamoDB table is scanned by a GCE instance and written to a Cloud Storage bucket. Cloud Dataproc processes the data from Cloud Storage and writes it to Cloud Spanner.

ANSWER26:

A

Notes/References26:

The same data processing will have to happen for both the initial (batch) data load and the incremental (streamed) data changes that follow it. So if the solution built to handle the initial batch doesn't also work for the stream that follows it, then the processing code would have to be written twice. A Professional Cloud Architect should recognize this project-level issue and not over-focus on the (batch) portion called out in this particular question. This is why you don’t want to choose Cloud Dataproc. Instead, Cloud Dataflow will handle both the initial batch load and also the subsequent streamed data. The fact that someone on your team has previous Hadoop experience is not a good enough reason to choose Cloud Dataproc; that’s a red herring. The DynamoDB streams option would be great for the db synchronization that follows, but it can’t handle the initial data load because DynamoDB Streams only fire for data changes. The option purporting to connect Cloud Pub/Sub directly to the DynamoDB table won’t work because there is no such functionality. 

Reference: Cloud Solutions Architecture Reference

Question 27: You need a managed service to handle logging data coming from applications running in GKE and App Engine Standard. Which option should you choose?

A. Cloud Storage

B. Logstash

C. Cloud Monitoring

D. Cloud Logging

E. BigQuery

F. BigTable

ANSWER27:

D

Notes/References27:

Cloud Monitoring is made to handle metrics, not logs. Logstash is not a managed service. And while you could store application logs in almost any storage service, the Cloud Logging service–aka Stackdriver Logging–is purpose-built to accept and process application logs from many different sources. Oh, and you should also be comfortable dealing with products and services by names other than their current official ones. For example, “GKE” used to be called “Container Engine”, “Cloud Build” used to be “Container Builder”, the “GCP Marketplace” used to be called “Cloud Launcher”, and so on. 

Reference: Cloud Logging | Google Cloud

Question 28: You need a place to store images before serving them from AppEngine Standard. Which of the following options will you choose?

A. Compute Engine

B. Cloud Filestore

C. Cloud Storage

D. Persistent Disk

E. Container Registry

F. Cloud Source Repositories

G. Cloud Build

H. Nearline

ANSWER28:

C

Notes/References28:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” refers to picture files, because that’s something that you would serve from a web server product like AppEngine Standard, so we eliminate Cloud Build (which isn’t actually for storage, at all) and the other two CI/CD products: Cloud Source Repositories and Container Registry. You definitely could store image files on Cloud Filestore or Persistent Disk, but you can’t hook those up to AppEngine Standard, so those options need to be eliminated, too. The only options left are both types of Cloud Storage, but since “Cloud Storage” sits next to “Nearline” as an option, we can confidently infer that the former refers to the “Standard” storage class. Since the question implies that these images will be served by AppEngine Standard, we would prefer to use the Standard storage class over the Nearline one–so there’s our answer.

Reference: The App Engine Standard Environment; Cloud Storage: Object Storage | Google Cloud; Storage classes | Cloud Storage | Google Cloud

Question 29: You need to ensure low-latency global access to data stored in a multi-regional GCS bucket. Data access is uniform across many objects and relatively low. What should you do to address the latency concerns?

A. Use a global Cloud Spanner instance.

B. Change the storage class to multi-regional.

C. Use Google’s Cloud CDN.

D. Migrate the data to a new regional GCS bucket.

E. Do nothing.

F. Use global BigTable storage.

ANSWER29:

E

Notes/References29:

Cloud Functions cannot be used to affect GCS data access, so that option is simply wrong. BigTable does not have any “global” mode, so that option is wrong, too. Cloud Spanner is not a good replacement for GCS data: the data use cases are different enough that we can assume it would probably not be a good fit. You cannot change a bucket’s location after it has been created–not via the storage class nor any other way; you would have to migrate the data to a new bucket. But migrating the data to a regional bucket only helps when the data access will primarily be from that region. Google’s Cloud CDN is very easy to turn on, but it only works for data that comes from within GCP and only if the objects are being accessed frequently enough to get cached based on previous requests. Because the access per object is so low, Cloud CDN won’t really help. This then brings us back to the question. Now, it may seem implied, but the question does not specifically state that there is currently a problem with latency, only that you need to ensure low latency–and we are already using what would be the best fit for this situation: a multi-regional GCS bucket.

Reference: Google Cloud Storage : What bucket class for the best performance?

Question 30: You need to ensure low-latency GCP access to a volume of historical data that is currently stored in an S3 bucket. Data access is uniform across many objects and relatively high. What should you do to address the latency concerns?

A. Use Premium Tier routing and Cloud Functions to accelerate access at the edges.

B. Use Google’s Cloud CDN.

C. Use global BigTable storage.

D. Do nothing.

E. Migrate the data to a new multi-regional GCS bucket.

F. Use a global Cloud Spanner instance.

ANSWER30:

E

Notes/References30:

Cloud Functions cannot be used to affect GCS data access, so that option is simply wrong. BigTable does not have any “global” mode, so that option is wrong, too. Cloud Spanner is not a good replacement for GCS data: the data use cases are different enough that we can assume it would probably not be a good fit–and it would likely be unnecessarily expensive. You cannot change a bucket’s location after it has been created–not via the storage class nor any other way; you would have to migrate the data to a new bucket. Google’s Cloud CDN is very easy to turn on, but it only works for data that comes from within GCP and only if the objects are being accessed frequently enough. So even if you would want to use Cloud CDN, you have to migrate the data into a GCS bucket first, so that’s a better option.

Reference: Google Cloud Storage : What bucket class for the best performance?

Question 31: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend and backend tiers and will be deployed in three regions. How many subnets will you need?

A. Six

B. One

C. Three

D. Four

E. Two

F. Nine

ANSWER31:

A

Notes/References31:

A single subnet spans and can be used across all zones in a single region, but you will need different subnets in different regions. Also, to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have two tiers which will each need their own subnet in each of the three regions in which you will deploy this system. 

Reference: VPC network overview | Google Cloud; Best practices and reference architectures for VPC design | Solutions
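
A sketch of two of the six subnet creations; the same pattern repeats per tier in each region (network name and CIDR ranges are hypothetical):

    gcloud compute networks subnets create frontend-us \
        --network=lifted-vpc --region=us-central1 --range=10.0.1.0/24
    gcloud compute networks subnets create backend-us \
        --network=lifted-vpc --region=us-central1 --range=10.0.2.0/24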

Question 32: You need a place to produce images before deploying them to AppEngine Flex. Which of the following options will you choose?

A. Container Registry

B. Cloud Storage

C. Persistent Disk

D. Nearline

E. Cloud Source Repositories

F. Cloud Build

G. Cloud Filestore

H. Compute Engine

ANSWER32:

F

Notes/References32:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “deploying [these images] to AppEngine Flex” lets us know that we are dealing with Docker container images, and thus although they would likely be stored in the Container Registry, after being built, this question asks us where that building might happen, which is Cloud Build. Cloud Build, which used to be called Container Builder, is ideal for building container images–though it can also be used to build almost any artifacts, really. You could also do this on Compute Engine, but that option requires much more work to manage and is therefore worse. 

Reference: Google App Engine flexible environment docs | Google Cloud; Container Registry | Google Cloud
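
In its simplest form, producing and publishing an image with Cloud Build is a single command run from the source directory (project and image names are hypothetical):

    # Builds the Dockerfile remotely and pushes the result to
    # Container Registry.
    gcloud builds submit --tag gcr.io/my-project/backend-image:v1 .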

Question 33: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend, app, and data tiers and will be deployed in three regions. How many subnets will you need?

A. Two

B. One

C. Three

D. Nine

E. Four

F. Six

ANSWER33:

D

Notes/References33:

A single subnet spans and can be used across all zones in a single region, but you will need different subnets in different regions. Also, to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have three tiers which will each need their own subnet in each of the three regions in which you will deploy this system. 

Reference: VPC network overview | Google Cloud; Best practices and reference architectures for VPC design | Solutions

Question 34: You need a place to store images in case any of them are needed as evidence for a tax audit over the next seven years. Which of the following options will you choose?

A. Cloud Filestore

B. Coldline

C. Nearline

D. Persistent Disk

E. Cloud Source Repositories

F. Cloud Storage

G. Container Registry

ANSWER34:

B

Notes/References34:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” probably refers to picture files, and so Cloud Storage seems like an interesting option. But even still, when “Cloud Storage” is used without any qualifier, it generally refers to the “Standard” storage class, and this question also offers other storage classes as response options. Because the images in this scenario are unlikely to be used more than once a year (we can assume that taxes are filed annually and there’s less than 100% chance of being audited), the right storage class is Coldline. 

Reference: Cloud Storage: Object Storage | Google Cloud; Storage classes | Cloud Storage | Google Cloud

Question 35: You need a place to store images before deploying them to AppEngine Flex. Which of the following options will you choose?

A. Container Registry

B. Cloud Filestore

C. Cloud Source Repositories

D. Persistent Disk

E. Cloud Storage

F. Cloud Build

G. Nearline

ANSWER35:

A

Notes/References35:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “deploying [these images] to AppEngine Flex” lets us know that we are dealing with Docker container images, and thus they would likely have been stored in the Container Registry.

Reference: Google App Engine flexible environment docs | Google Cloud; Container Registry | Google Cloud

Question 36: You are configuring a SaaS security application that updates your network’s allowed traffic configuration to adhere to internal policies. How should you set this up?

A. Install the application on a new appropriately-sized GCE instance running in your host VPC, and apply a read-only service account to it.

B. Create a new service account for the app to use and grant it the compute.networkViewer role on the production VPC.

C. Create a new service account for the app to use and grant it the compute.securityAdmin role on the production VPC.

D. Run the application as a container in your system’s staging GKE cluster and grant it access to a read-only service account.

E. Install the application on a new appropriately-sized GCE instance running in your host VPC, and let it use the default service account.

ANSWER36:

C

Notes/References36:

You do not install a Software-as-a-Service application yourself; instead, it runs on the vendor's own hardware and you configure it for external access. Service accounts are great for this, as they can be used externally and you maintain full control over them (disabling them, rotating their keys, etc.). The principle of least privilege dictates that you should not give any application more ability than it needs, but this app does need to make changes, so you'll need to grant securityAdmin, not networkViewer. 

Reference: VPC network overview | Google Cloud; Best practices and reference architectures for VPC design | Solutions; Understanding roles | Cloud IAM Documentation | Google Cloud
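
A sketch of the two steps (project and account names are hypothetical):

    # Dedicated identity for the SaaS application.
    gcloud iam service-accounts create saas-security-app \
        --display-name="SaaS security appliance"
    # Grant it only what it needs to manage allowed-traffic configuration.
    gcloud projects add-iam-policy-binding my-project \
        --member="serviceAccount:saas-security-app@my-project.iam.gserviceaccount.com" \
        --role="roles/compute.securityAdmin"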

Question 37: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend and backend tiers and will be deployed across three zones. How many subnets will you need?

A. One

B. Six

C. Four

D. Three

E. Nine

F. Two

ANSWER37:

F

Notes/References37:

A single subnet spans and can be used across all zones in a given region. But to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have two tiers, so you only need two subnets. 

Reference: VPC network overview | Google Cloud Best practices and reference architectures for VPC design | Solutions

Question 38: You have been tasked with setting up a system to comply with corporate standards for container image approvals. Which of the following is your best choice for this project?

A. Binary Authorization

B. Cloud IAM

C. Security Key Enforcement

D. Cloud SCC

E. Cloud KMS

ANSWER38:

A

Notes/References38:

Cloud KMS is Google's product for managing encryption keys. Security Key Enforcement is about making sure that people's accounts do not get taken over by attackers, not about managing encryption keys. Cloud IAM is about managing what identities (both humans and services) can access in GCP. Cloud DLP–or Data Loss Prevention–is for preventing data loss by scanning for and redacting sensitive information. Cloud SCC–the Security Command Center–centralizes security information so you can manage it all in one place. Binary Authorization is about making sure that only properly-validated containers can run in your environments. 

Reference: Cloud Key Management Service | Google Cloud; Cloud IAM | Google Cloud; Cloud Data Loss Prevention | Google Cloud; Security Command Center | Google Cloud; Binary Authorization | Google Cloud; Security Key Enforcement – 2FA
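
For a feel of where that standard gets enforced, the project's Binary Authorization policy is a document you can pull down, edit to require attestations, and push back (a sketch; the file name is hypothetical):

    gcloud container binauthz policy export > policy.yaml
    # ...edit policy.yaml to require signed attestations...
    gcloud container binauthz policy import policy.yaml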

Question 39: For this question, refer to the Company B case study. Which two of the following are most likely to impact the operations of Company B’s game backend and analytics systems?

A. PCI

B. PII

C. SOX

D. GDPR

E. HIPAA

ANSWER39:

B and D

Notes/References39:

There is no patient/health information, so HIPAA does not apply. It would be a very bad idea to put payment card information directly into these systems, so we should assume they’ve not done that–therefore the Payment Card Industry (PCI) standards/regulations should not affect normal operation of these systems. Besides, it’s entirely likely that they never deal with payments directly, anyway–choosing to offload that to the relevant app stores for each mobile platform. Sarbanes-Oxley (SOX) is about proper management of financial records for publicly traded companies and should therefore not apply to these systems. However, these systems are likely to contain some Personally-Identifying Information (PII) about the users who may reside in the European Union and therefore the EU’s General Data Protection Regulations (GDPR) will apply and may require ongoing operations to comply with the “Right to be Forgotten/Erased”. 

Reference: Sarbanes–Oxley Act – Wikipedia; Payment Card Industry Data Security Standard – Wikipedia; Personal data – Wikipedia

Question 40: Your new client has advised you that their organization falls within the scope of HIPAA. What can you infer about their information systems?

A. Their customers located in the EU may require them to delete their user data and provide evidence of such.

B. They will also need to pass a SOX audit.

C. They handle money-linked information.

D. Their system deals with medical information.

ANSWER40:

D

Notes/References40:

SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals' (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others). 

Reference: Cloud Compliance & Regulations Resources | Google Cloud

Question 41: Your new client has advised you that their organization needs to pass audits by ISO and PCI. What can you infer about their information systems?

A. They handle money-linked information.

B. Their customers located in the EU may require them to delete their user data and provide evidence of such.

C. Their system deals with medical information.

D. They will also need to pass a SOX audit.

ANSWER41:

A

Notes/References41:

SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals' (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others). ISO is the International Standards Organization, and since they have so many completely different certifications, this does not tell you much. 

Reference: Cloud Compliance & Regulations Resources | Google Cloud

Question 43: Your new client has advised you that their organization deals with GDPR. What can you infer about their information systems?

A. Their system deals with medical information.

B. Their customers located in the EU may require them to delete their user data and provide evidence of such.

C. They will also need to pass a SOX audit.

D. They handle money-linked information.

ANSWER43:

B

Notes/References43:

SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals' (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others). 

Reference: Cloud Compliance & Regulations Resources | Google Cloud

Question 44: For this question, refer to the Company C case study. Once Company C has completed their initial cloud migration as described in the case study, which option would represent the quickest way to migrate their production environment to GCP?

A. Apply the strangler pattern to their applications and reimplement one piece at a time in the cloud

B. Lift and shift all servers at one time

C. Lift and shift one application at a time

D. Lift and shift one server at a time

E. Set up cloud-based load balancing then divert traffic from the DC to the cloud system

F. Enact their disaster recovery plan and fail over

ANSWER44:

F

Notes/References44:

The proposed lift-and-shift options all describe different situations than Company C would find themselves in at that time: they’d then have automation to build a complete prod system in the cloud, but they’d just need to migrate to it. “Just”, right? 🙂 The strangler pattern approach is similarly problematic (in this case), in that it proposes a completely different cloud migration strategy than the one they’ve almost completed. Now, if we purely consider the kicker’s key word “quickest”, using the DR plan to fail over definitely seems like it wins. Setting up an additional load balancer and migrating slowly/carefully would take more time.

Reference: Strangler pattern – Cloud Design Patterns | Microsoft Docs; Strangler Fig Application; Monolith to Microservices Using the Strangler Pattern – DZone Microservices; Understanding Lift and Shift and If It’s Right For You

Question 45: Which of the following commands is most likely to appear in an environment setup script?

A. gsutil mb -l asia gs://${project_id}-logs

B. gcloud compute instances create --zone=<zone> --machine-type=n1-highmem-16 newvm

C. gcloud compute instances create --zone=<zone> --machine-type=f1-micro newvm

D. gcloud compute ssh ${instance_id}

E. gsutil cp -r gs://${project_id}-setup ./install

F. gsutil cp -r logs/* gs://${project_id}-logs/${instance_id}/

ANSWER45:

A

Notes/References45:

The context here indicates that “environment” is an infrastructure environment like “staging” or “prod”, not just a particular command shell. In that sort of a situation, it is likely that you might create some core per-environment buckets that will store different kinds of data like configuration, communication, logging, etc. You're not likely to be creating, deleting, or connecting (sshing) to instances, nor copying files to or from any instances. 

Reference: mb – Make buckets | Cloud Storage | Google Cloud; cp – Copy files and objects | Cloud Storage | Google Cloud; gcloud compute instances | Cloud SDK Documentation | Google Cloud

Question 46: Your developers are working to expose a RESTful API for your company’s physical dealer locations. Which of the following endpoints would you advise them to include in their design?

A. /dealerLocations/get

B. /dealerLocations

C. /dealerLocations/list

D. Source and destination

E. /getDealerLocations

ANSWER46:

B

Notes/References46:

It might not feel like it, but this is in scope and a fair question. Google expects Professional Cloud Architects to be able to advise on designing APIs according to best practices (check the exam guide!). In this case, it's important to know that RESTful interfaces (when properly designed) use nouns for the resources identified by a given endpoint. That, by itself, eliminates most of the listed options. In HTTP, verbs like GET, PUT, and POST are then used to interact with those endpoints to retrieve and act upon those resources. To choose between the two noun-named options, it helps to know that plural resources are generally already understood to be lists, so there should be no need to add another “/list” to the endpoint. 

Reference: RESTful API Design — Step By Step Guide
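
A quick illustration of the convention (purely illustrative, not from the source): the endpoint names a plural resource, and the HTTP verbs carry the actions.

    GET    /dealerLocations         # list all dealer locations
    GET    /dealerLocations/{id}    # fetch a single location
    POST   /dealerLocations        # create a new location
    PUT    /dealerLocations/{id}    # replace an existing location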

Question 47: Which of the following commands is most likely to appear in an instance shutdown script?

A. gsutil cp -r gs://${project_id}-setup ./install

B. gcloud compute instances create --zone=<zone> --machine-type=n1-highmem-16 newvm

C. gcloud compute ssh ${instance_id}

D. gsutil mb -l asia gs://${project_id}-logs

E. gcloud compute instances delete ${instance_id}

F. gsutil cp -r logs/* gs://${project_id}-logs/${instance_id}/

G. gcloud compute instances create --zone=<zone> --machine-type=f1-micro newvm

ANSWER47:

F

Notes/References47:

The startup and shutdown scripts run on an instance at the time when that instance is starting up or shutting down. Those situations do not generally call for any other instances to be created, deleted, or connected (sshed) to. Also, those would be a very unusual time to make a Cloud Storage bucket, since buckets are the overall and highly-scalable containers that would likely hold the data for all (or at least many) instances in a given project. That said, instance shutdown time may be a time when you'd want to copy some final logs from the instance into some project-wide bucket. (In general, though, you really want to be doing that kind of thing continuously and not just at shutdown time, in case the instance shuts down unexpectedly and not in an orderly fashion that runs your shutdown script.)

Reference: Running startup scripts | Compute Engine Documentation | Google Cloud; Running shutdown scripts | Compute Engine Documentation | Google Cloud; cp – Copy files and objects | Cloud Storage | Google Cloud; gcloud compute instances | Cloud SDK Documentation | Google Cloud

Question 48: It is Saturday morning and you have been alerted to a serious issue in production that is both reducing availability to 95% and corrupting some data. Your monitoring tools noticed the issue 5 minutes ago and it was just escalated to you because the on-call tech in line before you did not respond to the page. Your system has an RPO of 10 minutes and an RTO of 120 minutes, with an SLA of 90% uptime. What should you do first?

A. Escalate the decision to the business manager responsible for the SLA

B. Take the system offline

C. Revert the system to the state it was in on Friday morning

D. Investigate the cause of the issue

ANSWER48:

B

Notes/References48:

The data corruption is your primary concern, as your Recovery Point Objective allows only 10 minutes of data loss and you may already have lost 5. (The data corruption means that you may well need to roll back the data to before that started happening.) It might seem crazy, but you should as quickly as possible stop the system so that you do not lose any more data. It would almost certainly take more time than you have left in your RPO to properly investigate and address the issue, but you should then do that next, during the disaster response clock set by your Recovery Time Objective. Escalating the issue to a business manager doesn't make any sense. And neither does it make sense to knee-jerk revert the system to an earlier state unless you have some good indication that doing so will address the issue. Plus, we'd better assume that “revert the system” refers only to the deployment and not the data, because rolling the data back that far would definitely violate the RPO. 

Reference: Disaster recovery – Wikipedia

Question 49: Which of the following are not processes or practices that you would associate with DevOps?

A. Raven-test the candidate

B. Obfuscate the code

C. Only one of the other options is made up

D. Run the code in your cardinal environment

E. Do a canary deploy

ANSWER49:

A and D

Notes/References49:

Testing your understanding of development and operations in DevOps. In particular, you need to know that a canary deploy is a real thing and it can be very useful to identify problems with a new change you're making before it is fully rolled out to and therefore impacts everyone. You should also understand that “obfuscating” code is a real part of a release process that seeks to protect an organization's source code from theft (by making it unreadable by humans) and usually happens in combination with “minification” (which improves the speed of downloading and interpreting/running the code). On the other hand, “raven-testing” isn't a thing, and neither is a “cardinal environment”. Those bird references are just homages to canary deployments.

Reference: Intro to deployment strategies: blue-green, canary, and more – DEV Community ‍‍

Question 50: Your CTO is going into budget meetings with the board, next month, and has asked you to draw up plans to optimize your GCP-based systems for capex. Which of the following options will you prioritize in your proposal?

A. Object lifecycle management

B. BigQuery Slots

C. Committed use discounts

D. Sustained use discounts

E. Managed instance group autoscaling

F. Pub/Sub topic centralization

ANSWER50:

B and C

Notes/References50:

Pub/Sub usage is based on how much data you send through it, not any sort of “topic centralization” (which isn't really a thing). Sustained use discounts can reduce costs, but that's not really something you structure your system around. Now, most organizations prefer to turn Capital Expenditures into Operational Expenses, but since this question is instead asking you to prioritize CapEx, we need to consider the remaining options from the perspective of “spending” (or maybe reserving) defined amounts of money up-front for longer-term use. (Fair warning, though: you may still have some trouble classifying some cloud expenses as “capital” expenditures.) With that in mind, GCE's Committed Use Discounts do fit: you “buy” (reserve/prepay) some instances ahead of time and then don't have to pay (again) for them as you use them (or don't use them; you've already paid). BigQuery Slots are a similar flat-rate pricing model: you pre-purchase a certain amount of BigQuery processing capacity, and your queries use that instead of the on-demand capacity. That means you won't pay more than you planned/purchased, but your queries may finish rather more slowly, too. Managed instance group autoscaling and object lifecycle management can help to reduce costs, but they are not really about CapEx.
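
As a rough sketch, a committed use discount can be purchased from the CLI along these lines (the commitment name, region, and resource amounts are illustrative):

# Hypothetical 12-month commitment for 16 vCPUs and 64 GB of memory in one region
gcloud compute commitments create my-commitment --plan=12-month --region=us-central1 --resources=vcpu=16,memory=64GB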

Reference: CapEx vs OpEx: Capital Expenses and Operating Expenses Explained – BMC Blogs; Sustained use discounts | Compute Engine Documentation | Google Cloud; Committed use discounts | Compute Engine Documentation | Google Cloud; Slots | BigQuery | Google Cloud; Autoscaling groups of instances | Compute Engine Documentation; Object Lifecycle Management | Cloud Storage | Google Cloud

Question 51: In your last retrospective, there was significant disagreement voiced by the members of your team about what part of your system should be built next. Your scrum master is currently away, but how should you proceed when she returns on Monday?

A. The scrum master is the one who decides

B. The lead architect should get the final say

C. The product owner should get the final say

D. You should put it to a vote of key stakeholders

E. You should put it to a vote of all stakeholders

ANSWER51:

C

Notes/References51:

In Scrum, it is the Product Owner's role to define and prioritize (i.e. set order for) the product backlog items that the dev team will work on. If you haven't ever read it, the Scrum Guide is not too long and quite valuable to have read at least once, for context. 

Reference: Scrum Guide | Scrum Guides

Question 52: Your development team needs to evaluate the behavior of a new version of your application for approximately two hours before committing to making it available to all users. Which of the following strategies will you suggest?

A. Split testing

B. Red-Black

C. A/B

D. Canary

E. Rolling

F. Blue-Green

G. Flex downtime

ANSWER52:

D and E

Notes/References52:

A Blue-Green deployment, also known as a Red-Black deployment, entails having two complete systems set up and cutting over from one of them to the other with the ability to cut back to the known-good old one if there’s any problem with the experimental new one. A canary deployment is where a new version of an app is deployed to only one (or a very small number) of the servers, to see whether it experiences or causes trouble before that version is rolled out to the rest of the servers. When the canary looks good, a Rolling deployment can be used to update the rest of the servers, in-place, one after another to keep the overall system running. “Flex downtime” is something I just made up, but it sounds bad, right? A/B testing–also known as Split testing–is not generally used for deployments but rather to evaluate two different application behaviours by showing both of them to different sets of users. Its purpose is to gather higher-level information about how users interact with the application. 
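
For illustration, both the canary and the rolling phase map onto managed instance group commands along these lines (the group, zone, and template names are assumptions):

# Canary: route roughly 10% of the group to the new template first
gcloud compute instance-groups managed rolling-action start-update my-mig --zone=us-central1-a --version=template=current-template --canary-version=template=new-template,target-size=10%

# Once the canary looks good, roll the new template out to the whole group, one instance at a time
gcloud compute instance-groups managed rolling-action start-update my-mig --zone=us-central1-a --version=template=new-template --max-unavailable=1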

Reference: BlueGreenDeployment; design patterns – What's the difference between Red/Black deployment and Blue/Green Deployment? – Stack Overflow; What is rolling deployment? – Definition from WhatIs.com; A/B testing – Wikipedia

Question 53: You are mentoring a Junior Cloud Architect on software projects. Which of the following “words of wisdom” will you pass along?

A. Identifying and fixing one issue late in the product cycle could cost the same as handling a hundred such issues earlier on

B. Hiring and retaining 10X developers is critical to project success

C. A key goal of a proper post-mortem is to identify what processes need to be changed

D. Adding 100% is a safe buffer for estimates made by skilled estimators at the beginning of a project

E. A key goal of a proper post-mortem is to determine who needs additional training

ANSWER53:

A and C

Notes/References53:

There really can be 10X (and even larger!) differences in productivity between individual contributors, but projects do not only succeed or fail because of their contributions. Bugs are crazily more expensive to find and fix once a system has gone into production, compared to identifying and addressing that issue right up front–yes, even 100x. A post-mortem should not focus on blaming an individual but rather on understanding the many underlying causes that led to a particular event, with an eye toward how such classes of problems can be systematically prevented in the future. 

Reference: Google – Site Reliability Engineering; The Cone of Uncertainty

Question 54: Your team runs a service with an SLA to achieve p99 latency of 200ms. This month, your service achieved p95 latency of 250ms. What will happen now?

A. The next month’s SLA will be increased.

B. The next month’s SLO will be reduced.

C. Your client(s) will have to pay you extra.

D. You will have to pay your client(s).

E. There is no impact on payments.

F. There is not enough information to make a determination.

ANSWER54:

D

Notes/References54:

It would be highly unusual for clients to have to pay extra, even if the service performs better than agreed by the SLA. SLAs generally set out penalties (i.e. you pay the client) for below-standard performance. While SLAs are external-facing, SLOs are internal-facing and do not generally relate to performance penalties. Neither SLAs nor SLOs are adaptively changed just because of one month's performance; such changes would have to happen through rather different processes. A p99 metric is a tougher measure than p95, and p95 is tougher than p90. Here, since p99 latency is always at least as high as p95 latency, a p95 of 250ms means p99 was also at least 250ms, which violates the 200ms SLA, so you will have to pay your client(s).

Reference: What's the Difference Between DevOps and SRE? (class SRE implements DevOps) – YouTube; Percentile rank – Wikipedia

Question 55: Your team runs a service with an SLO to achieve p90 latency of 200ms. This month, your service achieved p95 latency of 250ms. What will happen now?

A. The next month’s SLA will be increased.

B. There is no impact on payments.

C. There is not enough information to make a determination.

D. Your client(s) will have to pay you extra.

E. The next month’s SLO will be reduced.

F. You will have to pay your client(s).

ANSWER55:

B

Notes/References55:

It would be highly unusual for clients to have to pay extra, even if the service performs better than agreed by an SLA, and SLAs generally set out penalties (i.e. you pay the client) only for below-standard performance. In any case, while SLAs are external-facing, SLOs are internal-facing and do not generally relate to performance penalties. Here, the achieved p95 of 250ms does not even tell us whether the p90 SLO of 200ms was met (p90 could still be below 200ms), but since this is an SLO rather than an SLA, there is no impact on payments either way. Neither SLAs nor SLOs are adaptively changed just because of one month's performance; such changes would have to happen through rather different processes.

Reference: What's the Difference Between DevOps and SRE? (class SRE implements DevOps) – YouTube; Percentile rank – Wikipedia

Question 56: For this question, refer to the Company C case study. How would you recommend Company C address their capacity and utilization concerns?

A. Configure the autoscaling thresholds to follow changing load

B. Provision enough servers to handle trough load and offload to Cloud Functions for higher demand

C. Run cron jobs on their application servers to scale down at night and up in the morning

D. Use Cloud Load Balancing to balance the traffic highs and lows

E. Run automated jobs in Cloud Scheduler to scale down at night and up in the morning

F. Provision enough servers to handle peak load and sell back excess on-demand capacity to the marketplace

ANSWER56:

A

Notes/References56:

The case study notes, “Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle.” Cloud Load Balancing could definitely scale itself to handle this type of load fluctuation, but it would not do anything to address the issue of having enough application server capacity. Provisioning servers to handle peak load is generally inefficient, but selling back excess on-demand capacity to the marketplace just isn’t a thing, so that option must be eliminated, too. Using Cloud Functions would require a different architectural approach for their application servers and it is generally not worth the extra work it would take to coordinate workloads across Cloud Functions and GCE–in practice, you’d just use one or the other. It is possible to manually effect scaling via automated jobs like in Cloud Scheduler or cron running somewhere (though cron running everywhere could create a coordination nightmare), but manual scaling based on predefined expected load levels is far from ideal, as capacity would only very crudely match demand. Rather, it is much better to configure the managed instance group’s autoscaling to follow demand curves–both expected and unexpected. A properly-architected system should rise to the occasion of unexpectedly going viral, and not fall over. 
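
As a minimal sketch, autoscaling that follows demand could be configured along these lines (the group name, zone, and thresholds are illustrative, not from the case study):

# Hypothetical autoscaler: float between 2 and 20 instances, targeting 60% average CPU
gcloud compute instance-groups managed set-autoscaling my-mig --zone=us-central1-a --min-num-replicas=2 --max-num-replicas=20 --target-cpu-utilization=0.6 --cool-down-period=90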

Reference: Load Balancing | Google Cloud; Google Cloud Platform Marketplace Solutions; Cloud Functions | Google Cloud; Cloud Scheduler | Google Cloud

Google Cloud Latest News, Questions and Answers online:

Cloud Run vs App Engine: In a nutshell, you give Google’s Cloud Run a Docker container containing a webserver. Google will run this container and create an HTTP endpoint. All the scaling is automatically done for you by Google. Cloud Run depends on the fact that your application should be stateless. This is because Google will spin up multiple instances of your app to scale it dynamically. If you want to host a traditional web application this means that you should divide it up into a stateless API and a frontend app.

With Google’s App Engine, you tell Google how your app should be run. App Engine will create and run a container from these instructions. Deploying with App Engine is super easy: you simply fill out an app.yaml file and Google handles everything for you.

With Cloud Run, you have more control. You can go crazy and build a ridiculous custom Docker image, no problem! Cloud Run is made for DevOps engineers, App Engine is made for developers. Read more here…
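
As a rough illustration of the difference in workflow (the service, image, and region names are assumptions):

# Cloud Run: you hand Google a container image and it creates the HTTP endpoint
# (optionally add --concurrency=1 for Cloud Functions-like isolation, one request per instance)
gcloud run deploy my-service --image=gcr.io/${project_id}/my-app --region=us-central1 --platform=managed

# App Engine: you hand Google instructions (app.yaml) and it builds and runs the app for you
gcloud app deploy app.yaml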

Cloud Run VS Cloud Functions: What to consider?

The best choice depends on what you want to optimize, your use-cases and your specific needs.

If your objective is the lowest latency, choose Cloud Run.

Indeed, Cloud Run always uses 1 vCPU (at least 2.4 GHz), and you can choose the memory size from 128 MB to 2 GB.

With Cloud Functions, if you want the best processing performance (2.4 GHz of CPU), you have to pay for 2 GB of memory. If your memory footprint is low, a Cloud Function with 2 GB of memory is overkill and needlessly expensive.

Cutting cost is not always the best strategy for customer satisfaction, but business reality may require it. In any case, it depends highly on your use case.

Both Cloud Run and Cloud Functions round up to the nearest 100ms. As you can see by playing with the comparison GSheet, Cloud Functions are cheaper when the processing time of one request is below the first 100ms. Indeed, you can slow down the Cloud Functions vCPU, which increases the duration of the processing; if you tune it to stay under 100ms, fewer GHz-seconds are used and you thereby pay less.

The cost comparison between Cloud Functions and Cloud Run goes further than simply comparing a pricing list. Moreover, on your projects you will often have to use the two solutions together to take advantage of each one's strengths and capabilities.

My first choice for development is Cloud Run. Its portability, its testability, and its openness with respect to libraries, languages, and binaries give it too many advantages to pass up for, at worst, similar pricing, and often a real advantage in cost as well as in performance, in particular for concurrent requests. Even if you need the same level of isolation as Cloud Functions (1 instance per request), simply set the concurrency parameter to 1!

In addition, the GA of Cloud Run applies to all containers, whatever the languages and binaries used. Read more here…

What does the launch of Google’s App Maker mean for professional app developers?

Should I go with AWS Elastic Beanstalk or Google App Engine (Managed VMs) for deploying my Parse-Server backend?

Why can a company such as Google sell me a cloud gaming service where I can “rent” GPU power over miles of internet, but when I seek out information on how to create a version of this everyone says that it is not possible or has too much latency?

AWS wins hearts of developers while Azure those of C-levels. Google is a black horse with special expertise like K8s and ML. The cloud world is evolving. Who is the winner in the next 5 years?

What is GCP (Google Cloud Platform) and how does it work?

What is the maximum amount of storage that you could have in your Google drive?

How do I deploy Spring Boot application (Web MVC) on Google App Engine(GAE) or HEROKU using Eclipse IDE?

What are some downsides of building softwares on top of Google App Engine?

Why is Google losing the cloud computing race?

How did new products like Google Drive, Microsoft SkyDrive, Yandex.Disk and other cloud storage solutions affect Dropbox’s growth and revenues?

What is the capacity of Google servers?

What is the Hybrid Cloud platform?

What is the difference between Docker and Google App engines?

How do I get to cloud storage?

How does Google App Engine compare to Heroku?

What is equivalent of Google Cloud BigTable in Microsoft Azure?

How big is the storage capacity of Google organization and who comes second?

It seems strange that Google Cloud Platform offer “everything” except cloud search/inverted index?

Where are the files on Google Drive stored?

Is Google app engine similar to lambda?

Was Diane Greene a failure as the CEO of Google Cloud considering her replacement’s strategy and philosophy is the polar opposite?

How is Google Cloud for heavy real-time traffic? Is there any optimization needed for handling more than 100k RT?

When it comes to iCloud, why does Apple rely on Google Cloud instead of using their own data centers?

Google Cloud Storage: what bucket class gives the best performance? Multiregional buckets perform significantly better for cross-the-ocean fetches; however, the details are a bit more nuanced than that. The performance is dominated by the latency of the physical distance between the client and the cloud storage bucket.

  • If caching is on, and your access volume is high enough to take advantage of caching, there’s not a huge difference between the two offerings (that I can see with the tests). This shows off the power of Google’s Awesome CDN environment.
  • If caching is off, or the access volume is low enough that you can’t take advantage of caching, then the performance overhead is dominated directly by physics. You should be trying to get the assets as close to the clients as possible, while also considering cost, and the types of redundancy and consistency you’ll need for your data needs.

Top high-paying certifications:

  1. Google Certified Professional Cloud Architect – $139,529
  2. PMP® – Project Management Professional – $135,798
  3. Certified ScrumMaster® – $135,441
  4. AWS Certified Solutions Architect – Associate – $132,840
  5. AWS Certified Developer – Associate – $130,369
  6. Microsoft Certified Solutions Expert (MCSE): Server Infrastructure – $121,288
  7. ITIL® Foundation – $120,566
  8. CISM – Certified Information Security Manager – $118,412
  9. CRISC – Certified in Risk and Information Systems Control – $117,395
  10. CISSP – Certified Information Systems Security Professional – $116,900
  11. CEH – Certified Ethical Hacker – $116,306
  12. Citrix Certified Associate – Virtualization (CCA-V) – $113,442
  13. CompTIA Security+ – $110,321
  14. CompTIA Network+ – $107,143
  15. Cisco Certified Networking Professional (CCNP) Routing and Switching – $106,957

According to the 2020 Global Knowledge report, the top-paying cloud certifications for the year are (drumroll, please):

1- Google Certified Professional Cloud Architect — $175,761

2- AWS Certified Solutions Architect – Associate — $149,446

3- AWS Certified Cloud Practitioner — $131,465

4- Microsoft Certified: Azure Fundamentals — $126,653

5- Microsoft Certified: Azure Administrator Associate — $125,993

Sources:

1- Google Cloud

2- Linux Academy

3- WhizLabs

4- GCP Space on Quora

5- Udemy

6- Acloud Guru

7- Questions and Answers sent to us by good people all over the world.

AZ-900: Microsoft Azure Fundamentals – Top 50 Questions and Answers Dumps

Azure Administrator AZ-104 Exam Questions and Answers Dumps

The Cloud is the future: Get Certified now.

Microsoft Certified: Azure Fundamentals Average Salary — $126,653/year

Amazon’s AWS and Microsoft’s Azure are the big boys of the cloud computing world, even though AWS is much bigger than Azure.

Revenue from Microsoft Azure grew 72% over 2018, from $7.56 billion to $13 billion, and Azure contributed almost 10.5% of Microsoft's total revenue in 2019. It has also been noted that the US Department of Defense chose Azure for its tactical operations. Earnings in the last quarter of 2019 grew by 64%.

Azure Fundamentals exam is an opportunity to prove knowledge of cloud concepts (20%), Azure services (20%), Azure workloads, security and privacy in Azure (30%), as well as Azure pricing and support (25%). This blog also includes Azure Services Cheat Sheet.

The exam is intended for candidates who are just beginning to work with cloud-based solutions and services or are new to Azure. Candidates should be familiar with general technology concepts, including concepts of networking, storage, compute, application support, and application development.

Azure Fundamentals can be used to prepare for other Azure role-based or specialty certifications, but it is not a prerequisite for any of them.

Below are the top AZ-900 Microsoft Azure Fundamentals certification exam questions and answers dumps.

I- AZ-900 Cloud Concepts – Azure Services

Question 1: Microsoft Office 365 is an example of which cloud deployment model?

A. PaaS

B. IaaS

C. CASB

D. SaaS

Answer1:

D

Notes:

Software as a service (SaaS) allows users to connect to and use cloud-based apps over the internet. Common examples are email, calendar, and office tools, such as Microsoft Office 365.

Reference1: SAAS

Question 2: You have an on-premises application that processes incoming Simple Message Submission Service (SMSS) queue messages and records the data to a log file. You migrate this application to an Azure function app. What kind of cloud service would this be considered?

A. Software-as-a-Service (SaaS)

B. Infrastructure-as-a-Service (IaaS)

C. Serverless

D. Platform-as-a-Service (PaaS)

Answer2:

D

Notes2:

Serverless computing is the abstraction of servers, infrastructure, and operating systems. When you build serverless apps, you don’t need to provision and manage any servers, so you don't have to worry about infrastructure. Serverless computing is driven by the reaction to events and triggers happening in near-real time in the cloud.

Reference2: Platform-as-a-Service (PaaS)

Question 3: Define “economy of scale”.

A. Spending money on products or services now and being billed for them now. You can deduct this expense from your tax bill in the same year.

B. Spending money on physical infrastructure up front, and then deducting that expense from your tax bill over time.

C. Prices for individual resources and services are provided so you can predict how much you will spend in a given billing period based on your expected usage.

D. The ability to do things more efficiently or at a lower cost per unit when operating at a larger scale.

Answer 3:

D

Notes 3:

Cloud providers such as Microsoft, Google, and Amazon are large businesses that leverage the benefits of economies of scale and then pass the savings on to their customers.

Reference3: Cloud: Economies at scale

Question 4: Which of the following are characteristic of private clouds?

A. Lower costs

B. High scalability

C. Improved security

D. Limited flexibility

Answer 4:

B and C

Notes 4:

Private clouds still afford the scalability and efficiency of a public cloud. Resources are purchased and available to meet your business needs.

Video for reference: The Private Cloud Model

Because resources are not shared with others, private clouds provide higher levels of control and security.

Reference 4: The private cloud model

Question 5: Which of the following Azure solutions allows you to geographically cache and distribute high-bandwidth content, such as streaming videos, to users in different parts of the world?

A. Content Delivery Network (CDN)

B. Load Balancer

C. Application Gateway

D. Virtual Network Gateway

Answer 5:

A

Notes 5:

Azure Content Delivery Network (CDN) offers developers a global solution for rapidly delivering high-bandwidth content to users by caching their content at strategically placed physical nodes around the world. Azure CDN can also accelerate dynamic content, which cannot be cached, by leveraging various network optimizations using CDN POPs.

Reference 5: CDN

Question 6: You are beginning to extend your on-premises data center into Azure. You have created a new Azure subscription and resource group called RG-One. You deploy two virtual machines into RG-One with the intent of promoting these to Active Directory domain controllers. What kind of cloud service would this be considered?

A. Platform-as-a-Service (PaaS)

B. Infrastructure-as-a-Service (IaaS)

C. Software-as-a-Service (SaaS)

D. Hybrid-as-a-Service (HaaS)

Answer 6:

B

Notes 6:

Infrastructure as a service (IaaS) is an instant computing infrastructure, provisioned and managed over the internet. Deploying virtual machines into an Azure subscription would be considered an IaaS service.

Reference 6: IAAS

Question 7: Select the concept that is defined as ensuring that servers are available if a single data center goes offline.

A. Scalability

B. Fault tolerance

C. Elasticity

D. Agility

Answer 7:

B

Notes 7:

Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of one or more of its components. In Azure, it refers to ensuring that a portion of the production systems remain available online (via a failover cluster, availability set, or availability zone) if a subset of the system components (or an entire data center) goes offline.

Reference 7: Fault Tolerance

Question 8: In regards to comparing Public Cloud and Private Cloud, which of these best describe the characteristics of a Public Cloud?

A. No-upfront costs

B. More control over the security

C. Less reliability

D. Less maintenance

Answer 8:

A and D

Notes 8

The public cloud provides a pay-as-you-go pricing model which can lead to lower costs than those in private cloud solutions where capital expenditures are high.

The public cloud provides agility to provision and de-provision resources quickly with far less maintenance than that of private cloud solutions.

Reference 8: Pay as you go

Question 9: Which of the following are considered capital expenditures (CapEx)?

A. Storage area network

B. Cloud-based virtual machine

C. Office 365 licenses

D. Hyper-V host server

Answer 9:

A and D

Notes 9:

Storage costs are typically considered CapEx and include storage hardware components and the cost of supporting them. Depending on the application and level of fault tolerance, centralized storage can be expensive.

Server costs are considered CapEx and include all server hardware components and the cost of supporting them. When purchasing servers, make sure to design for fault tolerance and redundancy (e.g., server clustering, redundant power supplies, and uninterruptible power supplies). When a server needs to be replaced or added to a data center, you need to pay for the computer. This can affect your immediate cash flow because you must pay for the server up front.

Reference 9: Storage area network; Hyper-V host server

Question 10: You are in the process of migrating your existing on-premises SQL databases to Azure. You will migrate them to Azure SQL databases, as opposed to deploying SQL database servers in Azure. What kind of cloud service would this be considered?

A. Software-as-a-Service (SaaS)

B. Platform-as-a-Service (PaaS)

C. Serverless

D. Infrastructure-as-a-Service (IaaS)

Answer 10:

B

Notes 10:

Platform as a service (PaaS) is a complete development and deployment environment in the cloud with resources that enable you to deliver everything from simple cloud-based apps to sophisticated, cloud-enabled enterprise applications. An Azure SQL instance would be considered a PaaS service.

Reference 10: PAAS

Question 11: Which of the following statements are true for IaaS cloud services?

A. The client is responsible for purchasing all Operating System (OS) host licensing.

B. Services can be scaled automatically to support system load.

C. The client has complete control over the host operating system.

D. The client is responsible for all guest OS and application updates.

Answer 11:

B and D

Notes 11:

IaaS host services are scaled automatically to combat increased system load and scaled back during periods of inactivity.

The cloud service provider performs all underlying hardware, OS, and middleware updates. The client performs all guest OS and application updates.

Question 12: Which of the following tools can be used to manage Azure resources on a Google Chromebook?

A. Azure portal

B. PowerShell

C. Azure Cloud Shell

D. Azure CLI

Answer 12:

A and C

Notes 12:

You can run the Azure portal in all modern browsers on desktop and tablet devices.

Azure Cloud Shell is an interactive, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work. Linux users can opt for a Bash experience, while Windows users can opt for PowerShell.

Reference 12: Azure Portal; Azure Cloud Shell

Question 13: Which Azure service can provide big data analysis for machine learning?

A. Azure App Service

B. Azure WebJobs

C. Application Insights

D. Azure Databricks

Answer 13:

D

Notes 13:

Azure Databricks is an Apache Spark-based analytics platform optimized for the Microsoft Azure cloud services platform. Databricks enables collaboration between data scientists, data engineers, and business analysts.

Reference 13: Azure Databricks

Question 14: You need to create an Azure storage solution that will store messages created by an Azure web role. The messages will then be processed by an Azure worker role. What type of storage solution should you create?

A. A Queue service in a storage account

B. A virtual machine data disk

C. A File service in a storage account

D. A Blob service in a storage account

Answer 14:

A

Notes 14:

Azure Queue storage is a service for storing large numbers of messages that can be accessed from anywhere in the world via authenticated calls using HTTP or HTTPS.
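
As a one-line sketch, such a queue could be created with the Azure CLI (the queue and storage account names are illustrative, and you are assumed to be authenticated):

az storage queue create --name web-to-worker --account-name mystorageaccount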

Reference: Azure Queue storage

Question 15: You have an on-premises application that sends email notifications automatically based on a rule. You plan to migrate the application to Azure. You need to recommend a computing solution for the application that should minimize costs by incurring charges only when it is executed.

Which Azure solution is best for this type of application?

A. Logic App

B. A web app

C. Service Bus App

D. IaaS web server in Azure

Answer 15:

A

Notes 15:

Azure Logic Apps is a cloud service that helps you automate and orchestrate tasks, business processes, and workflows when you need to integrate apps, data, systems, and services across enterprises or organizations. Logic Apps simplifies how you design and build scalable solutions for app integration, data integration, system integration, enterprise application integration (EAI), and business-to-business (B2B) communication, whether in the cloud, on-premises, or both.

For example, here are just a few workloads you can automate with logic apps:

  • Process and route orders across on-premises systems and cloud services.
  • Send email notifications with Office 365 when events happen in various systems, apps, and services.
  • Move uploaded files from an SFTP or FTP server to Azure Storage.
  • Monitor tweets for a specific subject, analyze the sentiment, and create alerts or tasks for items that need review.

For new logic apps that run in the public or “global” Azure Logic Apps service, you pay only for what you use. These logic apps use a consumption-based plan and pricing model.

Reference 15: Logic App; Logic App Pricing

Question 16: You are the Systems Administrator for a local university. You are deploying several sets of systems that will be used for research and development teams. Each set of systems will be uniform in nature, containing the same number and type of Azure resources.

What should you recommend to automate the creation of these Azure resources?

A. Azure Resource Manager templates

B. Multiple Azure subscriptions

C. Management groups

D. Virtual machine scale sets

Answer 16:

A

Notes 16:

An Azure Resource Manager template is the framework by which resources are created. They can be used to define and automate the creation of similar resources.
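
As a minimal sketch, the same template could then be deployed once per research set with the Azure CLI (the resource group, template file, and parameter names are assumptions):

az deployment group create --resource-group RG-Research --template-file research-set.json --parameters teamName=teamA vmCount=4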

Reference 16: ARM Templates

Question 17: You are deploying a pair of Azure virtual machines. You want to ensure that the application will remain available in the event of a complete data center failure. What Azure technology will help most in this task?

A. Locally redundant storage

B. Zone Redundant Storage

C. Availability zone

D. Availability set

Answer 17:

C

Notes 17:

Deploying virtual machines across two or more availability zones places them in different physical locations within an Azure region. This configuration ensures that a subset of the virtual machines remains available in the event of hardware failure, an OS update, or a complete data center outage, and it carries a 99.99% SLA.

Question 18: Which of the following database solutions has the ability to add data concurrently from multiple regions simultaneously?

A. SQL managed instances

B. Cosmos DB

C. SQL Data Warehouses

D. Azure SQL Databases

Answer 18:

B

Notes 18:

Azure Cosmos DB is Microsoft's globally distributed, multi-model database service. Cosmos DB elastically and independently scales throughput and storage across any number of Azure regions worldwide.

Reference: Azure Cosmo DB

Question 19: Which Azure service can host your web apps without you having to manage underlying infrastructure?

A. Azure App Service

B. Azure WebJobs

C. Azure Databricks

D. Application Insights

Answer 19:

A

Notes 19:

Azure App Service enables you to build and host web apps, mobile back-ends, and RESTful APIs in the programming language of your choice without managing infrastructure.

Reference 19: Azure App Services

Question 20: Which of the following components can be used to load balance traffic to web applications, such as Azure App Service web apps using layer 7 of the OSI model?

A. Virtual Network

B. Virtual Network Gateway

C. Route table

D. Load Balancer

E. Application Gateway

Answer 20:

E

Notes 20:

Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Traditional load balancers operate at the transport layer (OSI layer 4 — TCP and UDP) and route traffic based on source IP address and port to a destination IP address and port. Application Gateway, by contrast, operates at the application layer (OSI layer 7) and can make routing decisions based on attributes of the HTTP request, such as the URL path or host header.

Reference 20: Application Gateway

Question 21: Which Azure service can help you collect, analyze, and act on telemetry from your cloud and on-premises environments?

A. Azure App Service

B. Azure Monitor

C. Azure Analyzer

D. Azure WebJobs

Answer 21:

B

Notes 21:

Azure Monitor is a service that can help you understand how your applications are performing and proactively identify issues affecting them and the resources they depend on.

Reference 21: Azure Monitor

II- Azure workloads, Security, Privacy, Compliance, and Trust

Question 22: Which of the following components are required to establish communication between on-premises resources and resources in Azure?

A. Virtual Network

B. VNet peer

C. Route tables

D. Virtual network gateway

Answer 22:

A and D

Notes 22:

To connect on-premises resources to Azure, you need a virtual network in Azure and a virtual network gateway deployed into it; the gateway terminates the VPN (or ExpressRoute) connection from your on-premises network.

Question 23: Which Azure service should you use to correlate metrics and logs from multiple resources into a centralized repository?

A. Azure Event Grid

B. Azure Event Hubs

C. Azure SQL Data Warehouse

D. Azure Monitor

Answer 23:

D

Notes 23:

Log data collected by Azure Monitor (formerly Azure Log Analytics) is stored in a Log Analytics workspace, which is based on Azure Data Explorer. It collects telemetry from a variety of sources and uses the Kusto query language used by Data Explorer to retrieve and analyze data.

Reference 23: Azure Monitor – Log Query Overview

Question 24: You are the Azure Administrator for Radio Gaga, LTD. You have a resource group named RG-RG and need to ensure no other administrators can create virtual networks in this resource group. What can you implement to accomplish this?

A. Access Control (IAM)

B. Azure policy

C. Locks

D. Properties

Answer 24:

B

Notes 24:

Azure Policy is a service in Azure used to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements.

For example, you can have the policy to allow only a certain SKU size of virtual machines in your environment. Once this policy is implemented, new and existing resources are evaluated for compliance. With the right type of policy, existing resources can be brought into compliance.
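
As a rough sketch, assigning a built-in “not allowed resource types” style policy at the resource group scope could look like this (the assignment name is illustrative and the policy definition ID is a placeholder to look up):

az policy assignment create --name deny-vnets --scope $(az group show --name RG-RG --query id --output tsv) --policy <not-allowed-resource-types-definition-id> --params '{"listOfResourceTypesNotAllowed":{"value":["Microsoft.Network/virtualNetworks"]}}'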

Reference 24: Azure Policy

Question 25: Which of the following is the organization that defines standards used by the United States government?

A. NIST

B. ITIL

C. GDPR

D. ISO

Answer 25:

A

Notes 25:

The National Institute of Standards and Technology (NIST) promotes and maintains measurement standards and guidance to help organizations assess risk. It defines the standards that are used by the United States government as well as the US Department of Defense (DoD).

Reference 25: NIST

Question 26: You have an Azure virtual network named VNet in a resource group named Bob-RG. You assign an Azure policy specifying virtual networks are not an allowed resource type in Bob-RG. What happens to VNet once this policy is applied?

A. VNet is moved to a new resource group.

B. Bob-RG is deleted automatically

C. VNet continues to function normally, but no new subnets can be added.

D. VNet is deleted automatically.

Answer 26:

C

Notes 26:

Azure policies that determine the allowed types of resources can only prevent non-compliant resources from being created. Existing non-compliant resources are not affected. However, the policy is flagged as non-compliant so that the administrator can determine action (if any).

Reference: Here

Question 27: Which Azure tool allows you to view which user turned off a specific virtual machine during the last 14 days?

A. Azure Event Hubs

B. Azure Activity Log

C. Azure Service Health

D. Azure Monitor

Answer 27:

B

Notes 27:

The Azure Activity Log is a subscription log that provides insight into subscription-level events that have occurred in Azure. This includes a range of data, from Azure Resource Manager operational data to updates on Service Health events. Events such as starting and stopping of virtual machines can be found here.
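
As an illustrative sketch, the same information can be pulled from the Azure CLI (the 14-day offset and the JMESPath filter on the deallocate operation are assumptions about what you would look for):

az monitor activity-log list --offset 14d --query "[?contains(operationName.value, 'virtualMachines/deallocate')].{caller:caller, time:eventTimestamp}"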

Reference 27: Here

Question 28: What kind of information does Azure Information Protection protect?

A. Email messages

B. Office documents

C. Azure Blob Storage

D. Virtual hard disks

E. PDF documents

Answer 28:

A, B, and E

Notes 28:

Azure Information Protection (sometimes referred to as AIP) is a cloud-based solution that helps an organization classify and, optionally, protect its documents and emails by applying labels. Labels can be applied automatically by administrators who define rules and conditions, manually by users, or a combination where users are given recommendations.

A collaboration between Microsoft and Adobe brings you a more simplified and consistent experience for PDF documents that have been classified and, optionally, protected. This collaboration provides support for Adobe Acrobat native integration with Microsoft Information Protection solutions, such as Azure Information Protection.

Question 29: Which of the following is true regarding HDInsight?

A. It is an on-demand analytics job service that simplifies big data. Instead of deploying, configuring, and tuning hardware, you write queries to transform your data and extract valuable insights.

B. It is a managed relational cloud database service.

C. It is a cloud-based service that is a limitless analytics service that brings together enterprise data warehousing and Big Data analytics.

D. It is an open-source framework for the distributed processing and analysis of big datasets in clusters.

Answer 29:

D

Notes 29:

Azure HDInsight is a managed, full-spectrum, open-source analytics service for enterprises. HDInsight is a cloud service that makes it easy, fast, and cost-effective to process massive amounts of data. HDInsight also supports a broad range of scenarios, like extract, transform, and load (ETL); data warehousing; machine learning; and IoT.

Question 30: Which of the following website provides information on Azure product updates, roadmaps, and announcements?

A. https://preview.portal.azure.com/

B. https://azure.microsoft.com/en-us/updates/

C. https://portal.azure.com/

D. https://azure.microsoft.com/en-us/services/updates/

Answer 30:

B

Notes 30:

Learn about important Azure product updates, roadmap, and announcements here

Question 31: Azure virtual machines can be moved between which of the following Azure resources?

A. Subscriptions

B. Regions

C. Availability Sets

D. Resource Groups

E. Availability Zones

Answer 31:

A, B, D, E

Notes 31:

Azure virtual machines can be moved between subscriptions with either Azure PowerShell or the Azure portal. Using Azure Site Recovery, you can migrate Azure VMs to other regions. Azure virtual machines can be moved between resource groups with either Azure PowerShell or the Azure portal. Using Azure Site Recovery, you can migrate Azure VMs to other Availability Zones.

III- Azure Pricing and Support

Question 32: Which Azure support plans can open support cases?

A. Professional Direct

B. Basic

C. Standard

D. Developer

E. Premier

Answer 32:

A, C, D, E

Notes 32:

All paid Azure support plans (Developer, Standard, Professional Direct, and Premier) allow you to open support cases; the free Basic plan does not include technical support.

Question 33: For any Single Instance virtual machine using premium SSD or Ultra Disk for all Operating System Disks and Data Disks, what is the SLA guarantee for virtual machine connectivity?

A. 99.9%

B. 99.99%

C. 99.95%

D. There is no SLA guarantee

Answer 33:

A

Notes 33:

A single-instance virtual machine using Premium SSD or Ultra Disk for all Operating System Disks and Data Disks is guaranteed virtual machine connectivity of at least 99.9%.

Question 34: Which of the following Azure services is a cloud-based service that leverages massively parallel processing (MPP) to quickly run complex queries across petabytes of data in a relational database?

A. Azure SQL database

B. Azure HDInsight

C. Azure SQL Data Warehouse (Azure Synapse)

D. Azure Data Lake Analytics

Answer 34:

C

Notes 34:

Azure SQL Data Warehouse (Azure Synapse) is a cloud-based service that leverages massively parallel processing (MPP) to quickly run complex queries across petabytes of data in a relational database.

Reference 34: Azure SQL Data Warehouse (Azure Synapse)

Question 35: You have an Azure subscription that contains the following unused resources:

Name  | Type              | Configuration
nic0  | Network Interface | 10.0.0.6
pip1  | Public IP         | Static
lb1   | Load Balancer     | Standard, 5 rules configured
VNet2 | Virtual Network   | 10.1.0.0/16
VM3   | Virtual Machine   | Stopped (Deallocated)

Based on this information, which of the following unused resources should you remove to lower cost?

A. lb1

B. VNet2

C. pip1

D. nic0

E. VM3

Answer 35:

A and C

Notes 35:

The pricing for Standard Load Balancer is based on the number of rules configured (load balancer rules and NAT rules) and data processed. However, there is no hourly charge for the Standard Load Balancer itself when no rules are configured. Since this load balancer contains rules, it should be removed to save money.

In the ARM deployment model, there is no charge for dynamic public IP addresses when the associated virtual machine is “stopped-deallocated”. However, you're charged for a static public IP address irrespective of the associated resource (unless it is part of the first five static ones in the region). This resource should be removed.
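
For illustration, the two billable unused resources could be removed with the Azure CLI (the resource group name is an assumption):

az network lb delete --resource-group MyRG --name lb1
az network public-ip delete --resource-group MyRG --name pip1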

Reference 35: IP addresses

Question 36: Which of the following recommendations is provided by Azure Advisor?

A. Azure resource costs

B. Azure virtual machine IP configuration

C. Storage performance and reliability

D. Azure App Service security

Answer 36:

A, C, and D

Notes 36:

Azure Advisor analyzes your configuration and usage telemetry and offers recommendations across cost, security, reliability, performance, and operational excellence. That covers resource costs, storage performance and reliability, and App Service security, but not the IP configuration of individual virtual machines.

Question 37: You are planning on purchasing Azure AD Premium for your Azure subscription. What is the SLA for this product?

A. 99.99%

B. 99.9%

C. 99.95%

D. Azure AD products are not covered under an SLA.

Answer 37:

B

Notes 37:

Per the Azure documentation: We guarantee at least 99.9% availability of the Azure Active Directory Basic and Premium services. The services are considered available in the following scenarios:

Users are able to log in to the service, log in to the Access Panel, access applications on the Access Panel, and reset passwords. IT administrators are able to create, read, write, and delete entries in the directory, and provision or de-provision users to applications in the directory.

No SLA is provided for the Free tier of Azure Active Directory.

Question 38: Which of the following Azure support plans offer Severity “A” and “B” cases to be opened?

A. Premier

B. Standard

C. Developer

D. Professional Direct

E. Basic

Answer 38:

A, B, and D

Notes 38:

Severity A and B cases can be opened on the Standard, Professional Direct, and Premier plans. The Developer plan is limited to Severity C (minimal business impact) cases, and the Basic plan cannot open technical support cases at all.

Question 39: This question requires that you evaluate the underlined text to determine if it is correct.
When you are implementing a software as a service (SaaS) solution, you are responsible for configuring high availability.
Instructions: Review the underlined text. If it makes the statement correct, select “No change is needed”. If the statement is incorrect, select the answer choice that makes the statement correct.

A. No change is needed.

B. defining scalability rules

C. installing the SaaS solution

D. configuring the SaaS solution

Answer 39:

D

Notes 39:

With software as a service, the provider is responsible for the availability of the solution; the customer's responsibility is limited to configuring the SaaS solution itself.

Question 40: You have an on-premises network that contains several servers.
You plan to migrate all the servers to Azure.
You need to recommend a solution to ensure that some of the servers are available if a single Azure data center goes offline for an extended period.
What should you include in the recommendation?

A. fault tolerance

B. elasticity

C. scalability

D. low latency

Answer 40:

A

Notes 40:

Fault tolerance is what keeps a portion of the migrated servers available if a single Azure data center goes offline for an extended period.

Question 41: This question requires that you evaluate the underlined text to determine if it is correct.
When planning to migrate a public website to Azure, you must plan to pay monthly usage costs.
Instructions: Review the underlined text. If it makes the statement correct, select “No change is needed”. If the statement is incorrect, select the answer choice that makes the statement correct.

A. No change is needed

B. Deploy a VPN

C. pay to transfer all the website data to Azure

D. reduce the number of connections to the website

Answer 41:

A

Notes 41:

No change is needed

Question 42: You have an on-premises network that contains 100 servers.
You need to recommend a solution that provides additional resources to your users. The solution must minimize capital and operational expenditure costs.
What should you include in the recommendation?

A. a complete migration to the public cloud

B. an additional data center

C. a private cloud

D. a hybrid cloud

Answer 42:

D

Notes 42:

A hybrid cloud lets you keep using your existing on-premises servers while provisioning additional resources from the public cloud on a pay-as-you-go basis, which minimizes both capital and operational expenditure. A private cloud or an additional data center would require significant capital expenditure up front.

Question 43: Which Azure offering refers to a set of development, testing, and automation tools?

A. Azure Cognitive Services

B. Azure Boards

C. Azure DevOps

D. GitHub

Answer 43:

C

Notes: Azure DevOps Services provides development collaboration tools, including high-performance pipelines, free private Git repositories, configurable Kanban boards, and extensive automated and continuous testing capabilities.

Question 44: Which of the following are available in the Azure Marketplace?

A. Virtual machine images

B. SaaS applications

C. Solution templates

D. Sample application code

Answer 44:

A, B, and C

Notes: Virtual machine images are available in the Azure Marketplace. Images are available for Windows and Linux. Stock operating system images, as well as custom images with pre-installed applications, are also available.

SaaS applications make up the majority of the Azure Marketplace. One click allows you to install and use many popular applications — such as Office365, Salesforce, Zoom, and others — seamlessly with your Azure subscription.

Solution templates allow you to deploy entire IaaS solutions with a simple click. Examples include complete SharePoint farms as well as SQL Always Available clusters.

Question 45: Which of the following regulates data privacy in the European Union (EU)?

A. ITIL

B. GDPR

C. ISO

D. NIST

Answer 45:

B

Notes: The General Data Protection Regulation (EU) 2016/679 (“GDPR”) is a regulation in EU law on data protection and privacy for all individuals within the European Union (EU) and the European Economic Area (EEA). It also addresses the export of personal data outside the EU and EEA areas. The GDPR aims primarily to give control to individuals over their personal data and to simplify the regulatory environment for international business by unifying the regulation within the EU.

Question 46: You currently have two Azure Pay-As-You-Go subscriptions. You would like to transfer billing ownership of the subscriptions to another account while moving the subscriptions into the other accounts Azure AD tenant. How can you accomplish this?

A. Open a support ticket by contacting Microsoft Azure Support

B. In the Azure Portal, under Azure Subscriptions click Change Directory

C. Using Azure CLI, run the az account merge command

D. In the Azure Portal, under Cost Management + Billing under Azure Subscriptions

Answer 46:

D

Notes: In the Azure Portal, under Cost Management + Billing > Azure Subscriptions, we can transfer billing ownership by clicking the context menu for the subscription. We then select “Transfer billing ownership” and, as part of the process, provide the email associated with the other account; we can also choose to move the subscription into the Azure AD tenant of the other account. This moves the subscription into the default Azure AD tenant of the destination account.

Azure Documentation

Question 47: Where can you open a new Azure support request?

A. Knowledge Center

B. support.microsoft.com

C. Azure Portal

D. Security Center

Answer 47:

C

Notes: A support request can only be opened via the Azure Portal.

Question 48: You attempt to create several managed disks in your Azure environment. In the Portal, you receive a message that you must increase your Azure subscription limits. What should you do to increase the limits?

A. Modify an Azure policy.

B. Use Azure PowerShell to create the new managed disks.

C. Create a new support request.

D. Upgrade your support plan.

Answer 48:

C

Notes: Subscription and service limits (quotas) are raised by opening a new support request in the Azure Portal; a quota increase does not require upgrading your support plan.

IV- Microsoft Azure Question and Answers

 

1- What is the temporary disk in an Azure VM?

In Azure, every VM – regardless of whether it is Linux or Windows – automatically gets a temporary disk assigned. This temporary disk is located on the physical server (the hypervisor) where the Azure VM is hosted and is non-persistent. Disks used by the operating system, and any additionally added data disks, are persistent disks stored in Azure Storage.

Azure VMs can be moved from their current host to a new host at any time due to maintenance, hardware failures, or other reasons. In such an event, the data on the temporary storage is not preserved or moved to the new host. Apart from hardware failures, there are many other reasons data on the temporary disk will be lost:

  • Resizing of the VM
  • Restarting of the VM
  • Moving from one host to another
  • Updating/upgrading of host

Really, the temporary disk should never be used for data that has to be persistent. To avoid misconfiguration, the disk also has the drive label “Temporary Storage” and includes a text file “DATALOSS_WARNING_README.txt”. Read more here…

2- Does Azure VM include a Windows license?

It depends on the virtual machine type we talk about. Some Azure virtual machines include a Windows operating system license in their price (some even include a SQL Server). Some do not, however, there is an “Azure Hybrid Use Benefit” in certain Microsoft licensing programs, where basically the customer can use its previously acquired software licenses on Azure virtual machines (“bring you own license”). Also, there are Azure virtual machines available with different Linux distributions (both commercial and community), Windows Server license is obviously not included in these. Continue reading here

Please consult the Windows virtual machine pricing guide of Azure for details: Pricing – Windows Virtual Machines | Microsoft Azure

Does Azure VM cost include an OS disk?

Yes, the disk is charged too. Disk, network, license (if a Windows Server instance), and processor/RAM are all taken into consideration.
 
Why don’t I see the N-Series (GPU-enabled) VMs in my Azure VM sizes list (I have a BizSpark subscription)?

It has nothing to do with BizSpark. N-Series VMs have been generally available since December 1, 2016 (Azure N-Series: General availability on December 1), but only in select Azure datacenter regions. Please consult the Azure Products by Region | Microsoft Azure website for regional availability.
 
What is a data disk in Azure VM?
 
What are things to look out for when choosing a location for your Microsoft Azure VM?
 
The main argument in placing a cloud VM is performance, and performance in the cloud world means cost: the better the performance you need, the more it is going to cost you. The other side of that is the faster you can solve the problem you are trying to solve. The business question to evaluate in the placement of a VM is loosely these two things: Does increasing the performance of the application provide the overall answers required faster? Are there things you can do to your application that will allow it to better take advantage of cloud capabilities?
 
 
How do I monitor an Azure VM?

Please review Azure Monitor, the built-in monitoring service in Azure. Azure Monitor provides metrics and logs for many services in Azure, including VMs; see the product documentation “Get started with Azure Monitor” for a quick overview. Note: as of April 2017, Cloud Services metrics are served using an older telemetry pipeline, but that is in the process of being migrated to the Azure Monitor pipeline. You will soon be able to consume Cloud Services metrics via Azure Monitor, the same way you can for Azure VMs, Web Apps, or Azure SQL DBs.
 
 
How does Azure Backup protect an Azure VM?

Azure Backup installs a backup extension to the Azure VM agent that is running on the VM. This extension backs up the entire VM. You can back up specific files and folders on the Azure VM by running the MARS agent.
 
 
How do I choose an Azure VM for a database workload?

Make sure you place the VMs in an availability set. Before selecting a VM, collect the following inputs, either from the application team or from the performance monitoring team: 1. the maximum IOPS required; 2. the maximum size of the DB for at least the next two years. Based on these inputs, select the VM size and the required storage tier – Standard or Premium. For high performance, you can use disk striping if you require more than 5,000 IOPS. You can also configure Backup to URL.
 
Could I connect to a Linux Azure VM using SSH and a private IP through PuTTY?

Absolutely. You can check your VM’s public IP address on the Azure portal and SSH into it with the SSH client of your choice. A private IP allows Azure VMs to communicate with other resources in a virtual network or an on-premises network through a VPN or ExpressRoute. So you can SSH into an Azure VM using the private IP from within the same virtual network, or via VPN/ExpressRoute.
 
 
Can I manage an Azure VM from the browser?

The Azure portal now has a feature called Cloud Shell. This gives you a command-line interface in the browser, where you can make authenticated access to Azure resources, including your virtual machines. Both Bash and PowerShell are available, and you can also save your frequently used scripts for later re-use. More details here: Azure Cloud Shell – Browser-Based Command Line | Microsoft Azure.
 
How do I resize a Linux VM with the Azure CLI? (See: How to resize a Linux VM with the Azure CLI – Azure Linux Virtual Machines.)

    az vm resize --resource-group mygroup --name mytestvm --size Standard_D4s_v3

This call triggers an instance restart in the background if needed.
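If the requested size is not available on the hardware cluster the VM currently runs on, a deallocate–resize–start sequence is sometimes required. A minimal sketch, reusing the same hypothetical resource group and VM names as above:

    # Deallocating first lets the VM be placed on different hardware.
    az vm deallocate --resource-group mygroup --name mytestvm
    az vm resize --resource-group mygroup --name mytestvm --size Standard_D4s_v3
    az vm start --resource-group mygroup --name mytestvm

Keep in mind that deallocating releases a dynamic public IP address and, as described earlier, erases the temporary disk.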
 
 
How do I reset the password of an Azure Linux VM?

This document shows how a Linux VM password can be reset: “Reset Linux VM password and SSH key from the CLI”. There is also an option in the Azure portal (https://portal.azure.com): go to the details of the virtual machine whose password you wish to reset and look for “Reset password” at the bottom left.
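The same reset can be scripted with the Azure CLI. A minimal sketch, assuming a hypothetical resource group “mygroup”, VM “mytestvm”, and user “azureuser”:

    # Resets the password for an existing user on the VM.
    az vm user update --resource-group mygroup --name mytestvm \
      --username azureuser --password 'NewP@ssw0rd123'

The same command with --ssh-key-value instead of --password replaces the user’s SSH public key.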
 
 
How do I RDP into an Azure Windows VM from Linux?

It depends on what OS you are using – let’s say Linux. You could use properJavaRDP, for which you will need a Java VM installed. I’ve used this with success, though the screen refresh was not great.
 
Monitor a VM in the Azure Management Portal:
  1. Step 1 − Log in to the Azure Management Portal.
  2. Step 2 − Go to Virtual Machines.
  3. Step 3 − Select the virtual machine you want to monitor.
  4. Step 4 − Select Monitor from the top menu.
 
Accessing an Azure VM port from outside the VM:
  1. Open the VM instance, run the server on port 80, and check that localhost is reachable in the local browser.
  2. Add port 80 to the inbound rules of the network security group.
  3. Turn off all three types of firewall from within the VM (Windows).
How do you see the memory usage of an Azure VM?
Steps For Existing Windows VMs:
  1. Click on a Windows VM.
  2. Select Diagnostics settings from the Azure UI blade.
  3. Under the Overview tab, pick a storage account: select your storage account so that the metrics can be stored. Click “Enable guest-level monitoring” and wait for the process to complete.
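Once monitoring is enabled, metrics can also be queried from the command line. A minimal sketch, assuming hypothetical subscription, resource group, and VM names, and assuming the VM exposes the “Available Memory Bytes” platform metric (availability varies by platform and agent version):

    # Query a memory metric for the VM via Azure Monitor.
    az monitor metrics list \
      --resource /subscriptions/<sub-id>/resourceGroups/mygroup/providers/Microsoft.Compute/virtualMachines/mytestvm \
      --metric "Available Memory Bytes"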
 
 
How do I find an unattached disk in Azure?
  1. From the Azure portal, on the left menu, select All services.
  2. In the All services search box, enter “disks” and then select Disks to display the list of available disks.
  3. Select the disk that you would like to use. …
  4. In the Overview page, ensure that DISK STATE is listed as Unattached.
 
 
Configure an App Service Certificate for Azure Virtual Machines:
1. Step 1: Create an Azure Virtual Machine with the IIS web server.
2. Step 2: Add a custom domain to your virtual machine.
3. Step 3: Place an SSL certificate order.
4. Step 4: Store the certificate in Azure Key Vault.
5. Step 5: Verify the domain ownership.
6. Step 6: Assign the certificate to the virtual machine.
 
If I change the size of my Azure VM while running a script, will that stop the execution of the script? (Currently using a Linux VM).
 
Changing the size of an Azure VM (scaling up or down) is only possible with a reboot, which will most definitely stop the execution of your script.
 
How do I make an Azure VM snapshot?
 
  • On the Azure portal, select Create a resource.
  • Search for and select Snapshot.
  • In the Snapshot window, select Create. …
  • Enter a Name for the snapshot.
  • Select an existing Resource group or enter the name of a new one.
  • Select an Azure datacenter Location.
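The same snapshot can be taken with the Azure CLI. A minimal sketch, assuming a hypothetical resource group “mygroup” and VM “mytestvm”:

    # Look up the VM's OS disk ID, then snapshot it.
    osDiskId=$(az vm show -g mygroup -n mytestvm \
      --query "storageProfile.osDisk.managedDisk.id" -o tsv)
    az snapshot create -g mygroup -n mytestvm-osdisk-snapshot --source "$osDiskId"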
 
Can we restrict a developer (on Microsoft Azure VM) to not upload a source code on any website or email?
 
You can restrict a developer’s access by following the steps below: 1. Go to the desired VM instance in the Azure portal. 2. Select the “Access control (IAM)” option from the left pane. 3. Select the Role assignment option under + Add. 4. You can now assign one of the available pre-defined roles to the user. 5. Give the developer a restrictive role such as Reader (rather than Contributor); with read-only access they will not be able to modify or upload files to the site. Note that Azure RBAC only governs access to Azure resources; fully preventing exfiltration of source code via external websites or email additionally requires network controls (for example, NSG outbound rules) or a data loss prevention solution.
 
 
 
Why do Azure prices differ between regions?

Regional prices reflect the pricing conditions in each region: the cost of renting the physical space, hardware prices from vendors, the cost of man-hours for IT specialists and other Azure datacenter workers in that region, and so on. Unfortunately, I can’t find any public reference for that information; I’m speaking personally as someone who works with Azure every day and is in contact with Microsoft teams.
 
Can we spin up a Windows Azure VM programmatically from a PHP page? We can assume that we have valid Microsoft credentials.

The REST Management API is the one you want to go for. Authentication is certificate-based: you’ll have to upload a management certificate using the Windows Azure portal in order for your PHP application to authenticate. A good starting point on how to use the Windows Azure REST APIs for management can be found in “How to use Windows Azure service management APIs (PHP)”. Once you have that up and running, use the Operations on Virtual Machines API set to manipulate your virtual machine deployments.
 
How do you reduce the size of my Azure VM disk?
 
Below are the main steps: 1. Pick the best possible target disk size. 2. Shrink the partition inside the VM. 3. Export the managed disk to a VHD. 4. Compress the exported VHD. 5. Create a new managed disk from the VHD. 6. Create a new VM from the newly created disk. 7. Finally, clean up all the old resources.
 
 
 

Azure Services Cheat Sheet


Azure Virtual Machine vs Azure Web App Cheat Sheet

Azure Containers vs Azure Kubernetes Cheat Sheet

Use the following flowchart to select a candidate compute service.

Decision tree for Azure compute services

Definitions:

  • “Lift and shift” is a strategy for migrating a workload to the cloud without redesigning the application or making code changes. Also called rehosting. For more information, see Azure migration center.
  • Cloud optimized is a strategy for migrating to the cloud by refactoring an application to take advantage of cloud-native features and capabilities.
  • App Service. A managed service for hosting web apps, mobile app back ends, RESTful APIs, or automated business processes.
  • Azure Kubernetes Service (AKS). A managed Kubernetes service for running containerized applications.
  • Batch. A managed service for running large-scale parallel and high-performance computing (HPC) applications.
  • Container Instances. The fastest and simplest way to run a container in Azure, without having to provision any virtual machines and without having to adopt a higher-level service.
  • Functions. A managed FaaS service.
  • Service Fabric. A distributed systems platform that can run in many environments, including Azure or on premises.
  • Virtual machines. Deploy and manage VMs inside an Azure virtual network.
  • Infrastructure-as-a-Service (IaaS) lets you provision individual VMs along with the associated networking and storage components. Then you deploy whatever software and applications you want onto those VMs. This model is the closest to a traditional on-premises environment, except that Microsoft manages the infrastructure. You still manage the individual VMs.
  • Platform-as-a-Service (PaaS) provides a managed hosting environment, where you can deploy your application without needing to manage VMs or networking resources. Azure App Service is a PaaS service.
  • Functions-as-a-Service (FaaS) goes even further in removing the need to worry about the hosting environment. In a FaaS model, you simply deploy your code and the service automatically runs it. Azure Functions are a FaaS service.

There is a spectrum from IaaS to pure PaaS. For example, Azure VMs can autoscale by using virtual machine scale sets. This automatic scaling capability isn’t strictly PaaS, but it’s the type of management feature found in PaaS services.
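As an illustration of that spectrum, autoscaling IaaS VMs takes only a couple of commands. A minimal sketch with the Azure CLI, using hypothetical resource names (the image alias may vary by CLI version, e.g. UbuntuLTS on older versions):

    # Create a scale set of two Ubuntu instances, then attach an autoscale profile.
    az vmss create -g mygroup -n myscaleset --image Ubuntu2204 --instance-count 2
    az monitor autoscale create -g mygroup --resource myscaleset \
      --resource-type Microsoft.Compute/virtualMachineScaleSets \
      --min-count 2 --max-count 5 --count 2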

Azure Data Store:

Use the following flowchart to select a candidate data store.

Data store decision tree

Top-paying Cloud certifications:

  1. Google Certified Professional Cloud Architect — $175,761/year
  2. AWS Certified Solutions Architect – Associate — $149,446/year
  3. Azure/Microsoft Cloud Solution Architect – $141,748/yr
  4. Google Cloud Associate Engineer – $145,769/yr
  5. AWS Certified Cloud Practitioner — $131,465/year
  6. Microsoft Certified: Azure Fundamentals — $126,653/year
  7. Microsoft Certified: Azure Administrator Associate — $125,993/year


Laptop 101 – Pick a laptop for me – MacOS or Windows or Chrome – Apple MacBook or Google ChromeBook or Microsoft Surface or Dell or HP or Lenovo


The Cloud is the future: Get Certified now.

More than 215,000,000 computers will be sold in 2020. As of 2020, 75 percent of Americans owned a desktop or laptop computer. Among all households, about 78 percent had a desktop or laptop, 75 percent had a handheld computer such as a smartphone or other handheld wireless device, and 77 percent had a broadband Internet subscription.

In this Laptop 101 blog, we are going to help you understand laptop specs: pick a laptop for you, pick a MacBook for you, pick a Chromebook for you, pick a Windows Surface tablet/laptop for you, and save you money on your laptop.

I- Laptop Specs

The technical specifications of laptops usually fall into the following categories: processor, memory, graphics, screen size, and storage. Depending on your needs, you might be either overpaying for something you don’t need or not budgeting for something you do need. Let’s break it down and find out.

When you’re choosing a new laptop computer, it’s important to understand the specs and features you’ll see listed for each model. That way, you can be sure to choose the right laptop for your own particular needs.

Knowing which processor to go for, how much RAM you’ll need, and whether or not you require a graphics card are all questions that will have a bearing on your setup, and your budget.

What are Good Laptop Specs?

 

  • Processor – CPU – The brains of the laptop, the better the processor, the faster your computer will run. For a dependable laptop, an Intel i3 is fine, but an i5 will guarantee good speeds. Laptops with i7 chips cost a lot more, and are more suited to those running design software or games.
  • Screen – Size and resolution of screen will have a big impact on your experience. It’s best not to go smaller than a 13-inch screen, though you can live without 4K displays unless you’re a professional designer or photo-editor. Full HD resolution is fine.
  • Storage Space – The amount of space you can use to store your files. It’s best not to accept less than 256GB for a solid state drive (SSD, which helps laptops run faster), or less than 1TB for a traditional hard drive (not as fast, but more generous with the storage).
  • RAM – Used for juggling multiple applications at once. More RAM can give you a speed boost. These days, 8GB RAM is the minimum to aim for. 16GB or 32GB is only needed for high-end machines.
  • Graphics card – An additional graphics card is used for gaming and image editing. If you only need to browse the web, email and stream video, you can live without an advanced graphics card.

Below are the laptops with the best overall specs as of November 2020:

Best Overall Laptops Over $1000

MacBook Pro 2020 – $2,199
  • OS: macOS
  • CPU: Intel Core i9
  • RAM: 16GB (configurable up to 64GB)
  • Storage: 512GB SSD (configurable up to 8TB)
  • Screen: 16-inch
  • Note: The Intel Core i9 processor, with up to 8 cores and 16 threads of processing power, sustains higher performance for longer periods of time.

Surface Pro 7 (Platinum) – $2,100
  • OS: Windows 10 Pro
  • CPU: Intel Core i7
  • RAM: 16GB
  • Storage: 1TB SSD
  • Screen: 12.3-inch
  • Note: Ultra-light and versatile. At your desk, on the couch, or in the yard, get more done your way with Surface Pro 7, featuring a laptop-class Intel® Core™ processor, all-day battery, and HD cameras.

HP ZBook – $1,960
  • OS: Windows 10 Pro
  • CPU: Intel Core i9-8950HK (2.9–4.8GHz), 6 cores
  • RAM: 64GB DDR4
  • Storage: 2TB SSD
  • Screen: 15.6″ Full HD (1920×1080)
  • Note: NVIDIA Quadro P1000 (4GB GDDR5 VRAM)

Dell XPS 15 7590 – $1,999
  • OS: Windows 10 Pro
  • CPU: Intel Core i9-9980HK, 8 cores
  • RAM: 32GB
  • Storage: 1TB SSD
  • Screen: 15.6″ 4K UHD touch (3840×2160)
  • Note: NVIDIA GTX 1650

Lenovo ThinkPad P53 i9 – $3,999
  • OS: Windows 10 Pro
  • CPU: Intel i9-9880H, 8 cores
  • RAM: 64GB
  • Storage: 1TB PCIe SSD
  • Graphics: NVIDIA Quadro RTX 4000
  • Screen: 15.6″ Full HD (1920×1080)
  • Note: Fingerprint reader, WiFi, with USB 3.0 hub
 

Best Overall Laptops Under $1000

MacBook Air 2020 – $929
  • OS: macOS
  • CPU: Apple M1 chip, 8-core CPU with 4 performance cores and 4 efficiency cores
  • RAM: 8GB to 16GB
  • Storage: 256GB SSD (configurable up to 512GB SSD)
  • Screen: 13.3-inch LED-backlit display with IPS technology
  • Dimensions: 0.16–0.63 inch (0.41–1.61 cm) high, 11.97 inches (30.41 cm) wide, 8.36 inches (21.24 cm) deep; weight 2.8 pounds (1.29 kg)

2020 HP EliteBook 840 G6 – $999–$1,200
  • OS: Windows 10 Pro 64
  • CPU: Intel i5-8265U, 4 cores
  • RAM: 16GB
  • Storage: 1TB PCIe SSD
  • Screen: 14.0″ diagonal Full HD (1920×1080)
  • Note: Intel UHD 620 graphics, fingerprint reader, WiFi, Bluetooth, webcam, with hub

Dell XPS 13 – $886
  • OS: Windows 10
  • CPU: 10th Generation Intel® Core™ i5-10210U (6MB cache, up to 4.2GHz)
  • RAM: 8GB LPDDR3, 2133MHz, integrated
  • Storage: 256GB M.2 PCIe NVMe solid-state drive
  • Screen: 13.3-inch FHD (1920×1080) InfinityEdge non-touch display

Acer CB715-1W-59YQ – $745
  • OS: Chrome OS™
  • CPU: Intel® Core™ i5-8250U, quad-core 1.60GHz
  • RAM: 16GB DDR4 SDRAM
  • Storage: 64GB flash memory
  • Screen: 15.6″ Full HD (1920×1080) 16:9
  • Note: Intel® UHD Graphics 620 shared memory

Back to Top

II- Laptop Batteries


 

Three types of batteries power the laptops you’ll find in service today: nickel cadmium (NiCad), nickel metal hydride (NiMH), and lithium ion (Li-ion), with Li-ion being the most common in newer laptops. Each battery type has a different chemistry for generating a charge and, therefore, different characteristics.

Which is the best laptop battery? Top battery-life performers (minutes of battery life):

  1. Asus ExpertBook B9450F – 989
  2. LG Gram 14 – 963
  3. LG Gram 17 – 779
  4. Samsung Notebook 9 Pro – 750
Is it bad to leave your laptop plugged in all the time?
Laptops are only as good as their batteries, however, and proper care of your battery is essential to making sure it retains a long life and charge. Leaving your laptop plugged in constantly is not bad for your battery, but you will need to be careful of other factors, such as heat, to prevent your battery from damage.

Can we use a laptop without the battery?
Yes – you can use a laptop without the battery.

There is no reason why a laptop wouldn’t work just fine without the battery in it, as long as you take a few aspects into account. First of all, make sure you’re using the original power adapter that came with the laptop.

Is it OK to use laptop while charging?
In short, yes. It is perfectly fine to use your laptop while plugged in and fully charged. Laptops these days are designed to be used while plugged in, as most automatically switch to a power saving mode when running on battery only to extend usage.

How long do batteries last in laptops?

A laptop computer battery should last between two and four years, or around 1,000 full charges. The total lifetime of a battery is dependent on several factors. These factors include battery type (NiCad, NiMH, or Li-ion), how often the battery is used, and its age.

How do I know if I need a new battery for my laptop?
Once your battery reaches a low enough capacity, Windows will warn you that your battery needs to be replaced. A red “X” will appear over the battery icon. If you click the icon to display more info, you will likely see a message that reads “plugged in, not charging. Consider replacing your battery.”

Back to Top

III- Laptop CPUs – Processors

The processor, sometimes called the CPU (central processing unit), is the heart of any laptop and has the greatest impact on your productivity. A faster processor means apps load quickly, you can run multiple apps at once, and the computer won’t lag and cause slowdowns when you run processor-intensive tasks.

What laptop CPU is the best? What is the most powerful CPU? Top performers by 3DMark Physics Score (DirectX 12.00):

  1. Intel Core i9-10900K – 13944
  2. Intel Core i9-10900KF – 13878
  3. AMD Ryzen 9 5950X – 13768
  4. Intel Core i9-10850K – 13560

Is 2.6 GHz good for a laptop?

Modern laptops rarely run at 2.6GHz on all cores all the time. If yours does, expect subpar battery life but very good performance. If 2.6GHz happens to be the boost clock, performance will be objectively poor – such a laptop is probably fanless.

What is a good CPU speed?

A clock speed of 3.5 GHz to 4.0 GHz is generally considered a good clock speed for gaming but it’s more important to have good single-thread performance. This means that your CPU does a good job of understanding and completing single tasks. This is not to be confused with having a single-core processor.

What’s more important RAM or processor?

RAM is essentially the core of any computer or smartphone, and in most cases more is better. RAM is as significant as the processor: the right amount of RAM on your smartphone or computer optimizes performance and the ability to support various types of software.

Back to Top

IV- Laptop RAM – Memory

Used for juggling multiple applications at once. More RAM can give you a speed boost. These days, 8GB RAM is the minimum to aim for. 16GB or 32GB is only needed for high-end machines.
 
How much RAM does a laptop need?
For anyone looking for the bare computing essentials, 4GB of laptop RAM should be sufficient. If you want your PC to be able to flawlessly accomplish more demanding tasks at once, such as gaming, graphic design, and programming, you should have at least 8GB of laptop RAM.
 
Can you put RAM in a laptop?
Adding or upgrading RAM in a laptop does not require any computer skills, just a screwdriver. First, determine how much memory you’d like to add. See our guide to estimate the amount of computer memory you need. Another way to improve performance is to upgrade your hard disk drive to a solid-state drive.
 
Which Laptop RAM is best?
Corsair Vengeance LED. Corsair is one of the most trusted names when it comes to the best RAM on the market. Its Vengeance series, especially, has something for everyone with its LED DDR4 offerings.
 
Which RAM is fastest?
Corsair releases their fastest ever DDR4 RAM. Corsair has announced that their Vengeance LPX DDR4 memory kits are soon to be available in their highest ever speed, and a record for commercially available RAM, 4,866MHz. This RAM will be available in 2x 8GB kits.
 
How do I get more RAM on my laptop for free?
Using a USB flash drive or SD card to “increase” RAM: ReadyBoost in Windows lets you use a USB drive or SD card as extra cache. ReadyBoost works by creating a cache file on the USB drive or SD card, which is then used as a disk cache – it does not literally add RAM, but it can speed up systems with slow hard drives and little memory.

Back to Top

     

V- Laptop Storage

What is the storage of a laptop?

The most common hard drive capacity in today’s laptops is 1 terabyte (TB), or 1,000 gigabytes (GB). Many cheap laptops come with a smaller 500GB hard drive, while a 2TB drive is occasionally used in some more expensive notebooks.
 
Can you add storage to your laptop?
If you can open up your laptop, you can replace its internal drive with a larger drive – or insert a second internal drive, on the off chance that your laptop has a second drive bay. Upgrading your laptop is often possible, but it’s definitely more work than quickly plugging in an external storage device!
 
How much storage does a laptop need?

If you mainly store text files and photos, then 1TB of storage space is sufficient. However, if you want to store a lot of movies, games, and other large files on your PC, it’s wise to reserve at least 2TB of storage space on your laptop.
 
Types of storage devices
  • Primary storage: Random Access Memory (RAM) – the primary storage of a computer
  • Secondary storage: hard disk drives (HDD) and solid-state drives (SSD)
  • External HDDs and SSDs
  • Flash memory devices
  • Optical storage devices
  • Floppy disks

Back to Top

VI- Laptop Camera

If you can’t find your web camera, follow the steps below:
  1. Click the Start button, located at the bottom left of the screen.
  2. Open the Control Panel.
  3. Select Hardware and Sound.
  4. Open Device Manager and double-click on Imaging Devices. Your webcam should be listed there.

To improve the image quality of your laptop camera:
  1. Update your imaging software to the most recent version.
  2. Adjust the lighting conditions.
  3. Soften the light.
  4. Your background matters.
  5. Don’t overload the laptop with multiple tasks.
  6. Adjust your laptop camera video settings.
  7. Adjust the screen resolution.
Most modern laptops and all-in-one computers now come with integrated webcams built into the display. While these built-in models are more convenient to use, external webcam models do have some advantages.
 
While they add expense beyond the laptop or PC itself, external webcams tend to have higher-quality components that allow for fine-tuning. Embedded webcams are typically small, and small components directly limit camera performance and image quality.

Which laptop camera is best?

The Logitech HD Webcam C920 has incredibly high image quality for a low price, making it the best overall webcam you can buy.

  1. Logitech HD Webcam C920. Here’s the best overall webcam.
  2. Razer Kiyo. Here’s the best webcam for streaming.
  3. Logitech StreamCam. …
  4. Logitech Brio 4K Pro Webcam. …
  5. Logitech Webcam C930e. …
  6. Logitech HD Webcam C310.

How do I turn on my built in camera on my laptop?

A: To turn on a built-in camera in Windows 10, just type “camera” into the Windows search bar and find “Settings.” Alternatively, press the Windows button and “I” to open Windows Settings, then select “Privacy” and find “Camera” on the left sidebar.

Back to Top

VII- Laptop Screen – Size and Display

Laptops frequently share memory between the CPU and the GPU, saving space and reducing power consumption. A laptop displays its graphics on a liquid crystal display (LCD) screen. Most screens measure between 12 and 17 inches, and the size of the screen affects the overall size of the laptop.

How do I get my laptop screen replaced?

Ordering by the laptop screen (LCD) model number.

Every screen installed in any laptop has a screen model number on the back of the LCD screen. This is without a doubt the best way to order replacement screens. The model number denotes the size, the resolution and the backlight type.

Which laptop screen is best?

  1. Alienware m15 (2019): 265%. The Alienware m15 tops the list with its incredibly vibrant display.
  2. HP Spectre x360 (15-inch OLED): 258%
  3. Razer Blade 15 (OLED): 243%
  4. Dell XPS 15: 239%
  5. Dell Precision 7730: 211%
  6. Asus ZenBook Pro Duo: 203%
  7. Lenovo ThinkPad X1 Yoga: 201%
  8. Samsung Galaxy Book Ion: 200%

Which matters the most, screen resolution or CPU processor? I’m choosing between two laptops for programming and photoshop- the 1st one has an i7-7500U, but it’s 1366×768. The 2nd one has the i5–7200U, but its screen resolution is 1080p.

Is it better to have a laptop with a smaller screen with a more powerful CPU and memory or a larger screen with lesser specs?

I would prefer the laptop with the smaller screen, and make sure you have Miracast or Google Chromecast. That way you can use almost any TV as your screen when needed. With the other option, you have a big screen but a computer that might not perform.

I got water on my laptop; now, a few hours later, it works except for the screen brightness, which glitches often. What is the best thing to do?

It sounds like a screen cable is damaged. You might have to take apart the laptop and clean the cable, or just replace the screen entirely.

Back to Top

VIII- Laptop Weight – Size

What is a good laptop weight?

Choose the Right Size

11 to 12 inches: The thinnest and lightest systems around have 11- to 12-inch screens and typically weigh 2.5 to 3.5 pounds. 13 to 14 inches: Provides the best balance of portability and usability, particularly if you get a laptop that weighs under 4 pounds.

How do I know my laptop size?

To measure a laptop screen size, take a measuring tape and start measuring from the bottom left of the laptop screen diagonally to the top right of the laptop screen. That is your laptop size.

Compared to 17″ models, 15″ laptops weigh less, are easier to carry around, and have better battery life. There isn’t much difference performance-wise between a 17″ and a 15″; the main differences are portability and the optional keypad.

The larger the laptop, the bigger the screen. Big screens are nice, but for a laptop it comes at the cost of size and weight. So basically you have to balance the tradeoffs between portability and screen size.

Battery life tends to be better for larger laptops due to there being more space for a battery, but not all models make use of it. And in some cases the larger sizes of machine have a different set of hardware which may actually drain the battery faster.



Back to Top

IX- Latest Laptops Q&A

 

1) Are Macs worth the price?

Of course Macs (of any model) are “worth it” to literally tens of millions of people a year. That’s not hype: Apple actually sells that many every year. And Macs have topped customer satisfaction ratings for literal decades as a result:

Apple has cemented its place atop the American Customer Satisfaction Index, a sort of Michelin guide for customer service, for the eleventh straight year.

In a new report released by ACSI, Apple continued its lead over big name rivals such as Dell, Acer, Hewlett-Packard and the catch-all “All Others” when it comes to satisfaction with computing devices — including desktops, laptops and tablets. Scores are based on everything from pre-sale customer expectations, to perceived value and quality, customer complaint incidents and overall consumer loyalty.

Reference: Apple tops customer satisfaction survey for 11th straight year

If you are new to Macs, it would be good for you to do a little of your own research before making a final decision. Have a look at Apple’s webpage just for new Mac users:

New to Mac – Official Apple Support

If you can, go to your local Apple Store and play around with some Macs. That will also give you a good idea of the differences between the different models.

And with that, I’ll give you my own personal perspective on this topic:

I’ve used and written software for Macs (and all of the other mainstream platforms) since the 1980s when there were no standardized mainstream platforms. I make a living developing enterprise software for Linux, Windows, macOS, etc on a daily basis. I’ve done the overwhelming majority of my professional work – especially later in my career while earning much more money – on Macs. But I use them for lots of other things like encoding HD video, editing AV media, graphics work, running virtual machines and Docker containers, and so on.

Windows flat-out sucks for software development and system architecture work – and it’s pretty bad for general desktop use as well, IMO. I can say this because I use Windows for such things daily. Most of the development tools I need to use regularly aren’t built into Windows, and are more complicated to use and configure. For instance, GitBash has a different console than Windows. OpenSSH, Ruby, Python, Apache, and tons of other languages / tools aren’t built in and are more of a pain to install and configure, and the list goes on and on. In general, Windows makes you work harder than you would on a Mac. Microsoft has tried to make things better by providing a Linux subsystem – and I’ve used Cygwin long before that – but even those are a kludge in comparison to Unix being the core of the OS in macOS, and all of the normal tools coming pre-installed. And macOS has lots of features (like Continuity) that significantly improve your productivity – those mostly don’t exist on other platforms.

I can tell you the 2019/2020 model MacBook Pros are huge upgrades from previous MacBook Pro models. The 2019 16-inch MacBook Pro in particular is a fantastic machine.

For all of the hype about keyboard issues in previous models, Apple’s return to the scissor-switch keyboard is of course great, and the Touch Bar includes a physical Escape key as well. But the truth is those keyboard problems affected a relatively small number of people, and there are much bigger improvements than that.

In terms of processing power, RAM, and storage, you can configure it all the way up to a 2.4GHz 8-core 9th-generation Intel Core i9 processor (Turbo Boost up to 5.0GHz), an AMD Radeon Pro 5500M with 8GB of GDDR6 memory, 64GB of 2666MHz DDR4 memory, and 8TB of 3.2GB/s SSD storage.

The thermal cooling system is redesigned and much more efficient. It’s so efficient, in fact, that thermal throttling is truly a thing of the past.

The screen outputs 500 nits of brightness and has automatic True Tone color shifting with a consistent P3 wide color gamut. It’s just plain gorgeous.

The sound system is a completely redesigned six-speaker setup that has way more bass and much less distortion than any model before it. Stereo separation is remarkable, mids are clear, and the bass is better than I’ve heard on any laptop before. It sounds amazing. And they redesigned the microphones into a three-mic array which really improves audio recordings too.

I own the beefiest 2019 16-inch MacBook Pro with the 2.4GHz 8-core 9th-generation Intel Core i9 processor (Turbo Boost up to 5.0GHz), the AMD Radeon Pro 5500M with 8GB of GDDR6 memory, 32GB of DDR4 memory, and 2TB of SSD storage. I bought it when it was first released. And I’ve been putting it through different work loads since I got it.

And I’m pleased to report my MacBook Pro happily encodes HD video (one of the most demanding tasks I do on a regular basis) with all cores maxed out for literal hours, at a sustained speed of somewhere above 3GHz, without any CPU throttling, and the temperatures hovering in the 83-95° C range. The fans while this is going on are surprisingly quiet. The bottom of the case (which often feels like a heat sink in older models) doesn’t even feel very warm to the touch compared with older models.

It’s downright speedy. The display is gorgeous and bright. The keyboard is really nice and responsive. The trackpad, like Apple’s others, is the best in the industry. It’s a beast in a really thin, fairly lightweight, relatively small package. I’m loving it!™

It’s worth every single penny.

2) Are Apple laptops durable?

As someone who has used and owned various model Apple laptops since the 1998 PowerBook G3 250 (Wallstreet), I’m happy to report that Apple has indeed designed its laptops to be more durable as time goes on and newer models are released.

Anyone who has used Apple laptops for a minute can tell you today’s aluminum unibody MacBook Pros are far more durable than the PowerBook G4 models they replaced, which were in turn far more durable than the plastic PowerBook G3 models they replaced.

The unibody construction Apple introduced in 2008 was a huge leap forward in rigidity of the chassis which translates into a corresponding natural increase in durability. It’s no mistake that all of Apple’s top competitors in the laptop space are copying this design. Before that, the chassis consisted of multiple parts that were held together with brackets and screws, which naturally made them much less durable. And before that, lots of plastic was used, which was even weaker. And as anyone who was alive and using laptops back then knows, that was the state of laptops until Apple innovated in that area.

As is so often the case, Apple leads, the industry follows.

3) What is the difference between the MacBook Air and the MacBook Pro?

The biggest difference is the fan.

  • The MacBook Pro has a fan, the MacBook Air does not have any fan at all
  • That means that the biggest difference is that the MacBook Pro can spin up the fan to provide much more cooling so it then can ramp up the processor clock rate (what Intel calls Turbo Boost)
    • There is a minor difference in that the lowest-cost MacBook Air (below $1,000 USD) has 7 GPU cores instead of the 8 GPU cores in the more expensive MacBook Air model. This is a relatively minor difference, and most users will not notice it.

4) Why are Macs considered more private and secure?

That is because Apple actively expends a huge amount of effort on protecting your privacy and security – far more effort than any and all other operating systems (including Windows, Android, and Linux).

5) Is it worth buying refurbished Apple products?

Buying refurbished products directly from Apple is a good way to pay less, in my experience.

When Apple refurbishes a product, the product is sent back to the factory where faulty or damaged parts – including parts with scratches or blemishes – are removed and replaced with brand-new parts. Then it undergoes a battery of low-level and high-level tests to ensure all parts function correctly. When you receive an Apple-refurbished product, it is virtually indistinguishable from a new product. It quite literally looks and smells new. And it comes with the same standard warranty and support as a new product. The only noticeable difference is that the packaging is plain rather than being the retail packaging, and of course the price is often significantly reduced compared to a new product. I’ve purchased Apple refurbished products for years and have been pleased with every single one.

Other retailers do not use Apple’s process when they refurbish items. Each one is different. Some give you a product with scratches, blemishes, and even fingerprints on it. Some do little more than wipe the product with a rag and throw it back in the box. Some do component-level (soldering, etc.) repairs that may or may not actually fix all of the problems. Some don’t even bother testing functionality after the repair is done. And often there is no adjustment to the original manufacturer’s warranty, which means that depending on the age of the device, you’re only going to get a partial warranty – assuming the warranty wasn’t voided by whoever “repaired” it before you got it. For this reason, I don’t recommend buying refurbished Apple products from anyone but Apple directly.

6) Would you use a Surface Pro 2 to learn programming?

You could probably use it for that – but I have to say that if you listed every computer on sale today and asked me to rank them in order of which one I’d get, the Surface Pro would be at the very bottom of the list.

The MAIN thing I find essential is to have a large, high-res screen and a proper keyboard and mouse.

Personally – I’d get a cheap used laptop – and spend your money on a 27″ external monitor and a nice external keyboard and mouse. That way, you have a good setup for home/office use – and can still pick up the laptop and take it with you at other times.

I guess you could probably do that with the Surface Pro 2 – but it’s three times the price and half as good as a low-end laptop.

7) Is a Chromebook a good alternative to a MacBook?

No – they are different devices intended for different uses.

  • MacBooks can perform all computing tasks
    • Can run an unlimited variety of apps installed on the machine
      • Runs all macOS apps
      • Runs all Windows apps (if you install Windows)
      • Runs web apps
      • Runs UNIX apps
      • Runs (well written) Linux apps
    • Well built, top quality hardware that lasts for many years
    • Known for having the best security and privacy
  • Chromebooks are strictly restricted
    • Can only run Google apps
    • Must have a good connection to access all apps
    • Known as a “thin client” or a “netbook”, which means it is the least powerful, lowest-cost device that can barely get a minimal job done, with greatly reduced functionality.
    • Built on the cheapest possible hardware because Cost is the primary driving factor for this product
    • Unknown security and no privacy, because this is a Google product

To Conclude: Chromebook is the cheapest possible way to get a few tasks done, where a MacBook is the most versatile, powerful, dependable, secure way to get everything done over many years.

The Chrome operating system is derived from Linux and has many of the great features found in Linux. I believe that Windows is inferior to Linux, and therefore inferior to a Chromebook. An advantage of the Chromebook is that, in general, it is less costly than a comparable Windows system.

8) Linux vs Windows

MS Windows generally works, as does Linux. Both have places where they break, but both work most of the time.

Linux has almost all of its software in a repository, that is like an app store but Linux is free and also has no advertisements. It is also all security checked and tested to work.

In 99.99% of the cases you will need no other software.

Virus

Linux can have viruses and other malware in theory, but in practice it is rarely a problem. I have used it with no malware (virus) checker and NEVER had any sort of invasive or take-over software. There are few attempts to create malware for Linux, because Linux designers are vigilant about security, so it is very difficult to break, while Windows is easy.

There are also far more Windows systems; the result is that hackers attack Windows, not Linux.

Linux has some things that I find wonderful. When you update you almost never have to reboot. Updates take place in the background and never interrupt your work or force you to reboot when you don’t want to. You never have to type in software keys.

On my website, http://waynes-web-design.com/, I also share resources that may help you in your learning journey.

9) How do I make my laptop battery last longer?

You can do practically anything on a modern laptop, but their advanced features drain battery life to the extent that you can often only get a couple of hours out of your laptop before its battery drains.

1. Dim your screen.

2. Change power settings to Power Saver.

3. Switch off Wi-Fi if it is not needed.

4. Disable built-in features you are not using.

5. Invest in a bigger battery: most laptops ship with a six-cell battery, but many manufacturers offer eight- or even 12-cell optional upgrades, which can double your power.

6. Charge it properly – don’t overcharge it or discharge it completely.

7. Set your screen brightness to medium.

8. Close any unnecessary programs that you are not using at the moment.

9. Keep your laptop safe from viruses, trojans, etc., as they can create processes in the background, which in turn will increase CPU utilisation and consume battery.

10. Put the laptop to sleep if you are leaving it idle for a short period of time.

11. Do not connect unwanted devices to the laptop, as they will draw some power from the ports.

12. Turn off the radios (Bluetooth, Wi-Fi, etc.) when not in use.

Back to Top

X- Laptops Reviews

LG Gram 17 (2020) review: A lightweight productivity machine with a big screen

With a weight under 3 pounds and a tall 17-inch display, you get the space for keeping up your productivity in a body made for working at home or anywhere else.

If you’ve been working from home on a 13-, 14- or 15-inch laptop and you’re finding your productivity suffering on its small screen, you may be craving something larger. An external display might make the most sense, assuming you’ve got the room for one. But if you need something more mobile and lap-friendly, the LG Gram 17 might do the trick: despite its tall 17-inch display, it’s incredibly light, with a long battery life, making it a standout in the category.

Get the LG 17 now here

Back to Top

Apple M1 Mac Review: Time to Recalibrate!

2020 MacBook Pro 13 Apple Silicon Unboxing, first thoughts and initial benchmarks

Yeah, Apple’s M1 MacBook Pro is powerful, but it’s the battery life that will blow you away

Back to Top

HP laptop 15S-DU2002TU laptop review

Yes, it has a 10th-generation processor. Here is some information about this laptop:

  • Processor: 1.2GHz Intel Core i3-1005G1 processor
  • Memory: 8GB DDR4 RAM
  • Storage: 1TB 5400rpm hard drive
  • Display: 15.6-inch screen, Intel UHD Graphics
  • Operating system: Windows 10 Home operating system
  • Weight: 1.75kg

Pros

  • Slim and portable design
  • Long-lasting battery life
  • Micro-edge bezel design
  • Enough storage space
  • Sufficient RAM to speed up the task
    Get it here

Apple MacBook Air with Apple M1 Chip is an Astonishing Breakthrough


Apple Silicon launches with stellar battery life and uncompromising power


Get the Macbook Air at discounted price here

Loans Debts Mortgages Finances Calculators – Unit and Currency Converters

Loans and Financial Calculators

The Cloud is the future: Get Certified now.

At some stage, we all need or want more money than we have. Funding a new set of wheels is the number one reason to take out a personal loan. Perhaps unsurprisingly, men are more likely than women to take out a personal loan. According to our survey, 69.05% of men said they’ve taken out a loan compared to 62.09% of women.

What should you expect when taking out a loan? What is the total cost of a loan? Use the calculators below to find out.

  1. Unit Converter
  2. Currency Converter
  3. Financial Calculator
  4. Loan Calculator
  5. Loans Comparison Calculator
  6. Car Auto Loans Calculator
  7. Mortgages Comparison Calculator
  8. Mortgage Calculator
  9. Reverse Mortgage Calculator
  10. Compare Mortgages & Loans
  11. Credit Card Dues Calculator
  12. Credit Card Repayment Calculator

Unit Converter

With this quick unit converter calculator, you can convert all types of units between the metric and imperial systems in seconds.

Back to top

Currency Converter

This FREE currency converter calculator will convert your money based on current values from around the world.

Back to top

Financial Calculator


This FREE 10 in 1 Financial Calculator provides: Loan Calculator, Mortgage Calculator, Credit Card Dues Calculator, RD Calculator, Annuity Calculator, TD / FD Calculator, SIP Calculator, Compare loans Calculator, EMI Calculator, Loan Amount & Tenure Calculator

Back to top

Loan Calculator

Use this simple and FREE loan calculator to calculate the real cost of any type of loans before accepting and signing. Remember, banks are not your friends.
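For reference, the standard fixed-payment formula behind calculators like this one is M = P × r × (1 + r)^n / ((1 + r)^n − 1), where P is the principal, r the monthly interest rate, and n the number of monthly payments. A worked example with hypothetical numbers: borrowing P = $20,000 at 6% annual interest (r = 0.005 per month) over n = 60 months gives M ≈ $386.66, for a total repayment of roughly $23,200 – about $3,200 of it interest.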

Back to top

Loan Mortgage Comparison Calculator


Use this simple and FREE loan and mortgage comparison calculator to compare the real cost difference of any type of loans or mortgages before choosing. Remember, banks are not your friends.

Back to top

Car Loan – Auto Loan Calculator





The average monthly car payment in the U.S. is $550 for new vehicles, $393 for used and $452 for leased.
Americans borrow an average $32,480 for new vehicles and $20,446 for used.
The average loan term is 69 months for new cars, 35 months for used and 37 months for leased vehicles.
Gen Xers are the most likely to have a car loan, and carry the highest auto loan balances with a median of $19,313.

Back to top

Mortgage Calculator

Use this simple and FREE mortgage calculator to calculate the real cost of a mortgage before accepting it. Remember, banks are not your friends. Always shop around, and never forget that you are the boss. Negotiate, negotiate, and negotiate.

Back to top

Reverse Mortgage Calculator

A reverse mortgage is a mortgage loan, usually secured by a residential property, that enables the borrower to access the unencumbered value of the property. These loans are typically promoted to older homeowners and typically do not require monthly mortgage payments. Use this FREE reverse mortgage calculator to learn the real cost of a reverse mortgage before accepting it. Remember, banks are not your friends.

Back to top

Interest and Rate of Return Calculator

Yield is the annual profit that an investor receives from an investment. The interest rate is the percentage charged by a lender for a loan; it is also used to describe the regular return an investor can expect from a debt instrument such as a bond or certificate of deposit (CD).

Back to top

Credit Card Repayment Calculator

How do you pay back a credit card?
Here’s how it works (the “debt avalanche” method):
Step 1: Make the minimum payment on all of your accounts.
Step 2: Put as much extra money as possible toward the account with the highest interest rate.
Step 3: Once the debt with the highest interest is paid off, start paying as much as you can on the account with the next highest interest rate.

Back to top


Back to top

CyberSecurity 101 and Top 25 AWS Certified Security Specialty Questions and Answers Dumps

AWS Certified Security – Specialty Questions and Answers Dumps

The Cloud is the future: Get Certified now.

Almost 4.57 billion people were active internet users as of July 2020, encompassing 59 percent of the global population. 94% of enterprises use the cloud, and 77% of organizations worldwide have at least one application running in the cloud. This has resulted in exponential growth of cyber attacks, making cybersecurity one of the biggest challenges for individuals and organizations worldwide: 158,727 cyber attacks per hour, 2,645 per minute, and 44 every second of every day.

In this blog, we cover the Top 25 AWS Certified Security Specialty Questions and Answers Dumps and all latest and relevant information about CyberSecurity including:

I- The AWS Certified Security – Specialty (SCS-C01) examination is intended for individuals who perform a security role. This exam validates an examinee’s ability to effectively demonstrate knowledge about securing the AWS platform.

It validates an examinee’s ability to demonstrate:

An understanding of specialized data classifications and AWS data protection mechanisms.

An understanding of data-encryption methods and AWS mechanisms to implement them.

An understanding of secure Internet protocols and AWS mechanisms to implement them.

A working knowledge of AWS security services and features of services to provide a secure production environment.

Competency gained from two or more years of production deployment experience using AWS security services and features.

The ability to make tradeoff decisions with regard to cost, security, and deployment complexity given a set of application requirements.


AWS Certified Security Specialty

An understanding of security operations and risks.

Below are the Top 25 AWS Certified Security Specialty Questions and Answers Dumps including Notes, Hint and References:

Question 1:  When requested through an STS API call, credentials are returned with what three components?

A)  Security Token, Access Key ID, Signed URL
B) Security Token, Access Key ID, Secret Access Key
C) Signed URL, Security Token, Username
D) Security Token, Secret Access Key, Personal Pin Code

ANSWER1:

B

Notes/Hint1:

Security Token, Access Key ID, Secret Access Key

Reference1: Security Token, Access Key ID, Secret Access Key
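As an illustration, temporary credentials can be requested with the AWS CLI (a minimal sketch; any IAM principal permitted to call sts:GetSessionToken works):

    aws sts get-session-token --duration-seconds 3600
    # The response's Credentials block contains exactly these three components:
    # AccessKeyId, SecretAccessKey, and SessionToken (plus an Expiration time).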

Get mobile friendly version of the quiz @ the App Store

Back to Top

Question 2: A company has AWS workloads in multiple geographical locations. A Developer has created an Amazon Aurora database in the us-west-1 Region. The database is encrypted using a customer-managed AWS KMS key. Now the Developer wants to create the same encrypted database in the us-east-1 Region. Which approach should the Developer take to accomplish this task?

A) Create a snapshot of the database in the us-west-1 Region. Copy the snapshot to the us-east-1 Region and specify a KMS key in the us-east-1 Region. Restore the database from the copied snapshot.
B) Create an unencrypted snapshot of the database in the us-west-1 Region. Copy the snapshot to the useast-1 Region. Restore the database from the copied snapshot and enable encryption using the KMS key from the us-east-1 Region
C) Disable encryption on the database. Create a snapshot of the database in the us-west-1 Region. Copy the snapshot to the us-east-1 Region. Restore the database from the copied snapshot.
D) In the us-east-1 Region, choose to restore the latest automated backup of the database from the us-west1 Region. Enable encryption using a KMS key in the us-east-1 Region

ANSWER2:

A

Notes/Hint2:

If a user copies an encrypted snapshot, the copy of the snapshot must also be encrypted. If a user copies an encrypted snapshot across Regions, users cannot use the same AWS KMS encryption key for the copy as used for the source snapshot, because KMS keys are Region specific. Instead, users must specify a KMS key that is valid in the destination Region

Reference2: copies an encrypted snapshot, KMS Keys are Region-specific
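A sketch of option A with the AWS CLI (the account ID, snapshot names, and key ARN are hypothetical):

    aws rds copy-db-cluster-snapshot \
      --source-db-cluster-snapshot-identifier arn:aws:rds:us-west-1:111122223333:cluster-snapshot:aurora-snap \
      --target-db-cluster-snapshot-identifier aurora-snap-copy \
      --kms-key-id arn:aws:kms:us-east-1:111122223333:key/<key-id> \
      --source-region us-west-1 \
      --region us-east-1

The database is then restored from the copied snapshot in us-east-1.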

Get mobile friendly version of the quiz @ the App Store

Question 3: A corporate cloud security policy states that communication between the company’s VPC and KMS must travel entirely within the AWS network and not use public service endpoints. Which combination of the following actions MOST satisfies this requirement? (Select TWO.) 

A) Add the aws:sourceVpce condition to the AWS KMS key policy referencing the company’s VPC endpoint ID.
B) Remove the VPC internet gateway from the VPC and add a virtual private gateway to the VPC to prevent direct, public internet connectivity.
C) Create a VPC endpoint for AWS KMS with private DNS enabled.
D) Use the KMS Import Key feature to securely transfer the AWS KMS key over a VPN.
E) Add the following condition to the AWS KMS key policy: “aws:SourceIp”: “10.0.0.0/16”.

ANSWER3:

A and C

Notes/Hint3: 

An IAM policy can deny access to AWS KMS except through your VPC endpoint with the following condition statement:

    "Condition": {
        "StringNotEquals": {
            "aws:sourceVpce": "vpce-0295a3caf8414c94a"
        }
    }

If you select the Enable Private DNS Name option, the standard AWS KMS DNS hostname resolves to your VPC endpoint.
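Creating such an endpoint is straightforward with the AWS CLI. A sketch, in which the VPC, subnet, and security group IDs are hypothetical:

    aws ec2 create-vpc-endpoint \
      --vpc-id vpc-1a2b3c4d \
      --vpc-endpoint-type Interface \
      --service-name com.amazonaws.us-east-1.kms \
      --subnet-ids subnet-1a2b3c4d \
      --security-group-ids sg-1a2b3c4d \
      --private-dns-enabled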

Reference3: AWS KMS

Get mobile friendly version of the quiz @ the App Store

Question 4: An application team is designing a solution with two applications. The security team wants the applications’ logs to be captured in two different places, because one of the applications produces logs with sensitive data. Which solution meets the requirement with the LEAST risk and effort? 

A) Use Amazon CloudWatch Logs to capture all logs, write an AWS Lambda function that parses the log file, and move sensitive data to a different log.
B) Use Amazon CloudWatch Logs with two log groups, with one for each application, and use an AWS IAM policy to control access to the log groups, as required.
C) Aggregate logs into one file, then use Amazon CloudWatch Logs, and then design two CloudWatch metric filters to filter sensitive data from the logs.
D) Add logic to the application that saves sensitive data logs on the Amazon EC2 instances’ local storage, and write a batch script that logs into the Amazon EC2 instances and moves sensitive logs to a secure location.

ANSWER4:

B

Notes/Hint4: 

Each application's log can be configured to send the log to a specific Amazon CloudWatch Logs log group.
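For illustration, an IAM policy could limit a group of users to the non-sensitive application's log group only; the Region, account ID, and log group names here are hypothetical:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["logs:GetLogEvents", "logs:FilterLogEvents"],
          "Resource": "arn:aws:logs:us-east-1:111122223333:log-group:/app/standard:*"
        }
      ]
    }

Users without a comparable statement for the sensitive log group cannot read it.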

Reference4: Amazon CloudWatch Logs log group.

Question 5: A security engineer must set up security group rules for a three-tier application: 

  • Presentation tier – Accessed by users over the web, protected by the security group presentation-sg
  • Logic tier – RESTful API accessed from the presentation tier through HTTPS, protected by the security group logic-sg
  • Data tier – SQL Server database accessed over port 1433 from the logic tier, protected by the security group data-sg
Which combination of the following security group rules will allow the application to be secure and functional? (Select THREE.)
A) presentation-sg: Allow ports 80 and 443 from 0.0.0.0/0
B) data-sg: Allow port 1433 from presentation-sg
C) data-sg: Allow port 1433 from logic-sg
D) presentation-sg: Allow port 1433 from data-sg
E) logic-sg: Allow port 443 from presentation-sg
F) logic-sg: Allow port 443 from 0.0.0.0/0

ANSWER5:

A C and E

Notes/Hint5: 

In an n-tier architecture, each tier’s security group allows traffic from the security group sending it traffic only. The presentation tier opens traffic for HTTP and HTTPS from the internet. Since security groups are stateful, only inbound rules are required.
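A minimal sketch of the required rules with the AWS CLI, assuming the security group IDs shown are placeholders:

    aws ec2 authorize-security-group-ingress --group-id <presentation-sg> --protocol tcp --port 80 --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-id <presentation-sg> --protocol tcp --port 443 --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-id <logic-sg> --protocol tcp --port 443 --source-group <presentation-sg>
    aws ec2 authorize-security-group-ingress --group-id <data-sg> --protocol tcp --port 1433 --source-group <logic-sg>

Referencing the upstream security group with --source-group is what restricts each tier to traffic from the tier in front of it.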

Reference5: n-tier architecture

Question 6: A security engineer is working with a product team building a web application on AWS. The application uses Amazon S3 to host the static content, Amazon API Gateway to provide RESTful services, and Amazon DynamoDB as the backend data store. The users already exist in a directory that is exposed through a SAML identity provider. Which combination of the following actions should the engineer take to enable users to be authenticated into the web application and call APIs? (Select THREE). 

A) Create a custom authorization service using AWS Lambda.
B) Configure a SAML identity provider in Amazon Cognito to map attributes to the Amazon Cognito user pool attributes.
C) Configure the SAML identity provider to add the Amazon Cognito user pool as a relying party.
D) Configure an Amazon Cognito identity pool to integrate with social login providers.
E) Update DynamoDB to store the user email addresses and passwords.
F) Update API Gateway to use an Amazon Cognito user pool authorizer.

ANSWER6:

B, C and F

Notes/Hint6: 

When Amazon Cognito receives a SAML assertion, it needs to be able to map SAML attributes to user pool attributes. When configuring Amazon Cognito to receive SAML assertions from an identity provider, you need to ensure that the identity provider is configured to have Amazon Cognito as a relying party. Amazon API Gateway will need to be able to understand the authorization being passed from Amazon Cognito, which is a configuration step.
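As a rough sketch, a SAML provider can be attached to a user pool with the CLI; the pool ID, provider name, metadata URL, and attribute mapping below are hypothetical:

    aws cognito-idp create-identity-provider \
        --user-pool-id us-east-1_example \
        --provider-name CorporateSAML \
        --provider-type SAML \
        --provider-details MetadataURL=https://idp.example.com/saml/metadata \
        --attribute-mapping email=http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress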

Reference6: user pool attributes Amazon API Gateway 

Question 7: A company is hosting a web application on AWS and is using an Amazon S3 bucket to store images. Users should have the ability to read objects in the bucket. A security engineer has written the following bucket policy to grant public read access:

Attempts to read an object, however, receive the error: “Action does not apply to any resource(s) in statement.” What should the engineer do to fix the error? 
A) Change the IAM permissions by applying PutBucketPolicy permissions.
B) Verify that the policy has the same name as the bucket name. If not, make it the same.
C) Change the resource section to “arn:aws:s3:::appbucket/*”.
D) Add an s3:ListBucket action.

ANSWER7:

C

Notes/Hint7: 

The resource section should match with the type of operation. Change the ARN to include /* at the end, as it is an object operation.
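A corrected policy might look like the following sketch, assuming the bucket is named appbucket as option C implies:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "PublicReadGetObject",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::appbucket/*"
        }
      ]
    }

The /* suffix scopes the statement to the objects in the bucket, which is what the object-level s3:GetObject action requires.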

Reference7: IAM Policy – Access to S3 bucket

Question 8: A company decides to place database hosts in its own VPC, and to set up VPC peering to different VPCs containing the application and web tiers. The application servers are unable to connect to the database. Which network troubleshooting steps should be taken to resolve the issue? (Select TWO.)

A) Check to see if the application servers are in a private subnet or public subnet.
B) Check the route tables for the application server subnets for routes to the VPC peering connection.
C) Check the NACLs for the database subnets for rules that allow traffic from the internet.
D) Check the database security groups for rules that allow traffic from the application servers.
E) Check to see if the database VPC has an internet gateway.

ANSWER8:

B and D

Notes/Hint8: 

You must configure the route tables in each VPC to route to each other through the peering connection. You also must add rules to the security group for the databases to accept requests from the application server security group in the other VPC. 
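For illustration only; the route table, peering connection, CIDR, port, and security group IDs are placeholders:

    # Route the peer VPC's CIDR through the peering connection
    aws ec2 create-route --route-table-id <app-rtb> --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id <pcx-id>

    # Allow the application tier's security group into the database security group
    aws ec2 authorize-security-group-ingress --group-id <db-sg> --protocol tcp --port 3306 --source-group <app-sg>

The database port (3306 here, for MySQL) depends on the engine in use.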

Reference8: Route tables; security group rules referencing the security group in the other VPC

Question 9: A company is building a data lake on Amazon S3. The data consists of millions of small files containing sensitive information. The security team has the following requirements for the architecture: 

  • Data must be encrypted in transit. 
  • Data must be encrypted at rest. 
  • The bucket must be private, but if the bucket is accidentally made public, the data must remain confidential. 
Which combination of steps would meet the requirements? (Select TWO.) 
A) Enable AES-256 encryption using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) on the S3 bucket.
B) Enable default encryption with server-side encryption with AWS KMS-managed keys (SSE-KMS) on the S3 bucket.
C) Add a bucket policy that includes a deny if a PutObject request does not include aws:SecureTransport.
D) Add a bucket policy with aws:SourceIp to allow uploads and downloads from the corporate intranet only.
E) Enable Amazon Macie to monitor and act on changes to the data lake’s S3 bucket.

ANSWER9:

B and C

Notes/Hint9: 

Bucket encryption using KMS will protect both in case disks are stolen as well as if the bucket is public. This is because the AWS KMS key would need to have privileges granted to it for users outside of AWS. HTTPS will protect data in transit.
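Option C's statement could be sketched as follows, with a hypothetical bucket name:

    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::datalake-bucket/*",
      "Condition": {
        "Bool": { "aws:SecureTransport": "false" }
      }
    }

Requests that arrive over plain HTTP fail this condition and are denied, enforcing encryption in transit for uploads.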

Reference9: Bucket encryption using KMS; granting key privileges; protecting data in transit

Question 10: A security engineer must ensure that all API calls are collected across all company accounts, and that they are preserved online and are instantly available for analysis for 90 days. For compliance reasons, this data must be restorable for 7 years. Which steps must be taken to meet the retention needs in a scalable, cost-effective way? 

A) Enable AWS CloudTrail logging across all accounts to a centralized Amazon S3 bucket with versioning enabled. Set a lifecycle policy to move the data to Amazon Glacier daily, and expire the data after 90 days.
B) Enable AWS CloudTrail logging across all accounts to S3 buckets. Set a lifecycle policy to expire the data in each bucket after 7 years.
C) Enable AWS CloudTrail logging across all accounts to Amazon Glacier. Set a lifecycle policy to expire the data after 7 years.
D) Enable AWS CloudTrail logging across all accounts to a centralized Amazon S3 bucket. Set a lifecycle policy to move the data to Amazon Glacier after 90 days, and expire the data after 7 years.

ANSWER10:

D

Notes/Hint10: 

Option D meets all the requirements and is cost-effective, using lifecycle policies to transition data to Amazon Glacier after 90 days and expire it after 7 years.
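A lifecycle configuration along the lines of option D might look like this sketch (the rule ID is arbitrary; 2,555 days is roughly 7 years):

    {
      "Rules": [
        {
          "ID": "cloudtrail-retention",
          "Status": "Enabled",
          "Filter": { "Prefix": "" },
          "Transitions": [ { "Days": 90, "StorageClass": "GLACIER" } ],
          "Expiration": { "Days": 2555 }
        }
      ]
    }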

Reference10: lifecycle policies

Question 11: A security engineer has been informed that a user’s access key has been found on GitHub. The engineer must ensure that this access key cannot continue to be used, and must assess whether the access key was used to perform any unauthorized activities. Which steps must be taken to perform these tasks? 

A) Review the user’s IAM permissions and delete any unrecognized or unauthorized resources.
B) Delete the user, review Amazon CloudWatch Logs in all regions, and report the abuse.
C) Delete or rotate the user’s key, review the AWS CloudTrail logs in all regions, and delete any unrecognized or unauthorized resources.
D) Instruct the user to remove the key from the GitHub submission, rotate keys, and re-deploy any instances that were launched.

ANSWER11:

C

Notes/Hint11: 

Option C removes the exposed key and audits the environment for malicious activities.
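For example, the exposed key can be disabled immediately with the CLI; the user name and key ID here are placeholders:

    aws iam update-access-key --user-name <user> --access-key-id AKIAEXAMPLE --status Inactive

After that, CloudTrail event history in every Region should be reviewed for activity made with that key.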

Reference11: malicious activities

Question 12: You have a CloudFront distribution configured with the following path patterns: When users request objects that start with ‘static2/’, they are receiving 404 response codes. What might be the problem?

A) CloudFront distributions cannot have multiple different origin types

B) The ‘*’ path pattern must appear after the ‘static2/*’ path

C) CloudFront distributions cannot have origins in different AWS regions
D) The ‘*’ path pattern must appear before the ‘static1/*’ path

ANSWER12:

B

Notes/Hint12: 

CloudFront evaluates path patterns in the order in which they are listed in the distribution, so the catch-all ‘*’ pattern must come after the more specific ‘static2/*’ pattern; otherwise requests for ‘static2/’ objects are routed to the wrong origin and return 404s. A distribution can mix origin types and use origins in different AWS Regions, so the other options are incorrect.

Reference12: CloudFront

Question 13: An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk. Which solution will resolve the security concern?

A) Access the data through an Internet Gateway.
B) Access the data through a VPN connection.
C) Access the data through a NAT Gateway.
D) Access the data through a VPC endpoint for Amazon S3.

ANSWER13:

D

Notes/Hint13: 

VPC endpoints for Amazon S3 provide secure connections to S3 buckets that do not require a gateway or NAT instances. NAT Gateways and Internet Gateways still route traffic over the Internet to the public endpoint for Amazon S3. There is no way to connect to Amazon S3 via VPN.
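A gateway endpoint for S3 is created against the VPC's route tables; the IDs below are placeholders:

    aws ec2 create-vpc-endpoint \
        --vpc-id vpc-0example \
        --service-name com.amazonaws.us-east-1.s3 \
        --route-table-ids rtb-0example

Traffic to S3 from the associated subnets then flows over the AWS network instead of the public Internet.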

Reference13: S3 VPC Endpoints

Question 14: An organization is building an Amazon Redshift cluster in their shared services VPC. The cluster will host sensitive data. How can the organization control which networks can access the cluster?

A) Run the cluster in a different VPC and connect through VPC peering
B) Create a database user inside the Amazon Redshift cluster only for users on the network
C) Define a cluster security group for the cluster that allows access from the allowed networks
D) Only allow access to networks that connect with the shared services network via VPN

ANSWER14:

C

Notes/Hint14: 

A security group can grant access to traffic from the allowed networks via the CIDR range for each network. VPC peering and VPN are connectivity services and cannot control traffic for security. Amazon Redshift user accounts address authentication and authorization at the user level and have no control over network traffic
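Assuming the cluster sits behind a VPC security group, an allowed corporate network could be admitted on the default Redshift port like this (the group ID and CIDR are placeholders):

    aws ec2 authorize-security-group-ingress --group-id <redshift-sg> --protocol tcp --port 5439 --cidr 10.20.0.0/16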

Reference14: AWS Security best practice

Question 15: From a security perspective, what is a principal? (Select TWO.)

A) An identity
B) An anonymous user
C) An authenticated user
D) A resource

ANSWER15:

B and C

Notes/Hint15: 

An anonymous user falls under the definition of a principal. A principal can be an anonymous user acting on a system.  An authenticated user falls under the definition of a principal. A principal can be an authenticated user acting on a system

Reference15: IAM

Question 16: A company is storing an access key (access key ID and secret access key) in a text file on a custom AMI. The company uses the access key to access DynamoDB tables from instances created from the AMI. The security team has mandated a more secure solution. Which solution will meet the security team’s mandate?

A) Put the access key in an S3 bucket, and retrieve the access key on boot from the instance.
B) Pass the access key to the instances through instance user data.
C) Obtain the access key from a key server launched in a private subnet
D) Create an IAM role with permissions to access the table, and launch all instances with the new role

ANSWER16:

D

Notes/Hint16: 

IAM roles for EC2 instances allow applications running on the instance to access AWS resources without having to create and store any access keys. Any solution involving the creation of an access key then introduces the complexity of managing that secret
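A rough sketch of option D with the CLI; the role and profile names are hypothetical, and ec2-trust.json would contain a standard trust policy for ec2.amazonaws.com:

    aws iam create-role --role-name ddb-reader --assume-role-policy-document file://ec2-trust.json
    aws iam attach-role-policy --role-name ddb-reader --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess
    aws iam create-instance-profile --instance-profile-name ddb-reader-profile
    aws iam add-role-to-instance-profile --instance-profile-name ddb-reader-profile --role-name ddb-reader
    aws ec2 run-instances --image-id <ami-id> --iam-instance-profile Name=ddb-reader-profile

Applications on the instance then pick up temporary credentials from the instance metadata service, with no stored access key.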

Reference16: IAM Roles for EC2

Question 17: When signing REST/Query requests, for additional security, you should transmit your requests using Secure Sockets Layer (SSL) by using ____.

A) HTTP
B) Internet Protocol Security(IPsec)
C) TLS (Transport Layer Security)
D) HTTPS

ANSWER17:

D

Notes/Hint17: 

REST/Query requests should use HTTPS.

Reference17: Rest API

Question 18: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the followings is True with regards to Envelope Encryption?

A) Data is encrypted by an encrypted Data key, which is further encrypted using an encrypted Master Key.
B) Data is encrypted by a plaintext Data key, which is further encrypted using an encrypted Master Key.
C) Data is encrypted by an encrypted Data key, which is further encrypted using a plaintext Master Key.
D) Data is encrypted by a plaintext Data key, which is further encrypted using a plaintext Master Key.

ANSWER18:

D

Notes/Hint18:

With Envelope Encryption, unencrypted data is encrypted using a plaintext Data key. The Data key is then itself encrypted using the plaintext Master key, which is securely stored in AWS KMS and known as a Customer Master Key (CMK).
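Envelope encryption can be seen in action with the CLI; the key alias here is a placeholder:

    aws kms generate-data-key --key-id alias/my-master-key --key-spec AES_256

The response contains a Plaintext data key (used to encrypt the data locally, then discarded) and a CiphertextBlob (the same data key encrypted under the master key, stored alongside the data).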

Reference18: KMS

Question 19: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?

A) Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
B) Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access.
C) Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
D) Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.

ANSWER19:

C

Notes/Hint19: 

With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be used in this way.

Reference19: About Web Identity Federation

Question 20: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?

A) Cognito Data
B) Cognito Events
C) Cognito Streams
D) Cognito Callbacks

ANSWER20:

C

Notes/Hint20: 

Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can now configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams

Reference20: Cognito Streams

Question 21: Which of the following statements are correct in relation to KMS? (Choose 2)
A) KMS Encryption keys are regional
B) You cannot export your customer master key
C) You can export your customer master key.
D) KMS encryption keys are global.

ANSWER21:

A and B

Notes/Hint21:

AWS Key Management Service FAQs: You cannot export your customer master key, KMS Encryption keys are regional

Reference21: AWS Key Management Service FAQs

Question 22: Which of the following statements are correct? (Choose 2)

A) The Customer Master Key is used to encrypt and decrypt the Envelope Key or Data Key
B) The Envelope Key or Data Key is used to encrypt and decrypt plain text files.
C) The envelope Key or Data Key is used to encrypt and decrypt the Customer Master Key.
D) The Customer MasterKey is used to encrypt and decrypt plain text files.

ANSWER22:

A and B

Notes/Hint22:

AWS Key Management Service Concepts: The Customer Master Key is used to encrypt and decrypt the Envelope Key or Data Key, The Envelope Key or Data Key is used to encrypt and decrypt plain text files.

Reference22: KMS

Question 23: Which of the following is an encrypted key used by KMS to encrypt your data?
A) Customer Managed Key
B) Encryption Key
C) Envelope Key
D) Customer Master Key

ANSWER23:

C

Notes/Hint23:

Your Data key, also known as the Envelope key, is encrypted using the master key. This approach is known as Envelope encryption. Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.

Reference23: Envelope encryption

Question 24: Which command can you use to encrypt a plain text file using CMK?

A) aws kms-encrypt
B) aws iam encrypt
C) aws kms encrypt
D) aws encrypt

ANSWER24:

C

Notes/Hint24:

aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext fileb://ExamplePlaintextFile --output text --query CiphertextBlob > C:\Temp\ExampleEncryptedFile.base64

Reference24: AWS CLI Encrypt

Question 25: If an EC2 instance uses an instance role, key rotation is automatic and handled by __.

A) A script containing a valid IAM username and password stored on the EC2 instance.
B) ssh-keygen on the EC2 instance
C) The EC2 service
D) IAM/STS

ANSWER25:

D

Notes/Hint25:

Instance role key rotation is handled by IAM/STS.

Reference25: IAM/STS

Back to Top

II- SOURCES:

0- Djamga Cloud Security Playlist on Youtube:

1- Developer Certified Exam Prep Pro App

2- Prepare for Your AWS Certification Exam

Back to Top

III-

1- Security Key Terms:

    • Cryptography:  Practice and study of techniques for secure communication in the presence of third parties called adversaries.
    • Hacking: catch-all term for any type of misuse of a computer to break the security of another computing system to steal data, corrupt systems or files, commandeer the environment or disrupt data-related activities in any way.
    • Cyberwarfare: Use of technology to attack a nation, causing comparable harm to actual warfare. There is significant debate among experts regarding the definition of cyberwarfare, and even whether such a thing exists.
    • Penetration testing: Colloquially known as a pen test, pentest or ethical hacking, is an authorized simulated cyberattack on a computer system, performed to evaluate the security of the system. Not to be confused with a vulnerability assessment.
    • Malware: Any software intentionally designed to cause damage to a computer, server, client, or computer network. A wide variety of malware types exist, including computer viruses, worms, Trojan horses, ransomware, spyware, adware, rogue software, and scareware.
    • Malware analysis tools: Any.Run – malware hunting with live access to the heart of an incident: https://any.run/ ; VirusTotal – analyze suspicious files and URLs to detect types of malware and automatically share them with the security community: https://www.virustotal.com/gui/
    • VPN: A virtual private network (VPN) extends a private network across a public network and enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network. Applications running across a VPN may therefore benefit from the functionality, security, and management of the private network. Encryption is a common, although not an inherent, part of a VPN connection.
    • Antivirus: Antivirus software, or anti-virus software (abbreviated to AV software), also known as anti-malware, is a computer program used to prevent, detect, and remove malware.
    • DDoS: A distributed denial-of-service (DDoS) attack is one of the most powerful weapons on the internet. When you hear about a website being “brought down by hackers,” it generally means it has become a victim of a DDoS attack.
    • Fraud Detection: Set of activities undertaken to prevent money or property from being obtained through false pretenses. Fraud detection is applied to many industries such as banking or insurance. In banking, fraud may include forging checks or using stolen credit cards.
    • Spyware: Spyware describes software with malicious behavior that aims to gather information about a person or organization and send such information to another entity in a way that harms the user; for example by violating their privacy or endangering their device's security.
    • Spoofing: Disguising a communication from an unknown source as being from a known, trusted source
    • Pharming: Malicious websites that look legitimate and are used to gather usernames and passwords.
    • Catfishing: Creating a fake profile for fraudulent or deceptive purposes
    • SSL: Stands for secure sockets layer. Protocol for web browsers and servers that allows for the authentication, encryption and decryption of data sent over the Internet.
    • Phishing emails: Disguised as trustworthy entity to lure someone into providing sensitive information
    • Intrusion detection System: Device or software application that monitors a network or systems for malicious activity or policy violations. Any intrusion activity or violation is typically reported either to an administrator or collected centrally using a security information and event management system.
    • Encryption: Encryption is the method by which information is converted into secret code that hides the information's true meaning. The science of encrypting and decrypting information is called cryptography. In computing, unencrypted data is also known as plaintext, and encrypted data is called ciphertext.
    • MFA: Multi-factor authentication (MFA) is defined as a security mechanism that requires an individual to provide two or more credentials in order to authenticate their identity. In IT, these credentials take the form of passwords, hardware tokens, numerical codes, biometrics, time, and location.
    • Vulnerabilities: A vulnerability is a hole or a weakness in an application, which can be a design flaw or an implementation bug, that allows an attacker to cause harm to the stakeholders of the application. Stakeholders include the application owner, application users, and other entities that rely on the application.
    • SQL injections: SQL injection is a code injection technique, used to attack data-driven applications, in which malicious SQL statements are inserted into an entry field for execution.
    • Cyber attacks: In computers and computer networks an attack is any attempt to expose, alter, disable, destroy, steal or gain unauthorized access to or make unauthorized use of an asset.
    • Confidentiality: Confidentiality involves a set of rules or a promise usually executed through confidentiality agreements that limits access or places restrictions on certain types of information.
    • Secure channel: In cryptography, a secure channel is a way of transferring data that is resistant to overhearing and tampering. A confidential channel is a way of transferring data that is resistant to overhearing, but not necessarily resistant to tampering.
    • Tunneling: Communications protocol that allows for the movement of data from one network to another. It involves allowing private network communications to be sent across a public network through a process called encapsulation.
    • SSH: Secure Shell is a cryptographic network protocol for operating network services securely over an unsecured network. Typical applications include remote command-line, login, and remote command execution, but any network service can be secured with SSH.
    • SSL Certificates: SSL certificates are what enable websites to move from HTTP to HTTPS, which is more secure. An SSL certificate is a data file hosted in a website's origin server. SSL certificates make SSL/TLS encryption possible, and they contain the website's public key and the website's identity, along with related information.
    • Phishing: Phishing is a cybercrime in which a target or targets are contacted by email, telephone or text message by someone posing as a legitimate institution to lure individuals into providing sensitive data such as personally identifiable information, banking and credit card details, and passwords.
    • Cybercrime: Cybercrime, or computer-oriented crime, is a crime that involves a computer and a network. The computer may have been used in the commission of a crime, or it may be the target. Cybercrime may threaten a person, company or a nation's security and financial health.
    • Backdoor: A backdoor is a means to access a computer system or encrypted data that bypasses the system's customary security mechanisms. A developer may create a backdoor so that an application or operating system can be accessed for troubleshooting or other purposes.
    • Salt and Hash: A cryptographic salt is made up of random bits added to each password instance before its hashing. Salts create unique passwords even in the instance of two users choosing the same passwords. Salts help us mitigate rainbow table attacks by forcing attackers to re-compute them using the salts.
    • Password: A password, sometimes called a passcode, is a memorized secret, typically a string of characters, usually used to confirm the identity of a user. Using the terminology of the NIST Digital Identity Guidelines, the secret is memorized by a party called the claimant while the party verifying the identity of the claimant is called the verifier. When the claimant successfully demonstrates knowledge of the password to the verifier through an established authentication protocol, the verifier is able to infer the claimant's identity.
    • Fingerprint: fingerprint is an impression left by the friction ridges of a human finger. The recovery of partial fingerprints from a crime scene is an important method of forensic science. Moisture and grease on a finger result in fingerprints on surfaces such as glass or metal.
    • Facial recognition: Facial recognition can be more convenient for a person than fingerprint detection, since it frees them from having to place a thumb or index finger on a particular spot on their mobile phone. A user just has to bring their phone level with their eye.
    • Asymmetric key ciphers versus symmetric key ciphers (difference between symmetric and asymmetric encryption): The basic difference between these two types of encryption is that symmetric encryption uses one key for both encryption and decryption, while asymmetric encryption uses a public key for encryption and a private key for decryption.
    • Decryption: The conversion of encrypted data into its original form is called decryption. It is generally the reverse process of encryption. It decodes the encrypted information so that only an authorized user can decrypt the data, because decryption requires a secret key or password.
    • Algorithms: Finite sequence of well-defined, computer-implementable instructions, typically to solve a class of problems or to perform a computation.
    • DFIR: Digital Forensics and Incident Response: Multidisciplinary profession that focuses on identifying, investigating, and remediating computer network exploitation. This can take varied forms and involves a wide variety of skills, kinds of attackers, and kinds of targets.
    • OTP: One-Time Password: A one-time password, also known as a one-time PIN or dynamic password, is a password that is valid for only one login session or transaction, on a computer system or other digital device.
    • Proxy Server and Reverse Proxy Server: A proxy server is a go-between or intermediary server that forwards requests for content from multiple clients to different servers across the Internet. A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server.

Back to Top

V- Cybersecurity Certification:  

cybersecurity certification roadmap (image)

Back to Top

VI-

Wireshark Cheat Sheet (image)

Back to Top

VII- HACKING TOOLS CHEAT SHEET:

Hacking Tools Cheat Sheet (image)

Back to Top

X-

  • Exploit Database – The Exploit Database is maintained by Offensive Security, an information security training company that provides various Information Security Certifications as well as high-end penetration testing services. https://www.exploit-db.com/

Back to Top

XI-

CYBERSECURITY NEWS

  • Krebs On Security In depth security news and investigation https://krebsonsecurity.com/
  • Dark Reading Cyber security's comprehensive news site is now an online community for security professionals. https://www.darkreading.com/
  • The Hacker News – The Hacker News (THN) is a leading, trusted, widely-acknowledged dedicated cybersecurity news platform, attracting over 8 million monthly readers including IT professionals, researchers, hackers, technologists, and enthusiasts. https://thehackernews.com
  • SecuriTeam – A free and independent source of vulnerability information. https://securiteam.com/
  • SANS NewsBites – “A semiweekly high-level executive summary of the most important news articles that have been published on computer security during the last week. Each news item is very briefly summarized and includes a reference on the web for detailed information, if possible.” Published for free on Tuesdays and Fridays. https://www.sans.org/newsletters/newsbites

Back to Top

XII-

CYBERSECURITY YOUTUBE CHANNELS

YouTube Channels

This list was originally forked/curated from https://wportal.xyz/collection/cybersec-yt1 on 7/29/2020. Attribution and appreciation to d4rckh.

Back to Top

XIII-

CYBERSECURITY PODCASTS:

Podcasts

  • Risky Business Published weekly, the Risky Business podcast features news and in-depth commentary from security industry luminaries. Hosted by award-winning journalist Patrick Gray, Risky Business has become a must-listen digest for information security professionals. https://risky.biz/
  • Pauls Security Weekly This show features interviews with folks in the security community; technical segments, which are just that, very technical; and security news, which is an open discussion forum for the hosts to express their opinions about the latest security headlines, breaches, new exploits and vulnerabilities, “not” politics, “cyber” policies and more. https://securityweekly.com/category-shows/paul-security-weekly/
  • Security Now – Steve Gibson, the man who coined the term spyware and created the first anti-spyware program, creator of Spinrite and ShieldsUP, discusses the hot topics in security today with Leo Laporte. https://twit.tv/shows/security-now
  • Daily Information Security Podcast (“StormCast”) Stormcasts are daily 5-10 minute information security threat updates. The podcast is produced each work day, and typically released late in the day to be ready for your morning commute. https://isc.sans.edu/podcast.html
  • ShadowTalk Threat Intelligence Podcast by Digital Shadow_. The weekly podcast highlights key findings of primary-source research our Intelligence Team is conducting, along with guest speakers discussing the latest threat actors, campaigns, security events and industry news. https://resources.digitalshadows.com/threat-intelligence-podcast-shadowtalk
  • Don't Panic – The Unit 42 Podcast Don't Panic! is the official podcast from Unit 42 at Palo Alto Networks. We find the big issues that are frustrating cyber security practitioners and help simplify them so they don't need to panic. https://unit42.libsyn.com/
  • Recorded Future Recorded Future takes you inside the world of cyber threat intelligence. We’re sharing stories from the trenches and the operations floor as well as giving you the skinny on established and emerging adversaries. We also talk current events, technical tradecraft, and offer up insights on the big picture issues in our industry. https://www.recordedfuture.com/resources/podcast/
  • The Cybrary Podcast Listen in to the Cybrary Podcast where we discuss a range of topics from DevSecOps and Ransomware attacks to diversity and how to retain talent. Entrepreneurs at all stages of their startup companies join us to share their stories and experience, including how to get funding, hiring the best talent, driving sales, and choosing where to base your business. https://www.cybrary.it/info/cybrary-podcast/
  • Cyber Life The Cyber Life podcast is for cyber security (InfoSec) professionals, people trying to break into the industry, or business owners looking to learn how to secure their data. We will talk about many things, like how to get jobs, cover breakdowns of hot topics, and have special guest interviews with the men and women “in the trenches” of the industry. https://redcircle.com/shows/cyber-life
  • Career Notes Cybersecurity professionals share their personal career journeys and offer tips and advice in this brief, weekly podcast from The CyberWire. https://www.thecyberwire.com/podcasts/career-notes

The podcasts below were added from https://infosec-conferences.com/cybersecurity-podcasts/:

  • Down the Security Rabbithole http://podcast.wh1t3rabbit.net/ Down the Security Rabbithole is hosted by Rafal Los and James Jardine who discuss, by means of interviewing or news analysis, everything about Cybersecurity which includes Cybercrime, Cyber Law, Cyber Risk, Enterprise Risk & Security and many more. If you want to hear issues that are relevant to your organization, subscribe and tune-in to this podcast.
  • The Privacy, Security, & OSINT Show https://podcasts.apple.com/us/podcast/the-privacy-security-osint-show/id1165843330 The Privacy, Security, & OSINT Show, hosted by Michael Bazzell, is your weekly dose of digital security, privacy, and Open Source Intelligence (OSINT) opinion and news. This podcast will help listeners learn some ideas on how to stay secure from cyber-attacks and help them become “digitally invisible”.
  • Defensive Security Podcast https://defensivesecurity.org/ Hosted by Andrew Kalat (@lerg) and Jerry Bell (@maliciouslink), the Defensive Security Podcasts aims to look/discuss the latest security news happening around the world and pick out the lessons that can be applied to keeping organizations secured. As of today, they have more than 200 episodes and some of the topics discussed include Forensics, Penetration Testing, Incident Response, Malware Analysis, Vulnerabilities and many more.
  • Darknet Diaries https://darknetdiaries.com/episode/ Darknet Diaries Podcast is hosted and produced by Jack Rhysider that discuss topics related to information security. It also features some true stories from hackers who attacked or have been attacked. If you’re a fan of the show, you might consider buying some of their souvenirs here (https://shop.darknetdiaries.com/).
  • Brakeing Down Security https://www.brakeingsecurity.com/ Brakeing Down Security started in 2014 and is hosted by Bryan Brake, Brian Boettcher, and Amanda Berlin. This podcast discusses everything about the Cybersecurity world, Compliance, Privacy, and Regulatory issues that arise in today’s organizations. The hosts will teach concepts that Information Security Professionals need to know and discuss topics that will refresh the memories of seasoned veterans.
  • Open Source Security Podcast https://www.opensourcesecuritypodcast.com/ Open Source Security Podcast is a podcast that discusses security with an open-source slant. The show started in 2016 and is hosted by Josh Bressers and Kurt Siefried. As of this writing, they now posted around 190+ podcasts
  • Cyber Motherboard https://podcasts.apple.com/us/podcast/cyber/id1441708044 Ben Makuch is the host of the podcast CYBER and weekly talks to Motherboard reporters Lorenzo Franceschi-Bicchierai and Joseph Cox. They tackle topics about famous hackers and researchers about the biggest news in cybersecurity. The Cyber- stuff gets complicated really fast, but Motherboard spends its time fixed in the infosec world so we don’t have to.
  • Hak5 https://shop.hak5.org/pages/videos Hak5 is a brand that is created by a group of security professionals, hardcore gamers and “IT ninjas”. Their podcast, which is mostly uploaded on YouTube discusses everything from open-source software to penetration testing and network infrastructure. Their channel currently has 590,000 subscribers and is one of the most viewed shows when you want to learn something about security networks.
  • Threatpost Podcast Series https://threatpost.com/category/podcasts/ Threatpost is an independent news site and a leading source of information about IT and business security for hundreds of thousands of professionals worldwide. Its award-winning editorial team produces unique and high-impact content including security news, videos, feature reports and more, with global editorial activities driven by industry-leading journalist Tom Spring, editor-in-chief.
  • CISO-Security Vendor Relationship Podcast https://cisoseries.com Co-hosted by David Spark, creator of the CISO/Security Vendor Relationship Series, and Mike Johnson, this weekly 30-minute program challenges the co-hosts, guests, and listeners to critique and share true stories. The podcast aims to enlighten and educate listeners on improving security buyer and seller relationships.
  • Getting Into Infosec Podcast Stories of how Infosec and Cybersecurity pros got jobs in the field so you can be inspired, motivated, and educated on your journey. – https://gettingintoinfosec.com/
  • Unsupervised Learning Weekly podcasts and biweekly newsletters as a curated summary intersection of security, technology, and humans, or a standalone idea to provoke thought, by Daniel Miessler. https://danielmiessler.com/podcast/

Back to Top

XIV-

SECURITY BOOKS:

Back to Top

XV-

CYBERSECURITY TRAINING:

Training

  • WebSecurity Academy Free online web security training from the creators of Burp Suite https://portswigger.net/web-security
  • Mosse Cyber Security Institute Introduction to cybersecurity free certification with 100+ hours of training, no expiry/renewals, https://www.mosse-institute.com/certifications/mics-introduction-to-cyber-security.html
  • BugCrowd University Free bug hunting resources and methodologies in form of webinars, education and training. https://www.bugcrowd.com/hackers/bugcrowd-university/
  • Certified Network Security Specialist Certification and training; use coupon code #StaySafeHome during checkout to claim your free access (offer valid until 31/08/2020; £500.00 value) https://www.icsi.co.uk/courses/icsi-cnss-certified-network-security-specialist-covid-19
  • Metasploit Unleashed Most complete and in-depth Metasploit guide available, with contributions from the authors of the No Starch Press Metasploit Book. https://www.offensive-security.com/metasploit-unleashed/
  • AWS Cloud Certified Get skills in AWS to be more marketable. Training is quality and free. https://www.youtube.com/watch?v=3hLmDS179YE Have to create an AWS account, Exam is $100.
  • SANS Faculty Free Tools List of OSS developed by SANS staff. https://www.sans.org/media/free/free-faculty-tools.pdf?msc=sans-free-lp
  • “Using ATT&CK for Cyber Threat Intelligence Training” – 4 hour training. The goal of this training is for students to understand how to apply ATT&CK to cyber threat intelligence: https://attack.mitre.org/resources/training/cti/
  • Coursera -“Coursera Together: Free online learning during COVID-19” Lots of different types of free training. https://blog.coursera.org/coursera-together-free-online-learning-during-covid-19/
  • Fortinet Security Appliance Training Free access to the FortiGate Essentials Training Course and Network Security Expert courses 1 and 2 https://www.fortinet.com/training/cybersecurity-professionals.html
  • Chief Information Security Officer (CISO) Workshop Training – The Chief Information Security Office (CISO) workshop contains a collection of security learnings, principles, and recommendations for modernizing security in your organization. This training workshop is a combination of experiences from Microsoft security teams and learnings from customers. – https://docs.microsoft.com/en-us/security/ciso-workshop/ciso-workshop
  • CLARK Center Plan C – Free cybersecurity curriculum that is primarily video-based or provide online assignments that can be easily integrated into a virtual learning environments https://clark.center/home
  • Hack.me is a FREE, community based project powered by eLearnSecurity. The community can build, host and share vulnerable web application code for educational and research purposes. It aims to be the largest collection of “runnable” vulnerable web applications, code samples and CMS's online. The platform is available without any restriction to any party interested in Web Application Security. https://hack.me/
  • Hacker101 – Free classes for web security – https://www.hacker101.com/
  • ElasticStack – Free on-demand Elastic Stack, observability, and security courses. https://training.elastic.co/learn-from-home
  • Hoppers Roppers – Community built around a series of free courses that provide training to beginners in the security field. https://www.hoppersroppers.org/training.html
  • IBM Security Learning Academy Free technical training for IBM Security products. https://www.securitylearningacademy.com/
  • M.E. Kabay Free industry courses and course materials for students, teachers and others are welcome to use for free courses and lectures. http://www.mekabay.com/courses/index.htm
  • Open P-TECH Free digital learning on the tech skills of tomorrow. https://www.ptech.org/open-p-tech/
  • Udemy – Online learning course platform “collection from the free courses in our learning marketplace” https://www.udemy.com/courses/free/
  • Enroll Now Free: PCAP Programming Essentials in Python https://www.netacad.com/courses/programming/pcap-programming-essentials-python Python is the very versatile, object-oriented programming language used by startups and tech giants, Google, Facebook, Dropbox and IBM. Python is also recommended for aspiring young developers who are interested in pursuing careers in Security, Networking and Internet-of-Things. Once you complete this course, you are ready to take the PCAP – Certified Associate in Python programming. No prior knowledge of programming is required.
  • Packt Web Development Course Web Development Get to grips with the fundamentals of the modern web Unlock one year of free online access. https://courses.packtpub.com/pages/free?fbclid=IwAR1FtKQcYK8ycCmBMXaBGvW_7SgPVDMKMaRVwXYcSbiwvMfp75gazxRZlzY
  • Stanford University Webinar – Hacked! Security Lessons from Big Name Breaches 50 minute cyber lecture from Stanford.You Will Learn: — The root cause of key breaches and how to prevent them; How to measure your organization’s external security posture; How the attacker lifecycle should influence the way you allocate resources https://www.youtube.com/watch?v=V9agUAz0DwI
  • Stanford University Webinar – Hash, Hack, Code: Emerging Trends in Cyber Security Join Professor Dan Boneh as he shares new approaches to these emerging trends and dives deeper into how you can protect networks and prevent harmful viruses and threats. 50 minute cyber lecture from Stanford. https://www.youtube.com/watch?v=544rhbcDtc8
  • Kill Chain: The Cyber War on America's Elections (Documentary) (Referenced at GRIMMCON), In advance of the 2020 Presidential Election, Kill Chain: The Cyber War on America’s Elections takes a deep dive into the weaknesses of today’s election technology, an issue that is little understood by the public or even lawmakers. https://www.hbo.com/documentaries/kill-chain-the-cyber-war-on-americas-elections
  • Intro to Cybersecurity Course (15 hours) Learn how to protect your personal data and privacy online and in social media, and why more and more IT jobs require cybersecurity awareness and understanding. Receive a certificate of completion. https://www.netacad.com/portal/web/self-enroll/c/course-1003729
  • Cybersecurity Essentials (30 hours) Foundational knowledge and essential skills for all cybersecurity domains, including info security, systems sec, network sec, ethics and laws, and defense and mitigation techniques used in protecting businesses. https://www.netacad.com/portal/web/self-enroll/c/course-1003733
  • Pluralsight and Microsoft Partnership to help you become an expert in Azure. With skill assessments and over 200+ courses, 40+ Skill IQs and 8 Role IQs, you can focus your time on understanding your strengths and skill gaps and learn Azure as quickly as possible. https://www.pluralsight.com/partners/microsoft/azure
  • Blackhat Webcast Series Monthly webcast of varying cyber topics. I will post specific ones in the training section below sometimes, but this is worth bookmarking and checking back. They always have top tier speakers on relevant, current topics. https://www.blackhat.com/html/webcast/webcast-home.html
  • Federal Virtual Training Environment – US Govt sponsored free courses. There are 6 available, no login required. They are 101 Coding for the Public, 101 Critical Infrastructure Protection for the Public, Cryptocurrency for Law Enforcement for the Public, Cyber Supply Chain Risk Management for the Public, 101 Reverse Engineering for the Public, Fundamentals of Cyber Risk Management. https://fedvte.usalearning.gov/public_fedvte.php
  • Harrisburg University CyberSecurity Collection of 18 curated talks. Scroll down to CYBER SECURITY section. You will see there are 4 categories Resource Sharing, Tools & Techniques, Red Team (Offensive Security) and Blue Teaming (Defensive Security). Lot of content in here; something for everyone. https://professionaled.harrisburgu.edu/online-content/
  • OnRamp 101-Level ICS Security Workshop Starts this 4/28. 10 videos, Q&A / discussion, bonus audio, great links. Get up to speed fast on ICS security. It runs for 5 weeks. 2 videos per week. Then we keep it open for another 3 weeks for 8 in total. https://onramp-3.s4xevents.com
  • HackXOR WebApp CTF Hackxor is a realistic web application hacking game, designed to help players of all abilities develop their skills. All the missions are based on real vulnerabilities I've personally found while doing pentests, bug bounty hunting, and research. https://hackxor.net/
  • Suricata Training 5-part training module using a simulation as a backdrop to teach how to use Suricata. https://rangeforce.com/resource/suricata-challenge-reg/
  • flAWS System Through a series of levels you'll learn about common mistakes and gotchas when using Amazon Web Services (AWS). Multiple levels, “Buckets” of fun. http://flaws.cloud/
  • Stanford CS 253 Web Security A free course from Stanford providing a comprehensive overview of web security. The course begins with an introduction to the fundamentals of web security and proceeds to discuss the most common methods for web attacks and their countermeasures. The course includes video lectures, slides, and links to online reading assignments. https://web.stanford.edu/class/cs253
  • Linux Journey A free, handy guide for learning Linux. Coverage begins with the fundamentals of command line navigation and basic text manipulation. It then extends to more advanced topics, such as file systems and networking. The site is well organized and includes many examples along with code snippets. Exercises and quizzes are provided as well. https://linuxjourney.com
  • Ryan's Tutorials A collection of free, introductory tutorials on several technology topics including: Linux command line, Bash scripting, creating and styling webpages with HTML and CSS, counting and converting between different number systems, and writing regular expressions. https://ryanstutorials.net
  • The Ultimate List of SANS Cheat Sheets Massive collection of free cybersecurity cheat sheets for quick reference (login with free SANS account required for some penetration testing resources). https://www.sans.org/blog/the-ultimate-list-of-sans-cheat-sheets/
  • CYBER INTELLIGENCE ANALYTICS AND OPERATIONS Learn:The ins and outs of all stages of the intelligence cycle from collection to analysis from seasoned intel professionals. How to employ threat intelligence to conduct comprehensive defense strategies to mitigate potential compromise. How to use TI to respond to and minimize impact of cyber incidents. How to generate comprehensive and actionable reports to communicate gaps in defenses and intelligence findings to decision makers. https://www.shadowscape.io/cyber-intelligence-analytics-operat
  • Linux Command Line for Beginners 25 hours of training – In this course, you’ll learn from one of Fullstack’s top instructors, Corey Greenwald, as he guides you through learning the basics of the command line through short, digestible video lectures. Then you’ll use Fullstack’s CyberLab platform to hone your new technical skills while working through a Capture the Flag game, a special kind of cybersecurity game designed to challenge participants to solve computer security problems by solving puzzles. Finally, through a series of carefully curated resources, we’ll introduce you to some important cybersecurity topics so that you can understand some of the common language, concepts and tools used in the industry. https://prep.fullstackacademy.com/
  • Hacking 101 6 hours of free training – First, you'll take a tour of the world and watch videos of hackers in action across various platforms (including computers, smartphones, and the power grid). You may be shocked to learn what techniques the good guys are using to fight the bad guys (and which side is winning). Then you'll learn what it's like to work in this world, as we show you the different career paths open to you and the (significant) income you could make as a cybersecurity professional. https://cyber.fullstackacademy.com/prepare/hacking-101
  • Choose Your Own Cyber Adventure Series: Entry Level Cyber Jobs Explained YouTube Playlist (videos from my channel #simplyCyber) This playlist is a collection of various roles within the information security field, mostly entry level, so folks can understand what different opportunities are out there. https://www.youtube.com/playlist?list=PL4Q-ttyNIRAqog96mt8C8lKWzTjW6f38F
  • NETINSTRUCT.COM Free Cybersecurity, IT and Leadership Courses – Includes OS and networking basics. Critical to any Cyber job. https://netinstruct.com/courses
  • HackerSploit – HackerSploit is the leading provider of free and open-source Infosec and cybersecurity training. https://hackersploit.org/
  • Resources for getting started (Free and Paid):

    Practice
    • DetectionLab (Free)
    • LetsDefend.io (Free/Paid)
    • DetectionLabELK (Free)

    Log Analysis

    Network Monitoring

    Linux Distributions

    Memory Analysis Tools

    Professional Training

    • FOR578: Cyber Threat Intelligence (Paid)
    • SEC511: Continuous Monitoring & Security Operations (Paid)
    • SEC445: SIEM Design & Implementation (Paid)
    • AEGIS Certification (Paid)

    Conferences

Back to Top

XVI-

CYBERSECURITY COURSES: (Multi-week w/Enrollment)

College Courses

  • Computer Science courses with video lectures Intent of this list is to act as Online bookmarks/lookup table for freely available online video courses. Focus would be to keep the list concise so that it is easy to browse. It would be easier to skim through 15 page list, find the course and start learning than having to read 60 pages of text. If you are student or from non-CS background, please try few courses to decide for yourself as to which course suits your learning curve best. https://github.com/Developer-Y/cs-video-courses?utm_campaign=meetedgar&utm_medium=social&utm_source=meetedgar.com
  • Cryptography I -offered by Stanford University – Rolling enrollment – Cryptography is an indispensable tool for protecting information in computer systems. In this course you will learn the inner workings of cryptographic systems and how to correctly use them in real-world applications. The course begins with a detailed discussion of how two parties who have a shared secret key can communicate securely when a powerful adversary eavesdrops and tampers with traffic. We will examine many deployed protocols and analyze mistakes in existing systems. The second half of the course discusses public-key techniques that let two parties generate a shared secret key. https://www.coursera.org/learn/crypto
  • Software Security Rolling enrollment -offered by University of Maryland, College Park via Coursera – This course we will explore the foundations of software security. We will consider important software vulnerabilities and attacks that exploit them — such as buffer overflows, SQL injection, and session hijacking — and we will consider defenses that prevent or mitigate these attacks, including advanced testing and program analysis techniques. Importantly, we take a “build security in” mentality, considering techniques at each phase of the development cycle that can be used to strengthen the security of software systems. https://www.coursera.org/learn/software-security
  • Intro to Information Security Georgia Institute of Technology via Udacity – Rolling Enrollment. This course provides a one-semester overview of information security. It is designed to help students with prior computer and programming knowledge — both undergraduate and graduate — understand this important priority in society today. Offered at Georgia Tech as CS 6035 https://www.udacity.com/course/intro-to-information-security–ud459
  • Cyber-Physical Systems Security Georgia Institute of Technology via Udacity – This course provides an introduction to security issues relating to various cyber-physical systems including industrial control systems and those considered critical infrastructure systems. 16 week course – Offered at Georgia Tech as CS 8803 https://www.udacity.com/course/cyber-physical-systems-security–ud279
  • Finding Your Cybersecurity Career Path – University of Washington via edX – 4 weeks long – self paced – In this course, you will focus on the pathways to cybersecurity career success. You will determine your own incoming skills, talent, and deep interests to apply toward a meaningful and informed exploration of 32 Digital Pathways of Cybersecurity. https://www.edx.org/course/finding-your-cybersecurity-career-path
  • Building a Cybersecurity Toolkit – University of Washington via edX – 4 weeks self-paced The purpose of this course is to give learners insight into these type of characteristics and skills needed for cybersecurity jobs and to provide a realistic outlook on what they really need to add to their “toolkits” – a set of skills that is constantly evolving, not all technical, but fundamentally rooted in problem-solving. https://www.edx.org/course/building-a-cybersecurity-toolkit
  • Cybersecurity: The CISO's View – University of Washington via edX – 4 weeks, self-paced – This course delves into the role the CISO plays in cybersecurity operations. Throughout the lessons, learners will explore answers to the following questions: How does cybersecurity work across industries? What is the professionals' point of view? How do we keep information secure? https://www.edx.org/course/cybersecurity-the-cisos-view
  • Introduction to Cybersecurity – University of Washington via edX – In this course, you will gain an overview of the cybersecurity landscape as well as national (USA) and international perspectives on the field. We will cover the legal environment that impacts cybersecurity as well as the predominant threat actors. https://www.edx.org/course/introduction-to-cybersecurity
  • Cyber Attack Countermeasures – New York University (NYU) via Coursera – This course introduces the basics of cyber defense, starting with foundational models such as Bell-LaPadula and information-flow frameworks (see the Bell-LaPadula sketch after this list). These underlying policy-enforcement mechanisms help introduce basic functional protections, starting with authentication methods. Learners will be introduced to a series of different authentication solutions and protocols, including RSA SecurID and Kerberos, in the context of a canonical schema. https://www.coursera.org/learn/cyber-attack-countermeasures
  • Introduction to Cyber Attacks – New York University (NYU) via Coursera – This course provides learners with a baseline understanding of common cybersecurity threats, vulnerabilities, and risks. An overview of how basic cyber attacks are constructed and applied to real systems is also included. Examples include simple Unix kernel hacks, Internet worms, and Trojan horses in software utilities. Network attacks such as distributed denial of service (DDoS) and botnet attacks are also described and illustrated using real examples from the past couple of decades. https://www.coursera.org/learn/intro-cyber-attacks
  • Enterprise and Infrastructure Security – New York University (NYU) via Coursera – This course introduces a series of advanced and current topics in cybersecurity, many of which are especially relevant in modern enterprise and infrastructure settings. The basics of enterprise compliance frameworks are provided, with an introduction to NIST and PCI. Hybrid cloud architectures are shown to provide an opportunity to fix many of the security weaknesses in modern perimeter local area networks. https://www.coursera.org/learn/enterprise-infrastructure-security
  • Network Security – Georgia Institute of Technology via Udacity – This course provides an introduction to computer and network security. Students successfully completing this class will be able to evaluate work in academic and commercial security, and will have rudimentary skills in security research. The course begins with a tutorial on the basic elements of cryptography, cryptanalysis, and systems security, and continues by covering a number of seminal papers and monographs in a wide range of security areas. https://www.udacity.com/course/network-security--ud199
  • Real-Time Cyber Threat Detection and Mitigation – New York University (NYU) via Coursera – This course introduces real-time cybersecurity techniques and methods in the context of the TCP/IP protocol suite. Explanation of some basic TCP/IP security hacks is used to introduce the need for network security solutions such as stateless and stateful firewalls. Learners will be introduced to the techniques used to design and configure firewall solutions such as packet filters and proxies to protect enterprise assets (see the firewall sketch after this list). https://www.coursera.org/learn/real-time-cyber-threat-detection
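To make the key-exchange idea from the Cryptography I description concrete, here is a minimal Diffie-Hellman sketch in Python. The parameters are toy values chosen for readability; real protocols use vetted groups (e.g. RFC 3526) or elliptic curves:

```python
"""Toy Diffie-Hellman key exchange (illustrative only)."""
import secrets

p = 0xFFFFFFFB  # small public prime (far too small for real use)
g = 5           # public generator

# Each party picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1   # Alice's private key
b = secrets.randbelow(p - 2) + 1   # Bob's private key
A = pow(g, a, p)                   # Alice's public value
B = pow(g, b, p)                   # Bob's public value

# Both sides derive the same secret from the other's public value alone.
alice_secret = pow(B, a, p)        # (g^b)^a mod p
bob_secret = pow(A, b, p)          # (g^a)^b mod p
assert alice_secret == bob_secret
print(hex(alice_secret))
```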
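The Software Security course's point about SQL injection is easy to see in a few lines: building a query by string concatenation lets attacker input rewrite the query's logic, while parameter binding treats it as plain data. The table and payload below are hypothetical examples:

```python
"""String concatenation vs. parameter binding against SQL injection."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# VULNERABLE: the payload rewrites the WHERE clause and matches every row.
unsafe = f"SELECT role FROM users WHERE name = '{user_input}'"
print("unsafe:", conn.execute(unsafe).fetchall())   # -> [('admin',)]

# SAFE: the driver binds the value as data, so the payload matches nothing.
safe = "SELECT role FROM users WHERE name = ?"
print("safe:  ", conn.execute(safe, (user_input,)).fetchall())  # -> []
```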
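For the Bell-LaPadula model named in the Cyber Attack Countermeasures description, a minimal sketch of its two confidentiality rules is shown below, assuming a simple four-level hierarchy (the level names are illustrative): a subject may not read above its clearance ("no read up") and may not write below it ("no write down"):

```python
"""Minimal Bell-LaPadula rule check (illustrative levels)."""
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    # Simple security property: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # *-property: no write down (prevents leaking data to lower levels).
    return LEVELS[subject_level] <= LEVELS[object_level]

assert can_read("secret", "public") and not can_read("public", "secret")
assert can_write("confidential", "secret") and not can_write("secret", "public")
```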
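Finally, the stateless-versus-stateful firewall distinction from the Real-Time Cyber Threat Detection description can be sketched as follows. The packet fields and rules are simplified assumptions, not a real firewall implementation: a stateless filter judges each packet in isolation, while a stateful one also tracks which connections it has already permitted.

```python
"""Stateless packet filter vs. stateful connection tracking (sketch)."""
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    src: str
    dst: str
    dport: int
    syn: bool   # TCP SYN flag: set on connection setup
    ack: bool

def stateless_allow(p: Packet) -> bool:
    # Static rule, applied per packet: only allow web traffic.
    return p.dport in (80, 443)

class StatefulFirewall:
    def __init__(self):
        self.connections = set()  # (src, dst, dport) tuples seen so far

    def allow(self, p: Packet) -> bool:
        if not stateless_allow(p):
            return False
        key = (p.src, p.dst, p.dport)
        if p.syn and not p.ack:            # new connection attempt
            self.connections.add(key)
            return True
        return key in self.connections    # mid-stream packet must match state

fw = StatefulFirewall()
print(fw.allow(Packet("10.0.0.5", "1.2.3.4", 443, syn=True, ack=False)))   # True: new
print(fw.allow(Packet("10.0.0.5", "1.2.3.4", 443, syn=False, ack=True)))   # True: tracked
print(fw.allow(Packet("6.6.6.6", "10.0.0.5", 443, syn=False, ack=True)))   # False: no state
```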

XIX-

SANS Massive List of Cheat Sheets – curated from: https://www.sans.org/blog/the-ultimate-list-of-sans-cheat-sheets/

General IT Security

  • Windows and Linux Terminals & Command Lines – https://assets.contentstack.io/v3/assets/blt36c2e63521272fdc/bltea7de5267932e94b/5eb08aafcf88d36e47cf0644/Cheatsheet_SEC301-401_R7.pdf

Digital Forensics and Incident Response

  • SIFT Workstation Cheat Sheet