What is Problem Formulation in Machine Learning? Top 4 Examples of Problem Formulation in Machine Learning



Machine Learning (ML) is a field of Artificial Intelligence (AI) that enables computers to learn from data without being explicitly programmed. Machine learning algorithms build models based on sample data, known as “training data”, in order to make predictions or decisions rather than following rules written by humans. Machine learning is closely related to, and often overlaps with, computational statistics, a discipline that also focuses on prediction-making through the use of computers. Machine learning can be applied in a wide variety of domains, such as medical diagnosis, stock trading, robot control, manufacturing, and more.


The process of machine learning consists of several steps: first, a problem is formulated; then, data is collected; next, a model is selected or created; finally, the model is trained on the collected data and applied to new data. This process is often referred to as the “machine learning pipeline”. Problem formulation sits at the start of this pipeline: it consists of selecting or creating a suitable model for the task at hand and determining how to represent the collected data so that it can be used by the selected model. In other words, problem formulation is the process of taking a real-world problem and translating it into a format that can be solved by a machine learning algorithm.
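The pipeline just described can be sketched end to end in a few lines. This is a minimal illustration using scikit-learn; the synthetic dataset and the choice of logistic regression are stand-ins for whatever data and model a real project would use.

```python
# Minimal sketch of the machine learning pipeline described above.
# The data and model choice here are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Collect data (here generated synthetically in place of a real dataset)
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.25, random_state=0)

# Problem formulation: frame the task as binary classification, pick a model
model = LogisticRegression()

# Train the model on the collected data
model.fit(X_train, y_train)

# Apply the trained model to new data
predictions = model.predict(X_new)
print(predictions.shape)  # one predicted label per new data point
```

The same skeleton applies whether the formulation is classification, regression, or something else; only the model and the label representation change.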

There are many different types of machine learning problems, such as classification, regression, prediction and so on. The choice of which type of problem to formulate depends on the nature of the task at hand and the type of data available. For example, if we want to build a system that can automatically detect fraudulent credit card transactions, we would formulate a classification problem. On the other hand, if our goal is to predict the sale price of houses given information about their size, location and age, we would formulate a regression problem. In general, it is best to start with a simple problem formulation and then move on to more complex ones if needed.

Some common examples of problem formulations in machine learning are:
Classification: given an input data point (e.g., an image), predict its category label (e.g., dog vs cat).
Regression: given an input data point (e.g., size and location of a house), predict a continuous output value (e.g., sale price).
Prediction: given an input sequence (e.g., a series of past stock prices), predict the next value in the sequence (e.g., future stock price).
Anomaly detection: given an input data point (e.g., transaction details), decide whether it is normal or anomalous (i.e., fraudulent).
Recommendation: given information about users (e.g., age and gender) and items (e.g., books and movies), recommend items to users (e.g., suggest books for someone who likes romance novels).
Optimization: given a set of constraints (e.g., budget) and objectives (e.g., maximize profit), find the best solution (e.g., product mix).
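To make the first two formulations above concrete, here is a small hypothetical sketch (synthetic data, scikit-learn) showing how the same input feature can feed either a regression or a classification formulation:

```python
# Illustrative only: the same feature (house size) can back either a
# regression problem (predict price) or a classification problem
# (predict a binary "expensive" label). Data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
sqft = rng.uniform(500, 4000, size=200).reshape(-1, 1)
price = sqft[:, 0] * 200 + rng.normal(0, 10_000, size=200)

# Regression formulation: predict a continuous sale price
reg = LinearRegression().fit(sqft, price)

# Classification formulation: predict a category label (expensive vs. not)
labels = (price > 450_000).astype(int)
clf = LogisticRegression(max_iter=1000).fit(sqft, labels)

print(reg.predict([[2000]]))  # a continuous value
print(clf.predict([[2000]]))  # a class label, 0 or 1
```

The data can be identical in both cases; what changes is how the target is represented, which is exactly the decision problem formulation makes.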

Problem Formulation: What this pipeline phase entails and why it’s important

The problem formulation phase of the ML pipeline is critical, and it’s where everything begins. Typically, this phase is kicked off with a question of some kind. Examples of these kinds of questions include: Could cars really drive themselves? What additional product should we offer someone as they check out? How much storage will clients need from a data center at a given time?

The problem formulation phase starts by seeing a problem and thinking “what question, if I could answer it, would provide the most value to my business?” If I knew the next product a customer was going to buy, is that most valuable? If I knew what was going to be popular over the holidays, is that most valuable? If I better understood who my customers are, is that most valuable?

However, some problems are not so obvious. When sales drop, new competitors emerge, or there’s a big change to a company/team/org, it can be easy to say, “I see the problem!” But sometimes the problem isn’t so clear. Consider self-driving cars. How many people think to themselves, “driving cars is a huge problem”? Probably not many. In fact, there isn’t a problem in the traditional sense of the word but there is an opportunity. Creating self-driving cars is a huge opportunity. That doesn’t mean there isn’t a problem or challenge connected to that opportunity. How do you design a self-driving system? What data would you look at to inform the decisions you make? Will people purchase self-driving cars?


Part of the problem formulation phase includes seeing where there are opportunities to use machine learning.

In the following practice examples, you are presented with four different business scenarios. For each scenario, consider the following questions:

  1. Is machine learning appropriate for this problem, and why or why not?
  2. What is the ML problem if there is one, and what would a success metric look like?
  3. What kind of ML problem is this?
  4. Is the data appropriate?

The solutions given in this article are just one of many ways you could formulate each business problem.

I)  Amazon recently began advertising to its customers when they visit the company website. The Director in charge of the initiative wants the advertisements to be as tailored to the customer as possible. You will have access to all the data from the retail webpage, as well as all the customer data.

  1. ML is appropriate because of the scale, variety, and speed required. There are potentially thousands of ads and millions of customers that need to be served customized ads immediately as they arrive at the site.
  2. The problem is that ads that are not useful are both a wasted opportunity and a nuisance to customers, yet not serving ads at all is also a wasted opportunity. So how does Amazon serve the most relevant advertisements to its retail customers?
    1. Success would be the purchase of a product that was advertised.
  3. This is a supervised learning problem because we have a labeled data point, our success metric, which is the purchase of a product.
  4. This data is appropriate because it is both the retail webpage data as well as the customer data.

II) You’re a Senior Business Analyst at a social media company that focuses on streaming. Streamers use a combination of hashtags and predefined categories to be discoverable by your platform’s consumers. You ran an analysis on unique streamer counts by hashtags and categories over the last month and found that out of tens of thousands of streamers, almost all use only 40 hashtags and 10 categories despite innumerable hashtags and hundreds of categories. You presume the predefined categories don’t represent all the possibilities very well, and that streamers are simply picking the closest fit. You figure there are likely many categories and groupings of streamers that are not accounted for. So you collect a dataset that consists of all streamer profile descriptions (all text), all the historical chat information for each streamer, and all their videos that have been streamed.

  1. ML is appropriate because of the scale and variability.
  2. The problem is the content of streamers is not being represented by the existing categories. Success would be naturally grouping the streamers into categories based on content and seeing if those align with the hashtags and categories that are being commonly used.  If they do not, then the streamers are not being well represented and you can use these groupings to create new categories.
  3. There isn’t a specific outcome variable. There’s no target or label. So this is an unsupervised problem.
  4. The data is appropriate.

III) You’re a headphone manufacturer who sells directly to big and small electronic stores. As an attempt to increase competitive pricing, Store 1 and Store 2 decided to put together the pricing details for all headphone manufacturers and their products (about 350 products) and conduct daily releases of the data. You will have all the specs from each manufacturer and their product’s pricing. Your sales have recently been dropping so your first concern is whether there are competing products that are priced lower than your flagship product.

  1. ML is probably not necessary for this. You can just search the dataset to see which headphones are priced lower than the flagship, then compare their features and build quality.

IV) You’re a Senior Product Manager at a leading ridesharing company. You did some market research, collected customer feedback, and discovered that both customers and drivers are not happy with an app feature. This feature allows customers to place a pin exactly where they want to be picked up. The customers say drivers rarely stop at the pin location. Drivers say customers most often put the pin in a place they can’t stop. Your company has a relationship with the most used maps app for the driver’s navigation, so you leverage this existing relationship to get direct, backend access to their data. This includes latitude and longitude, visual photos of each lat/long, traffic delay details, and regulation data if available (i.e., no-parking zones, 3-minute parking zones, fire hydrants, etc.).

  1. ML is appropriate because of the scale and automation involved. It’s not feasible to drive everywhere and write down all the places that are ok for pickup. However, maybe we can predict whether a location is ok for pickup.
  2. The problem is drivers and customers are having poor experiences connecting for pickup, which is pushing customers away from the platform.
    1. Success would be properly identifying appropriate pickup locations so they can be integrated into the feature.
  3. This is a supervised learning problem even though there aren’t any labels yet. Someone will have to go through a sample of the data to label the places where it is and is not OK to park, giving the algorithms some target information.
  4. The data is appropriate once a sample of the dataset has been labeled. There may be some other data that could be included too. What about asking UPS for driver stop information? Where do they stop?

In conclusion, problem formulation is an important step in the machine learning pipeline that should not be overlooked or underestimated. It can make or break a machine learning project; therefore, it is important to take care when formulating machine learning problems.


Step by Step Solution to a Machine Learning Problem – Feature Engineering

Feature engineering is the act of reshaping and curating existing data to make patterns more apparent. This process makes the data easier for an ML model to understand. Using knowledge of the data, features are engineered and tuned to make ML algorithms work more efficiently.

 

For this problem, imagine a scenario where you are running a real estate brokerage and you want to predict the selling price of a house. Using a specific county dataset and simple information (like the location, total square footage, and number of bedrooms), let’s practice training a baseline model, conducting feature engineering, and tuning a model to make a prediction.

First, load the dataset and take a look at its basic properties.

# Load the dataset
import pandas as pd
import boto3

df = pd.read_csv("xxxxx_data_2.csv")
df.head()


housing dataset example: xxxxx_data_2.csv

Output:

feature_engineering_dataset_example

This dataset has 21 columns:

  • id – Unique id number
  • date – Date of the house sale
  • price – Price the house sold for
  • bedrooms – Number of bedrooms
  • bathrooms – Number of bathrooms
  • sqft_living – Number of square feet of the living space
  • sqft_lot – Number of square feet of the lot
  • floors – Number of floors in the house
  • waterfront – Whether the home is on the waterfront
  • view – Number of lot sides with a view
  • condition – Condition of the house
  • grade – Classification by construction quality
  • sqft_above – Number of square feet above ground
  • sqft_basement – Number of square feet below ground
  • yr_built – Year built
  • yr_renovated – Year renovated
  • zipcode – ZIP code
  • lat – Latitude
  • long – Longitude
  • sqft_living15 – Number of square feet of living space in 2015 (can differ from sqft_living in the case of recent renovations)
  • sqft_lot15 – Number of square feet of lot space in 2015 (can differ from sqft_lot in the case of recent renovations)

This dataset is rich and provides a fantastic playground for the exploration of feature engineering. This exercise will focus on a small number of columns. If you are interested, you could return to this dataset later to practice feature engineering on the remaining columns.

A baseline model

Now, let’s train a baseline model.

People often look at square footage first when evaluating a home. We will do the same in our model and ask how well the cost of the house can be approximated based on this number alone. We will train a simple linear learner model (documentation). We will compare against this baseline after finishing the feature engineering.

import sagemaker
import numpy as np
from sklearn.model_selection import train_test_split
import time



t1 = time.time()

# Split training, validation, and test
ys = np.array(df['price']).astype("float32")
xs = np.array(df['sqft_living']).astype("float32").reshape(-1, 1)

np.random.seed(8675309)
train_features, test_features, train_labels, test_labels = train_test_split(xs, ys, test_size=0.2)
val_features, test_features, val_labels, test_labels = train_test_split(test_features, test_labels, test_size=0.5)

# Train model
linear_model = sagemaker.LinearLearner(role=sagemaker.get_execution_role(),
                                       instance_count=1,
                                       instance_type='ml.m4.xlarge',
                                       predictor_type='regressor')

train_records = linear_model.record_set(train_features, train_labels, channel='train')
val_records = linear_model.record_set(val_features, val_labels, channel='validation')
test_records = linear_model.record_set(test_features, test_labels, channel='test')

linear_model.fit([train_records, val_records, test_records], logs=False)

sagemaker.analytics.TrainingJobAnalytics(linear_model._current_job_name, metric_names=['test:mse', 'test:absolute_loss']).dataframe()

 

If you examine the quality metrics, you will see that the absolute loss is about $175,000. This tells us that the model is able to predict within an average of $175k of the true price. For a model based on a single variable, this is not bad. Let’s try some feature engineering to improve on it.
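If you don’t have a SageMaker environment handy, a comparable single-feature baseline can be sketched locally with scikit-learn. This is a hypothetical stand-in: the data below is synthetic, whereas with the real dataset you would use df['sqft_living'] and df['price'].

```python
# A comparable single-feature baseline using scikit-learn instead of the
# SageMaker LinearLearner. Synthetic stand-in data; substitute the real
# df['sqft_living'] / df['price'] columns in practice.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8675309)
sqft = rng.uniform(300, 13500, size=1000).reshape(-1, 1)
price = sqft[:, 0] * 280 + rng.normal(0, 150_000, size=1000)

X_train, X_test, y_train, y_test = train_test_split(sqft, price, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_train, y_train)

# Analogous to the test:absolute_loss metric reported by LinearLearner
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"absolute loss: ${mae:,.0f}")
```

Either way, the point of the baseline is the same: a number to beat once the engineered features are added.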

Throughout the following work, we will constantly be adding to a dataframe called encoded. You will start by populating encoded with just the square footage you used previously.

 

encoded = df[['sqft_living']].copy()

Categorical variables

Let’s start by including some categorical variables, beginning with simple binary variables.

The dataset has the waterfront feature, which is a binary variable. We should change the encoding from 'Y' and 'N' to 1 and 0. This can be done using the map function (documentation) provided by Pandas. It expects either a function to apply to that column or a dictionary to look up the correct transformation.

Binary categorical

Let’s write code to transform the waterfront variable into binary values. The skeleton has been provided below.

encoded['waterfront'] = df['waterfront'].map({'Y':1, 'N':0})

You can also encode many-class categorical variables. Look at the column condition, which gives a score of the quality of the house. Looking into the data source shows that the condition can be thought of as an ordinal categorical variable, so it makes sense to encode it with its order preserved.

Ordinal categorical

Using the same method as in question 1, encode the ordinal categorical variable condition into the numerical range of 1 through 5.

encoded['condition'] = df['condition'].map({'Poor':1, 'Fair':2, 'Average':3, 'Good':4, 'Very Good':5})

A slightly more complex categorical variable is ZIP code. If you have worked with geospatial data, you may know that the full ZIP code is often too fine-grained to use as a feature on its own. However, there are only 70 unique ZIP codes in this dataset, so we may use them.

However, we do not want to use unencoded ZIP codes. There is no reason that a larger ZIP code should correspond to a higher or lower price, but it is likely that particular ZIP codes would. This is the perfect case to perform one-hot encoding. You can use the get_dummies function (documentation) from Pandas to do this.

Nominal categorical

Using the Pandas get_dummies function,  add columns to one-hot encode the ZIP code and add it to the dataset.

encoded = pd.concat([encoded, pd.get_dummies(df['zipcode'])], axis=1)

In this way, you may freely encode whatever categorical variables you wish. Be aware that for categorical variables with many categories, something will need to be done to reduce the number of columns created.
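One common way to reduce the column count, shown here as a small hypothetical sketch with made-up data, is to keep only the most frequent categories and lump everything else into a single "other" bucket before one-hot encoding:

```python
# Illustrative approach (not the only one): cap the number of one-hot
# columns by bucketing infrequent categories into "other" first.
import pandas as pd

s = pd.Series(["a", "a", "a", "b", "b", "c", "d", "e"])
top = s.value_counts().nlargest(2).index        # keep the 2 most common
reduced = s.where(s.isin(top), other="other")   # everything else -> "other"
dummies = pd.get_dummies(reduced)
print(list(dummies.columns))  # ['a', 'b', 'other']
```

With the housing data, the same pattern applied to df['zipcode'] would cap the number of ZIP columns at whatever threshold you choose.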

One additional technique, which is simple but can be highly successful, involves turning the ZIP code into a single numerical column by creating a single feature that is the average price of a home in that ZIP code. This is called target encoding.

To do this, use groupby (documentation) and mean (documentation) to first group the rows of the DataFrame by ZIP code and then take the mean of each group. The resulting object can be mapped over the ZIP code column to encode the feature.

Nominal categorical II

Complete the following code snippet to provide a target encoding for the ZIP code.

means = df.groupby('zipcode')['price'].mean()
encoded['zip_mean'] = df['zipcode'].map(means)

Normally, you only either one-hot encode or target encode. For this exercise, leave both in. In practice, you should try both, see which one performs better on a validation set, and then use that method.
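A hypothetical sketch of that comparison, with synthetic data standing in for the housing dataset: note that the target-encoding means are computed on the training split only, so validation prices don’t leak into the features.

```python
# Compare one-hot vs. target encoding on a validation split.
# Synthetic stand-in data; with the real dataset you would build the
# features from df['zipcode'] and df['price'] as shown above.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
zips = rng.choice([98001, 98002, 98003], size=300)
price = np.where(zips == 98001, 300_000,
                 np.where(zips == 98002, 500_000, 700_000)) \
        + rng.normal(0, 20_000, size=300)

data = pd.DataFrame({"zipcode": zips, "price": price})
train, val = train_test_split(data, test_size=0.3, random_state=0)

def mae_for(features_train, features_val):
    model = LinearRegression().fit(features_train, train["price"])
    return mean_absolute_error(val["price"], model.predict(features_val))

# One-hot encoding (reindex so train and val share the same columns)
oh_train = pd.get_dummies(train["zipcode"])
oh_val = pd.get_dummies(val["zipcode"]).reindex(columns=oh_train.columns, fill_value=0)

# Target encoding (means from the training split only, to avoid leakage)
means = train.groupby("zipcode")["price"].mean()
te_train = train["zipcode"].map(means).to_frame()
te_val = val["zipcode"].map(means).to_frame()

mae_oh = mae_for(oh_train, oh_val)
mae_te = mae_for(te_train, te_val)
print("one-hot MAE:", round(mae_oh))
print("target  MAE:", round(mae_te))
```

Whichever encoding gives the lower validation error is the one you would keep.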

Scaling

Take a look at the dataset. Print a summary of the encoded dataset using describe (documentation).

encoded.describe()

Scaling – summary of the encoded dataset using describe

One column ranges from 290 to 13,540 (sqft_living), another column ranges from 1 to 5 (condition), 71 columns are all either 0 or 1 (waterfront and the one-hot encoded ZIP code), and the final column ranges from a few hundred thousand to a couple million (zip_mean).

In a linear model, these will not be on equal footing. The sqft_living column will be approximately 13,000 times easier for the model to find a pattern in than the other columns. To solve this, you often want to scale features to a standardized range. In this case, you will scale sqft_living to lie within 0 and 1.

Feature scaling

Fill in the code skeleton below to scale the columns of the DataFrame to be between 0 and 1.

sqft_min = encoded['sqft_living'].min()
sqft_max = encoded['sqft_living'].max()
encoded['sqft_living'] = encoded['sqft_living'].map(lambda x: (x - sqft_min) / (sqft_max - sqft_min))

cond_min = encoded['condition'].min()
cond_max = encoded['condition'].max()
encoded['condition'] = encoded['condition'].map(lambda x: (x - cond_min) / (cond_max - cond_min))

Read more:

Amazon Reviews Solution

Predicting Credit Card Fraud Solution

Predicting Airplane Delays Solution

Data Processing for Machine Learning Example

Model Training and Evaluation Examples

Targeting Direct Marketing Solution

Azure Solutions Architect Expert Certification Questions And Answers Dumps


This exam measures your ability to accomplish the following technical tasks: design identity, governance, and monitoring solutions; design data storage solutions; design business continuity solutions; and design infrastructure solutions.


This blog covers Designing Microsoft Azure Infrastructure Solutions.


A candidate for this certification should have advanced experience and knowledge of IT operations, including networking, virtualization, identity, security, business continuity, disaster recovery, data platforms, and governance. A professional in this role should manage how decisions in each area affect an overall solution. In addition, they should have experience in Azure administration, Azure development, and DevOps processes.

Skills measured

  • Design identity, governance, and monitoring solutions (25-30%)
  • Design data storage solutions (25-30%)
  • Design business continuity solutions (10-15%)
  • Design infrastructure solutions (25-30%)

Below are the top 50 Questions and Answers for AZ303, AZ304 and AZ305 Certification Exam:

What is one reason to regularly review Azure role assignments?

A. To ensure naming conventions are properly applied.

B. To reduce the risk associated with stale role assignments.

C. To eliminate extra distribution groups that are no longer used.

Answer: B. You should regularly review access of privileged Azure resource roles to reduce the risk associated with stale role assignments.

What is an access package?

A. An access package is a group of users with the access they need to work on a project or perform a task.

B. An access package is a bundle of all the resources with the access a user needs to work on a project or perform their task.

C. An access package is used to create a transitive trust between B2B organizations.

Answer: B:  An access package is a bundle of all the resources with the access a user needs to work on a project or perform their task. For example, you may want to create an Access Package that includes all the applications that developers in your organization need, or all applications to which external users should have access.

How can Discovery and insights for privileged identity management help an organization?

A. Discovery and insights can find privileged role assignments across Azure AD, and then provide recommendations on how to secure them using Azure AD governance features like Privileged Identity Management (PIM).


B. Discovery and insights can find when guests access resources across Azure AD.

C. Discovery and insights can find security group assignments across Azure AD, and then provide recommendations on how to secure them using Azure AD governance features like Privileged Identity Management (PIM).


D. N/A


Answer: A – Discovery and insights can find privileged role assignments across Azure AD, and then provide recommendations on how to secure them using Azure AD governance features like Privileged Identity Management (PIM).

Whether to assign a role to a group instead of to individual users is a strategic decision. When planning, consider assigning a role to a group to manage role assignments when the desired outcome is to delegate assigning the role and what else?

A. You want to use Conditional Access policies.

B. Many Azure resources need to be managed.


C. Many users are assigned to a role.

D. N/A


Answer: C – Managing one group is much easier than managing many individual users.

Which roles can only be assigned using Privileged Identity Management?

A. Permanently active roles.

B. Eligible roles.

C. Transient roles.

D. N/A



Answer: B. – Permanently active roles are the normal roles assigned through Azure Active Directory and Azure resources, while eligible roles can only be assigned in Privileged Identity Management.

What is the purpose of the audit logs?

A. Azure AD audit logs provide a comparison of budgeted Azure usage compared to actual.


B. Azure AD audit logs provide records of system activities for compliance reporting.

C. Azure AD audit logs allow customers to monitor activity when provisioning new services within Azure.

D. N/A


Answer: B. – An audit log has a default list view that shows data, like the date and time of the occurrence, the service that logged the occurrence, the category and name of the activity (what), the status of the activity (success or failure), the target, and the initiator/actor (who) of an activity.

Can Azure export logging data to third-party SIEM tools?

A. Yes, Azure supports exporting log data to several common third-party SIEM tools.

B. No, Azure only supports the export to Azure Sentinel.



C. Yes, Splunk is the 3rd Party SIEM Azure can export to.

D. N/A


Answer: A. – Azure can export to many of the most popular SIEM tools. The most common are Splunk, IBM QRadar, and ArcSight.

A Solutions Architect wants to configure email notifications to be sent from Azure AD Domain Services when issues are detected. Where in Azure would this be configured?

A. Azure Microsoft Portal > Azure Active Directory > Monitoring > Notifications > Add email recipient.

B. Azure Microsoft Portal > Azure AD Domain Services > Notification settings > Add email recipient.

C. Azure Microsoft Portal > Notification Hubs > Azure Active Directory > Add email recipient.

D. N/A


Answer: B – The health of an Azure Active Directory Domain Services (Azure AD DS) managed domain is monitored by the Azure platform. The health status page in the Azure portal shows any alerts for the managed domain. To make sure issues are responded to in a timely manner, email notifications can be configured to report on health alerts as soon as they’re detected in the Azure AD DS managed domain.

You are architecting a web application that constantly reads and writes important medical imaging data in blob storage.

To ensure the web application is resilient, you have been asked to configure Azure Storage as follows:

  • Protect against a regional disaster.
  • Leverage synchronous replication of storage data across multiple data centers.

How would you configure Azure Storage to meet these requirements?

GZRS provides asynchronous replication to a single physical location in the secondary region. Additionally, this includes synchronous replication across three availability zones within the primary region (ZRS).

Video for reference: Storage Account Replication

You need to ensure your virtual machine boot and data volumes are encrypted. Your virtual machine is already deployed using an Azure marketplace Windows OS image and managed disks. Which tasks should you complete to enable the required encryption?

Configure a Key Vault Access Policy: A Key Vault Access Policy will be required to allow Azure Disk Encryption for volume encryption.

Create an Azure Key Vault: Azure Disk Encryption leverages a Key Vault for the secure storage of cryptographic information.

Video for reference: Azure Disk Encryption

You have configured Azure multi-factor authentication (MFA) for your company. Some staff have reported they are receiving MFA verification requests, even when they didn’t initiate any authentication themselves. They believe this might be hackers.
Which feature would you enable to help protect against this type of security issue?

Fraud alert helps users to protect against MFA verification requests they did not initiate. It provides the ability to report fraudulent attempts, as well as the ability to automatically block users who report fraud.

Reference: Fraud Alert

You are configuring a new storage account using PowerShell. The storage account must support Queue storage. The PowerShell command you are using is as follows:

New-AzStorageAccount -name "tpcstore01" -ResourceGroupName "rg1" -location "auseast" -SkuName "standard_lrs"

Which two arguments could you use to complete the PowerShell command to meet the above requirements?

-Kind "Storage"

General Purpose v1 supports blob, file, queue, table, and disk.

-Kind "StorageV2"

General Purpose v2 supports blob, file, queue, table, disk, and data lake.

You need to ensure your virtual machine boot and data volumes are encrypted. Your virtual machine is already deployed using an Azure marketplace Linux OS image and managed disks.
Which two commands would you use to enable the required encryption?

New-AzKeyVault

Azure Disk Encryption leverages a Key Vault for the secure storage of cryptographic information.

Set-AzVMDiskEncryptionExtension

Azure Disk Encryption leverages a VM extension to enable BitLocker (Windows) or DM-Crypt (Linux) to encrypt boot/OS/data volumes.

CompanyA is planning on making some significant changes to their governance solution. They have asked for your assistance with recommendations and questions. Here are the specific requirements.

– Consistency across subscriptions. It appears each subscription has different policies for the creation of virtual machines. The IT department would like to standardize the policies across the Azure subscriptions.

– Ensure critical storage is highly available. There are several critical applications that use storage. The IT department wants to ensure the storage is made highly available across regions.

– Identify R&D costs. The CTO wants to know how much a new project is costing. The costs are spread out across multiple departments.

– ISO compliance. CompanyA wants to certify that it complies with the ISO 27001 standard. The standard will require resource groups, policy assignments, and templates.

How can CompanyA ensure policies are implemented across multiple subscriptions?

Create a management group and place all the relevant subscriptions in the new management group.
A management group could include all the subscriptions. Then a policy could be scoped to the management group and applied to all the subscriptions.
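
A minimal sketch of this approach in PowerShell (the group name, subscription ID placeholder, and policy definition name are illustrative):

```powershell
# Create the management group and move a subscription into it
New-AzManagementGroup -GroupName "corp-standards" -DisplayName "Corporate Standards"
New-AzManagementGroupSubscription -GroupName "corp-standards" -SubscriptionId "<subscription-id>"

# Assign a VM policy at management group scope so it applies to all member subscriptions
$def = Get-AzPolicyDefinition -Name "<policy-definition-name>"
New-AzPolicyAssignment -Name "vm-standards" -PolicyDefinition $def `
    -Scope "/providers/Microsoft.Management/managementGroups/corp-standards"
```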

How can CompanyA ensure applications use geo-redundancy to create highly available storage applications?

Add an Azure policy that requires geo-redundant storage.
An Azure policy can enforce different rules over your resource configurations.

How can CompanyA report all the costs associated with a new product?

Add a resource tag to identify which resources are used for the new product.
Resource tagging provides extra information, or metadata, about your resources. You could then run a cost report on all resources with that tag.
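
For example (the resource group and tag values are hypothetical), tagging could be applied like this:

```powershell
# Tag the project's resource group with a product tag;
# cost analysis can then be filtered on project = "NewProduct"
Set-AzResourceGroup -Name "rg-newproduct" -Tag @{ project = "NewProduct" }
```

Note that tags on a resource group are not automatically inherited by the resources inside it unless a policy enforces inheritance, so individual resources may need tagging as well.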

Which governance tool should CompanyA use for the ISO 27001 requirements?

Azure blueprints.
Azure blueprints will deploy all the artifacts for ISO 27001 compliance.

You are configuring an Azure Automation runbook using the Azure sandbox.
For your runbook to work, you need to install a PowerShell module. You would like to minimize the administrative overhead for maintaining and operating your runbook.
Which option should you choose to install an additional PowerShell module?

Navigate to Shared Resources > Modules, and configure the additional module.
Additional PowerShell modules can be added to the sandbox environment for use by your runbooks.

CompanyA is planning on making some significant changes to their identity and access management solution. They have asked for your assistance on some recommendations and questions. Here are the specific requirements.

– Device access to company applications. The CTO has agreed to allow some level of device access. Employees at the company’s retail stores will now be able to access certain company applications. This access, however, should be restricted to only approved devices.

– Company reorganization. A company-wide reorganization has affected many employees. These employees are now in new roles. The IT team needs to ensure users have the correct access based on their new jobs.

– External developer accounts. A new development project requires external software developers to access company data files. The IT team needs to create user accounts for approximately five developers.

– User sign-in attempts. A recent audit of user sign-in attempts revealed anonymous IP addresses and unusual locations. The IT team wants to require multifactor authentication for these attempted sign-ins.

How can CompanyA ensure that employees at the company’s retail stores can access company applications only from approved tablet devices?

Conditional access: Conditional Access enables you to require users to access your applications only from approved, or managed, devices.

What should CompanyA do to ensure employees have the correct permissions for their job role?

Require an access review: An access review would give managers an opportunity to validate the employees' access.

What should CompanyA do to give access to the partner developers?

Invite the developers as guest users to their directory: In business-to-business scenarios, guest user accounts are created. You can then apply the appropriate permissions.

What solution would be best for the user sign-in attempts requirement?

Create a sign-in risk policy: A sign-in risk policy can identify anonymous IP addresses and atypical locations. Multifactor authentication can then be required for these sign-ins.

You are working as a network administrator, managing the following virtual networks:

VNET1

  • Location: Australia East

  • Resource group: RG1

  • Address space: 10.1.0.0/16

VNET2

  • Location: Australia Southeast

  • Resource group: RG2

  • Address space: 10.1.0.0/16

You have been asked to connect VNET1 and VNET2, to allow private communication between resources in each virtual network. Do you need to modify either of the two virtual networks before virtual network peering is supported?

Yes: IP address ranges cannot overlap. One of the virtual networks must have their address space changed before VNet peering would be able to be configured.
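
A sketch of the fix in PowerShell (the replacement range 10.2.0.0/16 is an assumption; a matching peering from VNET2 back to VNET1 would also be needed):

```powershell
# Re-address VNET2 so the ranges no longer overlap
$vnet2 = Get-AzVirtualNetwork -Name "VNET2" -ResourceGroupName "RG2"
$vnet2.AddressSpace.AddressPrefixes.Clear()
$vnet2.AddressSpace.AddressPrefixes.Add("10.2.0.0/16")
$vnet2 | Set-AzVirtualNetwork

# Peer VNET1 to VNET2 (global VNet peering, since the regions differ)
$vnet1 = Get-AzVirtualNetwork -Name "VNET1" -ResourceGroupName "RG1"
Add-AzVirtualNetworkPeering -Name "vnet1-to-vnet2" -VirtualNetwork $vnet1 `
    -RemoteVirtualNetworkId $vnet2.Id
```

In practice, re-addressing is only possible once the subnets and attached resources using the old range have been updated as well.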

You are architecting identity management for a hybrid environment, and you plan to use Azure AD Connect with password hash sync (PHS).
It is important that you design the solution to be highly available. How would you implement high availability for the synchronization service?

Configure an additional server with Azure AD Connect in staging mode.

Azure AD Connect can be configured in staging mode, which helps with high availability.

You are responsible for monitoring a major web application for your company. The application is implemented using Azure App Service Web Apps and Application Insights.
The chief marketing officer has asked you to provide information to help analyze user behavior based on a group of characteristics. To start with, it will be a simple query looking at all active users from Australia.
Which of the following would you use to provide this information?

Cohorts leverage analytics queries to analyze users, sessions, events, or operations that have something in common (e.g., location, event, etc.). Reference: App insights

You work for a company with multiple Active Directory domains: exampledomain1.com and test.lab.com. Your company would like to use Azure AD Connect to synchronize your on-premises Active Directory domain, exampledomain1.com, with Azure AD. You do not wish to synchronize test.lab.com.

Which tasks should you complete, requiring minimal administrative effort and causing the least disruption to the existing environment?

Run the Azure AD Connect wizard, and configure Domain and OU filtering.

You are architecting a mission-critical processing solution for your company. The solution will leverage virtual machines for the processing tier, and it is critical that high performance levels are maintained at all times.
You need to leverage a managed disk that guarantees up to 900 MB/s throughput and 2,000 IOPS — but also minimizes costs.
Which of the following would you use within your solution?

Premium SSD Managed Disks:  Premium SSDs provide high performance and low latency, and include guaranteed capacity, IOPS, and throughput.

CompanyA wants to reduce storage costs by reducing duplicate content and, whenever applicable, migrating it to the cloud. The company would like a solution that centralizes maintenance while still providing nation-wide access for customers. Customers should be able to browse and purchase items online even in a case of a failure affecting an entire Azure region. Here are some specific requirements.

  • Warranty document retention. The company's risk and legal teams require warranty documents to be kept for three years.

  • New photos and videos. The company would like each product to have a photo or video to demonstrate the product features.

  • External vendor development. A vendor will create and develop some of the online ecommerce features. The developer will need access to the HTML files, but only during the development phase.

  • Product catalog updates. The product catalog is updated every few months. Older versions of the catalog aren’t viewed frequently but must be available immediately if accessed.

What is the best way for CompanyA to protect their warranty information?

Time-based retention policy: With a time-based retention policy, users can set policies to store data for a specified interval. When a time-based retention policy is in place, objects can be created and read, but not modified or deleted.

What type of storage should CompanyA use for their photos and videos?

Blob storage: Blob storage is best suited to unstructured content such as photos and videos.

What is the best way to provide the developer access to the ecommerce HTML files?

Shared access signatures: Shared access signatures provide secure delegated access. This functionality can be used to define permissions and how long access is allowed.

Which access tier should be used for the older versions of the product catalog?

Cool access tier: The cool access tier is for content that isn't viewed frequently but must be available immediately if accessed.

What tool would you use to identify underutilized and idle Azure resources in order to help reduce overall spend?

Azure Advisor: Advisor helps you optimize and reduce your overall Azure spend by identifying idle and underutilized resources. Reference

You work as a network administrator for a company. You manage several virtual machines within the following virtual network:

  • Name: VNET1
  • Address space: 10.1.0.0/16
  • Subnet: SUBNET1 (10.1.1.0/24)

You need to configure DNS for a VM called VM1, which is located in SUBNET1. DNS should be set to 8.8.8.8. All other VMs must keep their existing settings.

What should you do?

Navigate to the network interface of VM1, select DNS Servers, enable Custom DNS Servers, and set the value to 8.8.8.8.

Custom DNS can be set at the network interface level, so that the settings only apply for a specific virtual machine.
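
The same change can be scripted as follows (the NIC name "vm1-nic" is hypothetical):

```powershell
# Set a custom DNS server on VM1's NIC only; other VMs keep the VNet-level settings
$nic = Get-AzNetworkInterface -Name "vm1-nic" -ResourceGroupName "RG1"
$nic.DnsSettings.DnsServers.Clear()
$nic.DnsSettings.DnsServers.Add("8.8.8.8")
$nic | Set-AzNetworkInterface
```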

You are architecting a web application that constantly reads and writes important medical imaging data in blob storage. To ensure the web application is resilient, you have proposed the use of storage account failover. Management has asked you whether any data loss might occur for this solution, in the event of a failover. How would you respond?

There may be data loss, and the extent of data loss can be estimated using the Last Sync Time.

The Last Sync Time property provides an indication of how far the secondary is behind from the primary. This can be used to estimate the extent of data loss that may occur. 

What storage service should you implement for an application that streams video content?

Azure Blobs: Azure blobs are used for storing large amounts of unstructured data, such as documents, images, and video files. This service is best used for streaming audio and video, particularly over HTTP/S.

What storage service should you implement for an application that needs to access data using SMB?

Azure Files: Azure files allow you to create and maintain highly available file shares that are accessible anywhere. They can be considered as a replacement to traditional file servers. They provide SMB access.

You are architecting a mission-critical solution for your company using virtual machines.
The solution must qualify for a Microsoft service level agreement (SLA) of 99.95%.
You deploy your solution to a single virtual machine in an availability set. The virtual machine uses premium storage. Does this meet the required SLA?

No: The virtual machine does use premium storage; however, this only provides a 99.9% SLA.

You are implementing Azure Backup using the Microsoft Azure Backup Server.
Which of the following would you use to allow the server to register with your recovery services vault?

Vault Credentials: Vault Credentials are used by the Microsoft Azure Backup Server software to register with the vault.

You are developing a solution on a server hosted on-premises. The solution needs to access data within Azure Key Vault.
Which two options would you use to ensure the application has access to Azure Key Vault?

Register the application in Azure AD and use a client secret.
To allow an on-premises application to authenticate with Azure AD, it can be registered in Azure AD and given a client secret (or client certificate). If this application was hosted on a supported Azure service, it could have been possible to use a managed identity instead.

Configure an access policy in Azure Key Vault.
To allow access to Key Vault, any identity (application, user, etc.) must be provided permissions using an Access Policy.
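
A sketch of the two steps in PowerShell (the application and vault names are hypothetical; in a real solution the client secret would be stored securely, never hard-coded):

```powershell
# Register the application in Azure AD and create a client secret
$app = New-AzADApplication -DisplayName "onprem-keyvault-app"
New-AzADServicePrincipal -ApplicationId $app.AppId
$secret = New-AzADAppCredential -ApplicationId $app.AppId

# Grant the application's identity permission to read secrets
Set-AzKeyVaultAccessPolicy -VaultName "kv-example01" `
    -ServicePrincipalName $app.AppId -PermissionsToSecrets get,list
```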

You have a Windows virtual machine within Azure, which must be backed up.
You have the following requirements:
– Back up the virtual machine three times per day
– Include system state backups
You configure a backup to a recovery services vault using the Microsoft Azure Recovery Services (MARS) agent.
Does this fulfill the requirements above?

Yes: The Microsoft Azure Recovery Services (MARS) agent can perform backups of files, folders, and system states up to three times a day.

You are planning a migration of machines to Azure from your on-premises Hyper-V host.
You would like to estimate how much it will cost to run your migrated machines in Azure. The effort required to estimate pricing, and then ultimately perform a migration, should be minimized.
Which of the following two items would you include in your migration solution?

Azure Migrate Project: All migrations (both assessment and migration) require an Azure Migrate Project for the storage of related metadata.

You are implementing Azure Blueprints to help improve standards and compliance for your Azure environment.
You would like to ensure that when an Azure Blueprint is used, a user is assigned ‘owner’ permissions to a specific resource group defined in the blueprint.
Does Azure Blueprints provide this functionality?

Yes: Azure Blueprints includes several different artifacts, one of which is ‘Role Assignment’. This allows a user to be assigned permissions as part of the blueprint definition.

You are planning a migration from on-premises to Azure.
Your on-premises environment is made up of the following:
– VMware hosted virtual machines
– Hyper-V hosted virtual machines
– Physical servers
Will the Azure Migrate: Server Migration tool provided by Microsoft support your environment for migrations to Azure?

Yes, for VMware, Hyper-V, and physical machines. The Azure Migrate: Server Migration tool supports migrating VMware VMs, Hyper-V VMs, and physical servers.

For a new container image you are developing, you need to ensure a local HTML file, index.html, is included in the image. Which command would you include in the Dockerfile?

COPY ./index.html /usr/share/nginx/html

The COPY instruction can be used within a Dockerfile to copy files and directories from a source on the build host to a destination in the image.
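
A minimal Dockerfile sketch showing the instruction in context (the nginx base image is an assumption, chosen for serving static HTML):

```dockerfile
# Serve a local index.html from the default nginx web root
FROM nginx:alpine
COPY ./index.html /usr/share/nginx/html
```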

You have developed a financial management application for your company.
It is currently hosted as an Azure App Service Web App within Azure.
To improve security, you need to ensure that the web application is only accessible when users connect from your head-office IP address of 14.78.162.190.
Within the Azure Portal settings for your web app, which section would you use to configure this security?

Networking > Access Restrictions
Access Restrictions allows you to filter inbound connectivity to Azure App service, based on the IP address of the requesting user/service.
This meets the requirements of this scenario, as an Access Restriction could be configured for the Web App. To configure this, an ALLOW rule would be created for the web app (and the management interface, SCM, if needed). Adding the ALLOW rule for the IP address 14.78.162.190 automatically creates a DENY ALL rule, which prevents any other network location from accessing this resource.
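
The same rule can be scripted (the resource group and app names are hypothetical):

```powershell
# Allow only the head-office IP; a deny-all rule is implied for everything else
Add-AzWebAppAccessRestrictionRule -ResourceGroupName "rg1" -WebAppName "finance-app" `
    -Name "head-office" -Priority 100 -Action Allow -IpAddress "14.78.162.190/32"
```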

You are responsible for improving the availability of a web application. The web application has the following characteristics:
– Hosted using Azure App Service.
– Leverages an Azure SQL back-end.
You need to configure Azure SQL Database to meet the following needs:
Must be able to continue operations in the event of a region failure.
Must support automatic failover in the event of failure.
You must recommend a solution that requires the least amount of effort to implement and to manage in the event of a failover. Which configuration do you recommend?

Azure SQL auto-failover group: Using Azure SQL auto-failover groups provides protection at a geographic scale. By using the read-write listener, an application will seamlessly point to the primary, even in the event of a failover. Auto-failover groups simplify the deployment and management of geo-replicated databases, supporting replication and failover for one or more databases on Azure SQL Database or Azure SQL Managed Instance. A key benefit of auto-failover groups is the built-in DNS management for the read and read-write listeners.

You have been asked to implement high availability for an Azure SQL Managed Instance.
The solution is critical, and data loss must be minimized. If the data platform fails, there must be a 1 hour wait before automatic failover occurs.
You must determine: (1) How to configure replication. (2) How to configure the 1 hour delay.

Enable replication using Auto-Failover Groups. Enable the 1 hour delay using the Grace Period.
Auto-Failover Groups are supported by Azure SQL Managed Instances, and the Grace Period is used to define how many hours to wait before an automatic read/write failover occurs.
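
As a sketch (the instance names, resource group, and regions are hypothetical), the configuration could look like:

```powershell
# Create an auto-failover group between two managed instances,
# with automatic failover delayed by a 1 hour grace period
New-AzSqlDatabaseInstanceFailoverGroup -Name "fg-critical" -Location "australiaeast" `
    -ResourceGroupName "rg1" -PrimaryManagedInstanceName "sqlmi-primary" `
    -PartnerRegion "australiasoutheast" -PartnerManagedInstanceName "sqlmi-secondary" `
    -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1
```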

You are helping to architect a social media application.
The solution must ensure that all users read data in the order it has been completely written.
You propose the use of Cosmos DB. What else do you include in your proposal to meet the requirements?

Cosmos DB Strong Consistency: Strong consistency ensures that reads are guaranteed to return the most recent committed write. This is useful when order matters.

You need to configure high availability for Azure SQL Databases.
You would like the service to include the following:
– Automatic failover policy.
– Ability to manually failover.
– DNS management for primary read/write access.
You configure Azure SQL Active Geo-Replication. Does this meet the requirements?

No: Active Geo-Replication does not include DNS automatically managed for primary read/write access. This is a feature of auto-failover groups. The inclusion of DNS for both the primary read/write endpoint, and the secondary read endpoint, reduces the management overhead for ensuring applications are pointing to the correct resources in the event of a disaster.


Pros and Cons of Cloud Computing

Cloud User insurance and Cloud Provider Insurance


What are the Pros and Cons of Cloud Computing?

Cloud computing is the new big thing in Information Technology. Sooner or later, most businesses will adopt it for its hosting cost benefits, scalability, and more.


This blog outlines the pros and cons of cloud computing, along with FAQs, facts, and a questions-and-answers dump about cloud computing.


What is cloud computing?

Cloud computing is an information technology paradigm that enables ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a public utility.
Simply put, cloud computing is the delivery of computing services including servers, storage, databases, networking, software, analytics, and intelligence—over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale. You typically pay only for cloud services you use, helping you lower your operating costs, run your infrastructure more efficiently, and scale as your business needs change.

What are the Pros of using cloud computing? What are characteristics of cloud computing?

  • Trade Capital expense for variable expense
  • Benefit from massive economies of scale
  • Stop guessing capacity
  • Increase speed and agility
  • Stop spending money on running and maintaining data centers
  • Go global in minutes


  • Cost effective & Time saving: Cloud computing eliminates the capital expense of buying hardware and software and setting up and running on-site datacenters; the racks of servers, the round-the-clock electricity for power and cooling, and the IT experts for managing the infrastructure.
  • Pay as you go: The ability to pay only for the cloud services you use helps you lower your operating costs.
  • Powerful server capabilities and Performance: The biggest cloud computing services run on a worldwide network of secure datacenters, which are regularly upgraded to the latest generation of fast and efficient computing hardware. This offers several benefits over a single corporate datacenter, including reduced network latency for applications and greater economies of scale.
  • Powerful and scalable server capabilities: The ability to scale elastically; That means delivering the right amount of IT resources—for example, more or less computing power, storage, bandwidth—right when they’re needed, and from the right geographic location.
  • SaaS ( Software as a service). Software as a service is a method for delivering software applications over the Internet, on demand and typically on a subscription basis. With SaaS, cloud providers host and manage the software application and underlying infrastructure, and handle any maintenance, like software upgrades and security patching. Users connect to the application over the Internet, usually with a web browser on their phone, tablet, or PC.
  • PaaS ( Platform as a service). Platform as a service refers to cloud computing services that supply an on-demand environment for developing, testing, delivering, and managing software applications. PaaS is designed to make it easier for developers to quickly create web or mobile apps, without worrying about setting up or managing the underlying infrastructure of servers, storage, network, and databases needed for development.
  • IaaS ( Infrastructure as a service). The most basic category of cloud computing services. With IaaS, you rent IT infrastructure—servers and virtual machines (VMs), storage, networks, operating systems—from a cloud provider on a pay-as-you-go basis
  • Serverless: Running complex Applications without a single server. Overlapping with PaaS, serverless computing focuses on building app functionality without spending time continually managing the servers and infrastructure required to do so. The cloud provider handles the setup, capacity planning, and server management for you. Serverless architectures are highly scalable and event-driven, only using resources when a specific function or trigger occurs.
  • Infrastructure provisioning as code: helps recreate the same infrastructure by re-running the same code in a few clicks.
  • Automatic and Reliable Data backup and storage of data: Cloud computing makes data backup, disaster recovery, and business continuity easier and less expensive because data can be mirrored at multiple redundant sites on the cloud provider’s network.
  • Increase Productivity: On-site datacenters typically require a lot of “racking and stacking”—hardware setup, software patching, and other time-consuming IT management chores. Cloud computing removes the need for many of these tasks, so IT teams can spend time on achieving more important business goals.
  • Security: Many cloud providers offer a broad set of policies, technologies, and controls that strengthen your security posture overall, helping protect your data, apps, and infrastructure from potential threats.
  • Speed: Most cloud computing services are provided self-service and on demand, so even vast amounts of computing resources can be provisioned in minutes, typically with just a few mouse clicks. This gives businesses a lot of flexibility and takes the pressure off capacity planning. In a cloud computing environment, new IT resources are only a click away, so the time to make those resources available to your developers drops from weeks to minutes. As a result, the organization experiences a dramatic increase in agility, because the cost and time it takes to experiment and develop is lower.
  • Go global in minutes: Easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers simply and at minimal cost.

What are the Cons of using cloud computing?

  • Privacy: Cloud computing poses privacy concerns because the service provider can access the data that is in the cloud at any time. It could accidentally or deliberately alter or delete information. Many cloud providers can share information with third parties if necessary for purposes of law and order without a warrant. That is permitted in their privacy policies, which users must agree to before they start using cloud services.
  • Security: According to the Cloud Security Alliance, the top three threats in the cloud are Insecure Interfaces and API’s, Data Loss & Leakage, and Hardware Failure—which accounted for 29%, 25% and 10% of all cloud security outages respectively. Together, these form shared technology vulnerabilities.
  • Ownership of Data: There is the problem of legal ownership of the data (If a user stores some data in the cloud, can the cloud provider profit from it?). Many Terms of Service agreements are silent on the question of ownership.
  • Limited Customization Options: Cloud computing is cheaper because of economies of scale, and—like any outsourced task—you tend to get what you get. A restaurant with a limited menu is cheaper than a personal chef who can cook anything you want.
  • Downtime: Technical outages are inevitable and occur sometimes when cloud service providers (CSPs) become overwhelmed in the process of serving their clients. This may result in temporary business suspension.
  • Insurance: It can be expensive to insure the customer and business data and infrastructure hosted in the cloud. Cyber insurance is necessary when using the cloud.
  • Other concerns of cloud computing:

      • Users with specific records-keeping requirements, such as public agencies that must retain electronic records according to statute, may encounter complications with using cloud computing and storage. For instance, the U.S. Department of Defense designated the Defense Information Systems Agency (DISA) to maintain a list of records management products that meet all of the records retention, personally identifiable information (PII), and security (Information Assurance; IA) requirements.
      • Cloud storage is a rich resource for both hackers and national security agencies. Because the cloud holds data from many different users and organizations, hackers see it as a very valuable target.
      • Piracy and copyright infringement may be enabled by sites that permit filesharing. For example, the CodexCloud ebook storage site has faced litigation from the owners of the intellectual property uploaded and shared there, as have the GrooveShark and YouTube sites it has been compared to.

What are the different types of cloud computing?


  • Public clouds: A cloud is called a “public cloud” when the services are rendered over a network that is open for public use. Public clouds are owned and operated by third-party cloud service providers, which deliver their computing resources, like servers and storage, over the Internet. Microsoft Azure is an example of a public cloud. With a public cloud, all hardware, software, and other supporting infrastructure is owned and managed by the cloud provider. You access these services and manage your account using a web browser. For infrastructure as a service (IaaS) and platform as a service (PaaS), Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP) hold a commanding position among the many cloud companies.
  • Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third party, and hosted either internally or externally. A private cloud refers to cloud computing resources used exclusively by a single business or organization. A private cloud can be physically located on the company’s on-site datacenter. Some companies also pay third-party service providers to host their private cloud. A private cloud is one in which the services and infrastructure are maintained on a private network.
  • Hybrid cloud is a composition of a public cloud and a private environment, such as a private cloud or on-premise resources, that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect collocation, managed and/or dedicated services with cloud resources. Hybrid clouds combine public and private clouds, bound together by technology that allows data and applications to be shared between them. By allowing data and applications to move between private and public clouds, a hybrid cloud gives your business greater flexibility, more deployment options, and helps optimize your existing infrastructure, security, and compliance.
  • Community Cloud: A community cloud in computing is a collaborative effort in which infrastructure is shared between several organizations from a specific community with common concerns, whether managed internally or by a third-party and hosted internally or externally. This is controlled and used by a group of organizations that have shared interest. The costs are spread over fewer users than a public cloud, so only some of the cost savings potential of cloud computing are realized.



