Programmers are always looking for ways to make their code more efficient, and one of the simplest wins is a faster loop. Python is a high-level programming language widely used by developers and software engineers, known for its readability and ease of use. One downside, however, is that its interpreted loops can be slow, which becomes a problem when you need to process large amounts of data. There are several ways to make Python loops faster: use a faster looping construct such as a list comprehension, push the loop down into C, use an optimized library such as NumPy, or vectorize your code, that is, rewrite it as whole-array operations that can run in optimized native code or on a GPU or other parallel hardware. These techniques can significantly speed up your Python code.
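As a small illustration of the NumPy approach, here is a sketch (assuming NumPy is installed) comparing a plain Python loop with a vectorized equivalent; the array size is arbitrary:

    import numpy as np

    data = list(range(1_000_000))

    # Plain Python loop: every iteration runs interpreted bytecode.
    squares_loop = []
    for x in data:
        squares_loop.append(x * x)

    # Vectorized NumPy version: the loop runs in optimized C inside NumPy.
    arr = np.arange(1_000_000)
    squares_vec = arr * arr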
According to Vladislav Zorov: "If we're not talking about NumPy or something similar, try to use list comprehension expressions where possible. Those are handled by the C code of the Python interpreter instead of looping in Python. It's basically the same idea as the NumPy solution: you just don't want code running in Python."
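For example, a minimal sketch of the same loop written both ways; timing it with the standard-library timeit module typically shows the comprehension coming out ahead (exact numbers depend on your machine):

    from timeit import timeit

    def with_loop():
        result = []
        for x in range(10_000):
            result.append(x * 2)
        return result

    def with_comprehension():
        # The looping happens in the interpreter's C code, not in Python bytecode.
        return [x * 2 for x in range(10_000)]

    print(timeit(with_loop, number=1_000))
    print(timeit(with_comprehension, number=1_000))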
To keep track of indices and values inside a loop, prefer the built-in enumerate() over managing a counter by hand. In simple benchmarks, rewrites like these are often roughly twice as fast, and the code looks better.
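A small sketch of the enumerate() pattern:

    names = ["ada", "grace", "linus"]

    # Instead of managing an index variable by hand...
    i = 0
    for name in names:
        print(i, name)
        i += 1

    # ...let enumerate() yield the index and value together.
    for i, name in enumerate(names):
        print(i, name)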
Another option is to write the hot loops in C instead of Python. One convenient route is Cython, whose pyximport hook compiles Cython (.pyx) modules automatically when they are imported. By doing this, programmers can take advantage of the speed of C while still using the convenient syntax of Python.
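A rough sketch of that workflow, assuming Cython is installed; the fast_loop module and sum_squares function are placeholder names, and the .pyx file would live next to the script:

    # fast_loop.pyx -- compiled to C by Cython
    def sum_squares(long n):
        cdef long i
        cdef long total = 0
        for i in range(n):
            total += i * i
        return total

    # main.py -- compile and import the .pyx module on the fly
    import pyximport
    pyximport.install()

    import fast_loop
    print(fast_loop.sum_squares(10_000_000))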
Finally, developers can improve the performance of their code by making use of caching. By caching values that are computed inside a loop, programmers avoid having to recalculate them on every iteration. Taken together, these steps can make Python code noticeably faster and more efficient.
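One common way to do this in Python is functools.lru_cache, which memoizes a function's results; a minimal sketch:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def expensive(n):
        # Imagine a slow computation here; the decorator stores each result,
        # so repeated calls with the same argument return instantly.
        return sum(i * i for i in range(n))

    for _ in range(1_000):
        expensive(50_000)  # computed once, then served from the cache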
Very Important: Don’t worry about code efficiency until you find yourself needing to worry about code efficiency.
The place where you think about efficiency is within the logic of your implementations.
Hello everyone, I'm more or less proficient in SQL and already manage my database as is. I don't know whether I need an ORM, and if not, where I should look so I can skip it but still get some of the goodies. I have seen the Mayim project, and it feels like roughly the degree of freedom I need. I'd like opinions on that, and maybe alternatives to evaluate. Thanks, everyone. submitted by /u/malajunk
I am able to perform actions on the same row of a DataFrame using an expression such as df['total'] = df['Col1'] + df['Col2']. However, I am not sure how to perform calculations involving different rows. I am hoping to avoid turning my DataFrame columns into lists or arrays for computation and then having to add the list back as a DataFrame column. I really want to know how this is best achieved. Thank you in advance. submitted by /u/appachie
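One common answer is pandas' shift(), which aligns a column against its neighbouring rows so the arithmetic stays vectorized. A small sketch, with illustrative column names:

    import pandas as pd

    df = pd.DataFrame({"Col1": [10, 12, 15, 11], "Col2": [1, 2, 3, 4]})

    # Same-row arithmetic, as in the question.
    df["total"] = df["Col1"] + df["Col2"]

    # Cross-row arithmetic: subtract the previous row's value using shift().
    df["change"] = df["Col1"] - df["Col1"].shift(1)

    # diff() is shorthand for the same idea.
    df["change_alt"] = df["Col1"].diff()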
It's been more than a year since I last posted about my web framework Lona, and it has evolved quite a bit since then! Lona is an easy-to-use, full-Python framework for creating beautiful web applications in minutes, without dealing with JavaScript or CSS. It has a very flat learning curve to get you started, and it scales as your project grows. It is written in vanilla Python and JavaScript, so you don't have to deal with tools and libraries like npm, Vue, React, etc. One of the newest additions to the project is the tutorial I wrote (https://lona-web.org/1.x/tutorial/index.html) to make the first steps even easier. It contains many examples and small clips of them. Feedback in any form is very welcome! GitHub: https://github.com/lona-web-org/lona Documentation: https://lona-web.org/1.x/ Demos: https://lona-web.org/1.x/demos/index.html submitted by /u/ki3selerde
Hey everyone! I created a Python application to export Conway's Game of Life simulations as GIFs and PNGs after becoming fascinated with the cool behaviour this game produces. There is a library of some common seeds/automata from the original game rules, as well as a library of different cells that can be used to achieve different behaviour. The simulation can either be configured to run for a set number of generations/frames, or be instructed to run until it stagnates. It is currently geared towards developers, as you will have to mess around with the main.py file to use different cell types and seeds (which I am working on changing to be more user-friendly). There is some sample code in the README documentation, and a GitHub wiki that explains some features. Any feedback is welcome! I am especially looking for a speedier way to create the long GIFs, as right now longer simulations can take a while to scale and stitch together. Examples include a "Maze cell" simulation and the "Shoebox" seed using classic cells from the original Game of Life rules. submitted by /u/1linguini1
I made a Python program to randomly generate a game board with numbers for Settlers of Catan. The original idea was to get my friends to shut up about unfair boards, but it's actually a pretty good beginner lesson, so I made a tutorial on it here: https://www.youtube.com/watch?v=7h3sFhBAgcw All the code is available here: https://github.com/plemaster01/CatanBoardGeneration submitted by /u/plemaster01
This is a simple wrapper that introduces any imaginable complex context to each question submitted to the OpenAI API. The main goal is to enhance the accuracy of its answers in a TRANSPARENT way to end users. This idea arose in the context of a health-related experiment led by CiTIUS (more coming soon). https://github.com/citiususc/Smarty-GPT submitted by /u/usc-ur
Over the weekend I wrote a simple wrapper for ctypes. It simplifies the process of compiling, loading, and calling C functions from Python. I wrote it mostly for fun; I'm sure there are much better library wrappers out there, but it was a nice exercise in Python packaging and Mypy. I do enjoy statically typed languages for the verboseness, so I took a crack at type hints and static type checking with Python for once. It's only 260 LOC, but I'm happy it passes Mypy fully. Here's an example to show some speed differences:

    from cporter.cporter import CPorter

    def fibonacci_iterative(n):
        a = 0
        b = 1
        if n == 0:
            return a
        elif n == 1:
            return b
        else:
            for i in range(2, n + 1):
                c = a + b
                a = b
                b = c
            return b

    cporter = CPorter()
    cporter.set_library_path("examples/lib")
    cporter.add_library("fib")

    print("Calculating 100th fibonacci number")
    py_results = cporter.profile_python_function(fibonacci_iterative, 100)
    c_results = cporter.profile_function("fib", "fibonacci_iterative", 100)
    print(f"C Result:{c_results[0]} Time: {c_results[1]} seconds")
    print(f"Python Result:{py_results[0]} Time: {py_results[1]} seconds")

And our result:

    Calculating 100th fibonacci number
    C Result:3736710778780434371 Time: 0.0001399169999999339 seconds
    Python Result:3736710778780434371 Time: 5.000000000032756e-06 seconds

Anyway, here's the repo: https://github.com/snacsnoc/cporter The inspiration came from another project I submitted a few PRs to, sushi. It's another library to run functions from foreign languages within Python. Check it out, it's pretty cool. submitted by /u/snacsnoc
Hi guys, I made a quick video for web devs who want to self-host their apps. I showcase Docker, Docker Compose, and Traefik very quickly and show web developers how to get their APIs public quickly. Nothing groundbreaking, but I needed this information and didn't have a clear understanding of where to find it, so it's my utmost hope it helps others chasing the project requirements outlined in this video. Video Link: https://www.youtube.com/watch?v=NIHzYIkXFhE Github Link: https://github.com/Wizock/how-to-host-FastAPI-Apis-on-traefik All feedback is appreciated :). submitted by /u/ronnytec
I created a CLI wrapper for ChatGPT called gptty because I was dissatisfied with the categorization tools available in the ChatGPT web UI. It can be installed from GitHub: https://github.com/signebedi/gptty. I've added preliminary user docs on installation, configuration, and usage. I'd love feedback on (and contributions to) the code base. What gptty does differently from other tools: gptty adds support for tagged questions that, when used correctly, allow you to access past question context across sessions. For example, if you prepend a question with the [shakespeare] tag and then tag another question with the same, you can access the prior conversation with ChatGPT, largely replicating the context-preserving behavior of the web application while giving users control over how to tag and categorize these conversations. Fundamentally, this wrapper is focused on user control over the categorization of conversations, but it also aims to provide an aesthetically pleasing experience. If it gains some traction, I'd like to add support for a bash runtime that allows you to send one-off questions using the same categorization logic, like: gptty --question "how old is the universe" --tag "physics". Thanks for any feedback, contributions, or installs you can give! submitted by /u/liturgicalLorax
If you're struggling to get correct Python dependencies for an older project, requirements_wayback_machine will try to tell you how your dependencies would have been resolved at a given date in the past! submitted by /u/tkarabela_
Comment any project ideas, beginner or advanced, in this thread for others to give a try! If you complete one, make sure to reply to the comment with how you found it and attach some source code! If you're looking for project ideas, you might be interested in checking out Al Sweigart's "The Big Book of Small Python Projects", which provides a list of projects and the code to make them work. submitted by /u/Im__Joseph
I've decided to throw together a little thing that's pretty helpful for keeping track of transactions, like a register book you'd get from a bank. It works how you'd expect: there's a way to describe what each transaction was for and whether it was money going "in" or moving "out", and it automatically updates the total, starting from the starting balance you set when you first set it up. Just like a normal register book. You can use this project to help with balancing a checkbook, keeping track of money moving in and out of your wallet, etc. I don't expect it to be used much, but I thought it was neat enough to share, as it'll most definitely help me out a decent amount. You can find the project here: https://github.com/therealOri/pyWave submitted by /u/thereal0ri_
Hello! I am working on a library, and in one part of it I perform a customized search over large bytes objects. In my experience, C code runs about an order of magnitude faster when working with primitive data like byte arrays, so I wanted to rewrite that part of the code in C to gain performance. I know about the ctypes module, but I am worried about portability. The library has about 30k downloads, so I want it to be compatible with any system it is installed on. To my knowledge, NumPy is also largely written in C. Do they compile their code for every possible platform and choose the binaries dynamically, or is there some other good way to do it? If someone has any experience regarding this, any help would be appreciated greatly! submitted by /u/M8Ir88outOf8
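In practice, projects like NumPy don't compile on the user's machine or pick binaries at runtime; they publish prebuilt wheels for each platform (often built with tools like cibuildwheel), and pip falls back to compiling the source distribution where no wheel matches. As a minimal sketch of declaring a C extension with setuptools (the module and file names here are placeholders):

    # setup.py
    from setuptools import setup, Extension

    setup(
        name="mylib",
        version="1.0.0",
        ext_modules=[
            # Compiled once per platform when building wheels; pip then
            # selects the matching wheel at install time.
            Extension("mylib._fastsearch", sources=["src/fastsearch.c"]),
        ],
    )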
What are the corresponding Azure and Google Cloud services for each of the AWS services?
What are the unique distinctions and similarities between AWS, Azure, and Google Cloud services? For each AWS service, what is the equivalent Azure and Google Cloud service, and for each Azure service, what is the corresponding Google service? Below is a side-by-side comparison of AWS, Azure, and Google Cloud services.
Cloud Practitioner Exam Prep: AWS vs Azure vs Google
Category: Marketplace Description: Easy-to-deploy and automatically configured third-party applications, including single virtual machine or multiple virtual machine solutions. References: [AWS]: AWS Marketplace [Azure]: Azure Marketplace [Google]: Google Cloud Marketplace Tags: #AWSMarketplace, #AzureMarketplace, #GoogleMarketplace Differences: All three are digital catalogs with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on the respective cloud platform.
Tags: #AlexaSkillsKit, #MicrosoftBotFramework, #GoogleAssistant Differences: One major advantage Google has over Alexa is that Google Assistant is available on almost all Android devices.
Tags: #AmazonLex, #CognitiveServices, #AzureSpeech, #Api.ai, #DialogFlow, #Tensorflow Differences: Api.ai provides a platform that is easy to learn and comprehensive enough to develop conversational actions. It is a good example of a simple approach to the complex problem of human-to-machine communication using natural language processing in combination with machine learning. Api.ai now supports context-based conversations, which reduces the overhead of handling user context in session parameters; in Lex, on the other hand, this has to be handled in the session. Also, Api.ai can be used for both voice- and text-based conversations (Assistant actions can be easily created using Api.ai).
Category: Big data and analytics: Data warehouse Description: Apache Spark-based analytics platform. Managed Hadoop service. Data orchestration, ETL, analytics, and visualization. References: [AWS]: EMR, Data Pipeline, Kinesis Streams, Kinesis Firehose, Glue, QuickSight, Athena, CloudSearch [Azure]: Azure Databricks, Data Catalog, Cortana Intelligence, HDInsight, Power BI, Azure Data Factory, Azure Search, Azure Data Lake Analytics, Stream Analytics, Azure Machine Learning [Google]: Cloud Dataproc, Machine Learning, Cloud Datalab Tags: #EMR, #DataPipeline, #Kinesis, #Cortana, #AzureDataFactory, #AzureDataLakeAnalytics, #CloudDataproc, #MachineLearning, #CloudDatalab Differences: All three providers offer similar building blocks: data processing, data orchestration, streaming analytics, machine learning, and visualisations. AWS certainly has all the bases covered with a solid set of products that will meet most needs. Azure offers a comprehensive and impressive suite of managed analytical products, supporting open source big data solutions alongside newer serverless analytical products such as Data Lake. Google provides its own twist on cloud analytics with its range of services; with Dataproc and Dataflow, Google has a strong core to its proposition. Tensorflow has been getting a lot of attention recently, and many will be keen to see Machine Learning come out of preview.
Category: Serverless Description: Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers. References: [AWS]: AWS Lambda [Azure]: Azure Functions [Google]: Google Cloud Functions Tags: #AWSLambda, #AzureFunctions, #GoogleCloudFunctions Differences: AWS Lambda, Azure Functions, and Google Cloud Functions all offer dynamic, configurable triggers that you can use to invoke your functions on their platforms, and all three support Node.js, Python, and C#. The beauty of serverless development is that, with minor changes, the code you write for one service should be portable to another with little effort: simply modify some interfaces and handle any input/output transforms, and an AWS Lambda Node.js function is nearly indistinguishable from a Microsoft Azure Node.js Function. AWS Lambda provides further support for Java, while Azure Functions provides support for F# and PHP. AWS Lambda functions are built from an AMI that runs on Linux, while Microsoft Azure Functions run in a Windows environment. AWS Lambda also reduces the scope of containerization, letting you spin up and tear down individual pieces of functionality in your application at will.
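As a small illustration of that portability point, here is a sketch of a Python handler in the AWS Lambda style; the handler name and payload fields are illustrative, and the equivalent Azure or Google Cloud function differs mainly in its entry-point signature and packaging:

    import json

    def handler(event, context):
        # 'event' carries the trigger payload; 'context' carries runtime metadata.
        name = event.get("name", "world") if isinstance(event, dict) else "world"
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }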
Category: Caching Description: An in-memory, distributed caching service that provides a high-performance store, typically used to offload non-transactional work from a database. References: [AWS]: AWS ElastiCache (works as an in-memory data store and cache to support the most demanding applications requiring sub-millisecond response times) [Azure]: Azure Cache for Redis (based on the popular software Redis; typically used as a cache to improve the performance and scalability of systems that rely heavily on backend data stores) [Google]: Memcache (in-memory key-value store, originally intended for caching) Tags: #Redis, #Memcached Differences: They all support horizontal scaling via sharding, and they all improve the performance of web applications by letting you retrieve information from fast in-memory caches instead of relying on slower disk-based databases. ElastiCache supports both Memcached and Redis. Memcached Cloud provides various data persistence options as well as remote backups for disaster recovery purposes. Redis offers persistence to disk; Memcache does not. Persistence can be very helpful if you cache lots of data, since you avoid the slowness of a fully cold cache. Redis also offers several extra data structures that Memcache doesn't: lists, sets, sorted sets, and so on, whereas Memcache only has key/value pairs. Memcache is multi-threaded; Redis is single-threaded and event-driven. Redis is very fast, but it will never be multi-threaded. At high scale, you can squeeze more connections and transactions out of Memcache, and Memcache tends to be more memory efficient, which can make a big difference in the range of tens or hundreds of millions of keys.
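To make the data-structure difference concrete, here is a sketch using the redis-py client against a hypothetical ElastiCache or Azure Cache for Redis endpoint (the hostname and keys are placeholders):

    import redis

    r = redis.Redis(host="my-cache.example.com", port=6379)  # hypothetical endpoint

    # Plain key/value, the only model Memcached offers.
    r.set("session:42", "alice")
    print(r.get("session:42"))

    # Richer Redis-only structures: lists and sorted sets.
    r.lpush("recent:42", "page:home")
    r.zadd("leaderboard", {"alice": 1500})
    print(r.zrange("leaderboard", 0, -1, withscores=True))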
Category: Enterprise application services Description: Fully integrated cloud service providing communications, email, and document management in the cloud, available on a wide variety of devices. References: [AWS]: Amazon WorkMail, Amazon WorkDocs, Amazon Kendra (sync and index) [Azure]: Office 365 [Google]: G Suite Tags: #AmazonWorkDocs, #Office365, #GoogleGSuite Differences: G Suite document-processing applications like Google Docs are far behind Office 365's popular Word and Excel software, but the G Suite user interface is intuitive, simple, and easy to navigate, whereas Office 365 can feel clunky.
Category: Management Description: A unified management console that simplifies building, deploying, and operating your cloud resources. References: [AWS]: AWS Management Console, Trusted Advisor, AWS Usage and Billing Report, AWS Application Discovery Service, Amazon EC2 Systems Manager, AWS Personal Health Dashboard, AWS Compute Optimizer (identifies optimal AWS compute resources) [Azure]: Azure portal, Azure Advisor, Azure Billing API, Azure Migrate, Azure Monitor, Azure Resource Health [Google]: Google Cloud Platform Console, Cost Management, Security Command Center, Stackdriver Tags: #AWSConsole, #AzurePortal, #GoogleCloudConsole, #TrustedAdvisor, #AzureMonitor, #SecurityCommandCenter Differences: The AWS Console categorizes its Infrastructure-as-a-Service offerings into Compute, Storage and Content Delivery Network (CDN), Database, and Networking to help businesses and individuals grow. Azure excels in the hybrid cloud space, allowing companies to integrate on-site servers with cloud offerings. Google has a strong offering in containers, since Google developed the Kubernetes standard that AWS and Azure now offer. GCP specializes in high-compute offerings like big data, analytics, and machine learning; it also offers considerable scale and load balancing, since Google knows data centers and fast response times.
Build and connect intelligent bots that interact with your users using text/SMS, Skype, Teams, Slack, Office 365 mail, Twitter, and other popular services.
Enables both Speech-to-Text and Text-to-Speech capabilities. Azure's Speech Services unify speech-to-text, text-to-speech, and speech translation into a single Azure subscription; it's easy to speech-enable your applications, tools, and devices with the Speech SDK, Speech Devices SDK, or REST APIs. On the AWS side, Amazon Polly is a Text-to-Speech (TTS) service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice. With dozens of lifelike voices across a variety of languages, you can select the ideal voice and build speech-enabled applications that work in many different countries. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.
Computer Vision: Extract information from images to categorize and process visual data. Amazon Rekognition is a simple and easy to use API that can quickly analyze any image or video file stored in Amazon S3. Amazon Rekognition is always learning from new data, and we are continually adding new labels and facial recognition features to the service.
Face: Detect, identify, and analyze faces in photos.
The Virtual Assistant Template brings together a number of best practices we’ve identified through the building of conversational experiences and automates integration of components that we’ve found to be highly beneficial to Bot Framework developers.
Processes and moves data between different compute and storage services, as well as on-premises data sources at specified intervals. Create, schedule, orchestrate, and manage data pipelines.
Virtual servers allow users to deploy, manage, and maintain OS and server software. Instance types provide combinations of CPU/RAM. Users pay for what they use with the flexibility to change sizes.
Allows you to automatically change the number of VM instances. You set defined metrics and thresholds that determine whether the platform adds or removes instances.
Redeploy and extend your VMware-based enterprise workloads to Azure with Azure VMware Solution by CloudSimple. Keep using the VMware tools you already know to manage workloads on Azure without disrupting network, security, or data protection policies.
Azure Container Instances is the fastest and simplest way to run a container in Azure, without having to provision any virtual machines or adopt a higher-level orchestration service.
Deploy orchestrated containerized applications with Kubernetes. Simplify monitoring and cluster management through auto upgrades and a built-in operations console.
Fully managed service that enables developers to deploy microservices applications without managing virtual machines, storage, or networking. AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. App Mesh standardizes how your services communicate, giving you end-to-end visibility and ensuring high-availability for your applications.
Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers. AWS Lambda is an event-driven, serverless computing platform provided by Amazon as part of Amazon Web Services. It is a computing service that runs code in response to events and automatically manages the computing resources required by that code.
Managed relational database service where resiliency, scale, and maintenance are primarily handled by the platform. Amazon Relational Database Service is a distributed relational database service by Amazon Web Services. It is a web service running “in the cloud” designed to simplify the setup, operation, and scaling of a relational database for use in applications. Administration processes like patching the database software, backing up databases and enabling point-in-time recovery are managed automatically. Scaling storage and compute resources can be performed by a single API call as AWS does not offer an ssh connection to RDS instances.
An in-memory–based, distributed caching service that provides a high-performance store typically used to offload non transactional work from a database. Amazon ElastiCache is a fully managed in-memory data store and cache service by Amazon Web Services. The service improves the performance of web applications by retrieving information from managed in-memory caches, instead of relying entirely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines: Memcached and Redis.
Migration of database schema and data from one database format to a specific database technology in the cloud. AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.
Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. AWS X-Ray is an application performance management service that enables a developer to analyze and debug applications in the Amazon Web Services (AWS) public cloud. A developer can use AWS X-Ray to visualize how a distributed application is performing during development or production, and across multiple AWS regions and accounts.
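For example, a sketch of publishing a custom metric to CloudWatch with boto3; the namespace and metric name are illustrative, and configured AWS credentials are assumed:

    import boto3

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace="MyApp",
        MetricData=[{
            "MetricName": "QueueDepth",
            "Value": 42,
            "Unit": "Count",
        }],
    )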
A cloud service for collaborating on code development. AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. AWS CodeCommit is a source code storage and version-control service for Amazon Web Services’ public cloud customers. CodeCommit was designed to help IT teams collaborate on software development, including continuous integration and application delivery.
Collection of tools for building, debugging, deploying, diagnosing, and managing multiplatform scalable apps and services. The AWS Developer Tools are designed to help you build software like Amazon. They facilitate practices such as continuous delivery and infrastructure as code for serverless, containers, and Amazon EC2.
Built on top of the native REST API across all cloud services, various programming language-specific wrappers provide easier ways to create solutions. The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
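For instance, here is a sketch of the same idea from Python using boto3, AWS's SDK wrapper; the bucket and file names are placeholders, and configured credentials are assumed:

    import boto3

    s3 = boto3.client("s3")
    s3.upload_file("backup.tar.gz", "my-backup-bucket", "2024/backup.tar.gz")

    # The equivalent AWS CLI call:
    #   aws s3 cp backup.tar.gz s3://my-backup-bucket/2024/backup.tar.gz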
Configures and operates applications of all shapes and sizes, and provides templates to create and manage a collection of resources. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers.
Provides a way for users to automate the manual, long-running, error-prone, and frequently repeated IT tasks. AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.
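As a small sketch of that model, creating a stack from an inline template with boto3; the stack and resource names are illustrative, and credentials are assumed:

    import textwrap
    import boto3

    TEMPLATE = textwrap.dedent("""\
        AWSTemplateFormatVersion: '2010-09-09'
        Resources:
          DemoBucket:
            Type: AWS::S3::Bucket
        """)

    cfn = boto3.client("cloudformation")
    cfn.create_stack(StackName="demo-stack", TemplateBody=TEMPLATE)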
Provides an isolated, private environment in the cloud. Users have control over their virtual networking environment, including selection of their own IP address range, creation of subnets, and configuration of route tables and network gateways.
Connects Azure virtual networks to other Azure virtual networks, or customer on-premises networks (Site To Site). Allows end users to connect to Azure services through VPN tunneling (Point To Site).
A service that hosts domain names, plus routes users to Internet applications, connects user requests to datacenters, manages traffic to apps, and improves app availability with automatic failover.
Application Gateway is a layer 7 load balancer. It supports SSL termination, cookie-based session affinity, and round robin for load-balancing traffic.
Azure Digital Twins is an IoT service that helps you create comprehensive models of physical environments. Create spatial intelligence graphs to model the relationships and interactions between people, places, and devices. Query data from a physical space rather than disparate sensors.
Provides analysis of cloud resource configuration and security so subscribers can ensure they’re making use of best practices and optimum configurations.
Allows users to securely control access to services and resources while offering data security and protection. Create and manage users and groups, and use permissions to allow and deny access to resources.
Role-based access control (RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.
Provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication that are fully compatible with Windows Server Active Directory.
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements.
Azure management groups provide a level of scope above subscriptions. You organize subscriptions into containers called “management groups” and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Management groups give you enterprise-grade management at a large scale, no matter what type of subscriptions you have.
Helps you protect and safeguard your data and meet your organizational security and compliance commitments.
Category: Key management References: [AWS]: AWS KMS, CloudHSM [Azure]: Azure Key Vault
Provides security solution and works with other services by providing a way to manage, create, and control encryption keys stored in hardware security modules (HSM).
Provides inbound protection for non-HTTP/S protocols, outbound network-level protection for all ports and protocols, and application-level protection for outbound HTTP/S.
An automated security assessment service that improves the security and compliance of applications. Automatically assess applications for vulnerabilities or deviations from best practices.
Object storage service, for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.
Provides a simple interface to create and configure file systems quickly, and share common files. Can be used with traditional protocols that access files over a network.
Easily join your distributed microservice architectures into a single global application using HTTP load balancing and path-based routing rules. Automate turning up new regions and scale-out with API-driven global actions, and independent fault-tolerance to your back end microservices in Azure—or anywhere.
Cloud technology to build distributed applications using out-of-the-box connectors to reduce integration challenges. Connect apps, data and devices on-premises or in the cloud.
Serverless technology for connecting apps, data and devices anywhere, whether on-premises or in the cloud for large ecosystems of SaaS and cloud-based connectors.
Azure Stack is a hybrid cloud platform that enables you to run Azure services in your company’s or service provider’s datacenter. As a developer, you can build apps on Azure Stack. You can then deploy them to either Azure Stack or Azure, or you can build truly hybrid apps that take advantage of connectivity between an Azure Stack cloud and Azure.
Basically, it all comes down to what your organizational needs are and whether there's a particular area that's especially important to your business (for example, serverless, or integration with Microsoft applications).
The main things it comes down to are compute options, pricing, and purchasing options.
Here’s a brief comparison of the compute option features across cloud providers:
Here’s an example of a few instances’ costs (all are Linux OS):
Each provider offers a variety of options to lower costs from the listed on-demand prices. These fall under reservations, spot and preemptible instances, and contracts.
Both AWS and Azure offer a way for customers to purchase compute capacity in advance in exchange for a discount: AWS Reserved Instances and Azure Reserved Virtual Machine Instances. There are a few interesting variations between the instances across the cloud providers which could affect which is more appealing to a business.
Another discounting mechanism is the idea of spot instances in AWS and low-priority VMs in Azure. These options allow users to purchase unused capacity for a steep discount.
With AWS and Azure, enterprise contracts are available. These are typically aimed at enterprise customers, and encourage large companies to commit to specific levels of usage and spend in exchange for an across-the-board discount – for example, AWS EDPs and Azure Enterprise Agreements.
You can read more about the differences between AWS and Azure to help decide which your business should use in this blog post