What are Azure Native Services?

Azure Native Services Overview.

Azure native services are cloud-based solutions that are developed, managed, and supported by Microsoft. These services are designed to help organizations build and deploy applications on the Azure cloud platform, and take advantage of the scalability, security, and reliability of the Azure infrastructure. In this blog post, we’ll take a look at some of the key Azure native services that are available, and how they can be used to build and run cloud-based applications.

What are the native services in Azure?

Azure Virtual Machines Overview.

Azure Virtual Machines (VMs): Azure VMs allow you to create and manage virtual machines in the Azure cloud. You can choose from a variety of VM sizes and configurations, and you can use your own images or choose from a wide range of pre-configured images that are available in the Azure Marketplace. Azure VMs are a cost-effective way to run a wide range of workloads in the cloud, including web servers, databases, and applications.

Azure Virtual Machines (VMs) are a service provided by Microsoft Azure that allow users to create and run virtual machines in the cloud. VMs are based on a variety of operating systems and can be used for a wide range of workloads, including running applications, hosting websites, and performing data processing tasks.

With Azure VMs, users can quickly spin up a new VM, choosing from a variety of pre-configured virtual machine images or creating their own custom image. Users also have the ability to scale the resources of a VM (such as CPU and memory) up or down as needed, and can also create multiple VMs in a virtual network to build a scalable and fault-tolerant solution.

Azure VMs also provide an additional layer of security, allowing you to apply security policies and firewall rules, and to integrate with Azure AD for identity-based access to the VMs.

Additionally, Azure VMs can be combined with other Azure services, such as Azure Storage and Azure SQL Database, to create a complete and highly-available solution for running applications and storing data in the cloud.
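As a rough sketch of what a VM definition looks like under the hood, the payload below mirrors the general shape the azure-mgmt-compute SDK (or an ARM template) expects when creating a VM. Every name, region, size, and credential here is a placeholder, not a recommendation:

```python
# Hypothetical sketch: the kind of parameter payload Azure's compute API
# expects when creating a VM. Resource names, region, and size are
# placeholders; in practice a structure like this is passed to the
# azure-mgmt-compute SDK or expressed as an ARM template.

def build_vm_parameters(vm_name: str, admin_user: str, admin_password: str) -> dict:
    """Assemble a minimal VM definition: size, OS image, and admin credentials."""
    return {
        "location": "eastus",  # placeholder region
        "hardware_profile": {"vm_size": "Standard_B2s"},  # small burstable size
        "storage_profile": {
            "image_reference": {  # a Marketplace Ubuntu image
                "publisher": "Canonical",
                "offer": "0001-com-ubuntu-server-jammy",
                "sku": "22_04-lts",
                "version": "latest",
            }
        },
        "os_profile": {
            "computer_name": vm_name,
            "admin_username": admin_user,
            "admin_password": admin_password,
        },
    }

params = build_vm_parameters("web-01", "azureadmin", "P@ssw0rd-placeholder")
```

The same definition could equally come from a pre-configured Marketplace image or a custom image, as described above.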

Azure Kubernetes Service Overview.

Azure Kubernetes Service (AKS): AKS is a fully managed Kubernetes service that makes it easy to deploy, scale, and manage containerized applications. With AKS, you can deploy and run containerized applications on Azure with just a few clicks, and you can scale your deployments up or down as needed to meet changing demand. AKS is a great choice for organizations that are looking to build cloud-native applications that are scalable, resilient, and easy to manage.

AKS makes it easy to deploy and manage a Kubernetes cluster on Azure, allowing developers to focus on their applications, rather than the underlying infrastructure.

AKS is built on top of the Kubernetes open-source container orchestration system and enables users to quickly create and manage a cluster of virtual machines that run containerized applications.

With AKS, users can easily deploy and scale their containerized applications and services, and can also take advantage of built-in Kubernetes features such as automatic scaling, self-healing, and rolling updates. Additionally, you can monitor and troubleshoot Kubernetes clusters with the help of Azure Monitor and Log Analytics.

AKS also integrates well with other Azure services, such as Azure Container Registry, Azure Monitor, and Azure Active Directory, to provide a complete solution for managing containerized applications in the cloud. Additionally, AKS supports Azure DevOps and other CI/CD tools.

By using AKS, organizations can benefit from the flexibility and scalability of containers, and can also take advantage of Azure’s global network of data centers and worldwide compliance certifications to build and deploy applications with confidence.
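Because AKS runs standard Kubernetes, workloads are described with ordinary Kubernetes manifests. Below is the Python-dict equivalent of a minimal apps/v1 Deployment manifest you might apply to an AKS cluster with kubectl; the image name (an Azure Container Registry path) and replica count are placeholders:

```python
# Illustrative only: a minimal Kubernetes Deployment, expressed as the Python
# dict equivalent of the YAML manifest you would apply to an AKS cluster.
# The image name and replica count are placeholders.

def make_deployment(name: str, image: str, replicas: int = 3) -> dict:
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,  # AKS autoscaling can adjust this up or down
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {"name": name, "image": image, "ports": [{"containerPort": 80}]}
                    ]
                },
            },
        },
    }

manifest = make_deployment("hello-web", "myregistry.azurecr.io/hello-web:v1")
```

Rolling updates and self-healing then come for free: changing the image field and re-applying the manifest triggers a rolling update, and Kubernetes recreates any pod that fails.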

Azure Functions Overview.

Azure Functions: Azure Functions is a serverless compute service that allows you to run code in response to specific triggers, such as a change in data or a request from another service. Azure Functions is a great way to build and deploy microservices, and it’s especially useful for organizations that need to process large volumes of data or perform tasks on a regular schedule.

Azure Functions is a serverless compute service provided by Microsoft Azure that allows developers to run event-triggered code in the cloud without having to provision or manage any underlying infrastructure.

Azure Functions allows developers to write code in a variety of languages, including C#, JavaScript, F#, Python, and more, and can be triggered by a wide range of events, including HTTP requests, messages in a queue, or changes in data stored in Azure. Once an Azure Function is triggered, it is executed in an ephemeral container, meaning that the developer does not need to worry about the underlying infrastructure or scaling.

Azure Functions are designed to be small, single-purpose functions that respond to specific events. They can also integrate with other Azure services and connectors to create end-to-end data processing and workflow pipelines.

Azure Functions provides an efficient and cost-effective way to run and scale code in the cloud. Because Azure automatically provisions and scales the underlying infrastructure, it can be a cost-effective option for running infrequently used or unpredictable workloads. Additionally, Azure Functions can be hosted on a Consumption plan, an App Service plan, or as a Kubernetes pod, which provides more flexibility and options for production workloads.

Overall, Azure Functions is a powerful, serverless compute service that enables developers to build and run event-driven code in the cloud, without having to worry about the underlying infrastructure.
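To make the "small, single-purpose" idea concrete, here is the core of a hypothetical queue-triggered function. The azure.functions trigger and binding boilerplate is deliberately omitted so the logic stands on its own, and the message schema is invented purely for illustration:

```python
# A sketch of the kind of small, single-purpose handler an Azure Function
# wraps. In a real Function this would be called from an HTTP- or
# queue-triggered entry point; the order schema here is hypothetical.
import json

def handle_order_message(raw_message: str) -> dict:
    """Parse a queue message and compute an order total (illustrative schema)."""
    order = json.loads(raw_message)
    total = sum(item["qty"] * item["price"] for item in order["items"])
    return {"order_id": order["id"], "total": round(total, 2)}

result = handle_order_message(
    '{"id": "A-100", "items": [{"qty": 2, "price": 9.99}, {"qty": 1, "price": 5.00}]}'
)
```

Keeping the business logic separate from the trigger wiring like this also makes the function easy to unit-test outside the cloud.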

Azure SQL Database Overview.

Azure SQL Database is a fully managed relational database service provided by Microsoft Azure for deploying and managing structured data in the cloud. Azure SQL Database is built on top of Microsoft SQL Server and is designed to make it easy for developers to create and manage relational databases in the cloud without having to worry about infrastructure or scaling.

Azure SQL Database supports a wide range of data types and provides robust security features, such as transparent data encryption and advanced threat protection. Additionally, it provides built-in high availability and disaster recovery options, which eliminates the need to set up and configure on-premises infrastructure.

With Azure SQL Database, developers can quickly and easily create a new database and start working with data, while the service automatically manages the underlying infrastructure and scaling. Additionally, it provides a rich set of tools for monitoring, troubleshooting, and performance tuning.

Azure SQL Database also provides a number of options for deploying and managing databases, including single databases, elastic pools, and Managed Instance. Single databases and elastic pools are ideal for smaller workloads with predictable traffic patterns, while Managed Instance is suited to larger, more complex workloads that need more control over the infrastructure.

Azure SQL Database can be integrated with other Azure services, such as Azure Data Factory and Azure Machine Learning, to create a complete data platform for building and deploying cloud-based applications and services.
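As a small illustration of how applications connect, the helper below assembles the standard ODBC connection string format for Azure SQL Database. The server, database, and credential values are placeholders, and in practice the resulting string would be handed to a driver such as pyodbc:

```python
# Hedged example: assembling the standard ODBC connection string for an
# Azure SQL Database. All values passed in below are placeholders; the
# result would typically be passed to pyodbc.connect().

def azure_sql_connection_string(server: str, database: str,
                                user: str, password: str) -> str:
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server=tcp:{server}.database.windows.net,1433;"
        f"Database={database};"
        f"Uid={user};Pwd={password};"
        "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
    )

conn_str = azure_sql_connection_string("myserver", "mydb", "sqladmin", "placeholder")
```

Note that Encrypt=yes is the sensible default here, since Azure SQL Database requires encrypted connections.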

Azure Storage Overview.

Azure Storage is a cloud-based service provided by Microsoft Azure for storing and managing unstructured data, such as binary files, text files, and media files. Azure Storage includes several different storage options, including Azure Blob storage, Azure File storage, Azure Queue storage, and Azure Table storage.

Azure Blob storage is optimized for storing massive amounts of unstructured data, such as text or binary data, and serves as the foundation of many other Azure services, including Azure Backup, Azure Media Services, and Azure Machine Learning.

Azure File storage is a service that allows you to create file shares in the cloud, accessible from any SMB 3.0 compliant client. This can be useful for scenarios where you have legacy applications that need to read and write files to a file share.

Azure Queue storage is a service for storing and retrieving messages in a queue, used to exchange messages between components of a distributed application.

Azure Table storage is a service for storing and querying structured NoSQL data in the form of a key-value store.

All of these services allow you to store and retrieve data in the cloud using standard REST and SDK APIs, and they can be accessed from anywhere in the world via HTTP or HTTPS.
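The REST addressing scheme is predictable: each blob lives at an HTTPS URL derived from the storage account and container names. A tiny sketch (account, container, and blob names are placeholders; authentication is not shown):

```python
# The Blob service endpoint follows a well-known pattern derived from the
# storage account name. Names below are placeholders.

def blob_url(account: str, container: str, blob_name: str) -> str:
    """HTTPS endpoint for a blob (authentication not shown)."""
    return f"https://{account}.blob.core.windows.net/{container}/{blob_name}"

url = blob_url("mystorageacct", "images", "logo.png")
```

File, Queue, and Table storage follow the same convention with the `file`, `queue`, and `table` subdomains respectively.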

Azure Storage also provides built-in redundancy and automatically replicates data to ensure that it is always available, even in the event of an outage. It also provides automatic load balancing and offers built-in data protection, data archiving, and data retention options.

With Azure Storage, developers can easily and cost-effectively store and manage large amounts of unstructured data in the cloud, and take advantage of Azure’s global network of data centers and worldwide compliance certifications to build and deploy applications with confidence.

Azure Networking Overview.

Azure Networking is a set of services provided by Microsoft Azure for creating, configuring, and managing virtual networks (VNets) in the cloud. Azure Networking allows users to connect Azure resources and on-premises resources together in a secure and scalable manner.

With Azure Virtual Network, you can create a virtual representation of your own network in the Azure cloud, and define subnets, private IP addresses, security policies, and routing rules for those subnets. Virtual Network gives you a fully isolated network environment in Azure, including the ability to use your own IP address ranges and your own Domain Name System (DNS) servers.

Azure ExpressRoute enables you to create private connections between Azure data centers and infrastructure that’s on your premises or in a colocation facility. ExpressRoute connections don’t go over the public internet, and they offer more reliability, faster speeds, and lower latencies than typical connections over the internet.

Azure VPN Gateway allows you to create secure, cross-premises connections to your virtual network from anywhere in the world. You can use VPN gateways to establish secure connections to your virtual network from other virtual networks in Azure, or from on-premises locations.

Azure Load Balancer distributes incoming traffic among multiple virtual machines, ensuring that no single virtual machine is overwhelmed by too much traffic. Load Balancer supports both external and internal traffic, and it is agnostic to the application-layer protocol.

Azure Network Security Group allows you to apply network-level security to your Azure resources by creating security rules to control inbound and outbound traffic.
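To make that concrete, an NSG security rule is essentially a record of priority, direction, access, protocol, and source/destination filters. Here is an illustrative "allow inbound HTTPS" rule with placeholder values, in the general shape Azure's network API uses:

```python
# Illustrative sketch of an NSG security rule: priority, direction, access,
# protocol, and port/address filters. Values are placeholders for an
# "allow inbound HTTPS" rule.

def allow_https_inbound(priority: int = 100) -> dict:
    return {
        "name": "Allow-HTTPS-Inbound",
        "priority": priority,  # lower number = evaluated first
        "direction": "Inbound",
        "access": "Allow",
        "protocol": "Tcp",
        "source_address_prefix": "*",
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": "443",
    }

rule = allow_https_inbound()
```

Rules are evaluated in priority order, so a low-numbered Deny rule can override a higher-numbered Allow.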

Overall, Azure Networking services provide a comprehensive set of tools for creating, configuring, and managing virtual networks in the cloud, and allow organizations to securely connect their Azure resources with on-premises resources. They also provide a set of security features to protect the resources in the network.

Azure Native Services Conclusion.

In conclusion, Azure native services provide a powerful and flexible platform for building and running cloud-based applications. Whether you’re looking to create a new application from scratch, or you’re looking to migrate an existing application to the cloud, Azure has a range of native services that can help you achieve your goals. By using Azure native services, you can take advantage of the scalability, security, and reliability of the Azure infrastructure, and you can build and deploy applications that are designed to meet the needs of your organization.

How to change the Tiering of Azure Blobs

If you are an Azure Cloud Storage user, you will know that Azure offers three different storage tiers for storing data: hot, cool, and archive tiers. These Azure Storage tiers offer different levels of performance and cost efficiency. In this post, we’ll show you how to move blob content between these storage tiers, whether it is an individual Azure Blob, multiple Azure Blobs, or all the Azure Blobs in your Storage Accounts, using Cloud Storage Manager.

Why Change Azure Blob Tiering?

There are cost benefits to moving your Azure Blobs down to a lower Storage Tier. Hot is the most expensive, with cool a little bit cheaper, and the Archive Blob Tier having the lowest cost option. For more Azure Storage Cost-saving ideas, we cover some great cost-saving initiatives in another blog, or perhaps you want to see just how much Azure Storage you are consuming across your complete Microsoft Tenancy in this blog. These are just some of the functions you can perform with Cloud Storage Manager.

 

Cloud Storage Manager is a powerful tool for managing your Azure Cloud Storage.

Easily change single, multiple or all your Blobs to any selected Azure Storage Tier

In this blog post, I will show you how easy it is to move a single Azure Blob or even select multiple or the complete container and move those blobs from any storage tiering to another with just a few clicks. 

Perhaps you want to set up an Azure Storage Lifecycle Management policy, which will automatically move Azure Blobs to a different Storage Tier when certain conditions have been met. One of our previous blog posts shows you exactly how to do this.
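For context, a lifecycle management policy is just a JSON rule set attached to the storage account. Here is an illustrative policy (the rule name and day thresholds are placeholders) that tiers blobs down as they age:

```python
# Illustrative only: the JSON shape of an Azure Storage lifecycle management
# policy. Rule name and day thresholds are placeholders; in practice this
# JSON is applied to the storage account via the portal, CLI, or API.
import json

policy = {
    "rules": [
        {
            "name": "age-out-blobs",  # hypothetical rule name
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                "filters": {"blobTypes": ["blockBlob"]},
                "actions": {
                    "baseBlob": {
                        # move to Cool after 30 days without modification...
                        "tierToCool": {"daysAfterModificationGreaterThan": 30},
                        # ...and to Archive after 90 days
                        "tierToArchive": {"daysAfterModificationGreaterThan": 90},
                    }
                },
            },
        }
    ]
}

policy_json = json.dumps(policy, indent=2)
```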

This blog post, however, shows how to manually move a single Azure Blob, or a bulk selection of Azure Blobs, to a different storage tier.

Launch Cloud Storage Manager

Hopefully you have installed Cloud Storage Manager and have completed a full scan of your Azure Storage Accounts.

If you haven't installed it yet, or want to download a demo and try it for yourself, you can get a trial by visiting this page.

So moving on, I'll assume you have completed the above. First, open Cloud Storage Manager and navigate the Azure Storage Tree to locate the Storage Account you want to change the Blob Storage Tiering for.


Choose Azure Storage Account

Select the Blob View

Select the Azure Storage Container you want to change the Blob Tiering for, then select the Blob View Window.

Here you will see information about all the Azure Blobs in that Storage container. 

You will notice the Tiering of each of the Blobs: a Fire symbol shows they are in the Hot Storage Tier, an Icicle symbol shows they reside in the Cool Storage Tier, and a Tape icon indicates the Azure Blobs are in the Azure Archive Storage Tier.


Azure Blob Storage Tiering

Select the Blobs you want to change Storage Tiering for.

Now from this view you can see all the Azure Blobs in that Storage Container. 

You can see details like the Blob Name; which Subscription, Storage Account and Container the Blob resides in; its size; its Tiering (of course, that's why we are here); and when the Blob was created and last modified.

Simply select the Azure Blobs you want to change the Tiering of, using Ctrl and a left click of the mouse to select multiple individual Blobs, or press Ctrl+A to select them all.


Select Blobs to change tier

Select individual or multiple Azure Blobs.

Now right click a Blob and you will be shown the Cloud Storage Manager right click menu. 

You can perform the following functions on the selected Azure Blobs from this menu:

  • Change the Tier of the selected Azure Blobs
  • Delete the Selected Azure Blobs
  • Show or Hide the Parent Columns
  • Jump back to the Storage Account Window
  • Jump back to the Container Window
  • See the Blob Properties
  • Copy the Values displayed of the Blobs
  • Download the selected Blobs to your own Computer
  • Refresh the view you are currently looking at.

Now, since we want to change the tiering of the selected Azure Blobs, I am going to left click on Change Tier.


Change individual and multiple Blob Tier

Choose the new tier for the selected Azure Blobs.

This is where we now select which tier you want the selected Azure Blobs changed to. 

In this example, I'll move those Blobs to the Archive Storage Tier.

Select Archive Tier and then click on OK.


Change Azure Blob Access Tier

Azure Blob Tiering Change

Cloud Storage Manager has now initiated a tiering change job for all the Azure Blobs I have selected. 

Cloud Storage Manager will now change the storage tiering type for each of these Azure Blobs selected.


Azure Blob Tier Updates

Check the status of the Blob Tiering Change

You can see the status of Azure Blob Tiering change by viewing the Activity Log within Cloud Storage Manager.

Each of those Blobs selected are now being moved to the Azure Blob Storage Archive Tier.


Cloud Storage Manager Activity Log

View the change in Tiering of your Azure Blobs

Now go back to the Blob view of the Storage Container we originally selected.

You can see here that those Azure Blobs I selected have now changed to the Archive Tier.

And that's all there is to it: we changed the selected Azure Blobs' storage tier from the Hot Tier to the Archive Tier with just a few clicks.

If you want to trial Cloud Storage Manager for yourself, simply visit the Cloud Storage Manager product page and download a trial for yourself.

Not only can Cloud Storage Manager change the tiering of bulk amounts of Azure Blobs with a few clicks, you can also see which Storage Accounts are not being used, and gain further insights into your Azure Storage consumption.


Azure Blob Storage Tiering FAQs

What does blob stand for in Azure?
Blob stands for Binary Large OBject.
What are the 3 tiers of blob storage?
The 3 tiers of blob storage are Hot, Cool, and Archive.
What is block blob vs append blob vs page blob?
Block blobs are used for streaming and storing large binary objects. Append blobs are used for logging scenarios. Page blobs are used for random read/write operations.
What is the hot tier used for in Azure Storage?
The Hot tier is used for frequently accessed data that requires high-performance access times.
What is the cool tier used for in Azure Storage?
The Cool tier is used for infrequently accessed data that requires low-cost storage.
What is the archive tier used for in Azure Storage?
The Archive tier is used for rarely accessed data that needs to be stored for a long time at a very low cost.
What is the difference between blob and block blob?
Blob is a general term for data stored in Azure Storage. Block blobs are a type of blob that are used for large binary objects.
What is the content type of Azure blobs?
Azure blobs can have any content type specified by the user.
How can I get the consumption of my Azure Blob Storage?
Use Cloud Storage Manager to get an overview of, and more in-depth insights into, your Azure Storage consumption. Find out exactly how much storage each storage account, container or folder contains.
Is Azure blob storage equivalent to S3?
Yes, Azure Blob Storage is Azure's closest equivalent to Amazon S3 in AWS.
Is Azure blob storage a data lake?
Not by itself, but Azure Data Lake Storage Gen2 is built on top of Azure Blob Storage.
Can Azure blob storage store structured data?
Yes, Azure blob storage can store structured data in the form of text, JSON, or XML files.
What is the maximum number of blob containers in an Azure storage account?
There is no fixed limit on the number of blob containers in an Azure storage account; you are limited only by the capacity of the storage account itself.
How to update to SCCM 1906

Step-by-step guide on how to update to SCCM 1906

SCCM 1906 upgrade

Hi all, Microsoft has just released the latest iteration of SCCM, version 1906. This update of SCCM brings quite a few welcome features; to find out more, go to this link on the Microsoft site.

But you are here to see how easy it is to upgrade your SCCM to the current branch, 1906.

So let’s get started on the upgrade process for SCCM 1906.


  1. As always when performing any upgrades, make sure that you have a known-good backup or snapshot of the servers you are targeting. Once you have confirmed you have either or both of these in place, open your SCCM console with your SCCM administration account.
    SCCM Console Main Window


  2. Now click on Administration then highlight Updates and Servicing.
    SCCM Administration Pane


     

  3. Now in the main window you should hopefully see the SCCM 1906 update is Ready to Install. If not, click on Check for Updates and allow some time for SCCM to download the latest branch update from the Microsoft site. Remember to refresh your window, as it won't show up automatically.
    SCCM 1906 download ready

  4. As you can see from the prior picture, our environment has the SCCM 1906 update ready to install, and our previous SCCM 1902 updates have been installed. The first thing we need to do is run the prerequisite check. Highlight the Configuration Manager 1906 update, right click and choose Run prerequisite check.
    SCCM 1906 Run Prerequisite Check

  5. This check will take a little time; you can see the status of the check by refreshing the window.
    SCCM 1906 checking prerequisites
    You can also select Monitoring in the SCCM console window, navigate to Updates and Servicing, highlight the Configuration Manager 1906 update, right click it and choose Show Status for a more detailed display of the prerequisite check of SCCM 1906.
    (This may take some time while the 1906 upgrade confirms that your SCCM environment is ready for the install, so be patient.)
    SCCM 1906 checking prerequisites detailed

  6. OK, hopefully your SCCM environment passed all the checks to confirm it is ready for the SCCM 1906 installation. Now go back to Administration and you should see that the Prerequisite Check Passed for the Configuration Manager 1906 update, as below.
    SCCM 1906 checking prerequisites passed

  7. Now to the nitty-gritty: we are ready to start the upgrade process for SCCM 1906. Again, highlight the Configuration Manager 1906 update, right click and choose Install Update Pack.
    SCCM 1906 install update pack

     

  8. You are now presented with the Configuration Manager Updates Wizard. Click on Next when you are ready to start the installation. This will take around 30 to 45 minutes to complete, so make sure you have a big enough change window for the update process.
    SCCM 1906 Configuration Manager Updates Wizard

  9. On the second window of the SCCM Updates Wizard, carefully choose any options/features that you require and then click on Next.
    SCCM 1906 Configuration Manager Updates Wizard Options

     

  10. The Client Update Settings window allows you to choose whether to validate the client update on members of a Pre-Production collection, so that you can test that there are no issues with the update. As this is one of SmiKar's software test environments, I am going to just Upgrade without Validating. When you are ready to proceed, click on Next.
    SCCM 1906 Configuration Manager Updates Wizard Client Updates

     

  11. We are almost ready to start the installation and upgrade to SCCM 1906. On the License Terms window, you can read the License Terms if you wish to do so; click on Next when you are ready for the next step.
    SCCM 1906 Configuration Manager Updates Wizard License Acceptance

  12. Now the last step before the upgrade process starts. On the Configuration Manager Updates Wizard Summary window, check that the settings and details you want have been selected, and then finally click on Next. This will start the upgrade to SCCM 1906, so make sure you want to do this.
    SCCM 1906 Configuration Manager Updates Wizard Summary

     

  13. You can now close the last window, and the SCCM 1906 update will complete in the background.
    SCCM 1906 Configuration Manager Updates Wizard Completed

  14. Now, what if you don't want to refresh the SCCM console window and wish to see more details about what is happening with the update to SCCM 1906? You can easily get a more detailed view of the upgrade process by going to the local C drive of your SCCM server and opening the ConfigMgrSetup.log file. If you have Trace32 installed to read your log files, it will display this in a nice, easy-to-read fashion.
    SCCM 1906 ConfigMgrSetup Log

  15. Alternatively, to see the update in your SCCM console, go to Monitoring, then Updates and Servicing Status, highlight the Configuration Manager 1906 update, right click and choose Show Status. In the Update Pack Installation Status window, highlight Installation and you can also see what the SCCM 1906 update is doing.
    SCCM 1906 Update Status Window

  16. Hopefully after some time (it took around 30 minutes to complete the upgrade to SCCM 1906 in our test environment), everything should have installed and updated your SCCM to the latest branch. You may get a warning that your Configuration Manager console needs to be updated as well.
    SCCM 1906 Console Update

  17. Close the console, then reopen it to update. (The SCCM 1906 console update took a few minutes to complete in the lab.)
    SCCM 1906 Console Update

  18. Finally, after quite a few easy steps, we can confirm that the SCCM environment was successfully updated to the current branch, SCCM 1906.
    SCCM 1906 install success
    SCCM 1906 Installed

Now that you have updated your SCCM to version 1906, perhaps you use SCCM to patch your virtual server environment. While you are here, check out SnaPatch and see how it allows you to have an easy rollback position should any issues with your patch deployment occur.

Schedule Azure VM Deployment

How to schedule the deployment of Azure VMs

Automate Azure VM Deployment

If you need to schedule the regular deployment of your Azure VMs, you can do this easily with one of our Azure tools, AVMD (Azure VM Deployer). With the Azure VM deployer you can deploy single and multiple Virtual Machines quickly, easily and repeatably.

AVMD is completely FREE, you can download it from here and start using it right away.

Hopefully you have AVMD all set up, and an Azure admin account with the appropriate permissions on your Azure tenant, to start the scheduled deployment of your Azure VMs.

Azure VM Deployment Use Cases

Dev/Test Environments

Scheduling Azure VM deployment for development and testing environments allows for efficient use of resources by creating and deleting VMs as needed. For example, a development team might schedule VM deployment during business hours and delete them at the end of the day to avoid wasting resources.

Batch Processing

Scheduling Azure VM deployment for batch processing can help optimize resource utilization by only creating VMs when needed. This can be particularly useful for applications that require large amounts of compute resources for short periods of time, such as data analysis or video encoding.

Disaster Recovery

Scheduling Azure VM deployment for disaster recovery can help ensure that backup VMs are always available and up-to-date. This can be critical in the event of a system failure or other outage.

Scheduled Maintenance

Scheduling Azure VM deployment for scheduled maintenance can help minimize the impact of maintenance on users by automatically redirecting traffic to other VMs while maintenance is performed.

High Availability

Scheduling Azure VM deployment for high availability can help ensure that VMs are always available to users. This can be achieved by automatically creating new VMs when existing ones fail or become unavailable.

Cost Optimization

Scheduling Azure VM deployment can help optimize costs by only creating VMs when they are needed, and deleting them when they are no longer needed. This can be particularly useful for organizations that have variable workloads or need to closely manage their cloud spend.

Launch the Azure VM Deployer

To start automating and scheduling deployment of your Azure VMs, simply open the Azure VM Deployer and let it synchronize with your Azure environment.

Schedule Azure VM Deployment

 

Scan your Azure Tenancy First

First, ensure that you have the AVMD tool downloaded and set up on your machine. You’ll also need an Azure admin account with the necessary permissions to your Azure tenant to deploy VMs. Once you’re ready to begin, launch the Azure VM Deployer and allow it to synchronize with your Azure environment.

Azure VM Deployment Settings

  You can now start filling out all the settings to deploy your VMs to your Azure subscription:
    • Provide a name for your virtual server
    • Choose the server's operating system
    • Azure Subscription
    • Azure Availability Zone location
    • Azure Resource Group
    • Choose the Azure Virtual Machine size
    • Provide a local administrator account and password
    • Join the VM automatically to a domain (you will need an account with domain-join permissions)
    • Which Azure Storage account you wish to deploy the VM to
    • Any additional disks you want to add to the VM during deployment
    • The Azure vNet and subnet
    • Further additional options if you want VM diagnostics, Azure Log Analytics, a basic NSG, a Public IP, Azure Resource Tags, and finally whether to shut down the VM post deployment

  Click Add to queue when you have filled out all the Azure VM details, and it will populate these settings to the Deployment Queue.
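The settings above amount to a simple deployment record per VM. Purely as an illustration (this is not Azure VM Deployer's actual internal format, and every value below is a placeholder), a queued entry might look like:

```python
# Purely illustrative: NOT Azure VM Deployer's actual internal format.
# Every value is a placeholder showing how the queued settings listed
# above might be captured as a single record.

deployment_request = {
    "vm_name": "app-server-01",
    "operating_system": "Windows Server 2022",
    "subscription": "Production",        # Azure subscription to deploy into
    "location": "australiaeast",         # region / availability zone
    "resource_group": "rg-app-servers",
    "vm_size": "Standard_D2s_v3",
    "admin_account": "localadmin",       # local administrator credentials
    "join_domain": True,                 # needs an account with domain-join rights
    "storage_account": "stappservers",
    "extra_disks": [{"size_gb": 128}],   # additional data disks
    "vnet": "vnet-prod",
    "subnet": "snet-app",
    "public_ip": False,
    "shutdown_after_deploy": False,
}
```

For a batch of similar servers, only vm_name would change between records, which is why the tool lets you re-queue the same settings with a new name.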

Schedule Azure VM Deployment Selection

Provide a name for your Azure VM

For any additional VMs, keep filling out the details and add them to the queue too. If the servers are all the same type and settings you just need to update the server name before adding them to the deployment queue.

Schedule Azure VM Deployment Server

Azure VM Details

Now, when you have added all the Virtual Machines you wish to deploy to Azure, you are ready to start deployment, but you may want to confirm that your VM settings are correct. Simply right click the blue icon next to each Virtual Machine in the Deployment Queue and choose Show Details.

Schedule Azure VM Deployment Server Overview 2

Azure VM Deployment Schedule Creation

Now let's start the deployment of your Azure Virtual Machines. Click the DEPLOY button in the bottom left-hand corner, and you are presented with the options to deploy right now or to schedule the deployment of your Azure VMs.

Schedule Azure VM Deployment Now

Azure VM New Deployment Schedule

In this example we will schedule the deployment of the Azure VMs, so click on schedule, then on OK to start the schedule creation.

Schedule Azure VM Deployment 3

Azure VM New Deployment Schedule

We are now prompted to create the schedule to deploy your Azure Virtual Machines, click on New.

Schedule Azure VM Deployment New


Schedule Settings

Simply enter the date and time you wish your Azure VMs to deploy at, then click OK. (You can set this as a one-off schedule, or as a recurring daily, weekly or monthly schedule.)

Schedule Azure VM Deployment Trigger
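The one-off and recurring options above can be illustrated with a small sketch. The tool's internal scheduling logic isn't published, so this is purely a hypothetical helper showing how a next-run time might be computed from a start time and a recurrence setting:

```python
from datetime import datetime, timedelta
from typing import Optional

def next_run(start: datetime, now: datetime, recurrence: str = "once") -> Optional[datetime]:
    """Return the next deployment time, or None once a one-off schedule has passed."""
    if now <= start:
        return start                      # schedule has not fired yet
    if recurrence == "once":
        return None                       # one-off schedule already ran
    step = {"daily": timedelta(days=1),
            "weekly": timedelta(weeks=1),
            "monthly": timedelta(days=30)}[recurrence]  # month approximated as 30 days
    periods = (now - start) // step + 1   # whole intervals elapsed, plus one
    return start + periods * step
```

For example, a daily schedule that first fired at 22:00 on 1 June will next fire at 22:00 on the day after whatever "now" is.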

Schedule Date and Time Settings

Confirm that the date and time you want to schedule the VMs for deployment to your Azure subscription is correct.

Schedule Azure VM Deployment Trigger 2

Confirm the Schedule is Correct

Click on OK in the scheduler window and the deployment task is now confirmed.

Schedule Azure VM Deployment confirmed

Scheduled Deployment is underway

My scheduled deployment has now kicked off, and in the Azure Portal we can see that the machines are deploying.

Schedule Azure VM Deployment Creation

Scheduled Deployment Alerting

If you have set up email alerts, you will receive an email letting you know that your VMs have now been deployed to Azure.

Schedule Azure VM Deployment Finished Email



Azure VMs are now deployed

After some time, your Azure Virtual Machines should have deployed. As seen in the Azure Portal, our Azure VMs are up and running in the correct resource group, Azure subscription and Azure location.

Schedule Azure VM Deployment Complete

Don't forget that the Azure VM Deployer is completely free and one of our Azure Management Tools.

Azure FAQs


What is Azure VM deployment?

Azure VM deployment is the process of creating and managing virtual machines in the Microsoft Azure cloud platform.

What are the benefits of using Azure VMs?

Azure VMs offer a wide range of benefits, including scalability, flexibility, security, and cost-effectiveness.

How do I create a new Azure VM?

You can create a new Azure VM through the Azure portal, Azure CLI, or Azure PowerShell.

What operating systems are supported on Azure VMs?

Azure VMs support a wide range of operating systems, including Windows Server and many Linux distributions.

What are the different VM sizes available in Azure?

Azure offers a variety of VM sizes, ranging from small, low-cost instances to large, high-performance instances.

How can I manage and monitor my Azure VMs?

You can manage and monitor your Azure VMs through the Azure portal, Azure CLI, or Azure PowerShell, as well as with services such as Azure Monitor and Azure Log Analytics.

What are availability sets in Azure VM deployment?

Availability sets ensure high availability for VMs by distributing them across multiple physical servers (fault domains) in a data center.

How can I secure my Azure VMs?

You can secure your Azure VMs through a variety of measures, including network security groups, firewall rules, and encryption.

What is Azure Site Recovery and how does it work with VMs?

Azure Site Recovery is a disaster recovery solution that can be used to replicate and recover VMs in the event of a site outage or other disaster.

How can I optimize the performance of my Azure VMs?

You can optimize the performance of your Azure VMs through various means, such as selecting the appropriate VM size, optimizing disk performance, and using caching.
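The availability-set answer above can be illustrated with a short sketch: Azure spreads the VMs in an availability set across a small number of fault domains so that a single rack failure cannot take out every instance. This is a simplified, hypothetical model (round-robin placement assumed), not Azure's actual placement algorithm:

```python
def spread_across_fault_domains(vm_names, fault_domain_count=3):
    """Round-robin VMs across fault domains so one rack failure
    cannot take out every instance (simplified availability-set model)."""
    domains = {fd: [] for fd in range(fault_domain_count)}
    for i, name in enumerate(vm_names):
        domains[i % fault_domain_count].append(name)
    return domains
```

With four web servers and three fault domains, no single domain ends up holding more than two of them.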
How to update SCCM 1902 Hotfix Rollup KB4500571


SCCM Hotfix rollup KB4500571

SCCM Hotfix rollup KB4500571 bug fix overview

Microsoft has released yet another update for SCCM, hotfix rollup KB4500571.

First off, we will cover the issues this update fixes (how to update your SCCM environment to hotfix rollup KB4500571 is covered further down the page):

  • The Download Package Content task sequence action fails and the OsdDownload.exe process terminates unexpectedly. When this occurs, the following exit code is recorded in the Smsts.log on the client:
    Process completed with exit code 3221225477
  • Screenshots that are submitted through the Send a Smile or Send a Frown product feedback options cannot be deleted until the Configuration Manager console is closed.
  • Hardware inventory data that relies on the MSFT_PhysicalDisk class reports incomplete information on computers that have multiple drives. This is because the ObjectId property is not correctly defined as a key field.
  • Client installation fails on workgroup computers in an HTTPS-only environment. Communication with the management point fails, indicating that a client certificate is required even after one has been provisioned and imported.
  • A “success” return code of 0 is incorrectly reported as an error condition when you monitor deployment status in the Configuration Manager console.
  • When the option to show a dialog window is selected for app deployments that require a computer restart, that window is not displayed again if it is closed before the restart deadline. Instead, a temporary (toast) notification is displayed. This can cause unexpected computer restarts.
  • If it is previously selected, the “When software changes are required, show a dialog window to the user instead of a toast notification” check box is cleared after you make property changes to a required application deployment.
  • Expired Enhanced HTTPS certificates that are used for distribution points are not updated automatically as expected. When this occurs, clients cannot retrieve content from the distribution points. This can cause increased network traffic or failure to download content. Errors that resemble the following are recorded in the Smsdpprov.log:
    Begin to select client certificate
    Using certificate selection criteria 'CertHashCode:'.
    There are no certificate(s) that meet the criteria.
    Failed in GetCertificate(...): 0x87d00281
    Failed to find certificate '' from store 'MY'. Error 0x87d00281
    UpdateIISBinding failed with error - 0x87d00281

    The distribution point certificates are valid when you view them in the Security, Certificates node of the Configuration Manager console, but the SMS Issuing certificate will appear to be expired.
    Renewing the certificate from the console has no effect. After you apply this update, the SMS Issuing certificate and any distribution point certificates will automatically renew as required.

  • A management point may return an HTTP Error 500 in response to client user policy requests. This can occur if Active Directory User Discovery is not enabled. The instance of Dllhost.exe that hosts the Notification Server role on the management point may also continue to consume memory as more user policy requests arrive.
  • Content downloads from a cloud-based distribution point fail if the filename contains the percent sign (%) or other special characters. An error entry that resembles the following is recorded in the DataTransferService.log file on the client:
    AddUntransferredFilesToBITS : PathFileExists returned unexpected error 0x8007007b
    The DataTransferService.log may also record error code 0x80190194 when it tries to download the source file. One or both errors may be present depending on the characters in the filename.
  • After you update to Configuration Manager current branch, version 1902, the Data Warehouse Synchronization Service (Data_Warehouse_Service_Point) records error status message ID 11202. An error entry that resembles the following is recorded in the Microsoft.ConfigMgrDataWarehouse.log file:
    View or function ‘v_UpdateCIs’ has more column names specified than columns defined.
    Could not use view or function ‘vSMS_Update_ComplianceStatus’ because of binding errors.
  • User collections may appear to be empty after you update to Configuration Manager current branch, version 1902. This can occur if the collection membership rules query user discovery data that contains Unicode characters, such as ä.
  • The Delete Aged Log Data maintenance task fails if it is run on a Central Administration Site (CAS). Errors that resemble the following are recorded in the Smsdbmon.log file on the server.
    TOP is not allowed in an UPDATE or DELETE statement against a partitioned view. : spDeleteAgedLogData
    An error occurred while aging out DRS log data.
  • When you select the option to save PowerShell script output to a task sequence variable, the output is incorrectly appended instead of replaced.
  • The SMS Executive service on a site server may terminate unexpectedly after a change in operating system machine keys or after a site recovery to a different server. The Crash.log file on the server contains entries that resemble the following.
    Note: Multiple components may be listed, such as SMS_DISTRIBUTION_MANAGER, SMS_CERTIFICATE_MANAGER, or SMS_FAILOVER_MANAGER. The following Crash.log entries are truncated for readability.
    EXCEPTION INFORMATION
    Service name = SMS_EXECUTIVE
    Thread name = SMS_FAILOVER_MANAGER
    Exception = c00000fd (EXCEPTION_STACK_OVERFLOW)
    Description = "The thread used up its stack."
  • Old status messages may be overwritten by new messages after promoting a passive site server to active.
  • User-targeted software installations do not start from Software Center after you update to Configuration Manager current branch, version 1902. The client displays an "Unable to make changes to your software" error message. Error entries that resemble the following are recorded in the ServicePortalWebSitev3.log:
    GetDeviceIdentity - Could not convert 1.0,GUID:{guid} to device identity because the deviceId string is either null or larger than the allowed max size of input
    System.ArgumentException: DeviceId
    at Microsoft.ConfigurationManager.SoftwareCatalog.Website.PortalClasses.PortalContextUtilities.GetDeviceIdentity(String deviceId)
    at Microsoft.ConfigurationManager.SoftwareCatalog.Website.PortalClasses.Connection.ServiceProxy.InstallApplication(UserContext user, String deviceId, String applicationId)
    at Microsoft.ConfigurationManager.SoftwareCatalog.Website.ApplicationViewService.InstallApplication(String applicationID, String deviceID, String reserved)

    This issue occurs if the PKI certificates that are used have a key length that is greater than 2,048 bits.

  • Audit status messages are not transmitted to the site server in an environment with a remote SMS provider.
  • The Management Insights rule “Enable the software updates product category for Windows 10, version 1809 and later” does not work as expected for Windows 10, version 1903.

SCCM Hotfix rollup KB4500571 additional changes

Further improvements and additional functional changes to SCCM included in the KB4500571 hotfix are;

  • Multiple improvements are made to the integration between Configuration Manager and the Microsoft Desktop Analytics service.
  • Multiple improvements are made to support devices that are managed by using both Configuration Manager and a third-party MDM service.
  • Client computers that use IPv6 over UDP (Teredo tunneling) may generate excessive traffic to management points. This, in turn, can also increase load on the site database.
    This traffic occurs because of the frequent network changes that are associated with the Teredo refresh interval. After you apply this update, this data is filtered by default and is no longer passed to the notification server on the management point. This filtering can be customized by creating the following registry string under HKEY_LOCAL_MACHINE\Software\Microsoft\CCM:
    Type: String
    Name: IPv6IFTypeFilterList
    Value: If the string is created without any data (blank), the pre-update behavior applies and no filtering occurs.
    The default behavior of filtering Teredo tunnel data (interface type IF_TYPE_TUNNEL, 131) is overwritten if new values are entered. Multiple values should be separated by semicolons.
  • The Configuration Manager client now handles a return code of 0x800f081f (CBS_E_SOURCE_MISSING) from the Windows Update Agent as a retriable condition. The result will be the same as the retry for return code 0x8024200D (WU_E_UH_NEEDANOTHERDOWNLOAD).
  • The SMSTSRebootDelayNext task sequence variable is now available. For more information, see the “Improvements to OS deployment” section of Features in Configuration Manager technical preview version 1904.
  • SQL database performance is improved for operations that involve a configuration item (CI) that has associated file content by the addition of a new index on the CI_Files table.
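The IPv6IFTypeFilterList behaviour described above (value absent = filter Teredo tunnel traffic by default, value created blank = pre-update behaviour with no filtering, custom semicolon-separated values = filter those interface types instead) can be sketched as follows. This is an illustrative reading of the KB text, not Microsoft's implementation:

```python
IF_TYPE_TUNNEL = 131  # Teredo tunnel interfaces, filtered by default after the update

def interface_types_to_filter(registry_value):
    """Interpret the IPv6IFTypeFilterList registry string.

    registry_value is None when the string does not exist, "" when it was
    created without data, or a semicolon-separated list of interface types.
    """
    if registry_value is None:
        return {IF_TYPE_TUNNEL}   # default post-update: filter Teredo traffic
    if registry_value.strip() == "":
        return set()              # blank string: pre-update behaviour, no filtering
    return {int(v) for v in registry_value.split(";") if v.strip()}
```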

How to update your SCCM to Hotfix rollup KB4500571

Now we get to the nitty-gritty of the update process for KB4500571.

  1. Open your SCCM Console, and navigate to Administration, then highlight Updates and Servicing.
    KB4500571 Administration
  2. Now, with Updates and Servicing highlighted, in the main window you should hopefully see that the KB4500571 update has downloaded and is ready to install.
    (If you can't see it downloaded, right-click on Updates and Servicing and choose Check for Updates.)
    KB4500571 Downloaded
  3. Firstly we need to run the prerequisite check for SCCM KB4500571 to ensure your environment is ready for the update.
    Right Click the downloaded update and choose Run Prerequisite Check.
    KB4500571 PrerequisiteCheck
  4. The prerequisite check will take around 10 minutes to complete.
    You can use the ConfigMgrPrereq.log, located in the root of the SCCM server's C: drive, to see the status and its completion.
    SCCM KB4500571 Prerequisite Check
  5. Now on to the fun bit, let’s start the installation of SCCM KB4500571. Again right click the update in the main window and choose Install Update Pack.
    SCCM KB4500571 Install Update Pack
  6. The first window of the Configuration Manager Updates Wizard pops up. Choose Next to continue the installation.
    SCCM KB4500571 Updates Wizard
  7. The Client Update Settings window lets you choose whether you want to validate the update against a pre-production collection. We won't bother with that here as this is our test environment. Choose Next to continue when ready to do so.
    SCCM KB4500571 Client Update Settings
  8. Accept the License Terms – only if you are happy with them 🙂 – and click Next.
    SCCM KB4500571 License Terms
  9. Now the Summary tab of the Configuration Manager Updates Wizard details the installation settings you have chosen. If you are happy to proceed with the installation click Next.
    This did take some time in the SmiKar SCCM lab environment, so best go make yourself a cup of coffee and come back. 🙂
    SCCM KB4500571 Install Confirmation
  10. Hopefully all went well with your upgrade to SCCM KB4500571 and you are presented with a screen similar to this.
    SCCM KB4500571 Completed
  11. If you had any issues or want to view the status (rather than looking in the logs), go to Monitoring, then highlight Updates and Servicing Status. Right-click the update and choose Show Status.
    SCCM KB4500571 Updates and Servicing Status
Is Disaster Recovery Really Worth The Trouble (Part 4)


Is Disaster Recovery Really Worth The Trouble

(Part 4 of a 4-part series)

Guest Post by Tommy Tang – Cloud Evangelist

In this final chapter of the Disaster Recovery (part 1, part 2 and part 3) discussion, I am going to explore some of the common practices, and myths, regarding DR in the Cloud. I'm sure you must have heard the argument for deploying applications to the Cloud because of the inherent built-in resilience and disaster recovery capability. Multi-AZ, 11 9's durability, auto-scaling, Availability Sets, multi-region recovery (e.g. Azure Site Recovery) and many more are widely adopted and embraced without hesitation. No doubt these resilient features are part of the charm of using Cloud services, and each vendor will invest in and promote their own unique strengths and differentiation to win market share. It'll only take 30 minutes to fail over to another AZ, so she'll be right, yes?

If you remember, in Part 2 of the DR article I stated that the number one resilience design principle is to "eliminate single point of failure". Any Cloud vendor could also become the single point of failure. If you've deployed a well-architected, highly modularised and API-rich application in Amazon Web Services (AWS), do you still need to worry about DR? The short answer is YES. You ought to assess the DR capability provided by AWS, or any other Cloud vendor for that matter, to determine whether it meets your requirements and whether the solution is indeed fit for purpose. Do not assume anything just because it is in the Cloud.

AWS is not immune to unplanned outages, because Cloud infrastructure is also built on physical devices like disks, compute and network switches. Some online stores like Big W and Afterpay were impacted by an unexpected AWS outage on 14th Feb 2019 for about 2 hours. What is your Recovery Time Objective (RTO) requirement? Similarly, Microsoft Azure is not immune to outages either. On 1st February 2019, Microsoft inadvertently deleted several Transparent Data Encryption (TDE) databases after encountering DNS issues. The TDE databases were quickly restored from snapshot backups, but unfortunately customers would have lost 5 minutes' worth of transactions. Imagine what you would do if your Recovery Point Objective (RPO) is meant to be zero. No data loss?

At this very moment I hope I have stirred up plenty of emotions and a good dose of anxiety. Cloud infrastructure and Cloud service providers are not the imagined Nirvana or Utopia that you have been searching for. It's perhaps generations better than what you have installed in your data centre today, but any Cloud deployment still warrants careful consideration, design and planning. I'm going to briefly discuss 3 areas you should start exploring in your own IT environment tomorrow.

Disaster Recovery Overview

1. Backup and Restore

As a common practice you'd take regular backups of your precious applications and data so you'd be able to recover in the most unfortunate event. The same logic applies when you have deployed applications in the AWS or Azure Cloud. Ensure you are taking regular backups in the Cloud, which are likely to be auto-configured, as well as keeping a secondary backup stored outside the Cloud service provider. It's exactly the same concept and reason as taking an offsite backup which is, proverbially speaking, that you don't put all your eggs in one basket. Unless you don't have a data centre anymore, your own data centre would be the perfect offsite backup location. I understand getting backups off AWS S3 could pose a bit of a challenge, and I'd urge you to consider using AWS Storage Gateway for managing offsite backups. It should make backup management a lot easier.

Once you've secured the backup of application and data away from the Cloud vendor, you're now empowered to restore (or relocate) the application to your own data centre or to a different Cloud provider as desired. Bear in mind that you're likely to suffer some data loss using the backup-and-restore technique. Depending on the backup cycle, it's typically a daily backup (i.e. 24 hours) or weekly backup (i.e. 7 days). You must diligently consider all recovery scenarios to determine whether backup and restore suffices for the Recovery Point Objective (RPO) of the targeted application.
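As a rough sanity check on that last point: the worst-case data loss of a backup-and-restore approach is one full backup interval, so the approach only satisfies an RPO that is at least as long as the cycle. A small illustrative helper (the daily and weekly cycles are the ones mentioned above):

```python
from datetime import timedelta

BACKUP_CYCLES = {
    "daily": timedelta(hours=24),
    "weekly": timedelta(days=7),
}

def backup_meets_rpo(cycle: str, rpo: timedelta) -> bool:
    """Backup-and-restore can lose up to one full backup interval,
    so it only satisfies an RPO at least as long as the cycle."""
    return BACKUP_CYCLES[cycle] <= rpo
```

A 48-hour RPO is comfortably met by daily backups; a 4-hour RPO is not, and calls for replication instead.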

2. Data Replication

What if you can't afford to lose data for your Tier-1 critical application (i.e. RPO is zero)? Can you still deploy it to the Cloud? Again, the short answer is YES, but it probably requires some amendment to the existing architecture design, notwithstanding the additional cost and effort involved. I believe I have already touched on the Active-Active and Active-Passive design patterns in Part 2 of the DR discussion. If the Recovery Point Objective (RPO) is zero then you must establish synchronous data replication across 2 sites, 2 regions or 2 separate Cloud vendors. Even though it's feasible to establish synchronous data replication over long distances, the laws of physics still apply, and that means your application performance is likely to suffer from elevated network latency. Is it still worth pursuing? It's your call.
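The latency concern can be made concrete with a back-of-the-envelope sketch. Light in optical fibre travels at roughly 200 km per millisecond (about two-thirds of the speed of light), and a synchronous commit must wait for at least one round trip to the replica. Real fibre routes are longer than straight lines, so treat this as a lower bound:

```python
FIBRE_SPEED_KM_PER_MS = 200.0  # ~2/3 the speed of light, in glass

def min_sync_write_penalty_ms(distance_km: float) -> float:
    """Lower bound on the extra latency a synchronous replica adds per write:
    one round trip over the replication link."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS
```

For example, replicating between sites roughly 700 km apart adds at least ~7 ms to every committed write, before any processing overhead.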

There are generally 2 ways to achieve data replication across multi-region or multi-cloud deployments. The first method is to leverage storage replication technology. It's the most common and proven data replication solution found in the modern data centre; however, it's extremely difficult to implement in the Cloud. The simple reason is that you don't own the Cloud storage, the vendors do. There is limited API and software support for synchronising data between, say, AWS S3 and an on-premises EMC storage array. The only alternative solution I can think of, and you might have other brilliant ideas, is to deploy your own Cloud edge storage (e.g. NetApp Cloud Volumes ONTAP) and present it to the applications hosted with the various Cloud vendors. Effectively you still own and manage the storage (and data) rather than utilising the unlimited storage generously provisioned by the vendor, and as such you are able to synchronise your storage to any secondary location of your choice. You have the power!

As opposed to using storage replication technology, you can opt for host- or software-based replication. Generally you are probably more concerned about the data stored in the database than, say, the configuration file saved on the Tomcat server. Following this logic, data replication at the database tier is our first and foremost consideration. If you are running an Oracle database then you can choose to configure Data Guard with synchronous data replication between AWS EC2 and an on-premises Linux database server. On the other hand, if your preference is Microsoft SQL Server then you'd configure a SQL Server Always On cluster with synchronous replication for databases hosted in the Azure Cloud and on an on-premises VMware Windows server. You can even set up database replication between different Cloud vendors as long as the Cloud infrastructure supports it. The single most important prerequisite for implementing database replication, whether it is between Cloud vendors or Cloud to on-premises, is the underlying Operating System (OS). Ideally you'd have already standardised your on-premises operating environment to be Cloud ready. For example, retaining large-scale AIX or Solaris servers in your data centre, rather than switching to a Windows or Linux based Cloud-compatible OS, does nothing to inspire a romantic Cloud story.

3. Orchestration Tool

The last area I'd like to explore is how to minimise RTO while recovering an application to your on-premises data centre, or to another Cloud vendor, during a major disaster event. If you are well versed in the DevOps world and are a good practitioner then you are already standing on a good foundation. The most common problem found during recovery is the complexity and human intervention required to instantiate the targeted application's software and hardware. In keeping with the true CI/CD spirit, the proliferating use of orchestration tools to deploy immutable infrastructure and applications is the very heart and soul of DevOps. By adopting the same principle you'd be able to recover the entire application stack via an orchestration tool like Jenkins to another Cloud, or to an on-premises Cloud-like environment, with minimal effort and time. No more human fat-finger syndrome and slack think time during recovery. Using an open-source, Cloud-vendor-agnostic tool like Terraform (as opposed to AWS CloudFormation) can greatly enhance portability and reusability for recovery. Armed with a suitable containerisation technology (e.g. Kubernetes) that is harmonised in your IT landscape, you'd further enhance deployment flexibility and manageability. Running DR at an alternate site becomes a breeze.

In closing, I'd like to remind you that just because your application is deployed to the Cloud (i.e. someone else's infrastructure), you are not excused from the basic Disaster Recovery design principles, nor from the consequences of ill-informed decisions. Certainly it's my opinion that the buck will stop with you when the application is blown to smithereens in the Cloud. This is the last article of the Disaster Recovery series, and hopefully I have imparted a little bit of knowledge, practical examples and stories so that you can tackle DR in a whole new light, without fear and prejudice. I'm looking forward to sharing some more Cloud stories with you in the not-too-distant future. Stay tuned.

This article is a guest post by Tommy Tang (https://www.linkedin.com/in/tangtommy/). Tommy is a well-rounded and knowledgeable Cloud Evangelist with over 25 years of IT experience covering many industries, including Telco, Finance, Banking and Government agencies in Australia. He is currently focusing on the Cloud phenomenon and ways to best advise customers on their often confusing and thorny Cloud journey.