Basic Patch Management

Note: If you want to implement this, you can get a script to set it all up here.

One of the most common topics I get asked about is managing software updates. Some customers are new to CM altogether and are looking to get started; others are grizzled veterans looking to simplify processes that have grown out of control over time. While there is no one-size-fits-all method, the method I will describe has proven to be a good starting point for many organizations I have helped over the years. The basics of my method were taught to me by Steve Rachui, and I have built on the foundation he provided.

In the “old days” a single OS or Office version could end up with hundreds of updates. Get a few OSes and a version of Office or two, and suddenly you are over the 1,000-update limit for a deployable Software Update Group (SUG). While we do not run into that limit as quickly, if at all, in today’s cumulative update world, I still follow the same process. The update limit required you to break up the updates you deploy into multiple SUGs. While there are many ways to break them up, the two most logical are by release date or by product. By release date seems to be the most common: a new SUG is created every month, and the monthly SUGs are later combined into a quarterly or yearly SUG. I prefer the other method, by product. The main reason for this is limiting the updates deployed to any system to only its OS and Office version, which speeds up the Software Update Deployment Scan Cycle.


The first piece of the puzzle is the collections. I like to use folders to keep things organized. As the client behavior is the sum of all policy, I separate out the functions to help me keep things straight. I start with the top-level folder, which in this example is labeled “Patching”. Under that I create three more folders, “Testing Deployments”, “Production Deployments”, and “Maintenance Windows”.

Patching folder structure

In the “Patching” folder, I create two collections, “All Workstations” and “All Servers”. I base both query rules off Operating System – ProductType.
1 = Workstation
2 = Domain Controller
3 = All Other Servers
I use where ProductType = 1 for workstations and where ProductType != 1 for servers. Note: you shouldn’t have your Domain Controllers in your regular CM environment.
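For reference, the workstation query rule can be sketched in WQL (the collection query language) roughly like this; the exact column list is a common convention, not a requirement:

```sql
-- Sketch of the "All Workstations" query rule in WQL.
-- For "All Servers", change the condition to ProductType != 1.
select SMS_R_System.ResourceId, SMS_R_System.Name
from SMS_R_System
inner join SMS_G_System_OPERATING_SYSTEM
    on SMS_G_System_OPERATING_SYSTEM.ResourceId = SMS_R_System.ResourceId
where SMS_G_System_OPERATING_SYSTEM.ProductType = 1
```

The join to the hardware inventory class SMS_G_System_OPERATING_SYSTEM is what exposes the ProductType attribute described above.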

Base patching collections

Next, I add a Maintenance Window on the “All Servers” collection. This window is designed so that it never occurs: I choose a time in the past and set it to never recur. If a system has no maintenance window, it will install deployments at the deadline. With any maintenance window, even one that will never happen again, the system instead waits for an open maintenance window to install deployments. This is my protection against any server rebooting unexpectedly.

Maintenance window definition

I then create two more collections in this folder, one for systems to exclude from patching, and one for systems with no maintenance window.

Patching collections

The “systems with no maintenance window” collection gets an include rule for “All Servers” and an exclude rule for the servers excluded from patching.

Next up is the “Testing Deployments” folder. Here I create two collections, as in my lab it is easy to get away with an extremely simplified testing process. The first is my pilot collection, which I treat as a very small group whose main testing goal is confirming the updates do not blue screen the systems. Then I have a User Acceptance Testing (UAT) collection, envisioned as testing against the various line-of-business apps in the environment, my last check before I deploy to production. Both collections use direct rules in my lab.

Testing collections

On to the “Production Deployments” folder. Here I create one collection per product using query rules.

Production deployment collections
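As a sketch of what a per-product query rule might look like, here is a hypothetical rule for a Windows Server 2019 collection. The build number (17763 for Server 2019) and the name pattern are assumptions; substitute the products and builds actually in your environment:

```sql
-- Sketch of a per-product query rule, e.g. a "Windows Server 2019" collection.
select SMS_R_System.ResourceId, SMS_R_System.Name
from SMS_R_System
inner join SMS_G_System_OPERATING_SYSTEM
    on SMS_G_System_OPERATING_SYSTEM.ResourceId = SMS_R_System.ResourceId
where SMS_R_System.OperatingSystemNameandVersion like "Microsoft Windows NT%Server 10.0%"
    and SMS_G_System_OPERATING_SYSTEM.BuildNumber = "17763"
```

Keying off the build number keeps each collection scoped to exactly one product, which is what lets the per-product SUG deployments stay lean.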

Finally, we come to the “Maintenance Windows” folder. Here I create a collection for each maintenance window I want. In addition to whatever query or direct rules populate the collection, include an exclusion rule for the servers excluded from patching. Then head back to your “systems with no maintenance window” collection and add an exclusion for your new maintenance window collection.

Maintenance window collection membership rules

When a service owner calls and says they have some super important event happening and they can’t patch this month, you tell them to take it up with security, security grants them an exemption, then they email you approval in writing (I know, I am a jerk). Only then do you add the server(s) to the exclusion collection, and the server(s) drop from all the maintenance windows. Next month when security does not renew the exemption, you remove the server(s) from the exclusion collection, and they are automagically back in their appropriate maintenance collections.

Any systems that you do not get into a maintenance window and have not excluded will show up in the systems with no maintenance window collection for you to act on.

Patching collections with example device membership numbers

I urge you to do your best to limit the number of maintenance windows to the lowest reasonable number. There must be balance between service owners and the patching team. In my lab I just went with two as an example, but any production environment is likely to require more than that. I have seen both extremes here, everything from just a couple windows for the service owner to choose from, all the way to 4 patching weeks, 7 days each week, 24 hours each day for (4x7x24) 672 distinct maintenance windows (and they tore them down each month just to recreate the next month).

Example maintenance window collections

Software Update Groups (SUGs):

I mentioned earlier that I organize my updates by product. We will end up with two SUGs per product: a persistent group containing everything except the current month, and a SUG for the current month. When you take a new month to production, you roll the previous month into the persistent group. Then, no matter when a system is built or how long it has been offline, when it comes online to update it can get current. When I initially create these groups, I select all updates for the specific product that are required or installed in the environment. While I could choose all updates for the product, some are likely never to apply to your environment. Every few months I also look for any older updates I have not selected that are now showing as required or installed, just to make sure none have slipped through.

Persistent Software Update Groups (SUGs) for operating systems.

While not required, I create one deployment package per SUG just to keep things simple.

Deployment packages

I deploy each SUG to the matching collection. In my lab, I make them immediately available with an immediate deadline. If your environment is up to date, there is negligible risk. If any of the updates have not been in your environment previously, then I recommend running through the change request and testing process to CYA.

Automatic Deployment Rules (ADRs):

Automatic Deployment Rules (ADRs)

I create an ADR per product that creates a new SUG each time it runs and downloads the updates to the matching deployment package.

ADR general tab showing setting for create new SUG each time ADR is run

I have the ADR look for all updates released in the last 2 days, with exclusions for “security only” and “preview”. For Windows 10, I also add inclusions for the specific builds I have.

ADR Software Updates tab showing example rules for Windows 10.

Each ADR is set to run either the night of Patch Tuesday or the early morning of Exploit Wednesday.

ADR schedule

Each ADR gets three deployments.

For Workstations*:
Pilot – immediately available, immediately required, enabled
UAT – available after two days, immediately required, enabled
Prod – available after 7 days, required 7 days after available, disabled

I like giving that window of available but not yet required for users to install at their convenience. Most won’t, but when they call to complain about updates being forced on them, I point out they had a week to install them when convenient.

For Servers*:
Pilot – immediately available, immediately required, enabled
UAT – available after two days, immediately required, enabled
Prod – available after 7 days, immediately required, disabled

For the server updates, I want them required immediately. I manage the timing of the actual install via the maintenance windows. Until a server enters a maintenance window, the updates just sit in Software Center and show as past due.

* – These settings are for standard internet facing environments. For changes to make for disconnected environments, see this post.

ADR deployments showing production deployment will be disabled

Notice the production deployment is set to be created as disabled. This allows the testing deployments to be completely automated, while the admin must take a positive action to light up the deployment for production once testing is complete. Taking this action is as simple as right-clicking the deployment and clicking Enable. Just two clicks per product and you’re back to whatever project you’d rather be working on.

When Patch Tuesday rolls around, your ADRs run and you get the second SUG per product.

SUGs with ADR generated SUGs present

Go through your testing process to make sure there are no issues, then approve each monthly SUG for production.

To enable the production deployment, right click the disabled deployment, then click Enable

Alternatively, you can follow Russ Rimmerman’s method for using a PowerApp to enable the deployment.


I want to stress two things.

1. The behavior you will see is the sum of the policies a client gets. I find it easier to separate these policies to keep them straight and avoid duplication, organizing them via collection folders.

2. This is a starting point. Some may find it works great exactly as described, others may need to modify to fit their environment.