MDT – Windows 10 deployment watch

With our MDT environment up and running we’ve been refining our Windows 10 build over the past couple of months, sending out pilot builds to specific areas so we’re confident in the process when it comes to large-scale deployment over the summer.

This post focuses on a few Windows 10-specific tweaks that we’ve made to the Task Sequence that may be of interest…

Thin image approach

In the past I was a fan of what could be called a Hybrid image model, in that I’d create a “Base” Reference image in a VM, usually comprising Windows + Office + Updates. That would be captured and become the WIM file that goes into the Task Sequence.

However with Windows 10 I’ve decided to go down the completely thin approach, best represented as either a sandwich or a hamburger depending on your culinary preference (!). Effectively the deployment gets built from its component parts, starting from an unaltered source Windows 10 WIM file extracted from its parent ISO image.

In our case we’ve settled on Education 1709 x64 as the build to deploy, due to some useful features such as OneDrive Files on Demand and Windows Defender Exploit Guard. Along the way we’ve also used the 1607 and 1703 builds. The advantage of the thin image method is that we can swap the OS out at will with two clicks, rather than having to go through a Capture process, which always has the potential for error.

Secure Boot validation

Windows 10 1709 brought in some new security features which benefit from machines being converted from BIOS to UEFI mode, and in some cases (Windows Defender Credential Guard) require Secure Boot too. Seeing as we need to convert older machines from BIOS to UEFI anyway, it made sense to enable Secure Boot at the same time.

Ref: https://docs.microsoft.com/en-us/windows/whats-new/whats-new-windows-10-version-1709

The question was how to ensure that a machine is correctly configured before starting the imaging process (as converting later on is far from ideal).

The answer is to run cmd.exe and return a non-zero exit code if either of the following is true:

  1. the Task Sequence variable IsUEFI is false, and \ or
  2. the UEFISecureBootEnabled registry value is 0

If the machine is configured incorrectly the Task Sequence will then fail before it even starts to pull down the image. To ensure you catch it early enough, add the step right at the start of the Task Sequence.

Putting the two together, the Run Command Line steps look something like this:
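
For example (a sketch: it assumes the Task Sequence engine expands %IsUEFI% in the step’s command line, and uses the standard Secure Boot state registry key, so test against your own hardware first):

cmd.exe /c if not "%IsUEFI%"=="True" exit 1
cmd.exe /c reg query HKLM\SYSTEM\CurrentControlSet\Control\SecureBoot\State /v UEFISecureBootEnabled | find "0x1"

The first step fails on BIOS machines; the second relies on find returning a non-zero exit code when Secure Boot isn’t reported as enabled.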


Removing the cruft

Sadly, despite Microsoft giving Education our very own specific build of Windows, they didn’t extend the effort to cleaning up the junk that gets pushed down with a standard Windows 10 installation. Seriously, who wants Candy Crush on their business machines?!

Fortunately scripts exist to help clean up the junk shipped with the OS so it’s suitable for deployment. We could do this with DISM at image level, but again my aim is to avoid tinkering with the Microsoft media if possible, so I prefer the following PowerShell method…

PowerShell: Removing UWP apps from Windows 10 1607/1703/1709
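
The linked script does the heavy lifting, but the core of the approach boils down to a pair of cmdlets (a minimal sketch; the app name is just an example):

# Remove an app for any existing users on the machine
# (-AllUsers on Remove-AppxPackage needs a recent build)
Get-AppxPackage -AllUsers "*king.com.CandyCrush*" | Remove-AppxPackage -AllUsers
# Remove it from the provisioned store so new users don't get it either
Get-AppxProvisionedPackage -Online | Where-Object DisplayName -like "*king.com.CandyCrush*" | Remove-AppxProvisionedPackage -Online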

Disable Refresh \ Reset

Another Windows 10-specific tweak is to disable the Refresh \ Reset menu that users can access either via the Settings app or by holding Shift while a machine reboots. In our case we don’t want users wiping their machine clean of provisioned applications, and it appears this functionality works even without local admin rights (!)

The solution to this one came via the EduGeek forums courtesy of ErVaDy using bcdedit commands:

Ref: http://www.edugeek.net/forums/windows-10/164236-preventing-shift-restart-into-recovery-mode-5.html

Place the commands below into a batch file and run as an Application or Task Sequence step:

reagentc /disable
bcdedit /deletevalue {current} recoverysequence
bcdedit /set {bootmgr} bootems off
bcdedit /set {bootmgr} advancedoptions off
bcdedit /set {bootmgr} optionsedit off
bcdedit /set {bootmgr} recoveryenabled off
bcdedit /set {current} bootems off
bcdedit /set {current} advancedoptions off
bcdedit /set {current} optionsedit off
bcdedit /set {current} bootstatuspolicy IgnoreAllFailures
bcdedit /set {current} recoveryenabled off

Updating OneDrive Files on Demand Client

In a way that only Microsoft can manage, Windows 10 1709 shipped with an old version of the OneDrive client that doesn’t work with the much-anticipated Files on Demand feature straight out of the box 😦

Although the client does auto-update, we didn’t want any automatic sync starting without the placeholder functionality in place, so I’ve scripted an Application in the MDT Task Sequence to take ownership of the file on the newly deployed image, copy the latest version of the client over, and then set everything back as it was.

For more details and the script itself please see my previous post OneDrive Files on Demand – update!
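
The gist of it, heavily simplified, looks something like the sketch below. The paths assume an x64 build where the bundled installer lives in SysWOW64; the full script in the linked post does this properly:

rem Sketch only: take ownership, swap the client, hand ownership back
takeown /f "%WinDir%\SysWOW64\OneDriveSetup.exe"
icacls "%WinDir%\SysWOW64\OneDriveSetup.exe" /grant Administrators:F
copy /y "%~dp0OneDriveSetup.exe" "%WinDir%\SysWOW64\OneDriveSetup.exe"
rem Handing ownership back may need extra privileges depending on context
icacls "%WinDir%\SysWOW64\OneDriveSetup.exe" /setowner "NT SERVICE\TrustedInstaller"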

Pre-staging printer drivers

During our Windows 10 deployment we’re also migrating to a new set of Windows Print Servers, along with new GPOs to map them. However in initial testing I noted the first user to log in had a long wait whilst drivers were copied down from the server and installed.

Although subsequent logins won’t get this issue it doesn’t give a good first impression to the initial user so I wanted to find a way around it.

Step forward the very useful printui.dll 🙂

Ref: https://larslohmann.blogspot.co.uk/2013/12/install-printer-driver.html

Because we’ve rationalised our print fleet over the past few years in a move towards MFDs I only have 3 drivers to cover the entire range of hardware. By using a script method I can then pre-stage the drivers onto the machine at image time and speed up that first logon significantly!

Again, paste this into a batch file and call it as an Application (use an Application step instead of a Run Command Line step, as you want the driver files copied into the Deployment Share):

cscript "prndrvr.vbs" -a -m "HP Universal Printing PCL 6" -i "%CD%\HP Universal Print Driver\pcl6-x64-6.4.x.xxxxx\hpcu196u.inf"

Note the use of %CD% to ensure the path to the driver file is resolved correctly!
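
The linked post drives printui.dll directly; the equivalent one-liner for the same driver would look something like this (a sketch, untested here):

rundll32 printui.dll,PrintUIEntry /ia /m "HP Universal Printing PCL 6" /f "%CD%\HP Universal Print Driver\pcl6-x64-6.4.x.xxxxx\hpcu196u.inf"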

WSUS resources

Although there’s nothing special about running Windows Updates in MDT (use the built-in Task Sequence steps), we noticed that our WSUS server was struggling and sometimes hung the “Install Updates” step of the Sequence. The WSUS console then became unresponsive on the server end too.

After further research it turns out our increasing number of machines needs more resource than the default WSUS private memory limit of around 2GB in the IIS Application Pool to handle the connections. Upon making the change below it’s back to being stable again.
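
The change itself is one command against the WsusPool Application Pool, as per the links below (0 removes the private memory cap entirely; a fixed higher value works too):

%windir%\system32\inetsrv\appcmd.exe set apppool "WsusPool" /recycling.periodicRestart.privateMemory:0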

Ref: https://sysadminplus.blogspot.co.uk/2016/11/wsus-console-crashed-after-running-some.html

Ref: https://www.saotn.org/wsuspool-keeps-crashing-stops

Ref: https://blogs.technet.microsoft.com/configurationmgr/2017/08/18/high-cpuhigh-memory-in-wsus-following-update-tuesdays

Run WinSAT

An oldie-but-goodie; running the WinSAT assessment tool at the end of setup will make sure your machine is properly benchmarked and appropriate performance tuning is performed by Windows. It doesn’t take long so I thought it worth continuing with:

Ref: https://deploymentresearch.com/Research/Post/624/Why-adding-WinSAT-formal-to-your-task-sequence-can-be-a-shiny-thing-to-do

Just add a Run Command Line step with the following in the box:

winsat.exe formal

Adobe CC and the case of the vanishing Explorer

As far as blog titles go this one has ended up sounding more like an adventure book than a technical post! It’s unfortunately not that exciting, but useful nonetheless.

My colleague Tristan Revell has recently been building new installer packages for our Adobe CC apps but ran into an odd-yet-irritating bug where explorer.exe would disappear during the install process and not be restarted at the end, leaving the user stranded on a blank desktop.

It didn’t happen every time either, so the behaviour looked rather unpredictable. Everything else about the install on the Adobe side went through fine.

Upon reading around the support forums it seems to be an issue Adobe have been aware of for some time but still not fixed (first post in 2013 and still being reported 5 years later!)

Ref: https://forums.adobe.com/thread/1351913?start=40&tstart=0

Whilst trying to find a solution I remembered a useful script from a while back that checked whether a process was running and then took action based on the result. It looked perfect to use here, so we tried adding it as a post-install action on our ZCM Bundle.

' Check whether a process is running; if not, start it again
Dim strComputer, FindProc, oShell
Dim objWMIService, colProcessList

Set oShell = WScript.CreateObject("WScript.Shell")

strComputer = "."
FindProc = "explorer.exe"

' Query WMI on the local machine for any matching process
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colProcessList = objWMIService.ExecQuery _
    ("Select Name from Win32_Process WHERE Name LIKE '" & FindProc & "%'")

If colProcessList.Count > 0 Then
    'wscript.echo FindProc & " is running"
Else
    'wscript.echo FindProc & " is not running"
    oShell.Run "explorer.exe"
End If

Set oShell = Nothing
Set objWMIService = Nothing
Set colProcessList = Nothing

Et voila: if explorer disappears during the install it comes back neatly at the end, and if it’s already running no action is taken. Sorted.

Here’s our new ZCM 2017 “My Apps” window taking shape with the new Adobe CC packages ready to go, doesn’t it look pretty 🙂

Image credit: Photo by Stefan Stefancik from Pexels https://www.pexels.com/photo/light-mountains-sky-night-42148/

MDT imaging megapost – part 2 (database automation)

With MDT installed we initially used some basic out-the-box Task Sequences to get up and running. Deployment worked as expected but it was quite a manual process (entering the machine name, selecting Applications to install and so on).

On our old ZCM \ Windows 7 imaging project we were starting from scratch to some extent with a lot of new hardware, so entering certain information manually at image time was actually desired behaviour. Not so much now, with a fairly settled estate and ever-increasing time pressures – automation is the name of the game.

As such the database-driven model now makes a lot more sense as we were able to export a list of machines and roles from ZENWorks so MDT could “know” what it needs to do with a machine rather than anyone needing to tell it.

SQL Installation

Nice and simple (free too) with SQL Express, as per the previous post. One thing to watch out for: ensure Named Pipes are enabled in SQL Server Configuration Manager, or you’ll get errors when trying to connect to the database remotely.

Ref: http://www.vkernel.ro/blog/creating-and-configuring-the-mdt-database

Now go ahead and create the database itself…

Ref: https://docs.microsoft.com/en-us/windows/deployment/deploy-windows-mdt/use-the-mdt-database-to-stage-windows-10-deployment-information

Managing the database

The MDT console is functional when it comes to managing the database but it’s not the ideal interface, especially if you need to make a lot of changes as MMC can be somewhat clunky at times. Although you can use the MDT Workbench remotely it’s not perhaps something you’d want to give everyone access to.

However, there is a better way 🙂

Whilst browsing across forums I came across a link to a brilliant little tool called MDT Administrator; it’s currently hosted on the soon-to-be-defunct CodePlex site. Although Microsoft say an archive will be kept running, how long for is anyone’s guess, so keep a copy saved somewhere safe!

Ref: https://mdtadmin.codeplex.com

It’s a nifty HTA-based front-end that provides a much slicker way to manage your database. Adding and removing Roles is much quicker in particular, which is something we use a lot (more on that later).

One additional tweak to the setup was to create a new group of MDT Database Admins who were granted write access against their SQL login. This meant we could delegate management of the computer records in the database to technicians without needing to open up access to the full Deployment Workbench interface. Perfect for on-the-go updates as machines are moved around and replaced.

Restarting deployment

Sometimes we’ll come across a machine that isn’t in the database, usually something that’s been on the shelf for a while or a laptop that’s been “off the grid” and come back for reimaging. In those cases you only find out that there’s no record after the deployment wizard has started and you get offered a randomly-generated name starting with MININT.

You can also check this in the ZTIGather.log file to see what information was found about the machine and whether any matching records were returned from the database. This step can be handy to troubleshoot unexpected behaviours that can be caused by something a bit out the ordinary e.g. DMI information entered into BIOS incorrectly by the manufacturer, which has happened to us a few times.

To save yourself an unwanted reboot after amending a record in the database, hit F8 whilst at the deployment wizard (assuming you’re in the PXE environment) then type in the magic command:

wpeinit

Deployment will now restart with a fresh “Gather” phase and query the database again to pick up your new record; you should then see the correct name appear in the deployment wizard.

Bulk operations

Picture the situation… a batch of 100 new laptops has arrived; who gets the painful job of entering them into the database? Answer: PowerShell!

If manual data entry leaves you cold you’ll love the next set of scripts, allowing you to create a CSV of import data then run one command; et voila, lots of effort and fingers saved.

First though you need to do a little fix on the database:

Ref: https://syscenramblings.wordpress.com/2016/01/15/mdt-database-the-powershell-module-fix/

The package comes in two parts:

  1. PowerShell cmdlets: https://blogs.technet.microsoft.com/mniehaus/2009/05/14/manipulating-the-microsoft-deployment-toolkit-database-using-powershell/
  2. Import Check script: https://deploymentbunny.com/2016/04/22/os-deployment-using-the-powershell-to-work-with-the-mdt-database-module-sample-1/

The check script is rather important, as without it MDT will quite happily create duplicate records and you don’t want that! If you don’t want to do the additional checks in Active Directory you can disable those sections by commenting them out.

I then made some changes to the Import Check script so it would process a CSV file and do all the work in one go. One big change was to replace the BREAK sections with CONTINUE, as I didn’t want one duplicate-record error to prevent the rest of the import from running. It seems to work for me but I’d advise testing that yourself before doing the same.

Ref: http://www.computerperformance.co.uk/powershell/powershell_continue.htm
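
To give an idea of the shape of it, here’s a minimal sketch using the cmdlets from the module linked above (the server, database and CSV column names are made up for illustration, so check the parameter names against the module itself):

# Connect to the MDT database using the MDTDB module (linked above)
Import-Module .\MDTDB.psm1
Connect-MDTDatabase -sqlServer "SQL01" -database "MDT"

# CSV columns assumed: Name, MAC
foreach ($pc in Import-Csv .\NewMachines.csv) {
    # Skip duplicates rather than aborting the whole run (CONTINUE, not BREAK)
    if (Get-MDTComputer -macAddress $pc.MAC) {
        Write-Warning "$($pc.Name) already exists - skipping"
        continue
    }
    New-MDTComputer -macAddress $pc.MAC -description $pc.Name -settings @{
        OSInstall    = 'YES'
        ComputerName = $pc.Name
    }
}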

Roles

Another part of the database that comes in really useful is Roles. In our case we install different software for machines deployed in a classroom to those that go in offices. On our previous ZCM imaging system I made a custom script for the technician to select the machine type but now we can automate that via the database.

Once a Role is assigned to a machine specific Applications can be assigned. That’s neat in itself but for added flexibility you can also then query the Roles during Task Sequence execution to take specific actions based on what type of machine you’re dealing with.

Ref: https://docs.microsoft.com/en-us/windows/deployment/deploy-windows-mdt/assign-applications-using-roles-in-mdt

At the moment I’ve stuck to using only one Role per machine in the database to make life easy for myself in the Task Sequence. That way I know that when I query the TS variable “Role001” it will always return the data I’m looking for, i.e. is this a classroom machine or one in an office? In an ideal world I’d test with multiple machine Roles to see what order they’re returned in and split things out a bit, but I’m short on time and this method works for what we need.

During the Task Sequence I can then use WMI queries to get the granularity required to deploy software for specific machines, more on that in a later post…

MDT imaging megapost – part 1 (our first server)

The great thing about working in the tech field is that it keeps moving on, ever changing, always evolving. That means sometimes you have to let go of systems that were once the bright shining light of progress when it becomes apparent something better has taken their place. Now is that time for my trusty ZCM 11 custom imaging system; built back in 2013 and star of a 6-part thread series I look back on now and think “wow, I actually did that”.

Until I moved imaging onto a Satellite, the stats say the original Primary server pushed out over 5000 images. Given the length of time the Satellite has been in place, plus the stats from our other sites, that figure can easily be doubled, meaning around 10,000 image cycles have been completed over the course of 4 years.

Compared to the previous process a huge amount of time was saved, which allowed us to complete a large-scale Windows 7 migration with relative ease. Add to that a 4-year saving on ENGL license costs and my motley crew of Bash and PowerShell scripts can retire with the satisfied feeling of a job well done 🙂

The future calls, and it’s shaped like the number 10…

However we need to move on; funnily enough it’s another OS migration knocking on the door that prompted the change, along with a shift in hardware and environment that meant the Linux-based PXE setup was starting to hold us back.

Windows 10 support from ZCM seemed patchy at best, as was timely support for new hardware such as Surfaces and their ilk. Reading the forums and email groups didn’t inspire much confidence either so we decided to start looking elsewhere.

SCCM was the natural direction of travel but having made a substantial investment of time creating ZCM Bundles we weren’t necessarily ready to move all that just yet. Similarly ZCM Patch Management works pretty well these days for covering our 3rd-party apps. With that in mind the Microsoft Deployment Toolkit was the obvious choice.

A nice GUI-based managed scripting environment with Windows PE as the underlying OS ticked all the boxes. Oh and did I mention it’s free!

It’s time for my own MDT… Massive Deployment Thread!

What originally started as a small side-project to push Windows 10 out to a couple of trial tablets has now expanded into a core system that’s been at the heart of our summer works. With that in mind it’s time to write up the journey and the numerous tips, tricks and tools used along the way.

Many of those ideas come from some of the best deployment pros in the business such as Johan Arwidmark, Michael Niehaus and Mikael Nystrom so a big shout out for all the knowledge they share. Hopefully this post will give an idea of how we put those pieces together in a live environment.

The beginning, our first server

Initially we started out deploying MDT for the sole purpose of imaging a batch of demo Surface 3 devices, so the first thing was to spool up a new VM with all the required software and roles installed. Links can be found below to save you some time.

Early fixes and customisations

After getting the basic Deployment Share running we hit a few minor issues that needed resolving, which are worth bearing in mind:

Multiple DNS namespaces

We have two domains that are in use internally, one of which usually gets appended as part of the domain join process and the other via DHCP.

In the PE environment the machine isn’t domain joined and as such the default setting in Bootstrap.ini wouldn’t connect to the deployment share as it didn’t know the correct DNS suffix to append.

Ref: https://scottisageek.wordpress.com/2011/12/22/mdt-2010-and-multiple-dns-namespaces/

We found it quicker in our case to change the DeployRoot setting to the MDT server’s FQDN rather than the short name… problem solved 🙂
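
In Bootstrap.ini that amounts to something like this (the server and share names here are placeholders):

[Settings]
Priority=Default

[Default]
DeployRoot=\\mdt01.your.domain.fqdn\DeploymentShare$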

Share permissions

The default permissions applied to the Deployment Share by the installation wizard weren’t set up as we liked. I can’t remember the exact reason now, but looking back at documentation on other sites I think the share needed locking down to prevent users viewing the Deployment Share content or (even worse) making unauthorised changes to it (!)

We now have specific AD groups and a service account set up so that nominated MDT Administrators can read \ write to the share to upload Application install files etc., the imaging account (more on that later) can only read, and all other users are denied access by virtue of having no rights.

Set UK Locale

A quick and easy tweak sets up the keyboard settings for UK users in Bootstrap.ini:

Ref: http://kabri.uk/2010/01/20/sample-bootstrap-ini-for-uk-deployments/

Similarly, set them in CustomSettings.ini too:

Ref: https://scriptimus.wordpress.com/2011/06/23/mdt-2010-sample-customsettings-ini-for-fully-automated-deployments/
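
The relevant properties end up looking something like this (per the linked samples; KeyboardLocale also goes in Bootstrap.ini so the PE environment is correct too):

KeyboardLocale=en-GB
UserLocale=en-GB
UILanguage=en-GB
TimeZoneName=GMT Standard Time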

There are quite a few other settings you’ll want to add in CustomSettings.ini but more detail on those will follow in relevant posts so keep your eyes peeled!

Update the Deployment Share

This is one action you’ll soon need to get into the habit of! If you make changes to the settings in any of the .ini files or add drivers that you’ll need in the PE environment (basically network and storage) then you need to update the Deployment Share.

This recompiles the Boot Images to include your changes; otherwise you’ll find all those nice new additions above make no difference whatsoever!

Think of this as step 1 of 2 in completely updating the Boot Images, though. If the MDT wizard says that the Boot Images have changed you also need to copy the new WIMs over to WDS so PXE boot is using the latest images.

In WDS browse to your server, select Boot Images, then right-click the image and click Replace Image. Browse to your Deployment Share’s Boot folder and select the correct image for each architecture.

Windows Deployment Services service won’t start

At an early point in our testing WDS decided it didn’t want to start after a server reboot and was spewing error code 0x906. We weren’t sure why and were on the verge of reinstalling from scratch when I spotted this:

Ref: https://social.technet.microsoft.com/Forums/windows/en-US/265b4b53-63ac-491f-817c-6030daa39b81/cant-start-windows-deployment-services-service?forum=itprovistadeployment

As per Aaron Tyler’s advice in the link above, run the wdsutil commands to uninitialize then reinitialize the server, manually pointing to the RemoteInstall folder WDS creates.

wdsutil /uninitialize-server
wdsutil /initialize-server /reminst:[PATH_TO_REMOTEINSTALL_DIRECTORY]

Next time…

That should be enough to get your first server up and running. For the second post in the series we’ll look at the MDT Database and how it turns MDT from a good imaging solution into a great one 🙂

Build your own Thin-ish client with Windows 10 LTSB

After some positive user feedback from the launch of our new Server 2016-powered RDS setup I started wondering if it could have a wider use than just the remote access concept we initially wanted to address. One thought in mind was making use of old \ low-spec devices that would be a bit too clunky to run a modern OS but where the physical hardware itself was in good condition.

Chrome OS-esque distributions such as CloudReady sound nice but come at a cost, so I set up a little side-project to see if there was anything that could be done with what we have on our licensing agreement, or anything in the open-source space.

Looking around there do seem to be various thin-client “converter” products but again they all seem to be commercial e.g. https://www.igel.com/desktop-converter-udc/

The only other option I found was ThinStation, which may also be worth a look when I have more time as it seems a bit more involved to get set up; for now I wanted to stick with the Microsoft RDP client for maximum compatibility.

Windows options

Going back some time I remember Microsoft released cut-down versions of Windows for RDS-type scenarios; in the XP days it was called Windows Fundamentals for Legacy PCs, which morphed into Windows 7 Thin PC in its next incarnation. Effectively all I want the OS to do is boot up, log in quickly, then pass the credentials to a pre-configured RDP file using the standard mstsc.exe application.

However building any solutions on a Windows 7 base going forward seems to be a false economy so I decided to have a look around to see what was available on the Windows 10 codebase – the results were interesting…

IoT is the name of the day

Going forward it seems Microsoft have changed the branding for this kind of cut-down device to Windows IoT. In fact there’s a free edition which sounds ideal, but it only runs on certain devices and isn’t really geared for UI use:

Ref: https://www.theregister.co.uk/2015/05/21/first_look_windows_10_iot_core_on_raspberry_pi_2/
Ref: http://blogs.perficient.com/microsoft/2016/01/windows-10-iot-editions-explained/

Reading a bit further it appears Microsoft license an edition called Windows 10 IoT Enterprise for new thin client devices. Now it gets interesting… it seems that the OS itself is Windows 10 Enterprise LTSB but with some special OEM licensing. It just so happens that edu customers get Enterprise LTSB on EES licensing, so it’s time to take a closer look!

What this does mean is that Windows 10 Enterprise LTSB gets features from the old Windows Embedded products such as Unified Write Filter, perfect for a locked down device that shouldn’t need to experience configuration changes to the base OS.

Ref: https://msdn.microsoft.com/en-us/windows/hardware/commercialize/customize/enterprise/unified-write-filter

All these features are available in Enterprise LTSB simply by going into the Add \ Remove Windows Features window; look for the Device Lockdown section and add whichever ones meet your needs (more on this later).
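
You can also add them from PowerShell. A quick sketch (feature names can vary by build, so list what’s available first to confirm):

# List the lockdown-related optional features available on this build
Get-WindowsOptionalFeature -Online |
    Where-Object FeatureName -like 'Client-*' |
    Select-Object FeatureName, State

# Example: add Shell Launcher (feature name from memory - confirm with the list above)
Enable-WindowsOptionalFeature -Online -FeatureName Client-EmbeddedShellLauncher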

Image & GPOs

After downloading the latest ISO the LTSB 2016 WIM was imported into MDT. I made a quick task sequence to get it up and running and deployed the OS to a Hyper-V VM.

Boot and logon speeds are very quick given the lack of any Modern Apps, which usually need to be provisioned at each new login. The performance gain explains why quite a few people within education have used LTSB for their desktop builds against MS’ wishes; however they’ll miss out on new features such as the much-needed OneDrive Files on Demand, which will only be provided to the Current Branch release.

In theory setting up a Mandatory Profile could speed up login even further, but I haven’t got round to trying that yet.

RDS domain SSO

Upon logging in with domain credentials the next aim is to seamlessly drop users into the RDS farm without any further prompts. After doing a bit of research this can be achieved by setting a couple of GPOs:

  • allow credential delegation
  • trust SHA1 signature of signed RDP file

The need to allow delegation of credentials is fairly commonly mentioned, but a lot of the articles are old and don’t mention where this needs to be set in a 2016 farm. In fact, based on my testing so far, you only need to allow delegation for the FQDN of the Connection Broker.

Computer Configuration > Administrative Templates > System > Credentials Delegation
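
Under that path the setting we’re after looks like this (the broker FQDN is a placeholder):

Allow delegating default credentials: Enabled
  Add servers to the list: TERMSRV/rdsbroker.your.domain.fqdn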

To avoid any unwanted prompts about trusting the signature of a signed RDP file, populate the GPO below and copy \ paste the signature from the RDP file provided by RDWeb for whatever RDS Collection you want to connect to.

User Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Connection Client > Specify SHA1 thumbprints of certificates representing trusted .rdp Publishers

Custom shell

Now with the credentials side sorted out, the final piece of the puzzle was to cleanly launch the session and (here’s the tricky bit) make a seamless logout once the RDS connection is closed. There are a few ways to achieve the first part:

  • use the IoT Embedded Shell Launcher feature \ enable Kiosk Mode via System Image Manager
  • use the Custom User Interface User GPO

Ref: https://social.technet.microsoft.com/Forums/en-US/b4552957-45c2-4cc4-a13d-6397f06ee62e/windows-10-kiosk-build-embedded-shell-launcher-vs-custom-user-interface?forum=win10itprosetup

Ref: https://docs.microsoft.com/en-us/windows/configuration/set-up-a-kiosk-for-windows-10-for-desktop-editions

One thing to bear in mind with Shell Launcher is what happens when the shell (i.e. mstsc.exe) closes; you only have the choice of:

  • Restart the shell
  • Restart the device
  • Shut down the device
  • Do nothing

For the sake of speed logging off would be better, so I decided to go with the Custom User Interface GPO; seeing as the Windows 10 device would be domain-joined anyway, it also seemed a quicker, more efficient way to configure multiple clients.

Seeing as the Custom User Interface is a User GPO it goes without saying that Loopback Policy Processing needs to be enabled for the OU where the client resides. That also comes in handy for a few additional personalisation settings later on too.

The User GPO settings are summarised below; you can add more lock-down policies as you see fit.

Auto log-out on disconnect

Seeing as I wanted to automate the process as much as possible, and all the devices would be domain-managed anyway, the GPO method seemed the quickest way to achieve what I want. It also avoids needing to do an Add \ Remove Features step on each endpoint device.

Another important point is that the Shell Launcher method only provides options to relaunch the program, shut down or restart the machine. For speed I was aiming to log off the “client” when the RDS session is done, so I definitely went down the GPO route as a result.

In the GPO settings I initially tried the standard string you’d expect to launch a Remote Desktop session, i.e. mstsc.exe C:\Default.rdp, but noticed some strange behaviour:

  • Windows logs in
  • RDP file launched
  • connection starts
  • before the green bar completes i.e. handshake still in progress
  • host session logs out

This seemed like behaviour I’ve seen with some other programs in the past, where they appear to terminate mid-way through actions actually occurring. To check, I tried manually with the “start” command, with the same result. It appears mstsc.exe doesn’t play nicely so we need another way…

Plan B was to monitor the mstsc.exe process, then log out from the client once RDS disconnected and the process was therefore no longer running. After looking around and trying a few scripts out I settled on one I found here:

Ref: https://www.experts-exchange.com/questions/24218998/Check-if-a-process-is-running-in-vbs.html

Just add the logout command as the action to run when the desired process terminates and we have the desired behaviour. It takes a second or two to react to the process closing but there doesn’t seem to be a way to speed that up as far as I can see.
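
The script from the link is VBScript, but the logic amounts to something like this sketch (the logoff step is the “action to run” mentioned above):

# Wait until the Remote Desktop client process disappears...
while (Get-Process mstsc -ErrorAction SilentlyContinue) {
    Start-Sleep -Seconds 2   # polling interval: this is the second-or-two reaction time
}
# ...then log the thin client session off
logoff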

Final steps

Now just some finishing touches required to give the solution a bit of polish 🙂

  • set logon and desktop wallpaper
  • disable Task Manager and related lockdown settings

When the machine boots users see a standard login screen, easily customised via GPO…

After login, connection to RDS is pretty much immediate and no further credential \ security prompts appear…

UWF

The final piece of the puzzle is tidying up after the client has been in use for a while. That’s where the Unified Write Filter from earlier comes in handy:

Enable-WindowsOptionalFeature -Online -FeatureName Client-UnifiedWriteFilter

Then enable the filter:

uwfmgr.exe filter enable
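
Note that enabling the filter alone doesn’t protect anything; per the docs linked below you also need to tell UWF which volume to protect (a sketch, and it needs a reboot to take effect):

uwfmgr.exe volume protect c: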

Ref: https://docs.microsoft.com/en-us/windows-hardware/customize/enterprise/unified-write-filter
Ref: https://developer.microsoft.com/en-us/windows/iot/docs/uwf
Ref: https://deploymentresearch.com/Research/Post/632/Using-the-Unified-Write-Filter-UWF-feature-in-Windows-10

And there you have it: a locked-down RDS client that will run on older hardware (Windows 10 works on pretty much anything from the last 10 years), can be managed through your standard AD infrastructure, and uses stuff you already have access to via your Campus agreement… enjoy!

Quick tips: custom port speed sensor for PRTG

We use PRTG Network Monitor at the College to monitor devices right across the network, from switches and firewalls right down to host \ VM \ application level for servers.

Recently I started playing a bit more with the network maps to try and build some “living” documentation that would give us live traffic stats whilst also satisfying the requirement of having some up-to-date network diagrams.

Port speed

Adding the devices and links was simple enough, but we also wanted to display the uplink speed: partly because it’s handy to visualise what goes where, and also from a practical point of view to check if a link degrades.

However I couldn’t find a straightforward option to do this. It seemed possible, as PRTG does actually display the speed when selecting ports to monitor, but it doesn’t make the data available as a channel once added to the Devices list 😦

A bit of research confirmed I wasn’t going mad:

Ref: https://kb.paessler.com/en/topic/14843-how-to-see-port-speed-on-switches

Having the speed in the port name is nice, but that’s a bit too wordy to spot from a distance, and being a text label it isn’t really something that could be “monitored” if the value changes. However the comment about “ifSpeed” did give me an idea…

Custom library time

I’d already used the PRTG MIB Importer quite a few times to bring in libraries for various devices on the network, so I wondered if there was a value I could use to make a custom sensor. There was nothing in the device-specific files for our switches, so I figured the value must be more generic \ standard than that.

I soon found this very handy website that walks through the SNMP OIDs; lo and behold, there are the ifSpeed values!

Ref: http://cric.grenoble.cnrs.fr/Administrateurs/Outils/MIBS/?oid=1.3.6.1.2.1.31.1.1

Well, in fact it’s a slightly different value called ifHighSpeed that’s needed; the reason it’s required for 10Gb interfaces is referenced below:

“That is because when using ifSpeed, the output value will exceed the max. value (4,294,967,295) that the object can support.”

Ref: https://supportforums.cisco.com/discussion/11124321/what-should-be-ifspeed-and-ifhighspeed-2-gig-interfaceport-channel
Ref: https://kb.juniper.net/InfoCenter/index?page=content&id=KB9119

Trying out the values using the free Paessler SNMP Tester along with the MIB Importer eventually brought me to the setup below, which has the nice little bonus of also showing the port’s name when selecting it in PRTG for easier identification. Note the other OIDs used to obtain the port name etc.
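
For reference, the ifXTable entries involved are (as per the OID reference linked above):

ifName        1.3.6.1.2.1.31.1.1.1.1     port name
ifHighSpeed   1.3.6.1.2.1.31.1.1.1.15    speed in Mbit/s
ifAlias       1.3.6.1.2.1.31.1.1.1.18    port description \ alias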

Once done it’s then a simple matter of saving the newly created file for use in PRTG and adding an SNMP Library sensor to a test device.

Once added, the sensor displays like this (the port name can be changed as required):

Map display

The sensor was working well at this point but there was one final tweak required before getting the display I wanted. PRTG’s default template for a value-only map item adds the device’s name to the label, which in our case made for quite an unsightly string of text that got in the way of other map elements.

The solution? Go to your PRTG folder, which looks something like:

*install path*\PRTG Network Monitor\webroot\mapobjects

  • copy the item template in question (in this case “An icon B2.html”) and name it as required
  • edit the copied file to remove the “ParentDevice” string
  • note that you also need to change the display name at the top of the file to something unique

Compare the before and after: the original PRTG file, and the updated file with the name edited and the ParentDevice string removed.

Once done, open up the PRTG Administration Tool and restart the Core Server Service.

End result

Now when you look in your map icons you’ll see the additional option appear as you named it above.
Drag that in and you’ll get the output in the format below, nice and clean 🙂