#WindowsUglySweater 2021 unboxing and reveal

In recent years I’ve really embraced the joys of the novelty Christmas jumper craze, with recent additions including Fallout 4, Pac-Man and internet phenomenon Grumpy Cat (RIP). Last year the Windows 95 themed #WindowsUglySweater looked the ideal candidate but I missed my shot and it sold out 😦

This year however I’ve been lucky to get one courtesy of the @Windows Twitter account, essential IT geek winter wear for the holiday season coming right up!


#WindowsUglySweater 2021 box

Coming all the way from the USA, a sturdy square shipping box opens to reveal the Minesweeper-themed artwork for the 2021 #WindowsUglySweater design. The shiny white box will certainly fill a nice amount of space under your tree.

Once you pop the lid you’ll see a Minesweeper themed sticker inviting you to open the wrapping paper. Fortunately hitting this particular mine won’t be Game Over for your new jumper!


Inside you get a real-life Solitaire card and the jumper itself, with lots of neat touches from the window controls (in the top right, of course) to the XYZZY code on the back (if you know, you know…)

And here’s the full view, sitting ready for https://www.savethechildren.org.uk/christmas-jumper-day/about
…but hey, once it’s December any day is fair game, right? 🙂

#WindowsUglySweater 2021 front view

When, where, how?

Keep a close eye on the @Windows Twitter in 2 days’ time and be quick off the mark to grab yours!

Configuring EAP-TLS Wireless connections on macOS with Jamf

After procuring a new Ruckus Wireless network to replace our soon-to-be EOL Aruba equipment my attention turned to simplifying the current setup in preparation for the changeover. One of those tasks involved moving to policy-defined Wi-Fi connections for our internal devices.

eduroam for organisation owned devices

After configuring eduroam for BYOD I was intrigued by the possibility of using the same SSID to also onboard our college-owned devices; a mixture of Windows 10 domain-joined laptops and MacBooks on macOS Mojave. Now that we have Jamf Pro fully operational, the task looked much more manageable.

I decided to look into certificate-based authentication (EAP-TLS) to achieve this. All the information to do this with AD CS and macOS devices is out there but it’s a bit scattered so this post aims to bring it all together in one handy step-by-step guide.

Jamf Pro AD CS Connector

We’re using Active Directory Certificate Services (AD CS) to issue certs to our devices using an auto-enrollment policy. You have two methods to do this; either use the original Jamf payload or the new Jamf Pro AD CS Connector. We don’t have SCEP \ NDES enabled on our CA (which appears to be required for the older Jamf AD CS method) so the Connector looked a better option.


The latter has the advantage that the machine in question doesn’t need to be directly connected to AD CS to renew its cert, which could prove useful in future as well.


Setting up the link between Jamf and AD CS

When running the installation the PowerShell command will look something like

PS C:\Jamf\adcs-connector-1.0.0\ADCS Connector> .\deploy.ps1 -fqdn youradcs.internaldomain.co.uk -jamfProDN jamf.yourdomain.co.uk -cleanInstall
  • youradcs.internaldomain.co.uk is the DNS name of your AD Certificate Services server
  • jamf.yourdomain.co.uk is the DNS name of your Jamf Pro server

The Jamf instructions above are pretty simple for the first part of the installation but pay attention to some key points below:

  1. the Jamf Pro AD CS Connector will only work on Windows Server 2016, don’t even try it on anything older!
  2. check, check and check again that you’ve saved the “Client cert keystore password” generated by the PowerShell script before continuing
  3. when you configure the CA details in Jamf Pro make sure you use the name of the CA as it is displayed in AD CS
    this is really easy to miss as the instructions aren’t particularly clear, note the setup as per the YouTube walkthrough below
    (use this link to skip to the relevant section about naming the CA https://youtu.be/oRkpkN1Z3aI?t=612 )

Credit to Daniel MacLaughlin for making this and highlighting the key points 🙂
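If you’re unsure of the exact CA name, you can query it rather than guess; a hedged sketch using the built-in certutil tool, run in an elevated prompt on (or against) the CA server:

```powershell
# Ask the CA for its registered name; this is the value Jamf Pro expects,
# not the hostname of the server the CA role is installed on
certutil -CAInfo name
```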

AD CS Certificate Template

If you already have a certificate template deployed for your Windows machines don’t try and re-use it for the Jamf Pro AD CS Connector. You need different settings when deploying with the AD CS connector as Jamf Pro will be requesting the certificates rather than the Computer itself.

  • the Certificate Subject Name must be set to “Supply in the request”
  • the Jamf server’s Computer Object in Active Directory needs to be granted Enrol \ Auto Enroll rights

Configuration Profile

Now go into Jamf and build a Profile to push out to your devices.

This part is important! You need to have all these elements defined within the same Profile for it to work!

  • Certificate to be generated from AD CS
  • Root CA for AD CS
  • Root CA for RADIUS server
    (if different to AD CS Root, which was the case for our eduroam profile)
  • Wireless network payload to actually make the connection

Defining the Certificate Payload

Enlarge the image below to see the Certificate payload more closely. You’ll see where I named the PKI CA wrongly at first, but even after changing it to the proper CA name the UI doesn’t update. Still works though, which is the main thing (!)

Certificate subject is CN=$COMPUTERNAME.yourinternal.domain.co.uk
SAN name is $COMPUTERNAME.yourinternal.domain.co.uk

It appears having the SAN defined is important for the next part when you define the Wireless connection Payload.
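Once the profile lands you can sanity-check the deployed certificate from the Mac’s Terminal; a sketch, assuming the cert went into the System keychain and matches the machine’s short hostname:

```shell
# Find the machine certificate in the System keychain and dump it in full
# so the Subject and Subject Alternative Name entries can be checked
security find-certificate -c "$(hostname -s)" -p /Library/Keychains/System.keychain \
  | openssl x509 -noout -text
```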

Defining the Network Payload

When configuring the Wi-Fi connection Payload itself the next part is absolutely crucial. All credit to sbirdsley on Jamf Nation for this vital bit of info:


The username must be defined as follows or the connection will fail:


Also note the Identity Certificate you supply in the Network Payload must match the one you enter in the Certificate Payload. It’s on a dropdown so should be easy to match but if you have multiple entries be careful to pick the correct one.

Obviously, you’ll also need to set WPA2 Enterprise and TLS in the Security Type and Protocols sections.

Deploying the Profile and troubleshooting errors

Once saved the Configuration Profile should apply quickly.

Note: you will need to reboot for the connection to take effect. I’ve read elsewhere that the certificates are deployed to the System Keychain, which only connects at startup; if you try to connect manually once already logged in you’ll get errors because the user doesn’t have access to the required certificates.

Another common error you may see in the Jamf logs if the profile doesn’t apply successfully is this:

Unable to retrieve AD CS certificate for profile payload

If you receive this error double-check the name you entered in PKI settings when defining the AD CS server. If this doesn’t exactly match the name of your Certificate Authority (note this is the name of the CA itself, not the name of the server on which it’s installed) the profile won’t work.

DEP users beware

Also a further note for those deploying new machines via DEP. Because Configuration Profiles apply pretty much as soon as they possibly can, there’s a possibility you’ll get a certificate generated too early in the process with the wrong machine name, e.g. “Administrator’s MacBook Pro” or something along those lines.

The best workaround we have for that so far is to name any manually-enrolled machines before starting the enrollment process and for brand new machines run machine naming as early on in the deployment process as possible. If you get a failed Wireless connection on a newly-enrolled machine check the certificates list in AD CS for any wrongly-named certificates. Revoke them and try again.
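To hunt down wrongly-named certificates on the CA without clicking through the console, something like the below against the issued-certificates database may help (the Common Name shown is a hypothetical example to adjust):

```powershell
# List issued certificates whose Common Name matches the default out-of-box name,
# showing the Request ID needed if you then want to revoke them
certutil -view -restrict "CommonName=Administrator's MacBook Pro" -out "RequestId,CommonName,NotAfter"
```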

NPS logging

During the initial setup and troubleshooting process I found that our RADIUS server wasn’t giving me much detail in the default log files created by Windows NPS.

Turns out you can get a much more readable version in the Event Viewer by manually enabling some additional Audit Log settings – thanks Mike Nowak for the tip!

NPS Authentication events not showing up in Event Log
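If I recall correctly the tip boils down to enabling the NPS audit subcategories; a hedged sketch, run from an elevated prompt on the NPS server:

```powershell
# Enable success and failure auditing for the Network Policy Server subcategory
# so NPS authentication events (6272 granted / 6273 denied) appear in the Security log
auditpol /set /subcategory:"Network Policy Server" /success:enable /failure:enable
```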


Moving from Federated to Pass-through auth for Office 365

After deploying Office 365 for all our users back in 2014 (which now seems a very long time ago!) we moved to a federated setup in order to gain SSO capability for our SharePoint Online-based Staff Intranet portal.

We weren’t keen on deploying ADFS at the time due to the amount of infrastructure to get a truly resilient setup (including having elements up in Azure) so we used Centrify Identity Service instead (now known as Idaptive), which at the time offered a free plan for basic functionality. It kept the bulk of the federation infrastructure in the cloud and only required a simple Agent installed locally to enable SSO.

That’s served us well for the past 5 years but it’s time to change. We now have Azure AD Premium in place, Hybrid AD Join for Windows 10 is a common deployment scenario and we can make use of MFA and self-service features through Azure, so keeping a non-standard, free \ unsupported identity provider no longer makes sense.

Microsoft also has a newer authentication method available that follows the simpler deployment method we’ve grown accustomed to, called Pass-through authentication:


The model seems similar in architecture to Azure AD Application Proxy (more on that later) so we started planning to make the move.


Installation and configuration goes pretty much to the letter of the documentation, Microsoft have done a good job here 🙂


If, like us, you’re moving from an external federation provider, check and then double-check that you have a copy of the results of the Federation Settings cmdlet before proceeding, in case you need to roll back for any reason:

Get-MsolDomainFederationSettings -DomainName youroffice365domain.co.uk | fl *
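For reference, the eventual cutover itself is a single cmdlet; a sketch, assuming the MSOnline module is installed and you’re connected with Connect-MsolService:

```powershell
# Convert the domain from Federated to Managed authentication;
# run this only once the PTA Agents show as Active in the Azure portal
Set-MsolDomainAuthentication -DomainName youroffice365domain.co.uk -Authentication Managed
```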

We’re currently running 3 Agents but will probably add a couple of extras on our other sites for further resilience later on.

Note: one thing the documentation doesn’t show is an extra little warning triangle that seems to have been added to the Azure Portal when only 1 Agent is installed.

There’s no tooltip or message to say what it’s trying to tell you, but it disappears as soon as you install the second Agent, so we’re working on the theory it’s there to flag up a vulnerable configuration.

The screenshot above was taken as the Azure AD Connect blade loads – as soon as the number of agents calculates correctly and shows up as 2 or higher it disappears.

Proxy issues

Upon deploying the Agents we saw them register swiftly in the Azure portal and all seemed well, until we saw the IP address that was detected as the traffic source. Rather than a direct connection from the firewall as we’d expect from our servers the traffic was coming from the external address of our proxy instead.

We use WPAD for our clients but not for servers so something was clearly amiss. We don’t want to add any unnecessary dependencies to our Office 365 authentication system so needed to track this down and solve it before proceeding any further.

We checked the obvious IE settings in Windows but sure enough Auto Detect Proxy was off. We also checked the status of the WinHTTP proxy from the command line and reset it as well for good measure…

netsh winhttp show proxy
netsh winhttp reset proxy

…both with no success

On the proxy itself we could see traffic for the PTA Agents so it was definitely going out through the wrong path. This post lists the URLs used by PTA, which came in handy for double-checking the theory above.

Azure AD Connect blocked by firewall

As we were troubleshooting I had a bit of a deja vu moment, recalling a similar issue when setting up Azure AD Application Proxy. After Googling my own blog (!) it became apparent that we’ve been here before…


Sure enough browsing the Program Files directory for the Pass-through Authentication Agent (and similarly Agent Updater) the same .config files were present.

  • AzureADConnectAuthenticationAgentService.exe.config
  • AzureADConnectAgentUpdater.exe.config

Adding the same lines from last time around then restarting both Updater and Agent services immediately changed the traffic path to a direct connection, no more proxy involvement! The below lines need to go in the <configuration> section:


<defaultProxy enabled="false"></defaultProxy>
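For context, in our copies of those .config files the element sits under system.net, so the relevant part of the file ends up looking roughly like this:

```xml
<configuration>
  <system.net>
    <!-- force a direct connection, bypassing WPAD / proxy auto-detection -->
    <defaultProxy enabled="false"></defaultProxy>
  </system.net>
</configuration>
```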


It’s surprising this isn’t mentioned in the documentation anywhere and seems to be an issue that hasn’t been fully resolved since it first appeared back in 2016. One for the Azure team to take a look at? There is a similar section in the FAQ about explicitly specifying a proxy but not about rogue use of WPAD when you’d expect a Direct connection to be used:


For more background about how the .NET proxy detection works take a look here.


Disclaimer: the above advice comes with no warranty so make changes at your own risk. If in doubt raise a ticket with Microsoft support!

End-user experience

After making the required GPO changes and feeling brave enough to convert from Federated to Managed status we began to check sign-in functionality on a range of devices.

With initial checks looking good on both mobile and desktop devices we moved to the finer points of the changeover.

Smart Links

We used Smart Links with the previous Centrify federation service to provide true SSO for our Staff Intranet and OneDrive when users clicked on them \ opened up their browser home page.

The Pass-through auth documentation suggests that full SSO can be obtained provided that the service in question has a domain hint present. Even better use this very handy Smart Link generator by Jack Stromberg, which works a treat!

O365 Smart Link/SSO Link Generator


Seamless SSO experience

With Seamless SSO enabled the following work the same as a Federated service would in terms of how they appear to the end-user:

  • Office 365 web apps (Outlook, SharePoint, OneDrive etc.)
  • Outlook client

The following need the email address entered manually (no password required):

  • Windows 10 OneDrive client
  • Office 2016 desktop apps

We’ll be enabling Hybrid AD Join in the next couple of days after testing on some machines in the office, which I believe should help fill in the gaps above, as well as enabling SSO to the Windows Store for Education.


Cover image credit: Photo by rawpixel.com from Pexels

Deploying and Monitoring Azure AD Password Protection

As another layer in protecting against insecure passwords I’d been waiting for Microsoft’s Azure AD Password Protection to come out of Preview for some time but now it’s moved to full GA release we’ve implemented it into our AD \ Office 365 environment.


The premise of the product is simple; when a password is set or changed check it against a list of bad \ known breached passwords in Microsoft’s password database. If the password is unsuitable then prevent it being set.

This should help add a bit more protection against the use of breached passwords, for example the reuse of credentials between personal and work accounts, a risk users are slowly becoming more aware of thanks to services such as https://haveibeenpwned.com


The installation went pretty much as the Microsoft documentation describes. We installed the Proxy service on our Azure AD Application Proxy servers as the GitHub docs suggest this is a supported configuration.


A couple of things need to be watched out for though so check your environment particularly for…

  • .NET 4.7.2 is required on the Proxy Servers and 4.5 on Domain Controllers running the DC Agent Service
    check this via registry in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full
  • not strictly related to Password Protection but if you’re planning on enabling Password Writeback in Azure AD around the same time (we were as it all comes in as part of the Azure AD Premium P1 license) make sure your version of Azure AD Connect is up to date
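To check the .NET build quickly from PowerShell rather than regedit, read the Release value; 461808 or higher corresponds to 4.7.2 according to Microsoft’s version table, so double-check that mapping for newer builds:

```powershell
# Read the .NET Framework 4.x Release DWORD; 461808+ means 4.7.2 or later
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release
```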

If all goes well you’ll get this…

and after running the PowerShell commands to register the service this…

Monitoring with PRTG

As per the Microsoft documentation the Password Protection service creates its own Event Logs, which can be used to check how many times banned passwords have been detected and by whom. Although the output of the PowerShell summary command works well…


…I wanted to see if I could get something a bit more real-time using PRTG to monitor the logs.

Initially I was pleased to see the native WMI EventLog sensor but then hugely disappointed to see it only works on the standard Windows categories and not additional Applications and Services logs. Need to try harder there PRTG, I expect more from you!

Fortunately they do have a workaround EXE/Script Sensor that effectively runs a PowerShell script (quite ironic!) to monitor these additional logs. Under the hood the script is using a native Windows function called Get-WinEvent and then pushing the output to PRTG.


For some (still unknown) reason the script wouldn’t return output when searching for multiple events so to save time and frustration I’ve split the monitoring into two sensors. One for password set events and another for password change events. The code for both is below. Remember to update the sensor settings as you move from Audit to Enforcement mode!


-ComputerName %host -Username "%windowsdomain\%windowsuser" -Password "%windowspassword" -ProviderName "Microsoft-AzureADPasswordProtection-DCAgent" -Channel "Microsoft-AzureADPasswordProtection-DCAgent/Admin" -MaxAge 1 -EventID 10024 -Level 4
-ComputerName %host -Username "%windowsdomain\%windowsuser" -Password "%windowspassword" -ProviderName "Microsoft-AzureADPasswordProtection-DCAgent" -Channel "Microsoft-AzureADPasswordProtection-DCAgent/Admin" -MaxAge 1 -EventID 10025 -Level 4


-ComputerName %host -Username "%windowsdomain\%windowsuser" -Password "%windowspassword" -ProviderName "Microsoft-AzureADPasswordProtection-DCAgent" -Channel "Microsoft-AzureADPasswordProtection-DCAgent/Admin" -MaxAge 1 -EventID 10016 -Level 4
-ComputerName %host -Username "%windowsdomain\%windowsuser" -Password "%windowspassword" -ProviderName "Microsoft-AzureADPasswordProtection-DCAgent" -Channel "Microsoft-AzureADPasswordProtection-DCAgent/Admin" -MaxAge 1 -EventID 10017 -Level 4
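Under the hood those sensor parameters translate to a Get-WinEvent filter along these lines (a sketch; the event ID and remote DC name are placeholders to adjust for your environment):

```powershell
# Count matching Azure AD Password Protection DC Agent events from the last day
Get-WinEvent -ComputerName yourdc01 -FilterHashtable @{
    LogName      = 'Microsoft-AzureADPasswordProtection-DCAgent/Admin'
    ProviderName = 'Microsoft-AzureADPasswordProtection-DCAgent'
    Id           = 10025
    StartTime    = (Get-Date).AddDays(-1)
} | Measure-Object | Select-Object -ExpandProperty Count
```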

Password Protection in action

Here’s a couple of screenshots of how the service presents itself to a user attempting to change to an insecure password:

On a PC using CTRL+ALT+DEL

From within Office 365 using Password writeback


During my testing I was surprised to get a password through that I expected to be banned given it’s a popular UK football team and in the top 10 most used passwords for football teams. On haveibeenpwned the password I used was shown as breached but not so in Azure AD Password Protection. For good measure I tried adding the names of all the current (for the next week or so at least!) Premiership sides to the custom banned list…

The character limit is interesting: although entries like manchesterunited or tottenhamhotspur are perfectly valid in terms of length, they’re terrible password choices in terms of dictionary attacks.

Would be interested to hear a bit more feedback from the Password Protection team on how we can protect against this, and why all the haveibeenpwned passwords haven’t yet been added to the Azure AD Password Protection database, as you surely wouldn’t want any of those being set by your users?


Cover image credit: Photo by Matthew Brodeur on Unsplash

eduroam on Aruba and Microsoft NPS – an end-to-end guide

After a meeting with our Jisc account manager a few months back we decided to join the eduroam service. This provides RADIUS based Wi-Fi access for both our students and any educational visitors who have an eligible account via their own institutions. This post outlines some of the infrastructure changes we put in place to provide the service using our Aruba controller and Microsoft NPS.


To find out more about the eduroam service and how to get started visit:


External DNS record & IP address

Create an external DNS record and assign to an external-facing IP address that will be used by eduroam to contact your RADIUS server.

Firewall access

Ensure that your firewall rules are tight and locked down to the specific eduroam NRPS servers via their IP addresses. Only allow the RADIUS port 1812 to accept connections.

Note: you must allow ping on the eduroam external IP otherwise you will get server down errors in the support portal.

Also note the status page has a 24 hour refresh period so if you need to resolve a configuration issue don’t expect everything to go green straight away 🙂

NPS configuration

You’ll refer to the following two links a lot during this process:


Jon Agland has written an excellent step-by-step guide to configuring Microsoft NPS for eduroam. Follow the steps precisely and you won’t go wrong! You’ll also set up a CA along the way, again no drama as long as you follow the guide:


RADIUS attributes and roles

In order for the Aruba controller to be able to assign eduroam Home and Visitor traffic to specific VLANs you’ll need to send RADIUS Vendor-Specific Attributes across during authentication on the NPS server. For more background see these two very handy links:


Note the requirement to assign the attributes for VLAN assignment, which acts as a filter in case any incompatible ones come down from visiting organisations.

Aruba configuration

(replace XX with a short name for your organisation, or an identifier of your own choosing)

Click to access cbp-79_guide_to_configuring_eduroam_using_the_aruba_wireless_controller_and_clearpass.pdf

  • Add RADIUS Server your-nps-server and configure with shared secret etc.
  • Add to Server Group XX_eduroam
  • Add L2 Authentication > 802.1X Authentication profile XX_eduroam
  • Add AAA Profile eduroam_AAA
  • Add User Roles eduroam-logon, eduroam-home and eduroam-visitor
  • Add SSID Profile eduroam_SSID
  • Add Virtual AP eduroam_VAP


By sending RADIUS attributes across after matching a rule on NPS you can set additional rules on eduroam traffic within the controller. For example we provide a “pot” of bandwidth for all users in a particular role to share, ensuring that our Internet connection doesn’t get saturated by BYOD traffic. We have separate bandwidth contracts for staff, students and visitors.


Configure the Attributes within your NPS Network Policies (under the Settings > Vendor Specific tab)

For a list of all supported Aruba VSAs visit

Aruba PEF rules (firewall policies)

After ensuring that the mandatory eduroam applications are allowed to connected outbound we also ensure Guest Isolation is enabled (so devices can’t contact each other) and also that eduroam users can only contact specific internal services (such as Moodle) on defined ports using Aruba role-based firewall ACLs.

If you’re not using a tunnelled \ controller setup then the security rules will need to be done with switch ACLs instead.

Logging and troubleshooting

Part of the eduroam specification requires you to retain DHCP and RADIUS logs for 3 months. Use the following to make the process easy on yourself…

NPS logs

NPS logs aren’t particularly human-friendly by default but with the help of this rather handy tool you can use PowerShell to search through them for particular usernames. Very handy if you’re experiencing authentication issues.


You need to change the Log File format to IAS (Legacy) for the viewer tool to work correctly. Set the options as per screenshot below:

Once done you then browse the LogFiles folder, check the current file name and use the script as per below, substituting “AUser” for either a username or MAC address to search for. Very handy to diagnose connection errors or in the event you need to investigate activity on the eduroam network…

C:\Windows\System32\LogFiles\NAP_Logs_Interpreter.ps1 -filename C:\Windows\System32\LogFiles\IN1805.log -SearchData AUser -SearchDays 5

(use of the SearchDays switch is optional)

DHCP logs

eduroam also requires that you keep 3 months of DHCP logs to identify users and computers that have connected to the network. Fortunately there’s a handy DHCP log file retention script available that can help (as the standard Windows functionality is rather basic to say the least).

Credit to Jason Carter for the original script and Patrick Hoban for keeping a copy alive when the original blog post went down 🙂


I made a couple of tweaks to make the script easier to edit and added some logging using the handy PowerShell Transcript method


Download my edited script from OneDrive and replace the server name variable with the name of your backup target then follow these steps to implement it.

  1. create a shared folder on your central logging server (or file server if you prefer)
  2. create a service account user to run the Scheduled Task that will archive the logs.
    I recommend a standard Domain User account with a complex password.
  3. add the account to the Backup Operators group on each DHCP server
    (this allows the script to access the files it needs)
  4. create a Scheduled Task to run the script weekly
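Step 4 can itself be scripted; a hedged sketch using the ScheduledTasks module, where the script path and service account name are placeholders:

```powershell
# Weekly Scheduled Task running the DHCP log archive script as the service account
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
           -Argument '-ExecutionPolicy Bypass -File C:\Scripts\Archive-DhcpLogs.ps1'
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 6am
Register-ScheduledTask -TaskName 'Archive DHCP Logs' -Action $action -Trigger $trigger `
    -User 'YOURDOMAIN\svc-dhcplog' -Password 'ServiceAccountPassword'
```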

DNS server

I decided to run a separate DNS server for the eduroam clients. That way they only resolve the internal server names we want to expose (e.g. web server, VLE etc.) and it reduces any load on our main AD infrastructure.

I’m a fan of the CentOS distro so set up a basic server and added BIND

yum install bind bind-utils -y

Then configure BIND via /etc/named.conf using https://opensource.com/article/17/4/build-your-own-name-server as a template. We use the IBM Quad9 DNS resolver (set as a forwarder) to ensure clients don’t connect to known malicious domains.
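For reference, the forwarder part of /etc/named.conf ends up looking something like this (Quad9 anycast addresses shown; the rest of the options block is elided):

```
options {
    ...
    recursion yes;
    forwarders { 9.9.9.9; 149.112.112.112; };
    forward only;
};
```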

Walled Garden for CAT

If you don’t have an onboarding tool such as Aruba ClearPass, Ruckus CloudPass etc. then the eduroam CAT tool will be your friend. Initially you’ll need to configure CAT with your eduroam certificate, organisation logo etc. to create the profiles used for configuration. Follow the guide below to set up your organisational profile:


Once done the site used for configuring clients is available at the dedicated URL https://cat.eduroam.org/

Unfortunately Android users need to download an additional app to install a configuration profile on their device. Because of this I’ve personally found it easier to tell users to download the eduroam CAT app from the Play Store as their first action then find the Havering College profile in there, rather than bouncing back and forward from browser > app > browser.



Onboarding SSID

I tried a few ways to get the eduroam CAT site to automatically open when users connect to a setup \ guest SSID, so that we can easily onboard them when they first arrive at the college.

However sometimes client devices get too clever for their own good: if they see a Captive Portal with a redirect they try to open the CAT tool within their own Captive Network Assistant mini-browser. The problem is that the CNA browser doesn’t support certain features and scripts, leading to the CAT page appearing but not doing very much, as the profile doesn’t download or install as it normally would.


Although you may be able to do some funky URL redirection with roles on your Wi-Fi system, also bear in mind some users may just want to connect to wireless to use apps and won’t touch the browser at all. At this point the old-fashioned methods may work best, and clear signage telling users to visit the CAT site may be necessary (perhaps cat memes may well be a valid tactic?!)

In Aruba you can use the “Walled Garden” feature to set up an SSID that only allows access to the CAT website


However note that you will need to add a series of Google domains to the whitelist to ensure Google Play access is also allowed for Android users to get the eduroam CAT app.

check the About > About eduroam CAT page for the most up-to-date domain list as the Google URLs in particular change from time to time


  • cat.eduroam.org (the service itself)
  • crl3.digicert.com, crl4.digicert.com (the CRL Distribution Points for the site certificate), also TCP/80
  • ocsp.digicert.com (the OCSP Responder for the site certificate), also TCP/80
  • android.l.google.com (Google Play access for Android App)
  • android.clients.google.com (Google Play access for Android App)
  • play.google.com (Google Play access for Android App)
  • ggpht.com (Google Play access for Android App)

RECOMMENDED for full Google Play functionality (otherwise, Play Store will look broken to users and/or some non-vital functionality will not be available)

  • photos-ugc.l.google.com
  • googleusercontent.com
  • ajax.googleapis.com
  • play.google-apis.com
  • googleapis.l.google.com
  • apis.google.com
  • gstatic.com
  • http://www.google-analytics.com
  • wallet.google.com
  • plus.google.com
  • checkout.google.com
  • *.gvt1.com


A mandatory requirement of joining eduroam is that you provide a support page for users looking to connect to the service.

2.5. eduroam Service Information Website

2.5.1. Requirements

  1. Participants MUST publish an eduroam service information website which MUST be generally accessible from the Internet and, if applicable, within the organisation to allow visitors to access it easily on site.

Creating a Moodle course to house documentation and guides seemed a good fit for this seeing as our Student Intranet is hosted there.


If you’d like to use our course as a template for your own site get in contact and I’ll send a copy over.

Mac deployment with Jamf, DEP and more

Thanks to a funding bid from the Mayor of London we were recently able to upgrade our key Mac teaching rooms with the latest hardware for our Media students. With it came a foray into the brave new world of Jamf Pro for management and DEP \ MDM deployment, rather than the trusty DeployStudio imaging method the college has used for many years.

This post outlines some of the new things my colleague Tristan Revell and I have learnt along the way.

Most of the setup process during our Jamf Jumpstart session went smoothly but watch out when configuring your Active Directory bind settings. The service account we used initially wouldn’t connect for love nor money and in desperation I tried making a new account, avoiding any extra characters (-, _ etc) in the username and no special characters in the password… voila it worked first time. Whether that’s a bug or standard behaviour I’m unsure as it didn’t seem to be documented anywhere as a specific requirement.

DEP deployment process

Order from a well-known reseller to ensure they add your purchases into DEP correctly. Sometimes it can take a few days (or longer) to get sorted out so plan this in early if you’re on a deadline!

Once assigned add the Macs to a PreStage Enrollment group in Jamf. Remember to select all the Macs you want to activate with DEP and then hit Save when ready to deploy.

Firewall rules

In order for DEP & MDM Enrollment to work correctly you need to ensure the Macs can contact Apple’s servers for various applications, such as

  • Apple Push Notifications
  • Apple Maps (to set time and date correctly during out-of-box setup)

We allow these apps out to Apple’s IP range as per their documentation https://support.apple.com/en-us/HT203609

Internet recovery and OS upgrade

We needed to upgrade the Macs straight out of the box as they shipped with High Sierra but we wanted to start from the latest OS, Mojave. Rather than install the older version and upgrade we decided to wipe and use Internet Recovery to start from a fresh copy of Mojave:


The downside of Internet Recovery is that it not only uses Apple’s published URLs such as osrecovery.apple.com as found at https://support.apple.com/en-gb/HT202481 but also a bunch of Akamai IPs 😦

At present we’re monitoring the firewall for traffic blocks when we do Internet Recovery and so far the IP-based whitelist entries we added have worked for a couple of weeks without change. However I don’t expect that to stay the case long-term and no doubt will be repeating the process at some point.

Naming machines

For reasons only known to the Jamf developers there’s no built-in functionality to name a machine when it runs through the DEP Enrollment process. The automatically-generated name created by DEP is useless to us as we rely on room-based naming for classroom management.

Fortunately there’s a workaround using a CSV file (or Google Docs if you prefer) that can do a lookup based on serial number and name the Mac to something of our choosing. We host the file internally and provide access to technicians via a network share so they can update the list as required.


Just make sure you run this early on in the deployment process so the correct name gets used when binding to Active Directory (more on that later)
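The idea can be sketched roughly as below. This is an illustrative example, not the actual community script: the CSV path, its serial,name layout and the use of scutil are all assumptions.

```shell
#!/bin/bash
# Sketch: look up this Mac's name from a "serial,name" CSV and apply it.
# (Hypothetical paths and names - adjust to your own environment.)

CSV_PATH="${1:-/Volumes/TechShare/macnames.csv}"

lookup_name() {
  # $1 = serial number, $2 = CSV with lines like C02AAA111BBB,MEDIA1-01
  awk -F, -v s="$1" '$1 == s { print $2; exit }' "$2"
}

SERIAL=$(system_profiler SPHardwareDataType 2>/dev/null | awk '/Serial Number/ { print $NF }')
NAME=$(lookup_name "$SERIAL" "$CSV_PATH" 2>/dev/null)

if [ -n "$NAME" ]; then
  # Set all three macOS name types so the later AD bind picks up the right name
  scutil --set ComputerName "$NAME"
  scutil --set HostName "$NAME"
  scutil --set LocalHostName "$NAME"
fi
```

Running it early in the Enrollment process means the correct name is already in place before any bind step fires.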


First-run experience

By default the DEP Enrollment process (and subsequent Policies) run silently in the background, not giving the best first-run impression to the user, especially if something like Adobe CC is installed as part of the process! Again the community has stepped in and written the excellent SplashBuddy tool (also check out DEPNotify if your requirements aren’t as complex).

Sachin Parmar’s blog is a fantastic guide to getting the whole process up and running


In practice the only thing we’ve found different to his steps is the order we created the components; due to the naming and selection order during deployment we found it easier to do it like this:

  1. create DEP Policies in Jamf
  2. create Packages in Jamf Composer
  3. edit the io.fti.SplashBuddy.plist file to reference the Packages named above

Apart from that the process works like a charm and looks spot on too!

I’ve taken a few liberties with the screenshot below showing SplashBuddy doing its thing, as I could never get a photo of a Mac looking as good as Thomas Quaritsch has with the ones he’s kindly uploaded for free on Unsplash 😎

Thomas Quaritsch

Skipping Setup Assistant screens

We wanted to skip as many of the Setup Assistant screens as possible so ticked all the options within Computers > PreStage Enrollments, but the “add user” screen still appeared. After some help via the MacAdmins Slack channel it turns out the option to skip user creation is split out into the Account Settings tab…

Dock management

One area Jamf definitely need to improve upon is management of the Dock. The built-in functionality only provides configuration of the default Apple apps, which definitely aren’t the ones we’re concerned about presenting to the user (think Adobe CC, Microsoft Office etc. instead)

We found two third-party tools that work better than the Jamf functionality, try both and see which one works for you:

Dock Master

Dock Master has an online version and a more recent local utility to help build a custom Dock layout:


Profile Creator

In the end Tristan settled on another utility that he found more flexible when creating the Dock profile. It’s called Profile Creator and does exactly what it says on the tin; not just the Dock but numerous other settings too, such as Firefox, Chrome and many more. It’s still in beta but working well for us:


Mounting DFS shares

We’ve used Windows DFS shares under a new dedicated Namespace to store the home and shared drives for the new Macs and wanted them to auto-mount on login. Although Jamf includes built-in functionality along those lines (“Use UNC path from AD to get network home folder”) there are a couple of reasons we didn’t want to use it:

  • we’ve been burnt by issues relating to network homes in the past and found local homes provide a more reliable experience
  • we don’t currently populate the Home folder attribute in Active Directory (Folder Redirection works much better) and didn’t want to change that configuration in case doing so had any unforeseen consequences

So instead we looked for what I thought would be a simple script to mount a share on login. That turned out to be quite a voyage of discovery!

Putting things in context

The first thing we learnt was that Jamf scripts run at login as root, so any script needs to use a Jamf-defined parameter called $3 to obtain the currently logged-in user. We also found another setting that determines whether scripts run before the Finder loads or asynchronously in the background. The latter helped keep logins speedy while still getting the environment set up as we wanted it.
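To make that concrete, here is a skeleton sketch (not Jamf’s own template): Jamf passes the mount point, computer name and logged-in username to policy scripts as $1, $2 and $3, so anything user-specific must reference $3 explicitly since the script itself runs as root.

```shell
#!/bin/bash
# Sketch of a Jamf login-policy script skeleton. The smb:// path below is
# an example value, not our real namespace.

loggedInUser="$3"

# Build a per-user share path from a base SMB path and a username
user_share_path() {
  echo "$1/$2"
}

if [ -n "$loggedInUser" ]; then
  echo "Logged-in user: $loggedInUser"
  echo "Would mount: $(user_share_path "smb://domain.fqdn/Mac/Home" "$loggedInUser")"
else
  echo "No username in \$3 - probably not run via a Jamf trigger" >&2
fi
```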

Searching for the one

The first script we tried was created by Jamf themselves


However it seemed to struggle with reading the DFS namespace as well as mounting a dynamic path for the currently logged-in user; for example we wanted to mount a path something like \\domain.fqdn\Mac\Home\$3 but the script either took the path completely literally if entered as a Parameter, or not at all if hardcoded in.

Other scripts we tried worked fine from Self Service (provided the user logs in first so $3 gets populated by Jamf) but not at the Login Trigger. It was beginning to get a bit disheartening, but then we found the miracle cure!

Reading through various comments in the Jamf Nation forums turned up this gem from Samuel Look (scroll to second post from the bottom)


Key things it does differently to the Jamf script include

  1. it uses AppleScript rather than Bash to do the actual mount
  2. it waits for CoreServicesUIAgent to start (i.e. the desktop is ready) before attempting to proceed
  3. it accepts a path that can be concatenated into a variable, so we can build our dynamic DFS path for the user folder

We have up to four instances of this script running, depending on the combination of user groups and shared drives we wish to mount, and it’s working nicely in our environment. Within Jamf we have two copies uploaded into the Scripts repository. One is the unaltered version, which uses the first user-defined Parameter (aka $4) to define the path to mount. The other is our adjusted version, which includes a couple of extra lines to calculate the path to the user’s DFS folder.

The snippet is below; the hashed section should give you an idea where it sits within the script:

##### START #####

# Share_Path=$4

Replace Share_Path with your share path, placing the $3 wherever your username-specific section lies.
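As a rough sketch of what that adjustment ends up looking like (the SMB path is an example, and this isn’t Samuel’s exact code):

```shell
# Adjusted section: build the user-specific DFS path from the Jamf-supplied
# username in $3 rather than taking it from Parameter $4.
# (Example path - substitute your own namespace and folder layout.)
loggedInUser="$3"
Share_Path="smb://domain.fqdn/Mac/Home/${loggedInUser}"
```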

Configure defaults

The standard behaviour in macOS 10.12 and above is to warn the user before connecting to unknown network shares with a dialog box saying “enter your name and password for the server”.

This warning (and requirement to manually click the Connect button) prevents mapped drives from loading up silently. To fix it we configured the AllowUnknownServers option using defaults write as outlined in the Apple KB below:
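For reference, the setting from that KB is applied with a one-line defaults command (run as root, or pushed out via a Jamf policy); the preference domain and key below are as per Apple’s article:

```shell
# Allow auto-mounting of servers the Mac hasn't connected to before,
# suppressing the "enter your name and password" confirmation dialog
sudo defaults write /Library/Preferences/com.apple.NetworkAuthorization AllowUnknownServers -bool YES
```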


Kerberos SSO

Another item to tick off the list was getting SSO to work correctly across all browsers. Safari worked out of the box; however Firefox and Chrome need a few tweaks:


You need to configure the following items using Profile Creator (or your tool of choice)

  • network.negotiate-auth.trusted-uris
  • network.automatic-ntlm-auth.trusted-uris

The syntax for the above is just the domain, with multiple entries separated by commas e.g.


You may also need to set this to true depending on how your server names are configured on the sites you want to SSO with

  • network.negotiate-auth.allow-non-fqdn


It’s a similar process for Chrome but slightly different settings

  • AuthNegotiateDelegateWhitelist
  • AuthServerWhitelist

Syntax for these is *domain.fqdn, same again with commas to separate multiple entries
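Pulling the two browsers together, hypothetical example values (swap domain.fqdn and intranet.domain.fqdn for your own) end up looking like this:

```
Firefox:
  network.negotiate-auth.trusted-uris      = domain.fqdn, intranet.domain.fqdn
  network.automatic-ntlm-auth.trusted-uris = domain.fqdn, intranet.domain.fqdn
  network.negotiate-auth.allow-non-fqdn    = true   (only if needed)

Chrome:
  AuthServerWhitelist            = *domain.fqdn
  AuthNegotiateDelegateWhitelist = *domain.fqdn
```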


Oddly our Centrify \ Office 365 SSO isn’t fully working at present in Chrome and presents an authentication screen whereas Firefox and Safari SSO correctly; however other Kerberos SSO sites work fine. Need to investigate further to see if it’s just a missing domain that needs adding for Chrome or if there’s a technical limitation somewhere.

I have seen a support page from another hosted SSO provider saying their service doesn’t work on Chrome for Mac, which does make me wonder if there’s a bug that needs fixing for this to work smoothly. Will test further when time permits.

Printing with PaperCut

Previously we used a bash script to map printers based on the name of the machine but wanted to try and stick to the native Jamf functionality going forward for ease of management. Initially that seemed simple enough; add the printer manually on a Mac in the office then use Jamf Admin to upload the printer to the server and create a Policy to map it.

However this didn’t seem to work and kept popping up authentication dialogs. Not good.

A swift bit of research brought up the fact that although you can set Kerberos Negotiate auth on the printer, it doesn’t get carried over in the upload to Jamf. Therefore each printer needs the authentication configured manually, e.g.

sudo lpadmin -p <printername> -o auth-info-required=negotiate

Fortunately there’s a handy script that can be used to set this for all printers mapped on a machine:
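This isn’t the exact script from the forums, but the gist is a loop along these lines:

```shell
#!/bin/bash
# Sketch of the idea: set Kerberos Negotiate authentication on every
# print queue currently mapped on the machine.

list_printers() {
  # lpstat -p prints lines like "printer LIB-MFD-01 is idle."
  lpstat -p 2>/dev/null | awk '/^printer / { print $2 }'
}

for p in $(list_printers); do
  echo "Setting negotiate auth on: $p"
  lpadmin -p "$p" -o auth-info-required=negotiate || echo "Failed on $p" >&2
done
```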


Although no-one on the forums seems to have implemented it this way, running the Script as an “After” action within the print policy seems to work fine; effectively the printers map, then get the correct authentication settings configured immediately afterwards. Checking PaperCut on a couple of test jobs shows the user authenticated correctly and jobs are running fine.

Initially we had some issues with older HP printers e.g. Color LaserJet 4600n and it seemed they may be too old to be supported in Mojave. That seems to have settled down a bit now after updating firmware to the latest (2014!) version so we’ll see how they fare and replace if need be.

Know your Limitations

Fortunately I’m not talking about the start of a self-help speech (!) Many of our environment customisations need to be applied to particular groups; most commonly staff and students. Initially we wondered how to achieve this when Scoping Policies as it seemed to require populating JSS Groups rather than using the ones in LDAP that we use for everything else.

The trick (learnt after reading the Jamf Nation forums) is to Scope using All Computers \ All Users as required then use the Limitations tab to restrict the policy to the LDAP group(s) you desire. That method works a treat 🙂

The finished product

With some assistance from our Marketing team we have a nice fresh background to go with the new hardware and software. The subtle grille effect is a nod to the days of the Power Mac G5 tower and the notice board area on the left contains useful information to remind students about backing up their work and general housekeeping.

We also rebrand Jamf Self Service to match the same “My Apps” terminology we use on our Windows 10 machines with ZENworks. Same concept, different platforms, but it keeps the user experience consistent.

Help and Advice

After reading Sachin’s blog post I joined the Mac Admins Slack, an excellent source of community advice with lots of members online at any time, all willing to help out. It was also good to get some experience with using Slack as a collaboration platform.

Jamf Nation has lots of reference material in their forums and add-ons


Lastly a shout-out for our Jamf Success Managers who have been proactive at tracking us down on LinkedIn and offering assistance to make sure we get up and running, a nice touch and good customer service.

Much unboxing later (thanks go out to our work experience students for their work here) we now have all our shiny new Mac suites up and running in time for students returning from half-term 😎

gshaw0 Origins – the PC where it all began…

If I looked at my career like a movie franchise the prequel would surely involve the subject of today’s post.

Seeing as it turned 21 this year I thought it apt to go back in time to feature the machine which got me started on the path to working in IT.

So step back in time and relive some (now) retro memories of days gone by when computers were beige and floppy disks were more than just a Save icon…

Looking back I can trace my interest in computing further into childhood, starting in primary school with memories of pressing black and orange keys on what I now know was the BBC Micro. During my journey through early-years education that progressed to the Acorn Risc PC, of which I have very fond memories. However by 1997, when setting out to buy a new computer, Windows was the main game in town.

Setting the scene

We start the journey in late 1997. Having convinced my parents that a computer was going to help at secondary school and would make the perfect Christmas present, we ventured down to our local Comet (remember them?) to take a look at what was available and see if we could get something under £1000. That was far from a given in those days! After initially asking the salesman whether they sold Acorn Risc PCs (spoiler: they obviously didn’t!) eventually a PC was found for £999 with a printer thrown in to sweeten the deal.

This is what I came home with, printer and speakers out of shot…

Siemens Nixdorf Xpert

The machine itself was a Siemens Nixdorf Xpert, the consumer brand of the well-known German tech company. Those Germanic roots formed quite a large part of my early IT experiences, more on that later…

  • Pentium 166MHz with MMX Technology
  • 32MB SDRAM
  • 1.5GB Fujitsu HDD
  • 3.5″ floppy drive
  • CD-ROM drive
  • 14″ CRT monitor
  • Microsoft PS/2 Mouse
  • Windows 95 OSR2
  • Microsoft Works 4.0
  • Microsoft Money

Early experiences

Starting off with Windows 95 OSR2 and Microsoft Works 4.0 was a pretty gentle introduction to computing, but after only a few weeks things turned a lot more technical. The PC failed to boot and started throwing Scandisk (remember that?) errors. Initially a factory restore was tried, which wasn’t quite as straightforward as one might expect given half the process was in German, including the config.sys and autoexec.bat files (!), but I got there in the end.

That first Windows install has now turned into thousands, I dread to wonder what the actual number is!

screenshots taken from Windows 98 SE, which is what the machine has spent most of its life running; much nicer than 95. Random trivia: you could obtain much of the UI functionality of Windows 98 by installing IE4 onto Windows 95, but it still wasn’t quite as pretty

The original hard drive soldiered on for a while longer but eventually succumbed to Bad Sectors and was replaced by a 6.4GB Samsung unit. I never trusted Fujitsu HDDs after that…

It also turned out that the Lexmark (shudder) printer that was bundled in didn’t work either so troubleshooting skills were also quickly picked up before that went back and was replaced with an Epson Stylus Color 300.

The Microsoft Mouse that came with the PC probably shaped the almost claw grip I have that now only seems to comfortably fit Microsoft mice – even for gaming I can’t find anything that fits my hand better (the “new” example in the photo is now 13 years old itself!)



The machine gained a fair few upgrades along the way as I used it throughout my school years, at the time fitted by the computer repair tech we found in the local paper. I remember being inspired by his proficiency working on the (then expensive) tech and thinking “that’s what I want to do as a job”. That’s where it all began, I think.

The first upgrade was memory as I soon exhausted the stock 32MB. Initially it was getting a boost all the way to 128MB, but a huge earthquake struck Taiwan and the price of RAM doubled overnight, so I ended up with one stick instead of two for a total of 96MB.

After the 6.4GB Samsung drive filled up another Samsung 10.2GB HDD was added alongside. I’ve still got the receipt for that one and dug it out of my spares pile recently to re-fit at some point 🙂

The OS was upgraded to Windows 98 SE and a USB card was also added to take advantage of that fancy new connector which didn’t need a reboot to detect a new device… magic!

The other significant upgrade of note was adding a CD-RW drive in place of the existing CD-ROM. That 4x LG unit cost a cool £99.99 when it came out but proved well worth the investment as USB sticks hadn’t yet gone mass-market and transferring multiple files via floppy disk was a rather painful experience to say the least.

One upgrade this PC never had was a modem, indeed it’s never been on the Internet – ever! By the time I got my first 56k connection I’d obtained \ rescued a well-worn 486 machine which was used for web duties, mainly because it was easier to run a phone line cable to (!)

Wi-Fi was also just a pipe dream back then so if it didn’t have a cable, it wasn’t getting connected. These days, although I have an Intel NIC that could go in, the phrase “just because you can, doesn’t mean you should” comes to mind…

Classic software

It’s not all about the hardware and this machine also introduced me to Microsoft Office 97, many tips and tricks still relevant 21 years later even in these Office 365 times. Of course documents were filled with ClipArt and WordArt as was almost obligatory for school homework projects 🙂


I remember this being much nicer than first-gen Windows Media Player, until I discovered WinAmp

Image editing was done in a cut-down Photoshop-lite product called Adobe PhotoDeluxe, which also taught me never to trust v1.0 software as it crashed regularly, usually right in the middle of a large edit. My CTRL+S reflex was well honed by that program, plus Windows 9x’s general tendency to do this…

that’s a genuine Windows 98 crash from yesterday, no VM necessary here folks…

What struck me at the time was the amount of system and inventory tools that Siemens Nixdorf bundled with a home machine; it felt like something you’d get for a corporate environment rather than your average home user. Similarly the machine came with a large suite of manuals, including a technical reference for the motherboard and BIOS. How often do you see that?

The software itself was called DeskView and DeskInfo. Recently we bought a Fujitsu Primergy server at work; Fujitsu’s server line is a descendant of the Siemens Nixdorf computer business, which Fujitsu eventually bought out. Sure enough the kit was built in Germany and comes with management tools called… ServerView. It felt like seeing an old friend again all those years later.

This machine also introduced me to a PC game that changed things forever; you could almost say it had Unforeseen Consequences…

I only got this disc because I was looking for backup and recovery tools at the time after the HDD incident…

I remember installing the demo of this new “Half-Life” game and being initially impressed by the design and atmosphere, even if I did have to run it at the very lowest setting for it to run anything like acceptably on the 2MB (!) internal Matrox graphics chip.

I nearly gave up on it when I couldn’t get past the army grunts armed with only a crowbar and a low-ammo handgun… then I beat that particular bit of the level and 21 years later I’m still just as hooked.

If, like me, you’re a massive Half-Life fan you need to see the next two links…

  • Unforeseen Consequences: A Half-Life Documentary – https://www.youtube.com/watch?v=BQLEW1c-69c
    A brilliantly researched and produced feature documentary on the background story of Half-Life
  • Half-Life: Echoes – https://www.moddb.com/mods/half-life-echoes
    This is simply stunning, 20+ years after the original Half-Life was released comes a mod that’s right up there with Valve’s releases. Download Half-Life on Steam, install this, set difficulty to Hard and put yourself right back where the story started!


Fast forward to around 2002 and it had got to the point where I needed to move on to a faster AMD Athlon CPU and graphics card, so with regret the Siemens Nixdorf went into retirement. Unfortunately around the same time the motherboard failed, and it seemed that might be it as the floppy drive rang out one last time before the boot screen went dark. I couldn’t face throwing out the machine so kept it tucked away, hoping I could fix it somehow.

It took until 2007, but one day I was browsing eBay and had a thought to randomly search for “Siemens Nixdorf Xpert” and by pure chance there it was: an identical machine listed for spares and repairs, even better starting at 99p. One catch: it was in Germany, of course it had to be! My knowledge of German only goes about as far as how to enable the mouse driver in DOS mode, so I put my faith in Google Translate and waited…

The motherboard, manuals and RAM cost the grand total of €3 plus shipping. What a lucky find.

One replacement motherboard fitted and the machine sprang back into life, 10 years after it first started up 🙂 I’d always wanted to put the fastest available CPU in, so it was a nice bonus when the replacement board came with a Pentium 233MHz installed. I also found a brand new keyboard in a surplus store on eBay so added that as part of the rebuild.

Now repaired it was kept nearby for any retro pangs and recently moved with me into its new home, where it was time to emerge from hibernation once again. This time the hard drive needed a bit of… persuasion (yes, a well-placed thump) to free the disk heads and one Scandisk later it was back on the familiar desktop from all those years ago.

One item left on the list is the original CRT monitor, which stopped powering on some time back. I suspect a power board issue but need to find someone who’s happy working around CRTs as that’s one area I’m happy to leave well alone due to the high voltages present.

All that is old is new again

It seems that Windows 98 and associated machines have now hit the “retro” phase, so I’m glad I kept it all for that nostalgia blast; I never thought the sound of a floppy drive seek at boot would be so comforting!

Now it sits on the other side of the desk to my day-to-day gaming machine – the past and present side by side, just a monitor input change away…

Learn something new every day – troubleshooting DMX lighting

One thing I really enjoy about working in education is the wide range of tech that we get to work with day-to-day. Yesterday was no exception, bringing something a bit more exotic than the usual networking queries.

Due to our work with setting up streaming media for our Creative Arts department I was asked by one of the technicians if I could try and help resolve an issue they were having with remote managed lighting in our TV studio. With trusty ThinkPad in hand I wandered up to take a look.

The lighting itself was controlled by a small USB LimeLIGHT DX2 controller. This then hooked into lighting dimmer units, which feed off to the individual lights themselves.

The software launched correctly but the unit itself was unresponsive, as were the lights when attempting to change brightness. Initially we tried power cycling the dimmers, reconnecting the interface cables, reinstalling drivers and so on, but with no joy. We even tried a different brand of digital DMX controller with no luck.

I then read that the DMX cables need to form a continuous daisy chain and one faulty fitting could cause the kind of issues we were experiencing. With that in mind we tried disconnecting sections of the chain until eventually realising it was some LED light fittings at the end that were causing the issue.

Upon closer inspection it was found that they’d been switched from DMX mode to manual; a quick reconfiguration by the media tech later we were back in business!


There’s something about AV cables that makes them rather satisfying to work with. Heavy-duty cables with big mechanical connectors definitely feel the part when connecting everything up!


Deploying an Azure AD Joined machine to existing hardware with MDT and Windows Autopilot

We’ve recently started a refresh of our staff laptops to provide a better remote working experience, as well as baselining a common standard for their configuration (Windows 10, Office 365, BitLocker etc.). At this point we were also faced with a decision on how best to deploy this:

  • domained with DirectAccess
  • domained with VPN
  • Azure AD Joined


Given that we’re heavy users of Office 365 I decided to give Azure AD Join a try and go for a cloud-native solution, rather than extending the reach of internal services outwards. One flaw in the plan is that I’m still trying to make a case for adding Intune to our EES agreement, so I’ve had to get a bit creative in terms of deployment rather than using MDM to do the heavy lifting.

Windows Autopilot

Whilst at Future Decoded last year I attended a demo of Windows Autopilot, which sounded like a very easy way to assign Windows 10 devices and get them up and running quickly.

Ref: https://docs.microsoft.com/en-us/windows/deployment/windows-autopilot/windows-10-autopilot

However, looking a bit more closely it wouldn’t do here: without Intune we can’t install any additional software, and on top of that the devices we’re using need upgrading to Windows 10, rather than being nice fresh kit with the latest OS already installed. That said it still has a part to play in this deployment, more on that later…

MDT time

So with that initial thought discounted we turn back to trusty MDT. Having already gained its Windows 10 deployment stripes over the summer, I wondered if there was a way to make a Task Sequence that would give a similar Autopilot experience but with a bit more flexibility around apps and re-using existing kit.

A fresh Task Sequence was created to handle the usual driver packs, generic Applications and machine naming. Now time for the fun part of integrating Office 365 ProPlus and Azure AD Join!

Deploying Office 365 ProPlus

Before we get to the Azure AD Join we need to deploy some basic software to the machine such as Chrome, VLC and, of course, Office apps. Internally we use Office 2016 ProPlus, but for these Azure AD Joined devices Office 365 ProPlus is a better bet in order to ensure smooth SSO from the Azure AD account.

Deployment of Office 365 ProPlus looks a lot simpler now than it was previously thanks to Microsoft creating a handy web app for generating the Configuration XML file you need for deployment.

First download the Office Deployment Tool

Then create the XML file using the Office Customization Tool, configuring your desired options

Place all the files in a folder on your MDT server, along with a new file called install.cmd using the code from Rens Hollanders’ instructions (you don’t need to use his config.xml though as the one from the Customization Tool does the same job and is a bit more granular in terms of installation options)

Finally create an MDT Application (with source files pointing to the folder above) that runs install.cmd
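The install.cmd itself only needs to call the Deployment Tool’s setup.exe in configure mode. A sketch is below, assuming setup.exe and your config.xml sit in the same folder as the script (this is not Rens’ exact file):

```
@echo off
REM Install Office 365 ProPlus using the Office Deployment Tool and the
REM config.xml generated by the Office Customization Tool
%~dp0setup.exe /configure %~dp0config.xml
```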

In your Task Sequence add this and any further Applications you wish to install in the usual place under State Restore.

If using a Volume Licensing version of Windows I also create an Application to install the MAK product key at this point.

Preparing for Azure AD Join

Now at this point you have a pretty bog-standard Task Sequence and might be wondering how we get back to the first-run wizard. The reason is that the wizard is our only chance to properly join Azure AD if we want to log into the machine using an Office 365 account; if you join later on you end up with a local account connected to Office 365, which we don’t want.

The process of getting back to that OOBE wizard is simpler than you might think and just requires one command at the end of your Task Sequence

cmd /c c:\windows\system32\sysprep\sysprep.exe /oobe /quiet /quit

This does assume a couple of things though:

  • your Deployment Share is configured with SkipDomainMembership=YES and JoinWorkgroup=WORKGROUP
    these are already set on my Deployment Share as I prefer to Join Domain manually via a TS step; that way I can control exactly when domain policies come down to the machine during deployment, or in this case not join a Domain at all
  • your FinishAction in MDT is set to SHUTDOWN
    you can either set this at Deployment Share level or override it (as I do) for a single TS by adding this step in early on…
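For context, the relevant Deployment Share rules are all standard CustomSettings.ini properties; an example fragment (section headers omitted) looks like:

```
; CustomSettings.ini - example fragment
SkipDomainMembership=YES
JoinWorkgroup=WORKGROUP
FinishAction=SHUTDOWN
```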

With this configured the machine will automatically run sysprep and return to OOBE state, ready for the user (or admin) to join the machine to Azure AD via the first-run wizard.

Provisioning Packages and customisation

Now what we have so far is good, but we can go a bit further and add some Intune-esque customisation of the deployed system via the MDM method of Provisioning Packages. This will allow you to prepare an identical base set of machines that you can quickly customise for any additional changes by plugging a USB stick into them at the OOBE screen. It’s also a good insight into how the policies, or should I say CSPs (Configuration Service Providers), work.

To create a Provisioning Package you need to open Windows ICD (Imaging and Configuration Designer), which in turn is part of the ADK (don’t you just love acronyms!)

Windows Configuration Designer provisioning settings (reference)

I initially started with this side of things via the Set up School PCs app but ended up opening up the package it built manually to take a look at exactly what it did. Not all the settings were required so I decided to build a new one from scratch. It gives a good idea what’s going on “under the hood” though 🙂

Ref: https://docs.microsoft.com/en-us/education/windows/use-set-up-school-pcs-app

Applying the package file from a USB when OOBE appears works fine but I couldn’t resist the automated approach outlined in the post below to do it via MDT. If you’re using one of the latest builds of Windows 10 note the comment at the bottom that you don’t appear to need to sign the packages for 1803 onwards.

Ref: http://chrisreinking.com/apply-a-provisioning-package-with-mdt/

However I found that in MDT, running the Add-ProvisioningPackage PowerShell cmdlet with a UNC path didn’t work, giving me “path not supported” errors.

There’s not much documentation online about using this particular cmdlet (most people seem to be using DISM instead) but I found that if you map a drive letter to the Application path it works fine. My code for this is below (also includes the nifty transcript logging wrapper from deploymentresearch.com so you get full visibility of the process in your MDT Logs folder)

# Determine where to do the logging
# https://deploymentresearch.com/Research/Post/318/Using-PowerShell-scripts-with-MDT-2013

$tsenv = New-Object -COMObject Microsoft.SMS.TSEnvironment
$logPath = $tsenv.Value("LogPath")
$logFile = "$logPath\$($myInvocation.MyCommand).log"
$DeployRoot = $tsenv.Value("DeployRoot")

# Start the logging
Start-Transcript $logFile
Write-Output "Logging to $logFile"

# Map a drive letter to the Application folder - Add-ProvisioningPackage
# doesn't accept a UNC path directly
net use P: "$DeployRoot\Applications\Provisioning Package - Your Name"

Write-Output "Adding TrustedProvisioners Registry Key"
Start-Process -FilePath "C:\windows\regedit.exe" -ArgumentList "/s P:\TrustedProvisioners.reg" -Wait

Write-Output "Adding Provisioning Package from folder: $DeployRoot\Applications\Provisioning Package - Your Name mapped to P:"
Add-ProvisioningPackage -Path "P:\Provisioning Package - Your Name.ppkg" -ForceInstall

# Tidy up the drive mapping and stop logging
net use P: /delete
Stop-Transcript
Note: you need to copy the entire contents of the folder where the signed exported Provisioning Package is created for the silent install to work correctly. Thanks to Dhanraj B for pointing this out in the Technet forums otherwise I may well have given up on it…


If you followed the Chris Reinking instructions to the letter you should see a successful import in your logs, which looks something like this…

IsInstalled : False
PackageID : d69c654b-3546-4b77-abcd-93f09285c123
PackageName : Provisioning Package - Your Name
PackagePath : P:\Provisioning Package - Your Name.ppkg
Description :
Rank : 1
Altitude : 5001
Version : 1.17
OwnerType : ITAdmin
Notes :
LastInstallTime : 22/11/2018 15:51:52
Result : 0

__Personalization_DeployDesktopImage.provxml
Message:Provisioning succeeded
NumberOfFailures:0 (0x0)

Message:Provisioning succeeded
NumberOfFailures:0 (0x0)

Message:Provisioning succeeded
NumberOfFailures:0 (0x0)

Message:Provisioning succeeded
NumberOfFailures:0 (0x0)

Start Menu layout

One part of ICD that didn’t seem to work at all for me was Start Menu layout deployment. In Active Directory land it’s a simple GPO, but despite following the MS documentation to the letter the layout never applied, even though the rest of the Provisioning Package worked fine.

Instead of using the ICD method I took an easy way out and created another Application in MDT, which copies the desired LayoutModification.xml to the Default User AppData folder:

cscript.exe CopyFiles.vbs C:\Users\Default\AppData\Local\Microsoft\Windows\Shell

Get CopyFiles.vbs from here https://mdtguy.wordpress.com/2014/07/30/how-to-copy-folders-in-mdt-like-a-boss-the-easy-way/
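If you’d rather skip the VBScript dependency, the same copy can be done natively in PowerShell. An untested sketch; it assumes LayoutModification.xml sits alongside the script in the MDT Application folder, so adjust the source path to suit:

```powershell
# Copy the Start Menu layout into the Default User profile so that
# every new user account picks it up on first logon
$source = "$PSScriptRoot\LayoutModification.xml"
$dest   = "C:\Users\Default\AppData\Local\Microsoft\Windows\Shell"

# Create the target folder if it doesn't already exist, then copy
New-Item -Path $dest -ItemType Directory -Force | Out-Null
Copy-Item -Path $source -Destination $dest -Force
```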

The final piece of the puzzle – automatic Azure AD Join

At this point we’re very close to having a hands-off deployment but we need a method to join the machine to Azure AD itself. Of course you can do this manually if you wish; the machine at this point can be handed to a user to complete the OOBE wizard and they’ll be able to log in with their Office 365 credentials.

Be mindful that the user who performs the Azure AD Join becomes local admin on that device. If you don’t want this you’ll need to use the method below to get a bit more control.
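Once a machine is joined you can confirm the join state, and see which accounts ended up in the local Administrators group, with a couple of built-in tools. For example:

```powershell
# Show the device's Azure AD join status; look for "AzureAdJoined : YES"
# in the Device State section of the output
dsregcmd /status

# List local admins, including any user promoted by performing the join
Get-LocalGroupMember -Group "Administrators"
```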

Initially I thought the Azure AD Join would be simple as there’s an option sitting right there in the ICD wizard

However the URL gives away what seems to be the crux of the issue: you need an Intune license to get the Bulk Token 😦 When I try it, the wizard fails miserably with the error “Bulk token retrieval failed”.

Again documentation on this isn’t exactly forthcoming but there’s one particular Technet post by a Microsoft staffer that suggests it is limited by licensing.

“By the way, the error message “Bulk token retrieval failed” might be caused if there are no more licenses available for Azure AD Premium and Microsoft Intune”

Interestingly the Set up School PCs app is able to join devices to Azure AD in bulk, which suggests Microsoft can enable this functionality for non-Intune users when they feel like it, but seemingly not for ICD use.

Windows Autopilot returns

Remember I said I’d return to Autopilot… well, “later” in the post has now arrived and it’s time to make use of it.

Without Intune we can still configure Autopilot using the Microsoft Store for Business (or Microsoft Store for Education if you’re an edu user). You’ll need to set it up if you haven’t already…

and then access it via one of the links below


Now follow the steps below:

At this point, once your freshly imaged device hits the OOBE screen it’ll connect to Autopilot, apply the profile settings and skip every screen apart from the bare minimum of user input required for keyboard layout and login info. Once they log in, Office 365 will already be installed and visible on the (customised) Start Menu, along with any other apps you provisioned via the Task Sequence, and the device will be branded as per your organisation’s design (provided you’ve configured this in the Azure Portal).

Note: during my testing Autopilot didn’t seem to work so well with a Hyper-V VM. The gather script obtained 3 different sets of hardware hashes for the same VM on 3 separate image attempts, whereas on physical laptops the data was gathered consistently. One to keep an eye on, but it was a case of third time lucky, which allowed for some nice screenshots of the end product…
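For reference, the hardware hash gathering is commonly done with the Get-WindowsAutoPilotInfo script from the PowerShell Gallery (I’m assuming that’s the gather script in question; adjust the output path to taste):

```powershell
# One-off: install the published gather script from the PowerShell Gallery
Install-Script -Name Get-WindowsAutoPilotInfo -Force

# Export this device's serial number and hardware hash to a CSV,
# ready for upload to Microsoft Store for Business / Education
Get-WindowsAutoPilotInfo -OutputFile C:\Temp\AutopilotHash.csv
```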

Multiple user access

An interesting observation during this process was the mysterious appearance of the “Other user” button, which I’d been chasing on Azure AD Joined machines for a while before this but without much joy. As you can imagine I was quite pleased when it popped up after running the first couple of test Task Sequences.

I’m not sure if it’s down to a recent Windows Update, enabling some of the “Shared PC” settings in ICD or (more likely) the addition of Azure AD Premium P1 licenses on our Office 365 tenant, but it makes an Azure AD Joined machine much more usable for our use case, where machines need to be loaned out to multiple members of staff.

If anyone can clear up this quandary it’d be great to hear from you!

Note the use of the branded login screen to add some additional instructions for the user to help them log in for the first time 🙂


And here’s one we made earlier…

Not bad eh 😉


If Autopilot doesn’t work, check network connectivity first, particularly if you’re using a proxy server internally.



image credit: blickpixel – https://pixabay.com/en/gift-made-surprise-loop-christmas-548290/