Joomla adventures – rebuilding a community (part 1)

Sometimes work skills cross over into personal life and being the resident IT geek can come in rather handy. Cars are one of my passions outside IT and I’ve been a member of an owners group called Metropower for well over 10 years now.

When we came close to losing our old website due to issues with the previous web host I decided to take on the challenge of building a new one – the first major revamp since the site started in 2003 (!)

Outside of my usual network infrastructure work I enjoy indulging a more creative side as well, so it seemed a good opportunity to combine two skill sets. In the end I’ve surprised myself with what’s come out of the project and picked up some very useful tips and tricks along the way, so that can only mean one thing… time for a new post series 🙂

Getting up and running… crowdfunding

Metropower has been a free-to-join community from the very beginning and as such we didn’t have any funds to draw on for the new website. With the vision I had in mind I knew we’d need some money to get the site up and running so we turned to crowdfunding… the £300 goal was reached easily and gave us the investment we needed to purchase hosting and some very useful extras (more on that soon!)

Ref: https://www.justgiving.com/crowdfunding/metropower

It was great to see how valuable our members find the site and the community that’s been built from it, and our aim with the new site was to build on that and go even bigger and better!

CMS platform… decisions decisions

When our site first started back in the early 2000s web technologies were very different to how they are today. Our users were pretty much all accessing from desktop PCs and the portal software itself was pretty basic… to all intents and purposes a series of static web pages. Most interaction was made via forum threads and at the time users would visit multiple times each day to check for new posts.

Via the magic of the Internet Wayback Machine, some screenshots of the old site are below:


Moving forward to the present day a constant message was that the site needed to be mobile-friendly and from my personal perspective needed to present content more effectively. As such getting the platform right early on was going to be important. Social media has also taken over much of the role of the forum, although it turns out this form of communication still has a place albeit in a less starring role.

A few years back I tried out WordPress as a potential replacement but although I love the product as a blog platform it didn’t seem quite as convincing managing more structured content.

On the other hand I’d seen a fair bit of Joomla via work and as a CMS it seemed to fit the bill. Add in a wide range of extensions and the decision was made… Joomla it was.

With that in mind we chose a UK-based web host so our data was local to us. Quality of support was high on the list and after shopping around we were recommended https://www.webhosting.uk.com/

Their live chat facility looked great and prices were very competitive with what we’d been offered elsewhere.

Theme

Although I wanted to retain the core branding from the old site I felt it needed to be merged with a more modern style such as Google’s Material Design. As such a commercial theme was high up the shopping list to give us a quality base to build from. After searching across many theme sites the TechNews template from GavickPro caught my eye.

Ref: https://demo.gavick.com/joomla3/technews/

It ticked all the boxes for being clean, responsive and also included a few neat additional extensions such as NewsShowPro, again fitting the bill for presenting fresh content in a simple way.

The reviews section with its animated score sections was perfect for our “How-To Guides”, giving a modern twist to the well-known Haynes manual “spanner ratings” – from the same manuals that gave us that famous phrase “refitting is the reverse of removal” (!)

Other standard features you’d expect from a modern website such as social media sharing, print friendly view etc. are provided by a rather neat radial menu in the corner of the article’s cover image.

Branding & customisation

With the TechNews base in place it was time to customise it to merge some core branding into the new site.

Using my favourite colour swatch tool ColorPix, a few core colours were extracted from the old website and I set about customising the TechNews base to suit the Metropower brand. Chrome’s built-in Developer Tools come in very handy for this; the element inspector and Computed CSS sections in particular are worth their weight in gold!


Taking feedback from our admin team on board, the colours and styling continued to be tweaked for some time to get to a place where it looked “right”. It’s one of those things that’s really hard to quantify upfront, especially when working without a strict design brief, but there’s that moment when you realise the project is coming together and it’s a great feeling 🙂

Images were an area we had to put a fair bit of effort into as our old photo stocks were far too low-res (2006 vintage) so all needed to be retaken afresh. Our members proved to be very helpful on this front and supplied images via social media and our new dedicated website Gmail address (one of many platforms this project ended up utilising, more on that later).

In advance of the site launch we sent out a teaser image on social media to give our members an idea of what was to come and the feedback was very positive.

Further customisations included:

  • additional navigation using Breadcrumbs Advanced module
  • textured header background to add contrast and depth to the theme
  • social media icons added to main nav menu

The screenshots below show stages of the build process:


standard TechNews theme with minimal customisation


adding sidebar modules, colour scheme CSS modifications and social media menu icons


breadcrumb nav module installed, textured header background and feature banners added

Documentation is king

As the design progressed I started to find I needed to dig deeper into the theme to achieve what I wanted. For example in our Store category I wanted a 2-column layout that wasn’t a standard feature of the theme, so I had to write a new PHP layout file and CSS styles to suit.

Given that I was doing this in spare time I knew I’d have to revisit some elements later on so decided quite early to write documentation for the customisations as I went along, in the same way I would if I was doing the project at work. Looking back 6 months later when adding new content I’m very grateful to my past self for making that decision!


A useful tip that I learnt from our web designer at work was to place a custom Administrator module into the Joomla back-end as the first thing that other admins would see when they logged in. We use this to remind people of the colour scheme hex codes, layout recommendations (image sizes etc.) and common locations for admin tasks. Again it’s something I’d recommend and have done on sites I’ve made since.

To do this go to Extensions > Modules, change the dropdown on the left from Site to Administrator then add a new Custom module in the cpanel position.

The next post will focus on the Extensions used to bring additional functionality to the site, as well as the various tools and platforms for optimisation, monitoring and management that have proved incredibly useful so far!

Build your own Thin-ish client with Windows 10 LTSB

After some positive user feedback from the launch of our new Server 2016-powered RDS setup I started wondering if it could have a wider use than just the remote access concept we initially wanted to address. One thought in mind was making use of old \ low-spec devices that would be a bit too clunky for running a modern OS but where the physical hardware itself was in good condition.

Chrome OS-esque distributions such as CloudReady sound nice but come at a cost, so I set up a little side-project to see if there’s anything that could be done with what we have on our licensing agreement or anything in the open-source space.

Looking around there do seem to be various thin-client “converter” products but again they all seem to be commercial, e.g. https://www.igel.com/desktop-converter-udc/

The only other option I found was ThinStation, which may also be worth a look when I have more time, but it seems a bit more involved to get set up and I wanted to stick to the Microsoft RDP client for now for maximum compatibility.

Windows options

Going back some time I remember Microsoft released cut-down versions of Windows for RDS-type scenarios; in the XP days it was called Windows Fundamentals for Legacy PCs, which morphed into Windows 7 Thin PC in its next incarnation. Effectively all I want the OS to do is boot up, log in quickly then pass the credentials to a pre-configured RDP file using the standard mstsc.exe application.

However building any solutions on a Windows 7 base going forward seems to be a false economy so I decided to have a look around to see what was available on the Windows 10 codebase – the results were interesting…

IoT is the name of the day

Going forward it seems Microsoft have changed the branding for this kind of cut-down device to Windows IoT. In fact there’s a free edition which sounds ideal, but it only runs on certain devices and isn’t really geared for UI use:

Ref: https://www.theregister.co.uk/2015/05/21/first_look_windows_10_iot_core_on_raspberry_pi_2/
Ref: http://blogs.perficient.com/microsoft/2016/01/windows-10-iot-editions-explained/

Reading a bit further it appears Microsoft license an edition called Windows 10 IoT Enterprise for new thin client devices. Now it gets interesting… it seems that the OS itself is Windows 10 Enterprise LTSB but with some special OEM licensing. It just so happens that edu customers get Enterprise LTSB on EES licensing so it’s time to take a closer look!

What this does mean is that Windows 10 Enterprise LTSB gets features from the old Windows Embedded products such as Unified Write Filter, perfect for a locked-down device that shouldn’t need to experience configuration changes to the base OS.

Ref: https://msdn.microsoft.com/en-us/windows/hardware/commercialize/customize/enterprise/unified-write-filter

All these features are available in Enterprise LTSB simply by going into the Add \ Remove Windows Features window, looking for the Device Lockdown section and adding whichever ones meet your needs (more on this later).
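If you’d rather script it, the same features can be listed and enabled with PowerShell. A quick sketch below – note the Client-* feature names can vary between builds, so check the output on your own image first:

# list the Device Lockdown optional features available in the running image
Get-WindowsOptionalFeature -Online | Where-Object { $_.FeatureName -like 'Client-*' } | Select-Object FeatureName, State

# then enable whichever you need, e.g. Shell Launcher
Enable-WindowsOptionalFeature -Online -FeatureName Client-EmbeddedShellLauncher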

Image & GPOs

After downloading the latest ISO the LTSB 2016 WIM was imported into MDT. I made a quick task sequence to get it up and running and deployed the OS to a Hyper-V VM.

Boot and logon speeds are very quick given the lack of any Modern Apps which usually need to be provisioned at each new login. The performance gain explains why quite a few people within education have used LTSB for their desktop builds against MS’ wishes; however they’ll miss out on new features such as the much-needed OneDrive Files on Demand that will only be provided to the Current Branch release.

In theory setting up a Mandatory Profile could speed up login even further but I haven’t got round to trying that yet.

RDS domain SSO

Upon logging in with domain credentials the next aim is to seamlessly drop users into the RDS farm without any further prompts. After doing a bit of research this can be achieved by setting a couple of GPOs:

  • allow credential delegation
  • trust SHA1 signature of signed RDP file

The need to allow delegation of credentials is fairly commonly mentioned but a lot of the articles are old and don’t mention where this needs to be set in a 2016 farm. In fact you only need to allow the delegation on the FQDN of the Connection Broker based on the results of my testing so far.

Computer Configuration > Administrative Templates > System > Credentials Delegation
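If you want to test on a single machine before committing to Group Policy, the delegation policy boils down to a few registry values. A rough sketch below, with broker.subdomain.domain.ac.uk standing in for your own Connection Broker FQDN:

# equivalent of enabling "Allow delegating default credentials"
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\CredentialsDelegation'
New-Item -Path "$key\AllowDefaultCredentials" -Force | Out-Null
New-ItemProperty -Path $key -Name AllowDefaultCredentials -Value 1 -PropertyType DWord -Force | Out-Null
# list entries are numbered string values naming the target SPN
New-ItemProperty -Path "$key\AllowDefaultCredentials" -Name '1' -Value 'TERMSRV/broker.subdomain.domain.ac.uk' -PropertyType String -Force | Out-Null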

To avoid any unwanted prompts about trusting the signature of a signed RDP file, populate the GPO below with the signature copied \ pasted from the RDP file provided by RDWeb for whatever RDS Collection you want to connect to.

User Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Connection Client > Specify SHA1 thumbprints of certificates representing trusted .rdp Publishers

Custom shell

Now with the credentials side sorted out the final piece of the puzzle was to cleanly launch the session and (here’s the tricky bit) make a seamless logout once the RDS connection is closed. There are a few ways to achieve the first part:

  • use the IoT Embedded Shell Launcher feature \ enable Kiosk Mode via System Image Manager
  • use the Custom User Interface User GPO

Ref: https://social.technet.microsoft.com/Forums/en-US/b4552957-45c2-4cc4-a13d-6397f06ee62e/windows-10-kiosk-build-embedded-shell-launcher-vs-custom-user-interface?forum=win10itprosetup

Ref: https://docs.microsoft.com/en-us/windows/configuration/set-up-a-kiosk-for-windows-10-for-desktop-editions

One thing to bear in mind with Shell Launcher is what happens when the shell (i.e. mstsc.exe) closes; you only have the choice of:

  • restart the shell
  • restart the device
  • shut down the device
  • do nothing

For the sake of speed logging off would be better, so I decided to go with the Custom User Interface GPO – seeing as the Windows 10 device would be domain-joined anyway it also seemed a quicker, more efficient way to configure multiple clients too.

Seeing as the Custom User Interface is a User GPO it goes without saying that Loopback Policy Processing needs to be enabled for the OU where the client resides. That also comes in handy for a few additional personalisation settings later on too.

The key policy is User Configuration > Administrative Templates > System > Custom User Interface, and you can add more lock-down policies as you see fit.

Auto log-out on disconnect

Seeing as I wanted to automate the process as much as possible and all the devices would be domain managed anyway, the GPO method seemed the quickest way to achieve what I want – it avoids needing to do an Add \ Remove Features step on each endpoint device, and unlike Shell Launcher it lets me log the “client” off when the RDS session is done rather than just relaunching the shell or restarting the machine.

In the GPO settings I initially tried the standard string you’d expect to launch a Remote Desktop session, i.e. mstsc.exe C:\Default.rdp, but noticed some strange behaviour:

  • Windows logs in
  • RDP file launches
  • connection starts
  • before the green connection bar completes (i.e. handshake still in progress) the host session logs out

This seemed like behaviour I’ve seen with some other programs in the past, where they appear to terminate mid-way through actions actually occurring. To check, I tried manually with the “start” command with the same result. It appears mstsc.exe doesn’t play nicely so we need another way…

Plan B was to monitor the mstsc.exe process then log out from the client once the RDS session disconnected and the process was therefore no longer running. After looking around and trying a few scripts out I settled on one I found here:

Ref: https://www.experts-exchange.com/questions/24218998/Check-if-a-process-is-running-in-vbs.html

Just add the logoff command as the action to run when the watched process terminates and we have the behaviour we’re after. It takes a second or two to react to the process closing but there doesn’t seem to be a way to speed that up as far as I can see.
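If VBScript isn’t your thing, the same idea translates to a few lines of PowerShell. A minimal sketch below assuming a pre-configured C:\Default.rdp; the script (saved to a hypothetical C:\Scripts\rds-shell.ps1) is what the Custom User Interface GPO would then point at, wrapped in something like powershell.exe -ExecutionPolicy Bypass -WindowStyle Hidden -File C:\Scripts\rds-shell.ps1:

# launch the pre-configured RDP session
Start-Process -FilePath "$env:windir\System32\mstsc.exe" -ArgumentList 'C:\Default.rdp'
# allow time for the connection handshake / process hand-off seen earlier
Start-Sleep -Seconds 10
# poll until no mstsc.exe instances remain, then log the thin client off
while (Get-Process -Name mstsc -ErrorAction SilentlyContinue) { Start-Sleep -Seconds 2 }
logoff.exe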

Final steps

Now just some finishing touches required to give the solution a bit of polish 🙂

  • set logon and desktop wallpaper
  • disable Task Manager and related lockdown settings

When the machine boots users see this login screen, easily customised via GPO…

After login connection to RDS is pretty much immediate and no further credential \ security prompts appear…

UWF

The final piece of the puzzle is tidying up after the client has been in use for a while. That’s where the Unified Write Filter from earlier comes in handy:

Enable-WindowsOptionalFeature -Online -FeatureName Client-UnifiedWriteFilter

Then enable the filter:

uwfmgr.exe filter enable
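A couple of other uwfmgr commands may come in handy – checking the current state, and excluding a folder you do want to persist between reboots (C:\Logs here is just an example path). Note the filter itself only takes effect after a restart:

uwfmgr.exe get-config
uwfmgr.exe file add-exclusion C:\Logs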

Ref: https://docs.microsoft.com/en-us/windows-hardware/customize/enterprise/unified-write-filter
Ref: https://developer.microsoft.com/en-us/windows/iot/docs/uwf
Ref: https://deploymentresearch.com/Research/Post/632/Using-the-Unified-Write-Filter-UWF-feature-in-Windows-10

And there you have it: a locked-down RDS client that will run on older hardware (Windows 10 works on pretty much anything from the last 10 years), can be managed through your standard AD infrastructure and uses only what you already have access to via your Campus agreement… enjoy!

Quick tips: custom port speed sensor for PRTG

We use PRTG Network Monitor at the College to monitor devices right across the network, from switches and firewalls right down to host \ VM \ application level for servers.

Recently I started playing a bit more with the network maps to try and build some “living” documentation that would give us live traffic stats whilst also satisfying the requirement of having some up-to-date network diagrams.

Port speed

Adding the devices and links was simple enough but we also wanted to display the uplink speed, partly because it’s handy to visualise what goes where and also, from a practical point of view, to check if a link degrades.

However I couldn’t find a straightforward option to do this. It seemed possible as PRTG does actually display the speed when selecting ports to monitor but then doesn’t make the data available as a channel once added to the Devices list 🙁

A bit of research confirmed I wasn’t going mad:

Ref: https://kb.paessler.com/en/topic/14843-how-to-see-port-speed-on-switches

Having the speed in the port name is nice but that’s a bit too wordy to spot from a distance, and being a text label it isn’t really something that could be “monitored” if the value changes. However the comment about “ifSpeed” did give me an idea…

Custom library time

I’d already used the PRTG MIB Importer quite a few times bringing in libraries for various devices on the network so wondered if there was a value I could use to make a custom sensor. There was nothing in the device-specific files for our switches so I figured that the value must be more generic \ standard than that.

I soon found this very handy website that walks through the SNMP OIDs; lo and behold there’s the ifSpeed values!

Ref: http://cric.grenoble.cnrs.fr/Administrateurs/Outils/MIBS/?oid=1.3.6.1.2.1.31.1.1

Well in fact it’s a slightly different value called ifHighSpeed; the reason why it’s required for 10Gb interfaces is referenced below:

“That is because when using ifSpeed, the output value will exceed the max. value (4,294,967,295) that the object can support.”

Ref: https://supportforums.cisco.com/discussion/11124321/what-should-be-ifspeed-and-ifhighspeed-2-gig-interfaceport-channel
Ref: https://kb.juniper.net/InfoCenter/index?page=content&id=KB9119
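For reference, the two standard IF-MIB values the sensor ends up using can be checked with any SNMP tool first, e.g. Net-SNMP’s snmpget – substitute your own switch IP, community string and interface index (shown as .1 here for illustration):

snmpget -v2c -c public 192.168.1.10 1.3.6.1.2.1.31.1.1.1.1.1     # ifName - the port's name
snmpget -v2c -c public 192.168.1.10 1.3.6.1.2.1.31.1.1.1.15.1    # ifHighSpeed - the port speed in Mbps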

Trying out the values using the free Paessler SNMP Tester along with the MIB Importer eventually brought me to the setup below, which has a nice little bonus of also showing the port’s name when selecting it in PRTG for easier identification. Note the other OIDs to obtain port name etc.

Once done it’s then a simple matter of saving the newly created file for use in PRTG and adding an SNMP Library sensor to a test device.

Once added the sensor displays like this (port name can be changed as required)

Map display

The sensor was working well at this point but there was one final tweak required before getting the display I wanted. PRTG’s default template for a value-only map item adds the device’s name to the label, which in our case made for quite an unsightly string of text that got in the way of other map elements.

The solution? Go to your PRTG folder, which looks something like:

*install path*\PRTG Network Monitor\webroot\mapobjects

  • copy the item template in question, in this case “An icon B2.html”, and name it as required
  • edit the copied file to remove the “ParentDevice” string highlighted below
  • also note you need to change the display name at the top of the file to something unique

Compare the before and after shots below:


original PRTG file


updated file with name edited and ParentDevice string removed

Once done, open up the PRTG Administration Tool and restart the Core Server Service.

End result

Now when you look in your map icons you’ll see the additional option appear as you named it above.
Drag that in and you’ll get the output in the format below, nice and clean 🙂

Server 2016 RDS via Azure AD Application Proxy end-to-end guide

One of our priorities for this year was to improve our remote access offering to staff to enable more flexible working whilst outside of college. Office 365 helps greatly and has already improved functionality in many ways but there are still some legacy applications and classic file shares that need to be provided remotely too. If at all possible we prefer the files not to leave the network, so some form of virtual desktop looked the way to go.

After discounting VMware and Citrix offerings on cost grounds the improvements to Microsoft’s RDS offering in Server 2016 seemed to come at a perfect time.

Even more so now we’ve implemented Azure AD Application Proxy (more on that shortly!) We’ve also recently decommissioned some services that freed up a bit of physical hardware resource to “play” with so away we went!

Server installation

The physical hardware for now is running on some reclaimed Dell PowerEdge R610 servers; 64GB RAM, dual CPU and 6 x 15k disks in RAID10. That should be plenty for the RDS roles, which will eventually be split across two hosts – for now we’re running on just the one, but even that’s more than enough to get up and running with.

We installed Server 2016 Core running the Hyper-V role, which was simple enough. The Core role looks to be a tad more polished in Server 2016; although not new, the sconfig tool got the main settings entered with fairly minimal fuss.

yes it will go back in the rack once we’re done with it!

Getting the OS to update correctly wasn’t so simple due to Microsoft doing something silly to the update mechanism in the initial release of Windows 10 1607 and its equivalent Server 2016 release. Update status was stuck on “Downloading”, showing no signs of progressing. In the end manually installing the latest Cumulative Update release from the Microsoft Update Catalog did the trick e.g.

wusa.exe windows10.0-kb3213986-x64_a1f5adacc28b56d7728c92e318d6596d9072aec4.msu /quiet /norestart

Server roles

With Hyper-V up and running the next stage was to install our guests. We went with 3 VMs set up as follows:

  • Connection Broker \ RD Licensing
  • RD Web Access \ RD Gateway
  • RD Session Host

The original plan was to try and embrace the Server Core concept and only install the GUI where absolutely necessary. With that in mind we made the first two servers with Core and only the Session Host with a GUI. More on that soon… (!)

RDS deployment wizard Role Services

Running the deployment through Server Manager on my desktop was easy going; Microsoft have done good work with this and the deployment doesn’t seem too far removed from the 2012 R2 guides I’ve been looking at online. We added each server to the roles as per above, got to the final screen, hit the magic Deploy button and then…

"Unable to install RD Web Access role service on server"

Role service... Failed
Deployment... Cancelled

Well that didn’t go to plan! We had a look online trying to find reasons for the failures and went through some initial troubleshooting to make sure all recent updates were installed and each server’s patches matched exactly. We also enabled PowerShell remoting…

Enable-PSRemoting -force

…still no joy until we found this little nugget of information…

Ref: https://social.technet.microsoft.com/Forums/Sharepoint/en-US/b5c2bae3-0e3b-4d22-b64d-a51d27f0b0e4/deploying-rds-2012-r2-unable-to-install-rd-web-access-role-service-on-server?forum=winserverTS

So it appears the RD Gateway \ RD Web Access role isn’t supported on Server Core. Of course we wouldn’t want the web-facing part of the deployment running on a server with a reduced attack surface, would we Microsoft… not impressed!

Ref: https://technet.microsoft.com/en-us/library/jj574158(v=ws.11).aspx

To confirm the hypothesis running Get-WindowsFeature on Server 2016 Core gives this…

Server Core

and on Server 2016 with GUI gives this…

Server with GUI

Published names & certificate fun and games

After begrudgingly re-installing one of the VMs with a GUI (it seemed quicker than trying to convert the Core install) we managed to get past the final Deploy page with 3 success bars 🙂

The first key setting we were asked for was the external FQDN for the RD Gateway, which was added to our ISP-hosted DNS records. We use a wildcard certificate to cover our external-facing SSL needs, nothing out of the ordinary there, and went on to apply it to each of the four roles specified by the RDS Deployment wizard. A Session Collection was created for a test group and pointed at the new Session Host. All looking promising.

The RD Gateway FQDN naming in itself wasn’t a problem but led us to an interesting part of the setup relating to SSL certificates and domains. Once we had the RDS services accessible from outside the network (see below) I fired up my 4G tethering to give it a test.

The connection worked but threw up a certificate warning and it was obvious to see why. Our wildcard certificate is for *.domain.ac.uk but the Connection Broker’s published FQDN is servername.subdomain.domain.ac.uk and therefore isn’t covered.

Fortunately a PowerShell script called Set-RDPublishedName exists to change this published name and it works a treat! Grab it from https://gallery.technet.microsoft.com/Change-published-FQDN-for-2a029b80
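Usage is a one-liner run on the Connection Broker – the parameter name below is as per the gallery script at the time of writing, so double-check against the version you download:

.\Set-RDPublishedName.ps1 -ClientAccessName "remote.domain.ac.uk"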

You’ll also need to ensure that you can access the new published name internally; depending on what form your internal domain takes vs. your external one, you may need to do a bit of DNS trickery with zones to get the records you need. More on that can be found at:

Ref: https://msfreaks.wordpress.com/2013/12/09/windows-2012-r2-remote-desktop-services-part-1
Ref: https://msfreaks.wordpress.com/2013/12/23/windows-2012-r2-remote-desktop-services-part-2

Set-RDPublishedName script in action
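On the internal DNS point above: if your AD domain name differs from the external one, a pinpoint zone matching the published FQDN exactly is one approach. A sketch using the DnsServer PowerShell module – zone name and IP are placeholders for your own published name and Connection Broker address:

# create a zone named exactly as the published FQDN, then an A record at its apex
Add-DnsServerPrimaryZone -Name 'remote.domain.ac.uk' -ReplicationScope 'Forest'
Add-DnsServerResourceRecordA -ZoneName 'remote.domain.ac.uk' -Name '@' -IPv4Address '10.10.10.10'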

External access via Azure AD Application Proxy

We published the RD Gateway and RD Web Access via our new shiny Azure AD Application Proxy for a few reasons…

  • simplicity, no firewall rules or DMZ required
  • security, leverages Azure to provide the secure tunnel
  • SSO, use Kerberos Delegation to sign into RD Web Access as part of the user’s Office 365 login

I followed the excellent guides from Arjan Vroege’s blog for this, in particular the section regarding how to edit the RD Web Access webpage files… nice work Arjan!

Publish your RDS Environment with Azure and AD Proxy – Part 1 – http://www.vroege.biz/?p=2462
Publish your RDS Environment with Azure and AD Proxy – Part 2 – http://www.vroege.biz/?p=2563
Publish your RDS Environment with Azure and AD Proxy – Part 3 – http://www.vroege.biz/?p=2647

As per my previous post on Azure AD Application Proxy & Kerberos delegation use the command below to add the SPN record (replace the FQDN and server name as appropriate)

setspn -s HTTP/servername.subdomain.domain.ac.uk servername

When done the end result is a seamless login to RD Web Access via the Azure AD login page. In our case the link will eventually end up as a button on our Office 365-based Staff Intranet, therefore not requiring any further logins to get to the RDWeb app selection screen.

I particularly wanted to avoid the RDWeb login screen, which I’m amazed still requires DIY hacks in 2017 to avoid the requirement to log in with the DOMAIN\username format. I thought Microsoft would’ve improved that in the Server 2016 release but evidently not.

One more gotcha

So having done all the hard work above preparing the login all that was left was to click the Remote Desktop icon and enjoy, right? Wrong.

After running the Set-RDPublishedName script the certificate warning went away and I could see the change to the new wildcard-friendly name, however the connection attempt now failed with the error “Remote Desktop can’t connect to the remote computer *connectionbrokername* for one of these reasons”

connection failure after changing Published Name

Neither explanation made any sense as the connection was working perfectly fine until changing the Published Name. Indeed, changing it back to the original FQDN of the Connection Broker restored service so it had to be something to do with that. After being stumped initially I came back after food (always helps!) and after a bit more research found this very helpful post:

Ref: https://social.technet.microsoft.com/Forums/windowsserver/en-US/4fa952bc-6842-437f-8394-281823b0e7ad/change-published-fqdn-for-2012-r2-rds?forum=winserverTS

It turns out the new FQDN we added when changing the Published Name needs to be added to the RDG_RDAllConnectionBrokers Local Computer Group.

This group is used to approve connections in the Resource Authorization Policies (RD-RAP) section of RD Gateway Manager. By default only the server’s domain FQDN is present in the list (as you’d expect) so it appears unless you add the new Published Name in there the connection attempt gets denied.

To add your external published name follow these steps:

  • Server Manager > Tools > Remote Desktop Services > Remote Desktop Gateway Manager
  • expand your RD Gateway server > Policies > Resource Authorization Policies
  • Click Manage Local Computer Groups on the right hand pane
  • Select RDG_RDConnectionBrokers > Properties
  • Click the Network Resources tab
  • type the FQDN of the Published Name you supplied to the Powershell script earlier then click Add
  • OK all the way out then try your connection again

RD Gateway Manager

The example below replaces the real server names with dummy entries but should illustrate the concept. The same scenario applies if your servers exist in a .local Active Directory domain (which will be the top entry) and your external domain is something different (again, remember to sort out internal DNS zone entries to suit).

Manage RDG_RDCBComputers group

Finishing touches

Once all the above is done you should then get a connection. There is one seemingly unavoidable credential prompt due to Microsoft persisting with using an ActiveX control to start the RDP session, but perhaps one day they’ll update it (we live in hope). It seems you can use the UPN-style format here, which is handy as it keeps things consistent. In a way it’s a bit of a security measure so not the end of the world.

Now the connection itself is sorted out all that’s left is to tweak the Session Host to our requirements. This guide gives some nice pointers on locking down the server via GPO:

Ref: http://www.it.ltsoy.com/windows/lock-down-remote-desktop-services-server-2012

We also push out a custom Start Menu using the newer Windows 10 1607 GPO settings along with the Export-StartLayout command (a quick sketch at the end of this section). Finally install any programs required, remembering to change the mode of the server first:

Ref: https://technet.microsoft.com/en-us/library/ff432698.aspx

change user /install

Then once done

change user /execute
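As for the custom Start Menu mentioned above, capturing the layout from a reference profile is a one-liner (the path below is just an example), with the resulting XML then fed to the Start Layout GPO:

Export-StartLayout -Path 'C:\Layouts\RDSStartLayout.xml'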

Now enjoy 🙂

Connection to Server 2016 RDS Session Based desktop via RD Web Access \ RD Gateway

Tech review: Havering Asks 2016

With a few hours to go before the end of the year I thought I’d do a quick review of our last event of 2016 – our TV production “Havering Asks”.

It’s part of our live TV week, where media students produce their own shows as part of their course programme. We then live stream it on YouTube and via the website http://www.hcronair.com

I’ve been helping with the technical side for 4 years now and each time we try and add something extra. In the past that’s gone from live streaming across college, to streaming online with Planet eStream, then to using multiple input streams with vMix and a Datavideo capture server.

This year, on top of our now business-as-usual vMix setup, we wanted to add a live videoconference link so I went away to gather some kit and ideas…

Skype for Business prime time

We already use Skype for Business within college in some of our conferencing rooms and for ad-hoc usage on staff PCs, so my first thought was whether we could use it here as well. I did also consider Google Hangouts on Air after being on a Google conference a few months back but found out it was discontinued in September, which was disappointing as the YouTube replacement didn’t fit our needs.

I gathered a few of our newer loan laptops (Core i5, 8GB RAM etc.) and headed down to set up, realising we’d need to make some adjustments to get this to work…

  1. The output from our mixing desk was via SDI cables so I dug out a USB capture card that we keep for occasions like this; first problem solved with the help of a phono adapter
  2. An audio input from the mixing desk was also required, our sound engineers sorted that out quickly and made sure there was no feedback while mics were active
  3. Our large screen TV was at the front of the set but the mixing desk at the back. Given we don’t have any wireless HDMI extenders the only option was to stitch together a long cable or two to get from the back of set to the front via some neat use of rubber cable mats!

In the end Skype for Business proved to be a good call as it accepted our decidedly non-standard video input without a grumble, whereas the consumer version of Skype refused to connect to the capture card. With the cabling out of the way we used the now-standard federation from Skype for Business > Skype consumer to invite our guests to the show.

For the purposes of the event a dedicated Office 365 account was created so the branding would look right on-screen. Radio presenter Iain Lee was first up and I’ll admit it was a relief to see the full screen conference up and running when he dialled in 🙂

Havering live TV week Skype video call

Twitter wall

On the day of the main Havering Asks event I was also asked to set up a Twitter wall for viewers to interact with the show via our hashtag #haveringasks

In the past we’ve used Zoomph with great results so I was pleased to find they have a free option for up to 250 posts, which was fine for the needs of this event. The display was placed at the entrance to the show and also shown on our digital signage screens using Planet eStream.

Havering Asks Zoomph Twitter wall display

Plans for the future

In the end the show went really well and it was another great experience for the students, who excelled with the quality of this year’s production. The video conferencing went down well too so I’m sure that will return again next time round, maybe we’ll go for multiple remote guests to keep things interesting!

I’m hoping that by the time we run our next show we might get some shiny new mixing kit to work with. The current setup has done great service but it would be good to move into the world of 4K, perhaps with some (very nice) Blackmagic kit… Santa, any chance of some additional presents? 😉

and finally…

Wishing you all a Happy New Year and best wishes for the year ahead.
Recently hit 300k views on here, so thanks for reading and hope to see you all back in 2017!

Azure Active Directory Application Proxy installation and troubleshooting

Recently we decided to migrate away from our legacy reverse-proxy product to something that would integrate better with our AD \ Office 365 systems. I’ve wanted to try out Azure AD Application Proxy for a while since seeing it in beta last year so this seemed a good time to get to grips with it. This post outlines a few gotchas to watch out for and some useful background reading.

Let’s start off with the initial Microsoft documentation, available here:

https://docs.microsoft.com/en-us/azure/active-directory/active-directory-application-proxy-get-started

Education freebies

Although Microsoft’s recent price hikes haven’t come at a good time for us in education we do get a lot of extras thrown into our Microsoft licensing agreement. One of the lesser-known ones is Azure AD Basic, which is the minimum requirement to use Azure AD Application Proxy – see the comparison chart at https://www.microsoft.com/en-cy/cloud-platform/azure-active-directory-features for more info.

To get your free licenses you’ll need to get in contact with your EES reseller and they’ll get them added to your tenant in a similar way to Office 365.

Applying the Azure AD Basic license is nice and simple: go to your Azure Management portal at https://manage.windowsazure.com, select your Azure AD directory then assign suitable groups to the license. What’s handy is that if you’re using Azure AD Connect to sync from your on-prem directory, any new users will get automatically licensed as they come on board.

Installation

Next step in the documentation list is here:

https://docs.microsoft.com/en-us/azure/active-directory/active-directory-application-proxy-enable

I used two dedicated Server 2012 R2 VMs for our install; the connector will be installed on each so we have failover should it be required at some point. Enabling the Application Proxy in Azure is nothing more than one click in the portal.

Now in theory the installation should be straightforward: nothing more than downloading the installer from the link, signing in with admin credentials and job done. However if everything went that smoothly this blog wouldn’t exist (!)

Troubleshooting 403 Forbidden errors

At the end of the installation the wizard helpfully offers to run a troubleshooter to check all is well but in fact all was far from well…

Checking Event Viewer threw up the following errors:

  • Event ID 32012
    The Connector update using the update service failed: ‘The remote server returned an error: (403) Forbidden.’. Check your firewall settings.
  • Event ID 12020
    The Connector was unable to connect to the service due to networking issues. The Connector tried to access the following URL: ‘https://***GUID***.bootstrap.msappproxy.net:8080/’

Outbound firewall settings were already configured to allow all the ports that were asked for in the documentation, proxy was disabled in Connection Settings and the firewall didn’t register any outbound traffic being blocked so what’s going on here? The mystery deepens…

Although the wizard only offers to run the troubleshooter once, you can run it again manually by launching it from:

C:\Program Files\Microsoft AAD App Proxy Connector\ConnectorTroubleshooterLauncher.exe

Troubleshooting the troubleshooter

Although there’s a fair bit of documentation in the troubleshooting section on Microsoft’s pages none of it referred to this particular error. Google didn’t have much to go on either but did throw up some useful and detailed slides from the Ignite conference that are well worth a read:

Ref: https://channel9.msdn.com/Events/Ignite/2015/BRK3864
Ref: https://techcommunity.microsoft.com/t5/Microsoft-Ignite-Content/BRK3139-Throw-away-your-DMZ-Azure-Active-Directory-Application/td-p/10675

The second link references another useful document aimed purely at troubleshooting:

Ref: http://aka.ms/proxytshootpaper

Whilst searching I stumbled across an email contact for the Microsoft Azure AD Application Proxy team

aadapfeedback@microsoft.com 

so I dropped them a message with the errors I was encountering. The team replied almost instantly and initially suggested ensuring that the following updates were applied on the server:

https://support.microsoft.com/en-us/kb/2973337
https://support.microsoft.com/en-us/kb/2975719

Proxy proxy proxy!

However still no joy even with everything present as it should be. The next recommendation was to check if I was using a proxy server for outbound connections. We do have one but it’s not used for server VLANs and is the first thing I disable on a new VM build.

However I was intrigued enough to check the traffic going out via TCPView… lo and behold, there was the proxy server trying to take the outbound connections and failing miserably. It seems that despite everything in the operating system suggesting traffic should be going out directly, the Connector was still trying to use the proxy route instead.

Ref: https://blogs.technet.microsoft.com/applicationproxyblog/2016/03/07/working-with-existing-on-prem-proxy-servers-configuration-considerations-for-your-connectors/

The solution is in this document under the section “Bypassing outbound proxies”, which basically involves adding these lines to the .config files for both the Connector and Updater services:

<system.net>
  <defaultProxy enabled="false"></defaultProxy>
</system.net>
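For reference the two files should sit alongside the service executables – the paths below are an assumption based on a default install, so adjust to suit your own:

C:\Program Files\Microsoft AAD App Proxy Connector\ApplicationProxyConnectorService.exe.config
C:\Program Files\Microsoft AAD App Proxy Connector Updater\ApplicationProxyConnectorUpdaterService.exe.config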

Checking Event Viewer and the Azure Portal afterwards showed success; my Connectors were now up and running with nice green icons, much better 🙂

Note: even though this fix resolves the issue the current version of the Troubleshooter doesn’t seem to follow the settings in the .config files and will still report connection failures. The Azure AD Application Proxy team are aware of this and are aiming to have a new version out soon.

Additional considerations

There are a few other points to bear in mind when you’re completing the configuration of the Application Proxy. None of them are major issues but it’s good to have everything ready before you start…

Certificates

Once the Connectors were up and running the rest of the process went smoothly, although note you will need a wildcard certificate if you want to publish your applications via a “vanity” URL, i.e. your own domain rather than “msappproxy.net”.

Using the vanity domain and some DNS CNAME records means that if you use Office 365 SharePoint for your Intranet your internal applications can work from the same URL both inside and outside.

Setting SPNs for Kerberos SSO

Even better, those internal apps can SSO based on the Office 365 initial sign-on for a suitably slick user experience! This does require a bit more configuration with Kerberos delegation but it’s not too bad.

When setting the SPN records I remembered the gotcha from when I worked on Dynamics CRM: type the command in manually… bizarre as it is, the same still applies!

Using the -S switch worked well for me:

setspn -s HTTP/yourserver yourserver

Ref: https://blogs.msdn.microsoft.com/saurabh_singh/2009/01/08/new-features-in-setspn-exe-on-windows-server-2008/

Nested groups

Finally, bear in mind if you’re using groups created natively in Azure AD you can’t nest memberships when creating application assignments, which is a shame. As a workaround create any nested ones in your local AD instead and sync them up via Azure AD Connect, or just create flat groups in Azure AD if you prefer to work solely up there.

Ref: https://docs.microsoft.com/en-us/azure/active-directory/active-directory-accessmanagement-manage-groups

Application links

You can either publish your application links via your Intranet or users can browse them via the portal (I’ve linked to the new makeover version as it looks much better than the previous one in my opinion)

https://account.activedirectory.windowsazure.com/r#/applications

image credit Rainer Stropek 

Tip of the day – Windows Update fixes for 7 and 8.1


Back in the good old days (aka a few years ago) Windows Update tended to be something that just… worked. You’d take a fresh Windows install, pop it through the update process and after a bit of chugging you’d get a fully patched OS.

Recently Microsoft seem to have made a bit of a mess of things and I’ve spent far too much time forcing recalcitrant machines to do what should be a simple task.

Hopefully once the cumulative updates start rolling everything into the monthly patch cycle this post may become irrelevant. Until then, here’s the quick way to persuade a Windows 7 / 8.1 machine through the Update process…

High CPU hotfix

Install this one first if you’re faced with a particularly out-of-date installation, otherwise you’ll be stuck for days “searching for updates” while your CPU goes crazy (100% utilisation) for very little return…

Windows 7 https://support.microsoft.com/en-gb/kb/3102810
Windows 8 https://support.microsoft.com/en-gb/kb/3102812

Windows Update Agent

Next install this to update your updating software in order to download new updates (!)

https://support.microsoft.com/en-gb/kb/949104

Reset Windows Update Agent script

Sometimes Windows Update still won’t work in spite of the patches above so run this script from TechNet to reset the Windows Update subsystem in case something has gone awry

https://gallery.technet.microsoft.com/scriptcenter/Reset-Windows-Update-Agent-d824badc
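If you just want the quick manual version of what the script does, the core of it is stopping the update services, clearing out the download cache and starting them again – a bare-bones sketch from an elevated command prompt:

net stop wuauserv
net stop bits
ren %systemroot%\SoftwareDistribution SoftwareDistribution.old
net start bits
net start wuauserv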

Round trip limit exceeded

Despite all of the above Windows Update can still fail because of a hard-coded limit in how it talks to WSUS (this only applies to managed Windows desktops rather than home users). In which case you need to take advice from this song…


“you can get it if you really want but you must try, try and try, try and try… you’ll succeed at last”

Basically just keep clicking the retry button until WSUS gets through enough trips to serve you all the updates Windows needs.

Ref: http://trentent.blogspot.co.uk/2016/03/wsus-clients-fail-with-warning-exceeded.html
Ref: https://blogs.technet.microsoft.com/sus/2008/09/18/wsus-clients-fail-with-warning-syncserverupdatesinternal-failed-0x80244010/

You may also be able to speed things up by cleaning up your WSUS server, which can be aided by this very useful script:

https://community.spiceworks.com/how_to/103094-automate-wsus-cleanup

or this one…

https://community.spiceworks.com/scripts/show/2998-adamj-clean-wsus
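Alternatively, if you’d rather stay native, recent WSUS versions ship with a PowerShell cmdlet that covers much of the same ground when run on the WSUS server itself:

Get-WsusServer | Invoke-WsusServerCleanup -CleanupObsoleteUpdates -CleanupUnneededContentFiles -CompressUpdates -DeclineExpiredUpdates -DeclineSupersededUpdates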

Now that’s sorted you can make yourself a cup of tea and wait for that progress bar to crawl across the screen! It will be interesting to see how the cumulative update process goes, but if it means an easier way of rolling an out-of-date machine up with one single download then it’ll have some benefits for convenience, albeit at the expense of granular control… swings and roundabouts I guess…

image credit Christiaan Colen
https://www.flickr.com/photos/132889348@N07/20013670043