Office 365 PowerShell – watch out for the Object Graph error


Whilst in the Office 365 admin console the other day we noticed that many of our new user accounts being readied for the new term were missing their Office 365 licenses. We run a script every night to automatically license our users, so something wasn't right… let the investigation begin…

Everything *looks* OK?

Initially I thought maybe the scheduled task on the server wasn't firing, so I logged on and verified the script had completed successfully the night before; at least it appeared that way, as the last result code was 0x0 (success).

I double-checked the script to ensure the logic was still sound in light of changing our MIS database and the subsequent student rollover for the new academic year… nothing out of the ordinary there, as the script just checks for users starting with a student ID code where no licenses are applied.
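For context, the nightly job boils down to something like the sketch below (simplified, and the "ST" prefix and licence SKU shown are placeholders rather than our real values):

# Simplified sketch of the nightly licensing pass (prefix and SKU are placeholders)
Import-Module MSOnline
Connect-MsolService -Credential $cred

# Find unlicensed accounts matching the student ID pattern and license them
Get-MsolUser -All | Where-Object { $_.isLicensed -eq $false -and $_.UserPrincipalName -like "ST*" } | ForEach-Object {
    Set-MsolUser -UserPrincipalName $_.UserPrincipalName -UsageLocation "GB"
    Set-MsolUserLicense -UserPrincipalName $_.UserPrincipalName -AddLicenses "tenant:STANDARDWOFFPACK_STUDENT"
}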

Run the script step-by-step

My next plan of attack was to run the script manually in a PowerShell window, line by line, to see what was going on. The first parts were fine, authenticating to the Office 365 service with no issues, then I ran the initial filter command:

Get-MsolUser -All | Where-Object {$_.isLicensed -eq $false}

But instead of returning a list of users the command bombed out with this error:

The formatter threw an exception while trying to deserialize the message: There was an error while trying to deserialize parameter http://provisioning.microsoftonline.com/:ListUsersResult. The InnerException message was 'Maximum number of items that can be serialized or deserialized in an object graph is '65536'. Change the object graph or increase the MaxItemsInObjectGraph quota. '. Please see InnerException for more details.

Looking around, it seems the error is fairly common. I'm not sure whether it was triggered by the number of users in our tenancy passing a certain level (and thus exceeding the query limit) or by a software update.

Either way the solution is explained clearly by the posters on a thread on serverfault.com:

http://serverfault.com/questions/691667/maximum-number-of-items-that-can-be-serialized-or-deserialized-in-an-object-gra

Basically the process goes as follows for an x64 server:

  • navigate to C:\Windows\Microsoft.NET\Framework64
  • go into the version folder matching the .NET Framework version PowerShell is running on (e.g. v4.0.30319), then into the Config folder where machine.config lives
  • edit the machine.config file with the lines summarised in the link above (the gist is sketched below)
  • save and close the config file
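For reference, the change boils down to raising the MaxItemsInObjectGraph quota machine-wide for WCF endpoints. Per the linked answer it's roughly the block below, added inside the <configuration> element (double-check the exact placement against the thread before touching machine.config):

<system.serviceModel>
  <commonBehaviors>
    <endpointBehaviors>
      <dataContractSerializer maxItemsInObjectGraph="2147483647" />
    </endpointBehaviors>
  </commonBehaviors>
</system.serviceModel>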

One step that isn't mentioned: in my case the change didn't seem to take effect until I rebooted the server.

Running the unlicensed user query again after the reboot returned the expected (large) list of unlicensed users, which I then followed up with a full run of the script to sort out all the student accounts. After that the list of unlicensed users was back at normal levels (shared mailboxes, archives etc.).

ZCM 11 – fixing a slow ZCC console

Over the past couple of months I'd been getting numerous reports from our techs that ZENworks Control Center was getting progressively slower, causing problems with:

  • remote control sessions not starting consistently
  • Bundle assignments failing \ taking 5-10 minutes to apply
  • searching for Devices not completing

Oddly, ZCC seemed to be running a bit better under my login, so I started having a look around for known issues and then stumbled across this:

Ref: https://www.novell.com/support/kb/doc.php?id=7015054

Although the article suggests 11.3.1 (which we're running) should have the issue fixed, we tried running a few of the tech accounts as Super Administrators, which seemed to help the initial login but didn't solve the other issues above. I've since seen another article elsewhere suggesting 11.3.2 is required to fix the non-Super Administrator issue. However, I'm waiting for ZCM 11 SP4 to make the server and agent upgrade work worthwhile, so I'm holding off on 11.3.2 for now.

Ref: https://forums.novell.com/showthread.php/476553-Slow-login-in-ZCC-for-non-SuperAdministrator-users?p=2316919&mode=linear#post2316919

Server resources

Having made only minimal improvement with the Super Administrator fix above, I turned my gaze to the servers themselves in case we were hitting a resource issue somewhere. Running "top" on the Linux primary servers didn't show any signs of them being under heavy load and there was plenty of free RAM; given they're running on an auto-tiering SSD-enabled SAN, disk performance isn't a concern either… onto the database server.

Our ZCM database runs on a dedicated Microsoft SQL Server VM, which gives a few potential pain points to watch out for. We’d already experienced issues in the past with ZPM causing massive growth of log files so it wouldn’t be a surprise if a database problem was the root cause here too.

Our database is 30GB+ so we tried upping the memory to run the whole lot in RAM, but that had minimal effect (apart from creating a huge pagefile on the C: drive!) so it was scaled back to 16GB. Multiple vCPUs were already configured so nothing to change there. Disk space and datastore latencies were all looking good as well, so no problems on the storage side either.

A closer look at SQL

At this point it was time to drill a bit deeper into the SQL Server itself to see if there was something within the database that could be causing our issues.

Initially I tried running the manual index defragmentation process (on top of our standard SQL Maintenance Plan) that's referenced on the "Advanced SQL Concepts" support page.

Ref: https://www.novell.com/documentation/zenworks113/zen11_cm_deployment_bp/data/b1abnlnh.html#b1abnss2

Various indexes were showing as fragmented, but the end result was a marginal speed increase that could well have been a placebo effect, so no magic bullet here (although it's good practice to run as per Novell's recommendations).
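If you want a quick look at fragmentation levels yourself before (or instead of) the full Novell process, a query against the index stats DMV does the job. A rough sketch via PowerShell's Invoke-Sqlcmd (the instance and database names are placeholders, and it needs the SQL Server PowerShell module installed):

# List indexes in the ZCM database with more than 30% fragmentation (names are placeholders)
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "SQLSERVER" -Database "zenworks" -Query "
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent AS FragPercent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC"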

By chance I stumbled back across a tool I'd used in the past called SQL Heartbeat, so I decided to pop it on my machine and watch the ZENworks database for a while to see what appeared.

Ref: http://www.sqlsolutions.com/products/sql-server-monitor/sql-monitor.html

The results were almost instant. What I like about the Heartbeat tool is the graphical representation of SQL database process IDs, which makes spotting the problematic one(s) very quick. If you really don't like 3rd party tools, SQL Profiler will probably provide similar results.

A screenshot of what we found is below. Watching the activity on the server, it seemed that a query from one of the primaries was going round and round in circles every 1-2 minutes, causing a huge spike in CPU and disk activity. The server CPU never dropped below 50% and often stayed up in the 80-100% range; no wonder ZCC was running slow!

Screenshot: SQL Activity Monitor showing sustained high CPU on the database server

Screenshot: SQL Heartbeat process monitor highlighting SPID 152

SPID 152 – I choose you…

Looking at SQL Activity Monitor I could see a matching "Expensive Query", which stood out like a sore thumb in terms of the volume of reads, writes and CPU time.

Screenshot: the matching entry in SQL Activity Monitor's Recent Expensive Queries list

Solution

Initially I tried stopping and restarting the novell-zenserver and novell-zenloader services on the primary server identified by the SPID. That did make the process disappear, but it reappeared a few minutes later. Restarting the affected primary server also had no effect.
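For anyone wanting to try the same first step, on a SLES-based primary the services are restarted with the usual init scripts (paths as per ZCM 11 defaults; verify on your build):

/etc/init.d/novell-zenserver restart
/etc/init.d/novell-zenloader restart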

We raised an SR with Novell but before we got past the initial “check ZCC status” type troubleshooting steps we had a large power cut that forced some of our VM hosts offline, including the primary above that had the large query associated with it. When everything came back up the database server was back at normal resource usage and the stuck query had disappeared. Definitely goes down as an unconventional fix!

Now ZCC is lightning fast and all the issues we were experiencing disappeared with the stuck query :)

Tools for next time

After doing a bit of post-event research there are a few more useful free tools out there that I'll try next time something like this arises:

sp_WhoIsActive procedure
http://sqlblog.com/files/default.aspx

Idera SQL Check
https://www.idera.com/productssolutions/freetools/sqlcheck

The making of HC Radio

Following on from the previous success we've had live streaming our Havering Asks event, our media block wanted to try going one step further by running a student radio station, permanently live with both audio and video streams.

From a technical and e-learning perspective it sounded like a great way to get students interacting with the technology available, and it also provided another way to get value from our Planet eStream solution, which already provides our video content repository, TV recording server and digital signage across the college.

As always though nothing is ever completely straightforward, especially as we kept trying to push the bar a little further by adding more features without breaking the bank!

Video stream

The video side was in theory the easy part as it’s something we’ve done before and only needs the Planet eStream encoder and a capture device with composite input. Simple? Of course not!

One idea was to have two cameras: one showing the students in their broadcast studio as you might see on Kiss FM or Capital and then another one directly above a mixing deck for certain radio show features.

The next fun part of the requirement was that the cameras would need to be in a physically separate location from the PC used for streaming due to space restrictions, which also makes things a bit more… interesting.

IP camera

The camera for the studio ideally needed to be mounted in the ceiling to get the right kind of angle to fit everyone in shot. Although we could have run composite cables through the building, that really didn't seem a sensible option; with a very conveniently placed Ethernet port nearby, an IP camera made perfect sense.

I had a look around at the BETT show and various suppliers, but found many devices were way too overpowered for our needs (read: expensive!), while the cheap and cheerful cameras at the polar opposite end of the market on Amazon, eBay etc. didn't seem to have a great track record for reliability.

We wanted something in between to provide HD quality video, which was when I remembered that Ubiquiti Networks also sell IP cameras alongside their Unifi wireless range (which we’ve had good experiences with recently).

https://www.ubnt.com/enterprise/
https://www.ubnt.com/unifi-video/unifi-video-camera-dome/

Eventually I found their Unifi Video Camera Dome product, which looked solid, was made for ceiling mounting and came in just under £100 – sold!

RTSP stream

After receiving the camera I ran through the configuration steps, then had a look around for how to get an RTSP stream I could enter as an input for Planet eStream. I soon found out that this isn't possible straight from the camera, but it can be done via the free NVR recording software that comes bundled with the device.

Ref: https://www.ubnt.com/unifi-video/unifi-nvr/

The RTSP stream option needs to be enabled per camera, which then provides a unique link that works perfectly in the eStream encoder. For simplicity I installed the NVR software on the streaming machine, so the RTSP stream comes from the 127.0.0.1 loopback address. The host machine runs on an i5 CPU and hasn't shown any performance issues running both pieces of software at the same time.
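For reference, the link the NVR hands out is a standard RTSP URL; from memory it takes roughly the form below, with a long camera-specific code on the end (7447 was the RTSP port on our install, but treat both the port and the format as things to verify on your version):

rtsp://127.0.0.1:7447/<camera-specific-code>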

Ref: http://dl.ubnt.com/guides/unifivideo/UniFi_Video_UG.pdf

Screenshot: enabling the RTSP stream for a camera in the UniFi NVR interface

Mixing software

Now that we had two cameras in place, we still had a requirement to solve: how to switch between them. If possible, some form of branding \ overlay was also mentioned.

Spending any significant sum on software was out of the question, so I hit Google hoping for something good and wasn't disappointed. Enter vMix:

http://www.vmixhd.com/

The basic premise of the software is that it lets you live-produce video by switching and overlaying various inputs to create your output stream. I really like the interface: everything just seems to be "there", and it also contains all the transitions, overlays and controls you're likely to need…

…oh and did I mention the free version? :)

Screenshot: the vMix interface

Two capture inputs plus two more for title sequences and overlays was a perfect fit for our needs. Yes, there is a resolution limit, but to get the project off the ground it was a compromise we were willing to make. In the future I hope we can at least get to the Basic HD or HD package.

At the time I sent a quick question over to the ever-helpful eStream support team to find out how best to connect vMix to the eStream encoder; in a strange twist of fate it turned out they were testing it at the same time! All you need to do is click the External button in vMix, which then creates a virtual capture device that eStream can see and stream.

We’re using the eStream encoder rather than broadcasting via YouTube at present as it’s simpler for us to push the video internally to our signage screens on Planet eSign. However I may need to change this in the future depending on traffic, if our video stream starts getting a lot of hits it may be easier to let YouTube take the load.

Screenshot: one of our screen layouts showing dual camera inputs via vMix

Audio stream

I was also asked to provide an audio-only stream so we could offer the channel up as an Internet Radio station. All the necessary licensing was already taken care of so it was time to find another technical solution that would do the job.

The students already do all the creative work using Virtual DJ software in the radio studio; we then split the output into multiple signals that are used to play the station inside the media building as well as providing a 3.5mm jack feed for the streaming machine.

Icecast immediately looked to be the front-runner to encode the audio stream, although it needs something else to act as a front-end. After a bit of searching I settled on an unfortunately named but rather good (and open-source) program called Butt.

http://butt.sourceforge.net/

I configured Icecast with its own dedicated external domain name and, after trying a few different format settings, went for an MP3 stream at 192Kbps.
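For anyone replicating the setup, the relevant bits of icecast.xml are the public hostname and listener port, which need to line up with whatever goes in your embed code (the values below are illustrative, matching the embed example further down):

<hostname>yourstream.domain.tld</hostname>
<listen-socket>
  <port>8000</port>
</listen-socket>

Butt then just needs pointing at that host and port with the source password and a mountpoint of your choosing.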

Getting the stream to embed nicely took a few tries too, mainly down to how picky browsers are about which code and tags are used. I eventually settled on the format below, which plays on most devices using an HTML5 player:

<audio autoplay controls>
  <source type="audio/mpeg" src="http://yourstream.domain.tld:8000/mountpoint">
</audio>

This embed code generator may also prove useful:
http://jasonlau.biz/embedcodegenerator/

The end result

We now have a website up and running that hosts both video and audio streams, along with the usual smattering of social networks for audience participation.

Another upshot of setting up the audio-only stream is that it's recently become the source for our upload to TuneIn radio: http://tunein.com/radio/HCRadio-s245212/

Students broadcast their shows throughout the week and we’ve now partnered with some other stations and even celebs to raise the profile of the station.

Now we've done radio, the next challenge lies ahead: HCTV will be coming soon as part of a TV week that will include Havering Asks and a couple of new shows. Watch this space!

Field notes: delprof2 shutdown script

A new twist on a familiar tale is the best way to describe this post! Recently we’d noticed quite a few of our first-gen SSD machines (60GB drives) were running low on disk space, particularly in open-access areas where lots of different users were logging on.


didn’t think I’d be seeing you again…

This probably comes as no surprise to most education network admins as it’s something we used to deal with in the days of small HDDs but became almost irrelevant as larger local drives became the norm. To some extent history has repeated itself with SSD drives and we have no such problems with our newer Samsung Evo 120GB drives.

That said, a solution still needed to be found for the machines with 60GB drives. The first port of call was the easy option: enable automatic profile cleanup via GPO after a set number of days:

Computer Configuration > Policies > Administrative Templates > System/User Profiles > Delete user profiles older than a specified number of days on system restart

A bit more thought required

However after thinking about it for a couple of seconds it’s not that easy…

Our domain structure places the Active Directory objects into OUs based on their location (room). Nothing unusual there. However, what we don't know from the OU alone is whether the machine is a classroom PC or one that lives in a staffroom \ office.

This information is very important because our Office 365 Outlook cache, amongst other per-user profile customisations, needs to stay persistent; the last thing we want is to wipe staff profiles just because a user has been away for a week on holiday!

With that in mind I seemed to have two choices:

  1. Alter the OU structure to split machines into class and admin
    This would take a fair bit of administration and ongoing maintenance, so I wasn't keen on this option
  2. Find a way to add some logic to the profile cleanup process
    Give the process some intelligence and let it decide what to do by looking at the machine type

Registry key

One of the sections in my custom imaging scripts asks the technician what the intended role of the machine is when they start the imaging process (name and location are set at the same time).

This information then goes into a custom registry value I create at HKLM\Software\HCFHE\WorkstationType for future reference.
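For illustration, writing that value only takes a couple of lines; a PowerShell equivalent would look like this (key and value names per our scheme above):

# Record the machine's intended role for later use by the cleanup script
New-Item -Path "HKLM:\SOFTWARE\HCFHE" -Force | Out-Null
Set-ItemProperty -Path "HKLM:\SOFTWARE\HCFHE" -Name "WorkstationType" -Value "CLASS"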

Then came the lightbulb moment: I could use that registry value as the identifier for the profile cleanup. My first thought was to add a WMI filter to the GPO… computer says no (or not easily, at least).

Ref: https://social.technet.microsoft.com/Forums/fr-FR/5cd1b80a-2f90-4d46-bf65-dba52dcf0c56/how-to-make-wmifilter-that-looks-for-a-registrykey-or-filefolder?forum=winserverGP

Time for some scripting

By this point I'd decided that the GPO on its own wasn't going to give me enough flexibility, so I went down the scripting route instead. First things first, we need a tool to run the profile cleanup; time to dip into my list of handy utilities for the excellent (and free) delprof2:

https://helgeklein.com/free-tools/delprof2-user-profile-deletion-tool/

Not only does it do a thorough job of cleaning profiles but it’s also very flexible in terms of specifying which profiles to clean and runs very nicely from a script :)

The script needs to do a couple of things:

  1. Check a registry key for the machine type ADMIN, CLASS (or an invalid value) and proceed \ quit accordingly
  2. Run the delprof2 tool with appropriate switches
  3. Write output to the Windows Event Log

For some reason I went for VBScript rather than PowerShell this time around. After a bit of research, some very handy links turned up code snippets to use; all credit to the authors for creating them and putting them out there for re-use.

The second snippet is particularly useful: having worked with Linux for a while now I'd been yearning for a Windows equivalent of the "tee" command, and I'm very glad the code below does the same job!

Check if value exists in the Registry (TechNet)
Save output of command to a variable (StackOverflow)
Write to Event Log (StackOverflow)

The end result

I’ve included a generalised version of the script below. As always grab the code from my OneDrive public folder to avoid any copy \ paste issues.

  • replace REMOVEME with the prefix of accounts to remove, i.e. match something consistent in your student account names
  • replace LEAVEME with accounts you want to exclude, e.g. accounts created for specialist use whose profiles you don't want removed
  • you can have multiple /id: and /ed: switches if you have a range of different account name patterns to match
  • change any other delprof2 parameters as required (the example below removes inactive profiles over 7 days old)
  • the use of & vbCrLf after each line of output from delprof2 gives a nicely formatted Event Log entry, otherwise everything ends up on one long line!

Const HKEY_LOCAL_MACHINE = &H80000002

' Read the machine type written to the registry at imaging time
strComputer = "."
Set objRegistry = GetObject("winmgmts:\\" & _
    strComputer & "\root\default:StdRegProv")

strKeyPath = "SOFTWARE\HCFHE"
strValueName = "WorkstationType"
objRegistry.GetStringValue HKEY_LOCAL_MACHINE, strKeyPath, strValueName, strValue

Set shell = CreateObject("WScript.Shell")

If IsNull(strValue) Then
		' No machine type recorded - play it safe and do nothing
		shell.LogEvent 4, "Machine type registry key not found, exiting without performing profile cleanup"
		WScript.Quit

ElseIf strValue = "CLASS" Then
		'WScript.Echo "The registry key exists, the type is " & strValue

		Dim objShell
		Set objShell = WScript.CreateObject("WScript.Shell")
		Dim strCMD
		' Build the delprof2 command line - adjust the path and switches to suit
		strCMD = """\\yourdomain.tld\netlogon\delprof\DelProf2.exe""" & " /u /d:7 /id:REMOVEME* /ed:LEAVEME*"
		'WScript.Echo strCMD

		' Run delprof2 and capture its output line by line (the "tee" equivalent)
		Set objExecObject = objShell.Exec(strCMD)
		strText = ""

		Do While Not objExecObject.StdOut.AtEndOfStream
			strText = strText & objExecObject.StdOut.ReadLine() & vbCrLf
		Loop

		' Record what was cleaned in the Application event log
		shell.LogEvent 4, "Machine type is " & strValue & " - student profile cleanup has been performed" & vbCrLf & strText
		Set objShell = Nothing
		WScript.Quit

ElseIf strValue = "ADMIN" Then
		shell.LogEvent 4, "Machine type is " & strValue & " - profile cleanup not required"
		WScript.Quit

Else
		' Unexpected value - log it and bail out rather than risk wiping staff profiles
		shell.LogEvent 4, "Machine type is unexpected value " & strValue & ", exiting without performing profile cleanup"
		WScript.Quit

End If

The output in the Event Viewer then looks something like this…

Lyncing up Mitel VOIP

Slight confession, I wrote this post a while back before the name change from Lync to Skype for Business but the title sounds snappy so it stays!

Introduction

Lync, like SharePoint, is one of those products that becomes much easier to get off the ground now that it's available as a cloud service on Office 365. We'd already been using Lync for its IM and video-conferencing abilities, but also wanted to see how it could be integrated with our phone system.

Whilst some people have gone all the way and replaced their PBX completely with Lync, we already had a fairly recent (and substantial) Mitel system in place, so for now integration rather than replacement is the primary focus. During a recent project we were informed of a Mitel plugin for Lync (including the cloud-based version on O365) that would bring the two systems together and give us the best of both worlds.

The product itself is called Mitel MiVoice for Lync and connects to the MAS server in your environment.

http://www.mitel.com/product-service/mitel-mivoice-lync
http://www.ati.com.ph/mitel/pdf/mivoice/MitelMiVoiceforLyncBrochure.pdf

This video on YouTube (not mine) gives a live demo of the product in action.

Usernames and LDAPS

In order to use the MAS applications, users need login credentials. Initially this looked simple to hook up to Active Directory, until we saw that the MAS server is a bit particular about how it connects to the directory. LDAPS is required, which means you'll need a Certificate Authority and the certificate imported onto your domain controller in order to create the secure connection required by the MAS.

Enabling LDAPS with Microsoft CA
Enabling LDAPS with 3rd party CA

Once that's done you'll be able to sync with Active Directory, select the users you want to enable for the MiVoice functionality and set an appropriate role (in our case a choice of Lync+Voicemail, Lync only and Voicemail only).

Plugin deployment

The documentation for MiVoice only seems to cover a manual installation where it's down to the user to enter the correct vMAS server address during the process. This is far from ideal, and seeing as the installer is an MSI anyway, it seemed worth having a closer look at what else could be done.

Installing with the standard /qb switch worked, but when the plugin started up it wasn't able to log in to the vMAS server. I was expecting that, seeing as a step had effectively been missed out, but I soon found the setting stored in a config file.

C:\Program Files (x86)\Mitel\MiVoice\UCA.exe.config

Running an additional action after the install to copy the file across seemed to work, but still felt a bit clunky. Knowing that the server address is an option passed through the installer, it seemed highly likely that it could be specified as an MSI property. The trouble was knowing which one; that's where MSI logs come into play. You can create a log of the installation using the following syntax:

msiexec /log logfile.txt /i installer.msi

Ref: http://thoughtsofmarcus.blogspot.co.uk/2010/10/find-all-possible-parameters-for-msi.html
Ref: http://stackoverflow.com/questions/7302251/determine-if-msi-exe-supports-certain-flag-argument

Searching the log for the vMAS server address soon turned up the property I was looking for: UC_SERVER_HOSTNAME.
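If you're hunting through a log of your own, searching for the server address you entered during a manual install narrows things down quickly, e.g.:

findstr /i "vmas" logfile.txt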

With the last piece of the puzzle found, running the command below will install the MiVoice plugin silently:

msiexec /i MitelMiVoiceForLync.msi /qn UC_SERVER_HOSTNAME="vmas.yourdomain.com"

One thing I did notice was that sometimes the Mitel plugin wouldn't fire up with Lync on the first launch after installation; it was hit and miss on a manual install and never worked at all with the silent method. With that in mind I added a reboot prompt as a final cleanup step to make sure the plugin behaved consistently.

Lync contact card behaviour

Whilst testing the deployment method on a generic VM I thought I'd broken the plugin, as it wasn't dialling contacts correctly from the right-click "Make a voice call" option and I kept getting an error stating "No number found". However, when I used one of my colleagues as a guinea pig it worked first time. Around the same time I right-clicked the contact I'd tried to dial from both machines and noticed a difference in what was being displayed.

On my VM the contact card only displayed a basic profile (name, department and job title) with no phone number or email address, whereas on the desktop machine a full range of information was shown. There was also a little status section stating that Outlook was the source of the contact card. Now the difference in behaviour started to make a bit more sense, as my VM didn't have an Outlook profile configured. Lo and behold, once Outlook was set up, on the next launch Lync on the VM showed full contact information and I was able to use MiVoice to dial.

At that point I decided to try to figure out the design logic for syncing \ displaying contact information in Lync, as in theory it should've been using Active Directory as the data source, so it seemed odd that the Outlook GAL had any bearing on the search results.

I also noticed that if a manual contact had been created in Outlook with a different number (e.g. mobile phone or the internal number expressed as an outside DDI) that would appear in MiVoice instead of the internal extension number, resulting in a failed dial.

Ref: http://getucinfo.com/lync/lync-2013-address-book-not-updating-information-from-active-directory.html
Ref: http://y0av.me/2012/02/23/normalize_ad_numbers
Ref: http://ucryan.com/2014/11/15/lync-contact-merge/

Basically, it seems that Lync will ignore the phone number field in Active Directory if it isn't in E.164 format (i.e. starting with a + sign), as per this Microsoft KB article:

http://support.microsoft.com/kb/2658120

At which point you have to decide how to proceed:

  1. add the required + sign in front of all numbers in Active Directory
  2. work on the assumption Lync users have Outlook configured

At present we’ll probably go with #2 as all our staff machines have Outlook installed.
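For what it's worth, if we ever revisit option 1 the bulk of the work could be scripted. A rough sketch for UK numbers stored with a leading zero (this rewrites live AD data, so test on a handful of accounts first):

# Convert 0xxxxxxxxxx style office numbers to +44xxxxxxxxxx (E.164)
Import-Module ActiveDirectory
Get-ADUser -Filter 'OfficePhone -like "0*"' -Properties OfficePhone | ForEach-Object {
    $digits = $_.OfficePhone -replace '\D', ''   # strip spaces and punctuation
    Set-ADUser $_ -OfficePhone ("+44" + $digits.Substring(1))
}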

Personal extension number

One final hurdle: all staff who want to use the plugin need their own individual extension number, as two users trying to control the same phone at the same time isn't going to end well! To that end we're likely to start issuing each member of staff with a "hot desk" extension number that's assigned when their network account is created, rather than extensions based on location.

Field notes: OST cache, shared mailboxes and SSD drives

As we’ve been running all our staff and students on Office 365 for the best part of a year now we’ve found a few tweaks that may be of interest to a wider audience. Here’s one of them from experiences earlier in the week…

SSD vs HDD

Like many of you out there, all our recent machines have been specced with SSDs, as the performance difference is incredible (can you imagine going back to HDD now?!), the downside being that the drives aren't as large.

This has become less of an issue on newer builds, as 120GB drives have dropped right down in price, but for the first-gen machines with 60GB drives we have hit some space issues, mainly due to…

Shared mailbox caching behaviour

In a previous post I mentioned the hybrid cache in Outlook 2013 that makes working with large mailboxes much easier; however, what's hidden away in the small print is that the hybrid cache doesn't apply to shared mailboxes or other people's mailboxes that you have access to.

That has some knock-on effects that aren't immediately apparent but have begun to manifest themselves recently in a couple of ways:

Low disk space

We’ve had a few calls coming in recently with machines running out of disk space, an issue we’d pretty much consigned to the history books after being spoiled with 160GB+ drives being more than spacious for most generic desktops.

Upon running the old but incredibly useful WinDirStat tool we could see where the space had gone… OST files! The worst case thus far was 35GB in a single file, but other machines have had numerous ~7GB files (multiply that by a factor of 3-4 on a machine used by multiple staff and you can soon see where the space goes).
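A quick PowerShell sweep will also list the offenders without waiting for WinDirStat to scan (path per the default Outlook 2013 cache location):

# List OST files over 5GB across all local profiles, largest first
Get-ChildItem "C:\Users\*\AppData\Local\Microsoft\Outlook\*.ost" -ErrorAction SilentlyContinue |
    Where-Object { $_.Length -gt 5GB } |
    Sort-Object Length -Descending |
    Select-Object FullName, @{ n = 'SizeGB'; e = { [math]::Round($_.Length / 1GB, 1) } }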

Calendar entries not syncing

Another recent call involved staff responsible for managing other users' calendars not seeing updates when new entries were added or moved, yet when viewing on another machine or via OWA there was no such problem. What seems to happen is that as the OST grows it corrupts, and eventually the sync behaviour becomes a bit erratic.

Emails stuck in Outbox

Similar to the calendar scenario above, some users have had random emails getting stuck in their Outbox and refusing to send. This particular issue has occurred on smaller OSTs as well as the huge ones above, so it seems to be a corruption issue that can pop up from time to time.

Resolution steps

The quickest way to fix a corrupted cache is simply to delete it. If, however, you don't want to do that for some reason (slow connection, the user doesn't want to wait for the cache to reload, or there are unsynchronised items), you can run the scanpst tool that's included with Office. It's not something you'll find in the Start Menu, so run it manually from:

C:\Program Files (x86)\Microsoft Office\Office15\scanpst.exe

Ref: https://support.office.com/en-za/article/Repair-Outlook-Data-Files-pst-and-ost-25663bc3-11ec-4412-86c4-60458afc5253

In the scenario above where emails were getting stuck in the Outbox, scanpst resolved the issue without needing to delete and repopulate the cache, so it's worth a shot as a quick fix.

Disable caching for additional mailboxes

Some of our users need to have 10+ additional mailboxes open; others have shared mailboxes with many attachments, and these have tended to be the ones hit hardest by the caching and disk space issues. If money were no object we could just get them all 250GB+ SSDs, but seeing as that's not the case we need a plan B.

Ref: https://support.microsoft.com/en-us/kb/982697

The solution is to enable an Office 2013 GPO setting under User Configuration > Administrative Templates:

Outlook 2013 > Outlook Options > Delegates > Disable shared mail folder caching

Once this applies you'll notice a change in the status bar along the bottom of Outlook: non-cached mailboxes will show up as "Online" (confirmed at the end of the KB article above).

Screenshot: Outlook status bar showing a non-cached mailbox as Online

The slight downside is a bit of lag when first opening the folders. For a secondary mailbox that isn't used as regularly it's an acceptable compromise given the issues users were experiencing with the oversized caches, so we've rolled it out across the board.

The only thing we’ll have to wait and see is whether the large OSTs reduce in size or if they need deleting to remove the cache that was previously stored for the shared mailboxes.

 

ZCM PXEMenu: TFTP Read File failed

Just a quick post, but one that could prove useful to anyone who heavily customises their ZCM 11 imaging servers:

When I first started working on our imaging system we only had a couple of entries in the boot menu, which was pretty much the same as the Novell out-the-box menu bar one added option for our own image.

As I started customising further we gained more and more options until I got to the point of making a couple of submenus to house the various scenarios I’d built up, including:

  • standard single image (clear existing ISD data)
  • multicast image (master and slave machine options)
  • OOBE image (run the first couple of imaging scripts to pre-install drivers but shut down leaving the machine ready to use just needing a name)
  • various diagnostic options (basic VGA for unsupported chipsets, imaging manual mode etc.)

The error

The other day I went to add another line to try out some new code, then to my surprise got a call from one of our technicians saying PXE boot was broken – sure enough it was:

ProcessPXEMenu: TFTP Read File failed


The fix

Initially I thought I'd made a typo on one of the new lines I'd added, or perhaps forgotten to upload a matching config or script file that the menu was calling. Checking back I couldn't see any errors, but I did notice in WinSCP that the file was now 76 lines long.

I removed the new line, taking it back to 75 lines total, and rebooted… PXE boot worked again!

I then removed an old comment line I didn't need anymore and replaced it with the new option I'd tried to add initially, and sure enough PXE boot still worked. Adding the comment line back caused the error again.

It seems there's some sort of size restriction on the pxemenu.txt file; whether it's file size or a 75-line limit I can't say for sure, but it's definitely one to watch out for if you like to customise your imaging menu.
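If you want to keep an eye on yours, checking the menu length on the primary is a one-liner (path per the default TFTP directory on our SLES-based primaries; adjust to suit):

wc -l /srv/tftp/pxemenu.txt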
