Tech review: Havering Asks 2016

With a few hours to go before the end of the year I thought I’d do a quick review of our final event of 2016 – our TV production "Havering Asks".

It’s part of our live TV week, where media students produce their own shows as part of their course programme. We then live stream it on YouTube and via the website http://www.hcronair.com

I’ve been helping with the technical side for 4 years now and each time we try and add something extra. In the past that’s progressed from live streaming across college, to streaming online with Planet eStream, to using multiple input streams with vMix and a Datavideo capture server.

This year on top of our now business-as-usual vMix setup we wanted to add a live videoconference link so I went away to gather some kit and ideas…

Skype for Business prime time

We already use Skype for Business within college in some of our conferencing rooms and for ad-hoc usage on staff PCs, so my first thought was whether we could use it here as well. I did also consider Google Hangouts on Air after being on a Google conference a few months back but found out it was discontinued in September, which was disappointing as the YouTube replacement didn’t fit our needs.

I gathered a few of our newer loan laptops (Core i5, 8GB RAM etc.) and headed down to set up, realising we’d need to make some adjustments to get this to work…

  1. The output from our mixing desk was via SDI cables so I dug out a USB capture card that we keep for occasions like this, first problem solved with the help of a phono adapter
  2. An audio input from the mixing desk was also required, our sound engineers sorted that out quickly and made sure there was no feedback while mics were active
  3. Our large screen TV was at the front of the set but the mixing desk at the back. Given we don’t have any wireless HDMI extenders the only option was to stitch together a long cable or two to get from the back of set to the front via some neat use of rubber cable mats!

In the end Skype for Business proved to be a good call as it accepted our decidedly non-standard video input without a grumble, whereas the consumer version of Skype refused to connect to the capture card. With the cabling out of the way we used the now-standard federation from Skype for Business > Skype consumer to invite our guests to the show.

For the purposes of the event a dedicated Office 365 account was created so the branding would look right on-screen. Radio presenter Iain Lee was first up and I’ll admit it was a relief to see the full screen conference up and running when he dialled in 🙂

Havering live TV week Skype video call

Twitter wall

On the day of the main Havering Asks event I was also asked to set up a Twitter wall for viewers to interact with the show via our hashtag #haveringasks

In the past we’ve used Zoomph with great results so I was pleased to find they have a free option for up to 250 posts, which was fine for the needs of this event. The display was placed at the entrance to the show and also shown via our digital signage screens using Planet eStream.

Havering Asks Zoomph Twitter wall display

Plans for the future

In the end the TV show went really well and it was another great experience for the students, who excelled with the quality of this year’s show. The video conferencing went down well too so I’m sure that will return next time round; maybe we’ll go for multiple remote guests to keep things interesting!

I’m hoping that by the time we run our next show we might get some shiny new mixing kit to work with. The current setup has done great service but it would be good to move into the world of 4K, perhaps with some (very nice) Blackmagic kit … Santa, any chance of some additional presents? 😉

and finally…

Wishing you all a Happy New Year and best wishes for the year ahead.
The blog recently hit 300k views, so thanks for reading and I hope to see you all back in 2017!


Azure Active Directory Application Proxy installation and troubleshooting

Recently we decided to migrate away from our legacy reverse-proxy product to something that would integrate better with our AD \ Office 365 systems. I’ve wanted to try out Azure AD Application Proxy for a while since seeing it in beta last year so this seemed a good time to get to grips with it. This post outlines a few gotchas to watch out for and some useful background reading.

Let’s start off with the initial Microsoft documentation available here

https://docs.microsoft.com/en-us/azure/active-directory/active-directory-application-proxy-get-started

Education freebies

Although Microsoft’s recent price hikes haven’t come at a good time for us in education, we do get a lot of extras thrown into our Microsoft licensing agreement. One of the lesser-known ones is Azure AD Basic, which is the minimum requirement to use Azure AD Application Proxy – see the comparison chart at https://www.microsoft.com/en-cy/cloud-platform/azure-active-directory-features for more info.

To get your free licenses you’ll need to get in contact with your EES reseller and they’ll get them added to your tenant in a similar way to Office 365.

Applying the Azure AD Basic license is nice and simple: go to the Azure Management portal at https://manage.windowsazure.com, select your Azure AD directory, then assign suitable groups to the license. What’s handy is that if you’re using Azure AD Connect to sync from your on-prem directory, any new users will get automatically licensed as they come on board.

Installation

The next step in the documentation list is here:

https://docs.microsoft.com/en-us/azure/active-directory/active-directory-application-proxy-enable

I used two dedicated Server 2012 R2 VMs for our install; the connector is installed on each so we have failover should it be required at some point. Enabling the Application Proxy in Azure is nothing more than one click in the portal.

Now in theory the installation should be straightforward, nothing more than downloading the installer from the link, signing in with admin credentials and job done. However, if everything went that smoothly this blog wouldn’t exist (!)

Troubleshooting 403 Forbidden errors

At the end of the installation the wizard helpfully offers to run a troubleshooter to check all is well but in fact all was far from well…

Checking Event Viewer threw up the following errors:

  • Event ID 32012
    The Connector update using the update service failed: ‘The remote server returned an error: (403) Forbidden.’. Check your firewall settings.
  • Event ID 12020
    The Connector was unable to connect to the service due to networking issues. The Connector tried to access the following URL: ‘https://***GUID***.bootstrap.msappproxy.net:8080/’

Outbound firewall settings were already configured to allow all the ports that were asked for in the documentation, the proxy was disabled in Connection Settings and the firewall didn’t register any outbound traffic being blocked, so what was going on here? The mystery deepens…
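As a quick sanity check (my own habit rather than anything the wizard suggests) you can test whether the server reaches the bootstrap endpoint directly from PowerShell, using the GUID from your own event log entry:

# tests a direct connection to the endpoint named in the 12020 event
Test-NetConnection ***GUID***.bootstrap.msappproxy.net -Port 8080

Bear in mind this tests a direct connection from your session, so as it turned out here it can pass quite happily while the Connector itself is being routed somewhere else entirely.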

Although the wizard only offers to run the troubleshooter once you can run it again manually by launching it from:

C:\Program Files\Microsoft AAD App Proxy Connector\ConnectorTroubleshooterLauncher.exe

Troubleshooting the troubleshooter

Although there’s a fair bit of documentation in the troubleshooting section on Microsoft’s pages none of it referred to this particular error. Google didn’t have much to go on either but did throw up some useful and detailed slides from the Ignite conference that are well worth a read:

Ref: https://channel9.msdn.com/Events/Ignite/2015/BRK3864
Ref: https://techcommunity.microsoft.com/t5/Microsoft-Ignite-Content/BRK3139-Throw-away-your-DMZ-Azure-Active-Directory-Application/td-p/10675

The second link references another useful document aimed purely at troubleshooting:

Ref: http://aka.ms/proxytshootpaper

Whilst searching I stumbled across an email contact for the Microsoft Azure AD Application Proxy team (aadapfeedback@microsoft.com) so I dropped them a message with the errors I was encountering. The team replied almost instantly and initially suggested ensuring that the following updates were applied on the server:

https://support.microsoft.com/en-us/kb/2973337
https://support.microsoft.com/en-us/kb/2975719

Proxy proxy proxy!

However there was still no joy, even with everything present as it should be. The next recommendation was to check if I was using a proxy server for outbound connections. We do have one but it’s not used for server VLANs and is the first thing I disable on a new VM build.

However I was intrigued enough to check the outbound traffic via TCPView… and lo and behold, there was the proxy server trying to take the outbound connections and failing miserably. It seems that despite everything in the operating system suggesting traffic should be going out directly, the Connector was still trying to use the proxy route instead.

Ref: https://blogs.technet.microsoft.com/applicationproxyblog/2016/03/07/working-with-existing-on-prem-proxy-servers-configuration-considerations-for-your-connectors/

The solution is in this document under the section "Bypassing outbound proxies", which basically involves adding these lines to the .config files for both the Connector and Updater services:

<system.net>
  <defaultProxy enabled="false"></defaultProxy>
</system.net>
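For context (a trimmed sketch rather than the full file, so treat the surrounding structure as illustrative), those lines sit inside the existing <configuration> element of each service’s .exe.config, which on our servers lived alongside the service executables in the Connector folder mentioned earlier:

<!-- add only the system.net section; leave the rest of the file untouched -->
<configuration>
  <system.net>
    <defaultProxy enabled="false"></defaultProxy>
  </system.net>
</configuration>

Restart both services afterwards so they pick up the change.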

Checking Event Viewer and the Azure Portal afterwards showed success, my Connectors were now up and running with nice green icons, much better 🙂

Note: even though this fix resolves the issue the current version of the Troubleshooter doesn’t seem to follow the settings in the .config files and will still report connection failures. The Azure AD Application Proxy team are aware of this and are aiming to have a new version out soon.

Additional considerations

There are a few other points to bear in mind when you’re completing the configuration of the application proxy. None of them are major issues but it’s good to have everything ready before you start…

Certificates

Once the Connectors were up and running the rest of the process went smoothly, although note that you will need a wildcard certificate if you want to publish your applications via a "vanity" URL, i.e. your own domain rather than "msappproxy.net".

Using the vanity domain and some DNS CNAME records means that if you use Office 365 SharePoint for your Intranet your internal applications can work from the same URL both inside and outside.
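As a rough illustration (all of the names below are made up), the external DNS side is nothing more than a CNAME pointing your vanity hostname at the msappproxy.net address Azure generates when the application is published:

; hypothetical zone file entry - swap in your own application and tenant names
helpdesk.yourcollege.ac.uk.    IN    CNAME    helpdesk-yourtenant.msappproxy.net.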

Setting SPNs for Kerberos SSO

Even better, those internal apps can SSO based on the Office 365 initial sign-on for a suitably slick user experience! This does require a bit more configuration with Kerberos delegation but it’s not too bad.

When setting the SPN records I remembered the gotcha from my time working on Dynamics CRM: type the command in manually… bizarre as it is, the same still applies!

Using the -S switch worked well for me:

setspn -s HTTP/yourserver yourserver

Ref: https://blogs.msdn.microsoft.com/saurabh_singh/2009/01/08/new-features-in-setspn-exe-on-windows-server-2008/
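Once the SPN is in, listing the records on the account is a quick way to confirm it registered as expected (server name below is just a placeholder again):

rem list all SPNs currently registered against the account
setspn -L yourserver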

Nested groups

Finally, bear in mind that if you’re using groups created natively in Azure AD you can’t nest memberships when creating application assignments, which is a shame. As a workaround, create any nested groups in your local AD instead and sync them up via Azure AD Connect, or just create flat groups in Azure AD if you prefer to work solely up there.

Ref: https://docs.microsoft.com/en-us/azure/active-directory/active-directory-accessmanagement-manage-groups

Application links

You can either publish your application links via your Intranet or users can browse them via the portal (I’ve linked to the new makeover version as it looks much better than the previous one in my opinion)

https://account.activedirectory.windowsazure.com/r#/applications

image credit Rainer Stropek 

Tip of the day – Windows Update fixes for 7 and 8.1


Back in the good old days (aka a few years ago) Windows Update tended to be something that just… worked. You’d take a fresh Windows install, pop it through the update process and after a bit of chugging you’d get a fully patched OS.

Recently Microsoft seem to have made a bit of a mess of things and I’ve spent far too much time forcing recalcitrant machines to do what should be a simple task.

Hopefully once the cumulative updates start rolling everything into the monthly patch cycle this post may become irrelevant. Until then here’s the quick way to persuade a Windows 7 / 8.1 machine through the Update process…

High CPU hotfix

Install this one first if you’re faced with a particularly out-of-date installation, otherwise you’ll be stuck for days "searching for updates" while your CPU goes crazy (100% utilisation) for very little return…

Windows 7 https://support.microsoft.com/en-gb/kb/3102810
Windows 8.1 https://support.microsoft.com/en-gb/kb/3102812

Windows Update Agent

Next install this to update your updating software in order to download new updates (!)

https://support.microsoft.com/en-gb/kb/949104

Reset Windows Update Agent script

Sometimes Windows Update still won’t work in spite of the patches above so run this script from TechNet to reset the Windows Update subsystem in case something has gone awry

https://gallery.technet.microsoft.com/scriptcenter/Reset-Windows-Update-Agent-d824badc
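If you’d rather try a quick manual reset before reaching for the full script, the core of what most of these resets do is stop the update services, clear out the local download cache and start them again, roughly:

rem run from an elevated command prompt
net stop wuauserv
net stop bits
ren %windir%\SoftwareDistribution SoftwareDistribution.old
net start bits
net start wuauserv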

Round trip limit exceeded

Despite all of the above Windows Update can still fail because of a hard-coded limit in how it talks to WSUS (this only applies to managed Windows desktops rather than home users). In that case you need to take advice from this song…


“you can get it if you really want but you must try, try and try, try and try… you’ll succeed at last”

Basically just keep clicking the retry button until WSUS gets through enough trips to serve you all the updates Windows needs.

Ref: http://trentent.blogspot.co.uk/2016/03/wsus-clients-fail-with-warning-exceeded.html
Ref: https://blogs.technet.microsoft.com/sus/2008/09/18/wsus-clients-fail-with-warning-syncserverupdatesinternal-failed-0x80244010/

You may also be able to speed things up by cleaning up your WSUS server, which can be aided via this very useful script

https://community.spiceworks.com/how_to/103094-automate-wsus-cleanup

or this one…

https://community.spiceworks.com/scripts/show/2998-adamj-clean-wsus
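If your WSUS box is running Server 2012 or later there’s also a built-in cleanup you can kick off from PowerShell on the server itself, which covers similar ground to the scripts above:

# runs the WSUS Server Cleanup Wizard tasks from the command line
Invoke-WsusServerCleanup -CleanupObsoleteUpdates -DeclineSupersededUpdates -DeclineExpiredUpdates -CleanupUnneededContentFiles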

Now that’s sorted, you can make yourself a cup of tea and wait for that progress bar to crawl across the screen! It will be interesting to see how the cumulative update process goes, but if it means an easier way of rolling an out-of-date machine up with one single download then it’ll have some convenience benefits, albeit at the expense of granular control… swings and roundabouts I guess…

image credit Christiaan Colen
https://www.flickr.com/photos/132889348@N07/20013670043

Tip of the day – Excel INDEX MATCH in 10 seconds

I originally meant to write this post last summer the first time I used the magic of INDEX MATCH but for some reason never got around to it. I did however leave myself a template spreadsheet, but even that took a bit of time to decipher, so this time around I’ve decided to make the post happen!

The need to delve back into my archives came about when a colleague in the HR department asked me if there was a way to look up information from one set of data against another in Excel and mentioned VLOOKUP as an option.

That got me thinking about a scenario I’d had the previous summer when I needed to do something similar with user accounts after some fun with Office 365 DirSync experiences: synced OUs and user deletion

I also remembered swiftly dropping VLOOKUP in favour of the lesser-used but (imo) more flexible INDEX MATCH formula. Some of the advantages of the latter include:

  • lookup columns from anywhere in the sheet
  • no need to worry about messing up the formula if you insert \ move columns around

Of the websites I’ve looked at this one gives the best explanation and real-world examples so give it a read for further background:

Ref: https://fiveminutelessons.com/learn-microsoft-excel/how-use-index-match-instead-vlookup

What’s the answer?

However I wanted to write the formula out in even simpler plain-English so it would take me no longer than 10 seconds to remember how it works should my future self need a quick reminder.

Initially I went with the classic method of a post-it note but to save anyone needing to decipher my typically IT-techie scrawl here’s a much nicer version I made earlier 🙂

INDEX MATCH formula cheat sheet

  • In the example I’m using a value in cell A2 of Sheet1 to find an equivalent value in Sheet2 column A
    Once found the formula returns a related record for the item in question from Sheet2 column D (the formula itself is written out in full after this list)
  • You can fill the formula downwards if you have multiple inputs that need matching (e.g. a list of IDs that each need a value against them)
  • To help illustrate I’ve made a sample file that uses a fictional student’s ID number to return their grade and date of birth from another sheet.
  • If the value isn’t found in the data source Excel returns an #N/A error
  • As always the file is available in my Public OneDrive folder
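Written out, the formula for the example above is (assuming the lookup value in Sheet1 cell A2, the IDs in Sheet2 column A and the values to return in Sheet2 column D, as described in the bullets):

=INDEX(Sheet2!D:D, MATCH(A2, Sheet2!A:A, 0))

MATCH finds which row of Sheet2 column A contains the value from A2 and INDEX then returns whatever sits in that row of column D; the 0 at the end asks for an exact match.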

Further tips

  1. To save having to define exact cell ranges for the data just use D:D (or whichever column you require) to search the whole column; handy if you’re likely to replace the data source with a refreshed version at some point.
  2. If you’re typing this formula in manually and selecting columns across tabs make sure you don’t follow your natural instinct to click back in the formula cell to complete it; if you do you’ll end up changing the tab’s reference back to the one the cell exists in, which will play havoc with your results!
  3. If you want to use INDEX MATCH to return multiple values from the source data I find it easier to copy the formula into Notepad, adjust the first cell reference then paste it back. Sometimes Excel tries to be too clever when copying \ filling formulas across and ends up causing more errors than it solves!

In the end INDEX MATCH did the trick perfectly and earned me a Freddo chocolate bar for my troubles, which at the ever-increasing price of chocolate these days is a pretty fair trade!

Save yourself from insanity… Google and Outlook contacts on Android

Recently I had to factory reset my HTC One M8 whilst it was in for a repair (thanks to a stray bottle of soy sauce landing square on the screen, ouch!) and since reinstalling all my apps I noticed my contacts sync wasn’t working correctly.

Although my Google account had synced contacts when first setting things up, the People app wouldn’t let me add a new contact to my Google account; rather it would default to SIM instead. Very strange I thought, it’s never done that before and I could still see everything else that was already there. Oddly the filter menu wouldn’t list "Google" as an option either.

Initial thoughts

First I thought the app permissions might have gone wonky after the Android M update so I checked those: no problems there (the People app had access to the Contacts permission).

Next… maybe the Google account had Contacts sync turned off, but after checking it was there and working fine. As another test I created a new contact online via Google Contacts and then forced a sync on the phone… the contact didn’t appear. Very odd.

I tried a few other ideas like clearing app caches, and also cleared the Android cache partition via Recovery as I’d been having some issues with the HTC Camera app as well, but no joy there either (although the Camera app now seems to have sorted itself out, so a bit of a bonus there).

Solution – turn off Outlook contacts!

Finally I came across this…

http://forums.androidcentral.com/google-nexus-5/350303-phone-contacts-not-syncing-google-account-contacts-2.html

Credit to “haneyman” for this…

Confirmed, you cannot have Outlook sync contacts and expect Google contacts to sync. As soon as I unlinked the Outlook account on my phone, the Google contacts appeared.

So it seems the Outlook app was the culprit. To confirm I went into the sync options for my Outlook.com account and sure enough contacts sync was enabled. I turned that off, cleared my running apps and on the next load the People app was letting me create and sync Google contacts again.

Maybe having accounts on both Google and Microsoft is a bit unusual but definitely one to watch out for if you have a foot in both camps and use an Android smartphone.

Save yourself from insanity: Aruba Captive Portal RADIUS Accounting

I’ve been meaning to post this one for a while but got there in the end! Recently we changed our content filtering provider and one of the aims of the new system was to ensure tighter integration between the Wi-Fi controller and filter for authentication \ identification of users.

We particularly needed the framed-ip-address attribute as that’s used to tie a device to a user on our filtering product. In theory the setup sounds fairly straightforward:

  • set up Windows Network Policy Server to handle RADIUS authentication
  • set up RADIUS authentication profile against a new Wi-Fi SSID
  • set up RADIUS accounting on the wireless controller
  • set up RADIUS accounting on the filtering server

Initially all went well and we were able to authenticate users smoothly onto the Wi-Fi network via the existing captive portal… but (and isn’t there always a but!) we saw nothing on the filtering server, just an empty void of white space where user account activity should’ve been 😦

Initial troubleshooting steps

So I checked the simple things first…

  1. Check RADIUS Interim Accounting option is enabled on the AAA profile
  2. Check whether the shared secret is too complex or was mistyped when entering it into the various config pages
  3. Ensure accounting server options in Windows NPS are configured correctly
  4. Confirm configuration of accounting server details on Wi-Fi controller
  5. Ensure ports for accounting information are set as they should be

Everything checked out correctly and authentication still worked fine despite me trying to break it, which made the accounting failure even more strange. With that in mind it was time to move onto some more in-depth troubleshooting.

Delving deeper

The next step was to see if any accounting traffic was actually being sent, so trusty Wireshark was spooled up to watch for anything on port 1813. We saw plenty on 1812 for authentication but consistently nothing on 1813. At one stage I was beginning to wonder if the NPS server had something to do with it but replies to my posts on the TechNet forums suggested otherwise.
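For anyone repeating the test, a Wireshark display filter along these lines shows both sides of the conversation in one view (1812 for authentication, 1813 for accounting):

udp.port == 1812 || udp.port == 1813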

A case was then opened with Aruba support, which involved upgrading the controller to the latest firmware (6.4.2.12) before further troubleshooting could be performed. A few useful commands came out of this process, which should be run before upgrading to ensure the controller has enough resources to run the upgrade:

show memory
show storage

As an aside the upgrade did give us a nice new(er) feature called AppRF that basically brings application-level monitoring to the Aruba UI. It saves going through the firewall to find the same information and allows us to see at-a-glance where the bandwidth is going on the wireless network and to which user(s):


image credit: Aruba Networks

The update also made packet captures on the controller a bit simpler, which further proved our theory that no accounting traffic was being sent as the controller itself didn’t log anything on 1813 in its direct captures. However despite the upgrade we were still no closer to resolving the accounting issue.

The breakthrough

After escalating through various levels of Aruba support and product management one of the technical team finally found our issue, which turned out to be a deceptively simple fix. It’s a sneaky little setting squirrelled away named Captive Portal Check for Accounting.

The setting in question lives within the Misc. Configuration section of Security > User Roles.

You need to edit the settings of the role that is assigned as the 802.1X User Default Role for the AAA Profile associated with your RADIUS-enabled VAP (what a sentence that is!)

aruba role misc settings

Basically untick that box and everything starts working…

By default the Captive Portal Check for Accounting box is ticked and therefore accounting won’t work if the user has authenticated via a captive portal. The Aruba documentation has this to say about it:

The check-for-accounting parameter is introduced in ArubaOS 6.3.1.7. If disabled, RADIUS accounting is done for an authenticated users irrespective of the captive-portal profile in the role of an authenticated user. If enabled, accounting is not done as long as the user’s role has a captive portal profile on it. Accounting will start when Auth/XML-Add/CoA changes the role of an authenticated user to a role which doesn’t have captive portal profile. This parameter is enabled by default.

As soon as the box was cleared accounting information came flooding in and I was pleasantly surprised to see how quickly the interim updates were processed, as some vendors’ interpretations of the RADIUS accounting standards aren’t quite so amiable from what I read during my research.

It was certainly a voyage of discovery to get to the solution, but we gained a few new features along the way and I’ve also become well acquainted with the ArubaOS CLI for troubleshooting purposes, so the process has added some valuable knowledge too 🙂

Video streaming update and a visit to BVE 2016


This week I was able to visit the BVE expo to have a look for the next generation of video mixing and streaming equipment for our media block. There was plenty on show, including an interesting talk on using drones in TV production that drew quite a crowd!

It’s really interesting to see how networking and video technologies are converging and definitely something I’d like to do more work with in the future.

Our equipment is used by students in their classes as well as for projects such as HC Radio and our yearly Havering Asks TV programme. The visit also reminded me to write a little about some of the new kit we used in our recent productions.

vMix updates

After using the free version of vMix for the video stream of HC Radio we decided to purchase the vMix HD edition for video production use. The additional inputs and extra features such as Video list were what we needed to add pre-recorded content into the live show production.

Ref: http://www.vmix.com/purchase/

One thing we found with the video list is that the UI started to lag when we loaded 20+ videos into vMix. A workaround from the support team was to use VLC to generate a playlist and load the content in that way instead. The end output was the same but this method seemed a lot more CPU-friendly; we’ll need to check this again as new versions of vMix are released.
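For reference, the playlist VLC saves out is just a plain text file listing the clips in order, something along these lines (file names invented for the example):

#EXTM3U
C:\LiveShow\opening-titles.mp4
C:\LiveShow\vt-package-1.mp4
C:\LiveShow\vt-package-2.mp4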

We’ve also since found out about the free vMix Social plugin, which allows live updates to be posted as on-screen graphics, so we’ll be trying that out next time round as well.

Recording to Planet eStream

For those wanting to record the output to Planet eStream, use either of these methods; credit to eStream support for the below as they were testing vMix around the same time we did. Great minds and all that 🙂

1) On vMix there is an option for ‘External’ at the bottom. If you go to the settings next to ‘External’ and then to ‘Outputs’, make sure that Recording/External is set as the output and all overlays are selected. Now when you click ‘External’ and it goes red you can open an instance of the encoding application on the same machine and there will be a video device called ‘vMix Video’; this allows you to record the output window of vMix.

2) Stream it through eStream by editing the settings next to ‘Stream’ at the bottom. You can create a custom RTMP server. The settings will be:
Server: rtmp://svrestream/HCBcast
Stream Key: vMix

Now you can go to the encoder machine on another PC and use a network video source. Use the URL rtmp://svrestream/HCBcast/vMix (please note capitalisation is important).

vMix GO

This is one of the new (to me anyway!) products I spotted at BVE today. It’s a self-contained, portable production system with all the inputs etc. you need, integrated with a suitably powerful PC and vMix Pro included. It provides an interesting alternative to the Blackmagic Design kit I also went to see today, which is hardware-based rather than vMix’s software approach.

Streaming across multiple locations

One of the new requirements for Havering Asks 2015 was to provide an additional video source so we could transition between the live show taking place in our performance area "The Space" and our TV studio in the media block. vMix would then be used to mix the inputs and provide the stream to our YouTube channel.

Given that the two buildings are at opposite ends of the college it was a pretty simple decision that we needed to use the network to get video from one place to the other; the question was how best to do it. We also wanted to use whatever solution we found for future events, so it needed to be robust and easy to set up going forward.

From a cost perspective we thought of using a PC \ laptop but after adding an external capture card the solution seemed rather clunky. There’s also a fair bit to go wrong and once you put all the hardware prices together it’s not particularly cost-effective either. We then moved onto dedicated streamers to see what was available and looked at a few different products.

I liked the look of the Teradek and the output LCD would have made it easy to use with DHCP, as we could easily spot what address it had obtained as it got moved around. Unfortunately it’s HDMI only and was the most expensive of the three options. It also turned out not to be supported with Planet eStream so we moved on to the other options.

The unbranded Chinese device did its basic job of streaming but, as is often the case with these no-name products, had some odd firmware issues that meant we couldn’t 100% trust it. The main one was with DHCP, where the stream output link seemed to stick with the previous address it had been assigned rather than the current lease. This presented a problem for us, as setting up a static address each time we wanted to stream would add an extra administrative burden.

Now we come to the NVS-25, which offers a lot of flexibility at a great price:

  • SDI, HDMI and composite video inputs
  • RCA and XLR audio inputs
  • multiple streaming protocols
  • USB port for recording of video stream

The multiple inputs are particularly good as they mean we can use our current hardware over SDI \ CVBS and then have the flexibility to move to HDMI in future should we want to.

I had a look around BVE for similar devices and was rather pleased to see one of the suppliers rate it as the best device for feature set in its price range; always a relief to hear we chose wisely!

Experiences with the NVS-25

We learnt a few things from setting up and using the Datavideo device, so here are some lessons learned to save anyone else the trouble:

Networking

The IP scanner utility is very handy and helps you get up and running quickly. I hear that an NVS-30 is on the cards and if Datavideo can add a screen to the new product it’ll be even better!

Storage

The front USB port should only be used with USB sticks or, at a push, SSD drives on an adapter. It won’t run USB hard drives that don’t have their own external power, and the side effect is that the encoder freezes up until you do a hard power-off and disconnect the offending drive. The media should also be formatted as FAT32.
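One thing to note is that Windows is fussy about creating FAT32 volumes; the built-in tools will only do it for partitions up to 32GB, which from the command line looks like this (drive letter is just an example):

rem quick format as FAT32 - Windows' built-in format caps FAT32 partitions at 32GB
format F: /FS:FAT32 /Q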

Firmware

Update the firmware to the latest version as there are bugs in previous versions relating to how streams are presented. We had problems getting an RTSP stream into vMix due to incorrect header information in the stream. Apparently from what I was told at BVE an update has since been released to resolve this. As a workaround we changed over to RTMP instead, which worked OK.

Datavideo NVS-25 in action, connected up to our mixing desk

Whilst on the Datavideo stand their tablet-based autocue caught my eye. Again rather reasonably priced, it syncs the script across multiple devices and allows central wireless control from another station. Perhaps one for the 2016 productions 🙂
