Ubiquiti / UniFi Protect Outage (Aug 2020)

I wanted to share a couple of thoughts on the recent UniFi Protect outage that happened on Aug 25th 2020 as it raises a number of questions that I think Ubiquiti need to answer.

First, though: outages happen. It’s not a question of if, but when, and how severe. What’s important is that companies (and individual teams) expect them, plan for how to handle them, and provide quick mitigations (including proactively designing for failure modes). Having the right data and the correct processes is critical to handling outages.

The incident

The incident was described as “Elevated API Errors” that affected “AmpliFi Cloud & UniFi Protect (AmpliFi Cloud Production API)”. This translated to customers not being able to access the UniFi Protect cameras both locally and remotely.

In total there were two outages that apparently lasted for a total of 8 hours 14 mins (more on this figure later). The first outage was classed as a partial outage, then 11 mins after it was “resolved” a major outage was triggered.

Partial outage

The partial outage lasted for 6 hours, 27 minutes. The status website wasn’t updated until 4 hours into the outage. Meanwhile customers, like me, weren’t able to access devices remotely.

Major outage

The major outage was posted 11 mins after the partial outage was marked as resolved. The status updates were much faster, but the timeframe on the status website doesn’t add up: the timeline shows updates spanning 12 hours and 9 mins, which makes the total outage 14 hours and 39 mins and their rolling 30-day availability 97.96% (if you calculate it solely from the status timeline).
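As a quick sanity check on those numbers, here’s how a rolling 30-day availability figure falls out of total downtime (a back-of-the-envelope sketch; the fixed 30-day window is my assumption about how they calculate it):

```javascript
// Rolling-window availability given total downtime in minutes.
function availabilityPercent(downtimeMinutes, windowDays) {
  const windowMinutes = windowDays * 24 * 60;
  return ((windowMinutes - downtimeMinutes) / windowMinutes) * 100;
}

// 14 hours 39 minutes of downtime, as read off the status timeline.
const downtime = 14 * 60 + 39; // 879 minutes
console.log(availabilityPercent(downtime, 30).toFixed(2)); // ~97.97, within rounding of the quoted figure
```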

Plenty of customers were vocal about the issue; I personally filed support tickets after a couple of hours (as soon as I could work around a separate issue with their ticketing system).

The questions

Let’s be honest, problems like this happen, but there are a number of issues here that Ubiquiti need to address publicly, if they want their customers to have confidence in their cloud systems (a lot of their target customer base are sysadmins and tech professionals after all).

  1. Why did this outage last so long? If it was caused by a change (bad deployment etc) why wasn’t it rolled back quickly? What actions did folks have to take that took so long to get out of the door?
  2. What happened between the two incidents? Why did resolving a partial outage cause an even worse outage? Who is in charge of resolving these incidents and making the decisions about the course of action and assessing the customer impact to help teams make the right decisions?
  3. Why was the blast radius so wide? This was either an automated deployment (see #1 about rollbacks) or a manual change. Either way, why did this have global impact? Are changes not deployed into different regional stacks to isolate faults? Was it deployed too quickly, so the damage was done by the time the issue was apparent? Or, do they just have a single global system, that represents a clear single point of failure?
  4. How can customers still retain access to their local camera systems when the Ubiquiti cloud is down? Customers like the fact that footage is local, but still accessible remotely. So why do both local and remote access require cloud connectivity?
  5. Why do the timeline and availability data not align with their own summary information? How is availability being measured?

I raised my request for a root cause analysis / post-mortem in my support ticket and I was told

Regarding your query, we don’t have any official updates as of now. When we do so it’ll available on community.ui.com

I understand your concern, but if you check the historical uptime we haven’t had any major outages before this.

I think Ubiquiti owe it to their customers to provide more analysis of this outage. It’s not about historic performance, it’s about having confidence in their engineering systems and processes to be sure that you can rely on them.

Automatic monitoring of AWS Lambda functions for an Alexa Skill

If you’re not testing your services in production yourself, then you’re letting your customers test them for you!

Automated testing of your system means you can catch issues before your customers see them. This doesn’t mean you can skimp on unit testing or functional testing; it simply means you have an extra layer to notify you of problems before they affect users.

What to test

The Alexa Skill that I want to test is implemented as an AWS Lambda Function. The Lambda Function will be the focus of the automated testing, since that’s where my code lives and where a problem is most likely to appear. I’m not testing anything to do with voice commands, Echo devices, or the Alexa Skill backend. I’m treating those as a black box; after all, I don’t have any way of fixing or debugging issues inside those systems anyway.

When I’m monitoring the system I want to know the following:

  • Is the Lambda function online and working?
  • Are its dependencies functioning correctly?
  • How long does it take to execute (latency)?

To test this I’ve decided to take the simplistic approach and use an automated request to my Lambda Function that synthesises a customer request and then use my monitoring systems to identify problems.

This works well for my use case because I have already implemented monitoring and logging for my Lambda Function, so I have pretty good metrics. Most users interact with my skill on a weekly basis, so this is likely to highlight problems before users find them.

How to test it

My Lambda function exposes a single handler function in Node.js so to test different behaviours you have to modify the values sent in the request payload (rather than having different APIs for each). There are a couple of different options that I can use to test my function but for simplicity and cost it’s easier to use an AWS CloudWatch Event to automatically trigger my Lambda Function at a set interval and check everything is OK.

Setting up the Cloudwatch event is split into two parts:

  1. Modifying the Lambda Function to allow CloudWatch
  2. Setting up the Event itself to call the function at a set interval

Allowing the function to be invoked is pretty simple. Modify your Lambda Function and add a new trigger from the menu on the left for “CloudWatch Events”. This means your function can be invoked either from an Alexa Skill or from CloudWatch.

You can then go into CloudWatch and add a new event. I’ve opted to trigger this event on a set schedule every 5 minutes. Select the target as a Lambda function, pick the function name from the list, and choose the version of the function you want to target (I always select the same version that I have live for users, to make sure I’m testing what users are experiencing).

For the input of the event, I used a fixed JSON payload which specifies the intent name that I want to trigger.

  "session": {     
    "new": true,     
    "sessionId": "SessionId.253d1fe4-9af0-45de-b767-ccfea6f0e3d4",     
    "application": {       
      "applicationId": "<YOUR SKILL ID HERE>"     
    "attributes": {},     
    "user": {      
      "userId": "HEALTH-CHECK-USER"     
  "request": {     
    "type": "IntentRequest",     
    "requestId": "EdwRequestId.cb85ea9c-1e57-4226-83b6-f1a1d9e2eb8a",     
    "intent": {      
      "name": "PlayLatestSermon",       
      "slots": {}     
    "locale": "en-GB",     
    "timestamp": "2018-01-14T14:31:58Z"   
  "context": {     
    "AudioPlayer": {       
      "playerActivity": "IDLE"     
    "System": {       
      "application": {         
        "applicationId": "<YOUR SKILL ID HERE>"       
      "user": {         
        "userId": "HEALTH-CHECK-USER"       
      "device": {         
        "supportedInterfaces": {}       
  "version": "1.0" 

Once that’s in place you should have the specified intent being triggered every 5 mins!
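One thing worth doing on the skill side is recognising the synthetic traffic so it doesn’t pollute your user metrics. A minimal sketch, keying off the fixed “HEALTH-CHECK-USER” userId from the payload above (the helper function name is my own):

```javascript
// Returns true when the incoming event is the CloudWatch health check
// rather than a real user, based on the fixed userId in the test payload.
function isHealthCheck(event) {
  const user = event && event.session && event.session.user;
  return !!user && user.userId === 'HEALTH-CHECK-USER';
}

console.log(isHealthCheck({ session: { user: { userId: 'HEALTH-CHECK-USER' } } })); // true
console.log(isHealthCheck({ session: { user: { userId: 'amzn1.ask.account.XYZ' } } })); // false
```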

Knowing when things go wrong

I’ve already written about monitoring Alexa Skills and creating a dashboard. I’d rather not have to keep checking graphs to know when something is going wrong. So, you can actually use CloudWatch alarms to check metrics for you and send an email when a configured threshold is breached.

When you add a CloudWatch event it automatically logs some metrics for each configured event, including for successful and failed invocations. This is perfect for knowing when the health check failed. I followed the configuration options for the alarm and set this to fail if I have three or more failed invocations in a 15 min period. I’ve also configured some other alarms on errors, and some capacity alarms. One thing that I find helpful is to set an alert when the status is ALARM (when it goes wrong) and also when it’s OK (when it recovers), that way, if you get a blip that triggers the alarm, you’ll also get a follow up telling you it was OK.
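For clarity, the alarm rule amounts to summing the failed-invocation datapoints over the evaluation window and comparing against the threshold. A rough local model of that evaluation (CloudWatch does this server-side; this is just to illustrate the logic):

```javascript
// failureCounts: FailedInvocations datapoints within one 15-minute window.
// Trips to ALARM when the window total meets the threshold (3 in my setup).
function alarmState(failureCounts, threshold) {
  const total = failureCounts.reduce((sum, n) => sum + n, 0);
  return total >= threshold ? 'ALARM' : 'OK';
}

console.log(alarmState([1, 0, 1], 3)); // OK – only two failures in the window
console.log(alarmState([2, 1, 1], 3)); // ALARM – four failures
```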

The beauty of this approach is that you get automatic traffic testing your code at whatever interval you pick, and you get notified when something starts to misbehave so you can catch it before your users do, helping ensure a reliable system and a better experience for your users.

Using AWS CloudWatch to monitor an Alexa Skill (Node.js)

Monitoring Alexa Skills can be a bit hit or miss. On one hand, you get some metrics for free (throttles, invocations, duration, errors) which is great, but sending custom metrics can be a bit confusing, and it’s not helped by the 166 npm packages that you find when you search for ‘cloudwatch’ in the npm database.

What I want is a way of monitoring different parts of my Alexa skill, especially:

  • Execution time
  • RSS feed download time
  • RSS feed parsing time
  • Custom errors
  • Which intents are being used the most

Sending custom metrics to CloudWatch via Metric Filters

After looking through the various npm packages and the CloudWatch API, I thankfully found a much easier way to push custom metrics to CloudWatch: Metric Filters. They allow you to set up parsing rules for CloudWatch log lines; when a log line matches the filter, it can be published as a metric as well (and then graphed). This means you can graph metrics just by writing a log line in a consistent format!

Log Format

The log format I settled on is pretty simple, I have two main metric types:

  • Timers – These log a value (duration) against a specific metric name
  • Counters – These just log out a metric name and can be used to count the number of times something happened (perfect for errors).

I wrapped this format in a simple logger object so I can just call logger.error(), logger.timer() or logger.increment() in my code and pass the name (and value if it’s a timer).

var logger = {

    timer: function(metricName, value) {
        console.log("TIMER_METRIC " + metricName + " " + value);
    },

    increment: function(metricName) {
        console.log("COUNTER_METRIC " + metricName);
    },

    error: function(metricName) {
        console.log("COUNTER_METRIC Error");
        console.log("COUNTER_METRIC " + metricName);
    }
};

This results in a set of log lines being automatically written to the CloudWatch log file. The example below shows a counter for the metric PlayLatestSermon, a timer for FeedFetchTime (taking 1,001ms) and a timer for FeedProcessingTime (taking 3,278ms).

2018-01-27T15:06:40.441Z	acd0af62-0373-11e8-94ab-53e50c1e53b5	COUNTER_METRIC PlayLatestSermon
2018-01-27T15:06:41.442Z	acd0af62-0373-11e8-94ab-53e50c1e53b5	TIMER_METRIC FeedFetchTime 1001
2018-01-27T15:06:44.720Z	acd0af62-0373-11e8-94ab-53e50c1e53b5	TIMER_METRIC FeedProcessingTime 3278

Now that this data is in the logs I can set up the filters.

Setting up the Metric Filters

Setting up the filters is pretty simple. Log into CloudWatch in the AWS portal, and click on ‘Logs’. Then click on the log group for your AWS Lambda Function and then click the blue ‘Create Metric Filter’ button.



You’ll then be prompted to enter the filter pattern. This is the pattern you want to match in each log line. You can see from the log examples above that my log structure is in the format [datetime, requestId, level, metricName, metricValue] (the value is only present for timers).

Since I’m logging two different types of metrics (timers and counters) my filters are pretty much one of two:

[datetime, requestId, level=TIMER_METRIC, metricName=FeedFetchTime, metricValue]
[datetime, requestId, level=COUNTER_METRIC, metricName=AMAZON.LoopOffIntent]

The first pattern looks for a log line whose level is ‘TIMER_METRIC’; it then takes the next value as the metric name (in this example it’s the FeedFetchTime metric), then the value I want to log. The second example is almost identical, but the type of log is different, the metricName is different, and since it’s a counter I don’t emit a value.

You can test the filter pattern against sample log data but be warned, the drop down is in reverse order, so you can only test it against some of your oldest log data (not great if you’ve added new data that’s in your newer log files). To work around this you can copy/paste log data and test against that. Once you’re happy it’s matching log lines correctly you can configure the metric itself.
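If you’d rather check lines locally before saving, the space-delimited pattern semantics are easy to mimic: split the line on whitespace and bind the fields positionally. A small sketch (my own helper, not part of CloudWatch):

```javascript
// Mimics a [datetime, requestId, level, metricName, metricValue] filter:
// returns the extracted metric when the level matches, null otherwise.
function matchMetricLine(line, wantedLevel) {
  const [datetime, requestId, level, metricName, metricValue] = line.trim().split(/\s+/);
  if (level !== wantedLevel) return null;
  return { metricName, value: metricValue !== undefined ? Number(metricValue) : null };
}

const timer = matchMetricLine(
  '2018-01-27T15:06:41.442Z acd0af62-0373-11e8-94ab-53e50c1e53b5 TIMER_METRIC FeedFetchTime 1001',
  'TIMER_METRIC'
);
console.log(timer); // { metricName: 'FeedFetchTime', value: 1001 }
```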

Here you can set a namespace to group your metrics together, and the metric name (I copied the name that I emit to the logs to keep it simple). If you want to record the value from the logs, you can click the quick-links below the value field (these will match the names you used in your filter pattern). For my example I used ‘metricValue’, but for counters I just leave this field blank since I don’t care about a specific value. When that’s done you can save your filter.

Graphing your new metrics

The metrics will normally take a few minutes to come through on the CloudWatch metrics side. After that you should be able to click on the namespace you configured in your metric filter and find the metric you want. Once it’s graphed you can change the statistic type too. Some useful ones are:

  • avg – The average value for the specified interval. Be careful with this stat, it’s nice to have but it can give you a false sense of security
  • sum – This calculates the sum of all of the values emitted in the time window
  • min – The lowest value emitted in the time window
  • max – The largest value emitted in the time window
  • sample count – The number of times the metric was emitted (useful for error counts etc, but make sure you don’t log error=0 or something similar since that will have a sample count of 1)
  • p99 / p90 / p50 etc – These are immensely valuable percentile metrics. P99 shows the value at the 99th percentile, i.e. the value that only the slowest 1% of samples exceed; likewise P90 marks the boundary of the slowest 10%, and so on. These are great for timer metrics: the P99 execution time shows what the slowest 1% of users experience, and P90 what the slowest 10% experience. They really show where users might be getting a much worse experience than you’d think if you only looked at average values.

I suggest duplicating the same metric on a single graph and changing the statistic types to show avg, P90 and P99, so you can get a feel for the average experience, but also what the slowest 10% and 1% of users experience too.
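To see why the percentiles matter, here’s a tiny worked example: 98 requests at 100ms and 2 at 5000ms. The average looks healthy while P99 exposes the tail (nearest-rank percentile; simplified compared to CloudWatch’s own statistics):

```javascript
// Nearest-rank percentile over a set of timer samples.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[index];
}

const samples = [...Array(98).fill(100), 5000, 5000];
const avg = samples.reduce((sum, v) => sum + v, 0) / samples.length;
console.log(avg);                     // 198 – looks fine
console.log(percentile(samples, 90)); // 100 – still fine
console.log(percentile(samples, 99)); // 5000 – the tail the average hides
```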

Creating a Dashboard

Once you’ve got data coming through to CloudWatch metrics I suggest creating a custom dashboard to show the data you care about in a single screen. Click on dashboard and pick a name and then click ‘add widget’ to add a new graph. You can expand graphs to take up different numbers of columns / rows but remember to click “save dashboard” whenever you modify something!

Building an Alexa podcast skill

Disclaimer: I’m a software engineer at Amazon, but everything expressed in this post is my own opinion. 

Photo by Andres Urena

What am I building?

Essentially I’m building a podcast skill for the Church I attend (Kingsgate Community Church). The aim is for people to be able to ask Alexa to play the latest sermon and have the device play back the audio. That way people can keep up-to-date if they missed a week at Church.

What I’m building would be easy to adapt (if you just wanted a generic podcast skill) and I’m keeping the code for the back-end on my public Github page.

Building the skill

Alexa skills are pretty simple to create if you use AWS (although you’re not restricted to this). There are two main parts to building a skill

  1. Configure the skill and define the voice commands (called an interaction model)
  2. Create a function (or API) that will provide the back-end functionality for your features

Configuring the skill

Creating a skill is pretty simple: visit the Alexa Developer Portal and click on “create a skill”, where you’ll be asked some key questions.

  1. Skill type – For my podcast skill I’m using a custom skill so I can control exactly what commands and features I want to support (learn more about skill types)
  2. Name – This is what will be displayed in the skill store so it needs to be clear so that people can find your skill to enable it.
  3. Invocation Name – This is the word (or phrase) that people will say when they want to interact with your skill. If you used the invocation name “MySkill” then users would say “Alexa, ask MySkill…”. It doesn’t matter if someone else is using the same invocation name since users have to enable each skill they want to use. You should check the invocation name guidelines when deciding what to use.
  4. Global fields – The skill I’m building will support audio playback so I tick “yes” to use the audio player features.

Voice Commands (a.k.a the interaction model)

This is one of the parts that people struggle with when creating Alexa Skills. What you are really designing here is the interface for your voice application. You’re defining the voice commands (utterances) that users can say, and what commands (intents) these will be mapped to. If you want to capture a variable / parameter from the voice command, these can be configured into something called a ‘slot’. I find it easiest to think of intents like I would REST API requests. You configure the voice commands that will trigger an intent / request and then your skill handles each different intent just like you would handle different API requests if they came from a different button click command.

The developer portal has a great new tool for managing the interaction model. You can see from the screenshot on the right that I have a number of different intents defined. Some of these are built-in Alexa intents (like AMAZON.Stop) and I have some custom intents too. The main two intents I’ve configured are

  1. PlayLatestSermon – Used to fetch the latest sermon from the podcast RSS feed and start audio playback. Invoked by a user saying “Alexa, ask Kingsgate for the latest sermon”
  2. SermonInfoIntent – Gives details of the podcast title, presenter name, and publication date. Invoked by a customer saying “Alexa, ask Kingsgate for the latest sermon”

Adding an intent is as simple as clicking the add button, selecting the intent name, and then defining the utterances (voice commands) that users can say to trigger that intent. Remember: for custom intents the user will have to prefix your chosen utterance with “Alexa, ask <invocation name>…”; for built-in Alexa intents (like stop, next, shuffle) the user doesn’t have to say “Alexa, ask…” first. It’s important to think about this as your user interface and pick utterances that users are likely to say without too much thought. You don’t want people to have to think about what to say, so make sure you give lots of variations of phrasings. When you’re done you’ll need to click save and also build the model.
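For reference, a custom intent in the interaction model JSON ends up looking something like this (the sample phrasings here are illustrative; the real list should cover far more variations):

```json
{
  "name": "PlayLatestSermon",
  "slots": [],
  "samples": [
    "the latest sermon",
    "for the latest sermon",
    "to play the latest sermon"
  ]
}
```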

Creating the function

For my skill I’m using an AWS Lambda Function, which is a great way of publishing the code I want to run without having to worry about server instances, configuration or scaling etc.

Creating the function is simple: just log into the AWS console, go to Lambda, and click create function. Pick the name and runtime (for my function I’m using Node.js 6.10, but you can use Go, C#, Java, or Python). Once you’ve created the function it will automatically be given access to a set of AWS resources (logging, events, DynamoDB etc). You’ll need to add a trigger (which defines the AWS components that can execute the Lambda function); since this is an Alexa skill I selected ‘Alexa Skills Kit’. If you click on the box you’ll be given the option to enter the ID of your Alexa skill (which is displayed in the Alexa Developer Portal). This adds extra protection to make sure only your Alexa skill can execute the function. I’ve also given access to CloudWatch Events, but I’ll cover this in an upcoming post about automated Lambda monitoring.

The code for the lambda function is split into three main parts

  1. The entry point
    This sets up the Alexa SDK with your desired settings and also registers the handlers for different intents you support in different states. You can see the full entry code on Github.

    exports.handler = (event, context, callback) => {
        var alexa = Alexa.handler(event, context);
        alexa.appId = constants.appId; // Set your Skill ID here to make sure only your skill can execute this function
        alexa.dynamoDBTableName = constants.dynamoDbTable; // The DynamoDB table stores state (everything you set in this.attributes[], on a per-user basis)
        // Register the handlers you support, then hand control to the SDK
        alexa.registerHandlers(stateHandlers.startModeIntentHandlers, stateHandlers.playModeIntentHandlers);
        alexa.execute();
    };
  2. The handlers
    The handler code defines each intent that is supported in different states (“START_MODE”, “PLAY_MODE” etc). You can also have default code here for unhandled intents. This is a simplified version of the START_MODE handlers; you can see the full version on Github.

    var stateHandlers = {
        // Handlers for when the Skill is invoked (we're running in something called "START_MODE")
        startModeIntentHandlers : Alexa.CreateStateHandler(constants.states.START_MODE, {
            // This gets executed if we encounter an intent that we haven't defined
            'Unhandled': function() {
                var message = "Sorry, I didn't understand your request. Please say, play the latest sermon to listen to the latest sermon.";
                this.emit(':ask', message, message);
            },
            // This gets called when someone says "Alexa, open <skill name>"
            'LaunchRequest' : function () {
                var message = 'Welcome to the Kingsgate Community Church sermon player. You can say, play the latest sermon to listen to the latest sermon.';
                var reprompt = 'You can say, play the latest sermon to listen to the latest sermon.';
                this.emit(':ask', message, reprompt);
            },
            // This is when we get a PlayLatestSermon intent, normally because someone said "Alexa, ask <skill name> for the latest sermon"
            'PlayLatestSermon' : function () {
                // Set the index to zero so that we're at the first/latest sermon entry
                this.attributes['index'] = 0;
                this.handler.state = constants.states.START_MODE;
                // Play the item
            }
            // ... rest of handler code
        })
    };
  3. The controller
    The controller is called by the handlers and takes care of interacting with the podcast RSS feed, calling the audio player to start playing the podcast on the device etc. The code is too long to show here so I’d suggest looking at the controller code in the Github repo.

Once you’re happy you can upload your skill code (or use the in-line editor). Since I have a few npm dependencies I zipped my function (including the node_modules folder) and uploaded it. You’ll also need to give the name of the handler that should be executed when the function is invoked (for mine it’s index.handler).

You can then edit the configuration of your skill and point it to the ARN of your lambda function. You don’t have to publish the skill to start using it yourself, as long as your Alexa device is using the same account as your Alexa developer account, then you’ll be able to test the skill on your own device.

If unit testing is the cake, functional testing is just the icing

I’ve heard a lot of different view points on software testing over the years. Especially when it comes to unit testing. Some people swear by it and treat code coverage metrics like it’s their greatest achievement in life (maybe it is). Others think it’s an unnecessary burden. I’ve heard arguments like “how can you refactor code if you have to update all of your tests each time” and “how can you refactor code if you don’t have sufficient unit test coverage” or “you’re flying by the seat of your pants without good coverage” vs “your tests are just a burden, you’re slowing me down!”.

I’d like to give a view from the trenches, as it were. Albeit the view from my trench, working on a REST web service.

Some background

Before I begin I’d like to clarify what I mean by “unit test” and “functional test”.

  1. Unit testing – This is breaking down the application code into individual units that can be tested in isolation at a code level.
  2. Functional testing – This is a higher-level, black-box test (normally from a user’s point of view) where the test doesn’t care about how the application functions internally. In this context, functional tests are REST API calls simulating user behaviours and flows.

Over the past 18 months we’ve tried to balance our approach to testing. To do this we have taken the view that the end user is the most important piece of the puzzle. So we should focus our testing efforts on user scenarios. This means more functional tests, and less focus on unit testing.

The problem

For quite a while this approach worked well. We have suites of functional tests that make REST API calls to our web service and check the responses coming back. The tests walk through specific scenarios (creating user accounts, configuring the state of user accounts, testing scenarios with these accounts etc) and assert that the system behaves as expected. If something breaks (you change the location of a field, something that was an integer in the response is now a string, or a field is removed from the response), then the test fails.

Fast forward 18 months. We now have several hundred of these tests. They run every time code in the “dev” branch is updated (for us that means every time a feature branch is merged in and the code is pushed to our test environment). “That’s good, right?” Well, here are some downsides we’ve seen:

  1. Tests are slow! – These kind of functional tests aren’t quick to execute, they are testing real systems with complex logic behind them.
  2. They are dependent on things I don’t care about – Should I really care if our user account team has a server down in our test environment? Should it block my work? What if the storage platform is down? In this instance if my test can’t create a user account then it fails and I have to investigate why.
  3. They test my colleagues’ work – Remember when I said these tests run every time code is merged in from feature branches? That means they are testing not only my work but other people’s work too. If I’m testing that value A = B and someone else is working on a feature where A = B+C, then we have a race to see who gets there first with both code changes and test changes (granted, this could be avoided by managing the work, but it’s not always so simple in the real world).
  4. They are slow! – I’ve said it twice because it’s important. When testing changes I want to be confident that my changes work as expected, but I also want to be sure that I haven’t broken anything. Functional tests often run late in the development process (they need to run on a test environment after the code is built & deployed) which means you get feedback late too. If you did break something, finding out at this stage normally means you’ve broken your service / app for someone else in the testing environment.

The solution

I honestly think that both unit testing and functional testing have their place in web app development. It’s critical to test your application from the inside (via unit tests) and the outside (via functional tests). After all, your users are the most important thing so you have to test that they get the behaviour they expect.

However, a lot of people ignore unit testing, or at least treat it like second class citizen. Maybe it’s because they associate it with TDD or other approaches they dislike. Maybe it’s a lack of knowledge of how to write testable code (dependency injection, inversion of control, TDD all help with this).

Personally, I think that unit testing should be the primary focus of your testing efforts. Why?

  1. Unit tests are fast (well, if written properly) – Yes, it’s possible to write slow unit tests, but, if you’re breaking your code down into small individual units and writing testable code in the first place, then the tests should be quick to execute.
  2. Feedback is isolated – Unit tests run against my local changes, so if a test breaks I know it’s almost certainly because I’ve broken something and I need to fix it. No need to go on a wild goose chase only to find out it’s someone else’s change.
  3. You can refactor with confidence – I can look at code coverage reports and other metrics, assess whether the code I’m going to refactor has test coverage, and then make changes with a higher degree of confidence. If something breaks I get feedback straight away.
  4. They enforce good patterns and practices – Just like it’s possible to write a slow test, it’s also possible to write a monolithic test to test monolithic code. Unit testing makes you think about how you’re going to test your code (like how can I test what happens if dependency X throws an exception). Unit testing nudges you towards practices like dependency injection, inversion of control (etc) all of which makes code easier to maintain and understand. Everyone wins.

Having a solid set of unit tests means you catch issues early and can refactor with confidence, leaving the functional tests there as a safety net rather than first line of defence. Like I said in the title, unit tests should really be the cake, and functional tests are just the icing.

Progress on the Kingsgate Media Player

Just a quick update on the progress of the Kingsgate media player. Lots of things have been completed, the main feature being that video playback is now supported! Here’s a video showing the app in action on my Amazon FireTV.

There have been a lot of other internal changes too, including:

  1. Implemented dependency injection using Dagger 2
  2. Added automated builds using Travis-CI
  3. Added new HttpClient, HttpRequest and HttpResponse objects using DI

Lots more to come!

Solving a problem, with technology

I’ve been attending a fantastic church, Kingsgate, for over a year now. The level of technology that the church uses is excellent, but there is one problem.


For ages podcasts have been a pain to use. Some platforms make it easy, like iOS, which has a dedicated podcast app where you can search for Kingsgate and find the podcast. It’s not a bad experience, but it’s not great either. For one, you only get the audio feed, not the video feed. The logo is also missing and it just looks pretty crappy.

My main issue is that I want to watch the video of the sermons from Church not just listen to the audio. I have a fairly nice TV and I can stream a tonne of great programs from Netflix and Amazon Prime, and BBC iPlayer. So why is it so hard to watch the sermons from church on my TV?

I know there are RSS apps for a lot of different platforms (even the Amazon Fire TV) but I’m not really the target audience. What about my parents, how can they watch the podcast of the sermons on their TV:

  1. Go to the app store of choice
  2. Search for RSS (no “dar-ess”, the letters “R”,”S”,”S”)
  3. Now install one (if you’re lucky there is only one and you don’t have to make a choice)
  4. Now go to settings, and remove any default subscriptions (CNN, BBC etc)
  5. Click add subscription
  6. Now type “http://”
  7. Give up

There must be a better way…

I have an Amazon Fire TV, which is essentially an Android box with a better interface. So I’ve decided to create an app that lets you easily access the sermon podcasts on the Amazon Fire TV.


Right now this is totally unofficial, and in no way linked to Kingsgate. The app is currently fetching a list of available sermons. The next steps are:

  1. Full-screen media playback
  2. Keeping track of played / un-played sermons
  3. Porting to iOS (iPad, Apple TV) and Windows 10 (desktop, tablet, Windows Phone and Xbox One)

All of the code for this is on GitHub so if you want to help out then feel free to fork the repo and send a pull request!

“Could not load file or assembly” in Azure Web Role

Every now and again you hit what seems like a simple problem, only to dig deeper and find it’s not as straightforward as first thought… Today was one such time. A simple upgrade of an Azure application to the latest version of the Azure SDK (2.8 at the time of writing). Everything was updated, compiled, tested, and running locally. Then deployed to Azure and… bang!


The full error reads:

Could not load file or assembly ‘Microsoft.WindowsAzure.Diagnostics, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35’

Surely this should have affected the app when running locally too, and even if it didn’t it’s simple to fix. Just add a binding redirect (in this case in my web.config file).

<dependentAssembly>
  <assemblyIdentity name="Microsoft.WindowsAzure.Diagnostics" publicKeyToken="31bf3856ad364e35" culture="neutral"/>
  <bindingRedirect oldVersion="" newVersion=""/>
</dependentAssembly>

Simple right… Not quite.

Looking deeper at the stack trace reveals why.


When you’re using IIS (like we are) the startup process for your app is handled by WaIISHost.exe, not w3wp.exe (aka the worker process). This means that the web.config file isn’t applicable and the binding redirect is useless here.

There are various blog posts and Stack Overflow questions/answers about this. A lot of them mention adding an App.config file or a WaIISHost.exe.config file. Neither of these worked with the full IIS hosting model and Azure SDK 2.8.

Solved: I had to add a file <WebProjectName>.dll.config and set it to “Copy always”. Once that was packaged and deployed, the app started as normal and the binding redirect worked fine. For reference, here are the contents of my <WebProjectName>.dll.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.WindowsAzure.Diagnostics" publicKeyToken="31bf3856ad364e35" culture="neutral"/>
        <bindingRedirect oldVersion="" newVersion=""/>
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
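As for the “Copy always” setting itself, it corresponds to a project-file entry along these lines. This is a hypothetical .csproj fragment; the file name is a placeholder for your web project’s actual assembly name:

```xml
<!-- Hypothetical .csproj fragment: ship the renamed config file with the package.
     Replace WebProjectName with your web project's assembly name. -->
<ItemGroup>
  <Content Include="WebProjectName.dll.config">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </Content>
</ItemGroup>
```

Without this, the file builds locally but never makes it into the deployed package, and the redirects silently stop applying.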

Hopefully this helps anyone else who finds themselves in a similar position, with all of the binding redirects in place but still not working when deploying to Azure.

Time for a spring clean

It’s been far too long since I last looked at this blog but I decided it’s about time for a quick facelift and a bit of TLC. If you’re reading this via a feed reader then head over to the blog and take a look at the new theme.

The theme is PixelPower, which makes nice use of both HTML5 and CSS3 to produce a responsive design that works well on mobile devices (so I’ve been able to do away with WPTouch) while keeping the site accessible.

Giving the blog a facelift was the easy part. Now I need to actually get some time to write some posts!

Stunnel & Apache (Invalid method in request \x80g\x01\x03)

Here’s a really quick post about an issue I encountered recently when using stunnel to connect through to Apache via HTTPS. I set up the connections and then tried to view the endpoint using ‘links’ (https://localhost) and received an SSL error. The Apache logs listed:

Invalid method in request \x80g\x01\x03

The stunnel config that I was using looked something like this

connect=someserver.com:1234 # Apache SSL listening on a non-standard port

It turned out to be a really simple fix. Because I was connecting to stunnel using SSL, the request was encrypted by my browser and then encrypted again by stunnel. At the other end, stunnel decrypted its own layer, leaving Apache with my browser-encrypted data, which it couldn’t understand.

The fix was to simply change the config to:

connect=someserver.com:1234 # Apache SSL listening on a non-standard port

Then test the connection via ‘links http://localhost’ and let stunnel handle the encryption and certificate negotiation on its own.
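
For reference, the working setup looks something like this as a client-mode stunnel config. This is a sketch under assumptions: the service name and local port are illustrative, and someserver.com is the placeholder from above:

```
# Hypothetical client-mode stunnel config: the browser speaks plain HTTP
# to stunnel on localhost, and stunnel alone performs the TLS handshake.
[apache-tls]
client = yes
accept = 127.0.0.1:80
connect = someserver.com:1234  # Apache SSL listening on a non-standard port
```

With this in place, ‘links http://localhost’ travels through exactly one layer of encryption, which is what Apache on the far side expects.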
