Cross browser testing with LambdaTest Automate using .NET Core and xUnit

I was recently trying out the cross-browser testing capabilities of LambdaTest, which provides automated testing in all major browsers on a scalable, cloud-based Selenium grid. The platform frees you from the headache of administering an in-house Selenium grid and keeping multiple devices connected, updated and running on demand. With one simple subscription, you get access to live interactive testing to support your exploratory testing, as well as reliable, repeatable automated Appium and Selenium testing. Being a cloud-based service brings other benefits too, such as the ability to parallelise automated tests and thereby shorten test cycles, and to perform localisation testing of your app by executing your tests from global locations across more than 27 countries.

For automation testing, LambdaTest supports Java, JavaScript, Python, Ruby and PHP, but coming from a C# background I wanted to augment the example documentation provided by LambdaTest on getting started executing C# tests with Selenium. So I have put together an example solution and made this available on GitHub. You can clone this project or browse the GitHub code to see how it’s done.

I still think it’s pretty incredible being able to log in to a cloud platform like LambdaTest and be able to watch videos of your UI tests being fired up and verified in multiple browsers. To see the example code in action, you’ll need to:

  1. Sign up for a free LambdaTest account and log in.
  2. You will need your LambdaTest credentials – username and accessKey – to see your tests’ execution. So on your LambdaTest dashboard, click on ‘Automation’ in the left navigation bar. In the top-right corner click on the key icon to retrieve your credentials, as shown in Figure 1 below.
  3. Get a copy of the example code from the GitHub repo, and replace the remoteUserName and remoteAccessKey within the TestConfiguration class with the above fetched credentials.
  4. Ensure that the isRemoteTestingSession boolean within the TestConfiguration class is set to true. Otherwise, your tests will start spawning local browser instances.
  5. You should then be able to compile the code and run the tests in the ToDoAppTests class.
Figure 1 – Automation credentials

You can also get your username and accessKey from your profile section.
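Under the hood, those credentials end up in the remote driver session. A minimal sketch of how they might be wired up (the hub address is LambdaTest's public Selenium endpoint; the capability names and placeholder values here are illustrative, not lifted from the repo):

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Remote;

public static class LambdaTestDriverFactory
{
    // Placeholder credentials - replace with the username and accessKey
    // retrieved from your LambdaTest dashboard.
    private const string UserName = "YOUR_USERNAME";
    private const string AccessKey = "YOUR_ACCESS_KEY";

    public static IWebDriver Create()
    {
        var options = new ChromeOptions();
        // "user" and "accessKey" are the capability names used by the
        // LambdaTest grid at the time of writing.
        options.AddAdditionalCapability("user", UserName, true);
        options.AddAdditionalCapability("accessKey", AccessKey, true);

        return new RemoteWebDriver(
            new Uri("https://hub.lambdatest.com/wd/hub"),
            options.ToCapabilities(),
            TimeSpan.FromSeconds(60));
    }
}
```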

The rest of the article demonstrates the example given in the LambdaTest documentation using my GitHub project. Like the original documentation, this project also uses the sample To-Do List app available on the LambdaTest GitHub account as the application under test.

xUnit as the unit test framework

The first thing I did was to create a new xUnit test project in .NET Core 3.1, and bring the LambdaTest C# example into a test method within this project. Rather than using NUnit, I adapted the code to use xUnit. They're both great unit test frameworks, but I wanted to show how it could be done in an alternative framework for comparison. The test class itself is called ToDoAppTests.cs and it contains a few test methods. The first, VerifyPageTitle, is the simplest, as it just checks the page title is as expected:

[Theory]
[MemberData(nameof(GetTestData))]
public void VerifyPageTitle(TestBrowser browser)
{
    var testConfiguration = new TestConfiguration(browser);
    var driver = testConfiguration.GetDriver();
    var page = new ToDoAppPage(driver);
    Assert.Equal("Sample page -", page.PageTitle);
}

There is also a test called AddAndVerifyToDoItem which interacts further with the app. Using the xUnit MemberData attribute allows each test method to be executed multiple times – once for each target browser – by passing an enum representing the browser into the test method.

public static IEnumerable<object[]> GetTestData()
{
    return new List<object[]>
    {
        new object[] { TestBrowser.Chrome },
        new object[] { TestBrowser.InternetExplorer }
    };
}

Implement Page Object Model (POM)

Whilst this could be considered over-engineering for a simple getting-started example, the Page Object Model design pattern is nonetheless good practice when it comes to UI test automation. I created a very simple ToDoAppPage class within my test project and gave this class the responsibility of page-level UI interactions. This way, you don't end up with selectors and other page-specific nastiness interrupting the readability of your tests.

The methods of the ToDoAppTests class create instances of this ToDoAppPage class as required in order to carry out interactions with the UI.
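As a sketch of what that page object might look like (the element selectors and method names here are illustrative, not copied from the repo):

```csharp
using OpenQA.Selenium;

// A minimal page object for the sample To-Do app. It owns all
// page-level selectors so the tests themselves stay readable.
public class ToDoAppPage
{
    private readonly IWebDriver _driver;

    public ToDoAppPage(IWebDriver driver)
    {
        _driver = driver;
    }

    public string PageTitle => _driver.Title;

    // Illustrative interaction: add a new to-do item via the input box.
    public void AddToDoItem(string text)
    {
        _driver.FindElement(By.Id("sampletodotext")).SendKeys(text);
        _driver.FindElement(By.Id("addbutton")).Click();
    }
}
```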

DesiredCapabilities and browser specific options

In order to control the remote Selenium driver in your code you have two options.

Firstly, DesiredCapabilities is a Selenium class which encapsulates a series of key/value pairs representing aspects of browser behaviour. In the case of the LambdaTest platform this gives you access to a number of specific platform capabilities, for example:

  • Lambda Tunnel – connects your local system with LambdaTest servers via SSH based integration tunnel, enabling testing of locally-hosted pages and applications
  • Network throttling – reduces network bandwidth to simulate how your application responds when accessed over slow, high-latency networks
  • Geolocation – check whether your users see your website as intended when it is accessed from different geographic locations
  • Headless browser testing for running tests without a UI
  • Screenshot and video capture during test execution, for reviewing within the LambdaTest dashboard
  • Network – Not to be confused with network throttling mentioned above, the network capability generates network logs for low-level diagnostics of network interactions
  • TimeZone – Configure tests to run on a custom time zone
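As an illustration, a handful of these can be set through the Selenium DesiredCapabilities class. The capability names below follow LambdaTest's documentation at the time of writing, so treat them as indicative rather than definitive:

```csharp
using OpenQA.Selenium.Remote;

public static class LambdaTestCapabilities
{
    public static DesiredCapabilities Build()
    {
        var capabilities = new DesiredCapabilities();
        capabilities.SetCapability("browserName", "Chrome");
        capabilities.SetCapability("platform", "Windows 10");

        // LambdaTest-specific capabilities
        capabilities.SetCapability("tunnel", true);          // route traffic through Lambda Tunnel
        capabilities.SetCapability("network", true);         // capture network logs
        capabilities.SetCapability("video", true);           // record a video of the session
        capabilities.SetCapability("timezone", "UTC+00:00"); // run on a custom time zone

        return capabilities;
    }
}
```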

LambdaTest provide an intuitive capabilities generator, an online tool which interactively generates DesiredCapabilities code in your choice of 6 languages based on your selections in the UI. The screenshot below shows an example of C# code generated with many of the capabilities enabled.

Over time, the Selenium project has signalled an eventual move away from DesiredCapabilities – its use was deprecated in version 3.14 of the C# Selenium bindings – in favour of browser-specific options classes. Within the TestConfiguration class of my project, I have shown how this could be done by implementing a number of private methods to set up the appropriate driver options depending on which browser is being used. As mentioned earlier, the target browser is determined at the point the class is instantiated within a test method, by passing the appropriate browser enum into the constructor, thereby allowing each test to be executed against specific browsers as required.
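A sketch of that per-browser options approach (the class, enum and capability values here are illustrative rather than lifted from the repo):

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.IE;

public enum TestBrowser { Chrome, InternetExplorer }

public static class BrowserOptionsFactory
{
    // Returns the options class appropriate to the requested browser.
    // TestBrowser is the enum passed into each test method.
    public static DriverOptions For(TestBrowser browser)
    {
        switch (browser)
        {
            case TestBrowser.Chrome:
                var chromeOptions = new ChromeOptions();
                chromeOptions.AddAdditionalCapability("platform", "Windows 10", true);
                return chromeOptions;
            case TestBrowser.InternetExplorer:
                var ieOptions = new InternetExplorerOptions();
                ieOptions.AddAdditionalCapability("platform", "Windows 10", true);
                return ieOptions;
            default:
                throw new ArgumentOutOfRangeException(nameof(browser));
        }
    }
}
```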

Switch between remote and local drivers

If you hit a bug while running in LambdaTest cloud, the last thing you want to be doing is re-writing your code to replace the RemoteWebDriver class with a local ChromeDriver or other specific browser class in order to step through and debug while running locally.

The TestConfiguration class also includes the boolean value isRemoteTestingSession to indicate whether the tests should be run using local browsers or the remote driver. Depending on how you’re running your tests, you may want to set this using a configuration file or pass it in as a command-line parameter to your tests, for example from your CI/CD server. LambdaTest has a series of helpful articles that show how to integrate LambdaTest into your CI/CD platform of choice. If you’re just trying out the example project you’ll need to simply toggle the remote testing flag in code.
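The switching logic itself can be as simple as the sketch below (the TestConfiguration class in the repo does something similar; the hub URL and timeout are assumptions):

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Remote;

public class TestConfigurationSketch
{
    // Flip this to false to debug against a locally installed Chrome.
    private readonly bool _isRemoteTestingSession = true;

    public IWebDriver GetDriver(DriverOptions options)
    {
        if (_isRemoteTestingSession)
        {
            // Remote session against the cloud grid, with credentials
            // already baked into the capabilities.
            return new RemoteWebDriver(
                new Uri("https://hub.lambdatest.com/wd/hub"),
                options.ToCapabilities(),
                TimeSpan.FromSeconds(60));
        }

        // Local fallback for stepping through tests in the debugger.
        return new ChromeDriver();
    }
}
```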

LambdaTest Dashboard

Once you’ve executed the tests against LambdaTest you can log in to the dashboard and review the results of your tests. The UI of the testing dashboard is another area where LambdaTest really shines and makes it straightforward to pinpoint and correct issues in your tests.

Within the ‘Automate’ section, there are 3 main navigation areas. Firstly, the timeline view shows a list of test execution sessions. Note that there are filters across the top of this list to filter the results by date, user, build and status to more quickly refine the tests you want to review. You can also programmatically specify custom tags to group your Selenium tests:
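Tagging is done through a capability. For example (the "tags" and "build" capability names follow LambdaTest's documentation at the time of writing, but verify against the current docs):

```csharp
using OpenQA.Selenium.Chrome;

public static class TaggedOptions
{
    public static ChromeOptions Build()
    {
        var options = new ChromeOptions();
        // Group related Selenium sessions so they can be filtered
        // in the timeline view of the dashboard.
        options.AddAdditionalCapability("tags", new[] { "smoke", "todo-app" }, true);
        options.AddAdditionalCapability("build", "Nightly regression", true);
        return options;
    }
}
```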

The results are grouped by build as standard, and clicking into each of these takes you across to the second navigation area, automation logs, which provides further detailed analysis of each test outcome, including the ability to watch recordings of the tests, review network requests/responses and view test logs, if you've specified those capabilities in your test options. Note also that the ‘Create Issue’ button provides one-click bug logging via integration with JIRA, GitHub, Trello and other popular tracking platforms:

Finally, the Analytics option provides metrics on your test execution and can be customised to include specific graphs and to filter to the time periods you're interested in:

Automated acceptance testing with TeamCity and BrowserStack

I’ve previously posted articles on setting up Continuous Integration using TeamCity and Octopus Deploy. With the build and deploy aspects of the delivery pipeline running smoothly, I’ve since turned my attention to the problem of automated acceptance testing, and how to integrate this practice into our workflow with existing test automation tooling.

The Goals

  • To have automated acceptance tests execute automatically after each deployment
  • To ensure automated acceptance tests are flexible enough to execute on any environment
  • To avoid installing and maintaining a local on-premise infrastructure of test devices and browsers by using a scalable cloud-based test grid platform (in our case BrowserStack)
  • To allow developers and testers to run and debug these same tests locally using their own installed browsers.

This last point is to allow the identification of defects earlier in the application lifecycle, and resolves the challenge of trying to debug tests when you can’t run them locally. In Chapter 5 of Continuous Delivery – Reliable Software Releases Through Build, Test and Deployment Automation, the authors recommend that:

[…] developers must be able to run automated acceptance tests on their development environments. It should be easy for a developer who finds an acceptance test failure to fix it easily on their own machine and verify the fix by running that acceptance test locally.

Before we go any further, if you’ve never done any automated UI testing before then this post probably isn’t the right place to start. The basics are covered in the other sources I’ve linked to throughout the post, so you probably want to make sure you’re comfortable with those before following this through.

A Starting Point

BrowserStack provide some basic code to get you started with the Automate platform. It’s really useful as a starting point, but that’s as far as it goes; the sample code has a lot of stuff hard-coded and as a result doesn’t give enough flexibility to meet the goals I’ve outlined above. I wanted to build on this example to meet my goals, whilst also applying separation of concerns, so that test configuration isn’t dealt with in the same method as web driver instantiation, the actual test steps, tear-down and everything else.

The Tools

In addition to TeamCity and Octopus Deploy, there are some extra tools I’m using to achieve our goals.

  • NUnit. Our UI tests are developed within a Visual Studio project, using C#. The test grouping, setup and execution is orchestrated by NUnit, which can be used as a test framework not just for unit tests, but for other layers of testing too. It can wrap around unit, integration and component tests all the way up to UI tests, which are the focus of this post.
  • Selenium WebDriver. The actual tests themselves are then constructed using the Selenium WebDriver library in order to fire up a browser and interact with pages and their components.
  • Page Object Model. Not quite a ‘tool’ in the same way the others are, but this common pattern for UI automation is so useful I wanted to call it out in isolation. There’s some good starter guides on how to implement it over at SW Test Academy and also in the Selenium docs themselves.
  • IMPORTANT! Be wary of any articles which mention PageFactory methods when you’re dealing with the .NET implementation of WebDriver. As of March 2018, the intention is to deprecate support for PageFactory in the .NET bindings.
  • Autofac. To satisfy the goal of switching between the cloud based testing grid and a local browser I’m also using the IoC container Autofac in order to swap in the appropriate WebDriver instance that we require. This is what allows us to switch between local and remote testing. It doesn’t have to be Autofac; you could achieve the same thing with other DI containers, I just chose this tool because it was already being used in the main software project being tested.
  • BrowserStack Automate. You can set up a Selenium grid of devices in your own organisation if you choose to, but I think the effort it would take to maintain doesn’t make sense when compared to the subscription cost of BrowserStack Automate, which gives you instant access to thousands of combinations of platforms and devices. The BrowserStack docs describe how to dispatch instructions to their remote grid via the RemoteWebDriver class, all within the context of NUnit tests – perfect for our requirements.
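The shape of that dispatch is roughly as follows – a sketch based on BrowserStack's documented "browserstack.user"/"browserstack.key" capabilities, with placeholder credentials and illustrative browser/OS values:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;

public static class BrowserStackDriverFactory
{
    public static IWebDriver Create()
    {
        var capabilities = new DesiredCapabilities();
        capabilities.SetCapability("browserstack.user", "YOUR_USERNAME");
        capabilities.SetCapability("browserstack.key", "YOUR_ACCESS_KEY");
        capabilities.SetCapability("browser", "Chrome");
        capabilities.SetCapability("os", "Windows");

        // Sessions are executed on BrowserStack's cloud grid via this hub URL.
        return new RemoteWebDriver(
            new Uri("https://hub-cloud.browserstack.com/wd/hub"),
            capabilities);
    }
}
```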

The Test Framework

I think this sort of stitching together of the tools I’ve described must be unusual, as I couldn’t find many other sources online which solve these problems elegantly. We already had a web project in a Visual Studio solution containing the code for the web application our development team were working on, as well as a unit test project. Alongside these I added another project for our Web UI tests, and added NuGet references to Autofac and NUnit.

The Base Test Interface

Within this project, I wanted to have a set of tests which used the Page Object Model pattern, so I created a “PageObjects” folder to represent the pages in the application, and then a set of test classes in the root of the project, such as “SearchTests.cs” and “NavigationTests.cs”. Each test fixture looks something like the code below (abbreviated to just show class definition, constructor and one test method):

public class NavigationTests : IBaseTest
{
    public IWebDriver Driver { get; set; }
    public IConfigurationReader ConfigurationReader { get; set; }
    public Local BrowserStackLocal { get; set; }
    public string CapabilityProfileName { get; set; }

    public NavigationTests(string capabilityProfile)
    {
        CapabilityProfileName = capabilityProfile;
    }

    [Test]
    public void VerifyTopLevelNavigation()
    {
        var page = new HomePage(Driver);
        // ... assertions against the page ...
    }
}

Every test fixture implements this IBaseTest interface, to ensure a uniform set of properties is exposed:

public interface IBaseTest
{
    string CapabilityProfileName { get; set; }
    IWebDriver Driver { get; set; }
    IConfigurationReader ConfigurationReader { get; set; }
    Local BrowserStackLocal { get; set; }
}

The idea of the interface is to allow each test fixture to draw upon functionality from a set of services at runtime, such as the particular WebDriver instance we want to use. I’d initially done this using an inherited “BaseClass”, but I started to run into problems with that approach and wanted to favour composition over inheritance, which is where the interface came in.

The two main things to draw attention to are:

  1. The CapabilityProfileName, which gets set in the constructor of each fixture using NUnit’s TestFixture attribute. This means we can run each fixture multiple times using different device configurations.
  2. The ConfigureDependencyContainer attribute decorating the IBaseTest interface. This attribute makes use of one of the bits of magic glue which comes for free with NUnit and makes this whole test layer hang together: Action Attributes.
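For example, a fixture can be declared once and run against several capability profiles via NUnit's parameterised TestFixture attribute (the profile names here are hypothetical):

```csharp
using NUnit.Framework;

// Each TestFixture attribute value is passed to the constructor,
// so the same tests run once per device configuration.
[TestFixture("Chrome_Win10")]
[TestFixture("IE11_Win7")]
[TestFixture("Safari_iPhoneX")]
public class NavigationTestsExample
{
    public string CapabilityProfileName { get; set; }

    public NavigationTestsExample(string capabilityProfile)
    {
        CapabilityProfileName = capabilityProfile;
    }
}
```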

The Action Attribute

Action Attributes are a feature of NUnit 2, but are still supported in NUnit 3, and they are designed to aid composability of test logic. Note that NUnit 3 introduced a new way of achieving similar composition called Custom Attributes, but personally I’ve found the documentation lacking on this newer approach, with fewer examples and a less clear rationale for using each of the interface types, so I stuck with Action Attributes for now.

The Action Attribute I’ve built here is specifically designed to be applied to our IBaseTest interface. By using the “BeforeTest” method, I can trigger logic which uses Autofac to fetch various dependencies such as the IWebDriver instance and inject them into my test fixture:

public class ConfigureDependencyContainerAttribute : TestActionAttribute
{
    public override void BeforeTest(ITest test)
    {
        var fixture = test.Fixture as IBaseTest;
        if (fixture != null)
        {
            // Set up the IoC container using the configuration module
            var builder = new ContainerBuilder();
            if (string.IsNullOrEmpty(fixture.CapabilityProfileName))
            {
                throw new ConfigurationErrorsException("The capability profile name must be set");
            }
            builder.RegisterModule(new AutofacConfigurationModule(fixture.CapabilityProfileName));
            var container = builder.Build();

            // Resolve the dependencies down through the object chain using the IoC container
            using (var scope = container.BeginLifetimeScope())
            {
                fixture.Driver = scope.Resolve<IWebDriver>();
                fixture.ConfigurationReader = scope.Resolve<IConfigurationReader>();
            }
        }
    }
}
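The attribute above only covers BeforeTest; a matching AfterTest override is the natural place to dispose of the driver once each test completes. A sketch (not code from the project) might look like:

```csharp
using NUnit.Framework;
using NUnit.Framework.Interfaces;
using OpenQA.Selenium;

public class TearDownSketchAttribute : TestActionAttribute
{
    public override void AfterTest(ITest test)
    {
        var fixture = test.Fixture as IBaseTest;
        // Quit closes the browser session and releases the grid slot.
        fixture?.Driver?.Quit();
    }
}
```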

The Autofac Configuration Module

Configuration modules in Autofac allow a set of services to be configured together. They’re really useful for packaging up a set of related dependency-injection code. I built a configuration module in line with the Autofac documentation, and when the Action Attribute above calls the constructor of this module it passes in a string representing the capability profile (as mentioned earlier, this was defined in the NUnit TestFixture attributes).

The “Load” method of the configuration module fetches a set of configuration from the test config file, and then registers the appropriate WebDriver with AutoFac, ready to be used during execution of the test fixture:

protected override void Load(ContainerBuilder builder)
{
    var configurationReader = new ConfigurationReader(_capabilityProfileName);

    var webDriver = GetWebDriver(configurationReader);

    TestContext.Progress.WriteLine($"Configuration is set to use {webDriver.GetType().Name}");

    // Register the driver and configuration reader so fixtures can resolve them
    builder.RegisterInstance(webDriver).As<IWebDriver>();
    builder.RegisterInstance(configurationReader).As<IConfigurationReader>();
}

private IWebDriver GetWebDriver(IConfigurationReader configurationReader)
{
    IWebDriverBuilder webDriverBuilder;
    if (configurationReader.UseRemoteDriver)
    {
        webDriverBuilder = new RemoteWebDriverBuilder(configurationReader);
    }
    else
    {
        switch (configurationReader.Browser)
        {
            case "Chrome":
                webDriverBuilder = new ChromeWebDriverBuilder();
                break;
            case "IE":
                webDriverBuilder = new InternetExplorerWebDriverBuilder();
                break;
            default:
                // Guard against unrecognised configuration values
                throw new ConfigurationErrorsException($"Unrecognised browser: {configurationReader.Browser}");
        }
    }

    return webDriverBuilder.BuildWebDriver();
}


There are a number of helper classes I’ve referred to in the code here which I haven’t described explicitly in full:

  • The ConfigurationReader, which is just a typed wrapper around app.config configuration values
  • The XxxxWebDriverBuilder classes, which are just “single responsibility” classes, each of which implements a “BuildWebDriver()” method which returns an IWebDriver instance for Chrome, Firefox, IE or whatever browser you want.
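As a sketch of the builder shape (the interface and class names follow the pattern described above; the Chrome argument is an illustrative example):

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

public interface IWebDriverBuilder
{
    IWebDriver BuildWebDriver();
}

// Single responsibility: knows how to construct a local Chrome driver.
public class ChromeWebDriverBuilder : IWebDriverBuilder
{
    public IWebDriver BuildWebDriver()
    {
        var options = new ChromeOptions();
        // Any Chrome-specific tuning lives here, isolated from other browsers.
        options.AddArgument("--start-maximized");
        return new ChromeDriver(options);
    }
}
```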

Triggering the UI Tests

Getting to this point was the hardest part. With all of that plumbing out of the way, the UI test project is built within our deployment pipeline, and a UI test .dll file pops out of the end of our commit stage, ready to be executed.

To execute it, I daisy-chained an extra build configuration in our pipeline to pick up the compiled dll and used the out-of-the-box NUnit runner in TeamCity to trigger the execution of the tests:


The whole process works well; tests are triggered by TeamCity following a deployment, sent up to Browserstack, executed against their cloud grid, and the results reported back in the TeamCity interface and log files.

Further details of the test run, including timings, log messages and even videos, are available within the BrowserStack web interface if they are needed for debugging and evidencing test execution.

Running multiple NodeJs versions in TeamCity on Windows


If you’re building Web applications using TeamCity, you’re probably going to want to execute your Front-End pipeline as part of the build. You’ve worked hard to perfect all that lovely unit testing, linting, CSS precompilation and so on in your Gulp* build, so this part of your software packaging process should be a first-class citizen in your automated build process.

*Most of what’s in this post probably applies equally to Grunt.

Don’t have a centralised build server set up? I strongly recommend you buy a copy of Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation to understand not only build automation, but further techniques to improve both speed and quality of your software releases.

In theory, this should be straightforward if you’re using TeamCity – it’s a powerful and flexible product. However, the first issue to understand (and one of the primary factors in what’s to follow) is that all of our back-end work is done on the Microsoft stack, and so our TeamCity agents are Windows Server VMs. Layered on top of that you then have Node, NPM, Gulp and all the various package dependencies that your particular pipeline and solution brings in.

The reality is that getting this all working definitely isn’t the well-trodden path that I expected. I’ve lost countless hours Googling, reading StackOverflow posts and mailing list discussions to get to the bottom of all manner of bugs, caveats and gotchas, so it felt right that I share my experiences.

Welcome to the Inferno…

Beginner Level: Fixed Node Version

So in the early days of our build process, things were nice and simple. We installed a particular version of NodeJs on all our TeamCity build agents. To accomplish this, we have a batch script which uses Chocolatey to install dependencies on agents. The following line from this script installs NodeJs v4.5.0, which in turn brings Node package manager (NPM) v2.15.9 along for the ride:

cinst nodejs -version 4.5.0 -y

With NodeJs and NPM installed as pre-requisites on all the agents, it was then fairly straightforward to add steps to our build configuration which invoke the Front-End pipeline. I’ve previously written about how we setup our TeamCity instance, and mentioned Eugene Petrenko’s TeamCity plugin for running Node.js, NPM, Gulp and Grunt.

So the front-end elements of our build process followed this sequence:

  1. Runner type: Node.js NPM. Command: “install”
  2. Runner type: Gulp*. Task command: “default”

*Again could be Grunt if that’s what you use. Ain’t nobody be judgin’ here. But be aware that if you are using Grunt, you’ll need the Grunt command line to be available to your build, either by adding an initial NPM runner step which does an “install grunt-cli”, or by installing it globally (Warren Buckley covered this approach in a helpful post whilst at Cogworks)

Note that the runner types used are generally the ones from Eugene Petrenko’s plugin that I mentioned – these worked well enough at this stage.

Intermediate Level: Variable Node Version

So after we’d been running many builds happily using the approach described above, we came to realise that NodeJs 4.5.0, released in August 2016, was getting rather out of date (version 9.x had been released in late 2017). Such is the pace of the front-end development ecosystem that new libraries being adopted within the industry required more recent versions of NodeJs, and yet we were tied to a hard dependency on 4.5.0 installed on our build agents.

Time for a rethink.

My front-end colleagues pointed me in the direction of Node Version Manager (NVM), which they use during local development in order to avoid a fixed NodeJs dependency. NVM comes in two flavours: the original NVM bash script, which runs on Unix variants (Mac/Linux), and NVM for Windows, developed in Go. One thing to mention is that there is not feature parity between these two tools. More on this later.

So we created a new TeamCity agent and installed the Chocolatey NVM for Windows package, so that our builds would be freed from the shackles of NodeJs v4.5.0:

cinst nvm -version 1.1.5 -y

With NVM installed, I attempted (sensibly, one might conclude) to use the NVM runner from Eugene Petrenko’s plugin, but was a little confused to find my builds wouldn’t run on the new agent, and was greeted with the explanation that NVM was an ‘unmet requirement’ – not just within my default pool of 2 existing agents (as expected), but also on my new agent shown at the top of this image:


Only after digging further did I then discover that this plugin has a known limitation which means it only detects NVM properly when running on Linux (at the time of writing there are currently several open issues on the GitHub repo about this).

Pro tip!

Here’s my top tip if you’re going through this same setup: make the first step in your build process a Command Line runner, and add the following commands into the “Custom script” box:

set /p nodeversion=<.nvmrc
nvm install %%nodeversion%%
nvm use %%nodeversion%%

Then, within your solution, include a .nvmrc file containing your target NodeJs version. Another crucial fact I learnt along this journey is that the Windows version of NVM doesn’t support the .nvmrc feature; the command-line script above is a “poor man’s” emulation of that capability. There are a few advantages to this approach. Firstly, you source-control your required NodeJs version alongside the code, and can version it accordingly in different branches; no TeamCity parameters are required. Secondly, your front-end build is one step more compatible between Windows and Mac.

After having problems with it, I decided to avoid the TeamCity Node plugin altogether for the other front-end steps, and switch to simply calling npm and Gulp from command-line runners. With the NVM switch successfully configured, our builds were no longer blocked by the false-negative issue. At each build execution they would adopt the correct NodeJs version, run npm install and Gulp steps, and finally call a Nuget Pack step in order to package up the software into something that could be distributed.

I thought I was home and dry. If you’ve made it with me this far, well done.

WTF Level: Am I the only one?

It wasn’t over yet. I was still trapped in the bloody battle between the Windows beast and the Node monster.

I kinda got a bit lost in the pits of gotchas of various NPM package dependencies running on Windows at this point. To be honest, I don’t think I kept a record of every problem I had to resolve – I certainly lost count of the various different blog posts I read and hours I burned. However to get an idea of the scale of the problems you’re facing here, it’s worth taking a look through the comments in this post.

This comment pretty much sums it up:


Zen Level: It’s not just me

Just when you’re about to claw your eyes out with a blunt pencil and wondering whether your entire career has been a mistake, you hit the GitHub post titled “Windows users are not happy”. Stumbling across this blog post was the equivalent of being on the side of an icy mountain for 2 weeks, running out of supplies, getting frostbite … and then falling through the door of a warm tavern full of welcoming and equally lost travellers, each with similar stories to tell.

The last pieces of the puzzle to get the whole process up and running were:

  1. Ensure the C++ build tools module has been installed as part of the Visual Studio installation you have on your build agent.
  2. Install Python 2 on your build agent. As per the linked post, it’s a dependency of node-gyp:

    choco install python2

  3. I got a very intermittent error message in my build logs which said Error: EPERM: operation not permitted, scandir. From all the comments left in the thread of this particular post, the fix which worked for me was to run an extra command line step of “npm cache verify” just before the npm install.

And that was that. Our builds went green, consistently.

Of course if you’re very lucky you may not encounter as many issues as I did, depending on the exact nature of what you’re building, how many front-end packages you’re making use of etc. But I hope this narrative serves as a useful map [here be dragons!] of some potential pain-points to shortcut the journey of anyone else setting up similar build pipelines.


Integrating TeamCity and Octopus Deploy

Around 18 months ago, we went through a significant shift in deployment pipeline tooling within our agency.

After 7 years of using an on-premise Team Foundation Server (TFS) instance for version control and builds, integrating with MSDeploy for deployment, I realised we needed to be doing more within our build processes, and there were better tools available for our needs. We went through a 2-month period of thinking deeply about our requirements and trialling different software in order to land at the tools which we felt were the correct long-term fit: Git, TeamCity and Octopus Deploy. In some senses coming to that decision was the easy part. The harder part came later: how do we configure those tools in the best way to support our development workflow? As we configured the tooling, this was also an opportunity to look again at our processes and identify ways in which the new tools allow us to improve quality.

If anyone’s going through an installation and configuration of these tools, here’s the steps we carried out, and some lessons learned along the way. There are a lot of links in here to other articles with more detailed information, but there weren’t many articles which pulled together everything into one place, which is what I wanted to cover here.

Installing the basics

There are plenty of resources available about how to install these products, so I’m not going to dwell on them here.

On the build agents there are some pre-requisites that we installed on each agent. Yours will vary according to the dependencies of the projects you’re building, but Git and Visual Studio were two dependencies that we installed which most people will probably also need. Knowing that we would likely require new agents in the future, we created a script which uses Chocolatey to install these extra pre-requisites.

For the Octopus Tentacles, you can either download and run the standard Tentacle installer on each target machine, or alternatively use the Chocolatey tentacle package. Regardless of which path you take, there’s a final step of configuring options for your tentacle, including the secure key it uses to handshake with your server. Again, we opted to script both the Chocolatey install step, and the configuration step.

Installing and Configuring Plugins

With the vanilla TeamCity and Octopus platforms installed and running, we then installed some extra plugins.

Connect TeamCity to BitBucket

We wanted a standardised connection from TeamCity to BitBucket. The only thing that would need to change from one solution to the next was the Git URL, so we created a new VCS root at <root> project level and connected it to a BitBucket account.

Note: You may have to open up SSH comms over port 22 from the TeamCity build server and agent in order to facilitate SSH access from TeamCity to BitBucket (otherwise, when testing the VCS Root connectivity, you get the error “List remote refs failed. Not authorised”).

  • In TeamCity, go to Administration -> <root> Project -> New VCS root and give your VCS root a suitable name
  • Create an Access Key (with read-only permissions) and upload the Public Key to BitBucket.
  • Store the Private Key in TeamCity (Administration -> <root> project -> SSH Keys)
  • Set the Authentication Method on the VCS root to “Uploaded Key” and in the Uploaded Key field select the one you have uploaded.
  • Parameterize the VCS root “Fetch URL” field with a repository URL to allow the VCS root to be used for multiple builds (“%system.RepositoryUrl%”)
  • Decide on your preferred branching strategy and configure VCS root Branch Specifications to monitor relevant branches according to your standard.

On the last bullet point, we chose to adopt GitFlow as our preferred branching strategy. We selected this as a convention within our agency because it was already an established and well-documented standard (no thanks, NIH!) and it was fairly straightforward for our team to get their heads around. Most of the solutions we implement for our clients are iterated on through a regular series of enhancements, fixes and larger projects. Releases are often approved via Change Board, and business priorities often change between releases, so GitFlow’s feature / release model works well for us.

One of the reasons it’s so useful to adopt a standard like this is that you can tell TeamCity to build any branch matching your standard. So if a developer creates a feature branch and pushes it to the remote, TeamCity will pick up on that and automatically build the branch. To support this, we configured our Branch Specification in the VCS root to monitor GitFlow compliant branches:
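For reference, a GitFlow-friendly Branch Specification looks something like the following (ours was along these lines; the exact refs you monitor are up to your own convention):

```
+:refs/heads/develop
+:refs/heads/feature/*
+:refs/heads/release/*
+:refs/heads/hotfix/*
+:refs/heads/master
```

With this in place, any branch a developer pushes that matches one of these patterns gets picked up and built automatically.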


Adding a TeamCity build configuration template

One of the key factors in our selection of TeamCity was the ability to create template build configurations, so that similar builds could all inherit the same template. That way, if we needed to add extra, universal build steps or change an existing step, it could be done in one place.

If you run an internal development team which supports a handful of solutions then you might not create new build configurations very often. But in an agency environment with dozens of active clients, having a template means that the process of setting up a new build configuration is much faster, and if you need to change a setting globally it can be done in the template.

Within 2 months of implementing a template for our build configurations, it was proving to be a good decision. Not only were our builds much more consistent and faster to set up, but we had a situation in which our build agents were running out of space quickly, and we found that the cleanup settings could be changed within the template. This meant that with one small configuration change in the template, we could reduce the number of old builds hanging around from 10 down to 3 across all projects.

We created a new build template at <root> project level and, as with the build agent pre-requisites, you’ll want a set of steps suitable for the type of projects you need to build. But here are some guidelines regarding the settings and steps which we figured out along the way:

Create a semantic version

There are ongoing debates about the use of SemVer, particularly for large, composite applications where the lower-level details of the SemVer standard are more focused on libraries and APIs. At the very least, I’d advocate having a 3-part version number which uniquely identifies your releases, and tells you something about how much the application has changed since the last release. There are practical reasons for doing this, such as the way OctoPack and Octopus Deploy work. It’s also pretty vital for creating good release documentation and making sure everyone on your team is on the same page about which changes are going to which environment, and when.

We hooked in the GitVersion tool using the TeamCity command-line runner. This tool injects a load of useful version metadata into your build and, as long as you’re following a consistent branch convention, means that you don’t have any manual steps to worry about to version your packages.
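As a sketch, the command-line runner step boils down to a single invocation (assuming `gitversion` is on the agent’s PATH, e.g. installed via Chocolatey):

```powershell
# TeamCity command-line runner step.
# In build-server output mode, GitVersion detects it is running under TeamCity
# and emits ##teamcity[setParameter ...] service messages, populating
# parameters such as GitVersion.SemVer and GitVersion.NuGetVersion
# for use in later build steps.
gitversion /output buildserver
```

Later steps can then reference `%GitVersion.NuGetVersion%` (for example, to stamp the packages) without any manual versioning.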

Restore external dependencies

You don’t really want to be checking your NuGet dependencies into version control. Likewise, you don’t want compiled .css files in there if you use a tool like SASS. These will not only bloat your repo, but worse still make mistakes more likely (for example, your checked-in CSS might not be in sync with the source SASS).

We used the NuGet installer runner to restore back-end dependencies, and the Node plugin I mentioned earlier to restore Node packages and run front-end pipelines (mostly Gulp these days).

Create Deployable Packages

The output we’re looking for from our build process is one or more Octopus-compatible NuGet packages (.nupkg files). Each of these packages represents an atomic, deployable unit – usually a web application, database migrations, web services and so on – each of which is one component of your overall application. In TeamCity terms, you want to create these as artifacts of the build. Octopus provide a tool called OctoPack for this purpose, along with the ability to integrate OctoPack easily into TeamCity builds.

TeamCity has a built-in NuGet server which can host the packages as they drop out of the builds, and acts as a source for Octopus to collect them from when it’s told to carry out a deployment. This NuGet service is therefore the main interface between TeamCity and Octopus, and enabling it is a one-off job in your TeamCity instance:

Administration -> NuGet Feed -> Enable

This screen then provides the URL for the feed, along the lines of:
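For TeamCity instances of this era the feed URL follows a standard pattern, something like this (substitute your own server name):

```
http://your-teamcity-server/httpAuth/app/nuget/v1/FeedService.svc/
```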


In order to package up the outputs of a Visual Studio project as a NuGet package, you’ll need to add the ‘OctoPack’ NuGet package to your project. It’s done this way because you may not want to deploy every project in your solution.
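To sketch the idea (the solution name is illustrative; the MSBuild properties come from the OctoPack documentation, and the version parameter assumes GitVersion populated it earlier in the build):

```powershell
# One-off, per deployable project: add the OctoPack NuGet package,
# e.g. from the Visual Studio Package Manager Console:
#   Install-Package OctoPack

# At build time, MSBuild properties switch the packaging on and stamp the version.
# In TeamCity this is typically passed via the Visual Studio runner's
# command-line parameters field:
msbuild YourSolution.sln /p:RunOctoPack=true /p:OctoPackPackageVersion=%GitVersion.NuGetVersion%
```

With `RunOctoPack=true`, each project referencing OctoPack produces a .nupkg as part of the build, ready to be exposed as a TeamCity artifact.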


We set up a VCS checkin trigger which adds a build to the queue whenever a commit is detected:

Add new trigger -> VCS Trigger

Finally, if you’re taking the ‘template’ approach like we did, you need to create a new build configuration which inherits from the template and set parameters accordingly for your solution.

Connect TeamCity to Octopus Deploy

  • At the TeamCity root level we created a new system variable “%system.OctopusDeployUrl%” and set its value to the URL of our Octopus Deploy instance
  • Configure Octopus Deploy to consume the TeamCity NuGet feed (see instructions at

You’ll need to then configure service accounts in TeamCity and Octopus, so that you’ve got credentials available (with an appropriate level of permissions) for each to talk to the other.

In TeamCity

  • Create a service account in TeamCity (called something like “OctopusServiceAccount”). This is the account used in the ‘credentials’ section when setting up the external NuGet feed in Octopus, to authenticate to the TeamCity service.

In Octopus Deploy

  • Create a service account in Octopus Deploy (called something like “TeamCityServiceAccount”), marked as a service account – meaning it has no password – and create a new API key within this account. This is the API key that TeamCity deploy steps can use to trigger a deployment in Octopus.
  • Create a “TeamCity Integration Role” with custom permissions and add the TeamCity service account to this role (see

Adding a TeamCity deployment step configuration

So with TeamCity happily connecting to BitBucket as part of a build, and the output package(s) from this build available to Octopus, the next step is for TeamCity to tell Octopus to create a release and deploy it.

There’s an interesting quirk here: the artifacts of a TeamCity build configuration won’t be available and published until after every step of the build has completed. This means that you can’t create artifacts and also include a deployment step for those artifacts within the same build configuration (see the tip “delayed package publishing” in the Octopus docs). Instead, you have to use a separate build configuration in TeamCity which is not actually doing any “building” as such, but is responsible for triggering deployments and has a dependency upon your first build configuration.

In our case, we wanted to have continuous deployment of the ‘develop’ branch to a CI environment, so we set our snapshot dependency up to only deploy builds coming from the develop branch. This is covered in a video in the Octopus Deploy documentation.
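Our deployment configuration’s single step boiled down to something like the following. This is a sketch using the Octopus CLI (the TeamCity OctopusDeploy runner wraps the same tool); the project name and the API key variable are illustrative:

```powershell
# Create a release in Octopus (it pulls the latest matching packages from the
# TeamCity NuGet feed) and deploy it straight to the CI environment.
octo.exe create-release --project "YourProject" `
    --server %system.OctopusDeployUrl% `
    --apiKey %system.OctopusApiKey% `
    --version %GitVersion.NuGetVersion% `
    --deployto "CI" `
    --waitfordeployment
```

The `--waitfordeployment` flag makes the TeamCity build fail if the Octopus deployment fails, which keeps the feedback loop in one place.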

Configure Octopus Deploy

You’ll need to set up projects and environments in Octopus Deploy as required. Again I’m not going to go into too much detail here otherwise I’ll replicate what’s already in the Octopus documentation, but I will mention that we initially set up a single lifecycle for each of our solutions consisting of each target environment end-to-end:

CI -> Internal Test -> Client UAT -> Production

This makes good use of the Octopus ‘promote release’ feature, meaning that we didn’t need a new build to be done to get a build candidate to a downstream environment, and that can be a timesaver if your builds take a long time to run.

However, the implication of this is that every build that made it to production originated from our develop branch – because that’s where the CI environment got its builds, and CI sat at the front of this lifecycle. The builds were tagged with ‘unstable’ version metadata, and we were finding that there was an extra post-deployment step required to ensure that the right code was merged up to the master branch following deployment. It was all too easy to neglect the housekeeping, and therefore master would fall out of date.

So, we decided to use channels in Octopus and set these up using version rules as follows:

Continuous Integration channel

Has one lifecycle containing a single, Continuous Integration environment, and restricts packages using a version rule consisting of the tag “^unstable”. This means that all CI builds come from the ‘develop’ branch.

QA channel

Has one lifecycle containing two environments – an internal Test environment and a downstream, client-facing UAT test environment. Packages are restricted using a version rule consisting of the tag “^beta”. This means that all QA builds come from ‘release’ branches.

Release candidates can be promoted from the internal Test environment to the downstream UAT environment.

Production channel

Has one lifecycle containing a single, production environment, and restricts packages using a version rule consisting of the tag “$^”. This means that all production builds come from the ‘master’ branch.

In order to deploy to production, we first need to merge the release branch to master, create a final build (minus branch metadata in the packages) and then release this. However, this means we always have confidence that master is representative of the codebase in production.
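To make the channel rules concrete, with GitVersion’s GitFlow defaults the package versions look roughly like this per branch (the version numbers are illustrative):

```
develop    ->  1.3.0-unstable0007   matches ^unstable  ->  Continuous Integration channel
release/*  ->  1.3.0-beta0002       matches ^beta      ->  QA channel
master     ->  1.3.0                no pre-release tag, matches $^  ->  Production channel
```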


Five Highlights from SUGCON 2017

There were so many excellent sessions at this year’s Sitecore User Group Conference (SUGCON). But I’ve picked my personal top 5 take-aways and summarised them here for anyone who couldn’t make it.

1. Integration with Cognitive Services

There are so many innovative integrations between Sitecore and commercial AI services being explored by the Sitecore community. For example, Bas Lijten & Rob Habraken demonstrated their Raspberry Pi-powered “Robbie the Sitecore Robot”, a physical extension of Sitecore’s personalisation capabilities using IoT technologies. But the session which left me feeling the most inspired was Mark Stiles’s Cognitive Services extensions to Sitecore. This open source project adds additional features into the Sitecore UI. For example, when a content editor uploads an image to the media library, it can be interpreted by the image service and tagged with appropriate metadata. Images can then be searched for using these tags (for example “outdoors”). Such a time-saver, and a brilliantly practical application of AI technology.

2. Sitecore in Azure PaaS

The release of Sitecore 8.2 update 1 featured full Azure Web Apps support. It was so exciting to hear how Sitecore have been working closely with Microsoft to build features and tools to enable and accelerate Sitecore’s cloud roadmap. Sitecore are working on an Azure Marketplace wizard to make it easier to deploy Sitecore ARM templates, as well as a lower-level Sitecore Azure Toolkit if you’re a developer and want more fine-grained control of the process. Sitecore have done a great job of mapping the Sitecore platform architecture to the various Azure PaaS services, ensuring that partners and customers can take full advantage of Azure features such as auto-scaling, reliable automated deployments and application monitoring.

3. Publishing Service 2.0

Performance optimisation can be such a satisfying developer task on any platform, and Stephen Pope and the rest of the team in Bristol have been working on a major rewrite of the Sitecore Publishing Service; they deserve kudos for the huge gains in publication speed in the new Publishing Service 2.0. The new Publishing Service, developed in .NET Core, is scalable, transactional and highly efficient even in geographically distributed scenarios. The numbers spoke for themselves, with publishing jobs which would previously have taken 5 minutes completing in under 5 seconds.

4. DevOps in the cloud

When you think Sitecore and cloud, you think Azure. But it doesn’t have to be that way. Nick Hills from True Clarity described their team’s journey to delivering the EasyJet solution using Amazon Web Services. I’ve always thought about DevOps as a component of the overall project delivery, but it was clear to me that at the level of scale and resilience described in this case, the DevOps work can almost be a separate project with its own scope, budget and team. Doing it properly requires you to build your own “shims and jigs” to manage the infrastructure: Nick gave examples such as 3.5k-line PowerShell scripts and standalone deployment configuration applications. And just because you’re hosting in the cloud, don’t take it for granted that you won’t have downtime, blips in service and content freezes!

5. Sitecore Commerce

Since acquiring Commerce Server, Sitecore have been investing heavily in integrating it into the Sitecore platform, and re-writing much of the 3.5 million lines of code they inherited to align with the Sitecore platform architecture. There are still a few areas being worked on, but I really like the direction this product is taking – such as surfacing the product catalogue as first-class content items, eligible for personalisation and all the other richness of Sitecore’s digital marketing platform.

To catch up on the rest of the Sitecore community’s activity around the SUGCON event, take a look at the Twitter hashtag or look out for the event videos on the official event website.


Substituting JavaScript variables using Octopus Deploy

Octopus Deploy has fantastic native support for substituting configuration variables in .NET applications, using the “substitute variables in files” feature, and the ability to run XML config transforms. Combining the two using the “One transform + Variable replacement” technique is my favoured way of configuring ASP.NET applications.

But what about web applications that are front-end focussed such as Single Page Apps, or those which have no .NET back end code to configure? A web app I was working on recently contained a ‘global variable’ JavaScript file (variables.js), containing a single variable which acted as a pointer to an API endpoint.

servicesUrl = ""

When the app is deployed out across CI, QA and Production environments, I wanted this variable to be substituted by Octopus with the correct endpoint for each environment. So I duly configured my web deployment step to perform an additional substitution in the variables.js file, and replaced the line above to include the Octopus variable syntax. So far, so good. However, I also wanted to be able to run the app locally without having to constantly change the variables.js file. So here was my next iteration:

servicesUrl = "http://#{WebsiteName.DataServices}/"
if (~servicesUrl.indexOf("#{"))
    servicesUrl = "http://services.test.endpoint.url/";

Unfortunately, I then hit upon an issue in Octostache (the variable substitution engine in Octopus) whereby you can’t use the ‘#’ symbol in files unless it does actually relate to a variable you want to be substituted. You can’t even include a hash in a comment without Octostache throwing an error like:

Octostache returned the following error: `Parsing failure: unexpected '#'; expected end of input (Line n, Column n); recently consumed: xxxxxxxxxx`

So, in the end I had to use a bit of JavaScript string encoding in order to parse for the ‘un-substituted’ variable placeholder, and fallback to a default URL. Here’s what worked for me – noting that “%23%7B” is the encoded version of “#{“…

// During Octopus deployment, variable values in this file using Octopus hash-syntax will be replaced
servicesUrl = "http://#{WebsiteName.DataServices}/"
// For local development, the string above won't be replaced. The following block provides a default fallback URL
// In a file being run through Octopus variable substitution, you have to avoid using the 'hash' symbol anywhere
// (other than a "real" Octopus variable, of course). Otherwise Octopus throws a wobbly.
if (~encodeURIComponent(servicesUrl).indexOf("%23%7B"))
    servicesUrl = "http://services.test.endpoint.url";
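As a quick sanity check of the trick (runnable in a browser console or Node), `encodeURIComponent` turns `#` into `%23` and `{` into `%7B`, so the un-substituted placeholder is detectable without a literal hash appearing in the comparison:

```javascript
// An un-substituted file still contains the Octopus placeholder...
const unsubstituted = "http://#{WebsiteName.DataServices}/";
// ...which URL-encodes to a string containing "%23%7B"
const encoded = encodeURIComponent(unsubstituted);
console.log(encoded.includes("%23%7B")); // true -> fall back to the local URL

// After a real deployment, the variable has been replaced and no placeholder remains
const substituted = "http://services.live.endpoint.url/";
console.log(encodeURIComponent(substituted).includes("%23%7B")); // false
```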



In-page bookmarks using dynamic content

I recently worked on an EPiServer 6 solution which had a requirement for an in-page navigation element to provide the user with a shortcut to jump down into specific content sections within a single page. It seemed like a straightforward requirement, but after some research there was no clear function within this version of EPiServer to meet this requirement.

The closest fit was the bookmark capability within the TinyMCE editor (the default Rich Text Editor component), which allows an editor to add a bookmark location within the flow of text, then choose this bookmark from an anchor they create elsewhere in the text. The problem is that the RTE is only aware of bookmarks which have been created within the context of the current RTE instance:



In this case, the page was composed of multiple content modules, each containing their own combination of content elements. Any navigation element which needed to act as a ‘table of contents’ for the rest of the page wouldn’t have any awareness of the other RTEs containing the target bookmarks for the anchor elements. In addition to this, the RTE doesn’t really give you the control over markup that you need for things like navigation elements (usually HTML <li> elements).

Step up Dynamic Content:

The solution was to use the Dynamic Content feature in EPiServer.

  1. The page template contained the outermost <ul> element with all required CSS attributes for controlled styling
  2. Within the <ul> was embedded an RTE placeholder to render the contents of the RTE, as added by the page editor
  3. We created a Dynamic Content element which consisted of properties for the text and bookmark values, and front-end HTML markup for each <ul> item to be added to the navigation list
  4. Page editors could drop in multiple instances of the dynamic content element within the RTE
  5. Bookmarks could be created anywhere else in the page where an RTE was used, and referenced from the Dynamic Content elements rendered in our navigation list



Customising work item types in TFS2010

We recently examined our bug lifecycle at Mando Group, and one of the improvements we decided to implement was a simple categorisation so that we could report on the source of issues raised in Team Foundation Server. For example, some issues raised may be down to missing images which have not been added to the content management system, whereas others may be down to bona fide bugs in code. We wanted to be able to quantify these different issue types on each project.

New Global List

The first thing I did was take advantage of the Global List feature in TFS2010 (see this MSDN article on managing global lists) to set up the list of classifications we wanted to use across all projects. (NB: I’ve shortened the LISTITEM elements for brevity.)

I placed the following XML in a local file (e.g. c:\DefectCategories.xml)

<?xml version="1.0" encoding="utf-8"?>
<gl:GLOBALLISTS xmlns:gl="">
<GLOBALLIST name="Mando – DefectCategories">
<LISTITEM value="Bug" />
<LISTITEM value="Content missing or incorrect" />
<LISTITEM value="Duplicate" />
</GLOBALLIST>
</gl:GLOBALLISTS>

From the VS2010 cmd prompt run the following command

witadmin importgloballist /collection: /f:"c:\DefectCategories.xml"

To confirm the global list has been added correctly, run this command and verify that the global list is present in the output:

witadmin exportgloballist /collection:

Note that once the Global List has been created, items can be added to it using the Process Editor in the TFS power tools (see

Modify the WorkItem Type on your Project

This step shows how to implement the global list categorisation on a bug work item type on a specific project. If you choose to, you can make similar changes to other work item types, and also it is possible to change the process template XML so that the change applies to all projects subsequently created from this template. These options are outside the scope of this article.

What we’re doing here is adding a new field which references the Global List, so that the values are limited to those available in the Global List, and a new control in the bug layout so that we have a drop-down to choose the value in the UI. Importantly, we’re also adding an extra step in the bug workflow which means that the categorisation is mandatory at the point the bug is being closed.

First export the bug work item definition from the project you wish to amend it for. The example below exports the Bug work item type from the “YourProject” project in the ‘yourcollection’ to a file called “bug.xml”:

witadmin exportwitd /collection: /p:YourProject /n:Bug /f:"c:\bug.xml"

Open the XML file for the bug in a text editor (Visual Studio or NotePad is fine) and make the following changes:

1. At the end of the FIELDS element add the new field as follows:

<FIELD name="Mando Classification" refname="Mando.Classification" type="String" reportable="dimension">
<GLOBALLIST name="Mando – DefectCategories" />
<HELPTEXT>This categorisation allows reports to be generated to establish the cause of bugs</HELPTEXT>
</FIELD>

2. Find the STATE element which has the value attribute of “Closed”. Within this element, ensure the classification is mandatory as part of the workflow by including this as a required field:

<STATE value="Closed">
  <FIELDS>
    <FIELD refname="Mando.Classification">
      <REQUIRED />
    </FIELD>
  </FIELDS>
</STATE>

3. Within the Layout element, find the Classification group and add a control to represent the classification so it can be set by users in Visual Studio (note that the control group you add it to is up to you, and your XML may look different depending on which process template you’re using):

<Group Label="Classification">
<Column PercentWidth="100">
<Control FieldName="System.AreaPath" Type="WorkItemClassificationControl" Label="&amp;Area:" LabelPosition="Left" />
<Control FieldName="System.IterationPath" Type="WorkItemClassificationControl" Label="Ite&amp;ration:" LabelPosition="Left" />
<Control FieldName="Mando.Classification" Type="FieldControl" Label="Defect Classification" LabelPosition="Left" />
</Column>
</Group>

Lastly, you need to import the updated XML bug definition into TFS:

witadmin importwitd /collection: /p:YourProject /f:"c:\bug.xml"

Don’t forget to update the project name and filename parameters for your own project.

There are other approaches to making these changes, including the more visual approach in the WIT designer (TFS power tools). I took the XML modification approach for better consistency between projects.

Problem downloading Windows 8 apps

I got a Lenovo Thinkpad 2 tablet this week and, like most people, the first thing I did was try to load up a bunch of apps from the Windows Store. Unfortunately, all the apps I tried to install got stuck either in the ‘downloading’ or ‘pending’ status.

After a bit of searching, there are several suggested remedies, but none of them worked for me. In the end the solution seemed to be as simple as making sure the system clock had the correct time (mine was a few hours behind).

Once the system clock was correct, my apps downloaded and installed correctly.

Using PowerShell to search within files

My WordPress blog got hacked recently. I still haven’t got to the root vulnerability, but I suspect I’ve only got myself to blame for not upgrading to the latest version. There are plenty of articles on how to recover from this situation, but one of the things I found myself having to do was locate some offending code within the WordPress .php files which injected an IFrame with a malicious target into my blog pages.
Once I had grabbed a backup copy of my site files, I started tinkering around with Windows Search to get it to index inside .php file contents, but realised some simple PowerShell script was probably the quicker solution:

Select-String -Path "C:\path-to-my-wordpress-files\*.php" -Pattern iframe

This one-liner gave me a list of all .php files containing an iframe, but the pattern could easily be adapted to be more specific. PowerShell can be such a lifesaver at times.
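Building on that, here is a slightly extended sketch (same stock cmdlets, nothing exotic) that recurses through subfolders and reports the file, line number and matching line for each hit:

```powershell
# Recurse through all .php files and report each iframe occurrence with its location
Get-ChildItem -Path "C:\path-to-my-wordpress-files" -Filter *.php -Recurse |
    Select-String -Pattern "iframe" |
    Select-Object Path, LineNumber, Line
```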