Automated acceptance testing with TeamCity and BrowserStack

I’ve previously posted articles on setting up Continuous Integration using TeamCity and Octopus Deploy. With the build and deploy aspects of the delivery pipeline running smoothly, I’ve since turned my attention to the problem of automated acceptance testing, and how to integrate this practice into our workflow with existing test automation tooling.

The Goals

  • To have automated acceptance tests execute automatically after each deployment
  • To ensure automated acceptance tests are flexible enough to execute on any environment
  • To avoid installing and maintaining a local on-premise infrastructure of test devices and browsers by using a scalable cloud-based test grid platform (in our case BrowserStack)
  • To allow developers and testers to run and debug these same tests locally using their own installed browsers.

This last point allows defects to be identified earlier in the application lifecycle, and resolves the challenge of trying to debug tests when you can’t run them locally. In Chapter 5 of Continuous Delivery – Reliable Software Releases Through Build, Test and Deployment Automation, the authors recommend that:

[…] developers must be able to run automated acceptance tests on their development environments. It should be easy for a developer who finds an acceptance test failure to fix it easily on their own machine and verify the fix by running that acceptance test locally.

Before we go any further: if you’ve never done any automated UI testing then this post probably isn’t the right place to start. The basics are covered in the other sources I’ve linked to throughout the post, so make sure you’re comfortable with those before following this through.

A Starting Point

BrowserStack provide some basic code to get you started with the Automate platform. It’s really useful as a starting point but that’s as far as it goes; the sample code has a lot of values hard-coded and as a result doesn’t give enough flexibility to meet the goals I’ve outlined above. I wanted to build on this example to meet my goals, while also applying separation of concerns, so that test configuration, web driver instantiation, the actual test steps and the tear-down logic aren’t all handled within the same method.

The Tools

In addition to TeamCity and Octopus Deploy, there are some extra tools I’m using to achieve our goals.

  • NUnit. Our UI tests are developed within a Visual Studio project, using C#. The test grouping, setup and execution is orchestrated by NUnit, which can be used as a test framework not just for unit tests, but for other layers of testing too. It can wrap around unit, integration and component tests all the way up to UI tests, which are the focus of this post.
  • Selenium WebDriver. The actual tests themselves are then constructed using the Selenium WebDriver library in order to fire up a browser and interact with pages and their components.
  • Page Object Model. Not quite a ‘tool’ in the same way the others are, but this common pattern for UI automation is so useful I wanted to call it out in isolation. There are some good starter guides on how to implement it over at SW Test Academy and also in the Selenium docs themselves, and there’s a small sketch of a page object after this list.
  • IMPORTANT! Be wary of any articles which mention PageFactory methods when you’re dealing with the .NET implementation of WebDriver. As of March 2018, the intention is to deprecate support for PageFactory in the .NET bindings.
  • Autofac. To satisfy the goal of switching between the cloud based testing grid and a local browser I’m also using the IoC container Autofac in order to swap in the appropriate WebDriver instance that we require. This is what allows us to switch between local and remote testing. It doesn’t have to be Autofac; you could achieve the same thing with other DI containers, I just chose this tool because it was already being used in the main software project being tested.
  • BrowserStack Automate. You can set up a Selenium grid of devices in your own organisation if you choose to, but I think the effort it would take to maintain just doesn’t make sense when compared to the subscription cost for BrowserStack Automate, which gives you instant access to thousands of combinations of platforms and devices. The BrowserStack docs describe how to dispatch instructions to their remote grid via the RemoteWebDriver class, all within the context of NUnit tests – perfect for our requirements.
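
To make the Page Object Model concrete, here’s a minimal sketch of the kind of page object used by the tests later in this post. The HomePage class and its GoToPage method do appear in our test code; everything inside the class here (the URL, the locator) is an illustrative assumption rather than our actual implementation:

using OpenQA.Selenium;

public class HomePage
{
    private readonly IWebDriver _driver;

    public HomePage(IWebDriver driver)
    {
        _driver = driver;
    }

    public void GoToPage()
    {
        // In practice the base URL would come from configuration, not a literal
        _driver.Navigate().GoToUrl("https://www.example.com/");
    }

    // Locators live in the page object, so tests never deal with selectors directly
    public IWebElement TopLevelNavigation =>
        _driver.FindElement(By.CssSelector("nav.top-level"));
}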

The Test Framework

I think this sort of stitching together of the tools I’ve described must be unusual, as I couldn’t find many other sources online which solve these problems elegantly. We already had a web project in a Visual Studio solution containing the code for the web application our development team were working on, as well as a Unit Test project. Alongside these I added another project for our Web UI tests, and added NuGet references to Autofac and NUnit.

The Base Test Interface

Within this project, I wanted to have a set of tests which used the Page Object Model pattern, so I created a “PageObjects” folder to represent the pages in the application, and then a set of test classes in the root of the project, such as “SearchTests.cs” and “NavigationTests.cs”. Each test fixture looks something like the code below (abbreviated to just show class definition, constructor and one test method):

[TestFixture("chrome62-win10")]
[TestFixture("ie11-win10")]
[TestFixture("ff62-win10")]
[Parallelizable]
[Category("Navigation")]
[BrowserstackLocal]
public class NavigationTests : IBaseTest
{
    public IWebDriver Driver { get; set; }
    public IConfigurationReader ConfigurationReader { get; set; }
    public Local BrowserStackLocal { get; set; }
    public string CapabilityProfileName { get; set; }

    public NavigationTests(string capabilityProfile)
    {
        CapabilityProfileName = capabilityProfile;
    }

    [Test]
    [Category("SmokeTests")]
    public void VerifyTopLevelNavigation()
    {
        var page = new HomePage(Driver);
        page.GoToPage();

        // ... assertions against the page's navigation elements ...
    }
}

Every test fixture implements this IBaseTest interface, to ensure a uniform set of properties is exposed:

[ConfigureDependencyContainer]
public interface IBaseTest
{
    string CapabilityProfileName { get; set; }
    IWebDriver Driver { get; set; }
    IConfigurationReader ConfigurationReader { get; set; }
    Local BrowserStackLocal { get; set; }
}

The idea of the interface is to allow each test fixture to draw upon functionality from a set of services at runtime, such as the particular WebDriver instance we want to use. I’d initially done this using an inherited “BaseClass”, but I started to run into problems with that approach and wanted to favour composition over inheritance, which is where the interface approach came in.

The two main things to draw attention to are:

  1. The CapabilityProfileName, which gets set in the constructor of each fixture using NUnit’s TestFixture attribute. This means we can run each fixture multiple times using different device configurations.
  2. The ConfigureDependencyContainer attribute decorating the IBaseTest interface. This attribute makes use of one of the bits of magic glue which comes for free with NUnit and makes this whole test layer hang together: Action Attributes.

The Action Attribute

Action Attributes are a feature of NUnit 2, but are still supported in NUnit 3, and they are designed to support composability of test logic. Note that NUnit 3 introduced a new way of achieving similar composition of functionality called Custom Attributes, but personally I’ve found the documentation on this newer approach lacking, with fewer examples and a less clear rationale for when to use each of the interface types, so I stuck with Action Attributes for now.

The Action Attribute I’ve built here is specifically designed to be applied to our IBaseTest interface. By using the “BeforeTest” method, I can trigger logic which uses Autofac to fetch various dependencies such as the IWebDriver instance and inject them into my test fixture:

[AttributeUsage(AttributeTargets.Interface)]
public class ConfigureDependencyContainerAttribute : TestActionAttribute
{
    public override void BeforeTest(ITest test)
    {
        var fixture = test.Fixture as IBaseTest;
        if (fixture != null)
        {
            // Set up the IoC container using the configuration module
            var builder = new ContainerBuilder();
            if (string.IsNullOrEmpty(fixture.CapabilityProfileName))
                throw new ConfigurationErrorsException("The capability profile name must be set");
            builder.RegisterModule(new AutofacConfigurationModule(fixture.CapabilityProfileName));
            var container = builder.Build();

            // Resolve the dependencies down through the object chain using the IoC container
            using (var scope = container.BeginLifetimeScope())
            {
                fixture.Driver = scope.Resolve<IWebDriver>();
                fixture.ConfigurationReader = scope.Resolve<IConfigurationReader>();
            }
        }
    }
}

The Autofac Configuration Module

Configuration modules in Autofac allow a set of services to be configured together, and they’re really useful for packaging up related dependency injection code. I built a configuration module in line with the Autofac documentation; when the Action Attribute above calls the constructor of this module, it passes in a string representing the capability profile (which, if you remember, was defined earlier in the NUnit TestFixture attributes).
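
The module itself subclasses Autofac’s Module class. As a rough sketch (the constructor and private field are assumptions on my part, based on how the Load method below uses _capabilityProfileName):

using Autofac;

public class AutofacConfigurationModule : Module
{
    // Holds the capability profile name passed in by the Action Attribute,
    // e.g. "chrome62-win10"
    private readonly string _capabilityProfileName;

    public AutofacConfigurationModule(string capabilityProfileName)
    {
        _capabilityProfileName = capabilityProfileName;
    }

    // The Load override, shown in the next snippet, registers the
    // configuration reader and the appropriate WebDriver instance
}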

The “Load” method of the configuration module fetches a set of configuration values from the test config file, and then registers the appropriate WebDriver with Autofac, ready to be used during execution of the test fixture:

protected override void Load(ContainerBuilder builder)
{
    var configurationReader = new ConfigurationReader(_capabilityProfileName);
    builder.RegisterInstance(configurationReader).As<IConfigurationReader>();

    var webDriver = GetWebDriver(configurationReader);
    builder.RegisterInstance(webDriver).As<IWebDriver>();

    TestContext.Progress.WriteLine($"TestContext.Writeline… Configuration is set to use {webDriver.GetType().Name}");
}

private IWebDriver GetWebDriver(IConfigurationReader configurationReader)
{
    IWebDriverBuilder webDriverBuilder;
    if (configurationReader.UseRemoteDriver)
    {
        webDriverBuilder = new RemoteWebDriverBuilder(configurationReader);
    }
    else
    {
        switch (configurationReader.Browser)
        {
            case "Chrome":
                webDriverBuilder = new ChromeWebDriverBuilder();
                break;
            case "IE":
                webDriverBuilder = new InternetExplorerWebDriverBuilder();
                break;
            default:
                // Fail fast if the config names a browser we have no builder for
                throw new ConfigurationErrorsException(
                    $"No WebDriver builder for browser: {configurationReader.Browser}");
        }
    }

    return webDriverBuilder.BuildWebDriver();
}

There are a number of helper classes referred to in the code here which I haven’t described in full:

  • The ConfigurationReader, which is just a typed wrapper around app.config configuration values
  • The XxxxWebDriverBuilder classes, which are just “single responsibility” classes, each of which implements a “BuildWebDriver()” method which returns an IWebDriver instance for Chrome, Firefox, IE or whatever browser you want. There’s a rough sketch of these after this list.
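
As an illustration, here’s a minimal sketch of what that builder abstraction might look like. The interface and class names come from the code above, but the bodies are assumptions on my part, using the standard Selenium capability APIs rather than our exact implementation (the BrowserStackUser and BrowserStackKey properties on IConfigurationReader are hypothetical):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Remote;

public interface IWebDriverBuilder
{
    IWebDriver BuildWebDriver();
}

// Local browser: a single-responsibility class which just knows how to
// construct a ChromeDriver
public class ChromeWebDriverBuilder : IWebDriverBuilder
{
    public IWebDriver BuildWebDriver()
    {
        return new ChromeDriver();
    }
}

// Remote browser: translates our capability profile into a RemoteWebDriver
// pointed at the BrowserStack hub
public class RemoteWebDriverBuilder : IWebDriverBuilder
{
    private readonly IConfigurationReader _configurationReader;

    public RemoteWebDriverBuilder(IConfigurationReader configurationReader)
    {
        _configurationReader = configurationReader;
    }

    public IWebDriver BuildWebDriver()
    {
        var capabilities = new DesiredCapabilities();
        capabilities.SetCapability("browser", _configurationReader.Browser);
        capabilities.SetCapability("browserstack.user", _configurationReader.BrowserStackUser);
        capabilities.SetCapability("browserstack.key", _configurationReader.BrowserStackKey);

        return new RemoteWebDriver(
            new Uri("https://hub-cloud.browserstack.com/wd/hub/"), capabilities);
    }
}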

Triggering the UI Tests

Getting to this point was the hardest part. With all of that plumbing out of the way, the UI test project is built within our deployment pipeline, and a UI test .dll file pops out of the end of our commit stage, ready to be executed.

To do this, I daisy-chained an extra build configuration in our pipeline to pick up the compiled dll and used the out-of-the-box NUnit runner in TeamCity to trigger the execution of the tests:

[Screenshot: the “AutomatedAcceptanceTest” build configuration in TeamCity]

The whole process works well; tests are triggered by TeamCity following a deployment, sent up to BrowserStack, executed against their cloud grid, and the results reported back in the TeamCity interface and log files.

Further details of the test run, including timings, log messages and even videos, are available within the BrowserStack web interface if they are needed for debugging and evidencing test execution.
