Cross browser testing with LambdaTest Automate using .NET Core and xUnit

I was recently trying out the cross-browser testing capabilities of LambdaTest, which provides automated testing in all major browsers on a scalable, online, cloud-based Selenium grid. The LambdaTest platform frees you from the headaches of manually administering an in-house Selenium grid and therefore having to keep multiple devices connected, updated and running on-demand. With one simple subscription, you get access to live interactive testing to support your exploratory testing, as well as reliable and repeatable automated Appium and Selenium testing capability. Being a cloud-based service brings other benefits, such as being able to parallelise automated tests and thereby shorten test cycles, as well as performing localisation testing of your app by executing your tests from multiple global locations across more than 27 countries.

For automation testing, LambdaTest supports Java, JavaScript, Python, Ruby and PHP, but coming from a C# background I wanted to augment the example documentation provided by LambdaTest on getting started with running C# tests through Selenium. So I have put together an example solution and made it available on GitHub. You can clone this project or browse the code on GitHub to see how it’s done.

I still think it’s pretty incredible being able to log in to a cloud platform like LambdaTest and be able to watch videos of your UI tests being fired up and verified in multiple browsers. To see the example code in action, you’ll need to:

  1. Sign up for a free LambdaTest account and log in.
  2. You will need your LambdaTest credentials – username and accessKey – to see your tests’ execution. So on your LambdaTest dashboard, click on ‘Automation’ in the left navigation bar. In the top-right corner click on the key icon to retrieve your credentials, as shown in Figure 1 below.
  3. Get a copy of the example code from the GitHub repo, and replace the remoteUserName and remoteAccessKey within the TestConfiguration class with the credentials fetched above.
  4. Ensure that the isRemoteTestingSession boolean within the TestConfiguration class is set to true. Otherwise, your tests will start spawning local browser instances.
  5. You should then be able to compile the code and run the tests in the ToDoAppTests class.
Figure 1 – Automation credentials

You can also get your username and accessKey from your profile section.
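To orient yourself in the code, the parts of the TestConfiguration class that the steps above refer to look roughly like this – a simplified sketch rather than a verbatim copy of the file in the repo:

using System;
using OpenQA.Selenium;

public class TestConfiguration
{
    // Credentials retrieved from the LambdaTest dashboard (step 2 above)
    private const string remoteUserName = "YOUR_LAMBDATEST_USERNAME";
    private const string remoteAccessKey = "YOUR_LAMBDATEST_ACCESS_KEY";

    // true = run on the LambdaTest grid; false = spawn local browser instances
    private readonly bool isRemoteTestingSession = true;

    private readonly TestBrowser browser;

    public TestConfiguration(TestBrowser browser)
    {
        this.browser = browser;
    }

    // Returns either a RemoteWebDriver or a local browser driver (sketched later in the article)
    public IWebDriver GetDriver() => throw new NotImplementedException();
}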

The rest of the article demonstrates the example given in the LambdaTest documentation using my GitHub project. Like the original documentation, this project also uses the sample To-Do List app available on the LambdaTest GitHub account as the application under test.

xUnit as the unit test framework

The first thing I did was to create a new xUnit test project in .NET Core 3.1 and bring the LambdaTest C# example into a test method within this project. Rather than using NUnit, I adapted the code to use xUnit. They’re both great unit test frameworks, but I wanted to show how it could be done in an alternative framework for comparison. The test class itself is called ToDoAppTests.cs and it contains a few test methods. The first, VerifyPageTitle, is the simplest, as it just checks the page title is as expected:

[Theory]
[Trait("Category","SmokeTests")]
[MemberData(nameof(GetTestData))]
public void VerifyPageTitle(TestBrowser browser)
{
    // Build a driver (remote or local) for the browser this theory is running against
    var testConfiguration = new TestConfiguration(browser);
    var driver = testConfiguration.GetDriver();

    // All page-level interaction is delegated to the ToDoAppPage page object
    var page = new ToDoAppPage(driver);
    driver.Manage().Window.Maximize();
    page.GoToPage();
    Assert.Equal("Sample page - lambdatest.com", page.PageTitle);
    driver.Quit();
}

There is also a test called AddAndVerifyToDoItem which interacts further with the app (a sketch of it follows the data source below). Using the xUnit MemberData attribute allows each test method to be executed multiple times – once for each target browser – by passing an enum representing the browser into the test method.

public static IEnumerable<object[]> GetTestData()
{
    return new List<object[]>
    {
        new object[]{TestBrowser.Chrome},
        new object[]{TestBrowser.InternetExplorer}
    };
}
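For completeness, AddAndVerifyToDoItem follows the same pattern as VerifyPageTitle. The sketch below shows its general shape rather than the exact code in the repo; the AddItem and GetLatestItemText page methods are illustrative names, not ones taken from the project:

[Theory]
[Trait("Category","SmokeTests")]
[MemberData(nameof(GetTestData))]
public void AddAndVerifyToDoItem(TestBrowser browser)
{
    var testConfiguration = new TestConfiguration(browser);
    var driver = testConfiguration.GetDriver();

    var page = new ToDoAppPage(driver);
    driver.Manage().Window.Maximize();
    page.GoToPage();

    // AddItem and GetLatestItemText are hypothetical page methods used for this sketch
    page.AddItem("Complete the cross-browser test run");
    Assert.Equal("Complete the cross-browser test run", page.GetLatestItemText());

    driver.Quit();
}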

Implement Page Object Model (POM)

Whilst this could be considered overengineering for a simple getting-started example, the Page Object Model design pattern is nonetheless good practice when it comes to UI test automation. I created a very simple ToDoAppPage class within my test project and gave this class responsibility for page-level UI interactions. This way, you don’t end up with selectors and other page-specific nastiness interrupting the readability of your tests.

The methods of the ToDoAppTests class create instances of this ToDoAppPage class as required in order to carry out interactions with the UI.
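A minimal version of that page class might look something like the sketch below. Only GoToPage and PageTitle are used by the tests shown in this article; the AddItem and GetLatestItemText members, the selectors and the URL of the sample app are assumptions for illustration rather than code lifted from the repo:

using System.Linq;
using OpenQA.Selenium;

public class ToDoAppPage
{
    private readonly IWebDriver driver;

    public ToDoAppPage(IWebDriver driver)
    {
        this.driver = driver;
    }

    public string PageTitle => driver.Title;

    public void GoToPage()
    {
        // The sample To-Do List app used as the application under test
        driver.Navigate().GoToUrl("https://lambdatest.github.io/sample-todo-app/");
    }

    public void AddItem(string text)
    {
        driver.FindElement(By.Id("sampletodotext")).SendKeys(text);
        driver.FindElement(By.Id("addbutton")).Click();
    }

    public string GetLatestItemText()
    {
        // Read the text of the most recently added item in the list
        return driver.FindElements(By.CssSelector(".list-unstyled li span")).Last().Text;
    }
}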

DesiredCapabilities and browser-specific options

In order to control the remote Selenium driver in your code you have two options.

Firstly, DesiredCapabilities is a Selenium class which encapsulates a series of key/value pairs representing aspects of browser behaviour. In the case of the LambdaTest platform this gives you access to a number of specific platform capabilities, for example:

  • Lambda Tunnel – connects your local system with the LambdaTest servers via an SSH-based integration tunnel, enabling testing of locally-hosted pages and applications
  • Network throttling – reduces network bandwidth to simulate how your application responds when accessed over slow or high-latency networks
  • Geolocation – check that your users see your website as intended when it is accessed from different geographic locations
  • Headless browser testing for running tests without a UI
  • Screenshot and video capture during test execution, for reviewing within the LambdaTest dashboard
  • Network – not to be confused with network throttling mentioned above, the network capability generates network logs for low-level diagnostics of network interactions
  • TimeZone – configure tests to run in a custom time zone

LambdaTest provide an intuitive capabilities generator, an online tool which interactively generates DesiredCapabilities code in your choice of 6 languages based on your selections in the UI. The C# it produces with many of the capabilities enabled looks something like the example below.
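The snippet that follows is a hedged reconstruction rather than verbatim generator output – the capability keys and values are written from memory, so take the exact names from the generator itself. DesiredCapabilities lives in OpenQA.Selenium.Remote:

DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.SetCapability("user", "YOUR_LAMBDATEST_USERNAME");
capabilities.SetCapability("accessKey", "YOUR_LAMBDATEST_ACCESS_KEY");
capabilities.SetCapability("build", "ToDoApp cross-browser tests");
capabilities.SetCapability("name", "VerifyPageTitle");
capabilities.SetCapability("platform", "Windows 10");
capabilities.SetCapability("browserName", "Chrome");
capabilities.SetCapability("version", "79.0");        // illustrative browser version
capabilities.SetCapability("network", true);          // capture network logs
capabilities.SetCapability("video", true);            // record a video of the session
capabilities.SetCapability("console", true);          // capture browser console logs
capabilities.SetCapability("visual", true);           // take step-by-step screenshots
capabilities.SetCapability("timezone", "UTC+00:00");  // run in a custom time zone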

Over time, the Selenium project has indicated an eventual move away from DesiredCapabilities – its use was deprecated in version 3.14 of the C# Selenium bindings – and the second option is to use the browser-specific options classes instead. Within the TestConfiguration class of my project, I have shown how this could be done by implementing a number of private methods that set up the appropriate driver options depending on which browser is being used. As mentioned earlier, the target browser is determined at the point the class is instantiated within a test method, by passing the appropriate browser enum into the constructor, thereby allowing each test to be executed against specific browsers as required.
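As a rough illustration of the options-based approach (not the exact code from my TestConfiguration class), one of those private methods could look like the sketch below on the Selenium 3.x C# bindings, with the LambdaTest-specific keys added as global capabilities:

private ChromeOptions GetChromeOptions()
{
    var options = new ChromeOptions();

    // LambdaTest-specific keys are passed as global (top-level) capabilities;
    // the key names are assumptions, so confirm them with the capabilities generator
    options.AddAdditionalCapability("user", remoteUserName, true);
    options.AddAdditionalCapability("accessKey", remoteAccessKey, true);
    options.AddAdditionalCapability("build", "ToDoApp cross-browser tests", true);
    options.AddAdditionalCapability("video", true, true);
    options.AddAdditionalCapability("network", true, true);

    return options;
}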

Switch between remote and local drivers

If you hit a bug while running in the LambdaTest cloud, the last thing you want to be doing is rewriting your code to replace the RemoteWebDriver class with a local ChromeDriver or other browser-specific class in order to step through and debug while running locally.

The TestConfiguration class also includes the boolean value isRemoteTestingSession to indicate whether the tests should be run using local browsers or the remote driver. Depending on how you’re running your tests, you may want to set this using a configuration file or pass it in as a command-line parameter to your tests, for example from your CI/CD server. LambdaTest has a series of helpful articles that show how to integrate LambdaTest into your CI/CD platform of choice. If you’re just trying out the example project, simply toggle the remote testing flag in code.
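As a sketch of how that switch might look inside TestConfiguration – the hub URL is LambdaTest’s standard one, while the rest is illustrative rather than the exact code in the repo (RemoteWebDriver and ChromeDriver come from OpenQA.Selenium.Remote and OpenQA.Selenium.Chrome respectively):

public IWebDriver GetDriver()
{
    // Browser-specific options built by the private methods described earlier
    var options = GetChromeOptions();

    if (isRemoteTestingSession)
    {
        // Run on the LambdaTest grid
        return new RemoteWebDriver(
            new Uri("https://hub.lambdatest.com/wd/hub"),
            options.ToCapabilities(),
            TimeSpan.FromMinutes(3));
    }

    // Run against a locally installed browser, e.g. for stepping through in a debugger
    return new ChromeDriver(options);
}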

LambdaTest Dashboard

Once you’ve executed the tests against LambdaTest you can log in to the dashboard and review the results of your tests. The UI of the testing dashboard is another area where LambdaTest really shines and makes it straightforward to pinpoint and correct issues in your tests.

Within the ‘Automate’ section, there are 3 main navigation areas. Firstly, the timeline view shows a list of test execution sessions. Filters across the top of this list – date, user, build and status – help you quickly narrow down the tests you want to review. You can also programmatically specify custom tags to group your Selenium tests.
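Tags are set as just another capability on the driver options. As a hedged example, adding a line like the one below to the GetChromeOptions sketch from earlier should do it – “tags” is the key name I recall LambdaTest using, so check their capabilities documentation:

// Tag the session so it can be filtered and grouped in the Automation timeline
options.AddAdditionalCapability("tags", new[] { "smoke", "todo-app" }, true);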

The results are grouped by build as standard, and clicking into each of these takes you across to the second navigation area, automation logs, which provides further detailed analysis of each test outcome, including the ability to watch recordings of the tests, review network requests/responses and view test logs, if you’ve specified these capabilities in your test options. Note also that the ‘Create Issue’ button provides one-click bug logging via integration with JIRA, GitHub, Trello and other popular tracking platforms.

Finally, the Analytics option provides metrics on your test execution and can be customised to include specific graphs and to filter to the time periods you’re interested in.

Running multiple NodeJs versions in TeamCity on Windows


If you’re building Web applications using TeamCity, you’re probably going to want to execute your Front-End pipeline as part of the build. You’ve worked hard to perfect all that lovely unit testing, linting, CSS precompilation and so on in your Gulp* build, so this part of your software packaging process should be a first-class citizen in your automated build process.

*Most of what’s in this post probably applies equally to Grunt.

Don’t have a centralised build server set up? I strongly recommend you buy a copy of Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation to understand not only build automation, but further techniques to improve both speed and quality of your software releases.

In theory, this should be straightforward if you’re using TeamCity – it’s a powerful and flexible product. However, the first issue to understand (and one of the primary factors in what’s to follow) is that all of our back-end work is done on the Microsoft stack, and so our TeamCity agents are Windows Server VMs. Layered on top of that you then have Node, NPM, Gulp and all the various package dependencies that your particular pipeline and solution brings in.

The reality is that getting this all working definitely isn’t the well-trodden path that I expected. I’ve lost countless hours Googling, reading StackOverflow posts and mailing list discussions to get to the bottom of all manner of bugs, caveats and gotchas, so it felt right to share my experiences.

Welcome to the Inferno…

Beginner Level: Fixed Node Version

So in the early days of our build process, things were nice and simple. We installed a particular version of NodeJs on all our TeamCity build agents. To accomplish this, we have a batch script which uses Chocolatey to install dependencies on agents. The following line from this script installs NodeJs v4.5.0, which in turn brings Node package manager (NPM) v2.15.9 along for the ride:

cinst nodejs -version 4.5.0 -y

With NodeJs and NPM installed as pre-requisites on all the agents, it was then fairly straightforward to add steps to our build configuration which invoke the Front-End pipeline. I’ve previously written about how we set up our TeamCity instance, and mentioned Eugene Petrenko’s TeamCity plugin for running Node.js, NPM, Gulp and Grunt.

So the front-end elements of our build process followed this sequence:

  1. Runner type: Node.js NPM. Command: “install”
  2. Runner type: Gulp*. Task command: “default”

*Again could be Grunt if that’s what you use. Ain’t nobody be judgin’ here. But be aware that if you are using Grunt, you’ll need the Grunt command line to be available to your build, either by adding an initial NPM runner step which does an “install grunt-cli”, or by installing it globally (Warren Buckley covered this approach in a helpful post whilst at Cogworks)

Note that the runner types used are generally the ones from Eugene Petrenko’s plugin that I mentioned – these worked well enough at this stage.

Intermediate Level: Variable Node Version

So after we’d been running many builds happily using the approach described above, we came to realise that NodeJs 4.5.0, released in August 2016, was getting rather out of date (version 9.x had been released by late 2017). Such is the pace of the front-end ecosystem that newer front-end libraries being adopted across the industry required more recent versions of NodeJs, and yet we were tied to a hard dependency on 4.5.0 installed on our build agents.

Time for a rethink.

My front-end colleagues pointed me in the direction of Node Version Manager (NVM) which they would use during local development in order to avoid a fixed NodeJs dependency. NVM comes in 2 flavours: the original NVM bash script which runs on Unix variants (Mac/Linux), and then there is NVM for Windows, developed using the Go language. One thing to mention is that there is not feature parity between these 2 tools. More on this later.

So we created a new TeamCity agent and installed the Chocolatey NVM for Windows package, so that our builds would be freed from the shackles of NodeJs v4.5.0:

cinst nvm -version 1.1.5 -y

With NVM installed, I attempted (sensibly, one might conclude) to use the NVM runner from Eugene Petrenko’s plugin, but was a little confused to find that my builds wouldn’t run on the new agent. I was greeted with the explanation that NVM was an ‘unmet requirement’, not just within my default pool of 2 existing agents (as expected), but also on my new agent, shown at the top of this image:

Figure – unmet requirements

Only after digging further did I then discover that this plugin has a known limitation which means it only detects NVM properly when running on Linux (at the time of writing there are several open issues on the GitHub repo about this).

Pro tip!

Here’s my top tip if you’re going through this same setup: make the first step in your build process a Command Line runner, and add the following commands into the “Custom script” box:

REM Read the target NodeJs version from the first line of .nvmrc
REM (%% is TeamCity's escape for a literal % character in custom scripts)
set /p nodeversion=<.nvmrc
nvm install %%nodeversion%%
nvm use %%nodeversion%%

Then, within your solution, include a .nvmrc file containing your target NodeJs version. Another crucial fact I learnt along this journey is that the Windows version of NVM doesn’t support the .nvmrc feature; however, the command-line script above is a “poor man’s” emulation of that capability. There are a few advantages to this approach. Firstly, you source control your required NodeJs version alongside the code, and can version it accordingly in different branches etc., which means no TeamCity parameters are required. Secondly, your front-end build is one step more compatible between Windows and Mac.
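For reference, a .nvmrc file is nothing more than a single line containing the target version – the version number below is purely illustrative:

8.9.4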

After having problems with it, I decided to avoid the TeamCity Node plugin altogether for the other front-end steps, and switched to simply calling npm and Gulp from command-line runners. With the NVM switch successfully configured, our builds were no longer blocked by the false-negative issue. At each build execution they would adopt the correct NodeJs version, run the npm install and Gulp steps, and finally call a NuGet Pack step in order to package up the software into something that could be distributed.

I thought I was home and dry. If you’ve made it with me this far, well done.

WTF Level: Am I the only one?

It wasn’t over yet. I was still trapped in the bloody battle between the Windows beast and the Node monster.

I kinda got a bit lost in the pits of gotchas of various NPM package dependencies running on Windows at this point. To be honest, I don’t think I kept a record of every problem I had to resolve – I certainly lost count of the various blog posts I read and the hours I burned. However, to get an idea of the scale of the problems you’re facing here, it’s worth taking a look through the comments in this post.

This comment pretty much sums it up:

Figure – the node-gyp comment

Zen Level: It’s not just me

Just when you’re about to claw your eyes out with a blunt pencil and wondering whether your entire career has been a mistake, you hit the GitHub thread titled “Windows users are not happy”. Stumbling across it was the equivalent of being on the side of an icy mountain for 2 weeks, running out of supplies, getting frostbite … and then falling through the door of a warm tavern full of welcoming and equally lost travellers, each with similar stories to tell.

The last pieces of the puzzle to get the whole process up and running were:

  1. Ensure the C++ build tools module has been installed as part of the Visual Studio installation you have on your build agent.
  2. Install Python 2 on your build agent. As per the linked post, it’s a dependency of node-gyp:

    choco install python2

  3. I got an intermittent error message in my build logs which said Error: EPERM: operation not permitted, scandir. Of all the comments left in the thread of this particular post, the fix which worked for me was to run an extra command-line step of “npm cache verify” just before the npm install (the resulting sequence of steps is sketched below).
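Putting it all together, the front-end portion of the build ended up as plain Command Line runner steps roughly along these lines – a sketch of the sequence rather than an exact export of our build configuration (calling Gulp via node_modules\.bin avoids needing a global gulp-cli install):

REM Step 1 - pin the NodeJs version held in source control (%% escapes % in TeamCity scripts)
set /p nodeversion=<.nvmrc
nvm install %%nodeversion%%
nvm use %%nodeversion%%

REM Step 2 - work around the intermittent EPERM/scandir error
call npm cache verify

REM Step 3 - restore packages and run the front-end pipeline
call npm install
call node_modules\.bin\gulp default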

And that was that. Our builds went green, consistently.

Of course if you’re very lucky you may not encounter as many issues as I did, depending on the exact nature of what you’re building, how many front-end packages you’re making use of etc. But I hope this narrative serves as a useful map [here be dragons!] of some potential pain-points to shortcut the journey of anyone else setting up similar build pipelines.