Automated acceptance testing with TeamCity and BrowserStack

I’ve previously posted articles on setting up Continuous Integration using TeamCity and Octopus Deploy. With the build and deploy aspects of the delivery pipeline running smoothly, I’ve since turned my attention to the problem of automated acceptance testing, and how to integrate this practice into our workflow with existing test automation tooling.

The Goals

  • To have automated acceptance tests execute automatically after each deployment
  • To ensure automated acceptance tests are flexible enough to execute on any environment
  • To avoid installing and maintaining a local on-premise infrastructure of test devices and browsers by using a scalable cloud-based test grid platform (in our case BrowserStack)
  • To allow developers and testers to run and debug these same tests locally using their own installed browsers.

This last point is to allow the identification of defects earlier in the application lifecycle, and resolves the challenge of trying to debug tests when you can’t run them locally. In Chapter 5 of Continuous Delivery – Reliable Software Releases Through Build, Test and Deployment Automation, the authors recommend that:

[…] developers must be able to run automated acceptance tests on their development environments. It should be easy for a developer who finds an acceptance test failure to fix it easily on their own machine and verify the fix by running that acceptance test locally.

Before we go any further, if you’ve never done any automated UI testing before then this post probably isn’t the right place to start. The basics are covered in the other sources I’ve linked to throughout the post, so you probably want to make sure you’re comfortable with those before following this through.

A Starting Point

BrowserStack provide some basic code to get you started with the Automate platform. It’s really useful as a starting point, but that’s as far as it goes; the sample code has a lot of stuff hard-coded and as a result doesn’t give enough flexibility to meet the goals I’ve outlined above. I wanted to build on this example to meet those goals, whilst also applying separation of concerns, so that test configuration, web driver instantiation, the actual test steps and tear-down aren’t all dealt with in the same method.

The Tools

In addition to TeamCity and Octopus Deploy, there are some extra tools I’m using to achieve our goals.

  • NUnit. Our UI tests are developed within a Visual Studio project, using C#. The test grouping, setup and execution is orchestrated by nUnit, which can be used as a test framework not just for unit tests, but for other layers of testing too. It can wrap around unit, integration and component tests all the way up to UI tests, which are the focus of this post.
  • Selenium WebDriver. The actual tests themselves are then constructed using the Selenium WebDriver library in order to fire up a browser and interact with pages and their components.
  • Page Object Model. Not quite a ‘tool’ in the same way the others are, but this common pattern for UI automation is so useful I wanted to call it out in isolation. There are some good starter guides on how to implement it over at SW Test Academy and also in the Selenium docs themselves.
  • IMPORTANT! Be wary of any articles which mention PageFactory methods when you’re dealing with the .NET implementation of WebDriver. As of March 2018, the intention is to deprecate support for PageFactory in the .NET bindings.
  • Autofac. To satisfy the goal of switching between the cloud based testing grid and a local browser I’m also using the IoC container Autofac in order to swap in the appropriate WebDriver instance that we require. This is what allows us to switch between local and remote testing. It doesn’t have to be Autofac; you could achieve the same thing with other DI containers, I just chose this tool because it was already being used in the main software project being tested.
  • BrowserStack Automate. You can set up a Selenium grid of devices in your own organisation if you choose to, but I think the effort it would take to maintain just doesn’t make sense when compared to the subscription cost for BrowserStack Automate, which gives you instant access to thousands of combinations of platforms and devices. The BrowserStack docs describe how to dispatch instructions to their remote grid via the RemoteWebDriver class, all within the context of nUnit tests – perfect for our requirements.
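
As a flavour of what that looks like, here is a minimal, self-contained sketch (not our actual framework code) of dispatching a session to the BrowserStack grid via RemoteWebDriver, broadly along the lines of BrowserStack’s sample. The capability values and credentials are placeholders, and it uses the DesiredCapabilities API from the Selenium .NET bindings of the time:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;

public static class BrowserStackQuickStart
{
    public static IWebDriver CreateRemoteDriver()
    {
        var capabilities = new DesiredCapabilities();

        // Device/browser combination to request from the grid
        capabilities.SetCapability("browser", "Chrome");
        capabilities.SetCapability("browser_version", "62.0");
        capabilities.SetCapability("os", "Windows");
        capabilities.SetCapability("os_version", "10");

        // Placeholder credentials - in a real framework these would come from configuration
        capabilities.SetCapability("browserstack.user", "YOUR_USERNAME");
        capabilities.SetCapability("browserstack.key", "YOUR_ACCESS_KEY");

        return new RemoteWebDriver(new Uri("https://hub-cloud.browserstack.com/wd/hub/"), capabilities);
    }
}

Everything hard-coded here (capabilities, credentials, hub URL) is exactly the sort of thing the rest of this post pushes out into configuration.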

The Test Framework

I think this sort of stitching together of the tools I’ve described must be unusual, as I couldn’t find many other sources online which solve these problems elegantly. We already had a web project in a Visual Studio solution containing the code for the web application our development team were working on, as well as a Unit Test project. Alongside these I added another project for our Web UI tests, and added Nuget references to Autofac and nUnit.

The Base Test Interface

Within this project, I wanted to have a set of tests which used the Page Object Model pattern, so I created a “PageObjects” folder to represent the pages in the application, and then a set of test classes in the root of the project, such as “SearchTests.cs” and “NavigationTests.cs”. Each test fixture looks something like the code below (abbreviated to just show class definition, constructor and one test method):

[TestFixture("chrome62-win10")]
[TestFixture("ie11-win10")]
[TestFixture("ff62-win10")]
[Parallelizable]
[Category("Navigation")]
[BrowserstackLocal]
public class NavigationTests : IBaseTest
{
    public IWebDriver Driver { get; set; }
    public IConfigurationReader ConfigurationReader { get; set; }
    public Local BrowserStackLocal { get; set; }
    public string CapabilityProfileName { get; set; }

    public NavigationTests(string capabilityProfile)
    {
        CapabilityProfileName = capabilityProfile;
    }

    [Test]
    [Category("SmokeTests")]
    public void VerifyTopLevelNavigation()
    {
        var page = new HomePage(Driver);
        page.GoToPage();

        // ... assertions, remaining test methods and tear-down omitted for brevity
    }
}
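
For context, the HomePage used above lives in the PageObjects folder. A minimal sketch of what such a page object might look like is below; the URL and locator are illustrative assumptions rather than the real application’s values:

using OpenQA.Selenium;

public class HomePage
{
    private readonly IWebDriver _driver;

    public HomePage(IWebDriver driver)
    {
        _driver = driver;
    }

    public void GoToPage()
    {
        // In a real project the base URL would come from configuration, not a literal
        _driver.Navigate().GoToUrl("https://www.example.com/");
    }

    // Example element exposed to tests via a locator
    public IWebElement TopLevelNavigation => _driver.FindElement(By.CssSelector("nav"));
}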

Every test fixture implements this IBaseTest interface, to ensure a uniform set of properties is exposed:

[ConfigureDependencyContainer]
public interface IBaseTest
{
    string CapabilityProfileName { get; set; }
    IWebDriver Driver { get; set; }
    IConfigurationReader ConfigurationReader { get; set; }
    Local BrowserStackLocal { get; set; }
}

The idea of the interface is to allow each test fixture to draw upon functionality from a set of services at runtime, such as the particular WebDriver instance we want to use. I’d initially done this using an inherited “BaseClass”, but I started to run into problems with that approach and wanted to favour composition over inheritance, which is where the interface came in.

The two main things to draw attention to are:

  1. The CapabilityProfileName which gets set in the constructor of each fixture using nUnit’s TestFixture attribute. This means we can run each fixture multiple times using different device configurations.
  2. The ConfigureDependencyContainer attribute decorating the IBaseTest interface. This attribute makes use of one of the bits of magic glue which comes for free when using nUnit and makes this whole test layer hang together: Action Attributes.

The Action Attribute

Action Attributes are a feature of nUnit 2, but are still supported in nUnit 3; they are designed to help with the composability of test logic. Note that there is a newer way of achieving similar composition of functionality introduced in nUnit 3, called Custom Attributes, but personally I’ve found the documentation lacking on this newer approach, with fewer examples and a less clear rationale for using each of the interface types, so I stuck with Action Attributes for now.

The Action Attribute I’ve built here is specifically designed to be applied to our IBaseTest interface. By using the “BeforeTest” method, I can trigger logic which uses Autofac to fetch various dependencies such as the IWebDriver instance and inject them into my test fixture:

[AttributeUsage(AttributeTargets.Interface)]
public class ConfigureDependencyContainerAttribute : TestActionAttribute
{
    public override void BeforeTest(ITest test)
    {
        var fixture = test.Fixture as IBaseTest;
        if (fixture != null)
        {
            // Set up the IoC container using the configuration module
            var builder = new ContainerBuilder();
            if (string.IsNullOrEmpty(fixture.CapabilityProfileName))
                throw new ConfigurationErrorsException("The capability profile name must be set");
            builder.RegisterModule(new AutofacConfigurationModule(fixture.CapabilityProfileName));
            var container = builder.Build();

            // Resolve the dependencies down through the object chain using the IoC container
            using (var scope = container.BeginLifetimeScope())
            {
                fixture.Driver = scope.Resolve<IWebDriver>();
                fixture.ConfigurationReader = scope.Resolve<IConfigurationReader>();
            }
        }
    }
}
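
The [BrowserstackLocal] attribute decorating the NavigationTests fixture earlier works along similar lines. I haven’t reproduced our implementation, but here is a hedged sketch which uses the Local class from the BrowserStack local binding to open a tunnel before the tests run and close it again afterwards; treat the lowercase start/stop methods and the option names as assumptions based on the binding at the time, and note the access key is a placeholder:

using System;
using System.Collections.Generic;
using BrowserStack;
using NUnit.Framework;
using NUnit.Framework.Interfaces;

[AttributeUsage(AttributeTargets.Class)]
public class BrowserstackLocalAttribute : TestActionAttribute
{
    public override void BeforeTest(ITest test)
    {
        var fixture = test.Fixture as IBaseTest;
        if (fixture == null) return;

        // Start the BrowserStack Local tunnel so the remote grid can reach sites inside our network
        fixture.BrowserStackLocal = new Local();
        fixture.BrowserStackLocal.start(new List<KeyValuePair<string, string>>
        {
            new KeyValuePair<string, string>("key", "YOUR_ACCESS_KEY") // placeholder
        });
    }

    public override void AfterTest(ITest test)
    {
        // Tear the tunnel down again once the tests have finished
        var fixture = test.Fixture as IBaseTest;
        fixture?.BrowserStackLocal?.stop();
    }
}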

The Autofac Configuration Module

Configuration modules in Autofac allow a set of services to be configured together; they’re really useful for packaging up a set of related dependency injection code. I built a configuration module in line with the Autofac documentation, and when the Action Attribute above calls the constructor of this module it passes in a string representing the capability profile (which, if you remember, was defined earlier in the nUnit TestFixture attributes).

The “Load” method of the configuration module fetches a set of configuration from the test config file, and then registers the appropriate WebDriver with AutoFac, ready to be used during execution of the test fixture:

protected override void Load(ContainerBuilder builder)
{
    var configurationReader = new ConfigurationReader(_capabilityProfileName);
    builder.RegisterInstance(configurationReader).As<IConfigurationReader>();

    var webDriver = GetWebDriver(configurationReader);
    builder.RegisterInstance(webDriver).As<IWebDriver>();

    TestContext.Progress.WriteLine($"Configuration is set to use {webDriver.GetType().Name}");
}

private IWebDriver GetWebDriver(IConfigurationReader configurationReader)
{
    IWebDriverBuilder webDriverBuilder;
    if (configurationReader.UseRemoteDriver)
    {
        webDriverBuilder = new RemoteWebDriverBuilder(configurationReader);
    }
    else
    {
        switch (configurationReader.Browser)
        {
            case "Chrome":
                webDriverBuilder = new ChromeWebDriverBuilder();
                break;
            case "IE":
                webDriverBuilder = new InternetExplorerWebDriverBuilder();
                break;
            // Other local browsers (e.g. Firefox) are handled in the same way
            default:
                throw new ConfigurationErrorsException(
                    $"Unrecognised browser: {configurationReader.Browser}");
        }
    }

    return webDriverBuilder.BuildWebDriver();
}

There are a number of helper classes I’ve referred to in the code here which I haven’t described explicitly in full:

  • The ConfigurationReader, which is just a typed wrapper around app.config configuration values
  • The XxxxWebDriverBuilder classes, which are just “single responsibility” classes, each of which implements a “BuildWebDriver()” method which returns an IWebDriver instance for Chrome, Firefox, IE or whatever browser you want.
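
As a hedged illustration (not the verbatim implementation), the builder interface and a couple of the builders might look something like this; the IConfigurationReader property names used here, such as RemoteHubUrl, are assumptions:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Remote;

public interface IWebDriverBuilder
{
    IWebDriver BuildWebDriver();
}

// Local Chrome build - nothing clever, just new up a ChromeDriver
public class ChromeWebDriverBuilder : IWebDriverBuilder
{
    public IWebDriver BuildWebDriver()
    {
        return new ChromeDriver();
    }
}

// Remote build - points a RemoteWebDriver at the BrowserStack hub using values from configuration
public class RemoteWebDriverBuilder : IWebDriverBuilder
{
    private readonly IConfigurationReader _configurationReader;

    public RemoteWebDriverBuilder(IConfigurationReader configurationReader)
    {
        _configurationReader = configurationReader;
    }

    public IWebDriver BuildWebDriver()
    {
        var capabilities = new DesiredCapabilities();
        capabilities.SetCapability("browser", _configurationReader.Browser);
        // ... further capabilities (OS, versions, BrowserStack credentials) also come from configuration

        return new RemoteWebDriver(new Uri(_configurationReader.RemoteHubUrl), capabilities);
    }
}

Keeping each builder to a single responsibility is what lets the Autofac module above swap implementations without the test fixtures knowing or caring which browser they end up running against.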

Triggering the UI Tests

Getting to this point was the hardest part. With all of that plumbing out of the way, the UI test project is built within our deployment pipeline and a UI test .dll file pops out of the end of our commit stage, ready to be executed.

To do this, I daisy-chained an extra build configuration in our pipeline to pick up the compiled dll and used the out-of-the-box nUnit runner in TeamCity to trigger the execution of the tests:

[Screenshot: the AutomatedAcceptanceTest build configuration in TeamCity]

The whole process works well; tests are triggered by TeamCity following a deployment, sent up to Browserstack, executed against their cloud grid, and the results reported back in the TeamCity interface and log files.

Further details of the test run, including timings, log messages and even videos, are available within the BrowserStack web interface if they are needed for debugging and evidencing test execution.

Integrating TeamCity and Octopus Deploy

Around 18 months ago, we went through a significant shift in deployment pipeline tooling within our agency.

After 7 years of using an on-premise Team Foundation Server (TFS) instance for version control and builds, integrating with MSDeploy for deployment, I realised we needed to be doing more within our build processes, and there were better tools available for our needs. We went through a 2-month period of thinking deeply about our requirements and trialling different software in order to land at the tools which we felt were the correct long-term fit: Git, TeamCity and Octopus Deploy. In some senses coming to that decision was the easy part. The harder part came later: how do we configure those tools in the best way to support our development workflow? As we configured the tooling, this was also an opportunity to look again at our processes and identify ways in which the new tools allow us to improve quality.

If anyone’s going through an installation and configuration of these tools, here are the steps we carried out, and some lessons learned along the way. There are a lot of links in here to other articles with more detailed information, but there weren’t many articles which pulled everything together into one place, which is what I wanted to cover here.

Installing the basics

There are plenty of resources available about how to install these products, so I’m not going to dwell on the installation itself here.

On the build agents there are some pre-requisites that we installed on each agent. Yours will vary according to the dependencies of the projects you’re building, but Git and Visual Studio were two dependencies that we installed which most people will probably also need. Knowing that we would likely require new agents in the future, we created a script which uses Chocolatey to install these extra pre-requisites.

For the Octopus Tentacles, you can either download and run the standard Tentacle installer on each target machine, or alternatively use the Chocolatey tentacle package. Regardless of which path you take, there’s a final step of configuring options for your tentacle, including the secure key it uses to handshake with your server. Again, we opted to script both the Chocolatey install step, and the configuration step.

Installing and Configuring Plugins

With the vanilla TeamCity and Octopus platforms installed and running, we then installed some extra plugins, most notably the Octopus Deploy integration plugin for TeamCity and a Node.js plugin (the latter gets used further down to restore Node packages and run front-end pipelines).

Connect TeamCity to BitBucket

We wanted a standardised connection from TeamCity to BitBucket. The only thing that would need to change from one solution to the next was the Git URL, so we created a new VCS root at <root> project level and connected it to a BitBucket account.

Note: You may have to open up SSH comms over port 22 from the TeamCity build server and agent in order to facilitate SSH access from TeamCity to BitBucket (otherwise, when testing the VCS Root connectivity, you get the error “List remote refs failed. Not authorised”).

  • In TeamCity, go to Administration -> <root> Project -> New VCS root and give your VCS root a suitable name
  • Create an Access Key (with read-only permissions) and upload the Public Key to BitBucket.
  • Store the Private Key in TeamCity (Administration -> <root> project -> SSH Keys)
  • Set the Authentication Method on the VCS root to “Uploaded Key” and in the Uploaded Key field select the one you have uploaded.
  • Parameterize the VCS root “Fetch URL” field with a repository URL to allow the VCS root to be used for multiple builds (“%system.RepositoryUrl%”)
  • Decide on your preferred branching strategy and configure VCS root Branch Specifications to monitor relevant branches according to your standard.

On the last bullet point, we chose to adopt GitFlow as our preferred branching strategy. We selected this as a convention within our agency because it was already an existing and well documented standard (no thanks, NIH!) and it was fairly straightforward for our team to get their head around. Most of the solutions we implement for our clients are iterated on through a regular series of enhancements, fixes and larger projects. Releases are often approved via Change Board and business priorities often change between releases, so Gitflow’s feature / release model works well for us.

One of the reasons it’s so useful to adopt a standard like this is that you can tell TeamCity to build any branch matching your standard. So if a developer creates a feature branch and pushes it to the remote, TeamCity will pick up on that and automatically build the branch. To support this, we configured our Branch Specification in the VCS root to monitor GitFlow compliant branches:

+:refs/heads/(develop)
+:refs/heads/(feature/*)
+:refs/heads/(hotfix/*)
+:refs/heads/(release/*)
+:refs/heads/(support/*)
+:refs/heads/(master)

Adding a TeamCity build configuration template

One of the key factors in our selection of TeamCity was the ability to create template build configurations, so that similar builds could all inherit the same template. That way, if we needed to add extra, universal build steps or change an existing step, it could be done in one place.

If you run an internal development team which supports a handful of solutions then you might not create new build configurations very often. But in an agency environment with dozens of active clients, having a template means that the process of setting up a new build configuration is much faster, and if you need to change a setting globally it can be done in the template.

Within 2 months of implementing a template for our build configurations, it was proving to be a good decision. Not only were our builds much more consistent and faster to set up, but we had a situation in which our build agents were running out of space quickly, and we found that the cleanup settings could be changed within the template. This meant that with one small configuration change in the template, we could reduce the number of old builds hanging around from 10 down to 3 across all projects.

We created a new build template at <root> project level and, as with the build agent pre-requisites, you’ll want a set of steps suitable for the type of projects you need to build. But here are some guidelines regarding the settings and steps which we figured out along the way:

Create a semantic version

There are ongoing debates about the use of SemVer, particularly for large, composite applications where the lower-level details of the SemVer standard are more focused on libraries and APIs. At the very least, I’d advocate having a 3-part version number which uniquely identifies your releases and tells you something about how much the application has changed since the last release. There are practical reasons for doing this, such as the way OctoPack and Octopus Deploy work. It’s also pretty vital for creating good release documentation and making sure everyone on your team is on the same page about what changes are going to which environment, and when.

We hooked in the GitVersion tool using the TeamCity command-line runner. This tool injects a load of useful version metadata into your build and, as long as you’re following a consistent branch convention, means that you don’t have any manual steps to worry about to version your packages.

Restore external dependencies

You don’t really want to be checking your Nuget dependencies into version control. Likewise, you don’t want compiled .css files in there if you use a tool like SASS. These will not only bloat your repo, but worse still make mistakes more likely (for example your checked-in CSS might not be in sync with the source SASS).

We used the Nuget installer runner to restore back-end dependencies, and the Node plugin I mentioned earlier to restore Node packages and run front-end pipelines (mostly Gulp these days).

Create Deployable Packages

The output we’re looking for from our build process is one or more Octopus-compatible Nuget packages (.nupkg files). Each of these packages represents an atomic, deployable unit – usually a web application, database migrations, Web services etc – each of which is one component of your overall application. In TeamCity terms, you want to create these as Artifacts of the build. Octopus provide a tool called OctoPack for this purpose, and the ability to integrate OctoPack easily into TeamCity builds.

TeamCity has a built-in Nuget server which can host the packages as they drop out of the builds, and acts as a source for Octopus to come and collect them when it’s told to carry out a deployment. This Nuget service is therefore the main interface between TeamCity and Octopus and enabling it is a one-off job in your TeamCity instance:

Administration -> NuGet Feed -> Enable

This screen then provides the URL for the feed, along the lines of:

http://<your-teamcity-root>/httpAuth/app/nuget/v1/FeedService.svc/

In order to package up the outputs of a Visual Studio project as a Nuget package, you’ll need to add the ‘OctoPack’ Nuget package to your project. It’s done this way because you may not want to deploy every project in your solution.

Triggers

We set up a VCS checkin trigger which adds a build to the queue whenever a commit is detected:

Add new trigger -> VCS Trigger

Finally, if you’re taking the ‘template’ approach like we did, you need to create a new build configuration which inherits from the template and set parameters accordingly for your solution.

Connect TeamCity to Octopus Deploy

  • At the TeamCity root level we created a new system variable “%system.OctopusDeployUrl%” and set its value to the URL of our Octopus Deploy instance
  • Configure Octopus Deploy to consume the TeamCity NuGet feed (see instructions at https://octopus.com/docs/api-and-integration/teamcity)

You’ll need to then configure service accounts in TeamCity and Octopus, so that you’ve got credentials available (with an appropriate level of permissions) for each to talk to the other.

In TeamCity

  • Create a service account in TeamCity (called something like “OctopusServiceAccount”). This is the account used in the ‘credentials’ section when setting up the external Nuget feed in Octopus to authenticate to the TeamCity Service

In Octopus Deploy

  • Create a service account in Octopus Deploy (called something like “TeamCityServiceAccount”), marked as a service account (therefore there is no password), and create a new API key within this account. This is the API Key that TeamCity deploy steps can use to trigger a deployment in Octopus.
  • Create a “TeamCity Integration Role” with custom permissions and add the TeamCity service account to this role (see http://help.octopusdeploy.com/discussions/questions/2464-minimum-permissions-for-a-service-account-role)

Adding a TeamCity deployment step configuration

So with TeamCity happily connecting to BitBucket as part of a build, and the output package(s) from this build available to Octopus, the next step is for TeamCity to tell Octopus to create a release and deploy it.

There’s an interesting quirk here that means the artifacts of a TeamCity build configuration won’t be available and published until after every step of the build is completed. This means that you can’t create artifacts and also include a deployment step for those artifacts as part of the same build configuration (see the tip on “delayed package publishing” in the Octopus docs), so you have to use a separate build configuration in TeamCity which isn’t actually doing any “building” as such, but is responsible for triggering deployments and has a dependency upon your first build configuration.

In our case, we wanted to have continuous deployment of the ‘develop’ branch to a CI environment, so we set our snapshot dependency up to only deploy builds coming from the develop branch. This is covered in a video in the Octopus Deploy documentation.

Configure Octopus Deploy

You’ll need to set up projects and environments in Octopus Deploy as required. Again I’m not going to go into too much detail here otherwise I’ll replicate what’s already in the Octopus documentation, but I will mention that we initially set up a single lifecycle for each of our solutions consisting of each target environment end-to-end:

CI -> Internal Test -> Client UAT -> Production

This makes good use of the Octopus ‘promote release’ feature, meaning that we didn’t need a new build to be done to get a build candidate to a downstream environment, and that can be a timesaver if your builds take a long time to run.

However, the implication of this is that every build that made it to production originated from our develop branch – because that’s where the CI environment got its builds, and CI sat at the front of this lifecycle. The builds were tagged with ‘unstable’ version metadata, and we were finding that an extra post-deployment step was required to ensure that the right code was merged up to the master branch following deployment. It was all too easy to neglect the housekeeping, and so master would fall out of date.

So, we decided to use channels in Octopus and set these up using version rules as follows:


Continuous Integration channel

Has one lifecycle containing a single, Continuous Integration environment, and restricts packages using a version rule consisting of the tag “^unstable”. This means that all CI builds come from the ‘develop’ branch.


QA channel

Has one lifecycle containing two environments – an internal Test environment and a downstream, client-facing UAT test environment. Packages are restricted using a version rule consisting of the tag “^beta”. This means that all QA builds come from ‘release’ branches.

Release candidates can be promoted from the internal Test environment to the downstream UAT environment.


Production channel

Has one lifecycle containing a single, production environment, and restricts packages using a version rule consisting of the tag “$^”. This means that all production builds come from the ‘master’ branch.

In order to deploy to production, this means that we first need to merge the release branch to master, create a final build (minus branch metadata in the packages) and then release this. However, it means we always have confidence that master is representative of the codebase on production.

 

Customising work item types in TFS2010

We recently examined our bug lifecycle at Mando Group, and one of the improvements we decided to implement was a simple categorisation so that we could report on the source of issues raised in Team Foundation Server. For example, some issues raised may be down to missing images which have not been added to the content management system, whereas others may be down to bona fide bugs in code. We wanted to be able to quantify these different issue types on each project.

New Global List

The first thing I did was take advantage of the Global List feature in TFS2010 (see the MSDN article on managing global lists) to set up the list of classifications we wanted to use across all projects. (NB: I’ve shortened the LISTITEM elements for brevity.)

I placed the following XML in a local file (e.g. c:\DefectCategories.xml):

<?xml version="1.0" encoding="utf-8"?>
<gl:GLOBALLISTS xmlns:gl="http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists">
  <GLOBALLIST name="Mando – DefectCategories">
    <LISTITEM value="Bug" />
    <LISTITEM value="Content missing or incorrect" />
    <LISTITEM value="Duplicate" />
  </GLOBALLIST>
</gl:GLOBALLISTS>

From the VS2010 command prompt, run the following command:

witadmin importgloballist /collection:http://yourtfsdomain.com:8080/tfs/yourcollection /f:"c:\DefectCategories.xml"

To confirm the global list has been added correctly, run this command and verify that the global list is present in the output:

witadmin exportgloballist /collection:http://yourtfsdomain.com:8080/tfs/yourcollection

Note that once the Global List has been created, items can be added to it using the Process Editor in the TFS power tools (see http://stackoverflow.com/questions/3507974/adding-items-to-a-global-list-in-tfs-via-the-sdk)

Modify the WorkItem Type on your Project

This step shows how to implement the global list categorisation on a bug work item type on a specific project. If you choose to, you can make similar changes to other work item types, and also it is possible to change the process template XML so that the change applies to all projects subsequently created from this template. These options are outside the scope of this article.

What we’re doing here is adding a new field which references the Global List, so that the values are limited to those available in the Global List, and a new control in the bug layout so that we have a drop-down to choose the value in the UI. Importantly, we’re also adding an extra step in the bug workflow which means that the categorisation is mandatory at the point the bug is being closed.

First export the bug work item definition from the project you wish to amend it for. The example below exports the Bug work item type from the “YourProject” project in the ‘yourcollection’ to a file called “bug.xml”:

witadmin exportwitd /collection:http://yourtfsdomain.com:8080/tfs/yourcollection/ /p:YourProject /n:Bug /f:"c:\bug.xml"

Open the XML file for the bug in a text editor (Visual Studio or NotePad is fine) and make the following changes:

1. At the end of the FIELDS element add the new field as follows:

<FIELD name="Mando Classification" refname="Mando.Classification" type="String" reportable="dimension">
  <ALLOWEDVALUES>
    <GLOBALLIST name="Mando – DefectCategories" />
  </ALLOWEDVALUES>
  <HELPTEXT>This categorisation allows reports to be generated to establish the cause of bugs</HELPTEXT>
</FIELD>

2. Find the STATE element which has the value attribute of “Closed”. Within this element, ensure the classification is mandatory as part of the workflow by including this as a required field:

<STATE value="Closed">
  <FIELDS>
    <FIELD refname="Mando.Classification">
      <REQUIRED />
    </FIELD>
  </FIELDS>
</STATE>

3. Within the Layout element, find the Classification group and add a control to represent the classification so it can be set by users in Visual Studio (note that the control group you add it to is up to you, and your XML may look different depending on which process template you’re using):

<Group Label="Classification">
  <Column PercentWidth="100">
    <Control FieldName="System.AreaPath" Type="WorkItemClassificationControl" Label="&amp;Area:" LabelPosition="Left" />
    <Control FieldName="System.IterationPath" Type="WorkItemClassificationControl" Label="Ite&amp;ration:" LabelPosition="Left" />
    <Control FieldName="Mando.Classification" Type="FieldControl" Label="Defect Classification" LabelPosition="Left" />
  </Column>
</Group>

Lastly, you need to import the updated XML bug definition into TFS:

witadmin importwitd /collection:http://yourtfsdomain.com:8080/tfs/yourcollection/ /p:YourProject /f:"c:\bug.xml"

Don’t forget to update the project name and filename parameters for your own project.

There are other approaches to making these changes, including the more visual approach in the WIT designer (TFS power tools). I took the XML modification approach for better consistency between projects.

Problem downloading Windows 8 apps

I got a Lenovo Thinkpad 2 tablet this week and, like most people, the first thing I did was try to load up a bunch of apps from the Windows Store. Unfortunately, all the apps I tried to install got stuck either in the ‘downloading’ or ‘pending’ status.

After a bit of searching, there are several suggested remedies, but none of them worked for me. In the end the solution seemed to be as simple as making sure the system clock had the correct time (mine was a few hours behind).

Once the system clock was correct, my apps downloaded and installed correctly.

Using PowerShell to search within files

My WordPress blog got hacked recently. I still haven’t got to the root vulnerability, but I suspect I’ve only got myself to blame for not upgrading to the latest version. There are plenty of articles on how to recover from this situation, but one of the things I found myself having to do was locate some offending code within the WordPress .php files which injected an IFrame with a malicious target into my blog pages.
Once I had grabbed a backup copy of my site files, I started tinkering around with Windows Search to get it to index inside .php file contents, but realised some simple PowerShell script was probably the quicker solution:

Select-String -Path "C:\path-to-my-wordpress-files\*.php" -Pattern iframe

This one-liner gave me a list of all the .php files containing an iframe, but the pattern could easily be adapted to be more specific. PowerShell can be such a lifesaver at times.

18 simple rules

A friend blogged this recently and it struck such a chord that I thought I’d repeat here in entirety rather than simply link. At the start of the new millennium the Dalai Lama apparently issued eighteen rules for living. Here they are.

1. Take into account that great love and great achievements involve great risk.
2. When you lose, don’t lose the lesson.
3. Follow the three Rs:
Respect for self
Respect for others
Responsibility for all your actions.
4. Remember that not getting what you want is sometimes a wonderful stroke of luck.
5. Learn the rules so you know how to break them properly.
6. Don’t let a little dispute injure a great friendship.
7. When you realize you’ve made a mistake, take immediate steps to correct it.
8. Spend some time alone every day.
9. Open your arms to change, but don’t let go of your values.
10. Remember that silence is sometimes the best answer.
11. Live a good, honourable life. Then when you get older and think back, you’ll be able to enjoy it a second time.
12. A loving atmosphere in your home is the foundation for your life.
13. In disagreements with loved ones, deal only with the current situation. Don’t bring up the past.
14. Share your knowledge. It’s a way to achieve immortality.
15. Be gentle with the earth.
16. Once a year, go someplace you’ve never been before.
17. Remember that the best relationship is one in which your love for each other exceeds your need for each other.
18. Judge your success by what you had to give up in order to get it.

TechDays 2011

Today a contingent from the Mando Group programming team took an excursion to the Microsoft TechDays event in Fulham to hear about a cross section of innovations in the Microsoft Web platform. There will be further presentations over the next few days on the Windows Azure platform, Silverlight and Windows Phone 7, and some of the team will be blogging about these topics later in the week. Those of us present for today’s Web track heard talks on a broad mixture of web related subjects. The highlights for me were:

Bruce Lawson gave an outstanding overview of the development of the HTML 5 standard, covering the history, politics and relationships between the various individuals and working groups involved, then moving on to explain what HTML 5 really is. The key take-home messages for me were that the HTML 5 standard succeeded over XHTML 2 because:

  • it is backwards compatible,
  • is pragmatic (vs XHTML 2’s idealistic, puritan stance),
  • puts the web user ahead of the author, developer or any other party
  • allows for errors in authored markup. This is an inevitable consequence of human developer error, content management system limitations and 3rd party plugins, but importantly the standard describes exactly how the browser parser should respond to these errors, removing the problem of inconsistent DOM rendering between browser implementations.

Bruce also helped to dispel the concerns about HTML5 being an ‘unfinished’ standard by pointing out that while some parts of the standard are still under development, others are completed and already being used across the web in browser implementations and sites.

Martin Beeby gave a presentation on developing the TechDays website using Umbraco, an open source .NET Content Management System which we work with at Mando Group, and hosting the solution on the Windows Azure platform. The most exciting aspect of this is that Umbraco v5, due for release in June, will make this hosting configuration considerably easier to set up and scalable support for cloud deployment will be baked into the core product. We also learned that the website for the latest “Take That” tour was powered by Umbraco, and at peak times when ticket sales were announced the Umbraco site continued to perform while the separate ticketing site (not using Umbraco) crashed!

The other session which I personally found really interesting was the MVC3 update by Steven Sanderson. In contrast to last year’s MVC2 update, the new additions in MVC3 are not so much around the core framework as around the tooling and development infrastructure surrounding ASP.NET, but these are major improvements in their own right. The addition of scaffolding (another tip of the hat to Rails) and Entity Framework code-first development make it faster than ever to get a data-driven application up and running, and the speed with which you can architect apps by pulling in components using the Nuget extension in Visual Studio will save many hours of work for developers.

The example app developed by Steve used EF4 with the repository pattern to separate data access concerns, and NInject to resolve dependencies within the app, exactly the architecture we have already used successfully at Mando Group for our last major MVC implementation for a client. It’s lovely to see how the ASP.NET MVC approach naturally leads the developer down the route of better architected web apps. Steve also covered some of the basics of Razor syntax, SQL Server CE (Compact Edition) and IIS Express (based on IIS 7.5). Again, all great additions to the Microsoft web stack which mean the developer has to spend less time thinking about underlying setup and infrastructure, and more time writing code which solves real business need.