Around 18 months ago, we went through a significant shift in deployment pipeline tooling within our agency.
After 7 years of using an on-premise Team Foundation Server (TFS) instance for version control and builds, integrated with MSDeploy for deployment, I realised we needed to be doing more within our build processes, and that better tools were available for our needs. We went through a 2-month period of thinking deeply about our requirements and trialling different software, and landed on the tools we felt were the correct long-term fit: Git, TeamCity and Octopus Deploy. In some senses, coming to that decision was the easy part. The harder part came later: how do we configure those tools in the best way to support our development workflow? Configuring the tooling was also an opportunity to look again at our processes and identify ways in which the new tools would allow us to improve quality.
If you're going through an installation and configuration of these tools, here are the steps we carried out, and some lessons learned along the way. There are a lot of links in here to other articles with more detailed information, but there weren't many articles which pulled everything together into one place, which is what I wanted to do here.
Installing the basics
There are plenty of resources available about how to install these products, so I’m not going to dwell on these:
- Install the TeamCity build server + a minimum of one build agent
- Install the Octopus Deploy server + Tentacles as required on target servers
There are some prerequisites that we installed on each build agent. Yours will vary according to the dependencies of the projects you're building, but Git and Visual Studio are two dependencies that most people will probably also need. Knowing that we would likely require new agents in the future, we created a script which uses Chocolatey to install these extra prerequisites.
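As a rough sketch of what that script looks like (it assumes Chocolatey is already installed on the agent; the package names here are illustrative, so check the Chocolatey gallery for the editions and versions you actually need):

# install-agent-prereqs.ps1 (run on a fresh build agent)
choco install git -y
choco install visualstudio2017buildtools -y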
For the Octopus Tentacles, you can either download and run the standard Tentacle installer on each target machine, or use the Chocolatey tentacle package. Whichever path you take, there's a final step of configuring options for your Tentacle, including the secure key it uses to handshake with your server. Again, we opted to script both the Chocolatey install step and the configuration step.
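Our Tentacle script looked roughly like the following. This is a minimal sketch for a listening Tentacle on the default port; the instance name, paths and server thumbprint are placeholders to substitute with your own values:

# install-tentacle.ps1 (run on each target machine)
choco install octopusdeploy.tentacle -y
cd "C:\Program Files\Octopus Deploy\Tentacle"
.\Tentacle.exe create-instance --instance "Tentacle" --config "C:\Octopus\Tentacle.config"
.\Tentacle.exe new-certificate --instance "Tentacle"
# --trust takes the thumbprint of your Octopus server: the handshake key mentioned above
.\Tentacle.exe configure --instance "Tentacle" --app "C:\Octopus\Applications" --port 10933 --trust "<your-octopus-server-thumbprint>"
.\Tentacle.exe service --instance "Tentacle" --install --start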
Installing and Configuring Plugins
With the vanilla TeamCity and Octopus platforms installed and running, we then installed some extra plugins:
- We wanted to hand off built packages from TeamCity to Octopus Deploy, and the recommended approach is to use the Octopus Deploy plugin for TeamCity
- We wanted to run front-end pipelines within our builds, so we installed a Node plugin which provides build runners for NPM, Grunt and Gulp
- We wanted to be able to restore NuGet dependencies as part of the build process, so we installed NuGet on the build agents. The full instructions are available in the TeamCity docs, but the bit you're looking for is in Administration -> Tools -> Install Tool -> select NuGet version 3.5.0 (and 'Set as default')
Connect TeamCity to BitBucket
We wanted a standardised connection from TeamCity to BitBucket. The only thing that would need to change from one solution to the next was the Git URL, so we created a new VCS root at <root> project level and connected it to a BitBucket account.
Note: You may have to open up SSH comms over port 22 from the TeamCity build server and agent in order to facilitate SSH access from TeamCity to BitBucket (otherwise, when testing the VCS Root connectivity, you get the error “List remote refs failed. Not authorised”).
- In TeamCity, go to Administration -> <root> Project -> New VCS root and give your VCS root a suitable name
- Create an Access Key (with read-only permissions) and upload the Public Key to BitBucket (a sketch of generating the key pair follows this list).
- Store the Private Key in TeamCity (Administration -> <root> project -> SSH Keys)
- Set the Authentication Method on the VCS root to “Uploaded Key” and in the Uploaded Key field select the one you have uploaded.
- Parameterize the VCS root “Fetch URL” field with a repository URL to allow the VCS root to be used for multiple builds (“%system.RepositoryUrl%”)
- Decide on your preferred branching strategy and configure VCS root Branch Specifications to monitor relevant branches according to your standard.
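For the Access Key itself, generating the pair is a one-liner. A sketch (the file name and comment are placeholders; note that TeamCity needs the private key without a passphrase, or you'll have to supply the passphrase when uploading it):

# Generate the key pair; upload teamcity_bitbucket.pub to BitBucket as the Access Key,
# and upload the private key (teamcity_bitbucket) via TeamCity's SSH Keys screen
ssh-keygen -t rsa -b 4096 -C "teamcity-bitbucket" -f teamcity_bitbucket -N ""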
On the last bullet point, we chose to adopt GitFlow as our preferred branching strategy. We selected this as a convention within our agency because it was already an existing and well-documented standard (no thanks, NIH!) and it was fairly straightforward for our team to get their heads around. Most of the solutions we implement for our clients are iterated on through a regular series of enhancements, fixes and larger projects. Releases are often approved via Change Board, and business priorities often change between releases, so GitFlow's feature/release model works well for us.
One of the reasons it’s so useful to adopt a standard like this is that you can tell TeamCity to build any branch matching your standard. So if a developer creates a feature branch and pushes it to the remote, TeamCity will pick up on that and automatically build the branch. To support this, we configured our Branch Specification in the VCS root to monitor GitFlow compliant branches:
+:refs/heads/(develop)
+:refs/heads/(feature/*)
+:refs/heads/(hotfix/*)
+:refs/heads/(release/*)
+:refs/heads/(support/*)
+:refs/heads/(master)
Adding a TeamCity build configuration template
One of the key factors in our selection of TeamCity was the ability to create template build configurations, so that similar builds could all inherit the same template. That way, if we needed to add extra, universal build steps or change an existing step, it could be done in one place.
If you run an internal development team which supports a handful of solutions then you might not create new build configurations very often. But in an agency environment with dozens of active clients, having a template means that the process of setting up a new build configuration is much faster, and if you need to change a setting globally it can be done in the template.
Within 2 months of implementing a template for our build configurations, it was proving to be a good decision. Not only were our builds much more consistent and faster to set up, but we had a situation in which our build agents were running out of space quickly, and we found that the cleanup settings could be changed within the template. This meant that with one small configuration change in the template, we could reduce the number of old builds hanging around from 10 down to 3 across all projects.
We created a new build template at <root> project level and, as with the build agent prerequisites, you'll want a set of steps suitable for the type of projects you need to build. But here are some guidelines regarding the settings and steps which we figured out along the way:
Create a semantic version
There are ongoing debates about the use of SemVer, particularly for large, composite applications, where the lower-level details of the SemVer standard are more focused on libraries and APIs. At the very least, I'd advocate having a 3-part version number which uniquely identifies your releases and tells you something about how much the application has changed since the last release. There are practical reasons for doing this, such as the way OctoPack and Octopus Deploy work. It's also pretty vital for creating good release documentation and making sure everyone on your team is on the same page about which changes are going to which environment, and when.
We hooked in the GitVersion tool using the TeamCity command-line runner. This tool injects a load of useful version metadata into your build and, as long as you're following a consistent branch convention, means there are no manual steps to worry about when versioning your packages.
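The command-line step itself is tiny. A sketch, assuming GitVersion.exe is available on the agent (e.g. installed via Chocolatey):

# TeamCity command-line build step: run GitVersion in build server mode
# This emits TeamCity service messages which expose parameters such as
# %GitVersion.SemVer% and %GitVersion.NuGetVersion% to subsequent build steps
GitVersion.exe /output buildserver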
Restore external dependencies
You don't really want to be checking your NuGet dependencies into version control. Likewise, you don't want compiled .css files in there if you use a tool like SASS. These will not only bloat your repo, but worse still, make mistakes more likely (for example, your checked-in CSS might not be in sync with the source SASS).
We used the NuGet Installer runner to restore back-end dependencies, and the Node plugin I mentioned earlier to restore Node packages and run front-end pipelines (mostly Gulp these days).
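In build-step terms, that amounts to something like the following (the solution name and Gulp task are placeholders; in practice the restore is a dedicated NuGet Installer step and the Node commands are the plugin's runners, but the effect is the same):

# Restore back-end dependencies (what the NuGet Installer runner does for you)
nuget restore MySolution.sln
# Restore front-end dependencies and run the pipeline (what the Node plugin's runners do)
npm install
gulp build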
Create Deployable Packages
The output we're looking for from our build process is one or more Octopus-compatible NuGet packages (.nupkg files). Each of these packages represents an atomic, deployable unit (usually a web application, database migrations, web services, etc.), each of which is one component of your overall application. In TeamCity terms, you want to create these as artifacts of the build. Octopus provides a tool called OctoPack for this purpose, along with the ability to integrate OctoPack easily into TeamCity builds.
TeamCity has a built-in NuGet server which can host the packages as they drop out of the builds, and acts as a source for Octopus to collect them from when it's told to carry out a deployment. This NuGet feed is therefore the main interface between TeamCity and Octopus, and enabling it is a one-off job in your TeamCity instance:
Administration -> NuGet Feed -> Enable
This screen then provides the URL for the feed, along the lines of:
http://<your-teamcity-root>/httpAuth/app/nuget/v1/FeedService.svc/
In order to package up the outputs of a Visual Studio project as a NuGet package, you'll need to add the 'OctoPack' NuGet package to each project you want to deploy. It's done per-project because you may not want to deploy every project in your solution.
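With the OctoPack package installed, a couple of MSBuild properties switch the packaging on. A sketch of the relevant arguments (in TeamCity, these go in the MSBuild/Visual Studio runner's command-line parameters; the version property assumes you're flowing GitVersion's output through):

# RunOctoPack enables packaging; OctoPackPackageVersion stamps the semantic version onto the .nupkg
msbuild MySolution.sln /t:Build /p:Configuration=Release /p:RunOctoPack=true /p:OctoPackPackageVersion=%GitVersion.NuGetVersion%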
Triggers
We set up a VCS checkin trigger which adds a build to the queue whenever a commit is detected:
Add new trigger -> VCS Trigger
Finally, if you’re taking the ‘template’ approach like we did, you need to create a new build configuration which inherits from the template and set parameters accordingly for your solution.
Connect TeamCity to Octopus Deploy
- At the TeamCity root level, we created a new system variable "%system.OctopusDeployUrl%" and set its value to the URL of our Octopus Deploy instance
- Configure Octopus Deploy to consume the TeamCity NuGet feed (see instructions at https://octopus.com/docs/api-and-integration/teamcity)
You'll then need to configure service accounts in TeamCity and Octopus, so that you've got credentials available (with an appropriate level of permissions) for each to talk to the other.
In TeamCity
- Create a service account in TeamCity (called something like "OctopusServiceAccount"). This is the account used in the 'credentials' section when setting up the external NuGet feed in Octopus, to authenticate to the TeamCity service
In Octopus Deploy
- Create a service account in Octopus Deploy (called something like "TeamCityServiceAccount"), marked as a service account (so it has no password), and create a new API key within this account. This is the API key that TeamCity deploy steps can use to trigger a release in Octopus.
- Create a “TeamCity Integration Role” with custom permissions and add the TeamCity service account to this role (see http://help.octopusdeploy.com/discussions/questions/2464-minimum-permissions-for-a-service-account-role)
Adding a TeamCity deployment step configuration
So with TeamCity happily connecting to BitBucket as part of a build, and the output package(s) from this build available to Octopus, the next step is for TeamCity to tell Octopus to create a release and deploy it.
There's an interesting quirk here: the artifacts of a TeamCity build configuration won't be available and published until after every step of the build has completed. This means you can't create artifacts and also include a deployment step for those artifacts within the same build configuration (see the tip "delayed package publishing" in the Octopus docs). Instead, you have to use a separate build configuration in TeamCity which doesn't do any "building" as such, but is responsible for triggering deployments and has a dependency upon your first build configuration.
In our case, we wanted to have continuous deployment of the ‘develop’ branch to a CI environment, so we set our snapshot dependency up to only deploy builds coming from the develop branch. This is covered in a video in the Octopus Deploy documentation.
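Under the hood, the plugin's 'OctopusDeploy: Create release' runner drives the Octo.exe command line, and the equivalent invocation looks roughly like this (the project name, environment and API key are placeholders; the server URL is the parameter we set up earlier):

# Equivalent of the TeamCity 'OctopusDeploy: Create release' build step
octo.exe create-release --project "MyProject" --version %GitVersion.NuGetVersion% --server %system.OctopusDeployUrl% --apiKey API-XXXXXXXXXXXXXXXX --deployto "CI"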
Configure Octopus Deploy
You'll need to set up projects and environments in Octopus Deploy as required. Again, I'm not going to go into too much detail here, otherwise I'll just replicate what's already in the Octopus documentation, but I will mention that we initially set up a single lifecycle for each of our solutions, consisting of each target environment end-to-end:
CI -> Internal Test -> Client UAT -> Production
This makes good use of the Octopus 'promote release' feature, meaning that we didn't need a new build in order to get a release candidate to a downstream environment, which can be a timesaver if your builds take a long time to run.
However, the implication of this is that every build that made it to production originated from our develop branch, because that's where the CI environment got its builds, and CI sat at the front of this lifecycle. The builds were tagged with 'unstable' version metadata, and we were finding that an extra post-deployment step was required to ensure that the right code was merged up to the master branch following deployment. It was all too easy to neglect the housekeeping, and therefore master would fall out of date.
So, we decided to use channels in Octopus, and set these up using version rules as follows:
Continuous Integration channel
Has one lifecycle containing a single Continuous Integration environment, and restricts packages using a version rule with the pre-release tag "^unstable". This means that all CI builds come from the 'develop' branch.
QA channel
Has one lifecycle containing two environments: an internal Test environment and a downstream, client-facing UAT environment. Packages are restricted using a version rule with the pre-release tag "^beta". This means that all QA builds come from 'release' branches.
Release candidates can be promoted from the internal Test environment to the downstream UAT environment.
Production channel
Has one lifecycle containing a single Production environment, and restricts packages using a version rule with the tag "$^" (which matches an empty pre-release tag). This means that all production builds come from the 'master' branch.
In order to deploy to production, this means we first need to merge the release branch to master, create a final build (minus any branch metadata in the packages) and then release this. However, it means we always have confidence that master is representative of the codebase in production.
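To make those version rules concrete, here's roughly how GitVersion-produced package versions map onto the three channels (the exact pre-release format depends on your GitVersion configuration):

1.4.0-unstable0007 (from develop) -> Continuous Integration channel (matches ^unstable)
1.4.0-beta0002 (from release/1.4.0) -> QA channel (matches ^beta)
1.4.0 (from master) -> Production channel (the empty pre-release tag matches $^)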