Azure DevOps: Engineering an Automated Framework

I have spent many years working with the core concepts of DevOps, since long before the term was coined and became popular. The ideas are not new. Teams have used these techniques for ages, but the term “DevOps” ties a set of practices together into a common framework.

DevOps is not a single tool, product or platform. It is a framework that also changes the culture within a team. However, Microsoft has renamed their platform “Azure DevOps”, which does confuse the matter.

Here, I will discuss how I use Azure DevOps on a daily basis and how it has changed and improved team performance.

On a historical note, the Azure DevOps platform grew out of Team Foundation Server (TFS), which I have worked with since the days of SourceSafe. Initially it was only used for managing code versioning, but it evolved to include reporting, task management, testing and release management. Azure DevOps was also called Visual Studio Online (VSO) before the DevOps rebranding.

So, what does this tool, and the techniques of the DevOps framework, bring to the table?

Improved communication

I use a myriad of tools for communication: anything from written documents, whiteboards and email to systems such as Microsoft Planner, Microsoft Teams, Trello, Slack or Azure DevOps. There is no single correct platform, but a team should use a tool that combines the data from multiple systems. Slack and Teams are good at combining several sources of data into one stream. However, I personally spend a lot of time in Azure DevOps and create reports and dashboards that give the entire team a quick overview of progress, deployments, risks and personal tasks.

A common dashboard view in Azure DevOps

We use these dashboards during our daily status meetings together with analysis of Kanban boards and bug lists. Stakeholders who do not attend the meetings can also get access and quickly see the overall status of the project.

Microsoft Teams with integrated Kanban task board

Microsoft Teams or Slack allows us to integrate these multiple views. For example, we can set up a DevOps dashboard or Kanban board, add general chat channels, integrate group emails, bring in document management solutions such as Office 365 sites, and include ad-hoc task management tools such as Trello or Microsoft Planner. The point is to have one place from which the team can access all information.

Improved planning

This method also improves planning. We normally use a modified agile methodology to plan and complete work. It works well even in traditional waterfall projects where the scope, deadline and budget are already known, as it allows us to split the work into manageable chunks. It works even better when we can use product backlogs that the business prioritises close to the beginning of each sprint/iteration.

Capacity planning in Azure DevOps

As we plan tasks, we can group them by activity. This is useful for separating business analysis, infrastructure deployments, development, QA/testing and so forth. We then assign coworkers to activities, set the number of hours per day each person can commit, and can then see whether the assigned work is in balance with the capacity for the planned period.
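The balance check itself is simple arithmetic. As an illustrative sketch (the roles, hours and estimates below are made up, not taken from a real project):

```python
# Illustrative capacity check: compare assigned work against available
# hours for an iteration. All names and numbers are hypothetical.

SPRINT_DAYS = 10  # working days in the planned period

# Hours per day each person can commit, as set in capacity planning.
capacity_per_day = {"analyst": 4, "developer": 6, "tester": 5}

# Estimated hours of work assigned to each person for the period.
assigned = {"analyst": 35, "developer": 70, "tester": 45}

def balance(person):
    """Return remaining capacity in hours (negative means over-committed)."""
    return capacity_per_day[person] * SPRINT_DAYS - assigned[person]

for person in capacity_per_day:
    slack = balance(person)
    status = "over-committed" if slack < 0 else "ok"
    print(f"{person}: {slack:+} h ({status})")
```

Azure DevOps performs this comparison for you in the capacity view; the sketch only shows what the red/green balance bars represent.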

As business analysts and solution architects, we can also attach the business documentation directly to the requirement in Azure DevOps. The developer does not have to search for documentation in other systems and gets quick access to the key aspects of the requirement. We normally add an overview together with key information about functional regression testing and acceptance criteria. The items can easily be customised to suit your organisation's needs and methodology.

Improved analysis

We can also integrate with other systems for task management. You can create advanced reports from the data stored in Azure DevOps, either from another system or simply from Excel. You can also receive alerts when specific tasks, bugs or other items are created or changed. I normally set up build verification testing so that any code that is checked in is analysed and, if the build fails, a bug is automatically assigned to the colleague who committed the change.
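The auto-assignment logic behind this is straightforward. A minimal sketch, assuming a simplified notification payload (the field names here are hypothetical, not the exact Azure DevOps service-hook schema):

```python
# Sketch: turn a failed-build notification into a bug assigned to the
# colleague who committed the change. The payload shape is a simplified,
# hypothetical stand-in for a real build-completion event.

def bug_from_failed_build(payload):
    """Return a bug work item (as a dict), or None if the build passed."""
    if payload["status"] != "failed":
        return None  # nothing to do on a successful build
    return {
        "type": "Bug",
        "title": f"BVT failed for build {payload['build_id']}",
        "assigned_to": payload["requested_by"],  # the committer
        "severity": "2 - High",
    }

bug = bug_from_failed_build(
    {"status": "failed", "build_id": "20240118.3", "requested_by": "jane@example.com"}
)
print(bug["assigned_to"])  # jane@example.com
```

In practice Azure DevOps can do this without custom code, via build options or alert rules; the sketch just makes the rule explicit.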

For example, I once had a lightbulb that flashed red when a deployment to our test systems failed.

On a more useful note, you can create a report that compares estimated versus actual effort, and break the figures down by activity and coworker to analyse issues in the overall performance of the team. You could, for example, analyse how much time is spent on testing, bug fixing and development to decide where to focus on improving or streamlining QA.
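The core of such a report is an estimated-versus-actual variance grouped by activity. A small sketch, with fabricated work items standing in for the data you would export from Azure DevOps:

```python
# Illustrative estimated-vs-actual breakdown by activity.
# The work items below are fabricated examples, not real project data.
from collections import defaultdict

work_items = [
    {"activity": "Development", "estimated": 16, "actual": 20},
    {"activity": "Development", "estimated": 8,  "actual": 7},
    {"activity": "Testing",     "estimated": 6,  "actual": 12},
    {"activity": "Bug fixing",  "estimated": 4,  "actual": 9},
]

def variance_by_activity(items):
    """Sum (actual - estimated) hours per activity; positive = overrun."""
    totals = defaultdict(int)
    for item in items:
        totals[item["activity"]] += item["actual"] - item["estimated"]
    return dict(totals)

overruns = variance_by_activity(work_items)
print(overruns)  # here, testing and bug fixing show the largest overruns
```

In this fabricated data set, testing and bug fixing overrun their estimates the most, which is exactly the signal that would point you at QA as the area to streamline.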

Improved QA/testing

DevOps is also useful for testing. We store our user acceptance tests in DevOps, execute them manually through a browser and record the result of each step. You can then create a bug that captures the detail of the steps taken and associates it with the requirement.

We also execute automated tests by installing a test agent on a virtual server, which can then run anything from unit tests to PowerShell scripts or performance tests written in Visual Studio. When issues are found, the results can be added to the bug list.

There are also plenty of add-ins available from the marketplace. We use the exploratory testing add-in, which allows you to navigate the system and file bugs as you find them, along with the browser steps taken.

Finally, we run Build Verification Testing (BVT) on check-ins as mentioned earlier, and we can also run unit tests and code analysis automatically on every check-in, on a gated basis (where a check-in is only committed once the validation build passes), or on a rolling schedule (for example every couple of hours).

In the end, we have a bug report list that can easily be triaged as part of the daily progress meeting.

Lastly, we use the code review framework in DevOps and Visual Studio. Developers can send their changesets to a lead developer for manual review of maintainability, performance and security. All steps are documented for guidance in the DevOps wiki. The process improves quality while actively guiding and teaching developers to produce better code.

Improved release management

The biggest change for my current team was the introduction of release management capabilities. Previously, we had several manual configuration steps as well as manual packaging of release code. There were many errors due to shortcuts being taken: a bug would be fixed manually by configuration, and those steps then had to be repeated in each environment.

Although scripted installations solved a lot of issues, we still had poorly documented “manual configurations”. In the end, we removed all access to the environments so that the only way to deploy changes was via automated release management.

An overview of release management in DevOps

We can now fully automate the release of a package to various test environments and then perform an installation in the production environments without any downtime.

Technically, DevOps allows you to visually build a pipeline from various modular steps. We combined standard pipeline components, such as copying files and building packages, with custom PowerShell-based steps. The custom modules can also send information back to DevOps while running, so we can report in real time what is actually happening inside the installation process. This removes the need for an engineer to be logged in to the server during deployments, which increases security and simplifies troubleshooting.
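The real-time reporting works because the agent scans a step's standard output for Azure Pipelines logging commands (lines beginning with "##vso[…]"). Our actual steps are PowerShell; the Python sketch below, with placeholder deployment actions, just shows the kind of output a custom step emits:

```python
# Sketch of a custom deployment step reporting progress back to the
# pipeline. The agent picks up "##vso[...]" logging commands from stdout
# (a documented Azure Pipelines mechanism). The deployment actions here
# are placeholders; our real steps are written in PowerShell.

def report_progress(percent, message):
    """Emit an Azure Pipelines task.setprogress logging command on stdout."""
    line = f"##vso[task.setprogress value={percent};]{message}"
    print(line)
    return line

report_progress(10, "Copying release package")
# ... copy the package to the target server here ...
report_progress(60, "Applying configuration")
# ... run configuration scripts here ...
report_progress(100, "Deployment complete")
```

Because the status travels through the pipeline log rather than an interactive session, nobody needs to be logged in to the target server to see how the installation is progressing.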

Final thoughts

The changes we have implemented within my current team have had a dramatic impact on quality, on the time taken to deploy solutions and on the frequency at which we can deploy them. Not all of this can be attributed to DevOps, but it is a very large part of the success story. When we took over the project, there had not been a successful deployment of code for over a year. We started deploying roughly once a quarter, but a deployment could easily take a whole weekend, with teams working close to around the clock. We now deploy once a month without any downtime, and the actual deployment takes between five minutes and an hour. Most of the time is instead spent on final quality assurance, which we can complete in half a day without any noticeable downtime for end users.