Iris Classon - In Love with Code

Dev at work: End-to-end test pipeline

It’s been a while since I wrote a ‘Dev at work’ post, sorry about that! Here is a short post summarizing my day today.

[Image caption: Sunny outside, but I’m stuck indoors coding :D Not complaining though!]

The day started as always with breakfast, for me and baby Loke, before I biked to work. I’m training for my next triathlon and thus trying to get in as many miles on the bike as possible. The workday started with our daily standup and a small discussion around QA and end-to-end testing. We’ve been struggling to cope with the amount of testing that needs to be done as the product grows (mostly end-to-end testing and exploratory testing), and we decided two weeks ago to set aside three sprints to wire up proper end-to-end testing. The plan is as follows:

Set up a separate test tenant with complex data (using the good old AdventureWorks database) that we can run the following end-to-end tests on:
• Ghost Inspector GUI tests (our previously manually run exploratory tests will be recorded and automated)
• End-to-end testing of our API with Pester and PowerShell (see the sketch after this list)
• OpenAPI tests, once we have our API cleaned up
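
To give an idea of the Pester part, here is a minimal sketch of what one of the API end-to-end tests could look like. The tenant URL and the endpoint are made-up placeholders, not our actual API:

```powershell
# Minimal Pester sketch of an end-to-end API test.
# The tenant URL and the /products endpoint are hypothetical placeholders.
Describe 'Products API' {
    BeforeAll {
        $baseUri = 'https://test-tenant.example.com/api'
    }

    It 'returns HTTP 200 for the product list' {
        $response = Invoke-WebRequest -Uri "$baseUri/products" -UseBasicParsing
        $response.StatusCode | Should -Be 200
    }

    It 'returns products from the AdventureWorks test data' {
        $products = Invoke-RestMethod -Uri "$baseUri/products"
        @($products).Count | Should -BeGreaterThan 0
    }
}
```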

Yesterday I wrote a mock service that spits out mock data similar to the Ghost Inspector API result, so we have something to use when testing our scripts (as you have a limited number of test runs with Ghost Inspector), and created a new project in Octopus Deploy (our deployment service) where we will run the different steps.
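
I won’t include the real thing here, but a throwaway mock along these lines gets the job done. Note that the JSON shape below is a simplified stand-in, not the exact Ghost Inspector response format:

```powershell
# Throwaway mock endpoint using HttpListener, so the pipeline scripts can be
# exercised without burning real Ghost Inspector runs. The JSON body below is
# a simplified stand-in for the real Ghost Inspector result format.
$listener = [System.Net.HttpListener]::new()
$listener.Prefixes.Add('http://localhost:8080/')
$listener.Start()

$mockBody = @{
    data = @(
        @{ name = 'Login test';  passing = $true  },
        @{ name = 'Search test'; passing = $false }
    )
} | ConvertTo-Json -Depth 3

Write-Output 'Mock endpoint listening on http://localhost:8080/ (Ctrl+C to stop)'
while ($listener.IsListening) {
    $context = $listener.GetContext()   # blocks until a request comes in
    $buffer  = [System.Text.Encoding]::UTF8.GetBytes($mockBody)
    $context.Response.ContentType = 'application/json'
    $context.Response.OutputStream.Write($buffer, 0, $buffer.Length)
    $context.Response.Close()
}
```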

Today I wrote a Ghost Inspector PowerShell template for running tests against a suite. The template accepts two parameters: a suite URI and an API key. Since I have the mock server, I’ve created the steps using the template and pointed them at the mock service’s URI while testing the scripts.
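
The core of such a template could look something like this. The parameter names are illustrative, and the response parsing assumes the simplified shape used by the mock above:

```powershell
# Sketch of a suite-run step: call Ghost Inspector (or the mock) and warn on
# failing tests. Parameter names are illustrative; the parsing assumes the
# simplified response shape of the mock service.
param(
    [Parameter(Mandatory)] [string] $SuiteUri,
    [Parameter(Mandatory)] [string] $ApiKey
)

$result = Invoke-RestMethod -Uri "$($SuiteUri)?apiKey=$ApiKey" -Method Get

$tests  = @($result.data)
$failed = @($tests | Where-Object { -not $_.passing })

foreach ($test in $failed) {
    Write-Warning "Test failed: $($test.name)"
}

Write-Output "Suite finished: $($tests.Count - $failed.Count) passed, $($failed.Count) failed"
```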

The process starts with creating a backup of the test tenant database, saving the backup with the release number as the version number. After that, the suite steps are run, outputting warnings if tests fail. A shared Octopus Deploy variable is incremented after each suite is run, tracking the number of passed and failed tests.
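
One way to do that bookkeeping is with Octopus output variables, which later steps can read back. A rough sketch (the server, database, path, and variable names are all placeholders):

```powershell
# --- Backup step (sketch): tag the backup file with the release number ---
$release    = $OctopusParameters['Octopus.Release.Number']   # built-in Octopus variable
$backupFile = "D:\Backups\TestTenant-$release.bak"           # placeholder path
Invoke-Sqlcmd -ServerInstance 'SQLTEST01' `
    -Query "BACKUP DATABASE [TestTenant] TO DISK = N'$backupFile'"

# --- At the end of each suite step: publish counts as Octopus output variables ---
Set-OctopusVariable -name 'PassedCount' -value ($tests.Count - $failed.Count)
Set-OctopusVariable -name 'FailedCount' -value $failed.Count
```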

At the end, the tenant database is restored to its prior state and the test result is output. If any of the tests failed, the ‘deployment’ fails.
The test runs can be triggered manually, in addition to the daily runs. Due to the limited number of runs (a cost issue) we can’t run them on every build.
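
Sketched out, that last step could look like this (the step names in the output-variable paths are placeholders for the actual suite steps):

```powershell
# --- Final step (sketch): restore the tenant, report, and fail on test failures ---
$release    = $OctopusParameters['Octopus.Release.Number']
$backupFile = "D:\Backups\TestTenant-$release.bak"   # same placeholder path as the backup step
Invoke-Sqlcmd -ServerInstance 'SQLTEST01' `
    -Query "RESTORE DATABASE [TestTenant] FROM DISK = N'$backupFile' WITH REPLACE"

# Sum the counts published by earlier steps;
# 'Run GUI suite' / 'Run API suite' are placeholder step names
$failedTotal = [int]$OctopusParameters['Octopus.Action[Run GUI suite].Output.FailedCount'] +
               [int]$OctopusParameters['Octopus.Action[Run API suite].Output.FailedCount']

Write-Output "Total failed end-to-end tests: $failedTotal"
if ($failedTotal -gt 0) {
    throw "$failedTotal end-to-end tests failed"   # an unhandled throw fails the step, and with it the 'deployment'
}
```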

[Image caption: The outline for the pipeline (before adding the actual suite runs)]

I haven’t added the Pester tests (API only) yet, but will do so in the next sprint (we are closing this sprint tomorrow). Tomorrow Jonas will be recording some GUI tests based on the manual tests we’ve used before, and hopefully I’ll just have to update the variables for the project and everything will run smoothly.

Alright, time to bike home, go for a run, and then prepare a lecture I’m giving on Thursday at a local Meetup. Busy times, but fun times!



Last modified on 2019-09-17
