Test case management in JIRA on ZERO budget

I keep getting the question of how to do test management with JIRA without paying for any plugins. Trying to explain it in one sentence, it would be something like this…

For a given JIRA project there is a set of master test cases from which executable tests/test runs can be cloned, based on version, and executed against a system under test.

This post (in beta) describes a working flow that fits well with JIRA (assuming you already have test cases in JIRA), and what needs to be set up to get it working, including some code for automating tedious manual tasks.

The sections are…

  • The master test case
  • Executable test case (i.e. master test case clone)
  • JIRA configuration and set-up
  • Test management, visibility and linkage of entities in JIRA
  • Automating the bulk work (cloning master test cases into executable copies)

The master test case

The master test case is the reference on which to base a test at any given point in time. If a feature changes, the master test case gets updated accordingly, no magic in that. The master test case shall have a workflow that represents the purpose of a master test case/reference, hint: written and approved. Exemplified in the JIRA setup/configuration section…

  • New – in progress
  • Pending approval – written and waiting for approval from the product owner
  • Approved – active and up-to-date test case
  • Closed – obsolete/retired test case

Avoiding the never-ending cloning of a test case

A common problem with test case management in JIRA is that test cases are cloned, tweaked a bit, then cloned again and again, which is a big mistake. The recommendation is to allow ONLY ONE living test case for a given piece of functionality, which is common practice in the tools from major vendors.

To make this work there has to be something else in JIRA, hence another ticket type: something that resembles an executable form of the ‘master’ test case.

The executable test case

Since the master test case acts as a reference/template, we have to derive its content into another JIRA entity. The way this is done (the manual way) is to clone a master test case, convert the clone to a new ticket type (executable test case) and make sure it is a sub-task of the master test case. What is the benefit of that?

  1. It will have a better suited workflow
  2. There will only be one test case to update
  3. There will be one historical instance for every time the executable test case was executed – a clone of the master test case as it looked at the point when it was cloned

Creating the JIRA entities

The master test case type will most likely be present in your current JIRA setup, but to be sure, these are the configurations that need to be made.

Issue types

If your JIRA installation does not include any test case issue types, add the ones needed. NOTE that the executable test case type has to be a Sub-Task issue type.

  1. Master test case as a Standard Issue Type (Test Case)
  2. Executable test case as a Sub-Task Issue Type (Test Run)

Workflows

The master test case type needs a simple approval workflow to make sure the product owner input is there.

Master test case workflow

The executable test case shall have states that track progress and the final verdict.

Executable test case / test run workflow

The glue and workflow

Test case to story linkage is handled by linking issues together with the regular Link feature. The full entity relations look like this.


Entities in JIRA: a story can be covered by multiple test cases, which in turn might have been executed multiple times.

The executable test case is a sub-task issue that shall have a master test case as its parent at all times, and it is a clone of that master test case at the point in time when it was created. The master test case is expected to change over time, and the history will be visible in its sub-tasks, the executable test runs.

Using the sub-task concept gives a very good overview of the execution history.


All executable tests/test runs are listed in the master test case with their execution version and verdict.

Use case – setting up a test plan

How would this work if I were to create a test plan including a set of test cases? Pick the test cases you want executed; for each of them, clone it and convert the clone into an executable test case that is a sub-task of the test case you wanted to include in your test plan. To make it easier to find the cloned executable tests, give them all the same affects version (see the filter example below). All set: a set of executable test cases ready to be executed using their well-fitted workflow.
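
A saved JIRA filter can then list exactly the runs in the plan, and later back the dashboard gadgets. As a hedged example (the version name is an assumption, adjust to your own setup): issuetype = "Test Run" AND affectedVersion = "test-plan-1".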

Create a new JIRA Dashboard for your test plan/phase and check the progress of your executed test cases in (browser refresh) real-time.


Dashboard example

Smooth, right? Or??? Ugh, this will require a huge amount of manual work! Let’s automate it…

Generating the test runs

Here comes the not so smooth part of JIRA: cloning a master test case into an executable test case. Do not do this manually, automate it.

This assumes that you have selected a set of test cases you want to include in your test plan. They are all listed by a shared JIRA filter, and the easiest way is to set the affects version of all these test cases to the same value (trust me, it makes sense).

The second thing you need to know is the JIRA issue type id for the executable test case sub-task type. Inspect the element in your favourite browser (e.g. with Firebug) and you will find it.
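
If you would rather not dig through the page source, the ids can also be listed straight from the REST API. A minimal sketch, assuming the same host and credentials as in the script below:

import requests

# List all issue types and their ids; the "subtask" flag tells you which
# ones can be used for the executable test case sub-task type.
rs = requests.get("https://jira.mycompany.com/rest/api/2/issuetype",
                  auth=("username", "password"))
for issue_type in rs.json():
    print("%s: id=%s, subtask=%s"
          % (issue_type['name'], issue_type['id'], issue_type['subtask']))

With the filter id and the sub-task type id in hand, the cloning itself looks like this.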

# This is Python code
import requests

# Basic auth credentials for the JIRA REST API
auth = ("username", "password")

# The id of the shared JIRA filter listing the selected master test cases
filter_id = "12345"

# Details in a specific issue that we are interested in,
# add fields that you want to clone. For custom fields you will need
# to look up the real name, e.g. customfield_1234, which you can easily
# find out by inspecting the elements of your ticket in your browser.
fields_to_clone = [
    "summary",
    "description",
    "components"
]

# Get all master test cases given an existing public JIRA filter
all_master_tests = requests.get(
    url="https://jira.mycompany.com/rest/api/2/search",
    params={"jql": "filter=" + filter_id, "fields": "key"},
    auth=auth)
for issue in all_master_tests.json()['issues']:
    # Fetch only the fields we want to clone from the master test case
    master_test = requests.get(
        url=issue['self'],
        params={"fields": ",".join(fields_to_clone)},
        auth=auth)
    fields = master_test.json()['fields']
    fields['project'] = {'id': '10000'}     # the id of your JIRA project
    fields['parent'] = {'id': issue['id']}  # sub-task under the master test case
    fields['issuetype'] = {'id': '21'}      # the executable test case sub-task type id

    # If you want versions and assignee to be set for the created test case, add them here.
    # fields['versions'] = [{'name': version}]
    # fields['assignee'] = {'name': assignee}

    rs = requests.post(
        url="https://jira.mycompany.com/rest/api/2/issue",
        json={"fields": fields},
        auth=auth)
    if rs.status_code == 201:
        print("Test case created: " + rs.text)

Test automation don’ts #1 – Separate repositories

Since this is one of those days when frustration is at what feels like an all-time high ‘again’, what better to do than get it all out. When it comes to test automation there are so many poor decisions out there… yes, many. I’ll try to address some of them here. The first one out is, ta-da:

#1 Do not keep automation code in a separate repository

So what does that mean? Consider the case where all test cases and test framework code live in a nice, well-structured repository, and the code to be tested resides in an equally professionally handled repository. But what happens when it is time to run the test cases? Are the HEADs of these repositories/branches ‘always’ in sync? Most likely they are not, not even if the codebase is completely ‘branch free’. The end result: most likely broken tests that will cause a lot of waste.

So for the case where there is one repository for the product code, DO NOT put the automation code in a separate repository, or you’ll end up struggling with failing test cases as soon as the branches get out of sync. On top of this, guess what kind of overhead you will face if one or more teams are working with a feature branch strategy. Let me smile and mention that this is what I struggle with today: broken tests all over the place, and no one has a clue which failing tests are regressions and which ones are expected to fail.

I have seen the separate repository scenario at several places, and the usual reason is that the system under test itself is spread out over multiple repositories, and the aim was to keep the automated tests in one place supporting all parts of a ‘system’. Not even tests defined as end-to-end tests spanning a full system handle that scenario smoothly. I have been involved in several attempts to resolve the problems without putting the automation code in the same repository; none of them have been fully successful.

Identified problems

  • Test code out of sync with the code under test
  • TWICE the number of branches to maintain and work with

Using JUnit @Category and Maven profiles to get stable test suites

It happens over and over again, no matter what project I am in: keeping the test suites green and passing is a …

The blind eye

The one sneaky little test case that starts failing can prove to be a real killer for any automation approach. More than once it has been shown that as soon as a single test case breaks a test suite, things start to go downhill, fast. OK, there could be acceptable reasons for the test case to be broken, e.g. issues in the system under test or in the test framework that are not bugs.

The usual decision is to leave the test case as is for now, since we are all aware that it will fail for a while. BAD choice!!! Attention on the failing test suite moves from slight to abandoned, and if other tests in the same test suite start failing for various reasons, it will most likely go undetected.

What can/should be done here

  • Disable the test case and enable it again when it is back on track
  • Maintain the test suites to allow it to execute and fail, but in another suite/run, to keep the MUST-always-pass test suites all green

The latter is the way to go.

Maven profiles to your rescue

Using Maven profiles it is easy to group your JUnit test classes and test cases into different categories. Usually this is used for grouping tests into slow, fast and browser-less tests and so forth. This approach can be stretched even further by using a grouping that reflects the current state of the test case. There do not have to be many different categories, but at least two are needed:

  • Stable
  • Unstable/Maturing

Stable is self-explanatory, while Unstable/Maturing covers everything that is NOT always PASSING. It might be test cases that are unstable because of:

  • timing issues
  • tested functionality is still under development (during sprints)
  • there is a low priority bug that will be fixed later
  • unknown issues causing it to sporadically fail
  • waiting to get a STABLE stamp

Use case 1
The last item is an approach that can be used to make really sure that the stable test suites do not get polluted. Any new test case needs to run a set number of times (e.g. 20) in a test suite and must pass every time before being stamped as a reliable test case. A sketch of how this can be approximated in a single run follows below.
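
JUnit 4 has no built-in repeat, but a small custom rule can approximate the ‘must pass N times’ gate within one run. A minimal sketch, assuming JUnit 4.x; the RepeatRule name is made up for this example:

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

// Runs each test it is applied to a fixed number of times;
// the test only passes if every iteration passes.
public class RepeatRule implements TestRule {

    private final int times;

    public RepeatRule(int times) {
        this.times = times;
    }

    @Override
    public Statement apply(final Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                for (int i = 0; i < times; i++) {
                    base.evaluate();
                }
            }
        };
    }
}

// In the maturing test class:
// @Rule
// public RepeatRule repeat = new RepeatRule(20);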

Use case 2
When a bug surfaces that causes a test case to fail, it might sometimes be the case that it won’t be fixed for some time. Instead of totally disabling the test case, or leaving it to fail an otherwise stable test suite, it shall be moved to an unstable test suite. This puts the test case in quarantine until the bug has been fixed, while at the same time making sure it still compiles and is executable.

Categorizing tests using JUnit @Category annotation
Use the JUnit @Category annotation in front of your test cases or test classes.

@Category(x.y.z.Stable.class)
@Test
public void aTestCase() {
  // void
}

// 
// Alternatively an unstable test case
//

@Category(x.y.z.Unstable.class)
@Test
public void anotherTestCase() {
  // void
}
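
The category types referenced above are nothing magical, just empty marker interfaces that exist so they can be referenced from the annotation and from the Surefire configuration. A minimal sketch (the x.y.z package is a placeholder):

package x.y.z;

// Empty marker interface used only as a JUnit @Category label.
// Unstable and Maturing are declared the same way, each in its own file.
public interface Stable {
}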

Splitting test execution using Maven profiles

Configure your Surefire plugin to include and exclude certain test classes, as well as to determine the set of JUnit @Category groups to be included in the test run. Set up your pom including profiles as exemplified below.

<build>	
	<plugins>	
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-surefire-plugin</artifactId>
			<version>2.12.2</version>
			<dependencies>
				<dependency>
					<groupId>org.apache.maven.surefire</groupId>
					<artifactId>surefire-junit47</artifactId>
					<version>2.12.2</version>
				</dependency>
			</dependencies>
			<configuration>
				<testFailureIgnore>true</testFailureIgnore>
				<excludes>
					<exclude>${exclude.tests}</exclude>
				</excludes>
				<includes>
					<include>${include.tests}</include>
				</includes>
				<groups>${testcase.groups}</groups>
				<forkMode>never</forkMode>
				<runOrder>random</runOrder>
			</configuration>
		</plugin>
	</plugins>
</build>

The testcase.groups property can be used to define the set of groups (JUnit @Category types) to include when running tests with a given profile.

<profiles>
	<profile>
		<id>stable-tests</id>
		<properties>
			<exclude.tests>**/x/**/*.java</exclude.tests>
			<include.tests>**/y/**/*.java</include.tests>
			<testcase.groups>x.y.z.Stable</testcase.groups>
		</properties>
	</profile>
	<profile>
		<id>unstable-tests</id>
		<properties>
			<exclude.tests>**/x/**/*.java</exclude.tests>
			<include.tests>**/y/**/*.java</include.tests>
			<testcase.groups>x.y.z.Unstable,x.y.z.Maturing</testcase.groups>
		</properties>
	</profile>
</profiles>
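
Running a suite is then just a matter of picking a profile, e.g. mvn test -P stable-tests in the gating CI job and mvn test -P unstable-tests in a separate, non-gating job.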