JUnit: Excluding data driven tests

Once in a while there is that single test case that cannot be executed for certain data when running data driven tests. This can be a true annoyance, and it is tempting to break those tests out into a separate test class. Don't do that, and stay away from shortcuts that will give you:

  • False test statistics, i.e. reporting passing tests that have never actually been run
  • Ignored tests, e.g. using conditional ignores or JUnit's assume

A simple way to exclude a test from being executed when running data driven tests with Parameterized.class is to set up a conditional JUnit rule: a test rule that, based on the current test data in the data driven loop, runs or excludes a specific test.

Consider the scenario below.

We run all the tests below with the samples “a”, “b”, “c”, but know that test02 will not work for the sample “b”, so we will have to deal with that in a clever way.

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class SampleTest {

  private String sample;

  public SampleTest(String sample) {
    this.sample = sample;
  }

  @Parameters
  public static Collection<Object[]> generateSamples() {
    final List<Object[]> samples = new ArrayList<Object[]>();
    samples.add(new Object[]{"a"});
    samples.add(new Object[]{"b"});
    samples.add(new Object[]{"c"});
    return samples;
  }

  @Test
  public void test01() {
    // Works with sample "a", "b", "c"
  }

  @Test
  public void test02() {
    // Works with sample "a", "c" BUT NOT "b"
  }

  @Test
  public void test03() {
    // Works with sample "a", "b", "c"
  }
}

Mark up tests with an annotation

Step one is to mark up a test case so that we know it will only run for certain samples. We do this by adding an annotation; in this case we call it Samples.

@Test
public void test01() { ... }

@Samples({"a","c"})
@Test
public void test02() { ... }

@Samples({"a","b","c"})
@Test
public void test03() { ... }

Using the test annotation Samples we can start filtering at run-time whether to execute a test case or not. A test without the annotation will always run, no matter what data is returned in the data driven loop. If the annotation is in place, the test will only run for the samples that match the values in the annotation. In the case of test02, it will only run for samples “a” and “c”.

The test annotation

Adding the annotation is straightforward:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Samples {
    String[] value();
}

Adding the exclusion rule

Worth knowing is that JUnit rule fields are always initialized before the test class constructor body runs. This means that a rule will not know the test data sample of the current data driven iteration on its own. The current data driven value is always passed to the test class constructor (by Parameterized.class), and hence the rule we are going to implement needs to be handed that information at that point.

@RunWith(Parameterized.class)
public class SampleTest {

  @Rule
  public OnlyRunForSampleRule rule = new OnlyRunForSampleRule();
  private String sample;

  public SampleTest(String sample) {
    this.sample = sample;
    rule.setSample(sample); // <<-- HERE
  }

With the current data sample and the Samples annotation values known at run-time, we define a simple rule that excludes test cases if they are not supposed to run for certain test data.

import java.util.Arrays;

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class OnlyRunForSampleRule implements TestRule {

  private String sample;

  @Override
  public Statement apply(Statement s, Description d) {
    Samples annotation = d.getAnnotation(Samples.class);
    // No annotation/samples matching, always run
    if (annotation == null) {
      return s;
    }
    // Match! One sample value matches the current parameterized sample value
    else if (Arrays.asList(annotation.value()).contains(sample)) {
      return s;
    }
    // No match in the Samples annotation, skip by returning an empty Statement
    return new Statement() {
      @Override
      public void evaluate() throws Throwable {}
    };
  }

  public void setSample(String sample) {
    this.sample = sample;
  }
}

The rule above is defined to run tests on known samples. Creating the inverse of this rule is simply done by returning the empty Statement from the else-if branch that checks for matches in the Samples annotation.
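For reference, a minimal sketch of such an inverse rule could look like the one below, reusing the Samples annotation from above; the class name ExcludeForSampleRule is just an illustrative choice. Tests annotated with Samples are then skipped for the listed samples and run for everything else.

import java.util.Arrays;

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class ExcludeForSampleRule implements TestRule {

  private String sample;

  @Override
  public Statement apply(Statement s, Description d) {
    Samples annotation = d.getAnnotation(Samples.class);
    // No annotation, nothing to exclude, always run
    if (annotation == null) {
      return s;
    }
    // The current sample is listed in the annotation, skip by returning an empty Statement
    if (Arrays.asList(annotation.value()).contains(sample)) {
      return new Statement() {
        @Override
        public void evaluate() throws Throwable {}
      };
    }
    // The current sample is not listed, run as usual
    return s;
  }

  public void setSample(String sample) {
    this.sample = sample;
  }
}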

Debugging WireMock calls when using JUnit WireMockRule

Mocking with the WireMockRule in your JUnit test classes and struggling with 404’s?

It is not that trivial to find in the WireMock documentation, but it is in there, under ‘Listening for requests’ @ http://wiremock.org/verifying.html. Plain debugging works fine, but sometimes one really wants to know the details of the calls made to the underlying services being consumed, especially when WireMocking these services and there is a fine-grained matching mechanism to deal with.

Below is a quick tip to get the details you need to easily resolve the 404’s returned by WireMock.

Add a request listener to your WireMockRule and use Java 8 lambdas to smoothly implement the WireMock interface RequestListener, which has the single method requestReceived(Request request, Response response). Print out the request and response details you want, run your tests and check the print-outs. All set!

import org.junit.Before;
import org.junit.Rule;

import com.github.tomakehurst.wiremock.junit.WireMockRule;

public class Test {

   @Rule
   public WireMockRule wireMockRule = new WireMockRule(6969);

   @Before
   public void setupTest() {
      wireMockRule.addMockServiceRequestListener((request, response) -> {
         System.out.println("URL Requested => " + request.getAbsoluteUrl());
         System.out.println("Request Body => " + request.getBodyAsString());
         System.out.println("Request Headers => " + request.getAllHeaderKeys());
         System.out.println("Response Status => " + response.getStatus());
         System.out.println("Response Body => " + response.getBodyAsString());
      });
   }
   ...
}
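To put the print-outs to use, it helps to see how a stub with fine grained matching can produce those 404’s in the first place. Below is a minimal sketch of such a stub, e.g. registered inside the setupTest() method above; the URL, header and body values are made up for the example. If the request sent by your code does not match the stub exactly, WireMock answers with a 404, and the listener output shows exactly what was requested instead.

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.equalTo;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.stubFor;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

// Only requests that match both the URL and the Accept header get this response;
// anything else results in the 404 that the request listener will reveal.
stubFor(get(urlEqualTo("/api/customers/1"))
   .withHeader("Accept", equalTo("application/json"))
   .willReturn(aResponse()
      .withStatus(200)
      .withHeader("Content-Type", "application/json")
      .withBody("{\"id\": 1, \"name\": \"John Doe\"}")));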

Test case management in JIRA on ZERO budget

I keep getting the question of how to do test management in JIRA without paying for any plugins. Trying to explain it in one sentence, it would be something like this…

For a given JIRA project there is a set of master test cases from which executable tests/test-runs can be cloned based on version and executed against a system under test.

This post (in a beta) will describe a working flow that fits well with JIRA (assuming you have test cases in JIRA already) and what needs to be set up to get it working, including some code for automating tedious manual tasks.

The sections are…

  • The master test case
  • Executable test case (i.e. master test case clone)
  • JIRA configuration and set-up
  • Test management, visibility and linkage of entities in JIRA
  • Automation of the bulk work (cloning master test cases into executable copies)

The master test case

The master test case is the reference on which to base a test at any point in time. If a feature changes, the master test case gets updated accordingly, no magic in that. The master test case shall have a workflow that represents the purpose of a master test case/reference, hint: written and approved. Exemplified in the JIRA setup/configuration section…

  • New – In progress
  • Pending approval – Written and waiting for approval from product owner
  • Approved – Active and up to date test case
  • Closed – obsoleted state / retired test case

Avoiding the never-ending cloning of a test case

A common problem with test case management in JIRA is that test cases are cloned, tweaked a bit, then cloned again and again, which is a big mistake. The recommendation to solve this is that there can ONLY be ONE living test case in place for a given piece of functionality, which is common practice if you look at tools from major vendors.

To make this work there has to be something else in JIRA, hence another ticket type: something that resembles an executable form of the ‘master’ test case.

The executable test case

Since the master test case acts as a reference/template, we have to derive and use its content in another JIRA entity. The way this gets done (the manual way) is to clone a master test case, convert it to a new ticket type (executable test case) and make sure it is a sub-task of the master test case. What is the benefit of that?

  1. it will have a better-suited workflow
  2. there will only be one test case to update
  3. there will be one historical instance for every time the executable test case was executed, which will be a clone of the master test case at the point when it was cloned

Creating the JIRA entities

The master test case type will most likely be present in your current JIRA setup, but to be sure, these are the configurations that need to be made.

Issue types

If your JIRA installation does not include any Test Case issue types, add the ones needed. NOTE that the executable test case type has to be a Sub-Task issue type.

  1. Master test case as a Standard Issue Type (Test Case)
  2. Executable test case as a Sub-Task Issue Type (Test Run)

Workflows

The master test case type needs a simple approval workflow to make sure the product owner’s input is there.

[Workflow diagram: Master test case workflow]

The executable test case shall have states that track progress and the final verdict.

[Workflow diagram: Executable test case / test run workflow]

The glue and workflow

Test case to story linkage is handled by linking issues together with the regular Link feature. The full entity relations will look like this.

[Diagram: JIRA entity relations]

Entities in JIRA: a story can be covered by multiple test cases, which in turn might have been executed multiple times.

The executable test case is a sub-task issue and shall have a master test case as its parent at all times; it is a clone of the master test case at the point in time when it was created. The master test case is expected to change over time, and the history will be visible in its sub-tasks, the executable test runs.

Using the sub-task concept gives a very good overview of the execution history.

[Screenshot: test case linkage]

All executable tests / test runs are listed in the master test case with their execution version and verdict.

Use case – setting up a test plan

How would this work if I were to create a test plan including a set of test cases? Pick the test cases you want executed, and for each of them clone it and convert the clone into an executable test case that is a sub-task of the master test case you wanted to include in your test plan. To make it easier to find the cloned executable tests, give them all the same affects version. All set: a set of executable test cases ready to be executed using their well-fitted workflow.

Create a new JIRA Dashboard for your test plan/phase and check the progress of your executed test cases in (browser refresh) real-time.

[Screenshot: Dashboard example]

Smooth, right? Or… ugh, this will require a huge amount of manual work! Let’s automate it…

Generating the test runs

Here comes the not-so-smooth part of JIRA: cloning a master test case into an executable test case. Do not do this manually, automate it.

This assumes that you have selected a set of test cases to include in your test plan. They are all listed using a shared JIRA filter, and the easiest way is to set the affects version of all these test cases to the same value (trust me, it makes sense).

The second thing you need to know is the JIRA issue type id for the executable test case sub-task type. Inspect the element in your favourite browser (e.g. with Firebug) and you will find it.

# Python 2 script that clones master test cases into executable test runs (sub-tasks)
import requests
import json
import base64

headers = {'Authorization': 'Basic %s' % base64.b64encode("username" + ':' + "password"),
           'Content-Type': 'application/json'}

# For getting JIRA issues from a given filter
filter_id = "12345"
fields = "key"

# Details in a specific issue that we are interested in,
# add fields that you want to clone. For custom fields you will need
# to look up its real name e.g. customfield_1234 which you can easily
# find out by inspecting the elements of your ticket in your browser.
fields_to_clone = [
    "summary",
    "description",
    "components"
]

# Get all master test cases given an existing public JIRA filter
all_master_tests = requests.get(url="https://jira.mycompany.com/rest/api/2/search?jql=filter="+filter_id+"&fields="+fields, headers=headers)
issues = json.loads(all_master_tests.content)['issues']
for issue in issues:
    master_test = requests.get(url=issue['self']+"?fields="+",".join(fields_to_clone), headers=headers)
    # Fields of the clone: copy the master test case fields and point the clone
    # at the right project, parent issue and (executable test case) issue type
    clone_fields = json.loads(master_test.content)['fields']
    clone_fields['project'] = {'id': '10000'}
    clone_fields['parent'] = {'id': issue['id']}
    clone_fields['issuetype'] = {'id': '21'}

    # If you want assignee and versions to be set for the created test case, add them here.
    #clone_fields['versions'] = {'id':version}
    #clone_fields['assignee'] = {'id':assignee}

    payload = {
        "fields": clone_fields
    }
    rs = requests.post(url="https://jira.mycompany.com/rest/api/2/issue", headers=headers, data=json.dumps(payload))
    if rs.status_code == 201:
        print "Test case created: " + rs.text

SonarQube quality gate feedback in a Slack channel

Well, it is easy to agree that Slack is awesome for team collaboration, and with the hooks it has available it is rather easy to get some decent feedback into the team channels.

There is a Jenkins CI plugin available for Slack that pushes build notifications into a given channel. But that merely pushes what Jenkins provides: failed, unstable and so forth. If you are working with SonarQube and want to know a bit more than whether the build turned unstable, then there is an additional Jenkins plugin available for you, SonarQube-Slack-Pusher

… which would push a notification like this in a Slack channel.

[Screenshot: SonarQube-Slack-Pusher notification]

What needs to be in place

  1. QualityGate for a SonarQube project
  2. Incoming WebHook integration to your Slack channel
  3. Installed Jenkins plugin, Sonar-Slack-Pusher
  4. Configured Jenkins job pushing to Slack

Quality gates
The notification above highlights a quality gate that has failed in SonarQube. A quality gate is a way of highlighting that we are not meeting the expected quality criteria for a SonarQube project. In the SonarQube UI this is visualised by red and yellow highlighting and could be things like warnings in the code or not enough unit test coverage.

Well, yes, a quality gate needs to be defined in SonarQube. It could already be linked to an existing project, but it is also possible to pass the quality gate as part of running a SonarQube analysis job.

Configure the Slack channel
The SonarQube project/quality gate and the Slack channel are in place. Add an Incoming WebHook to the channel and note down the hook URL; you will need it when configuring the Jenkins job. You will also need channel admin rights to perform this step.

Pushing quality gate information from Jenkins to the Slack channel
Assuming that there is a Jenkins job that triggers and runs a SonarQube pipeline, i.e. pushes the results to SonarQube, add the Sonar-Slack-Pusher plugin to your Jenkins installation and add the corresponding Post-build Action to the SonarQube job.

Configure it with

  • the hook URL given when configuring the Slack WebHook
  • the SonarQube server URL
  • the SonarQube job name, and if using branches, the branch as well.

[Screenshot: Jenkins job configuration]

Bam, all set! Run the job, and if you have any failing quality gates you will get a notification in your channel.

Testing in Agile life cycle cheat sheet v2

Finally managed to squeeze the testing part of our Agile life cycle onto a one-pager that can be handed out to the teams; in fact, we are on version two already. We’ve had this in mind for a while, since some companies hand out a test policy sheet to devs, but we did not want anything that strict. So when we found the nutrition fact sheet by CARLZ J @ Söderström Creative at a local CrossFit gym it was easy: we simply had to transform that sheet into a hand-out’able QA statement, company colours and all.

[Image: Testing in Agile life cycle cheat sheet v2]

Overview of sporadically failing test cases in Jenkins – UnitTH Jenkins plugin

Never, ever a stable blue build, right? It does not seem to matter what you do; for system integration tests there always seems to be one or a few sporadically failing test cases. What can be extra annoying is that it might always be different test cases that sporadically fail, not the same one or two. To be able to determine the success of the executed test suites there is a need to get an overview of the last few runs. The easiest way to visualise this is to create a matrix of test runs versus test cases, a heat map showing the frequency of failures for specific test cases.

[Screenshot: UnitTH test run versus test case matrix]

Test case failure spread.

For those using Jenkins there is a small Jenkins plugin at SourceForge that can be added to the post-build steps of any job that generates test results. The plugin gives you a useful overview and stats for the available build test results.

Installation

Download the plugin from SourceForge and install it like any other Jenkins plugin, e.g. by uploading the .hpi file under Manage Jenkins → Manage Plugins → Advanced, and restart Jenkins.

Usage

  • Edit the configuration of a Jenkins job and in the post build section select the plugin
  • Run the job again to get the first matrix report, there will be a link in the sidebar
  • Every failed run of a test case has a link to the report trace in the build where it was executed; simply click on the red hits in the matrix

ScalaTests on Jenkins using Maven, use case examples

This is a post on how to do it with Maven; the same functionality can be achieved using sbt if you prefer that option.

There are a number of threads out there that address the topic of configuring and running ScalaTest test cases/specs on Jenkins, some older than others, but since this seems to be a recurring question, why not exemplify how it can be done for some of the most common use cases that keep popping up in the forums. So if you have problems setting up jobs that run specific test specs or tagged tests, have a look at the examples below.

Core knowledge basics

scalatest-maven-plugin

This plugin enables execution of ScalaTest tests using Maven without any extra fuss like using @RunWith(classOf[JUnitRunner])… It takes a set of configuration options that are well documented for the plugin. So start by adding this plugin to the build section of your Maven pom.

The ScalaTest options used in the examples below are added to the plugin’s execution configuration as follows.

<plugin>
   <groupId>org.scalatest</groupId>
   <artifactId>scalatest-maven-plugin</artifactId>
   <version>1.0-M2</version>
   <configuration>
      <reportsDirectory>${project.build.directory}/scalatest-reports</reportsDirectory>
      <junitxml>.</junitxml>
      <testFailureIgnore>true</testFailureIgnore>
      <filereports>WDF TestSuite.txt</filereports>
      <forkMode>never</forkMode>
      <parallel>${runSpecsInParallel}</parallel>
   </configuration>
   <executions>
      <execution>
         <id>test</id>
         <goals>
            <goal>test</goal>
         </goals>
         <configuration>
            <membersOnlySuites>${membersOnlySuites}</membersOnlySuites>
            <suites>${suites}</suites>
            <tagsToExclude>${tagsToExclude}</tagsToExclude>
            <tagsToInclude>${tagsToInclude}</tagsToInclude>
            <wildcardSuites>${wildcardSuites}</wildcardSuites>
         </configuration>
      </execution>
   </executions>
</plugin>

maven-profiles

For the majority of the use cases below we are using Maven project profiles that are passed to the Maven execution using the -P option. For example: mvn test -Pmyprofile1

The profiles we are setting up define zero or more of the ScalaTest configuration options described above.

<profile>
   <id>myprofile1</id>
   <properties>
      <tagsToExclude>SlowTest</tagsToExclude>
      <membersOnlySuites>com.mycompany.app.api</membersOnlySuites>
   </properties>
</profile>

Jenkins jobs

These are straightforward: create a Maven job that runs a goal in line with mvn test -Pmyprofile1.

Use cases addressed

  1. Running a selected set of test specs based on a package structure using, membersOnlySuites and wildcardSuites
  2. Running a selected set of tests based on spec names using suites
  3. Running a selected set of tests that are tagged using tagsToInclude
  4. Running a selected set of tests but excluding tests that have a certain tag
  5. Running tests with a given profile and overriding scalatest properties

Assuming the following test spec structure for all examples below

|- com.mycompany.app.api.v1
| |- CreateEntitySpec.scala
| \- DeleteEntitySpec.scala
\- com.mycompany.app.api.v2

Selected set of tests based on package structure

The membersOnlySuites option picks up specs that are placed directly in the given package. It does not pick up any specs in its sub-packages.

// Runs all v1 tests
mvn test -DmembersOnlySuites=com.mycompany.app.api.v1
// Runs all v1 and v2 tests
mvn test -DmembersOnlySuites=com.mycompany.app.api.v1,com.mycompany.app.api.v2

The wildcardSuites property, on the other hand, will pick up all specs in the given package and all its sub-packages.

// Runs all v1 and v2 tests
mvn test -DwildcardSuites=com.mycompany.app.api


Running a selected set of tests based on spec names

// Runs all tests in the CreateEntitySpec and DeleteEntitySpec
mvn test -Dsuites=com.mycompany.app.api.v1.CreateEntitySpec,com.mycompany.app.api.v1.DeleteEntitySpec


Running a selected set of tests that are tagged

// Runs all tests in the CreateEntitySpec and DeleteEntitySpec that are tagged as SmokeTest and/or FastTest
mvn test -Dsuites=com.mycompany.app.api.v1.CreateEntitySpec,com.mycompany.app.api.v1.DeleteEntitySpec \
    -DtagsToInclude=SmokeTest,FastTest


Running a selected set of tests but excluding tests that have a certain tag

// Runs all test cases in all test spec but not those tests that are tagged as VerySlowTest
mvn test -DmembersOnlySuites=com.mycompany.app.api -DtagsToExclude=VerySlowTest


Running tests with a given profile and overriding scalatest properties

Resetting a property is done using the value None; otherwise just override it by giving it a new value.

// Runs all tests as given in the myprofile1 profile in the pom file resetting
// the tagsToExclude property set in the profile in the pom.
mvn test -Pmyprofile1 -DtagsToExclude=None