JUnit: Excluding data driven tests

Once in a while there is that single test case that cannot be executed for certain data when running data-driven tests. This can be a true annoyance, and it is tempting to break out those tests into a separate test class. Don’t do that, and stay away from shortcuts that will give you:

  • False test statistics, reporting passing tests that have never been run
  • Ignored tests hidden behind e.g. conditional ignores or JUnit’s assume

A simple way to exclude a test from being executed when running data-driven tests with Parameterized.class is to set up a conditional JUnit rule: a test rule that, based on the current test data in the data-driven loop, runs or excludes a specific test.

Consider the scenario below.

We run all the tests below with the samples “a”, “b” and “c”, but we know that test02 will not work for sample “b”, so we have to deal with that in a clever way.

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class SampleTest {

  private String sample;

  public SampleTest(String sample) {
    this.sample = sample;
  }

  @Parameters
  public static Collection<Object[]> generateSamples() {
    final List<Object[]> samples = new ArrayList<Object[]>();
    samples.add(new Object[]{"a"});
    samples.add(new Object[]{"b"});
    samples.add(new Object[]{"c"});
    return samples;
  }

  @Test
  public void test01() {
    // Works with sample "a", "b", "c"
  }

  @Test
  public void test02() {
    // Works with sample "a", "c" BUT NOT "b"
  }

  @Test
  public void test03() {
    // Works with sample "a", "b", "c"
  }
}

Mark-up with test annotation

Step one is to mark up a test case so that we know it will only run for certain samples. We do this by adding an annotation; in this case we call it Samples.

@Test
public void test01() { ... }

@Samples({"a","c"})
@Test
public void test02() { ... }

@Samples({"a","b","c"})
@Test
public void test03() { ... }

Using the test annotation Samples we can start filtering at run-time whether to execute a test case or not. A test without the annotation will always run, no matter what data is returned in the data-driven loop. If the annotation is in place, the test will only run for the samples that match the values in the annotation. In the case of test02 it will only run for samples “a” and “c”.

The test annotation

Adding the annotation is straightforward.

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Samples {
    String[] value();
}

Adding the exclusion rule

Worth knowing is that JUnit rules are always created before the test class constructor runs. This means that a rule cannot know, by itself, the test data sample of the current data-driven iteration. The current value is always passed to the test class constructor (by Parameterized.class), so the rule we are going to implement needs to be handed that information at that point as well.

@RunWith(Parameterized.class)
public class SampleTest {

  @Rule
  public OnlyRunForSampleRule rule = new OnlyRunForSampleRule();
  private String sample;

  public SampleTest(String sample) {
    this.sample = sample;
    rule.setSample(sample); // <<-- HERE
  }

With the current data sample and the Samples annotation values known at run-time, we can define a simple rule that excludes test cases that are not supposed to run for certain test data.

import java.util.Arrays;

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class OnlyRunForSampleRule implements TestRule {

  private String sample;

  @Override
  public Statement apply(Statement s, Description d) {
    Samples annotation = d.getAnnotation(Samples.class);
    // No annotation, always run
    if (annotation == null) {
      return s;
    }
    // Match! One sample value matches the current parameterized sample value
    else if (Arrays.asList(annotation.value()).contains(sample)) {
      return s;
    }
    // No match in the Samples annotation, skip by returning an empty statement
    return new Statement() {
      @Override
      public void evaluate() throws Throwable {}
    };
  }

  public void setSample(String sample) {
    this.sample = sample;
  }
}

The rule above is defined to run tests on known samples. The inverse of this rule is created by instead returning the empty Statement in the else-if branch that checks for matches in the Samples annotation.
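
For completeness, here is a minimal sketch of such an inverse rule (the name ExcludeForSampleRule is hypothetical, and the same Samples annotation and imports as above are assumed):

// Hypothetical inverse rule: skips the test when the current sample
// matches one of the values in the @Samples annotation.
public class ExcludeForSampleRule implements TestRule {

  private String sample;

  @Override
  public Statement apply(Statement s, Description d) {
    Samples annotation = d.getAnnotation(Samples.class);
    // No annotation, always run
    if (annotation == null) {
      return s;
    }
    // Match, skip by returning an empty statement
    if (Arrays.asList(annotation.value()).contains(sample)) {
      return new Statement() {
        @Override
        public void evaluate() throws Throwable {}
      };
    }
    // No match, run as usual
    return s;
  }

  public void setSample(String sample) {
    this.sample = sample;
  }
}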

Debugging WireMock calls when using JUnit WireMockRule

Mocking with the WireMockRule in your JUnit test classes and struggling with 404s?

It is not that easy to find in the WireMock documentation, but it is in there, under ‘Listening for requests’ at http://wiremock.org/verifying.html. Plain debugging is fine, but sometimes one really wants to know the details of the calls made to the underlying services that are consumed, especially when WireMocking these services and there is a fine-grained matching mechanism to deal with.

Below is the quick tip that gets you the details you need to easily resolve the 404s WireMock returns.

Add a request listener to your WireMockRule and use a Java 8 lambda to implement the WireMock interface RequestListener, which has the single method requestReceived(Request request, Response response). Print out the request and response details you want, run your tests, and check the printouts. All set!

import org.junit.Before;
import org.junit.Rule;

import com.github.tomakehurst.wiremock.junit.WireMockRule;

public class WireMockDebugTest {

   @Rule
   public WireMockRule wireMockRule = new WireMockRule(6969);

   @Before
   public void setupTest() {
      wireMockRule.addMockServiceRequestListener((request, response) -> {
         System.out.println("URL Requested => " + request.getAbsoluteUrl());
         System.out.println("Request Body => " + request.getBodyAsString());
         System.out.println("Request Headers => " + request.getAllHeaderKeys());
         System.out.println("Response Status => " + response.getStatus());
         System.out.println("Response Body => " + response.getBodyAsString());
      });
   }
   ...
}

Bundling ChromeDriver with your test code

This post exemplifies one way of maintaining the ChromeDriver binaries used for running automated tests with WebDriver and the Chrome browser, without having to install and update ChromeDriver on every node where tests will be running. The ChromeDriver binaries are simply bundled with the running tests and put under source control like all other test ware.

Download links
http://code.google.com/p/selenium/wiki/ChromeDriver

Bundling
Start by putting the drivers under the resource folder so that they are picked up by Maven by default.

<project root>/src/main/resources/chromedriver/mac/chromedriver
<project root>/src/main/resources/chromedriver/windows/chromedriver.exe

Implementation
There are of course different drivers for different OS types, and this needs to be handled using the os.name system property.

As per the ChromeDriver usage instructions, a system property has to be set pointing to the ChromeDriver server to use for the Chrome browser bridging. We will not point to a fixed location in the file system; instead we’ll get the path using Class.getResource, which enables us to bundle ChromeDriver inside our test framework even if it is packaged into a jar file.

Basically, the following steps should be performed:

  • Determine the OS type
  • Get the ChromeDriver resource and make sure it is executable using File.setExecutable(true); when packaged in a jar, the executable attribute ‘x’ is stripped on Mac (and assumed on Linux too)
  • Set the “webdriver.chrome.driver” system property
  • Check that a Chrome installation exists in the default location [OPTIONAL]

private static WebDriver driver = null;
// The ChromeDriver locations under the resource folder
private static String MAC_DRIVER = "/chromedriver/mac/chromedriver";
private static String WINDOWS_DRIVER = "/chromedriver/windows/chromedriver.exe";
// Used when no Chrome installation is found in the default location
private static String errorMessage = "Chrome installation was not found in its default location";

public static void setupChromeDriver() {
   // OS type
   if (System.getProperty("os.name").contains("Mac")) {
      File cDriver = new File(Tester.class.getResource(MAC_DRIVER).getFile());

      // Is it executable
      if (!cDriver.canExecute()) {
         cDriver.setExecutable(true);
      }
      System.setProperty("webdriver.chrome.driver", Tester.class.getResource(MAC_DRIVER).getFile());

      // Now checking for existence of the Chrome executable
      if (!new File("/Applications/Google Chrome.app/Contents/MacOS/Google Chrome").exists()) {
         throw new RuntimeException(errorMessage);
      }
   } else {
      System.setProperty("webdriver.chrome.driver", Tester.class.getResource(WINDOWS_DRIVER).getFile());

      // Now checking for existence of the Chrome executable
      if (!new File(System.getProperty("user.home") + "/AppData/Local/Google/Chrome/Application/chrome.exe").exists()) {
         throw new RuntimeException(errorMessage);
      }
   }

   ChromeOptions options = new ChromeOptions();
   options.addArguments("--start-maximized");
   options.addArguments("--ignore-certificate-errors");
   driver = new ChromeDriver(options);
}

Test case example
Pretty straightforward from here: set up WebDriver through the method implemented above and run a simple open-page test to see that things work out.

private static WebDriver driver = null;
public static void setupChromeDriver(){
   ...
}

@BeforeClass
public static void setupTestClass() throws Exception {
   setupChromeDriver();
}

@Test
public void demoTestCase() throws Exception {
   driver.get("http://code.google.com/p/selenium/wiki/ChromeDriver");
   Thread.sleep(1000);
}

JUnit @Rule for printing test case start and end information

If you are watching the test execution trace during run-time and missing information about when a test case starts and ends, this can easily be fixed by adding a JUnit rule that prints it.

[TEST START] tc_shallPass
stuff printed during execution
[TEST ENDED] Time elapsed: 1.835 sec

The start and end tags are generated automatically, hence there is no need to add any test case specific printing.

@Test
public void tc_shallPass() {
	System.out.println("stuff printed during execution");
	assertTrue(true);
}

To hook up the rule to all test cases in a class, it has to be declared and instantiated first.

public class TestClass {
	@Rule
	public TestCasePrinterRule pr = new TestCasePrinterRule(System.out);
	
	// @Test
	// ...
}

So far so good, but how is this claimed-to-be “easily fixed” rule implemented? Pretty straightforwardly, by implementing org.junit.rules.TestRule and using a private class extending org.junit.rules.ExternalResource. The example below calculates the execution time of the test case as well as providing the name of the current test case.

During run-time the start message is printed before any other @Before annotated methods in the class, but after the @BeforeClass methods. The apply method handles the creation of the start and end tags, since it has access to the test case description content, org.junit.runner.Description. The actual printing is done in the overridden before and after methods.

package com.yourcompany.customrules;

import java.io.IOException;
import java.io.OutputStream;
import java.text.DecimalFormat;

import org.junit.rules.ExternalResource;
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class TestCasePrinterRule implements TestRule {

    private OutputStream out = null;
    private final TestCasePrinter printer = new TestCasePrinter();

    private String beforeContent = null;
    private String afterContent = null;
    private long timeStart;
    private long timeEnd;
    
    public TestCasePrinterRule(OutputStream os) {
        out = os;
    }

    private class TestCasePrinter extends ExternalResource {
        @Override
        protected void before() throws Throwable {
            timeStart = System.currentTimeMillis();
            out.write(beforeContent.getBytes());
        }

        @Override
        protected void after() {
            try {
                timeEnd = System.currentTimeMillis();
                double seconds = (timeEnd-timeStart)/1000.0;
                out.write((afterContent+"Time elapsed: "+new DecimalFormat("0.000").format(seconds)+" sec\n").getBytes());
            } catch (IOException ioe) { /* ignore */
            }
        }
    }

    @Override
    public final Statement apply(Statement statement, Description description) {
        beforeContent = "\n[TEST START] "+description.getMethodName()+"\n"; // description.getClassName() to get class name
        afterContent =  "[TEST ENDED] ";
        return printer.apply(statement, description);
    }
}

Using JUnit @Category and Maven profiles to get stable test suites

It happens over and over again, no matter what project I am in: keeping the test suites green and passing is a …

The blind eye

The one sneaking little test case that starts failing can prove to be a real killer for any automation approach. More than once it has been shown that as soon as a single test case breaks in a test suite, things start going downhill, fast. Ok, there can be acceptable reasons for the test case to be broken, e.g. if there are issues in the system under test or in the test framework that are not bugs.

The usual decision is to leave the test case as it is for now, since we are all aware that it will fail for a while. BAD choice!!! Attention on the failing test suite moves from slight to abandoned, and if other tests in the same test suite start failing for various reasons, it will most likely go undetected.

What can/should be done here?

  • Disable the test case and enable it again when it is back on track
  • Maintain the test suites so that the test is allowed to execute and fail, but in another suite/run, keeping the MUST-always-pass test suites all green

The latter is the way to go.

Maven profiles to your rescue

Using Maven profiles it is easy to group your JUnit test classes and test cases into different categories. Usually this is used for grouping tests into slow, fast and browser-less tests, and so forth. This approach can be stretched even further by using a grouping that reflects the current state of the test case. There do not have to be many different categories, but at least two are needed.

  • Stable
  • Unstable/Maturing

Stable is self-explanatory, while Unstable/Maturing covers everything that is NOT always PASSING. Test cases might be unstable because of:

  • timing issues
  • tested functionality is still under development (during sprints)
  • there is a low priority bug that will be fixed later
  • unknown issues causing it to sporadically fail
  • waiting to get a STABLE stamp

Use case 1
The last item is an approach that can be used to be really sure that the stable test suites do not get polluted. Any new test case needs to run a set number of times (e.g. 20) in a test suite and must pass every time before being stamped as a reliable test case. A sketch of a helper for such burn-in runs follows below.
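
One way to get such repeated runs without touching the test logic is a small repeat rule; the sketch below is a hypothetical helper (RepeatRule is not part of JUnit 4) and assumes that repeating in-process is acceptable for the burn-in:

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

// Hypothetical burn-in helper: evaluates each test N times and
// fails on the first broken iteration.
public class RepeatRule implements TestRule {

    private final int times;

    public RepeatRule(int times) {
        this.times = times;
    }

    @Override
    public Statement apply(final Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                for (int i = 0; i < times; i++) {
                    base.evaluate();
                }
            }
        };
    }
}

Hooking it up is then a matter of declaring @Rule public RepeatRule repeat = new RepeatRule(20); in the test class under evaluation.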

Use case 2
When a bug surfaces that causes a test case to fail, it might sometimes be the case that it won’t be fixed for some time. Instead of totally disabling the test case, or trying to fix a test case that would fail an otherwise stable test suite, it should be moved to an unstable test suite. This is a way to put the test case in quarantine until the bug has been fixed, and at the same time make sure it still compiles and is executable.

Categorizing tests using JUnit @Category annotation
Use the JUnit @Category annotation on your test cases or test classes.

@Category(x.y.z.Stable.class)
@Test
public void aTestCase() {
  // void
}

// 
// Alternatively an unstable test case
//

@Category(x.y.z.Unstable.class)
@Test
public void anotherTestCase() {
  // void
}
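
The categories referenced above (x.y.z.Stable and friends) are ordinary Java types; a minimal sketch is an empty marker interface per category, with Unstable and Maturing defined the same way:

package x.y.z;

// Empty marker interface, used only as a JUnit @Category identifier
public interface Stable {
}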

Splitting test execution using Maven profiles

Configure your Surefire plugin to include and exclude certain test classes, as well as to determine a set of JUnit @Category groups to be included in the test run. Set up your pom with profiles as exemplified below.

<build>	
	<plugins>	
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-surefire-plugin</artifactId>
			<version>2.12.2</version>
			<dependencies>
				<dependency>
					<groupId>org.apache.maven.surefire</groupId>
					<artifactId>surefire-junit47</artifactId>
					<version>2.12.2</version>
				</dependency>
			</dependencies>
			<configuration>
				<testFailureIgnore>true</testFailureIgnore>
				<excludes>
					<exclude>${exclude.tests}</exclude>
				</excludes>
				<includes>
					<include>${include.tests}</include>
				</includes>
				<groups>${testcase.groups}</groups>
				<forkMode>never</forkMode>
				<runOrder>random</runOrder>
			</configuration>
		</plugin>
	</plugins>
</build>

The testcase.groups property defines the set of groups (JUnit @Categories) to include when running tests with a given profile.

<profiles>
	<profile>
		<id>stable-tests</id>
		<properties>
			<exclude.tests>**/x/**/*.java</exclude.tests>
			<include.tests>**/y/**/*.java</include.tests>
			<testcase.groups>x.y.z.Stable</testcase.groups>
		</properties>
	</profile>
	<profile>
		<id>unstable-tests</id>
		<properties>
			<exclude.tests>**/x/**/*.java</exclude.tests>
			<include.tests>**/y/**/*.java</include.tests>
			<testcase.groups>x.y.z.Unstable,x.y.z.Maturing</testcase.groups>
		</properties>
	</profile>
</profiles>
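
With the profiles above in place, a suite is selected at the command line by activating the corresponding profile id:

mvn test -P stable-tests
mvn test -P unstable-tests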

Data-driven soapUI JUnit integration with readable test reports

I have never really been satisfied with the way soapUI and JUnit integration had to be done; there were too many things that had to be maintained at both ends. This post describes an approach for running soapUI test cases through JUnit, using the data-driven parts of JUnit with a small extension that makes the reporting really useful (traceable). In short, soapUI test case names are not needed any more: JUnit will pick up all non-disabled test cases in a soapUI test suite, execute them, and use the retrieved soapUI test case name in the report.

  • Minimal JUnit wrapping test classes
  • Superb reporting

Using

  • soapUI APIs – for running tests and retrieving test cases
  • JUnit 4.8.x+ Parameterized

There are many examples on the web that explain how to wrap JUnit test cases around soapUI test cases. The basic approach is to simply map each test case (test case name as a string) onto a single JUnit test case, as sketched after the list below. This is a very fragile approach, since any change in a test case name needs to be reflected in the JUnit test suite.

  • soapUI test cases can be renamed at will
  • soapUI test cases can be enabled/disabled at will

Both of the above will break any JUnit test suite that is not maintained properly.
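
To illustrate the fragile hard-wired style (the test method and soapUI test case name below are hypothetical, and runSoapUITestCase stands in for a runner such as the one shown at the end of this post):

// Fragile: the string literal must be kept in sync with the soapUI project by hand
@Test
public void transferFunds() {
  assertTrue(runSoapUITestCase("Transfer Funds TestCase"));
}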

JUnit 4 data-driven

JUnit 4.x comes with a feature where tests can be Parameterized, or in other words data-driven. By using the Parameterized functionality it is rather trivial to extract all test cases to be executed and feed the test case names as arguments to the test class. This approach makes the JUnit test classes minimal and easy to maintain, since all test cases are read from the soapUI project files.

Sounds superb out of the box, but things are seldom as fancy as they first appear. JUnit lacks a proper mechanism for naming in the test reporting; all executed tests simply get names in the following format.


package.testclass
  \-<JUnit test case name>[0]
  \-<JUnit test case name>[1]
  \-<JUnit test case name>[2]
  \- ...

Not very useful, since this makes tracing back to the actual soapUI test case rather annoying.

So how do we get what we want in place, starting from the end?

JUnit Parameterized extensions

I struggled with this bit for a while before I got it to work. After trying a few different approaches for extending the native JUnit functionality, the easiest approach turned out to be to clone and extend the Parameterized class from the JUnit jar. Download any of the source jars from https://github.com/KentBeck/junit/downloads and fetch org/junit/runners/Parameterized.java.

With the entire class code available, the extension was trivial: simply edit the strings returned from the getName and testName methods. I chose to use the exact name as given in the soapUI test suite, replacing all whitespace characters with an underscore ‘_’ sign. The soapUI test case name is fed through the JUnit class as described further down.


// My class is named ParameterizedExtended and is an exact clone of Parameterized apart from
// the changes below.

// Interface renamed to avoid any possible conflicts with the original classes in the JUnit distribution
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public static @interface ParametersExtended {
}

@Override
protected String getName() {
  Object[] tcName = fParameterList.get(fParameterSetNumber);
  return String.format("[%s]-%s", fParameterSetNumber,((String)tcName[0]).replaceAll(" ", "_"));
}


@Override
protected String testName(final FrameworkMethod method) {
  Object[] tcName = fParameterList.get(fParameterSetNumber);
  return String.format("%s", ((String)tcName[0]).replaceAll(" ", "_"));
}

The JUnit test class

To enable the data-driven parts, the test class first of all needs to use the @RunWith annotation pointing at your extended Parameterized class. Apart from this, there is a minimum of methods to be implemented: the constructor, the method providing all test cases, the actual test case, and the soapUI runner.

Implementing the test class

Annotations used: @RunWith, @ParametersExtended (from the interface in the extended Parameterized class) and @Test. Every test case gets an instance where a unique test case name is set, based on the collection returned by the getTestCases method. The collection is really a set of String arrays, but in this example only a single String is needed: the name of a soapUI test case.


@RunWith(ParameterizedExtended.class)
public class DemoTestSuite {

  private String testCaseName;
  public DemoTestSuite(String testCaseName) {
    super();
    this.testCaseName = testCaseName;
  }

  private boolean runSoapUITestCase(String testCase) {
    ...
  }

  @ParametersExtended
  public static Collection<String[]> getTestCases() {
    ...
  }

  @Test
  public void TC_SOAP() {
    assertTrue(runSoapUITestCase(this.testCaseName));
  }
}

Fetching all the soapUI test cases

The getTestCases method looks for test cases in the soapUI project and returns all test cases present in a given test suite, ignoring all disabled ones.


public static Collection<String[]> getTestCases() {
  final ArrayList<String[]> testCases = new ArrayList<String[]>();
  WsdlProject soapuiProject = new WsdlProject("PROJECT_FILE_NAME");
  WsdlTestSuite wsdlTestSuite = soapuiProject.getTestSuiteByName("TEST_SUITE_NAME");
  List<TestCase> testCaseList = wsdlTestSuite.getTestCaseList();

  for (TestCase tc : testCaseList) {
    if (!tc.isDisabled()) {
      testCases.add(new String[] {tc.getName()});
    }
  }
  return testCases;
}

Running the soapUI test case

A basic soapUI runner, as described in the soapUI forums and tutorials. Add it to your framework and kick off some tests.


public static boolean runSoapUITestCase(String testSuite, String testCase) {
  TestRunner.Status exitValue = TestRunner.Status.INITIALIZED;
  WsdlProject soapuiProject = new WsdlProject("PROJECT_FILE_NAME");
  WsdlTestSuite wsdlTestSuite = soapuiProject.getTestSuiteByName(testSuite);
  if (wsdlTestSuite == null) {
    System.err.println("soapUI runner, test suite is null: "+testSuite);
    return false;
  }
  WsdlTestCase soapuiTestCase = wsdlTestSuite.getTestCaseByName(testCase);
  if (soapuiTestCase == null) {
    System.err.println("soapUI runner, test case is null: " + testCase);
    return false;
  }
  soapuiTestCase.setDiscardOkResults(true);
  WsdlTestCaseRunner runner = soapuiTestCase.run(new PropertiesMap(), false);
  exitValue = runner.getStatus();

  System.out.println("soapUI test case ended ('" + testSuite + "':'"+ testCase + "'): " + exitValue);
  if (exitValue == TestRunner.Status.FINISHED) {
    return true;
  } else {
    return false;
  }
}

The final output

With a minimal test class implementation, only the soapUI project file and test suite need to be defined to get an easy-to-maintain JUnit/soapUI framework with the kind of reporting one would expect to be in place.

package.testclass
  \-<soapUI test case name A>[0]
  \-<soapUI test case name B>[1]
  \-<soapUI test case name C>[2]
  \- ...