JUnit: Excluding data driven tests

Once in a while there is that single test case that cannot be executed for certain data when running data driven tests. This can be a real annoyance, and it is tempting to break those tests out into a separate test class. Don't do that, and stay away from shortcuts such as:

  • False test statistics — reporting a test as passing that has never been run
  • Ignoring tests, e.g. using conditional ignores or JUnit's assume

A simple way to exclude a test from being executed when running data driven tests using the Parameterized.class is to set up a conditional JUnit rule: a test rule that, based on the current test data in the data driven loop, runs or excludes a specific test.

Consider the scenario below

We run all the tests below with the samples “a”, “b” and “c”, but know that test02 will not work for the sample “b”, so we will have to deal with that in a clever way.

@RunWith(Parameterized.class)
public class Test {

  private String sample;

  public Test(String sample) {
    this.sample = sample;
  }

  @Parameters
  public static Collection<Object[]> generateSamples() {
    final List <Object[]> samples = new ArrayList<Object[]>();
    samples.add(new Object[]{"a"});
    samples.add(new Object[]{"b"});
    samples.add(new Object[]{"c"});
    return samples;
  }

  @Test
  public void test01() {
    // Works with sample "a", "b", "c"
  }

  @Test
  public void test02() {
    // Works with sample "a", "c" BUT NOT "b"
  }

  @Test
  public void test03() {
    // Works with sample "a", "b", "c"
  }
}

Marking up with a test annotation

Step one is to mark up a test case so that we know it will only run for certain samples. We will do this by adding an annotation; in this case we call it Samples.

@Test
public void test01() { ... }

@Samples({"a","c"})
@Test
public void test02() { ... }

@Samples({"a","b","c"})
@Test
public void test03() { ... }

Using the test annotation Samples we can start filtering at run-time whether to execute a test case or not. A test without the annotation will always run, no matter what data is returned in the data driven loop. If the annotation is in place, the test will only run for the samples that match the values in the annotation. In the case of test02 it will only run for samples “a” and “c”.

The test annotation

Adding the annotation is straightforward.

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Samples {
    String [] value();
}
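The filtering decision the rule will make can be tried out in isolation with plain Java reflection, without any JUnit machinery. The sketch below redefines the Samples annotation locally and uses a helper method (shouldRun, my own name) to mimic the rule's logic: run when there is no annotation, or when the current sample is listed in it.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.Arrays;

public class SamplesDemo {

  // Same shape as the Samples annotation defined above
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.METHOD)
  public @interface Samples {
    String[] value();
  }

  public void test01() {} // no annotation: runs for every sample

  @Samples({"a", "c"})
  public void test02() {}

  // The decision the rule makes: run when unannotated, or when the
  // current parameterized sample is among the annotation's values
  public static boolean shouldRun(String methodName, String sample) {
    try {
      Method m = SamplesDemo.class.getMethod(methodName);
      Samples annotation = m.getAnnotation(Samples.class);
      return annotation == null || Arrays.asList(annotation.value()).contains(sample);
    } catch (NoSuchMethodException e) {
      throw new IllegalArgumentException(e);
    }
  }

  public static void main(String[] args) {
    System.out.println(shouldRun("test01", "b")); // true — no annotation
    System.out.println(shouldRun("test02", "b")); // false — "b" is not listed
    System.out.println(shouldRun("test02", "a")); // true — "a" is listed
  }
}
```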

Adding the exclusion rule

Worth knowing is that JUnit rules are always created before the test class constructor body runs. This means a rule cannot know the test data sample of the current data driven iteration by itself. The current data driven value is always passed to the test class constructor (Parameterized.class), so the rule we are going to implement needs to be given that information at that point as well.

@RunWith(Parameterized.class)
public class Test {

  @Rule
  public OnlyRunForSampleRule rule = new OnlyRunForSampleRule();
  private String sample;

  public Test(String sample) {
    this.sample = sample;
    rule.setSample(sample); // <<-- HERE
  }

With the current data sample and the Samples annotation values known at run-time, we define a simple rule that excludes test cases if they are not supposed to run for certain test data.

public class OnlyRunForSampleRule implements TestRule {

  private String sample;

  @Override
  public Statement apply(Statement s, Description d) {
    Samples annotation = d.getAnnotation(Samples.class);
    // No annotation/samples matching, always run
    if (annotation == null) {
      return s;
    }
    // Match! One sample value matches current parameterized sample value
    else if (Arrays.asList(annotation.value()).contains(sample)) {
      return s;
    }
    // No match in the samples annotation, skipping
    return new Statement() {
      @Override
      public void evaluate() throws Throwable {}
    };
  }

  public void setSample(String sample) {
    this.sample = sample;
  }
}

The rule above is defined to run tests on known samples. Creating the inverse of this rule is simply done by returning the empty Statement from the else-if branch that checks for matches in the Samples annotation, and returning the original Statement otherwise.
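The inversion can be illustrated with a small stand-alone sketch (plain Java, no JUnit dependency; the method names are mine). The forward predicate matches the rule above; the inverse one merely negates the membership check.

```java
import java.util.Arrays;

public class RuleInversionDemo {

  // Forward rule: run when unannotated, or when the sample is listed
  public static boolean runsForward(String[] annotated, String sample) {
    return annotated == null || Arrays.asList(annotated).contains(sample);
  }

  // Inverse rule: run when unannotated, or when the sample is NOT listed
  public static boolean runsInverse(String[] annotated, String sample) {
    return annotated == null || !Arrays.asList(annotated).contains(sample);
  }

  public static void main(String[] args) {
    String[] samples = {"a", "c"}; // the @Samples({"a","c"}) values on test02
    System.out.println(runsForward(samples, "b")); // false — excluded
    System.out.println(runsInverse(samples, "b")); // true — now included
  }
}
```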


Data-driven soapUI JUnit integration with readable test reports

I have never really been satisfied with the way soapUI and JUnit integration had to be done. There were too many things that had to be maintained at both ends. This posting describes an approach for running soapUI test cases through JUnit using the data driven parts of JUnit, with a small extension that makes the reporting really useful (traceable). In short, soapUI test case names are not needed any more: JUnit will pick up all non-disabled test cases in a soapUI test suite, execute them, and use the retrieved soapUI test case name in the report.

  • Minimal JUnit wrapping test classes
  • Superb reporting

Using

  • soapUI APIs – for running tests and retrieving test cases
  • JUnit 4.8.x+ Parameterized

There are many examples on the web that explain how to wrap JUnit test cases around soapUI test cases. The basic approach is to simply map all test cases (test case names as strings) into single JUnit test cases. This is a very fragile approach since any change in a test case name needs to be reflected in the JUnit test suite.

  • soapUI test cases can be renamed at will
  • soapUI test cases can be enabled/disabled at will

Both of the above will break any JUnit test suite if not maintained properly.

JUnit 4 data-driven

JUnit 4.x comes with a feature where tests can be Parameterized, or in other words data-driven. By using the Parameterized functionality it is rather trivial to extract all test cases to be executed and feed the test case names as arguments to the test classes. This approach makes the JUnit test classes minimal and easy to maintain, since all test cases are read from the soapUI project files.

Sounds superb out of the box, but things are seldom as fancy as they first appear. JUnit lacks a proper mechanism for test reporting; all executed tests simply get names in the following format.


package.testclass
  \-<JUnit test case name>[0]
  \-<JUnit test case name>[1]
  \-<JUnit test case name>[2]
  \- ...

Not very useful, since this makes tracing back to the actual soapUI test case rather annoying.

So how to get what we want in place, starting from the end?

JUnit Parameterized extensions

I struggled with this bit for a while before I got it to work. After trying a few different approaches for extending the native JUnit functionality the easiest approach turned out to be to clone and extend the Parameterized class from the JUnit jar. Download any source jars from https://github.com/KentBeck/junit/downloads and fetch the org/junit/runners/Parameterized.java.

With the entire class code available, the extension was trivial: simply edit the strings returned from the getName and testName methods. I chose to use the exact name as given in the soapUI test suite, replacing all whitespace characters with an underscore ‘_’ sign. The soapUI test case name is fed through the JUnit class as described further down.
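The renaming itself is easy to verify on its own. The sketch below mirrors what the overridden getName method produces (the helper name junitName is mine), using the same format string and space-to-underscore replacement as the code further down:

```java
public class TestNameDemo {

  // Mirrors the overridden getName: parameter set index plus the soapUI
  // test case name with spaces replaced by underscores
  public static String junitName(int parameterSetNumber, String soapUiTestCaseName) {
    return String.format("[%s]-%s", parameterSetNumber,
        soapUiTestCaseName.replaceAll(" ", "_"));
  }

  public static void main(String[] args) {
    System.out.println(junitName(0, "Login with valid user")); // [0]-Login_with_valid_user
  }
}
```

Note that the replacement in the original code only targets the space character; a pattern like "\\s" would be needed to cover tabs and other whitespace too.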


// My class is named ParameterizedExtended and is an exact clone of Parameterized apart from
// the changes below.

// Annotation renaming to avoid any possible conflicts with the original classes in the JUnit distribution
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public static @interface ParametersExtended {
}

@Override
protected String getName() {
  Object[] tcName = fParameterList.get(fParameterSetNumber);
  return String.format("[%s]-%s", fParameterSetNumber,((String)tcName[0]).replaceAll(" ", "_"));
}


@Override
protected String testName(final FrameworkMethod method) {
  Object[] tcName = fParameterList.get(fParameterSetNumber);
  return String.format("%s", ((String)tcName[0]).replaceAll(" ", "_"));
}

The JUnit test class

For enabling the data-driven parts, the test class first of all needs the @RunWith annotation pointing at your extended Parameterized class. Apart from this there is a minimum of methods to implement: the constructor, the method for getting all test cases, the actual test case and the soapUI runner.

Implementing the test class

Annotations used: @RunWith, @ParametersExtended (from the annotation in the extended Parameterized class) and @Test. Every test case gets an instance where a unique test case name is set, based on the collection returned by the getTestCases method. The collection is really a set of String arrays, but in this example only a single String is needed: the name of a soapUI test case.


@RunWith(ParameterizedExtended.class)
public class DemoTestSuite {

  private String testCaseName;
  public DemoTestSuite(String testCaseName) {
    super();
    this.testCaseName = testCaseName;
  }

  private boolean runSoapUITestCase(String testCase) {
    ...
  }

  @ParametersExtended
  public static Collection<String[]> getTestCases() {
    ...
  }

  @Test
  public void TC_SOAP() {
    assertTrue(runSoapUITestCase(this.testCaseName));
  }
}

Fetching all the soapUI test cases

The getTestCases method looks for test cases in the soapUI project and returns all test cases present in a given test suite ignoring all disabled ones.


public static Collection<String[]> getTestCases() {
  final ArrayList<String[]> testCases = new ArrayList<String[]>();
  WsdlProject soapuiProject = new WsdlProject("PROJECT_FILE_NAME");
  WsdlTestSuite wsdlTestSuite = soapuiProject.getTestSuiteByName("TEST_SUITE_NAME");
  List<TestCase> testCaseStrings = wsdlTestSuite.getTestCaseList();

  for (TestCase ts : testCaseStrings) {
    if (!ts.isDisabled()) {
      testCases.add(new String[] {ts.getName()});
    }
  }
  return testCases;
}
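The enabled/disabled filtering above can be demonstrated without the soapUI API at all. In the sketch below a small FakeTestCase class (my own stand-in for soapUI's TestCase) carries just a name and a disabled flag, and the same loop collects the names of all enabled test cases:

```java
import java.util.ArrayList;
import java.util.List;

public class TestCaseFilterDemo {

  // Minimal stand-in for soapUI's TestCase: just a name and a disabled flag
  public static class FakeTestCase {
    final String name;
    final boolean disabled;

    public FakeTestCase(String name, boolean disabled) {
      this.name = name;
      this.disabled = disabled;
    }
  }

  // Same filtering as getTestCases: keep the names of all enabled test cases
  public static List<String[]> enabledTestCases(List<FakeTestCase> all) {
    List<String[]> testCases = new ArrayList<>();
    for (FakeTestCase tc : all) {
      if (!tc.disabled) {
        testCases.add(new String[] {tc.name});
      }
    }
    return testCases;
  }

  public static void main(String[] args) {
    List<FakeTestCase> all = List.of(
        new FakeTestCase("TC login", false),
        new FakeTestCase("TC legacy", true),   // disabled: skipped
        new FakeTestCase("TC search", false));
    List<String[]> enabled = enabledTestCases(all);
    System.out.println(enabled.size());    // 2
    System.out.println(enabled.get(1)[0]); // TC search
  }
}
```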

Running the soapUI test case

A basic soapUI runner, as described in any soapUI forum or tutorial; add it to your framework and kick off some tests.


public static boolean runSoapUITestCase(WsdlProject soapuiProject, String testSuite, String testCase) {
  // The project is passed in by the caller; no need to load it again here
  TestRunner.Status exitValue = TestRunner.Status.INITIALIZED;
  WsdlTestSuite wsdlTestSuite = soapuiProject.getTestSuiteByName(testSuite);
  if (wsdlTestSuite == null) {
    System.err.println("soapUI runner, test suite is null: "+testSuite);
    return false;
  }
  WsdlTestCase soapuiTestCase = wsdlTestSuite.getTestCaseByName(testCase);
  if (soapuiTestCase == null) {
    System.err.println("soapUI runner, test case is null: " + testCase);
    return false;
  }
  soapuiTestCase.setDiscardOkResults(true);
  WsdlTestCaseRunner runner = soapuiTestCase.run(new PropertiesMap(), false);
  exitValue = runner.getStatus();

  System.out.println("soapUI test case ended ('" + testSuite + "':'"+ testCase + "'): " + exitValue);
  return exitValue == TestRunner.Status.FINISHED;
}

The final output

With a minimal test class implementation, only the soapUI project file and test suite need to be defined to get an easy-to-maintain JUnit/soapUI framework with the kind of reporting one would expect to be in place.

package.testclass
  \-<soapUI test case name A>[0]
  \-<soapUI test case name B>[1]
  \-<soapUI test case name C>[2]
  \- ...