Test automation don’ts #1 – Separate repositories

Since this is one of those days when frustration is at what feels like an all-time high ‘again’, what better to do than to get it all out. When it comes to test automation there are so many poor decisions out there… yes, many. I’ll try to address some of them here. The first one out is, ta-da:

#1 Do not keep automation code in a separate repository

So what does this mean? Consider the case where all test cases and test framework code live in a nicely structured repository, and the code to be tested resides in an equally professionally handled repository. But what happens when it is time to run the test cases? Are the HEADs of these repositories/branches ‘always’ in sync? Most likely they are not, not even if the codebase is completely ‘branch free’. The end result: most likely broken tests that will cause a lot of waste.

So, for the case where there is one repository for the product code: DO NOT put the automation code in a separate repository, or you’ll end up struggling with failing test cases as soon as the branches get out of sync. On top of this, guess what kind of overhead you will face if one or more teams are working with a feature branch strategy. Let me smile and mention that this is what I struggle with today: broken tests all over the place, and no one has a clue which failing tests are regressions and which ones are expected to fail.

I have seen the separate-repository scenario at several places, and the usual reason is that the system under test itself is spread out over multiple repositories, with the aim of keeping the automated tests in one place supporting all parts of the ‘system’. Not even tests defined as end-to-end tests spanning a full system handle this scenario smoothly. I have been involved in several attempts at resolving the problems without putting the automation code in the same repository; none of them have been fully successful.

Identified problems

  • Test code out of sync with code under test
  • TWICE the number of branches to maintain and work with

Bundling ChromeDriver with your test code

This post exemplifies one way of maintaining the ChromeDriver binaries used for running automated tests with WebDriver and the Chrome browser, without having to update and install ChromeDriver on every node where the tests will be running. ChromeDriver is simply bundled with the running tests and put under source control like all other testware.

Download link
http://code.google.com/p/selenium/wiki/ChromeDriver

Bundling
Start by putting the drivers under the resources folder so that they are picked up by Maven by default.

<project root>/src/main/resources/chromedriver/mac/chromedriver
<project root>/src/main/resources/chromedriver/windows/chromedriver.exe

Implementation
There are of course different drivers for different OS types, and this needs to be handled using the os.name system property.

As per the ChromeDriver usage instructions (here), a system property has to be set pointing to the ChromeDriver server used for bridging to the Chrome browser. We will not point to a fixed location in the file system; instead we’ll get the path using Class.getResource, which enables us to bundle ChromeDriver inside our test framework even when it is packaged into a jar file.

Basically, the following steps should be performed:

  • Determine OS type
  • Get the ChromeDriver resource and make sure it is executable using File.setExecutable(true). This is needed because the execute attribute ‘x’ is stripped when the driver is packaged in a jar on Mac (and presumably on Linux too).
  • Set the “webdriver.chrome.driver” system property.
  • Check that a Chrome installation exists in the default location [OPTIONAL]
private static WebDriver driver = null;
// The ChromeDriver locations under the resources folder
private static final String MAC_DRIVER = "/chromedriver/mac/chromedriver";
private static final String WINDOWS_DRIVER = "/chromedriver/windows/chromedriver.exe";
private static final String errorMessage = "No Chrome installation found in the default location";

public static void setupChromeDriver() {
   // OS type
   if (System.getProperty("os.name").contains("Mac")) {
      File cDriver = new File(Tester.class.getResource(MAC_DRIVER).getFile());

      // Make sure the driver is executable
      if (!cDriver.canExecute()) {
         cDriver.setExecutable(true);
      }
      System.setProperty("webdriver.chrome.driver", Tester.class.getResource(MAC_DRIVER).getFile());

      // Check for the existence of a Chrome installation [OPTIONAL]
      if (!new File("/Applications/Google Chrome.app/Contents/MacOS/Google Chrome").exists()) {
         throw new RuntimeException(errorMessage);
      }
   } else {
      System.setProperty("webdriver.chrome.driver", Tester.class.getResource(WINDOWS_DRIVER).getFile());

      // Check for the existence of a Chrome installation [OPTIONAL]
      if (!new File(System.getProperty("user.home") + "/AppData/Local/Google/Chrome/Application/chrome.exe").exists()) {
         throw new RuntimeException(errorMessage);
      }
   }

   ChromeOptions options = new ChromeOptions();
   options.addArguments("--start-maximized");
   options.addArguments("--ignore-certificate-errors");
   driver = new ChromeDriver(options);
}
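A note on the jar case: Class.getResource(...).getFile() only yields a directly runnable file path as long as the resources sit unpacked on disk. If the tests do end up running from inside a packed jar, one way to make the bundled driver runnable is to copy it out to a temporary file first and point the system property there; a minimal sketch, not part of the original setup, assuming Java 7’s java.nio.file API:

// Copies a bundled driver out of the jar to a temporary file so that it
// can be executed, and returns the path to set the system property to.
private static String extractDriver(String resourcePath) throws IOException {
   Path tmp = Files.createTempFile("chromedriver", "");
   try (InputStream in = Tester.class.getResourceAsStream(resourcePath)) {
      Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
   }
   tmp.toFile().setExecutable(true);
   return tmp.toString();
}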

Test case example
Pretty straightforward from here: set up WebDriver through the method implemented above and run a simple open-page test to see that things work out.

private static WebDriver driver = null;
public static void setupChromeDriver(){
   ...
}

@BeforeClass
public static void setupTestClass() throws Exception {
   setupChromeDriver();
}

@Test
public void demoTestCase() throws Exception {
   driver.get("http://code.google.com/p/selenium/wiki/ChromeDriver");
   Thread.sleep(1000);
}
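To avoid leaving Chrome instances running after the suite has finished, it is also a good idea to quit the driver when the test class is done; a small addition not shown in the original example:

@AfterClass
public static void teardownTestClass() throws Exception {
   driver.quit();
}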

JUnit @Rule for printing test case start and end information

If you are watching the test execution trace during run-time and are missing information about when a test case starts and ends, this is easily fixed by adding a JUnit rule that prints the information.

[TEST START] tc_shallPass
stuff printed during execution
[TEST ENDED] Time elapsed: 1.835 sec

The start and end tags are generated automatically, so there is no need to add any test-case-specific printing.

@Test
public void tc_shallPass() {
	System.out.println("stuff printed during execution");
	assertTrue(true);
}

To hook up the rule to all test cases in a class, it has to be declared and instantiated first.

public class TestClass {
	@Rule
	public TestCasePrinterRule pr = new TestCasePrinterRule(System.out);
	
	// @Test
	// ...
}

So far so good, but how is this “easily fixed” rule implemented then? Pretty straightforward: implement org.junit.rules.TestRule and use a private class extending org.junit.rules.ExternalResource. The example below calculates the execution time of the test case as well as providing the name of the current test case.

During run-time the start message is printed before any @Before annotated methods in the class, but after the @BeforeClass methods. The apply method handles the creation of the start and end tags since it has access to the test case description content, org.junit.runner.Description. The actual printing is done in the overridden before and after methods.

package com.yourcompany.customrules;

import java.io.IOException;
import java.io.OutputStream;
import java.text.DecimalFormat;

import org.junit.rules.ExternalResource;
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class TestCasePrinterRule implements TestRule {

    private OutputStream out = null;
    private final TestCasePrinter printer = new TestCasePrinter();

    private String beforeContent = null;
    private String afterContent = null;
    private long timeStart;
    private long timeEnd;
    
    public TestCasePrinterRule(OutputStream os) {
        out = os;
    }

    private class TestCasePrinter extends ExternalResource {
        @Override
        protected void before() throws Throwable {
            timeStart = System.currentTimeMillis();
            out.write(beforeContent.getBytes());
        }


        @Override
        protected void after() {
            try {
                timeEnd = System.currentTimeMillis();
                double seconds = (timeEnd-timeStart)/1000.0;
                out.write((afterContent+"Time elapsed: "+new DecimalFormat("0.000").format(seconds)+" sec\n").getBytes());
            } catch (IOException ioe) { /* ignore */
            }
        }
    }

    public final Statement apply(Statement statement, Description description) {
        beforeContent = "\n[TEST START] "+description.getMethodName()+"\n"; // description.getClassName() to get class name
        afterContent =  "[TEST ENDED] ";
        return printer.apply(statement, description);
    }
}

Easy way of counting effective lines of code from the console

When looking for nifty one-liners on the web that count the number of non-blank, non-commented lines in my code, I ran into CLOC. If you have a thing for working in the console, CLOC is a really nice tool for counting lines of code in a set of folders or archives. If Sonar is available and configured it is excellent, but sometimes it is simply easier to run things quickly and locally from a console: pick your favourite one-liner, tweak the sed, OR just use CLOC.

Could not be easier
Download the executable, put it in the path and run it for a given folder,

$ cloc <the folder with my code/archive-file>

and you will get the output

http://cloc.sourceforge.net v 1.56  T=43.0 s (35.2 files/s, 7550.3 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
XML                            680          36091              5         149257
HTML                           699          30773              5          79958
Java                           106           3130           4904          16395
Javascript                      18            430            190           2384
CSS                              6            184             47            862
Python                           1              6              5             30
Bourne Shell                     2              2              0              4
DOS Batch                        2              0              0              3
-------------------------------------------------------------------------------
SUM:                          1514          70616           5156         248893
-------------------------------------------------------------------------------

There are many, many flags (http://cloc.sourceforge.net/#Options) that can be used for filtering and tweaking the output, so from now on this is one of the must-have tools on my machine.
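For example, counting only the Java code while skipping the build output folder could look like this (a sketch; see the options page above for the full flag reference):

$ cloc --include-lang=Java --exclude-dir=target src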

Using JUnit @Category and Maven profiles to get stable test suites

It happens over and over again, no matter what project I am in. Keeping the test suites green and passing is a …

The blind eye

The one sneaky little test case that starts failing can prove to be a real killer for any automation approach. More than once it has been shown that as soon as a single test case breaks a test suite, things start to go downhill, fast. Ok, there could be acceptable reasons for the test case to be broken, e.g. issues in the system under test or in the test framework that are not bugs.

The usual decision is to leave the test case as is for now, since we are all aware that it will fail for a while. BAD choice!!! Attention on the failing test suite moves from slight to abandoned, and if other tests in the same test suite start failing for various reasons, it will most likely go undetected.

What can/should be done here

  • Disable the test case and enable it again when it is back on track (see the @Ignore example below)
  • Keep the test case executing, and failing, but in another suite/run, keeping the MUST always pass test suites all green

The latter is the way to go.
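For reference, the first option is usually just a matter of JUnit’s @Ignore annotation (the ticket reference below is purely hypothetical):

@Ignore("Quarantined until BUG-123 is fixed") // hypothetical ticket id
@Test
public void aCurrentlyFailingTestCase() {
   // ...
}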

Maven profiles to your rescue

Using Maven profiles it is easy to group your JUnit test classes and test cases into different categories. Usually this is used for grouping tests into slow, fast and browser-less tests and so forth. The approach can be stretched even further by using a grouping that reflects the current state of the test case. There do not have to be many categories, but at least two are needed:

  • Stable
  • Unstable/Maturing

Stable is self-explanatory, while Unstable/Maturing covers everything that is NOT always PASSING. Test cases might be unstable because of:

  • timing issues
  • tested functionality is still under development (during sprints)
  • there is a low priority bug that will be fixed later
  • unknown issues causing it to sporadically fail
  • waiting to get a STABLE stamp

Use case 1
The last item is an approach that can be used to be really sure that the stable test suites do not get polluted. Any new test case needs to run a set number of times (e.g. 20) in a test suite and must pass every time before being stamped as a reliable test case.

Use case 2
When a bug surfaces that causes a test case to fail, it might sometimes be the case that it won’t be fixed for some time. Instead of totally disabling the test case, or leaving it to fail an otherwise stable test suite, it should be moved to an unstable test suite. This puts the test case in quarantine until the bug has been fixed, while making sure it still compiles and is executable.

Categorizing tests using JUnit @Category annotation
Use the JUnit @Category annotation in front of your test cases or test classes.

@Category(x.y.z.Stable.class)
@Test
public void aTestCase() {
  // void
}

// 
// Alternatively an unstable test case
//

@Category(x.y.z.Unstable.class)
@Test
public void anotherTestCase() {
  // void
}
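For completeness, the category types referenced above are nothing more than empty marker interfaces; a minimal sketch, assuming the x.y.z package from the examples:

package x.y.z;

// Empty marker interface used only as a JUnit @Category label.
// Unstable and Maturing are defined the same way, one type per file.
public interface Stable {
}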

Splitting test execution using Maven profiles

Configure your Surefire plugin to include and exclude certain test classes, as well as to determine the set of JUnit @Category groups included in the test run. Set up your pom with profiles as exemplified below.

<build>	
	<plugins>	
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-surefire-plugin</artifactId>
			<version>2.12.2</version>
			<dependencies>
				<dependency>
					<groupId>org.apache.maven.surefire</groupId>
					<artifactId>surefire-junit47</artifactId>
					<version>2.12.2</version>
				</dependency>
			</dependencies>
			<configuration>
				<testFailureIgnore>true</testFailureIgnore>
				<excludes>
					<exclude>${exclude.tests}</exclude>
				</excludes>
				<includes>
					<include>${include.tests}</include>
				</includes>
				<groups>${testcase.groups}</groups>
				<forkMode>never</forkMode>
				<runOrder>random</runOrder>
			</configuration>
		</plugin>
	</plugins>
</build>

The testcase.groups property defines the set of groups (JUnit @Category types) to include when running tests with the given profile.

<profiles>
	<profile>
		<id>stable-tests</id>
		<properties>
			<exclude.tests>**/x/**/*.java</exclude.tests>
			<include.tests>**/y/**/*.java</include.tests>
			<testcase.groups>x.y.z.Stable</testcase.groups>
		</properties>
	</profile>
	<profile>
		<id>unstable-tests</id>
		<properties>
			<exclude.tests>**/x/**/*.java</exclude.tests>
			<include.tests>**/y/**/*.java</include.tests>
			<testcase.groups>x.y.z.Unstable,x.y.z.Maturing</testcase.groups>
		</properties>
	</profile>
</profiles>
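Running a given suite is then just a matter of activating the corresponding profile:

$ mvn test -Pstable-tests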

Test suite pollution – missing term in test glossary

Test suite pollution is when one or more automated test cases are deliberately allowed to fail in a test suite. This usually leads to the test suite running unattended since errors are expected, a very unfortunate situation during development. Newly surfacing defects can easily be missed, and the longer the test suite is ignored, the more cumbersome recovering it to a stable state becomes.

This is just an attempt to clarify one of the ‘buzz’ terms I preach to my team. Please correct me if you think I am out of line.

Packaging and referencing images for Sikuli based tests in artifacts

This post describes one approach for packaging/bundling image resources used for Sikuli based automated test cases.

  • Bundling images as resources
  • Retrieving/using image resources

That the images used when automating tests with Sikuli need to be versioned should be rather obvious. Naturally, the majority of the pictures used in test cases reflect parts of the system under test and hence shall be treated and handled as part of the test code, synchronized with what we are actually testing against. This is usually solved by having the test code in the same repository as the code under test, though that might not always be the case.

In a current test setup it was not possible to keep the test code and images together with the code under test, so the test code had to be distributed as a versioned artifact/jar-file.

Test artifact

One of the easier ways of avoiding a lot of test case maintenance is to wrap the interface to the system under test into an artifact/API/jar-file. This allows the artifact to be shared between different source control repositories and included in automated tests residing elsewhere (a usual scenario when there are multiple trunks and components in a system). For the artifact approach, the images used by the Sikuli based tests need to be part of this artifact.

Bundling the images
Out of the box really, well, if you are using Maven. Just put the images under resources and they get bundled correctly in the jar. The location in the repository should be something like this:


src/main
  \- java
      \- org.project.example
  \- resources/images
      \- theImageToLookFor.PNG
      \- otherImage.PNG
      \- ...

That is all there is to it; now distribute the artifact freely. A complete pom for building a Sikuli test artifact including the images is posted at the end.

Accessing the images in the test code
To be able to reference and use the images in your tests, there needs to be functionality in the artifact for getting the image resources. So implement a generic getter method that picks up the images correctly from inside the jar/artifact during run-time.

import java.awt.image.BufferedImage;
import java.io.IOException;

import javax.imageio.ImageIO;

// Sikuli classes; adjust the package to match the Sikuli version in use
import org.sikuli.script.*;

class TestArtifact {
	private final Screen screen = new Screen();

	// The image file name must start with '/' and the image name is
	// case sensitive, so be thorough when using this.
	private Pattern getPattern(String fileName) throws IOException {
		BufferedImage image = ImageIO.read(getClass().getResource(fileName));
		return new Pattern(image);
	}

	// Example method using an image: finds it on the screen and clicks it.
	public void clickOnImage(String imageName) throws Exception {
		Match m = screen.find(getPattern(imageName));
		screen.click(m);
	}
}

Usage inside test cases
Note that when referencing images the path has to start with ‘/’ (forward slash) to make sure the image is picked up correctly from within the artifact.

class Test {
	private static TestArtifact sikuli = new TestArtifact();
	
	@Test
	public void shallClickImage() throws Exception {
		sikuli.clickOnImage("/images/theImageToLookFor.PNG");
	}
}

Example of artifact pom

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.yourcompany.automation.apis</groupId>
    <artifactId>sikuliwrapper</artifactId>
    <packaging>jar</packaging>
    <name>Sikuli wrapper test artifact</name>
    <version>1.0.0-SNAPSHOT</version>
    <inceptionYear>2012</inceptionYear>

    <dependencies>
        <dependency>
            <groupId>org.sikuli</groupId>
            <artifactId>sikuli-script</artifactId>
            <version>0.10.2</version>
            <type>jar</type>
        </dependency>
    </dependencies>

</project>