Category: Think Tank (22 posts)

May 09 2018

Automatic Test Generation with DSpot

DSpot is a test amplification tool that automatically generates new tests from existing test suites. It's being developed as part of the STAMP European research project (in which XWiki SAS participates, represented by me).

Very quickly, DSpot works as follows:

dspot.png

  • Step 1: Find an existing test and remove some API calls. Also remove assertions (but keep the calls on the code being tested). Add logs in the source to capture object states.
  • Step 2: Execute the test and add assertions that validate the captured states.
  • Step 3: Run a selector to decide which tests to keep and which ones to discard. By default PITest/Descartes is used, meaning that only tests killing mutants that the original test didn't kill are kept. It's also possible to use other selectors. For example a Clover selector exists that will keep the tests which generate more coverage than the original test.
  • Step 4: Repeat (with different API calls removed) or stop if the result is good enough.

For full details, see this presentation by Benjamin Danglot (main contributor of DSpot).

Today I tested the latest version of DSpot (I built it from its sources to have the latest code) and tried it on several modules of xwiki-commons.

FTR here's what I did to test it:

  • Cloned DSpot and built it with Maven by running mvn clean package -DskipTests. This generated a dspot/target/dspot-1.1.1-SNAPSHOT-jar-with-dependencies.jar JAR.
  • For each module on which I tested it, I created a dspot.properties file. For example for xwiki-commons-core/xwiki-commons-component/xwiki-commons-component-api, I created the following file:
    project=../../../
    targetModule=xwiki-commons-core/xwiki-commons-component/xwiki-commons-component-api
    src=src/main/java/
    srcResources=src/main/resources/
    testSrc=src/test/java/
    testResources=src/test/resources/
    javaVersion=8
    outputDirectory=output
    filter=org.xwiki.*

    Note that project is pointing to the root of the project.

  • Then I executed: java -jar /some/path/dspot/dspot/target/dspot-1.1.1-SNAPSHOT-jar-with-dependencies.jar path-to-properties dspot.properties
  • Then checked the results in output/* to see if new tests had been generated

I had to test DSpot on 6 modules before getting any result, as follows:

  • xwiki-commons-core/xwiki-commons-cache/xwiki-commons-cache-infinispan/: No new test generated by DSpot. One reason is that DSpot modifies the test sources, and the tests in this module extend abstract test classes located in other modules; DSpot didn't touch those and thus was not able to modify them to generate new tests.
  • xwiki-commons-core/xwiki-commons-component/xwiki-commons-component-api/: No new test generated by DSpot.
  • xwiki-commons-core/xwiki-commons-component/xwiki-commons-component-default/: No new test generated by DSpot.
  • xwiki-commons-core/xwiki-commons-component/xwiki-commons-component-observation/: No new test generated by DSpot.
  • xwiki-commons-core/xwiki-commons-context/: No new test generated by DSpot.
  • xwiki-commons-core/xwiki-commons-crypto/xwiki-commons-crypto-cipher/: Eureka! One test was generated by DSpot emoticon_smile

Here's the original test:

@Test
public void testRSAEncryptionDecryptionProgressive() throws Exception
{
    Cipher cipher = factory.getInstance(true, publicKey);
    cipher.update(input, 0, 17);
    cipher.update(input, 17, 1);
    cipher.update(input, 18, input.length - 18);
   byte[] encrypted = cipher.doFinal();
    cipher = factory.getInstance(false, privateKey);
    cipher.update(encrypted, 0, 65);
    cipher.update(encrypted, 65, 1);
    cipher.update(encrypted, 66, encrypted.length - 66);
    assertThat(cipher.doFinal(), equalTo(input));

    cipher = factory.getInstance(true, privateKey);
    cipher.update(input, 0, 15);
    cipher.update(input, 15, 1);
    encrypted = cipher.doFinal(input, 16, input.length - 16);
    cipher = factory.getInstance(false, publicKey);
    cipher.update(encrypted);
    assertThat(cipher.doFinal(), equalTo(input));
}

And here's the new test generated by DSpot, based on this test:

@Test
public void testRSAEncryptionDecryptionProgressive_failAssert2() throws Exception {
--> try {
        Cipher cipher = factory.getInstance(true, publicKey);
        cipher.update(input, 0, 17);
        cipher.update(input, 17, 1);
        cipher.update(input, 18, ((input.length) - 18));
       byte[] encrypted = cipher.doFinal();
        cipher = factory.getInstance(false, privateKey);
        cipher.update(encrypted, 0, 65);
        cipher.update(encrypted, 65, 1);
        cipher.update(encrypted, 66, ((encrypted.length) - 66));
        cipher.doFinal();
        CoreMatchers.equalTo(input);
        cipher = factory.getInstance(true, privateKey);
        cipher.update(input, 0, 15);
        cipher.update(input, 15, 1);
        encrypted = cipher.doFinal(input, 16, ((input.length) - 16));
        cipher = factory.getInstance(false, publicKey);
        cipher.update(encrypted);
        cipher.doFinal();
        CoreMatchers.equalTo(input);
-->     cipher.doFinal();
-->     CoreMatchers.equalTo(input);
-->     org.junit.Assert.fail("testRSAEncryptionDecryptionProgressive should have thrown GeneralSecurityException");
--> } catch (GeneralSecurityException eee) {
--> }
}

I've highlighted the parts that were added with the --> prefix. In short, DSpot found that calling cipher.doFinal() a second time generates a GeneralSecurityException, and that this kills some mutants that were not killed by the original test. Note that calling doFinal() resets the cipher, which explains why the second call generates an exception.

Looking at the source code, we can see:

@Override
public byte[] doFinal(byte[] input, int inputOffset, int inputLen) throws GeneralSecurityException
{
   if (input != null) {
       this.cipher.processBytes(input, inputOffset, inputLen);
   }
   try {
       return this.cipher.doFinal();
   } catch (InvalidCipherTextException e) {
       throw new GeneralSecurityException("Cipher failed to process data.", e);
   }
}

Haha... DSpot was able to automatically generate a new test that creates a state making the code go into the catch block.

Note that it would have been even nicer if DSpot had put an assert on the exception message.

Then I wanted to verify if the test coverage had increased so I ran Jacoco before and after for this module:

  • Before: 70.5%
  • After: 71.2%

Awesome!

Conclusions:

  • DSpot was able to improve the quality of our test suite automatically and as a side effect it also increased our test coverage (new tests won't always increase coverage: DSpot's main intent, when executed with PIT/Descartes, is to increase test quality, i.e. the suite's ability to kill more mutants).
  • It takes quite a long time to execute: globally, on those 6 modules, it took about 15 minutes to build them with DSpot/PIT/Descartes (versus about 1-2 minutes normally).
  • DSpot doesn't generate a lot of tests: one test generated out of the hundreds of tests amplified (in this example session).
  • IMO one good strategy to use DSpot is the following (a minimal pipeline sketch follows this list):
    • Create a Jenkins pipeline job which executes DSpot on your code
    • Since it's time consuming, run it only every month or so
    • Have the pipeline automatically commit the generated tests to your SCM in a different test tree (e.g. src/test-dspot/)
    • Modify your Maven build to use the Build Helper Maven plugin to add a new test source tree so that your tests run on both your manually-written tests and the ones generated by DSpot
    • I find this an interesting strategy because it's automated and unattended. If you have to manually execute DSpot, find some generated tests and then manually incorporate them (with rewriting) into your existing test suite, it's very tedious and time-consuming, and IMO the ratio of time spent vs added value is too low to be interesting.
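
To make this strategy more concrete, here's a minimal sketch of what such a monthly Jenkinsfile could look like. This is not a real XWiki pipeline: the cron schedule, output location and commit/push steps are assumptions used for illustration, and the DSpot invocation simply mirrors the command shown earlier in this post.

pipeline {
  agent any
  // Run roughly once a month since a DSpot execution takes a long time.
  triggers { cron('H H 1 * *') }
  stages {
    stage('Amplify tests with DSpot') {
      steps {
        // Assumption: a dspot.properties file is committed in the module to amplify.
        sh 'java -jar /some/path/dspot/dspot/target/dspot-1.1.1-SNAPSHOT-jar-with-dependencies.jar path-to-properties dspot.properties'
      }
    }
    stage('Commit the generated tests') {
      steps {
        // Assumption: the generated tests go to a separate source tree (src/test-dspot/)
        // that the Maven build registers with the Build Helper Maven Plugin.
        sh '''
          cp -R output/* src/test-dspot/
          git add src/test-dspot
          git commit -m "[Misc] Add tests generated by DSpot" || true
          git push origin master
        '''
      }
    }
  }
}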

WDYT?

EDIT: If you want to know more, check the presentation I gave at Devoxx France 2018 about New Generation of Tests.

Mar 20 2018

QDashboard & SonarQube

Here's a story from the past... emoticon_smile (it happened 10 years ago).

Arnaud Heritier just dug up some old page on the Maven wiki that I had created back in 2005/2006.

I had written the Maven1 Dashboard plugin and when Maven2 came out I thought about rewriting it with a new more performant architecture and with more features.

At the time, I wanted to start working on this full time and I proposed the idea to several companies to see if they would sponsor its development (Atlassian, Cenqua, Octo Technology). They were all interested but for various reasons, I ended up joining the XWiki SAS company to work on the XWiki open source project.

So once I knew I wouldn't be working on this, I shared my idea publicly on the Maven wiki to see if anyone else would be interested to implement it.

Back then, I was happily surprised to see that Freddy Mallet actually implemented the idea:

 In September 2006, I've discovered this page written by Vincent which has directly inspired the launch of an Open Source project. One year later we are pleased to announce that Sonar 1.0 release is now available. The missions of Sonar are to :
 * Centralize and share quality information for all projects under continuous quality control
 * Show you which ones are in pain
 * Tell you what are the diseases

 To do that, Sonar aggregates metrics from Checkstyle, PMD, Surefire, Cobertura / Clover and JavaNCSS. You can take a look to the screenshots gallery to get a quick insight.

 Have fun.

 Freddy

To give you the full picture, I'm now publishing something I never made public: the slides that I wrote when I wanted to develop the idea:

QDashboard and SonarQube have several differences. An important one is that QDashboard was meant to have several input sources such as mailing lists, the issue tracker, etc. At the moment SonarQube derives metrics mostly from the SCM. But I'm sure that the SonarQube guys have a lot of ideas in store for the future emoticon_wink

In 2013 I got a very nice present from SonarSource: a T-shirt recognizing me as #0 "employee" in the company, as the "Inceptor". That meant a lot to me.

tshirt.jpg

Several years later, SonarQube has come a long way and I'm in awe of the great, successful product it has become. Congrats guys!

Now, on to the following 10 years!

Onboarding Brainstorming

I had the honor of being invited to a seminar on "Automatic Quality Assurance and Release" at Dagstuhl by Benoit Baudry (we collaborate on the STAMP research project). Our seminar was organized as an unconference and one session I proposed and led was the "Onboarding" one described below. The following persons participated in the discussion: V. Massol, D. Gagliardi, B. Danglot, H. Wright, B. Baudry.

Onboarding Discussions

When you're developing a project (be it some internal project or some open source project) one key element is how easy it is to onboard new users to your project. For open source projects this is essential to attract more contributors and have a lively community. For internal projects, it's useful for new employees, or newcomers in general, to be able to get up to speed rapidly on your project.

This brainstorming session was about ideas of tools and practices to use to ease onboarding.

Here's the list of ideas we had (in no specific order):

  • 1 - Tag issues in your issue tracker as onboarding issues to make it easy for newcomers to get started on something easy and succeed quickly. This also validates that they're able to use your software.
  • 2 - Have a complete package of your software that can be installed and used as easily as possible. It should just work out of the box without having to perform any configuration or additional steps. A good strategy for applications is to provide a Docker image (or a Virtual Machine) with everything setup.
  • 3 - Similarly, provide a packaged development environment. For example you can provide a VM with some preinstalled and configured IDE (with plugins installed and configured using the project's rules). One downside of such an approach is the time it takes to download the VM (which could be several GB in size).
  • 4 - A similar and possibly better approach would be to use an online IDE (e.g. Eclipse Che) to provide a complete prebuilt dev environment that wouldn't even require any downloading. This provides the fastest dev experience you can get. The downside is that if you need to onboard a potentially large number of developers, you'll need significant infrastructure (disk space/CPU) on the server(s) hosting the online IDE, to host all the dev workspaces. This makes this option difficult to implement for open source projects for example. But it's viable and interesting in a company environment.
  • 5 - Obviously having good documentation is a given. However, too many projects still don't provide this, or only provide good user documentation but not good developer documentation, with project practices either not documented at all or only partially documented. Specific ideas:
    • Document the code structure
    • Document the practices for development
    • Develop a tool that supports newcomers by letting them know when they follow / don't follow the rules
    • Good documentation should make its assumptions explicit (e.g. when you read this piece of documentation, I assume that you know X and Y)
    • Have a good system to contribute to the documentation of the project (e.g. a wiki)
    • Different documentation for users and for developers
  • 6 - Have homogeneous practices and tools inside a project. This is especially true in a company environment where you may have various projects, each using its own tools and practices, making it harder to move between projects.
  • 7 - Use standard tools that are well known (e.g. Maven or Docker). That increases the likelihood that a newcomer already knows the tool and is able to develop for your project.
  • 8 - It's good to have documentation about best practices but it's even better if the important "must" rules are enforced automatically by a checking tool (this can be part of the build for example, or part of your IDE setup). For example instead of saying "this @Unstable annotation should be removed after one development cycle", you could write a Maven Enforcer rule (or a Checkstyle rule, or a Spoon rule) to break the build if it happens, with a message explaining the reason and what is to be done. Humans usually prefer having a tool tell them this rather than another person pointing out that they haven't been following the best practices documented at some location...
  • 9 - Have a bot to help you discover documentation pages about a topic. For example, a chat bot located in the project's chat that, when asked about a topic, gives you the link to the relevant page.
  • 10 - Projects must have a medium to ask questions and get fast answers (such as a chat tool). Forums or mailing lists are good but less suited to onboarding, when the newcomer has a lot of questions in the initial phase and needs a conversation.
  • 11 - Have an answer strategy so that when someone asks a question, the doc is updated (new FAQ entry for example) so that the next person who comes can find the answer or be given the link to the doc.
  • 12 - Mentoring (human aspect of onboarding): have a dedicated colleague to whom you're not afraid to ask questions and who is a referent to you.
  • 13 - Supporting a variety of platforms for your software will make it simpler for newcomers to contribute to your project.
  • 14 - Split your projects into smaller parts. While it's hard and a daunting experience to contribute to the core code of a project, if this project has a core as small as possible and the rest is made of plugins/extensions then it becomes simpler to start contributing to those extensions first.
  • 15 - Have some interactive tutorial to learn about your software or about its development. A good example of nice tutorial can be found at www.katacoda.com (for example for Docker, https://www.katacoda.com/courses/docker).
  • 16 - Human aspect: have an environment that makes you feel welcome. Work and discuss how to best answer Pull Requests, how to communicate when someone joins the project, etc. Think of the newcomer as you would a child: somebody who will occasionally stumble and need encouragement. Try to have as much empathy as possible.
  • 17 - Make sure that people asking questions always get an answer quickly, perhaps by establishing a role on the team to ensure answers are provided.
  • 18 - Last but not least, an interesting thought experiment to verify that you have some good onboarding processes: imagine that 1000 developers join your project / company on the same day. How do you handle this?

Onboarding on XWiki

I was also curious to see how those ideas apply to the XWiki open source project and what part we implement.

  • 1 - Tag simple issues: Yes. Onboarding issues.
  • 2 - Complete install package: Yes. Debian apt-get, Docker images.
  • 3 - Dev packaged environment: Yes. We have a Developer VM.
  • 4 - Online IDE onboarding: No. Hard to provide for an OSS project in terms of infra resources, but we would love to offer this.
  • 5 - Good documentation: Yes. User guide, Admin guide, Dev guide + there's a wiki dedicated to development practices and tools.
  • 6 - Have homogeneous practices and tools inside a project: Yes. See http://dev.xwiki.org
  • 7 - Use standard tools that are well known: Yes. Maven, Jenkins, Java, JUnit, Mockito, Selenium.
  • 8 - Automatically enforced important rules: Yes. See Automatic checks in build.
  • 9 - Have a bot to help you discover documentation pages about a topic: Yes. IRC bot (used here; broken since 2017-05-09).
  • 10 - Projects must have a medium to ask questions and get fast answers: Yes. XWiki Chat.
  • 11 - Have an answer strategy for when someone asks a question: Yes. XWiki answer strategy and FAQ.
  • 12 - Mentoring (human aspect of onboarding): Partially. Done to some extent by employees of XWiki SAS who are committers on the open source project, but not a generic open source project practice.
  • 13 - Supporting a variety of platforms for your software: Yes. Windows, Linux, Mac, multiple DBs, multiple browsers, multiple Servlet containers.
  • 14 - Split your projects into smaller parts: Yes. The core is getting smaller and there are more and more Extensions.
  • 15 - Have some interactive tutorial to learn about your software: No. Would be nice to have.
  • 16 - Human aspect, have an environment that makes you feel welcome: Yes. This is subjective. Sometimes we may be a bit abrupt when answering (especially me! Sorry guys if I've been abrupt, it's more a consequence of doing too many things; I need to improve). I think we're globally a welcoming community, WDYT?
  • 17 - Make sure that people asking questions always get an answer quickly: Yes. I think we're very good at answering fast. See the Forum for example. We also answer fast on Matrix/IRC (we try).
  • 18 - 1000 devs joining at once experiment: Yes. Actually we participated in Google Code-in 2017 and this is exactly what we experienced: 756 students interacting with us.

So globally I'd say XWiki is pretty good at onboarding. I'd love to hear about things that we could improve on for onboarding. Any ideas?

If you own a project, we would be interested to hear about your ideas and how you perform onboarding. You could also use the list above as a way to measure your level of onboarding for your project and find out how you could improve it further.

Nov 17 2017

Controlling Test Quality

We already know how to control code quality by writing automated tests. We also know how to ensure that the code quality doesn't go down by using a tool to measure code covered by tests and fail the build automatically when it goes under a given threshold (and it seems to be working).

Wouldn't it be nice to be also able to verify the quality of the tests themselves? emoticon_smile

I'm proposing the following strategy for this:

  • Integrate PIT/Descartes in your Maven build
  • PIT/Descartes generates a Mutation Score metric. So the idea is to monitor this metric and ensure that it keeps going in the right direction and doesn't go down. This is similar to watching the Clover TPC metric and ensuring it always goes up.
  • Thus the idea would be, for each Maven module, to set up a Mutation Score threshold (you'd run it once to get the current value and set that value as the initial threshold) and have the PIT/Descartes Maven plugin fail the build if the computed mutation score is below this threshold. In effect this would signal that the latest changes have introduced tests that are of lower quality than the existing ones (on average) and that the new tests need to be improved to the level of the others.

In order for this strategy to be implementable we need PIT/Descartes to implement the following enhancement requests first:

I'm eagerly waiting for these issues to be fixed in order to try this strategy on the XWiki project and verify that it can work in practice. There are some reasons why it might not work, such as being too painful or not making it easy enough to identify test problems and fix them.

WDYT? Do you see this as possibly working?

Nov 14 2017

Comparing Clover Reports

On the XWiki project, we use Clover to compute our global test coverage. We do this over several Git repositories and include functional tests (and more generally the coverage brought by some modules into other modules).

Now I wanted to see the difference between 2 reports that were generated:

I was surprised to see a drop in the global TPC, from 73.2% down to 71.3%. So I took the time to understand the issue.

It appears that Clover classifies your code classes as Application Code and Test Code (I have no idea what strategy it uses to differentiate them) and even though we've used the same version of Clover (4.1.2) for both reports, the test classes were not categorized similarly. It also seems that the TPC value given in the HTML report is from Application Code.

Luckily we asked the Clover Maven plugin to generate not only HTML reports but also XML reports. Thus I was able to write the following Groovy script that I executed in a wiki page in XWiki. I aggregated Application Code and Test code together in order to be able to compare the reports and the global TPC value.
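
For reference, the TPC computed by the script below is simply, per package and globally: TPC = (covered conditionals + covered statements + covered methods) / (conditionals + statements + methods) * 100, where the counts are summed over both the Application Code (project) and Test Code (testproject) sections of the Clover XML report.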

result.png

{{groovy}}
def saveMetrics(def packageName, def metricsElement, def map) {
 def coveredconditionals = metricsElement.@coveredconditionals.toDouble()
 def coveredstatements = metricsElement.@coveredstatements.toDouble()
 def coveredmethods = metricsElement.@coveredmethods.toDouble()
 def conditionals = metricsElement.@conditionals.toDouble()
 def statements = metricsElement.@statements.toDouble()
 def methods = metricsElement.@methods.toDouble()
 def mapEntry = map.get(packageName)
 if (mapEntry) {
    coveredconditionals = coveredconditionals + mapEntry.get('coveredconditionals')
    coveredstatements = coveredstatements + mapEntry.get('coveredstatements')
    coveredmethods = coveredmethods + mapEntry.get('coveredmethods')
    conditionals = conditionals + mapEntry.get('conditionals')
    statements = statements + mapEntry.get('statements')
    methods = methods + mapEntry.get('methods')
 }
 def metrics = [:]
  metrics.put('coveredconditionals', coveredconditionals)
  metrics.put('coveredstatements', coveredstatements)
  metrics.put('coveredmethods', coveredmethods)
  metrics.put('conditionals', conditionals)
  metrics.put('statements', statements)
  metrics.put('methods', methods)
  map.put(packageName, metrics)
}
def scrapeData(url) {
 def root = new XmlSlurper().parseText(url.toURL().text)
 def map = [:]
  root.project.package.each() { packageElement ->
   def packageName = packageElement.@name
    saveMetrics(packageName.text(), packageElement.metrics, map)
 }
  root.testproject.package.each() { packageElement ->
   def packageName = packageElement.@name
    saveMetrics(packageName.text(), packageElement.metrics, map)
 }
 return map
}
def computeTPC(def map) {
 def tpcMap = [:]
 def totalcoveredconditionals = 0
 def totalcoveredstatements = 0
 def totalcoveredmethods = 0
 def totalconditionals = 0
 def totalstatements = 0
 def totalmethods = 0
  map.each() { packageName, metrics ->
   def coveredconditionals = metrics.get('coveredconditionals')
    totalcoveredconditionals += coveredconditionals
   def coveredstatements = metrics.get('coveredstatements')
    totalcoveredstatements += coveredstatements
   def coveredmethods = metrics.get('coveredmethods')
    totalcoveredmethods += coveredmethods
   def conditionals = metrics.get('conditionals')
    totalconditionals += conditionals
   def statements = metrics.get('statements')
    totalstatements += statements
   def methods = metrics.get('methods')
    totalmethods += methods
   def elementsCount = conditionals + statements + methods
   def tpc
   if (elementsCount == 0) {
      tpc = 0
   } else {
      tpc = ((coveredconditionals + coveredstatements + coveredmethods)/(conditionals + statements + methods)).trunc(4) * 100
    }
    tpcMap.put(packageName, tpc)
  }
  tpcMap.put("ALL", ((totalcoveredconditionals + totalcoveredstatements + totalcoveredmethods)/
(totalconditionals + totalstatements + totalmethods)).trunc(4) * 100)
 return tpcMap
}

// map1 = old
def map1 = computeTPC(scrapeData('http://maven.xwiki.org/site/clover/20161220/clover-commons+rendering+platform+enterprise-20161220-2134/clover.xml')).sort()

// map2 = new
def map2 = computeTPC(scrapeData('http://maven.xwiki.org/site/clover/20171109/clover-commons+rendering+platform-20171109-1920/clover.xml')).sort()

  println "= Added Packages"
println "|=Package|=TPC New"
map2.each() { packageName, tpc ->
 if (!map1.containsKey(packageName)) {
    println "|${packageName}|${tpc}"
 }  
}
println "= Differences"
println "|=Package|=TPC Old|=TPC New"
map2.each() { packageName, tpc ->
 def oldtpc = map1.get(packageName)
 if (oldtpc && tpc != oldtpc) {
   def css = oldtpc > tpc ? '(% style="color:red;" %)' : '(% style="color:green;" %)'
    println "|${packageName}|${oldtpc}|${css}${tpc}"
 }
}
println "= Removed Packages"
println "|=Package|=TPC Old"
map1.each() { packageName, tpc ->
 if (!map2.containsKey(packageName)) {
    println "|${packageName}|${tpc}"
 }
}
{{/groovy}}

And the result was quite different from what the HTML report was giving us!

We went from 74.07% in 2016-12-20 to 76.28% in 2017-11-09 (so quite different from the 73.2% to 71.3% figure given by the HTML report). Much nicer! emoticon_smile

Note that one reason I wanted to compare the TPC values was to see if our strategy of failing the build if a module's TPC is below the current threshold was working or not (I had tried to assess it before but it wasn't very conclusive).

Now I know that we won 1.9% of TPC in a bit less than a year and that looks good emoticon_smile

EDIT: I'm aware of the Historical feature of Clover but:

  • We haven't set it up so it's too late to compare old reports
  • I don't think it would help with the issue we faced with test code being counted as Application Code, and that being done differently depending on the generated reports.

Nov 08 2017

Flaky tests handling with Jenkins & JIRA

Flaky tests are a plague because they lower the credibility of your CI strategy by sending false positive notification emails.

In a previous blog post, I detailed a solution we use on the XWiki project to handle false positives caused by the environment on which the CI build is running. However this solution wasn't handling flaky tests. This blog post is about fixing this!

So the strategy I'm proposing for Flaky tests is the following:

  • When a Flaky test is discovered, create a JIRA issue to remember to work on it and fix it (we currently have the following open issues related to Flaky tests)
  • The JIRA issue is marked as containing a flaky test by filling a custom field called "Flickering Test", using the following format: <package name of test class>.<test class name>#<test method name>. There can be several entries separated by commas.

    Example:

    jiraexample.png

  • In our Pipeline script, after the tests have executed, review the failing ones and check if they are in the list of known flaky tests in JIRA. If so, indicate it in the Jenkins test report. If all failing tests are flickers, don't send a notification email.

    Indication in the job history:

    joblist.png

    Indication on the job result page:

    jobpage.png

    Information on the test page itself:

    testpage.png

Note that there's an alternate solution that can also work:

  • When a Flaky test is discovered, create a JIRA issue to remember to work on it and fix it
  • Add an @Ignore annotation in the test with a detail pointing to the JIRA issue (something like @Ignore("WebDriver doesn't support uploading multiple files in one input, see http://code.google.com/p/selenium/issues/detail?id=2239")). This will prevent the build from executing this flaky test.

This last solution is certainly low-tech compared to the first one. I prefer the first one though for the following reasons:

  • It allows flaky tests to continue executing on the CI and thus serve as a constant reminder that something needs to be fixed. Adding the @Ignore annotation feels like sweeping the dust under the carpet and there's little chance you're going to come back to it in the future...
  • Since our script acts as a postbuild script on the CI agent, there's the possibility to add some logic to auto-discover flaky tests that have not yet been marked as flaky.

Also note that there's a Jenkins plugin for flaky tests but I don't like the strategy involved, which is to re-run failing tests a number of times to see if they pass. In theory it can work. In practice this means CI jobs that will take a lot longer to execute, making it impractical for functional UI tests (which is where we have flaky tests in XWiki). In addition, flakiness sometimes only happens when the full test suite is executed (i.e. it depends on what executes before) and sometimes requires a large number of runs before passing.

So without further ado, here's the Jenkins Pipeline script to implement the strategy we defined above (you can check the full pipeline script):

/**
 * Check for test flickers, and modify test result descriptions for tests that are identified as flicker. A test is
 * a flicker if there's a JIRA issue having the "Flickering Test" custom field containing the FQN of the test in the
 * format {@code <java package name>#<test name>}.
 *
 * @return true if the failing tests only contain flickering tests
 */

// Note: requires "import hudson.tasks.test.AbstractTestResultAction" at the top of the Jenkinsfile.
def boolean checkForFlickers()
{
   boolean containsOnlyFlickers = false
    AbstractTestResultAction testResultAction =  currentBuild.rawBuild.getAction(AbstractTestResultAction.class)
   if (testResultAction != null) {
       // Find all failed tests
       def failedTests = testResultAction.getResult().getFailedTests()
       if (failedTests.size() > 0) {
           // Get all false positives from JIRA
           def url = "https://jira.xwiki.org/sr/jira.issueviews:searchrequest-xml/temp/SearchRequest.xml?".concat(
                   "jqlQuery=%22Flickering%20Test%22%20is%20not%20empty%20and%20resolution%20=%20Unresolved")
           def root = new XmlSlurper().parseText(url.toURL().text)
           def knownFlickers = []
            root.channel.item.customfields.customfield.each() { customfield ->
               if (customfield.customfieldname == 'Flickering Test') {
                    customfield.customfieldvalues.customfieldvalue.text().split(',').each() {
                        knownFlickers.add(it)
                   }
               }
           }
            echoXWiki "Known flickering tests: ${knownFlickers}"

           // For each failed test, check if it's in the known flicker list.
           // If all failed tests are flickers then don't send notification email
           def containsAtLeastOneFlicker = false
            containsOnlyFlickers = true
            failedTests.each() { testResult ->
               // Format of a Test Result id is "junit/<package name>/<test class name>/<test method name>"
               def parts = testResult.getId().split('/')
               def testName = "${parts[1]}.${parts[2]}#${parts[3]}"
               if (knownFlickers.contains(testName)) {
                   // Add the information that the test is a flicker to the test's description
                   testResult.setDescription(
                       "<h1 style='color:red'>This is a flickering test</h1>${testResult.getDescription() ?: ''}")
                    echoXWiki "Found flickering test: [${testName}]"
                    containsAtLeastOneFlicker = true
               } else {
                    // This is a real failing test, thus we'll need to send the notification email...
                   containsOnlyFlickers = false
               }
           }

           if (containsAtLeastOneFlicker) {
                manager.addWarningBadge("Contains some flickering tests")
                manager.createSummary("warning.gif").appendText("<h1>Contains some flickering tests</h1>", false,
                   false, false, "red")
           }
       }
   }

   return containsOnlyFlickers
}
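
For completeness, here's a minimal sketch of how this function could be wired into the pipeline once the tests have run. The junit step pattern and the emailext notification step are assumptions used for illustration; adapt them to your own Jenkinsfile:

// After the Maven build has executed the tests, publish the results...
junit testResults: '**/target/surefire-reports/*.xml', allowEmptyResults: true

// ... then only send the notification email when at least one failing test is a genuine failure.
def onlyFlickers = checkForFlickers()
if (currentBuild.result == 'UNSTABLE' && !onlyFlickers) {
    // Assumption: notification via the Email Extension plugin; replace with your own notification step.
    emailext subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
        body: "Some failing tests are not known flickers. See ${env.BUILD_URL}",
        to: 'notifications@example.org'
}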

Hope you like it! Let me know in comments how you're handling Flaky tests in your project so that we can compare/discuss.

Sep 28 2017

Mutation testing with PIT and Descartes

XWiki SAS is part of a European research project named STAMP. As part of this project I've been able to experiment a bit with Descartes, a mutation engine for PIT.

What PIT does is mutate the code under test and check if the existing test suite is able to detect those mutations. In other words, it checks the quality of your test suite.

Descartes plugs into PIT by providing a set of specific mutators. For example one mutator will replace the output of methods by some fixed value (for example a method returning a boolean will always return true). Another will remove the content of void methods. It then generates a report.

Here's an example of running Descartes on a module of XWiki:

report.png

You can see both the test coverage score (computed automatically by PIT using Jacoco) and the Mutation score. 

If we drill down to one class (MacroId.java) we can see for example the following report for the equals() method:

equals.png

What's interesting to note is that the test coverage says that the following code has been tested:

result =
   (getId() == macroId.getId() || (getId() != null && getId().equals(macroId.getId())))
   && (getSyntax() == macroId.getSyntax() || (getSyntax() != null && getSyntax().equals(
    macroId.getSyntax())));

However, the mutation testing is telling us a different story. It says that if you change the equals method code with negative conditions (i.e. testing for inequality), the test still reports success.

If we check the test code:

@Test
public void testEquality()
{
    MacroId id1 = new MacroId("id", Syntax.XWIKI_2_0);
    MacroId id2 = new MacroId("id", Syntax.XWIKI_2_0);
    MacroId id3 = new MacroId("otherid", Syntax.XWIKI_2_0);
    MacroId id4 = new MacroId("id", Syntax.XHTML_1_0);
    MacroId id5 = new MacroId("otherid", Syntax.XHTML_1_0);
    MacroId id6 = new MacroId("id");
    MacroId id7 = new MacroId("id");

    Assert.assertEquals(id2, id1);
   // Equal objects must have equal hashcode
   Assert.assertTrue(id1.hashCode() == id2.hashCode());

    Assert.assertFalse(id3 == id1);
    Assert.assertFalse(id4 == id1);
    Assert.assertFalse(id5 == id3);
    Assert.assertFalse(id6 == id1);

    Assert.assertEquals(id7, id6);
   // Equal objects must have equal hashcode
   Assert.assertTrue(id6.hashCode() == id7.hashCode());
}

We can indeed see that the test doesn't test for inequality. Thus in practice if we replace the equals method by return true; then the test still passes.

That's interesting because that's something that test coverage didn't notice!
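
To kill those mutants, the test would also need to assert inequality explicitly. As an illustration only (not the actual fix made in XWiki), assertions of this kind, added to the test above, would fail if equals() were mutated to always return true:

// id1, id3, id4 and id6 are the MacroId instances created in the test above.
Assert.assertFalse(id1.equals(id3));
Assert.assertFalse(id1.equals(id4));
Assert.assertFalse(id1.equals(id6));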

More generally the report provides a summary of all mutations it has done and whether they were killed or not by the tests. For example on this class:

mutations.png

Here's what I learnt while trying to use Descartes on XWiki:

  • It's being actively developed
  • It's interesting to classify the results in 3 categories:
    • strong pseudo-tested methods: no matter the return values of a method, the tests still pass. This is the worst offender since it means the tests really need to be improved. This was the case in the example above.
    • weak pseudo-tested methods: the tests pass with at least one modified value. Not as bad as strong pseudo-tested, but you may still want to check it out.
    • fully tested methods: the tests fail for all mutations and thus can be considered rock-solid!
  • So in the future, the generated report should provide this classification to help analyze the results and focus on important problems.
  • It would be nice if the Maven plugin was improved to be able to fail the build if the mutation score is below a certain threshold (as we do for test coverage).
  • Performance: it's quite slow compared to Jacoco execution time for example. In my example above it took 34 seconds to execute with all possible mutations (for a project with 14 test classes, 31 tests and 20 classes).
  • It would be nice to have a Sonar integration so that PIT/Descartes could provide some stats on the Sonar dashboard.
  • Big limitation: at the moment PIT (and/or Descartes) doesn't support being executed on a multi-module project. This means that right now you need to compute the full classpath for all modules and run all sources and tests as if they were a single module. This causes problems for all tests that depend on the filesystem and expect a given directory structure. It's also tedious and error-prone since the classpath order can have side effects.

Conclusion:

PIT/Descartes is very nice but I feel it would need to provide a bit more added value out of the box for the XWiki open source project to use it in an automated manner. The test coverage reports we have already provide a lot of information about the code that is not tested at all, and if we have 5 hours to spend, we would probably spend them on adding tests rather than further improving existing tests. YMMV. If you have a very strong suite of tests and you want to check its quality, then PIT/Descartes is your friend!

If Descartes could provide the build-failure-on-low-threshold feature mentioned above that could be one way we could integrate it in the XWiki build. But for that to be possible PIT/Descartes need to be able to run on multi-module Maven projects.

I'm also currently testing DSpot. DSpot uses PIT and Descartes but in addition it uses the results to generate new tests automatically. That would be even more interesting (if it can work well-enough). I'll post back when I've been able to run DSpot on XWiki and learn more by using it.

Now, the Descartes project could also use the information provided by line coverage to automatically generate tests to cover the spotted issues.

I'd like to thank Oscar Luis Vera Pérez who's actively working on Descartes and who's shown me how to use it and how to analyze the results. Thanks Oscar! I'll also continue to work with Oscar on improving Descartes and executing it on the XWiki code base. 

Sep 17 2017

Using Docker + Jenkins to test configurations

On the XWiki project, we currently have automated functional tests that use Selenium and Jenkins. However they exercise only a single configuration: HSQLDB, Jetty and FireFox (and all on a fixed version).

XWiki SAS is part of the STAMP research project and one domain of this research is improving configuration testing.

As a first step I've worked on providing official XWiki images but I've only provided 2 configurations (XWiki on Tomcat + Mysql and on Tomcat + PostgreSQL) and they're not currently exercised by our functional tests.

Thus I'm proposing below an architecture that should allow XWiki to be tested on various configurations:

architecture.png

Here's what I think it would mean in terms of a Jenkins Pipeline (note that at this stage this is pseudo code and should not be understood literally):

pipeline {
  agent {
    docker {
      image 'xwiki-maven-firefox'
      args '-v $HOME/.m2:/root/.m2'
    }
  }
  stages {
    stage('Test') {
      steps {
        docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->                    
          docker.image('tomcat:8').withRun('-v $XWIKIDIR:/usr/local/tomcat/webapps/xwiki').inside("--link ${c.id}:db") {
            [...]
            wrap([$class: 'Xvnc']) {
              withMaven(maven: mavenTool, mavenOpts: mavenOpts) {
                [...]
                sh "mvn ..."
              }
            }
          }
        }
      }
    }
  }
}

Some explanations:

  • We would setup a custom Docker Registry so that we can prepare images that would be used by the Jenkins pipeline to create containers
  • Those images could themselves be refreshed regularly by another pipeline that would use the docker.build() construct (see the sketch after this list)
  • We would use a Jenkins Agent dynamically provisioned from an image that would contain: sshd and a Jenkins user (so that Jenkins Master can communicate with it), Maven, VNC Server and a browser (FireFox for ex). We would have several such images, one per browser we want to test with.
    • Note that since we want to support only the latest browser versions for FF/Chrome/Safari we could use apt to update (and commit) the browser version in the container prior to starting it, from the pipeline script.
  • Then the pipeline would spawn two containers: one for the DB and one for the Servlet container. Importantly for the Servlet container, I think we should mount a volume that points to a local directory on the agent, which would contain the XWiki exploded WAR (done as a pre-step by the Maven build). This would save time and avoid having to recreate a new image every time there's a commit on the XWiki codebase!
  • The build that contains the tests will be started by the Agent (and we would mount the Maven local repository as a volume in order to speed up build times across runs).
  • Right now the XWiki build already knows how to run the functional tests by fetching/exploding the XWiki WAR in the target directory and then starting XWiki directly from the tests, so all we would need to do is to make sure we map this directory in the container containing the Servlet container (e.g. in Tomcat it would be mapped to [TOMCATHOME]/webapps/xwiki).
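
As an illustration of the image-refresh pipeline mentioned above, here's a hedged sketch using the Docker Pipeline plugin; the registry URL, credentials id, image name and Dockerfile location are all assumptions:

node {
    // Get the Dockerfiles describing our build images (Maven + browser agents, etc.).
    checkout scm

    // Assumption: our custom Docker Registry and the Jenkins credentials to push to it.
    docker.withRegistry('https://registry.example.org', 'xwiki-registry-credentials') {
        // Assumption: the Dockerfile for the Maven + Firefox agent image is in the agent-firefox directory.
        def agentImage = docker.build('xwiki/xwiki-maven-firefox:latest', './agent-firefox')
        agentImage.push()
    }
}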

This is just architecture at this stage. Now we need to put that in practice and find the gotchas (there always are emoticon_wink).

WDYT? Could this work? Are you doing this yourself?

Stay tuned, I should be able to report on how it went in the coming weeks/months.

Jul 15 2017

XWiki vs statically-generated site

Imagine that you have a software project and you wish to have a web site to document everything related to the project (user documentation, dev documentation, news, etc).

You may wonder whether you should go with a statically-generated site (using GitHub Pages for example or some homemade solution) or use a wiki such as XWiki.

I've tried to list the pros of each solution below, trying to be as impartial as possible (not easy since I'm one of the developers of the XWiki project emoticon_wink). Don't hesitate to comment if you have other points or if some of my points are not fully accurate, and I'll update this blog post. Thanks!

Pros of a statically-generated site

  • Hosting is easier, as it only consists of static pages and assets. More generally, it's simpler to get started (though this is offset by the need to set up some DSL and/or build if you don't want to enter content in HTML)
  • Maintenance is simplified, no database to backup for example or software to upgrade
  • Documentation can be versioned along with the code it documents
  • (GitHub) You get a review system built-in with Pull Requests
  • (GitHub) You can tag the whole documentation and have branches per released versions
  • Easier to scale. It's easy to make web servers scale to massively large number of users.

Pros of a wiki with XWiki

  • Easy for anyone to enter content, including for non-technical users. No HTML to know nor any specific DSL to understand. No need for an account on GitHub nor the need to understand how to make a PR.
  • Much faster to enter content through the WYSIWYG editor or through wiki markup.
  • Changes are immediately visible. You edit a page and click save or preview and you can see the result. No need to go through a build that will push the changes. With preview you can go back to editing the page if you're not satisfied and that's very fast. With WYSIWYG editor you don't even need to preview (since WYSIWYG is... WYSIWYG).
  • Richer search, see for example the XWiki.org Search UI vs the Groovy Search UI.
  • Ability for users to comment on the website pages.
  • Ability for users to watch pages and be notified when there are changes to those pages
  • Ability to see what's new in the documentation and the changes made
  • Your pages are not saved along the code in a single SCM. However XWiki pages can be exported to an XML format and the exported pages can be saved in the same SCM as the code. There are even a GitHub Extension and an SVN Extension to help you do that.
  • Pages can be exported in different formats: OpenOffice, Word, PDF, etc. Note that it's also possible to export to HTML in order to offer a static web site for example.
  • Ability to display large quantities of filterable data in tables with great scalability. For example:
  • Ability to have dynamic examples that can be tested directly in the wiki. For example the XWiki Rendering can be tested live.
  • Perform dynamic actions, such as generating GitHub statistics for your project.
  • Perform dynamic actions by writing some scripts in a wiki page. For example, imagine you'd like to list all Extensions having a name containing User and located in the extensions subwiki; you'd simply write the following in a wiki page (you can try it on the XWiki Playground):
    {{velocity}}
    #set ($query = $services.query.xwql("where doc.object(ExtensionCode.ExtensionClass).name like '%User%'").setWiki('extensions'))
    #foreach ($itemDoc in $query.execute())
      * [[extensions:$itemDoc]]
    #end
    {{/velocity}}
  • More generally write some applications to enter data easily for your website. It's easy with Applications within Minutes.

Conclusion 

IMO the choice will hugely depend on your needs from the above list but also on how easy/hard it is for you to get some hosting for XWiki:

  • If it's an internal company project, it shouldn't be hard to install and host an XWiki instance (unless your company doesn't have any IT department, see below in this case)
  • If you have some budget you could use XWiki SAS's cloud solution (starts at 10 euros/month for 10 users)
  • If you're an open source project and have no budget, then the choice is a bit harder. The XWiki project has a free farm and if your project doesn't require professional hosting then it could be a good option. If your project is quite visible/large you could also contact XWiki SAS, which has often offered free professional hosting for open source projects or non-profit associations in the past.

It would be great if more open source forges such as the Apache Software Foundation, the Eclipse Foundation and others were offering XWiki hosting for their projects as an option.

So what would you choose for your project? emoticon_smile

Jun 06 2017

Jenkins Pipeline: Attach failing test screenshot

On the XWiki project we've started moving to Jenkins 2.0 and to using the Pipeline feature through Jenkinsfiles.

When we run our functional tests (we use Selenium2/Webdriver), we record a screenshot when a test fails. Previously we had a Groovy Scriptler script (written by Eduard Moraru, an XWiki committer) to automatically change the description of a Jenkins test page to include the screenshot, as shown below:

failing.png 

So we needed to port this script to a Jenkinsfile. Here's the solution I came up with:

import hudson.FilePath
import hudson.tasks.junit.TestResultAction
import hudson.util.IOUtils
import javax.xml.bind.DatatypeConverter

def attachScreenshotToFailingTests() {
   def testResults = manager.build.getAction(TestResultAction.class)
   if (testResults == null) {
       // No tests were run in this build, nothing left to do.
       return
    }

   // Go through each failed test in the current build.
   def failedTests = testResults.getFailedTests()
   for (def failedTest : failedTests) {
       // Compute the test's screenshot file name.
       def testClass = failedTest.getClassName()
       def testSimpleClass = failedTest.getSimpleName()
       def testExample = failedTest.getName()

       // Example of value for suiteResultFile (it's a String):
       //   /Users/vmassol/.jenkins/workspace/blog/application-blog-test/application-blog-test-tests/target/
       //     surefire-reports/TEST-org.xwiki.blog.test.ui.AllTests.xml
       def suiteResultFile = failedTest.getSuiteResult().getFile()
       if (suiteResultFile == null) {
           // No results available. Go to the next test.
           continue
        }

       // Compute the screenshot's location on the build agent.
       // Example of target folder path:
       //   /Users/vmassol/.jenkins/workspace/blog/application-blog-test/application-blog-test-tests/target
       def targetFolderPath = createFilePath(suiteResultFile).getParent().getParent()
       // The screenshot can have 2 possible file names and locations, we have to look for both.
       // Selenium 1 test screenshots.
       def imageAbsolutePath1 = new FilePath(targetFolderPath, "selenium-screenshots/${testClass}-${testExample}.png")
       // Selenium 2 test screenshots.
       def imageAbsolutePath2 = new FilePath(targetFolderPath, "screenshots/${testSimpleClass}-${testExample}.png")
       // If screenshotDirectory system property is not defined we save screenshots in the tmp dir so we must also
       // support this.
       def imageAbsolutePath3 =
            new FilePath(createFilePath(System.getProperty("java.io.tmpdir")), "${testSimpleClass}-${testExample}.png")

       // Determine which one exists, if any.
        echo "Image path 1 (selenium 1) [${imageAbsolutePath1}], Exists: [${imageAbsolutePath1.exists()}]"
        echo "Image path 2 (selenium 2) [${imageAbsolutePath2}], Exists: [${imageAbsolutePath2.exists()}]"
        echo "Image path 3 (tmp) [${imageAbsolutePath3}], Exists: [${imageAbsolutePath3.exists()}]"
       def imageAbsolutePath = imageAbsolutePath1.exists() ?
            imageAbsolutePath1 : (imageAbsolutePath2.exists() ? imageAbsolutePath2 :
                (imageAbsolutePath3.exists() ? imageAbsolutePath3 : null))

        echo "Attaching screenshot to description: [${imageAbsolutePath}]"

       // If the screenshot exists...
       if (imageAbsolutePath != null) {
           // Build a base64 string of the image's content.
           def imageDataStream = imageAbsolutePath.read()
            byte[] imageData = IOUtils.toByteArray(imageDataStream)
           def imageDataString = "data:image/png;base64," + DatatypeConverter.printBase64Binary(imageData)

           def testResultAction = failedTest.getParentAction()

           // Build a description HTML to be set for the failing test that includes the image in Data URI format.
           def description = """<h3>Screenshot</h3><a href="${imageDataString}"><img style="width: 800px" src="${imageDataString}" /></a>"""

           // Set the description to the failing test and save it to disk.
            testResultAction.setDescription(failedTest, description)
            currentBuild.rawBuild.save()
        }
    }
}
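
As an illustration, here's how this function could be invoked from a Jenkinsfile once the test results have been published (the junit step pattern is an assumption, adapt it to your build layout):

// Publish the test results so that failed tests are available to the script...
junit testResults: '**/target/surefire-reports/*.xml', allowEmptyResults: true

// ... then attach the screenshots of failing tests to their Jenkins test report pages.
attachScreenshotToFailingTests()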

Note that for this to work you need to:

  • Install the Groovy Postbuild plugin. This exposes the manager variable needed by the script.
  • Add the required security exceptions to http://<jenkins server ip>/scriptApproval/ if need be
  • Install the Pegdown Formatter plugin and set the description syntax to be Pegdown in the Global Security configuration (http://<jenkins server ip>/configureSecurity). Without this you won't be able to display HTML (and the default safe HTML option will strip out the datauri content).

Enjoy!
