Think Tank



Nov 14 2017

Comparing Clover Reports

On the XWiki project, we use Clover to compute our global test coverage. We do this over several Git repositories and include functional tests (and more generally the coverage brought by some modules into other modules).

Now I wanted to see the difference between two reports, generated almost a year apart (2016-12-20 vs 2017-11-09).

I was surprised to see a drop in the global TPC (Clover's Total Percentage Coverage metric), from 73.2% down to 71.3%, so I took the time to understand the issue.

It appears that Clover classifies your classes as either Application Code or Test Code (I have no idea what strategy it uses to differentiate them), and even though we used the same version of Clover (4.1.2) for both reports, the test classes were not categorized the same way in both. It also seems that the TPC value given in the HTML report is computed from Application Code only.

Luckily we had asked the Clover Maven plugin to generate not only HTML reports but also XML reports, so I was able to write the following Groovy script, which I executed in a wiki page in XWiki. It aggregates Application Code and Test Code together so that the two reports, and their global TPC values, can be compared.
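
For reference, TPC is computed from three kinds of elements (conditionals, statements and methods); the script below derives it, per package and globally, as:

TPC = (coveredconditionals + coveredstatements + coveredmethods) / (conditionals + statements + methods) * 100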

result.png

{{groovy}}
// Aggregates the metrics of a package into the map, adding to any existing entry
// (this is what merges Application Code and Test Code for the same package).
def saveMetrics(def packageName, def metricsElement, def map) {
    def coveredconditionals = metricsElement.@coveredconditionals.toDouble()
    def coveredstatements = metricsElement.@coveredstatements.toDouble()
    def coveredmethods = metricsElement.@coveredmethods.toDouble()
    def conditionals = metricsElement.@conditionals.toDouble()
    def statements = metricsElement.@statements.toDouble()
    def methods = metricsElement.@methods.toDouble()
    def mapEntry = map.get(packageName)
    if (mapEntry) {
        coveredconditionals = coveredconditionals + mapEntry.get('coveredconditionals')
        coveredstatements = coveredstatements + mapEntry.get('coveredstatements')
        coveredmethods = coveredmethods + mapEntry.get('coveredmethods')
        conditionals = conditionals + mapEntry.get('conditionals')
        statements = statements + mapEntry.get('statements')
        methods = methods + mapEntry.get('methods')
    }
    def metrics = [:]
    metrics.put('coveredconditionals', coveredconditionals)
    metrics.put('coveredstatements', coveredstatements)
    metrics.put('coveredmethods', coveredmethods)
    metrics.put('conditionals', conditionals)
    metrics.put('statements', statements)
    metrics.put('methods', methods)
    map.put(packageName, metrics)
}
// Parses a clover.xml report and returns a map of package name -> aggregated metrics,
// covering both Application Code (<project>) and Test Code (<testproject>).
def scrapeData(url) {
    def root = new XmlSlurper().parseText(url.toURL().text)
    def map = [:]
    root.project.package.each() { packageElement ->
        def packageName = packageElement.@name
        saveMetrics(packageName.text(), packageElement.metrics, map)
    }
    root.testproject.package.each() { packageElement ->
        def packageName = packageElement.@name
        saveMetrics(packageName.text(), packageElement.metrics, map)
    }
    return map
}
// Computes the TPC for each package, plus a global "ALL" entry.
def computeTPC(def map) {
    def tpcMap = [:]
    def totalcoveredconditionals = 0
    def totalcoveredstatements = 0
    def totalcoveredmethods = 0
    def totalconditionals = 0
    def totalstatements = 0
    def totalmethods = 0
    map.each() { packageName, metrics ->
        def coveredconditionals = metrics.get('coveredconditionals')
        totalcoveredconditionals += coveredconditionals
        def coveredstatements = metrics.get('coveredstatements')
        totalcoveredstatements += coveredstatements
        def coveredmethods = metrics.get('coveredmethods')
        totalcoveredmethods += coveredmethods
        def conditionals = metrics.get('conditionals')
        totalconditionals += conditionals
        def statements = metrics.get('statements')
        totalstatements += statements
        def methods = metrics.get('methods')
        totalmethods += methods
        def elementsCount = conditionals + statements + methods
        def tpc
        if (elementsCount == 0) {
            tpc = 0
        } else {
            tpc = ((coveredconditionals + coveredstatements + coveredmethods)/(conditionals + statements + methods)).trunc(4) * 100
        }
        tpcMap.put(packageName, tpc)
    }
    tpcMap.put("ALL", ((totalcoveredconditionals + totalcoveredstatements + totalcoveredmethods)/
        (totalconditionals + totalstatements + totalmethods)).trunc(4) * 100)
    return tpcMap
}

// map1 = old
def map1 = computeTPC(scrapeData('http://maven.xwiki.org/site/clover/20161220/clover-commons+rendering+platform+enterprise-20161220-2134/clover.xml')).sort()

// map2 = new
def map2 = computeTPC(scrapeData('http://maven.xwiki.org/site/clover/20171109/clover-commons+rendering+platform-20171109-1920/clover.xml')).sort()

  println "= Added Packages"
println "|=Package|=TPC New"
map2.each() { packageName, tpc ->
 if (!map1.containsKey(packageName)) {
    println "|${packageName}|${tpc}"
 }  
}
println "= Differences"
println "|=Package|=TPC Old|=TPC New"
map2.each() { packageName, tpc ->
 def oldtpc = map1.get(packageName)
 if (oldtpc && tpc != oldtpc) {
   def css = oldtpc > tpc ? '(% style="color:red;" %)' : '(% style="color:green;" %)'
    println "|${packageName}|${oldtpc}|${css}${tpc}"
 }
}
println "= Removed Packages"
println "|=Package|=TPC Old"
map1.each() { packageName, tpc ->
 if (!map2.containsKey(packageName)) {
    println "|${packageName}|${tpc}"
 }
}
{{/groovy}}

And the result was quite different from what the HTML report was giving us!

We went from 74.07% on 2016-12-20 to 76.28% on 2017-11-09 (quite different from the 73.2% to 71.3% drop reported by the HTML reports). Much nicer! :)

Note that one reason I wanted to compare the TPC values was to check whether our strategy of failing the build when a module's TPC goes below its current threshold was working or not (I had tried to assess it before, but it wasn't very conclusive).

Now I know that we gained 1.9% of TPC in a bit less than a year, and that looks good :)

EDIT: I'm aware of the Historical feature of Clover but:

  • We haven't set it up so it's too late to compare old reports
  • I don't think it would help with the issue we faced, i.e. test code being counted as Application Code, and that classification being done differently depending on the generated report.

Nov 08 2017

Flaky tests handling with Jenkins & JIRA

Flaky tests are a plague because they lower the confidence you can have in your CI, by sending false-positive notification emails.

In a previous blog post, I detailed a solution we use on the XWiki project to handle false positives caused by the environment on which the CI build is running. However, that solution didn't handle flaky tests. This blog post is about fixing that!

So the strategy I'm proposing for Flaky tests is the following:

  • When a Flaky test is discovered, create a JIRA issue to remember to work on it and fix it (we currently have the following open issues related to Flaky tests)
  • The JIRA issue is marked as containing a flaky test by filling a custom field called "Flickering Test", using the following format: <package name of test class>.<test class name>#<test method name>. There can be several entries separated by commas.

    Example:

    jiraexample.png

  • In our Pipeline script, after the tests have executed, review the failing ones and check if they are in the list of known flaky tests in JIRA. If so, indicate it in the Jenkins test report. If all failing tests are flickers, don't send a notification email.

    Indication in the job history:

    joblist.png

    Indication on the job result page:

    jobpage.png

    Information on the test page itself:

    testpage.png

Note that there's an alternate solution that can also work:

  • When a Flaky test is discovered, create a JIRA issue to remember to work on it and fix it
  • Add an @Ignore annotation to the test, with a message pointing to the JIRA issue (something like @Ignore("WebDriver doesn't support uploading multiple files in one input, see http://code.google.com/p/selenium/issues/detail?id=2239")). This will prevent the build from executing this flaky test.

This last solution is certainly low-tech compared to the first one. I prefer the first one though for the following reasons:

  • It allows flaky tests to continue executing on the CI and thus serve as a constant reminder that something needs to be fixed. Adding the @Ignore annotation feels like sweeping the dust under the carpet, and there's little chance you're going to come back to it in the future...
  • Since our script acts as a post-build step on the CI agent, we have the possibility to add some logic to auto-discover flaky tests that have not yet been marked as flaky.

Also note that there's a Jenkins plugin for flaky tests, but I don't like the strategy involved, which is to re-run failing tests a number of times to see if they pass. In theory it can work. In practice it means CI jobs will take a lot longer to execute, making it impractical for functional UI tests (which is where we have flaky tests in XWiki). In addition, flakiness sometimes only happens when the full test suite is executed (i.e. it depends on what executes before), and some tests require a large number of runs before passing.

So without further ado, here's the Jenkins Pipeline script to implement the strategy we defined above (you can check the full pipeline script):

import hudson.tasks.test.AbstractTestResultAction

/**
 * Check for test flickers, and modify test result descriptions for tests that are identified as flicker. A test is
 * a flicker if there's a JIRA issue having the "Flickering Test" custom field containing the FQN of the test in the
 * format {@code <package name>.<test class name>#<test method name>}.
 *
 * @return true if the failing tests only contain flickering tests
 */

def boolean checkForFlickers()
{
   boolean containsOnlyFlickers = false
    AbstractTestResultAction testResultAction =  currentBuild.rawBuild.getAction(AbstractTestResultAction.class)
   if (testResultAction != null) {
       // Find all failed tests
       def failedTests = testResultAction.getResult().getFailedTests()
       if (failedTests.size() > 0) {
           // Get all false positives from JIRA
           def url = "https://jira.xwiki.org/sr/jira.issueviews:searchrequest-xml/temp/SearchRequest.xml?".concat(
                   "jqlQuery=%22Flickering%20Test%22%20is%20not%20empty%20and%20resolution%20=%20Unresolved")
           def root = new XmlSlurper().parseText(url.toURL().text)
           def knownFlickers = []
            root.channel.item.customfields.customfield.each() { customfield ->
               if (customfield.customfieldname == 'Flickering Test') {
                    customfield.customfieldvalues.customfieldvalue.text().split(',').each() {
                        knownFlickers.add(it)
                   }
               }
           }
            echoXWiki "Known flickering tests: ${knownFlickers}"

           // For each failed test, check if it's in the known flicker list.
           // If all failed tests are flickers then don't send notification email
           def containsAtLeastOneFlicker = false
            containsOnlyFlickers = true
            failedTests.each() { testResult ->
               // Format of a Test Result id is "junit/<package name>/<test class name>/<test method name>"
               def parts = testResult.getId().split('/')
               def testName = "${parts[1]}.${parts[2]}#${parts[3]}"
               if (knownFlickers.contains(testName)) {
                   // Add the information that the test is a flicker to the test's description
                   testResult.setDescription(
                       "<h1 style='color:red'>This is a flickering test</h1>${testResult.getDescription() ?: ''}")
                    echoXWiki "Found flickering test: [${testName}]"
                    containsAtLeastOneFlicker = true
               } else {
                    // This is a real failing test, thus we'll need to send the notification email...
                   containsOnlyFlickers = false
               }
           }

           if (containsAtLeastOneFlicker) {
                manager.addWarningBadge("Contains some flickering tests")
                manager.createSummary("warning.gif").appendText("<h1>Contains some flickering tests</h1>", false,
                   false, false, "red")
           }
       }
   }

   return containsOnlyFlickers
}
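
For context, here's a minimal sketch (not the actual XWiki pipeline code) of how this function can be wired in after the tests have run, assuming a notifyByMail() helper is available for sending the notification email:

// Hypothetical wiring: the junit step has already archived the test results, so the build is
// UNSTABLE when there are failing tests. Only notify when at least one failure is not a known flicker.
def containsOnlyFlickers = checkForFlickers()
if (currentBuild.result == 'UNSTABLE' && !containsOnlyFlickers) {
    notifyByMail(currentBuild.result)
}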

Hope you like it! Let me know in the comments how you're handling flaky tests in your project so that we can compare/discuss.

Sep 28 2017

Mutation testing with PIT and Descartes

XWiki SAS is part of a European research project named STAMP. As part of this project I've been able to experiment a bit with Descartes, a mutation engine for PIT.

What PIT does is mutate the code under test and check if the existing test suite is able to detect those mutations. In other words, it checks the quality of your test suite.

Descartes plugs into PIT by providing a set of specific mutators. For example, one mutator will replace the return value of methods with some fixed value (for example, a method returning a boolean will always return true). Another will remove the content of void methods. It then generates a report.
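
For reference, once the pitest-maven plugin is declared in your POM with Descartes registered as its mutation engine (see the Descartes documentation for the exact coordinates), running it boils down to invoking PIT's standard goal. This is a generic example, not the exact XWiki command:

mvn org.pitest:pitest-maven:mutationCoverage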

Here's an example of running Descartes on a module of XWiki:

report.png

You can see both the test coverage score (computed automatically by PIT using Jacoco) and the Mutation score. 

If we drill down to one class (MacroId.java) we can see for example the following report for the equals() method:

equals.png

What's interesting to note is that the test coverage says that the following code has been tested:

result =
   (getId() == macroId.getId() || (getId() != null && getId().equals(macroId.getId())))
   && (getSyntax() == macroId.getSyntax() || (getSyntax() != null && getSyntax().equals(
    macroId.getSyntax())));

However, the mutation testing is telling us a different story. It says that if you replace the conditions in the equals() method with their negation (i.e. test for inequality instead), the test still reports success.

If we check the test code:

@Test
public void testEquality()
{
    MacroId id1 = new MacroId("id", Syntax.XWIKI_2_0);
    MacroId id2 = new MacroId("id", Syntax.XWIKI_2_0);
    MacroId id3 = new MacroId("otherid", Syntax.XWIKI_2_0);
    MacroId id4 = new MacroId("id", Syntax.XHTML_1_0);
    MacroId id5 = new MacroId("otherid", Syntax.XHTML_1_0);
    MacroId id6 = new MacroId("id");
    MacroId id7 = new MacroId("id");

    Assert.assertEquals(id2, id1);
   // Equal objects must have equal hashcode
   Assert.assertTrue(id1.hashCode() == id2.hashCode());

    Assert.assertFalse(id3 == id1);
    Assert.assertFalse(id4 == id1);
    Assert.assertFalse(id5 == id3);
    Assert.assertFalse(id6 == id1);

    Assert.assertEquals(id7, id6);
   // Equal objects must have equal hashcode
   Assert.assertTrue(id6.hashCode() == id7.hashCode());
}

We can indeed see that the test doesn't really test for inequality: the assertFalse() calls use ==, which compares object references (always different here), not equals(). Thus in practice, if we replace the equals() method body with return true;, the test still passes.

That's interesting because that's something that test coverage didn't notice!
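
For illustration, assertions of the following kind would kill that mutant (these are hypothetical additions, not part of the actual XWiki test):

// These would fail if equals() were mutated to always return true:
Assert.assertFalse(id1.equals(id3)); // different id
Assert.assertFalse(id1.equals(id4)); // different syntax
Assert.assertFalse(id1.equals(id6)); // one MacroId has a syntax, the other doesn't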

More generally the report provides a summary of all mutations it has done and whether they were killed or not by the tests. For example on this class:

mutations.png

Here's what I learnt while trying to use Descartes on XWiki:

  • It's being actively developed
  • It's interesting to classify the results in 3 categories:
    • strong pseudo-tested methods: no matter the return values of a method, the tests still pass. This is the worst offender since it means the tests really need to be improved. This was the case in the example above.
    • weak pseudo-tested methods: the tests pass with at least one modified value. Not as bad as strong pseudo-tested, but you may still want to check it out.
    • fully tested methods: the tests fail for all mutations and thus can be considered rock-solid!
  • So in the future, the generated report should provide this classification to help analyze the results and focus on important problems.
  • It would be nice if the Maven plugin were improved to be able to fail the build when the mutation score is below a certain threshold (as we do for test coverage).
  • Performance: it's quite slow compared to Jacoco execution time, for example. In my example above it took 34 seconds to execute with all possible mutations (for a project with 14 test classes, 31 tests and 20 classes).
  • It would be nice to have a Sonar integration so that PIT/Descartes could provide some stats on the Sonar dashboard.
  • Big limitation: at the moment PIT (and/or Descartes) doesn't support being executed on a multi-module project. This means that right now you need to compute the full classpath for all modules and run all sources and tests as if they were a single module. This causes problems for tests that depend on the filesystem and expect a given directory structure. It's also tedious and error-prone, since the classpath order can have side effects.

Conclusion:

PIT/Descartes is very nice but I feel it would need to provide a bit more added value out of the box for the XWiki open source project to use it in an automated manner. The test coverage reports we have already provide a lot of information about the code that is not tested at all, and if we have 5 hours to spend, we would probably spend them on adding tests rather than on further improving existing tests. YMMV. If you have a very strong suite of tests and you want to check its quality, then PIT/Descartes is your friend!

If Descartes could provide the build-failure-on-low-threshold feature mentioned above, that could be one way we could integrate it in the XWiki build. But for that to be possible, PIT/Descartes needs to be able to run on multi-module Maven projects.

I'm also currently testing DSpot. DSpot uses PIT and Descartes, but in addition it uses the results to generate new tests automatically. That would be even more interesting (if it can work well enough). I'll post back when I've been able to run DSpot on XWiki and learn more by using it.

Now, the Descartes project could also use the information provided by line coverage to automatically generate tests to cover the spotted issues.

I'd like to thank Oscar Luis Vera Pérez who's actively working on Descartes and who's shown me how to use it and how to analyze the results. Thanks Oscar! I'll also continue to work with Oscar on improving Descartes and executing it on the XWiki code base. 

Sep 17 2017

Using Docker + Jenkins to test configurations

On the XWiki project, we currently have automated functional tests that use Selenium and Jenkins. However, they exercise only a single configuration: HSQLDB, Jetty and Firefox (and all at a fixed version).

XWiki SAS is part of the STAMP research project and one domain of this research is improving configuration testing.

As a first step I've worked on providing official XWiki Docker images, but I've only provided 2 configurations (XWiki on Tomcat + MySQL and on Tomcat + PostgreSQL) and they're not currently exercised by our functional tests.

Thus I'm proposing below an architecture that should allow XWiki to be tested on various configurations:

architecture.png

Here's what I think it would mean in terms of a Jenkins Pipeline (note that at this stage this is pseudo-code and should not be taken literally):

pipeline {
  agent {
    docker {
      image 'xwiki-maven-firefox'
      args '-v $HOME/.m2:/root/.m2'
    }
  }
  stages {
    stage('Test') {
      steps {
        docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->                    
          docker.image('tomcat:8').withRun('-v $XWIKIDIR:/usr/local/tomcat/webapps/xwiki').inside("--link ${c.id}:db") {
            [...]
            wrap([$class: 'Xvnc']) {
              withMaven(maven: mavenTool, mavenOpts: mavenOpts) {
                [...]
                sh "mvn ..."
              }
            }
          }
        }
      }
    }
  }
}

Some explanations:

  • We would setup a custom Docker Registry so that we can prepare images that would be used by the Jenkins pipeline to create containers
  • Those images could themselves be refreshed regularly based on another pipeline that would use the docker.build() construct
  • We would use a Jenkins Agent dynamically provisioned from an image that would contain: sshd and a Jenkins user (so that the Jenkins Master can communicate with it), Maven, a VNC Server and a browser (Firefox for example). We would have several such images, one per browser we want to test with.
    • Note that since we want to support only the latest browser versions for FF/Chrome/Safari, we could use apt to update (and commit) the browser version in the container prior to starting it, from the pipeline script.
  • Then the pipeline would spawn two containers: one for the DB and one for the Servlet container. Importantly for the Servlet container, I think we should mount a volume that points to a local directory on the agent, which would contain the XWiki exploded WAR (done as a pre-step by the Maven build). This would save time by not having to recreate a new image every time there's a commit on the XWiki codebase!
  • The build that contains the tests would be started by the Agent (and we would mount the Maven local repository as a volume in order to speed up build times across runs).
  • Right now the XWiki build already knows how to run the functional tests by fetching/exploding the XWiki WAR in the target directory and then starting XWiki directly from the tests, so all we would need to do is to make sure we map this directory in the container containing the Servlet container (e.g. in Tomcat it would be mapped to [TOMCATHOME]/webapps/xwiki).
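
To make the volume idea above more concrete, here's an illustrative Docker command for the Servlet container side (hypothetical paths and container names, not a final setup):

# Hypothetical example: mount the exploded XWiki WAR built by the agent into Tomcat's webapps directory
docker run --rm --link xwiki-mysql:db \
  -v /home/jenkins/workspace/xwiki/target/xwiki:/usr/local/tomcat/webapps/xwiki \
  tomcat:8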

This is just an architecture at this stage. Now we need to put it into practice and find the gotchas (there always are ;)).

WDYT? Could this work? Are you doing this yourself?

Stay tuned, I should be able to report on how it went in the coming weeks/months.

Jul 15 2017

XWiki vs statically-generated site

Imagine that you have a software project and you wish to have a web site to document everything related to the project (user documentation, dev documentation, news, etc).

You may wonder whether you should go with a statically-generated site (using GitHub Pages for example or some homemade solution) or use a wiki such as XWiki.

I've tried to list the pros of each solution below, trying to be as impartial as possible (not easy since I'm one of the developers of the XWiki project ;)). Don't hesitate to comment if you have other points or if some of my points are not fully accurate, and I'll update this blog post. Thanks!

Pros of a statically-generated site

  • Hosting is easier, as it only consists of static pages and assets. More generally, it's simpler to get started (though this is offset by the need to set up some DSL and/or a build if you don't want to enter content in raw HTML)
  • Maintenance is simplified, no database to backup for example or software to upgrade
  • Documentation can be versioned along with the code it documents
  • (GitHub) You get a review system built-in with Pull Requests
  • (GitHub) You can tag the whole documentation and have branches per released versions
  • Easier to scale. It's easy to make web servers scale to a massively large number of users.

Pros of a wiki with XWiki

  • Easy for anyone to enter content, including for non-technical users. No HTML to know nor any specific DSL to understand. No need for an account on GitHub nor the need to understand how to make a PR.
  • Much faster to enter content through the WYSIWYG editor or through wiki markup.
  • Changes are immediately visible. You edit a page and click save or preview and you can see the result. No need to go through a build that will push the changes. With preview you can go back to editing the page if you're not satisfied and that's very fast. With WYSIWYG editor you don't even need to preview (since WYSIWYG is... WYSIWYG).
  • Richer search, see for example the XWiki.org Search UI vs the Groovy Search UI.
  • Ability for users to comment on the website pages.
  • Ability for users to watch pages and be notified when there are changes to those pages
  • Ability to see what's new in the documentation and the changes made
  • Your pages are not saved alongside the code in a single SCM. However, XWiki pages can be exported to an XML format and the exported pages can be saved in the same SCM as the code. There are even a GitHub Extension and an SVN Extension to help you do that.
  • Pages can be exported in different formats: OpenOffice, Word, PDF, etc. Note that it's also possible to export to HTML in order to offer a static web site for example.
  • Ability to display large quantities of filterable data in tables with great scalability.
  • Ability to have dynamic examples that can be tested directly in the wiki. For example the XWiki Rendering can be tested live.
  • Perform dynamic actions, such as generating GitHub statistics for your project.
  • Perform dynamic actions by writing some scripts in a wiki page. For example, imagine you'd like to list all Extensions having a name containing "User" and located in the extensions subwiki; you'd simply write the following in a wiki page (you can try it on the XWiki Playground):
    {{velocity}}
    #set ($query = $services.query.xwql("where doc.object(ExtensionCode.ExtensionClass).name like '%User%'").setWiki('extensions'))
    #foreach ($itemDoc in $query.execute())
      * [[extensions:$itemDoc]]
    #end
    {{/velocity}}
  • More generally write some applications to enter data easily for your website. It's easy with Applications within Minutes.

Conclusion

IMO the choice will hugely depend on your needs from the above list, but also on how easy or hard it is for you to get some hosting for XWiki.

It would be great if more open source forges such as the Apache Software Foundation, the Eclipse Foundation and others were offering XWiki hosting for their projects as an option.

So what would you choose for your project? :)

Jun 06 2017

Jenkins Pipeline: Attach failing test screenshot

On the XWiki project we've started moving to Jenkins 2.0 and to using the Pipeline feature through Jenkinsfiles.

When we run our functional tests (we use Selenium2/Webdriver), we record a screenshot when a test fails. Previously we had a Groovy Scriptler script (written by Eduard Moraru, an XWiki committer) to automatically change the description of a Jenkins test page to include the screenshot, as shown here:

failing.png 

So we needed to port this script to a Jenkinsfile. Here's the solution I came up with:

import hudson.FilePath
import hudson.tasks.junit.TestResultAction
import hudson.util.IOUtils
import javax.xml.bind.DatatypeConverter

def attachScreenshotToFailingTests() {
   def testResults = manager.build.getAction(TestResultAction.class)
   if (testResults == null) {
       // No tests were run in this build, nothing left to do.
       return
    }

   // Go through each failed test in the current build.
   def failedTests = testResults.getFailedTests()
   for (def failedTest : failedTests) {
       // Compute the test's screenshot file name.
       def testClass = failedTest.getClassName()
       def testSimpleClass = failedTest.getSimpleName()
       def testExample = failedTest.getName()

       // Example of value for suiteResultFile (it's a String):
       //   /Users/vmassol/.jenkins/workspace/blog/application-blog-test/application-blog-test-tests/target/
       //     surefire-reports/TEST-org.xwiki.blog.test.ui.AllTests.xml
       def suiteResultFile = failedTest.getSuiteResult().getFile()
       if (suiteResultFile == null) {
           // No results available. Go to the next test.
           continue
        }

       // Compute the screenshot's location on the build agent.
       // Example of target folder path:
       //   /Users/vmassol/.jenkins/workspace/blog/application-blog-test/application-blog-test-tests/target
       def targetFolderPath = createFilePath(suiteResultFile).getParent().getParent()
       // The screenshot can have 2 possible file names and locations, we have to look for both.
       // Selenium 1 test screenshots.
       def imageAbsolutePath1 = new FilePath(targetFolderPath, "selenium-screenshots/${testClass}-${testExample}.png")
       // Selenium 2 test screenshots.
       def imageAbsolutePath2 = new FilePath(targetFolderPath, "screenshots/${testSimpleClass}-${testExample}.png")
       // If screenshotDirectory system property is not defined we save screenshots in the tmp dir so we must also
       // support this.
       def imageAbsolutePath3 =
            new FilePath(createFilePath(System.getProperty("java.io.tmpdir")), "${testSimpleClass}-${testExample}.png")

       // Determine which one exists, if any.
        echo "Image path 1 (selenium 1) [${imageAbsolutePath1}], Exists: [${imageAbsolutePath1.exists()}]"
        echo "Image path 2 (selenium 2) [${imageAbsolutePath2}], Exists: [${imageAbsolutePath2.exists()}]"
        echo "Image path 3 (tmp) [${imageAbsolutePath3}], Exists: [${imageAbsolutePath3.exists()}]"
       def imageAbsolutePath = imageAbsolutePath1.exists() ?
            imageAbsolutePath1 : (imageAbsolutePath2.exists() ? imageAbsolutePath2 :
                (imageAbsolutePath3.exists() ? imageAbsolutePath3 : null))

        echo "Attaching screenshot to description: [${imageAbsolutePath}]"

       // If the screenshot exists...
       if (imageAbsolutePath != null) {
           // Build a base64 string of the image's content.
           def imageDataStream = imageAbsolutePath.read()
            byte[] imageData = IOUtils.toByteArray(imageDataStream)
           def imageDataString = "data:image/png;base64," + DatatypeConverter.printBase64Binary(imageData)

           def testResultAction = failedTest.getParentAction()

           // Build a description HTML to be set for the failing test that includes the image in Data URI format.
           def description = """<h3>Screenshot</h3><a href="${imageDataString}"><img style="width: 800px" src="${imageDataString}" /></a>"""

           // Set the description to the failing test and save it to disk.
            testResultAction.setDescription(failedTest, description)
            currentBuild.rawBuild.save()
        }
    }
}
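
Note that the script above calls a createFilePath() helper which isn't shown here (it's presumably defined elsewhere in the Jenkinsfile). A minimal sketch of what such a helper could look like, assuming the screenshot may live either on the Jenkins master or on an agent, is the following (the exact XWiki implementation may differ):

import jenkins.model.Jenkins

def createFilePath(String path) {
    if (env['NODE_NAME'] == null) {
        error "NODE_NAME is not set; this must run inside a node block."
    } else if (env['NODE_NAME'].equals("master")) {
        // The file is located on the Jenkins master.
        return new FilePath(new File(path))
    } else {
        // The file is located on an agent: access it through the agent's remoting channel.
        return new FilePath(Jenkins.getInstance().getComputer(env['NODE_NAME']).getChannel(), path)
    }
}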

Note that for this to work you need to:

  • Install the Groovy Postbuild plugin. This exposes the manager variable needed by the script.
  • Add the required security exceptions to http://<jenkins server ip>/scriptApproval/ if need be
  • Install the Pegdown Formatter plugin and set the description syntax to be Pegdown in the Global Security configuration (http://<jenkins server ip>/configureSecurity). Without this you won't be able to display HTML (and the default safe HTML option will strip out the datauri content).

Enjoy!

May 10 2017

TPC Strategy Check

The XWiki project is using a strategy to try to ensure that quality goes in the upward direction.

In short we fail the build if the Jacoco-computed coverage is below a per-module threshold. Devs can only increase the threshold but are not supposed to lower it.
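
For readers who want to do something similar, here's a generic sketch of the kind of jacoco-maven-plugin check rule that fails the build when coverage is below a minimum. This is not the XWiki build's exact configuration (XWiki factors this out so that each module only declares its own threshold value):

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>prepare-agent</goal>
        <goal>check</goal>
      </goals>
      <configuration>
        <rules>
          <rule>
            <element>BUNDLE</element>
            <limits>
              <limit>
                <counter>INSTRUCTION</counter>
                <value>COVEREDRATIO</value>
                <!-- The per-module threshold that devs can only increase -->
                <minimum>0.74</minimum>
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>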

However, from time to time, it happens that devs reduce the threshold (for example, when fixing a bug removes some lines of code, the coverage drops, and the dev doesn't have the time to improve the existing tests).

Since we've been following this strategy for a long time now (at least since 2013), I thought it would be interesting to check, for a small subset of XWiki, how we fared.

|=Module Name|=TPC on Feb 2013|=TPC on May 2017|=Difference
|xwiki-commons-tool-verification-resources|-|46%|-
|xwiki-commons-test-simple|0%|22%|+22%
|xwiki-commons-text|93.5%|94%|+0.5%
|xwiki-commons-component-api|22.7%|45%|+22.3%
|xwiki-commons-classloader-api|0%|-|-
|xwiki-commons-classloader-protocol-jar|0%|-|-
|xwiki-commons-observation-api|15.9%|100%|+84.1%
|xwiki-commons-component-observation|76.2%|74%|-2.2%
|xwiki-commons-component-default|74.6%|71%|-3.6%
|xwiki-commons-context|76.7%|81%|+4.3%
|xwiki-commons-blame-api|-|94%|-
|xwiki-commons-logging-api|-|76%|-
|xwiki-commons-diff-api|-|62%|-
|xwiki-commons-diff-display|-|95%|-
|xwiki-commons-script|0%|27%|+27%
|xwiki-commons-cache-infinispan|-|76%|-
|xwiki-commons-crypto-common|-|62%|-
|xwiki-commons-crypto-cipher|-|70%|-
|xwiki-commons-crypto-password|-|65%|-
|xwiki-commons-crypto-signer|-|71%|-
|xwiki-commons-crypto-pkix|-|76%|-
|xwiki-commons-crypto-store-filesystem|-|73%|-
|xwiki-commons-configuration-api|0%|-|-
|xwiki-commons-test-component|0%|-|-
|xwiki-commons-environment-api|-|100%|-
|xwiki-commons-environment-common|0%|-|-
|xwiki-commons-environment-standard|67.3%|65%|-2.3%
|xwiki-commons-environment-servlet|84.6%|85%|+0.4%
|xwiki-commons-properties|76.6%|79%|+2.4%
|xwiki-commons-logging-api|29.5%|-|-
|xwiki-commons-observation-local|90.8%|89%|-1.8%
|xwiki-commons-job|36.1%|58%|+21.9%
|xwiki-commons-logging-logback|91.8%|93%|+1.2%
|xwiki-commons-extension-api|-|68%|-
|xwiki-commons-extension-maven|-|70%|-
|xwiki-commons-extension-handler-jar|-|82%|-
|xwiki-commons-extension-repository-maven|-|69%|-
|xwiki-commons-repository-api|-|76%|-
|xwiki-commons-extension-repository-xwiki|-|18%|-
|xwiki-commons-filter-api|-|29%|-
|xwiki-commons-xml|-|59%|-
|xwiki-commons-filter-xml|-|54%|-
|xwiki-commons-filter-test|-|3%|-
|xwiki-commons-groovy|-|94%|-
|xwiki-commons-velocity|-|71%|-
|xwiki-commons-tool-xar-plugin|-|10%|-

Note that - denotes modules that did not exist at a given date or for which the coverage is empty (for example, a module containing only Java interfaces).

Conclusions:

  • Coverage has not increased substantially in general. However this is computed on xwiki-commons and those modules are pretty stable and don't change much. It would be interesting to compute something similar for xwiki-platform.
  • Out of the 14 modules whose TPC changed between Feb 2013 and May 2017, 10 saw their coverage increase (that's 71%). The other 4 saw their coverage drop, by at most 3.6%.

So while we could do better, it's still not too bad, and the strategy seems to be working globally.

Feb 06 2017

Jenkins going the Gradle way

I just realized that with the new Jenkins Pipeline plugin, Jenkins is actually moving towards an approach similar to Gradle's.

Before Gradle we had Maven, which uses a build-by-configuration strategy: the idea is for users to tell Maven how to configure the build, but not what it should do.

Before Pipeline, Jenkins Jobs were exactly that: you configured each job to give Jenkins each plugin's config, similar to Maven.

With Pipeline, you now code your job in Groovy, specifying what the job should do.
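
For example, a minimal scripted Pipeline job is just Groovy code describing the steps to run (a generic illustration, not XWiki's actual Jenkinsfile):

node {
    stage('Build') {
        // Check out the repository containing this Jenkinsfile and run the Maven build.
        checkout scm
        sh 'mvn clean install'
    }
}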

So you gain a lot of power to configure your jobs more precisely, and an easier way to reuse actions/configs between jobs. But you lose some simplicity, and the ability to go to any Jenkins instance and easily understand what each job is doing. You now need to read code to understand it, and everyone is going to have a different way of coding their jobs.

FYI I'm currently working on XWiki's Jenkinsfile. It's still simple at the moment but it'll become more complex as time passes.

The future will tell us if it's good or bad. For the moment, being a dev, I'm enjoying it! :) I especially like the perks that come with it (though they could have been implemented with a declarative job configuration too):

  • Save the CI job in the SCM next to the code
  • Ability to automatically add or remove jobs for SCM branches

See also my blog post about Jenkins GitHub Organization Jobs.

Feb 02 2017

Jenkins GitHub Organization Jobs

The Jenkins Pipeline plugin includes a very nice feature: the "GitHub Organization" job type. This job type scans a given GitHub organization's repositories for Jenkinsfile files and, when found, automatically creates a pipeline job for each of them.

This has some nice advantages:

  • You save your Jenkins job configuration in your SCM (git in our case, in the Jenkinsfile), next to your code. You can receive email diffs showing who made modifications to it and why, and understand the changes.
  • It supports branches: when you create a branch it's automatically discovered by Jenkins and the build is executed on it. And if the branch gets removed, it's removed automatically from Jenkins too. This point is awesome for us since we used to have to execute a Groovy script to copy jobs when new branches were created and to remove them when branches were removed.

So we started exploring this for the XWiki project, starting with Contrib Extensions.

Here's a screenshot of our Github Organization job for XWiki Contrib:

github-organization-contrib.png 

And here's an example of a pipeline job executing:

pipeline.png 

Now, if you implement this, you'll quickly find that you want to share pipeline scripts between Jenkinsfiles, in order to avoid duplication.

FYI here's what the Jenkinsfile for the syntax-markdown pipeline job shown above looks like:

xwikiModule {
    name = 'syntax-markdown'
}

Simple, isn't it? :) The trick is that we've configured Jenkins to automatically load a Global Pipeline Library (implicit load). You can do that by saving libraries at the root of SCM repositories and configuring Jenkins to load them from the SCM sources (see this Jenkins doc for more details).

So we've created this GitHub repository and we've coded a vars/xwikiModule.groovy file. At the moment of writing this is its content (I expect it to be improved a lot in the near future):

// Example usage:
//   xwikiModule {
//     name = 'application-faq'
//     goals = 'clean install' (default is 'clean deploy')
//     profiles = 'legacy,integration-tests,jetty,hsqldb,firefox' (default is 'quality,legacy,integration-tests')
//  }

def call(body) {
   // evaluate the body block, and collect configuration into the object
   def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()

   // Now build, based on the configuration provided, using the following configuration:
   // - config.name: the name of the module in git, e.g. "syntax-markdown"

    node {
       def mvnHome
       stage('Preparation') {
           // Get the Maven tool.
           // NOTE: Needs to be configured in the global configuration.
           mvnHome = tool 'Maven'
       }
        stage('Build') {
            dir (config.name) {
                checkout scm
               // Execute the XVNC plugin (useful for integration-tests)
               wrap([$class: 'Xvnc']) {
                    withEnv(["PATH+MAVEN=${mvnHome}/bin", 'MAVEN_OPTS=-Xmx1024m']) {
                     try {
                         def goals = config.goals ?: 'clean deploy'
                         def profiles = config.profiles ?: 'quality,legacy,integration-tests'
                          sh "mvn ${goals} jacoco:report -P${profiles} -U -e -Dmaven.test.failure.ignore"
                          currentBuild.result = 'SUCCESS'
                     } catch (Exception err) {
                          currentBuild.result = 'FAILURE'
                          notifyByMail(currentBuild.result)
                          throw err
                     }
                  }
               }
           }
       }
        stage('Post Build') {
           // Archive the generated artifacts
           archiveArtifacts artifacts: '**/target/*.jar', fingerprint: true
           // Save the JUnit test report
           junit testResults: '**/target/surefire-reports/TEST-*.xml'
       }
   }
}

def notifyByMail(String buildStatus) {
    buildStatus =  buildStatus ?: 'SUCCESSFUL'
   def subject = "${buildStatus}: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'"
   def summary = "${subject} (${env.BUILD_URL})"
   def details = """<p>STARTED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]':</p>
    <p>Check console output at &quot;<a href='${env.BUILD_URL}'>${env.JOB_NAME} [${env.BUILD_NUMBER}]</a>&quot;</p>"""


   def to = emailextrecipients([
           [$class: 'CulpritsRecipientProvider'],
           [$class: 'DevelopersRecipientProvider'],
           [$class: 'RequesterRecipientProvider']
   ])
   if (to != null && !to.isEmpty()) {
        mail to: to, subject: subject, body: details
   }
}

Ideas of some next steps:

Right now there's one limitation I've found: it seems I need to manually click on "Re-scan Organization" in the Jenkins UI so that new Jenkinsfiles added to repositories are taken into account. I hope that will get fixed soon. One workaround would be to add another Jenkins job to do that regularly, but it's not perfect. Also note that you absolutely must authenticate against GitHub, as otherwise you'll quickly reach the GitHub API request limit (when authenticated you are allowed 5000 requests per hour).

Anyway it's great and I love it.

Dec 10 2016

Full Automated Test Coverage with Jenkins and Clover

Generating test coverage reports for a single Maven project is simple. You can use the Clover maven plugin easily for that. For example:

mvn clean clover:setup install clover:clover

Generating a report for several modules in the same Maven reactor (same build) is also easy since that's supported out of the box. For example:

mvn clean clover:setup install clover:aggregate clover:clover

However, generating a full coverage report for a multi-reactor project is much harder. Let's take the example of the XWiki project, which has 4 separate GitHub repositories and thus 4 builds: xwiki-commons, xwiki-rendering, xwiki-platform and xwiki-enterprise.

So the question is: how do we generate a single test coverage report for those 4 Maven reactor builds? For example, we want tests that execute in the xwiki-enterprise repository to generate coverage for source code located, say, in xwiki-commons.

Here's what we want to get:

dashboard.png 

The way to do this is to tell the Maven Clover plugin to use a single location for its coverage database. Manually, this can be achieved like this (more details can be found on the XWiki Test page):

# In xwiki-commons:
mvn clean clover:setup install -Dmaven.clover.cloverDatabase=/path/to/clover/data/clover.db
...
# In xwiki-enterprise:
mvn clean clover:setup install -Dmaven.clover.cloverDatabase=/path/to/clover/data/clover.db

# From xwiki-enterprise, generate the full Clover report:
mvn clover:clover -N -Dmaven.clover.cloverDatabase=/path/to/clover/data/clover.db

This is already pretty cool. However it takes a lot of time, and it would be nicer if it could be executed on the CI (on http://ci.xwiki.org in our case).

One important note is that Clover modifies (instruments) the artifacts, so you need to be careful not to push them into production and to make sure they're not used in other builds (they would fail, since they'd need the Clover runtime JAR at execution time). This is why the script below uses a dedicated Maven local repository and removes all XWiki artifacts from it before each run.

So, I chose to use Jenkins 2 and the new Pipeline plugin and used the following script (see the XWiki Clover Job):

node() {
 def mvnHome
 def localRepository
 def cloverDir
 stage('Preparation') {
   def workspace = pwd()
   localRepository = "${workspace}/maven-repository"
   // Make sure that the special Maven local repository for Clover exists
   sh "mkdir -p ${localRepository}"
   // Remove all XWiki artifacts from it
   sh "rm -Rf ${localRepository}/org/xwiki"
   sh "rm -Rf ${localRepository}/com/xpn"
   // Make sure that the directory where clover will store its data exists in
   // the workspace and that it's clean
   cloverDir = "${workspace}/clover-data"
   sh "rm -Rf ${cloverDir}"
   sh "mkdir -p ${cloverDir}"
   // Get the Maven tool.
   // NOTE: Needs to be configured in the global configuration.           
   mvnHome = tool 'Maven'
  }
 // each() has problems in pipeline, thus using a standard for()
 // See https://issues.jenkins-ci.org/browse/JENKINS-26481
 for (String repoName : ["xwiki-commons", "xwiki-rendering", "xwiki-platform", "xwiki-enterprise"]) {
   stage("Cloverify ${repoName}") {
     dir (repoName) {
       git "https://github.com/xwiki/${repoName}.git"
       runCloverAndGenerateReport(mvnHome, localRepository, cloverDir)
      }  
    }      
  }
 stage("Publish Clover Reports") {
    ...
  }
}
def runCloverAndGenerateReport(def mvnHome, def localRepository, def cloverDir) {
 wrap([$class: 'Xvnc']) {
   withEnv(["PATH+MAVEN=${mvnHome}/bin", 'MAVEN_OPTS=-Xmx2048m']) {
     sh "mvn -Dmaven.repo.local='${localRepository}' clean clover:setup install -Pclover,integration-tests -Dmaven.clover.cloverDatabase=${cloverDir}/clover.db -Dmaven.test.failure.ignore=true -Dxwiki.revapi.skip=true"
     sh "mvn -Dmaven.repo.local='${localRepository}' clover:clover -N -Dmaven.clover.cloverDatabase=${cloverDir}/clover.db"
    }
  }
}

Note that we use the "Xvnc" Jenkins plugin because we run Selenium2 functional tests which require a display.

When this Jenkins job is executed it results in:

pipeline.png 

Over 5 hours of build time... Now you understand why we want to have this running on a CI agent and not on my local machine ;)

And the generated reports can be seen on xwiki.org.

Good news: we have an overall coverage of 73.2% for the full XWiki Java codebase. That's not too bad (I thought it would be lower ;)).

The next blog post will be about trying to achieve the same thing with the Jacoco Maven plugin and the associated challenges and issues... Hint: it's harder than with the Clover Maven plugin.
