On software quality, collaborative development, wikis and Open Source...


Jul 15 2017

XWiki vs statically-generated site

Imagine that you have a software project and you wish to have a web site to document everything related to the project (user documentation, dev documentation, news, etc).

You may wonder whether you should go with a statically-generated site (using GitHub Pages for example or some homemade solution) or use a wiki such as XWiki.

I've tried to list the pros of each solution below, trying to be as impartial as possible (not easy since I'm one of the developers of the XWiki project emoticon_wink). Don't hesitate to comment if you have other points or if some of my points are not fully accurate, and I'll update this blog post. Thanks!

Pros of a statically-generated site

  • Hosting is easier, since the site only consists of static pages and assets. More generally, it's simpler to get started (though this is offset by the need to set up a DSL and/or a build if you don't want to write content in raw HTML)
  • Maintenance is simplified: no database to back up and no software to upgrade, for example
  • Documentation can be versioned along with the code it documents
  • (GitHub) You get a review system built-in with Pull Requests
  • (GitHub) You can tag the whole documentation and have a branch per released version
  • Easier to scale. Serving static pages from a web server scales easily to a massively large number of users.

Pros of a wiki with XWiki

  • Easy for anyone to enter content, including non-technical users: no HTML or specific DSL to learn, no GitHub account needed, and no need to understand how to make a PR.
  • Much faster to enter content through the WYSIWYG editor or through wiki markup.
  • Changes are immediately visible. You edit a page, click Save or Preview, and you see the result; no need to go through a build that pushes the changes. With Preview you can go back to editing the page if you're not satisfied, and that's very fast. With the WYSIWYG editor you don't even need to preview (since WYSIWYG is... WYSIWYG).
  • Richer search, see for example the XWiki.org Search UI vs the Groovy Search UI.
  • Ability for users to comment on the website pages.
  • Ability for users to watch pages and be notified when there are changes to those pages
  • Ability to see what's new in the documentation and the changes made
  • Your pages are not stored alongside the code in a single SCM. However, XWiki pages can be exported to an XML format and the exported pages can be saved in the same SCM as the code. There are even a GitHub Extension and an SVN Extension to help you do that.
  • Pages can be exported in different formats: OpenOffice, Word, PDF, etc. Note that it's also possible to export to HTML in order to offer a static web site for example.
  • Ability to display large quantities of filterable data in tables with great scalability.
  • Ability to have dynamic examples that can be tested directly in the wiki. For example the XWiki Rendering can be tested live.
  • Perform dynamic actions, such as generating GitHub statistics for your project.
  • Perform dynamic actions by writing scripts in a wiki page. For example, imagine you'd like to list all Extensions whose name contains "User" and which are located in the extensions subwiki; you'd simply write the following in a wiki page (you can try it on the XWiki Playground):
    {{velocity}}
    #set ($query = $services.query.xwql("where doc.object(ExtensionCode.ExtensionClass).name like '%User%'").setWiki('extensions'))
    #foreach ($itemDoc in $query.execute())
      * [[extensions:$itemDoc]]
    #end
    {{/velocity}}
  • More generally write some applications to enter data easily for your website. It's easy with Applications within Minutes.

Conclusion

IMO the choice will hugely depend on your needs from the above list, but also on how easy or hard it is for you to get hosting for XWiki.

It would be great if more open source forges such as the Apache Software Foundation, the Eclipse Foundation and others were offering XWiki hosting for their projects as an option.

So what would you choose for your project? emoticon_smile

Jul 05 2017

What's new in XWiki @ OW2Conf'17

I got to do a 15-minute quickie at OW2Conf'17 about What's New in XWiki. Since I didn't know whether the audience was already using XWiki, I did a demo mixing some older but interesting stuff with some new features.

Features demoed:

  • Nested Pages
  • Page Templates
  • Menu Application (bundled now)
  • Color Theme Editor (with live preview)
  • OnlyOffice Extension installed with the Extension Manager
  • Application Within Minutes to create your own custom forms/structures

Jun 28 2017

Voxxed Luxembourg 2017

Last week I participated in Voxxed Luxembourg. It was a nice event, very well organized, with about 450 participants. Well done PAG!

I presented a session on Leading a community-driven open source project.

I'd love to hear your feedback.

Jun 06 2017

Jenkins Pipeline: Attach failing test screenshot

On the XWiki project we've started moving to Jenkins 2.0 and to using the Pipeline feature through Jenkinsfiles.

When we run our functional tests (we use Selenium 2/WebDriver), we record a screenshot when a test fails. Previously we had a Groovy Scriptler script (written by Eduard Moraru, an XWiki committer) to automatically change the description of a Jenkins test page to include the screenshot, as shown below:

failing.png 

So we needed to port this script to a Jenkinsfile. Here's the solution I came up with:

import hudson.FilePath
import hudson.tasks.junit.TestResultAction
import hudson.util.IOUtils
import javax.xml.bind.DatatypeConverter

def attachScreenshotToFailingTests() {
    def testResults = manager.build.getAction(TestResultAction.class)
    if (testResults == null) {
        // No tests were run in this build, nothing left to do.
        return
    }

    // Go through each failed test in the current build.
    def failedTests = testResults.getFailedTests()
    for (def failedTest : failedTests) {
        // Compute the test's screenshot file name.
        def testClass = failedTest.getClassName()
        def testSimpleClass = failedTest.getSimpleName()
        def testExample = failedTest.getName()

        // Example of value for suiteResultFile (it's a String):
        //   /Users/vmassol/.jenkins/workspace/blog/application-blog-test/application-blog-test-tests/target/
        //     surefire-reports/TEST-org.xwiki.blog.test.ui.AllTests.xml
        def suiteResultFile = failedTest.getSuiteResult().getFile()
        if (suiteResultFile == null) {
            // No results available. Go to the next test.
            continue
        }

        // Compute the screenshot's location on the build agent.
        // Example of target folder path:
        //   /Users/vmassol/.jenkins/workspace/blog/application-blog-test/application-blog-test-tests/target
        def targetFolderPath = createFilePath(suiteResultFile).getParent().getParent()
        // The screenshot can have 2 possible file names and locations, we have to look for both.
        // Selenium 1 test screenshots.
        def imageAbsolutePath1 = new FilePath(targetFolderPath, "selenium-screenshots/${testClass}-${testExample}.png")
        // Selenium 2 test screenshots.
        def imageAbsolutePath2 = new FilePath(targetFolderPath, "screenshots/${testSimpleClass}-${testExample}.png")
        // If screenshotDirectory system property is not defined we save screenshots in the tmp dir so we must also
        // support this.
        def imageAbsolutePath3 =
            new FilePath(createFilePath(System.getProperty("java.io.tmpdir")), "${testSimpleClass}-${testExample}.png")

        // Determine which one exists, if any.
        echo "Image path 1 (selenium 1) [${imageAbsolutePath1}], Exists: [${imageAbsolutePath1.exists()}]"
        echo "Image path 2 (selenium 2) [${imageAbsolutePath2}], Exists: [${imageAbsolutePath2.exists()}]"
        echo "Image path 3 (tmp) [${imageAbsolutePath3}], Exists: [${imageAbsolutePath3.exists()}]"
        def imageAbsolutePath = imageAbsolutePath1.exists() ?
            imageAbsolutePath1 : (imageAbsolutePath2.exists() ? imageAbsolutePath2 :
                (imageAbsolutePath3.exists() ? imageAbsolutePath3 : null))

        echo "Attaching screenshot to description: [${imageAbsolutePath}]"

        // If the screenshot exists...
        if (imageAbsolutePath != null) {
            // Build a base64 string of the image's content.
            def imageDataStream = imageAbsolutePath.read()
            byte[] imageData = IOUtils.toByteArray(imageDataStream)
            def imageDataString = "data:image/png;base64," + DatatypeConverter.printBase64Binary(imageData)

            def testResultAction = failedTest.getParentAction()

            // Build a description HTML to be set for the failing test that includes the image in Data URI format.
            def description = """<h3>Screenshot</h3><a href="${imageDataString}"><img style="width: 800px" src="${imageDataString}" /></a>"""

            // Set the description to the failing test and save it to disk.
            testResultAction.setDescription(failedTest, description)
            currentBuild.rawBuild.save()
        }
    }
}
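
The script above calls a createFilePath() helper that isn't shown. Its role is to build a FilePath pointing at the node the build ran on, so the screenshot files can be read whether the build executed on the master or on an agent. Here's a minimal sketch of what such a helper could look like, assuming the NODE_NAME environment variable identifies that node:

import hudson.FilePath
import jenkins.model.Jenkins

// Hypothetical helper: create a FilePath on the node that ran the build.
def createFilePath(String path) {
    if (env['NODE_NAME'] == null) {
        error "NODE_NAME is not set; this must be called from within a node block."
    } else if (env['NODE_NAME'] == 'master') {
        // The build ran on the master: a local file path is enough.
        return new FilePath(new File(path))
    } else {
        // The build ran on an agent: go through the agent's remoting channel.
        return new FilePath(Jenkins.getInstance().getComputer(env['NODE_NAME']).getChannel(), path)
    }
}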

Note that for this to work you need to:

  • Install the Groovy Postbuild plugin. This exposes the manager variable needed by the script.
  • Add the required security exceptions to http://<jenkins server ip>/scriptApproval/ if need be
  • Install the Pegdown Formatter plugin and set the description syntax to be Pegdown in the Global Security configuration (http://<jenkins server ip>/configureSecurity). Without this you won't be able to display HTML (and the default safe HTML option will strip out the datauri content).

Enjoy!

May 10 2017

TPC Strategy Check

The XWiki project uses a strategy to try to ensure that quality keeps going up.

In short, we fail the build if the JaCoCo-computed TPC (test coverage percentage) is below a per-module threshold. Devs can only increase the threshold and are not supposed to lower it.
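
For illustration only (this is a sketch, not necessarily the exact XWiki setup), such a per-module threshold can be enforced with the jacoco-maven-plugin check goal, each module then only raising the minimum value (the 0.74 below is a made-up example):

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>jacoco-check</id>
      <goals>
        <!-- Instrument the tests, then verify the coverage rule -->
        <goal>prepare-agent</goal>
        <goal>check</goal>
      </goals>
      <configuration>
        <rules>
          <rule>
            <element>BUNDLE</element>
            <limits>
              <limit>
                <counter>INSTRUCTION</counter>
                <value>COVEREDRATIO</value>
                <!-- Hypothetical per-module threshold, only ever raised by devs -->
                <minimum>0.74</minimum>
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>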

However, from time to time a dev does reduce the threshold (for example when a bug fix removes some lines of code, which lowers the coverage, and the dev doesn't have time to improve the existing tests).

Since we've been following this strategy for a long time now (at least since 2013), I thought it would be interesting to check, for a small subset of XWiki, how we fared.

| Module Name | TPC on Feb 2013 | TPC on May 2017 | Difference |
|-------------|-----------------|-----------------|------------|
| xwiki-commons-tool-verification-resources | - | 46% | - |
| xwiki-commons-test-simple | 0% | 22% | +22% |
| xwiki-commons-text | 93.5% | 94% | +0.5% |
| xwiki-commons-component-api | 22.7% | 45% | +22.3% |
| xwiki-commons-classloader-api | 0% | - | - |
| xwiki-commons-classloader-protocol-jar | 0% | - | - |
| xwiki-commons-observation-api | 15.9% | 100% | +84.1% |
| xwiki-commons-component-observation | 76.2% | 74% | -2.2% |
| xwiki-commons-component-default | 74.6% | 71% | -3.6% |
| xwiki-commons-context | 76.7% | 81% | +4.3% |
| xwiki-commons-blame-api | - | 94% | - |
| xwiki-commons-logging-api | - | 76% | - |
| xwiki-commons-diff-api | - | 62% | - |
| xwiki-commons-diff-display | - | 95% | - |
| xwiki-commons-script | 0% | 27% | +27% |
| xwiki-commons-cache-infinispan | - | 76% | - |
| xwiki-commons-crypto-common | - | 62% | - |
| xwiki-commons-crypto-cipher | - | 70% | - |
| xwiki-commons-crypto-password | - | 65% | - |
| xwiki-commons-crypto-signer | - | 71% | - |
| xwiki-commons-crypto-pkix | - | 76% | - |
| xwiki-commons-crypto-store-filesystem | - | 73% | - |
| xwiki-commons-configuration-api | 0% | - | - |
| xwiki-commons-test-component | 0% | - | - |
| xwiki-commons-environment-api | - | 100% | - |
| xwiki-commons-environment-common | 0% | - | - |
| xwiki-commons-environment-standard | 67.3% | 65% | -2.3% |
| xwiki-commons-environment-servlet | 84.6% | 85% | +0.4% |
| xwiki-commons-properties | 76.6% | 79% | +2.4% |
| xwiki-commons-logging-api | 29.5% | - | - |
| xwiki-commons-observation-local | 90.8% | 89% | -1.8% |
| xwiki-commons-job | 36.1% | 58% | +21.9% |
| xwiki-commons-logging-logback | 91.8% | 93% | +1.2% |
| xwiki-commons-extension-api | - | 68% | - |
| xwiki-commons-extension-maven | - | 70% | - |
| xwiki-commons-extension-handler-jar | - | 82% | - |
| xwiki-commons-extension-repository-maven | - | 69% | - |
| xwiki-commons-repository-api | - | 76% | - |
| xwiki-commons-extension-repository-xwiki | - | 18% | - |
| xwiki-commons-filter-api | - | 29% | - |
| xwiki-commons-xml | - | 59% | - |
| xwiki-commons-filter-xml | - | 54% | - |
| xwiki-commons-filter-test | - | 3% | - |
| xwiki-commons-groovy | - | 94% | - |
| xwiki-commons-velocity | - | 71% | - |
| xwiki-commons-tool-xar-plugin | - | 10% | - |

Note that "-" denotes modules that did not exist at the given date or for which the coverage is empty (for example a module containing only Java interfaces).

Conclusions:

  • Coverage has not increased substantially in general. However, this was computed on xwiki-commons, whose modules are pretty stable and don't change much. It would be interesting to compute something similar for xwiki-platform.
  • Out of the 14 modules whose TPC changed between February 2013 and May 2017, 10 saw their coverage increase (that's 71%), while 4 saw it decrease, by at most 3.6%.

So while we could do better, it's still not too bad and the strategy seems to be working overall.

Feb 06 2017

Jenkins going the Gradle way

I just realized that with the new Jenkins Pipeline strategy, Jenkins is actually moving towards an approach similar to Gradle's.

Before Gradle we had Maven, which uses a build-by-configuration strategy: the idea is for users to tell Maven how to configure the build but not what it should do.

Before Pipeline, Jenkins Jobs were exactly that: you configured each job to give Jenkins each plugin's config, similar to Maven.

With Pipeline you now code your job in Groovy, specifying what the job should do.
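
As a tiny illustration (a generic sketch, not XWiki's actual job), a scripted Jenkinsfile expresses the steps themselves rather than a plugin configuration:

// Minimal scripted Pipeline: the job's behavior is code, not configuration.
node {
    stage('Build') {
        // Check out the repository containing this Jenkinsfile and build it.
        checkout scm
        sh 'mvn clean install'
    }
}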

So you gain a lot of power to configure your jobs more precisely, and an easier way to reuse actions/configs between jobs. But you lose some simplicity and the fact that you could go to any Jenkins instance and easily understand what each job was doing. You now need to read code to understand what a job does, and everyone is going to have a different way of coding their jobs.

FYI I'm currently working on XWiki's Jenkinsfile. It's still simple at the moment but it'll become more complex as time passes.

Time will tell whether it's good or bad. For the moment, being a dev, I'm enjoying it! emoticon_smile I especially like the perks that come with it (even though they could have been implemented with a declarative job configuration too):

  • Save the CI job in the SCM next to the code
  • Ability to automatically add or remove jobs for SCM branches

See also my blog post about Jenkins GitHub Organization Jobs.

Feb 02 2017

Jenkins GitHub Organization Jobs

The Jenkins Pipeline plugin includes a very nice feature: the "GitHub Organization" job type. This job type scans a given GitHub organization's repositories for Jenkinsfile files and, when one is found, automatically creates a pipeline job for it.

This has some nice advantages:

  • You save your Jenkins job configuration in your SCM (git in our case, in the Jenkinsfile), next to your code. You can receive email diffs showing who modified it, why, and what changed.
  • It supports branches: when you create a branch it's automatically discovered by Jenkins and the build is executed on it. And if the branch gets removed, it's removed automatically from Jenkins too. This point is awesome for us since we used to have to run a Groovy script to copy or remove jobs whenever branches were created or removed.

So we started exploring this for the XWiki project, starting with Contrib Extensions.

Here's a screenshot of our Github Organization job for XWiki Contrib:

github-organization-contrib.png 

And here's an example of a pipeline job executing:

pipeline.png 

Now if you implement this you'll quickly find that you want to share pipeline scripts between Jenkinsfiles, in order to avoid duplication.

FYI here's what the Jenkinsfile for the syntax-markdown pipeline job shown above looks like:

xwikiModule {
    name = 'syntax-markdown'
}

Simple, isn't it? emoticon_smile The trick is that we've configured Jenkins to automatically load a Global Pipeline Library (implicit load). You can do that by saving libraries at the root of SCM repositories and configuring Jenkins to load them from the SCM sources (see this Jenkins doc for more details).

So we've created this GitHub repository and we've coded a vars/xwikiModule.groovy file. At the time of writing, this is its content (I expect it to improve a lot in the near future):

// Example usage:
//   xwikiModule {
//     name = 'application-faq'
//     goals = 'clean install' (default is 'clean deploy')
//     profiles = 'legacy,integration-tests,jetty,hsqldb,firefox' (default is 'quality,legacy,integration-tests')
//  }

def call(body) {
    // Evaluate the body block and collect configuration into the object.
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()

    // Now build, based on the configuration provided, using the following configuration:
    // - config.name: the name of the module in git, e.g. "syntax-markdown"

    node {
        def mvnHome
        stage('Preparation') {
            // Get the Maven tool.
            // NOTE: Needs to be configured in the global configuration.
            mvnHome = tool 'Maven'
        }
        stage('Build') {
            dir (config.name) {
                checkout scm
                // Execute the XVNC plugin (useful for integration-tests)
                wrap([$class: 'Xvnc']) {
                    withEnv(["PATH+MAVEN=${mvnHome}/bin", 'MAVEN_OPTS=-Xmx1024m']) {
                        try {
                            def goals = config.goals ?: 'clean deploy'
                            def profiles = config.profiles ?: 'quality,legacy,integration-tests'
                            sh "mvn ${goals} jacoco:report -P${profiles} -U -e -Dmaven.test.failure.ignore"
                            currentBuild.result = 'SUCCESS'
                        } catch (Exception err) {
                            currentBuild.result = 'FAILURE'
                            notifyByMail(currentBuild.result)
                            // Rethrow the caught exception so the build actually fails.
                            throw err
                        }
                    }
                }
            }
        }
        stage('Post Build') {
            // Archive the generated artifacts
            archiveArtifacts artifacts: '**/target/*.jar', fingerprint: true
            // Save the JUnit test report
            junit testResults: '**/target/surefire-reports/TEST-*.xml'
        }
    }
}

def notifyByMail(String buildStatus) {
    buildStatus = buildStatus ?: 'SUCCESSFUL'
    def subject = "${buildStatus}: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'"
    def summary = "${subject} (${env.BUILD_URL})"
    def details = """<p>STARTED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]':</p>
    <p>Check console output at &quot;<a href='${env.BUILD_URL}'>${env.JOB_NAME} [${env.BUILD_NUMBER}]</a>&quot;</p>"""

    def to = emailextrecipients([
        [$class: 'CulpritsRecipientProvider'],
        [$class: 'DevelopersRecipientProvider'],
        [$class: 'RequesterRecipientProvider']
    ])
    if (to != null && !to.isEmpty()) {
        mail to: to, subject: subject, body: details
    }
}

Ideas of some next steps:

Right now there's one limitation I've found: it seems I need to manually click on "Re-scan Organization" in the Jenkins UI so that new Jenkinsfiles added to repositories are taken into account. I hope that will get fixed soon. One workaround would be to add another Jenkins job that does this regularly, but it's not perfect. Also note that you absolutely must authenticate against GitHub, as otherwise you'll quickly reach the GitHub API request limit (when authenticated you are allowed 5000 requests per hour).

Anyway it's great and I love it.

Dec 10 2016

Full Automated Test Coverage with Jenkins and Clover

Generating test coverage reports for a single Maven project is simple. You can use the Clover maven plugin easily for that. For example:

mvn clean clover:setup install clover:clover

Generating a report for several modules in the same Maven reactor (same build) is also easy since that's supported out of the box. For example:

mvn clean clover:setup install clover:aggregate clover:clover

However, generating a full coverage report for a multi-reactor project is much harder. Let's take the example of the XWiki project, which has 4 separate GitHub repositories (xwiki-commons, xwiki-rendering, xwiki-platform and xwiki-enterprise) and thus 4 builds.

So the question is: how do we generate a single test coverage report for those 4 Maven reactor builds? For example, we want tests that execute in the xwiki-enterprise repository to generate coverage for source code located, say, in xwiki-commons.

Here's what we want to get:

dashboard.png 

The way to do this is to tell the Maven Clover plugin to use a single location for its coverage database. Manually, this can be achieved like this (more details can be found on the XWiki Test page):

# In xwiki-commons:
mvn clean clover:setup install -Dmaven.clover.cloverDatabase=/path/to/clover/data/clover.db
...
# In xwiki-enterprise:
mvn clean clover:setup install -Dmaven.clover.cloverDatabase=/path/to/clover/data/clover.db

# From xwiki-enterprise, generate the full Clover report:
mvn clover:clover -N -Dmaven.clover.cloverDatabase=/path/to/clover/data/clover.db

This is already pretty cool. However it's taking a lot of time and it would be nicer if it could be executed on the CI (on http://ci.xwiki.org in our case).

One important note is that Clover instruments the artifacts, so you need to be careful not to push them into production and to make sure they're not used in other builds (those builds would fail since the instrumented artifacts need the Clover runtime JAR at execution time).

So, I chose to use Jenkins 2 and the new Pipeline plugin and used the following script (see the XWiki Clover Job):

node() {
    def mvnHome
    def localRepository
    def cloverDir
    stage('Preparation') {
        def workspace = pwd()
        localRepository = "${workspace}/maven-repository"
        // Make sure that the special Maven local repository for Clover exists
        sh "mkdir -p ${localRepository}"
        // Remove all XWiki artifacts from it
        sh "rm -Rf ${localRepository}/org/xwiki"
        sh "rm -Rf ${localRepository}/com/xpn"
        // Make sure that the directory where clover will store its data exists in
        // the workspace and that it's clean
        cloverDir = "${workspace}/clover-data"
        sh "rm -Rf ${cloverDir}"
        sh "mkdir -p ${cloverDir}"
        // Get the Maven tool.
        // NOTE: Needs to be configured in the global configuration.
        mvnHome = tool 'Maven'
    }
    // each() has problems in pipeline, thus using a standard for()
    // See https://issues.jenkins-ci.org/browse/JENKINS-26481
    for (String repoName : ["xwiki-commons", "xwiki-rendering", "xwiki-platform", "xwiki-enterprise"]) {
        stage("Cloverify ${repoName}") {
            dir (repoName) {
                git "https://github.com/xwiki/${repoName}.git"
                runCloverAndGenerateReport(mvnHome, localRepository, cloverDir)
            }
        }
    }
    stage("Publish Clover Reports") {
        ...
    }
}

def runCloverAndGenerateReport(def mvnHome, def localRepository, def cloverDir) {
    wrap([$class: 'Xvnc']) {
        withEnv(["PATH+MAVEN=${mvnHome}/bin", 'MAVEN_OPTS=-Xmx2048m']) {
            sh "mvn -Dmaven.repo.local='${localRepository}' clean clover:setup install -Pclover,integration-tests -Dmaven.clover.cloverDatabase=${cloverDir}/clover.db -Dmaven.test.failure.ignore=true -Dxwiki.revapi.skip=true"
            sh "mvn -Dmaven.repo.local='${localRepository}' clover:clover -N -Dmaven.clover.cloverDatabase=${cloverDir}/clover.db"
        }
    }
}

Note that we use the "Xvnc" Jenkins plugin because we run Selenium 2 functional tests, which require a display.

When this Jenkins job is executed, it results in:

pipeline.png 

Over 5 hours of build time... Now you understand why we want to have this running on the CI agent and not on my local machine emoticon_wink

And the generated reports can be seen on xwiki.org.

Good news: we have an overall coverage of 73.2% for the full XWiki Java codebase. That's not too bad (I thought it would be lower emoticon_wink).

Next blog post will be about trying to achieve this with the Jacoco Maven plugin and the associated challenges and issues... Hint: it's harder than with the Clover Maven plugin.

May 03 2016

Bye Bye CLIRR, Welcome Revapi!

On the XWiki project we've been automatically checking for backward-compatibility breakages in our APIs as part of our build for years (it's even documented here).

Recently we've moved to Java 8 for XWiki 8.1 and we've discovered that the tool we were using, CLIRR, doesn't work with Java 8 anymore. So I searched, without much hope, for an alternative, and I was surprised (and delighted) to find that there are now 2 valid solutions that support Java 8.

I've tried both and they both had issues. However, I've discovered that the maintainers of these 2 solutions are really cool guys who very quickly fixed any issue I raised. Big thanks to Lukas Krejci and Martin Mois, you're awesome guys! emoticon_smile

In the end the XWiki project had to make a difficult choice and we chose Revapi (some reasons are mentioned here, but honestly both are valid choices).

So here's how we use it now:

  • In our top level POM:
    ...
         <plugins>
    ...
           <!-- Used for checking backward compatibility (binary and source) -->
           <plugin>
             <groupId>org.revapi</groupId>
             <artifactId>revapi-maven-plugin</artifactId>
             <!-- Lock down plugin version for build reproducibility -->
             <version>0.4.5</version>
             <dependencies>
               <dependency>
                 <groupId>org.revapi</groupId>
                 <artifactId>revapi-java</artifactId>
                 <version>0.9.0</version>
               </dependency>
             </dependencies>
             <executions>
               <execution>
                 <id>revapi-check</id>
                 <goals>
                   <goal>check</goal>
                 </goals>
               </execution>
             </executions>
             <configuration>
               <oldVersion>${xwiki.compatibility.previous.version}</oldVersion>
               <skip>${xwiki.revapi.skip}</skip>
               <analysisConfiguration>
                {
                  "revapi": {
                    "java": {
                      "filter": {
                        "packages": {
                          "regex": true,
                          "include": ["org\\.xwiki\\..*"],
                          "exclude": ["org\\.xwiki\\..*\\.internal(\\..*)?", "org\\.xwiki\\..*\\.test(\\..*)?"]
                        }
                      }
                    }
                  }
                }
               </analysisConfiguration>
             </configuration>
           </plugin>
    ...
  • Then in specific module POMs we override the configuration to add excludes (for binary and source incompatibilities that we know about and have voluntarily decided to introduce). For example:
    ...
         <plugin>
           <groupId>org.revapi</groupId>
           <artifactId>revapi-maven-plugin</artifactId>
           <configuration>
             <analysisConfiguration><![CDATA[
                {
                  "revapi": {
                    "java": {
                      "filter": {
                        "packages": {
                          "regex": true,
                          "include": ["org\\.xwiki\\..*"],
                          "exclude": ["org\\.xwiki\\..*\\.internal(\\..*)?", "org\\.xwiki\\..*\\.test(\\..*)?"]
                        }
                      }
                    },
                    "ignore": [
                      {
                        "code": "java.method.returnTypeTypeParametersChanged",
                        "old": "method java.util.List<? extends org.xwiki.extension.ExtensionDependency> org.xwiki.extension.AbstractExtension::getDependencies()",
                        "new": "method java.util.List<org.xwiki.extension.ExtensionDependency> org.xwiki.extension.AbstractExtension::getDependencies()",
                        "justification": "? return type makes signature more complex for nothing"
                      },
                      {
                        "code": "java.method.returnTypeTypeParametersChanged",
                        "old": "method java.util.Collection<? extends org.xwiki.extension.ExtensionDependency> org.xwiki.extension.Extension::getDependencies()",
                        "new": "method java.util.Collection<org.xwiki.extension.ExtensionDependency> org.xwiki.extension.Extension::getDependencies()",
                        "justification": "? return type makes signature more complex for nothing"
                      },
                      {
                        "code": "java.method.returnTypeTypeParametersChanged",
                        "old": "method java.util.Collection<? extends org.xwiki.extension.ExtensionDependency> org.xwiki.extension.wrap.WrappingExtension<E extends org.xwiki.extension.Extension>::getDependencies()",
                        "new": "method java.util.Collection<org.xwiki.extension.ExtensionDependency> org.xwiki.extension.wrap.WrappingExtension<E extends org.xwiki.extension.Extension>::getDependencies()",
                        "justification": "? return type makes signature more complex for nothing"
                      }         
                    ]
                  }
                }
              ]]>
          </analysisConfiguration>
           </configuration>
         </plugin>
    ...
  • Now the interesting part is generating reports. We've chosen an original way of doing this: we dynamically generate the Revapi report on our release notes wiki pages using Groovy. The big advantage is that there's no work to be done by the Release Manager:
    • Here's an example from the XWiki 8.1M1 Release Notes

      releasenotes.png

    • The Groovy script below does the following:
      • Use the GitHub REST API to get the content of the pom.xml containing the Revapi excludes.
      • Parse the XML using Groovy
      • Display a report out of it
    • And here's the Groovy script packaged as a Wiki Macro for the curious ones:
      {{groovy}}
      import groovy.json.*

      def getIgnores(def repo, def path)
      {
       def url = "https://api.github.com/repos/xwiki/${repo}/contents/${path}".toURL().text
       def result = new JsonSlurper().parseText(url)
       def xml = new String(result.content.decodeBase64())
        result = new XmlSlurper().parseText(xml)
       def revapi = result.build.plugins.plugin.'**'.find { node ->
          node.artifactId.text() == 'revapi-maven-plugin'
       }
       if (revapi) {
          result = new JsonSlurper().parseText(revapi.configuration.analysisConfiguration.text())
         return result.revapi.ignore
       } else {
         return ''
       }
      }

      def displayIgnores(def ignores)
      {
        result = new JsonSlurper().parseText(ignores)
        result.each() {
          it.each() {
            println "* {{{${it.justification}}}}"
            println "** Violation type: {{code}}${it.code}{{/code}}"
            println "** Old: {{code}}${it.old}{{/code}}"
           if (it.new) {
              println "** New: {{code}}${it.new}{{/code}}"
           }
         }
       }
      }

      def getViolations(def version)
      {
       def xobject = doc.getObject('ReleaseNotes.BackwardCompatibility')
       if (!xobject) {
          xobject = doc.newObject('ReleaseNotes.BackwardCompatibility')
         def commonsTag
         def renderingTag
         def platformTag
         if (version == 'master') {
            commonsTag = renderingTag = platformTag = 'master'
         } else {
            commonsTag = "xwiki-commons-${version}"
            renderingTag = "xwiki-rendering-${version}"
            platformTag = "xwiki-platform-${version}"
         }
         def jsonCommons = getIgnores('xwiki-commons', "xwiki-commons-core/pom.xml?ref=${commonsTag}")
         def jsonRendering = getIgnores('xwiki-rendering', "pom.xml?ref=${renderingTag}")
         def jsonPlatform = getIgnores('xwiki-platform', "xwiki-platform-core/pom.xml?ref=${platformTag}")
          xobject.set('violations', JsonOutput.prettyPrint(JsonOutput.toJson([jsonCommons, jsonRendering, jsonPlatform])))
          doc.save('Added backward-compatiblity violations data', true)
       }
       return xobject.getProperty('violations').value
      }

      displayIgnores(getViolations(xcontext.macro.params.version))
      {{/groovy}}

In conclusion we're very happy with the move, especially since we now have Java 8 support but also because Revapi provides more checks than what CLIRR was doing. ...

Jun 05 2015

Why is Jenkins's Incremental Build feature not working

On the XWiki project we have enabled Jenkins's Incremental Build feature:

jenkins.png

This seemed like a nice feature to speed up our CI when building our Maven jobs. Alas, it doesn't work!

The problem is that from time to time you'll get a build failure such as:

Caused by: org.sonatype.aether.transfer.ArtifactNotFoundException: Could not find artifact org.xwiki.platform:xwiki-platform-store:pom:6.4.4-SNAPSHOT in local.central (xxx)

You'd think this couldn't happen, especially as we use Maven's -U flag: even if the artifact is not present in the local repository, it should be downloaded from the remote Maven repository that we use (and it is available there!).

The reason is a Maven bug: MNG-5542. What happens is that the Incremental Build feature uses the -pl Maven parameter to list the Maven projects to build, and when this parameter is used, artifacts declared in the <parent> section of your POMs are simply ignored and not downloaded...
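
For illustration, here's the kind of command such an incremental build ends up running (the module list is made up); because of MNG-5542, the <parent> POMs of the listed modules are not resolved from the remote repository, even with -U:

# Hypothetical incremental build: only the changed modules are passed via -pl.
# With MNG-5542, the <parent> POMs of these modules are ignored and never downloaded.
mvn install -U -pl xwiki-platform-core/xwiki-platform-store,xwiki-platform-core/xwiki-platform-search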

The consequence is that your build will fail from time to time, whenever an artifact declared in one of your <parent> sections is not present in your local repository... If you have decided to have one Maven repository per job in Jenkins - which would seem a good idea to isolate your jobs and to be able to use Jenkins's parallel build feature - then you'll hit the problem very frequently...

So in the end you have to choose between Parallel builds and Incremental builds but you cannot have both at the same time!

Note that even with parallel builds turned off, your build will fail from time to time, just less frequently...

One solution for the XWiki project would be to break our big job that builds the Platform (over 100 modules located in the same Git repo and released together under a single version) into 100 jobs. But doing this manually is a no-go, so we would need to script it or wait for some Jenkins dev to implement this idea...

Big pain!
