On software quality, collaborative development, wikis and Open Source...


Nov 17 2017

Controlling Test Quality

We already know how to control code quality by writing automated tests. We also know how to ensure that code quality doesn't go down, by using a tool to measure the code covered by tests and failing the build automatically when coverage goes under a given threshold (and it seems to be working).

Wouldn't it be nice to also be able to verify the quality of the tests themselves? emoticon_smile

I'm proposing the following strategy for this:

  • Integrate PIT/Descartes in your Maven build
  • PIT/Descartes generates a Mutation Score metric. So the idea is to monitor this metric and ensure that it keeps going in the right direction and doesn't go down. Similar to watching the Clover TPC metric and ensuring it always goes up.
  • Thus the idea would be, for each Maven module, to set up a Mutation Score threshold (you'd run it once to get the current value and use that as the initial threshold) and have the PIT/Descartes Maven plugin fail the build if the computed mutation score falls below this threshold (see the configuration sketch below). In effect this would tell us that the latest changes have introduced tests of lower quality than the existing ones (on average) and that the new tests need to be brought up to the level of the others.
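
Here's a minimal sketch of what such a build-breaker configuration could look like in a module's pom.xml, modeled on the pitest-maven plugin's configuration style. The threshold value and the way the Descartes engine is declared are assumptions on my part (that's precisely the kind of support the enhancement requests below are about):

<plugin>
  <groupId>org.pitest</groupId>
  <artifactId>pitest-maven</artifactId>
  <configuration>
    <!-- Assumption: fail the build when the mutation score (in percent)
         drops below the threshold recorded for this module -->
    <mutationThreshold>65</mutationThreshold>
    <!-- Assumption: select Descartes as the mutation engine -->
    <mutationEngine>descartes</mutationEngine>
  </configuration>
</plugin>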

In order for this strategy to be implementable, we first need PIT/Descartes to implement the following enhancement requests:

I'm eagerly waiting for these issues to be fixed so I can try this strategy on the XWiki project and verify that it can work in practice. There are some reasons why it might not, such as it being too painful or not easy enough to identify test problems and fix them.

WDYT? Do you see this as possibly working?

Nov 14 2017

Comparing Clover Reports

On the XWiki project, we use Clover to compute our global test coverage. We do this across several Git repositories and include functional tests (and more generally the coverage brought by some modules to other modules).

Now I wanted to see the difference between 2 reports that were generated:

I was surprised to see a drop in the global TPC, from 73.2% down to 71.3%. So I took the time to understand the issue.

It appears that Clover classifies your code classes as Application Code and Test Code (I have no idea what strategy it uses to differentiate them) and even though we've used the same version of Clover (4.1.2) for both reports, the test classes were not categorized in the same way in both reports. It also seems that the TPC value given in the HTML report is computed from Application Code only.

Luckily we asked the Clover Maven plugin to generate not only HTML reports but also XML reports. Thus I was able to write the following Groovy script, which I executed in a wiki page in XWiki. It aggregates Application Code and Test Code together so that the two reports and the global TPC value can be compared.

result.png
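
The script below works off that XML report. For reference, here's the (simplified) shape of a clover.xml file as the script expects it; the package name and figures here are made up:

<coverage>
  <project>
    <package name="org.xwiki.example">
      <metrics conditionals="120" coveredconditionals="90"
               statements="500" coveredstatements="400"
               methods="80" coveredmethods="60"/>
    </package>
    ...
  </project>
  <testproject>
    <!-- Same structure, for the classes Clover categorized as Test Code -->
  </testproject>
</coverage>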

{{groovy}}
def saveMetrics(def packageName, def metricsElement, def map) {
    def coveredconditionals = metricsElement.@coveredconditionals.toDouble()
    def coveredstatements = metricsElement.@coveredstatements.toDouble()
    def coveredmethods = metricsElement.@coveredmethods.toDouble()
    def conditionals = metricsElement.@conditionals.toDouble()
    def statements = metricsElement.@statements.toDouble()
    def methods = metricsElement.@methods.toDouble()
    // If we've already recorded metrics for this package (e.g. once as
    // Application Code and once as Test Code), aggregate them.
    def mapEntry = map.get(packageName)
    if (mapEntry) {
        coveredconditionals += mapEntry.get('coveredconditionals')
        coveredstatements += mapEntry.get('coveredstatements')
        coveredmethods += mapEntry.get('coveredmethods')
        conditionals += mapEntry.get('conditionals')
        statements += mapEntry.get('statements')
        methods += mapEntry.get('methods')
    }
    def metrics = [:]
    metrics.put('coveredconditionals', coveredconditionals)
    metrics.put('coveredstatements', coveredstatements)
    metrics.put('coveredmethods', coveredmethods)
    metrics.put('conditionals', conditionals)
    metrics.put('statements', statements)
    metrics.put('methods', methods)
    map.put(packageName, metrics)
}
def scrapeData(url) {
    def root = new XmlSlurper().parseText(url.toURL().text)
    def map = [:]
    // Application Code packages
    root.project.package.each() { packageElement ->
        def packageName = packageElement.@name
        saveMetrics(packageName.text(), packageElement.metrics, map)
    }
    // Test Code packages
    root.testproject.package.each() { packageElement ->
        def packageName = packageElement.@name
        saveMetrics(packageName.text(), packageElement.metrics, map)
    }
    return map
}
def computeTPC(def map) {
    def tpcMap = [:]
    def totalcoveredconditionals = 0
    def totalcoveredstatements = 0
    def totalcoveredmethods = 0
    def totalconditionals = 0
    def totalstatements = 0
    def totalmethods = 0
    map.each() { packageName, metrics ->
        def coveredconditionals = metrics.get('coveredconditionals')
        totalcoveredconditionals += coveredconditionals
        def coveredstatements = metrics.get('coveredstatements')
        totalcoveredstatements += coveredstatements
        def coveredmethods = metrics.get('coveredmethods')
        totalcoveredmethods += coveredmethods
        def conditionals = metrics.get('conditionals')
        totalconditionals += conditionals
        def statements = metrics.get('statements')
        totalstatements += statements
        def methods = metrics.get('methods')
        totalmethods += methods
        def elementsCount = conditionals + statements + methods
        def tpc
        if (elementsCount == 0) {
            tpc = 0
        } else {
            // TPC = (covered conditionals + covered statements + covered methods)
            //       / (conditionals + statements + methods), as a percentage
            tpc = ((coveredconditionals + coveredstatements + coveredmethods) / elementsCount).trunc(4) * 100
        }
        tpcMap.put(packageName, tpc)
    }
    tpcMap.put("ALL", ((totalcoveredconditionals + totalcoveredstatements + totalcoveredmethods) /
        (totalconditionals + totalstatements + totalmethods)).trunc(4) * 100)
    return tpcMap
}

// map1 = old report
def map1 = computeTPC(scrapeData('http://maven.xwiki.org/site/clover/20161220/clover-commons+rendering+platform+enterprise-20161220-2134/clover.xml')).sort()

// map2 = new report
def map2 = computeTPC(scrapeData('http://maven.xwiki.org/site/clover/20171109/clover-commons+rendering+platform-20171109-1920/clover.xml')).sort()

println "= Added Packages"
println "|=Package|=TPC New"
map2.each() { packageName, tpc ->
    if (!map1.containsKey(packageName)) {
        println "|${packageName}|${tpc}"
    }
}
println "= Differences"
println "|=Package|=TPC Old|=TPC New"
map2.each() { packageName, tpc ->
    def oldtpc = map1.get(packageName)
    if (oldtpc && tpc != oldtpc) {
        // Regressions in red, improvements in green
        def css = oldtpc > tpc ? '(% style="color:red;" %)' : '(% style="color:green;" %)'
        println "|${packageName}|${oldtpc}|${css}${tpc}"
    }
}
println "= Removed Packages"
println "|=Package|=TPC Old"
map1.each() { packageName, tpc ->
    if (!map2.containsKey(packageName)) {
        println "|${packageName}|${tpc}"
    }
}
{{/groovy}}

And the result was quite different from what the HTML report was giving us!

We went from 74.07% in 2016-12-20 to 76.28% in 2017-11-09 (so quite different from the 73.2% to 71.3% figure given by the HTML report). Much nicer! emoticon_smile

Note that one reason I wanted to compare the TPC values was to see if our strategy of failing the build if a module's TPC is below the current threshold was working or not (I had tried to assess it before but it wasn't very conclusive).
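
As a reminder, a per-module coverage threshold like this can be enforced at build time. Here's a minimal sketch using the jacoco-maven-plugin's check goal, one common way to do it; the threshold value is made up, this assumes the JaCoCo agent is already configured elsewhere in the build, and it isn't necessarily the exact setup used by XWiki:

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>check-coverage</id>
      <goals>
        <goal>check</goal>
      </goals>
      <configuration>
        <rules>
          <rule>
            <element>BUNDLE</element>
            <limits>
              <!-- Fail the build if instruction coverage drops below 73% -->
              <limit>
                <counter>INSTRUCTION</counter>
                <value>COVEREDRATIO</value>
                <minimum>0.73</minimum>
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>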

Now I know that we won 2.21% of TPC in a bit less than a year, and that looks good emoticon_smile

EDIT: I'm aware of the Historical feature of Clover but:

  • We haven't set it up so it's too late to compare old reports
  • I don't think it would help with the issue we faced, i.e. test code being counted as Application Code, and being counted differently from one generated report to another.

Nov 08 2017

Flaky tests handling with Jenkins & JIRA

Flaky tests are a plague because they undermine the credibility of your CI strategy by sending false-positive notification emails.

In a previous blog post, I detailed a solution we use on the XWiki project to handle false positives caused by the environment on which the CI build is running. However, that solution didn't handle flaky tests. This blog post is about fixing that!

So the strategy I'm proposing for Flaky tests is the following:

  • When a Flaky test is discovered, create a JIRA issue to remember to work on it and fix it (we currently have the following open issues related to Flaky tests)
  • The JIRA issue is marked as containing a flaky test by filling a custom field called "Flickering Test", using the following format: <package name of test class>.<test class name>#<test method name>. There can be several entries separated by commas.

    Example:

    jiraexample.png

  • In our Pipeline script, after the tests have executed, review the failing ones and check if they are in the list of known flaky tests in JIRA. If so, indicate it in the Jenkins test report. If all failing tests are flickers, don't send a notification email.

    Indication in the job history:

    joblist.png

    Indication on the job result page:

    jobpage.png

    Information on the test page itself:

    testpage.png

Note that there's an alternate solution that can also work:

  • When a Flaky test is discovered, create a JIRA issue to remember to work on it and fix it
  • Add an @Ignore annotation to the test with a message pointing to the JIRA issue (something like @Ignore("WebDriver doesn't support uploading multiple files in one input, see http://code.google.com/p/selenium/issues/detail?id=2239")). This will prevent the build from executing this flaky test.

This last solution is certainly low-tech compared to the first one. I prefer the first one though for the following reasons:

  • It allows flaky tests to continue executing on the CI and thus serve as a constant reminder that something needs to be fixed. Adding the @Ignore annotation feels like sweeping dust under the carpet, and there's little chance you'll come back to it in the future...
  • Since our script runs as a post-build script on the CI agent, it opens the possibility to add some logic to auto-discover flaky tests that have not yet been marked as flaky.

Also note that there's a Jenkins plugin for flaky tests, but I don't like the strategy involved, which is to re-run failing tests a number of times to see if they pass. In theory it can work. In practice it means CI jobs take a lot longer to execute, making the approach impractical for functional UI tests (which is where we have flaky tests in XWiki). In addition, flakiness sometimes only happens when the full test suite is executed (i.e. it depends on what executes before) and sometimes requires a large number of runs before the test passes.

So without further ado, here's the Jenkins Pipeline script to implement the strategy we defined above (you can check the full pipeline script):

/**
 * Check for test flickers, and modify test result descriptions for tests that are identified as flickers. A test is
 * a flicker if there's a JIRA issue having the "Flickering Test" custom field containing the FQN of the test in the
 * format {@code <package name>.<test class name>#<test method name>}.
 *
 * @return true if the failing tests only contain flickering tests
 */

def boolean checkForFlickers()
{
   boolean containsOnlyFlickers = false
    AbstractTestResultAction testResultAction =  currentBuild.rawBuild.getAction(AbstractTestResultAction.class)
   if (testResultAction != null) {
       // Find all failed tests
       def failedTests = testResultAction.getResult().getFailedTests()
       if (failedTests.size() > 0) {
           // Get all false positives from JIRA
           def url = "https://jira.xwiki.org/sr/jira.issueviews:searchrequest-xml/temp/SearchRequest.xml?".concat(
                   "jqlQuery=%22Flickering%20Test%22%20is%20not%20empty%20and%20resolution%20=%20Unresolved")
           def root = new XmlSlurper().parseText(url.toURL().text)
           def knownFlickers = []
            root.channel.item.customfields.customfield.each() { customfield ->
               if (customfield.customfieldname == 'Flickering Test') {
                    customfield.customfieldvalues.customfieldvalue.text().split(',').each() {
                        knownFlickers.add(it)
                   }
               }
           }
            echoXWiki "Known flickering tests: ${knownFlickers}"

           // For each failed test, check if it's in the known flicker list.
           // If all failed tests are flickers then don't send notification email
           def containsAtLeastOneFlicker = false
            containsOnlyFlickers = true
            failedTests.each() { testResult ->
               // Format of a Test Result id is "junit/<package name>/<test class name>/<test method name>"
               def parts = testResult.getId().split('/')
               def testName = "${parts[1]}.${parts[2]}#${parts[3]}"
               if (knownFlickers.contains(testName)) {
                   // Add the information that the test is a flicker to the test's description
                   testResult.setDescription(
                       "<h1 style='color:red'>This is a flickering test</h1>${testResult.getDescription() ?: ''}")
                    echoXWiki "Found flickering test: [${testName}]"
                    containsAtLeastOneFlicker = true
               } else {
                    // This is a real failing test, thus we'll need to send the notification email...
                   containsOnlyFlickers = false
               }
           }

           if (containsAtLeastOneFlicker) {
                manager.addWarningBadge("Contains some flickering tests")
                manager.createSummary("warning.gif").appendText("<h1>Contains some flickering tests</h1>", false,
                   false, false, "red")
           }
       }
   }

   return containsOnlyFlickers
}
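
For context, here's a sketch of how the pipeline could then use this function to decide whether to notify; the mail step parameters (recipient, subject, body) are made up:

def containsOnlyFlickers = checkForFlickers()
if (!containsOnlyFlickers && currentBuild.result != 'SUCCESS') {
    // At least one genuine (non-flickering) test failure: notify the team
    mail to: 'notifications@xwiki.org',
        subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
        body: "At least one non-flickering test failed, see ${env.BUILD_URL}"
}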

Hope you like it! Let me know in comments how you're handling Flaky tests in your project so that we can compare/discuss.

Oct 29 2017

Softshake 2017

I had the chance to participate in Softshake (2017 edition) for the first time. It's a small but very nice conference held in Geneva, Switzerland.

From what I gathered, this year there were fewer attendees than in previous years (about 150 vs 300 before). However, the organization was very nice:

  • Located inside the Hepia school with plenty of rooms available
  • 6 tracks in parallel, which is incredible for a small conference
  • Breakfast, lunch and snacks organized with good food
  • Speaker dinner with Fondue and all emoticon_wink

I got to present 2 talks:

I was also very happy to see my friend and ex-OCTO Technology colleague Philippe Kernevez, and to meet new OCTO consultants. It reminded me of the good times at OCTO emoticon_smile

Oct 28 2017

Google Summer of Code Summit 2017

This year the XWiki project had 4 GSOC students instead of the 5 we had planned (we lost one to FOSSASIA, who clicked faster than us!). This is way cool and we're glad that Google is organizing this every year. XWiki has been participating since the beginning (2005 AFAIR).

Every year the XWiki project sends 2 mentors to participate in the GSOC Summit. This year, Thomas Mortagne and I got the honor to go.

Quiz: find us on the group picture:

all.jpg

This conference is organized as an unconference. These are the key highlights I got out of it:

  • Google will continue organizing GSOC in the future. Yeah!
  • Lots of discussions about Google Code-in (for students aged 12-17) and how organizations can best handle it. This convinced us to register the XWiki project to participate and... we just got selected 2 days ago. If you're interested in participating, see the XWiki Code-In page. I'm really curious to see how it'll go.
  • I proposed a session on "Wikis: what's next", which turned into a "What is XWiki and why is it different from other wikis" session emoticon_wink Incidentally this got me thinking about how best to express what XWiki is in one sentence. So far I've found the following views:
    • View 1: Bring the concepts of a wiki (collaboration on the same content, edit+save, history+rollback, links) to application development.
    • View 2: A runtime web development platform. The default wiki you get is just one example (similar to the Eclipse IDE vs the Eclipse platform).
    • View 3: Provide the ability to add semantics to content: from free-form data to structured data. Pages can contain free-form or structured data / metadata, and control how it's displayed.
    • View 4: A web application server for content-related applications, i.e. it provides all the components / building blocks to easily create web applications.
  • Extremely well organized by Google, with good food and lots of choice (I'm vegetarian). I also love chocolate, and this was heaven (all attendees brought chocolate!):

    chocolat.jpg

Thanks Google. Till next time!

Sep 28 2017

Mutation testing with PIT and Descartes

XWiki SAS is part of a European research project named STAMP. As part of this project I've been able to experiment a bit with Descartes, a mutation engine for PIT.

What PIT does is mutate the code under test and check if the existing test suite is able to detect those mutations. In other words, it checks the quality of your test suite.

Descartes plugs into PIT by providing a set of specific mutators. For example one mutator will replace the output of methods by some fixed value (for example a method returning a boolean will always return true). Another will remove the content of void methods. It then generates a report.
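
To make this concrete, here's a sketch of what the first kind of mutator conceptually does; the class and method here are made up, not actual Descartes output:

// Original method under test:
public boolean isEmpty()
{
    return this.items.size() == 0;
}

// Mutated version ("return true" mutation): the whole body is replaced.
public boolean isEmpty()
{
    return true;
}

// If the test suite still passes against the mutated code, the mutant
// "survives" and the mutation score goes down.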

Here's an example of running Descartes on a module of XWiki:

report.png

You can see both the test coverage score (computed automatically by PIT using Jacoco) and the Mutation score. 

If we drill down to one class (MacroId.java) we can see for example the following report for the equals() method:

equals.png

What's interesting to note is that the test coverage says that the following code has been tested:

result = (getId() == macroId.getId() || (getId() != null && getId().equals(macroId.getId())))
    && (getSyntax() == macroId.getSyntax() || (getSyntax() != null
        && getSyntax().equals(macroId.getSyntax())));

However, the mutation testing is telling us a different story. It says that if you change the equals method code with negative conditions (i.e. testing for inequality), the test still reports success.

If we check the test code:

@Test
public void testEquality()
{
    MacroId id1 = new MacroId("id", Syntax.XWIKI_2_0);
    MacroId id2 = new MacroId("id", Syntax.XWIKI_2_0);
    MacroId id3 = new MacroId("otherid", Syntax.XWIKI_2_0);
    MacroId id4 = new MacroId("id", Syntax.XHTML_1_0);
    MacroId id5 = new MacroId("otherid", Syntax.XHTML_1_0);
    MacroId id6 = new MacroId("id");
    MacroId id7 = new MacroId("id");

    Assert.assertEquals(id2, id1);
    // Equal objects must have equal hashcode
    Assert.assertTrue(id1.hashCode() == id2.hashCode());

    Assert.assertFalse(id3 == id1);
    Assert.assertFalse(id4 == id1);
    Assert.assertFalse(id5 == id3);
    Assert.assertFalse(id6 == id1);

    Assert.assertEquals(id7, id6);
    // Equal objects must have equal hashcode
    Assert.assertTrue(id6.hashCode() == id7.hashCode());
}

We can indeed see that the test doesn't really test for inequality through equals(): the Assert.assertFalse checks use ==, which compares object references and is therefore false for distinct instances no matter what equals() returns. Thus in practice, if we replace the equals() method by return true; the test still passes.

That's interesting because that's something that test coverage didn't notice!
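
For illustration, here's a sketch of the kind of additional assertions that would kill that mutant:

// Exercise inequality through equals() itself so that a mutated
// "return true" equals() makes the test fail:
Assert.assertFalse(id1.equals(id3));
Assert.assertFalse(id1.equals(id4));
Assert.assertFalse(id1.equals(id6));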

More generally the report provides a summary of all mutations it has done and whether they were killed or not by the tests. For example on this class:

mutations.png

Here's what I learnt while trying to use Descartes on XWiki:

  • It's being actively developed
  • It's interesting to classify the results in 3 categories:
    • strong pseudo-tested methods: no matter the return values of a method, the tests still pass. This is the worst offender since it means the tests really need to be improved. This was the case in the example above.
    • weak pseudo-tested methods: the tests pass with at least one modified value. Not as bad as strong pseudo-tested, but you may still want to check it out.
    • fully tested methods: the tests fail for all mutations and thus can be considered rock-solid!
  • So in the future, the generated report should provide this classification to help analyze the results and focus on important problems.
  • It would be nice if the Maven plugin was improved to be able to fail the build if the mutation score goes below a certain threshold (as we do for test coverage).
  • Performance: it's quite slow compared to Jacoco execution time, for example. In my example above it took 34 seconds to execute with all possible mutations (for a project with 14 test classes, 31 tests and 20 classes).
  • It would be nice to have a Sonar integration so that PIT/Descartes could provide some stats on the Sonar dashboard.
  • Big limitation: ATM PIT (and/or Descartes) doesn't support being executed on a multi-module project. This means that right now you need to compute the full classpath for all modules and run all sources and tests as if they were a single module. This causes problems for all tests that depend on the filesystem and expect a given directory structure. It's also tedious and error-prone since the classpath order can have side effects.

Conclusion:

PIT/Descartes is very nice, but I feel it would need to provide a bit more added value out of the box for the XWiki open source project to use it in an automated manner. The test coverage reports we have already provide a lot of information about the code that is not tested at all, and if we have 5 hours to spend, we would probably spend them on adding tests rather than improving existing tests further. YMMV. If you have a very strong suite of tests and you want to check its quality, then PIT/Descartes is your friend!

If Descartes could provide the build-failure-on-low-threshold feature mentioned above, that could be one way we could integrate it in the XWiki build. But for that to be possible, PIT/Descartes needs to be able to run on multi-module Maven projects.

I'm also currently testing DSpot. DSpot uses PIT and Descartes, but in addition it uses their results to generate new tests automatically. That would be even more interesting (if it can work well enough). I'll post back when I've been able to run DSpot on XWiki and learn more by using it.

Now, the Descartes project could also use the information provided by line coverage to automatically generate tests to cover the spotted issues.

I'd like to thank Oscar Luis Vera Pérez, who's actively working on Descartes and who's shown me how to use it and how to analyze the results. Thanks Oscar! I'll also continue to work with Oscar on improving Descartes and executing it on the XWiki code base.

Sep 17 2017

Using Docker + Jenkins to test configurations

On the XWiki project, we currently have automated functional tests that use Selenium and Jenkins. However they exercise only a single configuration: HSQLDB, Jetty and FireFox (all at fixed versions).

XWiki SAS is part of the STAMP research project and one domain of this research is improving configuration testing.

As a first step I've worked on providing official XWiki images, but I've only provided 2 configurations (XWiki on Tomcat + MySQL and XWiki on Tomcat + PostgreSQL) and they're not currently exercised by our functional tests.

Thus I'm proposing below an architecture that should allow XWiki to be tested on various configurations:

architecture.png

Here's what I think it would mean in terms of a Jenkins Pipeline (note that at this stage this is pseudo-code and should not be taken literally):

pipeline {
  agent {
    docker {
      image 'xwiki-maven-firefox'
      args '-v $HOME/.m2:/root/.m2'
    }
  }
  stages {
    stage('Test') {
      steps {
        docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
          docker.image('tomcat:8').withRun('-v $XWIKIDIR:/usr/local/tomcat/webapps/xwiki').inside("--link ${c.id}:db") {
            [...]
            wrap([$class: 'Xvnc']) {
              withMaven(maven: mavenTool, mavenOpts: mavenOpts) {
                [...]
                sh "mvn ..."
              }
            }
          }
        }
      }
    }
  }
}

Some explanations:

  • We would set up a custom Docker Registry so that we can prepare images to be used by the Jenkins pipeline to create containers
  • Those images could themselves be refreshed regularly by another pipeline that would use the docker.build() construct (see the sketch after this list)
  • We would use a Jenkins Agent dynamically provisioned from an image that would contain: sshd and a Jenkins user (so that the Jenkins Master can communicate with it), Maven, a VNC Server and a browser (FireFox for example). We would have several such images, one per browser we want to test with.
    • Note that since we want to support only the latest browser versions for FF/Chrome/Safari, we could use apt to update (and commit) the browser version in the container prior to starting it, from the pipeline script.
  • Then the pipeline would spawn two containers: one for the database and one for the Servlet container. Importantly, for the Servlet container I think we should mount a volume that points to a local directory on the agent containing the exploded XWiki WAR (prepared as a pre-step by the Maven build). This would save time by not having to recreate a new image every time there's a commit on the XWiki codebase!
  • The build that contains the tests will be started by the Agent (and we would mount the Maven local repository as a volume in order to speed up build times across runs).
  • Right now the XWiki build already knows how to run the functional tests by fetching/exploding the XWiki WAR in the target directory and then starting XWiki directly from the tests, so all we would need to do is to make sure we map this directory in the container containing the Servlet container (e.g. in Tomcat it would be mapped to [TOMCATHOME]/webapps/xwiki).
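
As an illustration of the image-refresh idea from the list above, here's a minimal pipeline sketch; the registry URL, image name and Dockerfile path are made up:

node {
    // Rebuild the agent image from its Dockerfile and push it to our own registry
    docker.withRegistry('https://registry.example.org') {
        def image = docker.build('xwiki-maven-firefox', './images/xwiki-maven-firefox')
        image.push('latest')
    }
}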

This is just an architecture at this stage. Now we need to put it into practice and find the gotchas (there always are some emoticon_wink).

WDYT? Could this work? Are you doing this yourself?

Stay tuned, I should be able to report on how it went in the coming weeks/months.

Jul 15 2017

XWiki vs statically-generated site

Imagine that you have a software project and you wish to have a web site to document everything related to the project (user documentation, dev documentation, news, etc).

You may wonder whether you should go with a statically-generated site (using GitHub Pages for example or some homemade solution) or use a wiki such as XWiki.

I've tried to list the pros of each solution below, trying to be as impartial as possible (not easy since I'm one of the developers of the XWiki project emoticon_wink). Don't hesitate to comment if you have other points or if some of my points are not fully accurate, and I'll update this blog post. Thanks!

Pros of a statically-generated site

  • Hosting is easier, as it only consists of static pages and assets. More generally, it's simpler to get started (though that's offset by the need to set up some DSL and/or a build if you don't want to enter content in HTML)
  • Maintenance is simplified, no database to backup for example or software to upgrade
  • Documentation can be versioned along with the code it documents
  • (GitHub) You get a review system built-in with Pull Requests
  • (GitHub) You can tag the whole documentation and have branches per released versions
  • Easier to scale. It's easy to make web servers scale to massively large number of users.

Pros of a wiki with XWiki

  • Easy for anyone to enter content, including for non-technical users. No HTML to know nor any specific DSL to understand. No need for an account on GitHub nor the need to understand how to make a PR.
  • Much faster to enter content through the WYSIWYG editor or through wiki markup.
  • Changes are immediately visible. You edit a page and click save or preview and you can see the result. No need to go through a build that will push the changes. With preview you can go back to editing the page if you're not satisfied and that's very fast. With WYSIWYG editor you don't even need to preview (since WYSIWYG is... WYSIWYG).
  • Richer search, see for example the XWiki.org Search UI vs the Groovy Search UI.
  • Ability for users to comment on the website pages.
  • Ability for users to watch pages and be notified when there are changes to those pages
  • Ability to see what's new in the documentation and the changes made
  • Your pages are not saved alongside the code in a single SCM. However, XWiki pages can be exported to an XML format, and the exported pages can be saved in the same SCM as the code. There are even a GitHub Extension and an SVN Extension to help you do that.
  • Pages can be exported in different formats: OpenOffice, Word, PDF, etc. Note that it's also possible to export to HTML in order to offer a static web site for example.
  • Ability to display large quantities of filterable data in tables, with great scalability.
  • Ability to have dynamic examples that can be tested directly in the wiki. For example the XWiki Rendering can be tested live.
  • Perform dynamic actions, such as generating GitHub statistics for your project.
  • Perform dynamic actions by writing some scripts in a wiki page. For example, imagine you'd like to list all Extensions having a name containing "User" and located in the extensions subwiki; you'd simply write the following in a wiki page (you can try it on the XWiki Playground):
    {{velocity}}
    #set ($query = $services.query.xwql("where doc.object(ExtensionCode.ExtensionClass).name like '%User%'").setWiki('extensions'))
    #foreach ($itemDoc in $query.execute())
      * [[extensions:$itemDoc]]
    #end
    {{/velocity}}
  • More generally write some applications to enter data easily for your website. It's easy with Applications within Minutes.

Conclusion 

IMO the choice will hugely depend on your needs from the above list but also on how easy/hard it is for you to get some hosting for XWiki:

  • If it's an internal company project, it shouldn't be hard to install and host an XWiki instance (unless your company doesn't have an IT department, in which case see below)
  • If you have some budget you could use XWiki SAS's cloud solution (starts at 10 euros/month for 10 users)
  • If you're an open source project and have no budget, then the choice is a bit harder. The XWiki project has a free farm, and if your project doesn't require professional hosting then it could be a good option. If your project is quite visible/large you could also contact XWiki SAS, which has often offered free professional hosting for open source projects or non-profit associations in the past.

It would be great if more open source forges such as the Apache Software Foundation, the Eclipse Foundation and others were offering XWiki hosting for their projects as an option.

So what would you choose for your project? emoticon_smile

Jul 05 2017

What's new in XWiki @ OW2Conf'17

I got to do a 15-minute quickie at OW2Conf'17 about What's New in XWiki. Since I didn't know whether the audience was already using XWiki, I did a demo mixing some older but interesting stuff with some new features.

Features demoed:

  • Nested Pages
  • Page Templates
  • Menu Application (bundled now)
  • Color Theme Editor (with live preview)
  • OnlyOffice Extension installed with the Extension Manager
  • Application Within Minutes to create your own custom forms/structures

Jun 28 2017

Voxxed Luxembourg 2017

Last week I participated in Voxxed Luxembourg. It was a nice event, very well organized, with about 450 participants. Well done PAG!

I presented a session on Leading a community-driven open source project.

Would love to hear feedback.
