
Wednesday, June 3, 2015

NextBuild 2015 Conference Report

Saturday May 30th was the first NextBuild developer's conference, held at the High Tech Campus in Eindhoven. The conference is free for attendees and offers a variety of subjects presented by fellow developers. This meant all talks were very practical and covered subjects encountered in real projects, which is what makes attending a talk really valuable for me. Although it was on a weekend day, about 150 attendees were present. The location was very nice and allowed for an informal atmosphere with plenty of opportunities to catch up.

The day started with a keynote by Alex Sinner of Amazon. He looked into microservices and explained the features of AWS, especially the container support and the new AWS Lambda service. With AWS Lambda we can deploy functions that are executed on the Amazon infrastructure, and we only pay when such a function is actually executed. After the keynote the conference tracks were spread over five rooms, so sometimes it was difficult to choose a track. I went to the talk by my JDriven colleague Rob Brinkman about a Westy tracking platform he built with Vert.x, Groovy, AngularJS, Redis, Docker and Gradle. For those who don't know: a Westy is a Volkswagen van used for camping trips. Rob built a platform where a (cheap) tracker unit sends the location of the Westy to a Vert.x module. This is combined with other trip details and information in a web application. Everything works with push events, so the information is updated in real time in the web application. The talk was very interesting and also really shows the power and elegance of Vert.x. The architecture presented is like a blueprint for Internet of Things (IoT) applications.

The following talk was by Bert Jan Schrijver. He gave a very good talk about his real project experiences introducing continuous delivery into an organization with a lot of legacy applications. He showed how they could use new applications to introduce a new way of working into the organization; older applications could migrate to this new way of working afterwards. This allowed the company to keep adding new features and applications to their portfolio while gradually improving the development process. He also showed how they used a lot of Amazon infrastructure and how everything (and really everything) was automated. Even the creation of a Nexus repository was automated, so when someone accidentally deleted the instance, it was up and running again a couple of minutes later.
The last talk before lunch was by Sander Elias and was about Reactive Angular. He showed some of the basics of Reactive eXtensions (Rx) in JavaScript. Only two weeks earlier we had an RxJava workshop at JDriven, and the syntax was very similar, which is a good thing, because it makes it very easy to use Rx in different languages. He also showed a sample of a code completion component that used Rx in JavaScript. The code was very concise and readable. I really liked the syntax and want to use it in my own JavaScript code.

After lunch we attended a second keynote, by Pieter Hintjens, about building a community. He had some nice anecdotes and a bit of science on how communities and social groups come together, stay together and become successful. One of the important points was to put people over code. So if somebody puts in the effort to create a pull request for an open source project, we must value that. The quality of the code is less important than the personal relation that is built within the community. He also mentioned that some sort of agreements are needed to make sure there is peace instead of war. He also encouraged us to embrace failure and learn from it. It already starts early, at elementary school, where kids are supposed to get good grades and not make mistakes. But we should make mistakes, because that is the only way to learn and improve. I really liked the talk and got a different view on open source projects and communities.

Then I attended the Better JavaScript using ECMAScript 2015 talk by my JDriven colleague Emil van Galen. After a short history of the different ECMAScript specifications he showed some really nice examples of the new ECMAScript 2015 additions to the existing specification. With clear code samples he showed how we can use the new syntax in our everyday JavaScript development. Although the talks at the conference are only 30 minutes, he managed to show a lot of code and features. I can't wait for even more ECMAScript 2015 support in web browsers so we can use the new features. After his talk it was time for my own talk about Spock and how Spock makes testing fun for Java Virtual Machine (JVM) languages. I did live coding to show off the power and magic of Spock. The code samples are also on GitHub.
The next talk I attended was a first taste of integration with Apache Camel by Niels Stevens. He first gave a short introduction to integration patterns and then showed a couple of Apache Camel components that implement those patterns. Camel contains a lot of components for almost every need, so it is very easy to add integration to our applications. At the end he showed a demo of an example flow implemented with Camel and the Java DSL. Finally it was time for the last talk of the day, and I did one on Gradle in Java projects. Again with live coding I showed that Gradle can be used in a Java project with very little effort. By configuring tasks that are added by the Java plugin of Gradle we can already customize our build. And I showed how easy it is to add our own task for functionality that is not provided out of the box. This code is also on GitHub.

The conference was very well organized, the location was great, the food was good, the talks were very informative and the audience was super. So it was a great Saturday, and hopefully next year we can attend the second NextBuild conference!

Friday, May 1, 2015

Groovy Goodness: Share Data in Concurrent Environment with Dataflow Variables

Working with data in a concurrent environment can be complex. Groovy includes GPars (so we don't have to download any extra dependencies) to provide models for working easily with data in a concurrent environment. In this blog post we are going to look at an example where we use dataflow variables to exchange data between concurrent tasks. In a dataflow algorithm we define functions or tasks that have an input and an output. A task is started when its input is available. So instead of defining an imperative sequence of tasks that need to be executed, we define a series of tasks that start executing when their input is available. And the nice thing is that each of these tasks is independent and can run in parallel if needed.

The data that is shared between tasks is stored in dataflow variables. The value of a dataflow variable can only be set once, but it can be read multiple times. When a task wants to read the value, but it is not yet available, the task will wait for the value in a non-blocking way.
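Before the full example, here is a minimal sketch of this write-once, read-many behavior with a single DataflowVariable (GPars is bundled with the Groovy distribution, so no extra dependency is needed):

```groovy
import groovyx.gpars.dataflow.DataflowVariable
import static groovyx.gpars.dataflow.Dataflow.task

final DataflowVariable message = new DataflowVariable()

// This task reads the dataflow variable and waits
// until a value is bound by another task.
final reader = task {
    "Got: ${message.val}"
}

// Bind the value once; the waiting reader task continues.
task {
    message << 'Hello dataflow'
}

// get() waits for the task result; the variable
// can be read as many times as we like.
assert reader.get() == 'Got: Hello dataflow'
assert message.val == 'Hello dataflow'
```

Binding the same variable a second time is an error, which is exactly what keeps the shared data safe between tasks.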

In the following example Groovy script we use the Dataflows class. This class provides an easy way to set multiple dataflow variables and get their values. In the script we want to get the temperature in a city in both Celsius and Fahrenheit, and we use remote web services to get the data:

import groovyx.gpars.dataflow.Dataflows
import static groovyx.gpars.dataflow.Dataflow.task

// Create new Dataflows instance to hold
// dataflow variables.
final Dataflows data = new Dataflows()

// Convert temperature from Celsius to Fahrenheit.
task {
    println "Task 'convertTemperature' is waiting for dataflow variable 'cityWeather'"

    // Get dataflow variable cityWeather value from
    // Dataflows data object. The value
    // is set by findCityWeather task.
    // If the value is not set yet, wait.
    final cityWeather = data.cityWeather
    final cityTemperature = cityWeather.temperature

    println "Task 'convertTemperature' got dataflow variable 'cityWeather'"

    // Convert value with webservice at
    // www.webservicex.net.
    final params = 
        [Temperature: cityTemperature, 
         FromUnit: 'degreeCelsius', 
         ToUnit: 'degreeFahrenheit']
    final url = "http://www.webservicex.net/ConvertTemperature.asmx/ConvertTemp"
    final result = downloadData(url, params)

    // Assign converted value to dataflow variable
    // temperature in Dataflows data object.
    data.temperature = result.text()
}

// Find temperature for city.
task {
    println "Task 'findCityWeather' is waiting for dataflow variable 'searchCity'"

    // Get value for city attribute in
    // Dataflows data object. This is 
    // set in another task (startSearch) 
    // at another time.
    // If the value is not set yet, wait.
    final city = data.searchCity

    println "Task 'findCityWeather' got dataflow variable 'searchCity'"

    // Get temperature for city with 
    // webservice at api.openweathermap.org.
    final params = 
        [q: city, 
         units: 'metric', 
         mode: 'xml']
    final url = "http://api.openweathermap.org/data/2.5/find"
    final result = downloadData(url, params)
    final temperature = result.list.item.temperature.@value

    // Assign map value to cityWeather dataflow 
    // variable in Dataflows data object. 
    data.cityWeather = [city: city, temperature: temperature]
}

// Get city part from search string.
task {
    println "Task 'parseCity' is waiting for dataflow variable 'searchCity'"

    // Get value for city attribute in
    // Dataflows data object. This is 
    // set in another task (startSearch) 
    // at another time.
    // If the value is not set yet, wait.
    final city = data.searchCity
    
    println "Task 'parseCity' got dataflow variable 'searchCity'"

    final cityName = city.split(',').first()

    // Assign to dataflow variable in Dataflows object.
    data.cityName = cityName
}

final startSearch = task {
    // Use command line argument to set
    // city dataflow variable in 
    // Dataflows data object.
    // Any code that reads this value
    // was waiting, but will start now,
    // because of this assignment.
    data.searchCity = args[0]  
}

// When a variable is bound we log it. 
final printValueBound = { dataflowVar, value ->
    println "Variable '$dataflowVar' bound to '$value'" 
}
data.searchCity printValueBound.curry('searchCity')
data.cityName printValueBound.curry('cityName')
data.cityWeather printValueBound.curry('cityWeather')
data.temperature printValueBound.curry('temperature')


// Here we read the dataflow variables cityWeather and temperature
// from Dataflows data object. Notice once the value is
// is set it is not calculated again. For example cityWeather 
// will not do a remote call again, because the value is already known
// by now.
println "Main thread is waiting for dataflow variables 'cityWeather', 'temperature' and 'cityName'"
final cityInfo = 
    data.cityWeather + [tempFahrenheit: data.temperature] + [cityName: data.cityName]


println """\

Temperature in city $cityInfo.cityName (searched with $cityInfo.city):
$cityInfo.temperature Celsius
$cityInfo.tempFahrenheit Fahrenheit
"""


// Helper method to get an XML response from a URL
// and parse it with XmlSlurper. Returns GPathResult.
def downloadData(requestUrl, requestParams) {
    // Build the query string; encode the values so spaces and
    // special characters stay valid in the URL.
    final params = requestParams
            .collect { key, value -> "${key}=${URLEncoder.encode(value.toString(), 'UTF-8')}" }
            .join('&')
    final url = "${requestUrl}?${params}"

    final response = new XmlSlurper().parseText(url.toURL().text)
    response
}

Now when we run the script we get the following output:

$ groovy citytemp.groovy Tilburg,NL
Task 'convertTemperature' is waiting for dataflow variable 'cityWeather'
Task 'parseCity' is waiting for dataflow variable 'searchCity'
Task 'findCityWeather' is waiting for dataflow variable 'searchCity'
Task 'findCityWeather' got dataflow variable 'searchCity'
Task 'parseCity' got dataflow variable 'searchCity'
Main thread is waiting for dataflow variables 'cityWeather', 'temperature' and 'cityName'
Variable 'searchCity' bound to 'Tilburg,NL'
Variable 'cityName' bound to 'Tilburg'
Task 'convertTemperature' got dataflow variable 'cityWeather'
Variable 'cityWeather' bound to '[city:Tilburg,NL, temperature:11.76]'
Variable 'temperature' bound to '53.167999999999985'

Temperature in city Tilburg (searched with Tilburg,NL):
11.76 Celsius
53.167999999999985 Fahrenheit

Notice how tasks are waiting for values and continue when they receive their input. The order of the definition of the tasks is not important, because they will wait for their input to start the real work.

Written with Groovy 2.4.3.

Wednesday, April 29, 2015

Awesome Asciidoctor Notebook is Published

Today the Awesome Asciidoctor Notebook is published as a free book. This book is an electronic publication that bundles all Awesome Asciidoctor blog posts about the Asciidoctor tool.

The book is published at Leanpub and is available in three formats: PDF, MOBI (for Kindle) and EPUB (for iPad). Updates for the book are also free. So when new Awesome Asciidoctor blog posts are added to the book you will get those updates for free.

I hope you will enjoy the book and I will keep it up-to-date with new content when I publish new Awesome Asciidoctor blog posts.

Monday, April 27, 2015

Grails Goodness: Custom Data Binding with @BindUsing Annotation

Grails has a data binding mechanism that converts request parameters to properties of different types on an object. We can customize the default data binding in different ways; one of them is the @BindUsing annotation. We use a closure as argument for the annotation, and in the closure we must return the converted value. We get two arguments: the first is the object the data binding is applied to, and the second is the source with all original values, of type SimpleMapDataBindingSource. The source could for example be a map-like structure or the parameters of a request object.

In the next example code we have a Product class with a property of type ProductId. We write a custom data binding to convert a String value with the pattern {code}-{identifier} to a ProductId object:

package mrhaki.grails.binding

import grails.databinding.BindUsing

class Product {

    // Use custom data binding with @BindUsing annotation.
    @BindUsing({ product, source ->

        // Source parameter contains the original values.
        final String productId = source['productId']

        // ID format is like {code}-{identifier},
        // eg. TOYS-067e6162.
        final productIdParts = productId.split('-')

        // The closure must return the actual value
        // for the property.
        new ProductId(
            code: productIdParts[0],
            identifier: productIdParts[1])

    })
    ProductId productId

    String name

}

// Class for product identifier.
class ProductId {
    String code
    String identifier
}

The following specification shows the data binding in action:

package mrhaki.grails.binding

import grails.test.mixin.TestMixin
import grails.test.mixin.support.GrailsUnitTestMixin
import spock.lang.Specification
import grails.databinding.SimpleMapDataBindingSource

@TestMixin(GrailsUnitTestMixin)
class ProductSpec extends Specification {

    def dataBinder

    def setup() {
        // Use Grails data binding
        dataBinder = applicationContext.getBean('grailsWebDataBinder')
    }

    void "productId parameter should be converted to a valid ProductId object"() {
        given:
        final Product product = new Product()

        and:
        final SimpleMapDataBindingSource source = 
            [productId: 'OFFCSPC-103910ab24', name: 'Swingline Stapler']

        when:
        dataBinder.bind(product, source)

        then:
        with(product) {
            name == 'Swingline Stapler'

            with(productId) {
                identifier == '103910ab24'
                code == 'OFFCSPC'
            }
        }
    }

}

If we had a controller with the request parameters productId=OFFCSPC-103910ab24&name=Swingline%20Stapler, the Grails data binding could create a Product instance and set the properties with the correct values.

Written with Grails 2.5.0 and 3.0.1.

Friday, April 24, 2015

Grails Goodness: Adding Health Check Indicators

With Grails 3 we also get Spring Boot Actuator. We can use Spring Boot Actuator to add some production-ready features for monitoring and managing our Grails application. One of those features is the addition of endpoints with information about our application. By default we already have a /health endpoint when we start a Grails (3+) application. It returns a JSON response with status UP. Let's expand this endpoint and add disk space, database and URL health check indicators.

We can set the application property endpoints.health.sensitive to false (securing these endpoints will be another blog post) and we automatically get a disk space health indicator. The default threshold is 10MB: when the available disk space drops below 10MB, the status is set to DOWN. The following snippet shows the change in grails-app/conf/application.yml to set the property:

...
---
endpoints:
    health:
        sensitive: false
...

If we invoke the /health endpoint we get the following output:

{
    "status": "UP",
    "diskSpace": {
        "status": "UP",
        "free": 97169154048,
        "threshold": 10485760
    }
}

If we want to change the threshold we can create a Spring bean of type DiskSpaceHealthIndicatorProperties with the name diskSpaceHealthIndicatorProperties to override the default bean. Since Grails 3 we can override the doWithSpring method in the Application class to define Spring beans:

package healthcheck

import grails.boot.GrailsApp
import grails.boot.config.GrailsAutoConfiguration
import org.springframework.boot.actuate.health.DiskSpaceHealthIndicatorProperties

class Application extends GrailsAutoConfiguration {

    static void main(String[] args) {
        GrailsApp.run(Application)
    }

    @Override
    Closure doWithSpring() {
        { ->
            diskSpaceHealthIndicatorProperties(DiskSpaceHealthIndicatorProperties) {
                // Set threshold to 250MB.
                threshold = 250 * 1024 * 1024
            }
        }
    }
}

Spring Boot Actuator already contains health indicator implementations for SQL databases, Mongo, Redis, Solr and RabbitMQ. We can activate them by adding them as Spring beans to our application context; they are then automatically picked up and added to the results of the /health endpoint. In the following example we create a Spring bean databaseHealthCheck of type DataSourceHealthIndicator:

package healthcheck

import grails.boot.GrailsApp
import grails.boot.config.GrailsAutoConfiguration
import org.springframework.boot.actuate.health.DataSourceHealthIndicator
import org.springframework.boot.actuate.health.DiskSpaceHealthIndicatorProperties

class Application extends GrailsAutoConfiguration {

    static void main(String[] args) {
        GrailsApp.run(Application)
    }

    @Override
    Closure doWithSpring() {
        { ->
            // Configure data source health indicator based
            // on the dataSource in the application context.
            databaseHealthCheck(DataSourceHealthIndicator, dataSource)

            diskSpaceHealthIndicatorProperties(DiskSpaceHealthIndicatorProperties) {
                threshold = 250 * 1024 * 1024
            }
        }
    }
}

To create our own health indicator class we must implement the HealthIndicator interface. The easiest way is to extend the AbstractHealthIndicator class and override the method doHealthCheck. It might be nice to have a health indicator that checks whether a URL is reachable. For example, if our application needs to access a REST API over HTTP, we can check that it is available.

package healthcheck

import org.springframework.boot.actuate.health.AbstractHealthIndicator
import org.springframework.boot.actuate.health.Health

class UrlHealthIndicator extends AbstractHealthIndicator {

    private final String url

    private final int timeout

    UrlHealthIndicator(final String url, final int timeout = 10 * 1000) {
        this.url = url
        this.timeout = timeout
    }

    @Override
    protected void doHealthCheck(Health.Builder builder) throws Exception {
        final HttpURLConnection urlConnection =
                (HttpURLConnection) url.toURL().openConnection()

        final int responseCode =
                urlConnection.with {
                    requestMethod = 'HEAD'
                    readTimeout = timeout
                    connectTimeout = timeout
                    connect()
                    responseCode
                }

        // If code in 200 to 399 range everything is fine.
        responseCode in (200..399) ?
                builder.up() :
                builder.down(
                        new Exception(
                                "Invalid responseCode '${responseCode}' checking '${url}'."))
    }
}

In our Application class we create a Spring bean for this health indicator so it is picked up by the Spring Boot Actuator code:

package healthcheck

import grails.boot.GrailsApp
import grails.boot.config.GrailsAutoConfiguration
import org.springframework.boot.actuate.health.DataSourceHealthIndicator
import org.springframework.boot.actuate.health.DiskSpaceHealthIndicatorProperties

class Application extends GrailsAutoConfiguration {

    static void main(String[] args) {
        GrailsApp.run(Application)
    }

    @Override
    Closure doWithSpring() {
        { ->
            // Create instance for URL health indicator.
            urlHealthCheck(UrlHealthIndicator, 'http://intranet', 2000)

            databaseHealthCheck(DataSourceHealthIndicator, dataSource)

            diskSpaceHealthIndicatorProperties(DiskSpaceHealthIndicatorProperties) {
                threshold = 250 * 1024 * 1024
            }
        }
    }
}

Now when we run our Grails application and access the /health endpoint we get the following JSON:

{
    "status": "DOWN",
    "urlHealthCheck": {
        "status": "DOWN",
        "error": "java.net.UnknownHostException: intranet"
    },
    "databaseHealthCheck": {
        "status": "UP",
        "database": "H2",
        "hello": 1
    },
    "diskSpace": {
        "status": "UP",
        "free": 96622411776,
        "threshold": 262144000
    }
}

Notice that the URL health check fails, so the overall status is set to DOWN.

Written with Grails 3.0.1.

Thursday, April 23, 2015

Grails Goodness: Log Startup Info

We can let Grails log some extra information when the application starts, like the process ID (PID) of the application, the machine it starts on, and the time needed to start the application. The GrailsApp class has a property logStartupInfo, which is true by default. If the property is true then some extra lines are logged at the INFO and DEBUG levels of the logger of our Application class.

So in order to see this information we must configure our logging in the logback.groovy file. Suppose our Application class is mrhaki.grails.sample.Application; then we add the following line to see the output of the startup logging on the console:

...
logger 'mrhaki.grails.sample.Application', DEBUG, ['STDOUT'], false
...

When we run our Grails application we see the following in our console:

...
INFO mrhaki.grails.sample.Application - Starting Application on mrhaki-jdriven.local with PID 20948 (/Users/mrhaki/Projects/blog/posts/sample/build/classes/main started by mrhaki in /Users/mrhaki/Projects/mrhaki.com/blog/posts/sample/)
DEBUG mrhaki.grails.sample.Application - Running with Spring Boot v1.2.3.RELEASE, Spring v4.1.6.RELEASE
INFO mrhaki.grails.sample.Application - Started Application in 8.29 seconds (JVM running for 9.906)
Grails application running at http://localhost:8080
...
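If we don't want this startup logging at all, we can set the property to false before running the application. A minimal sketch (assuming the standard Grails 3 Application class):

```groovy
package mrhaki.grails.sample

import grails.boot.GrailsApp
import grails.boot.config.GrailsAutoConfiguration

class Application extends GrailsAutoConfiguration {

    static void main(String[] args) {
        final GrailsApp app = new GrailsApp(Application)

        // Disable the default startup info logging.
        app.logStartupInfo = false

        app.run(args)
    }

}
```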

If we want to add some extra logging we can override the logStartupInfo method:

package mrhaki.grails.sample

import grails.boot.GrailsApp
import grails.boot.config.GrailsAutoConfiguration
import grails.util.*
import groovy.transform.InheritConstructors

class Application extends GrailsAutoConfiguration {

    static void main(String[] args) {
        // Use extended GrailsApp to run.
        new StartupGrailsApp(Application).run(args)
    }

}

@InheritConstructors
class StartupGrailsApp extends GrailsApp {
    @Override
    protected void logStartupInfo(boolean isRoot) {
        // Show default info.
        super.logStartupInfo(isRoot)

        // And add some extra logging information.
        // We use the same logger if we get the
        // applicationLog property.
        if (applicationLog.debugEnabled) {
            final metaInfo = Metadata.getCurrent()
            final String grailsVersion = GrailsUtil.grailsVersion
            applicationLog.debug "Running with Grails v${grailsVersion}"

            final sysprops = System.properties
            applicationLog.debug "Running on ${sysprops.'os.name'} v${sysprops.'os.version'}"
        }
    }
}

If we run the application we see in the console:

...
DEBUG mrhaki.grails.sample.Application - Running with Spring Boot v1.2.3.RELEASE, Spring v4.1.6.RELEASE
DEBUG mrhaki.grails.sample.Application - Running with Grails v3.0.0
DEBUG mrhaki.grails.sample.Application - Running on Mac OS X v10.10.3
...

Written with Grails 3.0.1.

Wednesday, April 22, 2015

Grails Goodness: Save Application PID in File

Since Grails 3 we can borrow a lot of Spring Boot features in our applications. If we look in the Application.groovy file that is created when we create a new Grails application, we see the class GrailsApp. This class extends SpringApplication, so we can use all the methods and properties of SpringApplication in our Grails application. Spring Boot (and therefore Grails) comes with the class ApplicationPidFileWriter in the package org.springframework.boot.actuate.system. This class saves the application PID (process ID) in a file application.pid when the application starts.

In the following example Application.groovy we create an instance of ApplicationPidFileWriter and register it with the GrailsApp:

package mrhaki.grails.sample

import grails.boot.GrailsApp
import grails.boot.config.GrailsAutoConfiguration
import org.springframework.boot.actuate.system.ApplicationPidFileWriter

class Application extends GrailsAutoConfiguration {

    static void main(String[] args) {
        final GrailsApp app = new GrailsApp(Application)

        // Register PID file writer.
        app.addListeners(new ApplicationPidFileWriter())

        app.run(args)
    }

}

So when we run our application a new file application.pid is created in the current directory and contains the PID:

$ grails run-app

From another console we read the contents of the file with the PID:

$ cat application.pid
20634
$

The default file name is application.pid, but we can use another name if we want to. We can use another constructor of ApplicationPidFileWriter where we specify the file name. Or we can use a system property or environment variable with the name PIDFILE. But we can also set it with the configuration property spring.pidfile; we use this last option in our Grails application. In the next example application.yml we set this property:

...
spring:
    pidfile: sample-app.pid
...

When we start our Grails application we get the file sample-app.pid with the application PID as contents.
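As an alternative to the configuration property, the file name could also be passed to the ApplicationPidFileWriter constructor; a sketch based on the Application class shown earlier:

```groovy
package mrhaki.grails.sample

import grails.boot.GrailsApp
import grails.boot.config.GrailsAutoConfiguration
import org.springframework.boot.actuate.system.ApplicationPidFileWriter

class Application extends GrailsAutoConfiguration {

    static void main(String[] args) {
        final GrailsApp app = new GrailsApp(Application)

        // Register PID file writer with a custom file name.
        app.addListeners(new ApplicationPidFileWriter('sample-app.pid'))

        app.run(args)
    }

}
```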

Written with Grails 3.0.1.

Awesome Asciidoctor: Display Keyboard Shortcuts

When we want to explain in our documentation which keys a user must press to invoke a function, we can use the keyboard macro in Asciidoctor. The macro outputs the key nicely formatted, like a real key on a keyboard. The syntax of the macro is kbd:[key]. To get the desired output we must set the document attribute experimental, otherwise the macro is not applied.

In the next Asciidoctor example file we use the keyboard macro:

= Keyboard macro

With the keyboard macro `kbd:[shortcut]`
we can include nicely formatted keyboard
shortcuts.

// We must enable experimental attribute.
:experimental:

// Define unicode for the Apple Command key.
:commandkey: &#8984;

Press kbd:[{commandkey} + 1] or kbd:[Ctrl + 1] 
to access the _Project_ view.

To zoom out press kbd:[Ctrl + -].

Find files with kbd:[Ctrl + Alt + N] or kbd:[{commandkey} + Shift + N].

When we transform this to HTML with the built-in HTML5 templates, the keys are rendered as nicely styled keyboard buttons.

Written with Asciidoctor 1.5.2.

Gradle Goodness: Handle Copying Duplicate Files

In Gradle we can configure how duplicate files should be handled by the Copy task. Actually, we can configure how duplicate files are handled by any task that implements the CopySpec interface; archive tasks, for example, also implement this interface. We use the duplicatesStrategy property (or the setDuplicatesStrategy method) to configure how Gradle behaves. The value is one of the enumeration DuplicatesStrategy. We can use the values from the enum class or use String values, which are automatically converted to DuplicatesStrategy values.

We can choose the following strategies:

  • include: default strategy, where the last duplicate file 'wins'.
  • exclude: only the first duplicate file found is copied and 'wins'.
  • warn: shows a warning on the console, but the last duplicate file 'wins', like with the include strategy.
  • fail: the build fails when duplicate files are found.

The following build file creates four tasks of type Copy, each with a different duplicates strategy. In the directories src/manual and src/webroot we have a file COPY.txt. The contents are simply the text lines COPY from src/manual and COPY from src/webroot respectively:

// For each duplicate strategy we create a copy task.
['warn', 'include', 'exclude', 'fail'].each { strategy ->
    task "copyDuplicatesStrategy${strategy.capitalize()}"(type: Copy) {
        from 'src/manual'
        from 'src/webroot'

        into "$buildDir/copy"

        // Only the value for this property differs for
        // each created task.
        duplicatesStrategy = strategy

        // Print the used duplicates strategy when 
        // the task starts.
        doFirst {
            println "Copying with duplicates strategy '${strategy}'."
        }

        // Print the contents of the copied file COPY.txt.
        doLast {
            println "Contents of COPY.txt:"
            println file("$buildDir/copy/COPY.txt").text
        }
    }
}

We can now invoke the four tasks and see how Gradle reacts:

$ gradle copyDuplicatesStrategyWarn
:copyDuplicatesStrategyWarn
Copying with duplicates strategy 'warn'.
Encountered duplicate path "COPY.txt" during copy operation configured with DuplicatesStrategy.WARN
Contents of COPY.txt:
COPY from src/webroot


BUILD SUCCESSFUL

Total time: 3.728 secs
$ gradle copyDuplicatesStrategyInclude
:copyDuplicatesStrategyInclude
Copying with duplicates strategy 'include'.
Contents of COPY.txt:
COPY from src/webroot


BUILD SUCCESSFUL

Total time: 2.744 secs
$ gradle copyDuplicatesStrategyExclude 
:copyDuplicatesStrategyExclude
Copying with duplicates strategy 'exclude'.
Contents of COPY.txt:
COPY from src/manual


BUILD SUCCESSFUL

Total time: 2.784 secs
$ gradle copyDuplicatesStrategyFail
:copyDuplicatesStrategyFail
Copying with duplicates strategy 'fail'.
:copyDuplicatesStrategyFail FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':copyDuplicatesStrategyFail'.
> Encountered duplicate path "COPY.txt" during copy operation configured with DuplicatesStrategy.FAIL

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: 2.786 secs

Written with Gradle 2.3.

Gradle Goodness: Use Git Commit Id in Build Script

The nice thing about Gradle is that we can use Java libraries in our build script. This way we can easily add extra functionality to our build script. We must use the classpath dependency configuration of our build script to include the library. For example, we can include the library Grgit, which provides an easy way to interact with Git from Java or Groovy code. This library is also the basis for the Gradle Git plugin.

In the next example build file we add the Grgit library to our build script classpath. Then we use the open method of the Grgit class. On the returned object we invoke the head method to get the commit object of HEAD; its id property contains the full commit id, and its abbreviatedId property contains the shorter version of the Git commit id. The build file also includes the application plugin. We customize the applicationDistribution CopySpec from the plugin and expand the properties in a VERSION file. This way our distribution always includes a plain text file VERSION with the Git commit id of the code.

buildscript {

    repositories {
        jcenter()
    }

    dependencies {
        // Add dependency for build script,
        // so we can access Git from our
        // build script.
        classpath 'org.ajoberstar:grgit:1.1.0'
    }

}

apply plugin: 'java'
apply plugin: 'application'

ext {
    // Open the Git repository in the current directory.
    git = org.ajoberstar.grgit.Grgit.open(file('.'))

    // Get commit id of HEAD.
    revision = git.head().id
    // Alternative is using abbreviatedId of head() method.
    // revision = git.head().abbreviatedId
}

// Use abbreviatedId commit id in the version.
version = "2.0.1.${git.head().abbreviatedId}"

// application plugin extension properties.
mainClassName = 'sample.Hello'
applicationName = 'sample'

// Customize applicationDistribution
// CopySpec from application plugin extension.
applicationDistribution.with {
    from('src/dist') {
        include 'VERSION'
        expand(
            buildDate: new Date(), 
            // Use revision with Git commit id:
            revision : revision,
            version  : project.version,
            appName  : applicationName)
    }
}

// Contents for src/dist/VERSION:
/*
Version: ${version}
Revision: ${revision}
Build-date: ${buildDate.format('dd-MM-yyyy HH:mm:ss')}
Application-name: ${appName}
*/

assemble.dependsOn installDist

When we run the build task for our project we get the following contents in our VERSION file:

Version: 2.0.1.e2ab261
Revision: e2ab2614011ff4be18c03e4dc1f86ab9ec565e6c
Build-date: 22-04-2015 13:53:31
Application-name: sample

Written with Gradle 2.3.