May 24, 2019

Awesome Asciidoctor: Include Asciidoc Markup With Listing or Literal Blocks Inside Listing or Literal Block

If we want to include Asciidoc markup as a source language and show the markup without transforming it, we can use a listing or literal block.
For example, we are writing a document about Asciidoctor in Asciidoc markup and want to include some Asciidoc markup examples.
If the included markup itself contains a listing or literal block and we enclose it in a listing or literal block, the transformation goes wrong, because the delimiter that starts the included block is seen as the delimiter that ends the enclosing block.
Let's see what goes wrong with an example where we have the following Asciidoc markup:

In the following example we see a listing block that can be defined in Asciidoc markup.

[source,asciidoc]
----
= Sample document

We can include listing blocks which are ideal for including source code.

.A listing block
----
public class Sample { }
----

.A literal block
....
public class Sample { }
....

Asciidoctor awesomeness!
----

When we transform this to HTML we get the following output:

We could nest listing or literal blocks by adding an extra hyphen or dot to the nested block, but then the Asciidoc markup we want to show as an example would no longer be correct.
It turns out we can also add an extra hyphen or dot to the enclosing listing or literal block to transform our markup correctly.
So in our example we add an extra hyphen to the outer listing block:

In the following example we see a listing block that can be defined in Asciidoc markup.

[source,asciidoc]
// This time we have 5 hyphens.
-----
= Sample document

We can include listing blocks which are ideal for including source code.

.A listing block
----
public class Sample { }
----

.A literal block
....
public class Sample { }
....

Asciidoctor awesomeness!
-----

The transformed HTML looks like this:

If the sample markup we want to show only contained a listing block, we could have enclosed it in a literal block to get the same result; likewise, sample markup with only a literal block could be enclosed in a listing block.
But in our example we have both a listing and a literal block, so we needed another solution to get the desired result.
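As a sketch of that alternative: a sample containing only a listing block can be enclosed in a literal block, so the delimiters don't clash:

```asciidoc
.Sample with only a listing block, enclosed in a literal block
....
.A listing block
----
public class Sample { }
----
....
```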

Written with Asciidoctor 2.0.9.

May 22, 2019

Awesome Asciidoctor: Don't Wrap Lines in All Listing or Literal Blocks of Document

In a previous post we learned about setting the value of the options attribute to nowrap on listing and literal blocks, so the lines in the block are not wrapped. In the comments a user suggested another option to disable line wrapping for all listing and literal blocks in the document: the document attribute prewrap. We must negate the document attribute, :prewrap!:, to disable all wrapping. If we place this document attribute at the top of our Asciidoctor document it is applied to the whole document. We can also place it at other places in the document to change the setting for all listing and literal blocks that follow the prewrap document attribute. To enable wrapping again we set :prewrap: (leaving out the exclamation mark).

In the following example we have markup with a listing, literal and example block and we use the document attribute :prewrap!: to disable the wrapping for the listing and literal blocks:

= Wrapping lines sample

// Disable wrapping in listing and literal blocks.
:prewrap!:

== Listing

[source,groovy]
----
class GroovyHelloWorld {
    String sayHello(final String message = 'world') { // Set default argument value for message argument
        "Hello $message !"
     }
}
----

== Literal

....
class GroovyHelloWorld {
    String sayHello(final String message = 'world') { // Set default argument value for message argument
        "Hello $message !"
     }
}
....

== Example

====
// We use spacing to mimic a code block.
class GroovyHelloWorld { +
{nbsp}{nbsp}{nbsp}{nbsp}String sayHello(final String message = 'world') { // Set default argument value for message argument +
{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}"Hello $message !" +
{nbsp}{nbsp}{nbsp}{nbsp}} +
}
====

When we create HTML from this markup we get the following output:

The code in the listing and literal blocks is now not wrapped, but continues on the same line.
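Note that the attribute can be toggled; for example, a sketch that disables wrapping for the blocks that follow and then re-enables it later in the document:

```asciidoc
:prewrap!:

// Listing and literal blocks after this attribute entry are not wrapped.

:prewrap:

// Wrapping is enabled again for the blocks that follow.
```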

Written with Asciidoctor 2.0.2.

April 19, 2019

Gradle Goodness: Use bill of materials (BOM) As Dependency Constraints

Since Gradle 5 we can easily use a bill of materials (BOM) in our build file to get recommended dependency versions. The dependency versions defined in the BOM become dependency constraints in Gradle. This means the dependencies we define in our build that are part of the BOM don't need a version, because the version is resolved via the dependency constraint defined in the BOM. Transitive dependency versions are also resolved using the BOM if applicable. We use the dependency handler method platform to define the BOM we want to import. The versions in the BOM are recommendations; we can override a recommendation by specifying an explicit version for a dependency found in the BOM.

In the following example build file we import the BOM for Spring Boot 2.1.4.RELEASE. We also add two dependencies that are defined in the BOM: commons-codec:commons-codec and org.yaml:snakeyaml. One without a version and one with an explicit version to override the version defined in the BOM:

// File: build.gradle.kts
plugins {
    java
}

repositories {
    jcenter()
}

dependencies {
    // Load bill of materials (BOM) for Spring Boot.
    // The dependencies in the BOM will be 
    // dependency constraints in our build.
    implementation(platform("org.springframework.boot:spring-boot-dependencies:2.1.4.RELEASE"))

    // Use dependency defined in BOM.
    // Version is not needed, because the version
    // defined in the BOM is a dependency constraint
    // that is used.
    implementation("commons-codec:commons-codec")

    // Override version for dependency in the BOM.
    // Version in BOM is 1.23.
    implementation(group = "org.yaml", 
                   name = "snakeyaml", 
                   version = "1.24")
}

When we run the dependencies task for the configuration compileClasspath we can see how the dependencies are resolved. Notice that version 1.24 for snakeyaml is used, while the version in the BOM is 1.23:

$ gradle -q dependencies --configuration compileClasspath
------------------------------------------------------------
Root project
------------------------------------------------------------

compileClasspath - Compile classpath for source set 'main'.
+--- org.springframework.boot:spring-boot-dependencies:2.1.4.RELEASE
|    +--- commons-codec:commons-codec:1.11 (c)
|    \--- org.yaml:snakeyaml:1.23 -> 1.24 (c)
+--- commons-codec:commons-codec -> 1.11
\--- org.yaml:snakeyaml:1.24

(c) - dependency constraint
A web-based, searchable dependency report is available by adding the --scan option.
$
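For comparison, in a Groovy DSL build file the same platform import could be sketched as follows (using the same coordinates as above):

```groovy
// File: build.gradle
plugins {
    id 'java'
}

repositories {
    jcenter()
}

dependencies {
    // Import the Spring Boot BOM; its versions become dependency constraints.
    implementation platform('org.springframework.boot:spring-boot-dependencies:2.1.4.RELEASE')

    // Version is resolved via the constraint from the BOM.
    implementation 'commons-codec:commons-codec'
}
```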

The dependency handler method enforcedPlatform is also available. When we use this method to import a BOM in our build, the versions described in the BOM are forced for the dependencies we use. Even if we define an explicit version on a dependency found in the BOM, the version described in the BOM is forced.

// File: build.gradle.kts
plugins {
    java
}

repositories {
    jcenter()
}

dependencies {
    // Load bill of materials (BOM) for Spring Boot.
    // The dependencies in the BOM will be 
    // dependency constraints in our build, but
    // the versions in the BOM are forced for
    // used dependencies.
    implementation(enforcedPlatform("org.springframework.boot:spring-boot-dependencies:2.1.4.RELEASE"))

    // Use dependency defined in BOM.
    // Version is not needed, because the version
    // defined in the BOM is a dependency constraint
    // that is used.
    implementation("commons-codec:commons-codec")

    // Version in BOM is 1.23 and because
    // we use enforcedPlatform the version
    // will be 1.23 once the dependency is resolved,
    // even though we define a newer version explicitly.
    implementation(group = "org.yaml", 
                   name = "snakeyaml", 
                   version = "1.24")
}

Now we look at the resolved dependencies and see how the snakeyaml dependency is forced to use version 1.23:

$ gradle -q dependencies --configuration compileClasspath
------------------------------------------------------------
Root project
------------------------------------------------------------

compileClasspath - Compile classpath for source set 'main'.
+--- org.springframework.boot:spring-boot-dependencies:2.1.4.RELEASE
|    +--- commons-codec:commons-codec:1.11 (c)
|    \--- org.yaml:snakeyaml:1.23 (c)
+--- commons-codec:commons-codec -> 1.11
\--- org.yaml:snakeyaml:1.24 -> 1.23

(c) - dependency constraint
A web-based, searchable dependency report is available by adding the --scan option.
$

Read more about the Gradle BOM support on the Gradle website.

Written with Gradle 5.4.

Gradle Goodness: Manage Dependency Versions With Dependency Constraints

From Maven builds we know the dependencyManagement section in our POM file. In that section we can describe dependencies with their version, and later in the dependencies section we can refer to a dependency without its version. We can use dependency constraints in Gradle to do the same thing. A dependency constraint can be used to define the version or version range for a dependency defined in our scripts or a transitive dependency. Just like a dependency, a dependency constraint is defined for a configuration, so we can fine-tune the constraints to the correct configuration.

Using dependency constraints in a multi-project build allows us to define the dependency versions in the root build file and define project dependencies per project without a version. The version will then be used from the dependency constraint we defined in the root build file.
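As a sketch of that multi-project setup (the subprojects block and its contents below are illustrative, not taken from a real build), the root build file could declare the constraints once for all subprojects:

```kotlin
// File: build.gradle.kts (root project) -- sketch
subprojects {
    apply(plugin = "java")

    repositories {
        jcenter()
    }

    dependencies {
        constraints {
            // One place to manage the Guava version for all subprojects.
            add("implementation", "com.google.guava:guava:27.1-jre")
        }
    }
}
```

Each subproject's build file can then declare implementation("com.google.guava:guava") without a version.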

In the following example build script we define two dependency constraints for different configurations:

// File: build.gradle.kts
plugins {
    groovy
}

repositories {
    jcenter()
}

// In a multi-project build, this dependencies
// block with constraints could be in the 
// root build file, so all versions are 
// defined in one place.
dependencies {
    constraints {
        // Define dependency with version to be used.
        // This version is used when we define a dependency
        // for guava without a version.
        implementation("com.google.guava:guava:27.1-jre")

        // Constraints are scoped to configurations,
        // so we can make specific constraints for a configuration.
        // In this case we want the dependency on Spock defined
        // in the testImplementation configuration to be a specific version.
        // Here we use named arguments to define the dependency constraint.
        testImplementation(group = "org.spockframework", 
                           name = "spock-core", 
                           version = "1.3-groovy-2.5")
    }
}

// In a multi-project build this dependencies block
// could be in subprojects, where the dependency
// declarations do not need a version, because the
// versions are defined in the root build file using 
// constraints.
dependencies {
    // Because of the dependency constraint, 
    // we don't have to specify the dependency version here.
    implementation("com.google.guava:guava")

    // Another dependency without version for the 
    // testImplementation configuration.
    testImplementation("org.spockframework:spock-core")
}

Let's run the dependencies task for the configuration compileClasspath to see how the dependencies are resolved:

$ gradle -q dependencies --configuration compileClasspath
------------------------------------------------------------
Root project
------------------------------------------------------------

compileClasspath - Compile classpath for source set 'main'.
+--- com.google.guava:guava -> 27.1-jre
|    +--- com.google.guava:failureaccess:1.0.1
|    +--- com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
|    +--- com.google.code.findbugs:jsr305:3.0.2
|    +--- org.checkerframework:checker-qual:2.5.2
|    +--- com.google.errorprone:error_prone_annotations:2.2.0
|    +--- com.google.j2objc:j2objc-annotations:1.1
|    \--- org.codehaus.mojo:animal-sniffer-annotations:1.17
\--- com.google.guava:guava:27.1-jre (c)

(c) - dependency constraint
A web-based, searchable dependency report is available by adding the --scan option.
$

We see that the Guava dependency is resolved to version 27.1-jre. The output also shows with (c) that a dependency constraint is used to resolve the dependency.

When we look at the testCompileClasspath configuration we see the following output:

$ gradle -q dependencies --configuration testCompileClasspath
> Task :dependencies

------------------------------------------------------------
Root project
------------------------------------------------------------

testCompileClasspath - Compile classpath for source set 'test'.
+--- com.google.guava:guava -> 27.1-jre
|    +--- com.google.guava:failureaccess:1.0.1
|    +--- com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
|    +--- com.google.code.findbugs:jsr305:3.0.2
|    +--- org.checkerframework:checker-qual:2.5.2
|    +--- com.google.errorprone:error_prone_annotations:2.2.0
|    +--- com.google.j2objc:j2objc-annotations:1.1
|    \--- org.codehaus.mojo:animal-sniffer-annotations:1.17
+--- com.google.guava:guava:27.1-jre (c)
+--- org.spockframework:spock-core:1.3-groovy-2.5
|    +--- org.codehaus.groovy:groovy:2.5.4
|    +--- org.codehaus.groovy:groovy-json:2.5.4
|    |    \--- org.codehaus.groovy:groovy:2.5.4
|    +--- org.codehaus.groovy:groovy-nio:2.5.4
|    |    \--- org.codehaus.groovy:groovy:2.5.4
|    +--- org.codehaus.groovy:groovy-macro:2.5.4
|    |    \--- org.codehaus.groovy:groovy:2.5.4
|    +--- org.codehaus.groovy:groovy-templates:2.5.4
|    |    +--- org.codehaus.groovy:groovy:2.5.4
|    |    \--- org.codehaus.groovy:groovy-xml:2.5.4
|    |         \--- org.codehaus.groovy:groovy:2.5.4
|    +--- org.codehaus.groovy:groovy-test:2.5.4
|    |    +--- org.codehaus.groovy:groovy:2.5.4
|    |    \--- junit:junit:4.12
|    |         \--- org.hamcrest:hamcrest-core:1.3
|    +--- org.codehaus.groovy:groovy-sql:2.5.4
|    |    \--- org.codehaus.groovy:groovy:2.5.4
|    +--- org.codehaus.groovy:groovy-xml:2.5.4 (*)
|    \--- junit:junit:4.12 (*)
\--- org.spockframework:spock-core -> 1.3-groovy-2.5 (*)

(c) - dependency constraint
(*) - dependencies omitted (listed previously)

A web-based, searchable dependency report is available by adding the --scan option.
$

Read more about Gradle dependency constraints on the Gradle website.

Written with Gradle 5.4.

March 28, 2019

Awesome Asciidoctor: Collapsible Content

Since Asciidoctor 2.0.0 we can add the collapsible option to an example block. When the markup is converted to HTML we get an HTML details and summary section. The content of the example block is collapsed (the default behaviour, because it is in a details section) and a clickable text is available to open the collapsed block (the summary section), so we can see the actual content. The text we can click on is Details by default, but we can change that by setting the title of the example block. Then the title is used as the text to click on to open the collapsed content.

The following example markup has two collapsible blocks with and without a title:

= Sample
:nofooter:
:source-highlighter: highlightjs

== Collapse

[%collapsible]
====
Example block turns into collapsible summary/details.
====

== Exercise

. Implement the `Application` class with `main(String[] args)` method.

=== Solution

// The title attribute is used as
// clickable text to open the example block.
.Click to see solution
[%collapsible]
====
[,java]
----
package mrhaki;

import io.micronaut.runtime.Micronaut;

public class Application {

    public static void main(String[] args) {
        Micronaut.run(Application.class);
    }
}
----
====

When we generate this markup to HTML we get the following result:

And when we expand the collapsible content we see:

Written with Asciidoctor 2.0.2.

March 27, 2019

Awesome Asciidoctor: Change Striping In Tables

Creating tables with Asciidoctor is easy. When we convert our Asciidoc markup to HTML the rows of the body of the table are striped. The even rows have a slightly darker background color and the odd rows don't have a background color. We can change how the body rows are striped using the table attribute stripes. We can use the values even, odd, all or none. Instead of using the table attribute stripes we can also set the role on the table. The prefix of the role name is stripes- followed by the same values as we can use with the attribute stripes.

In the following example markup we define four tables with different striping:

= Tables
:nofooter:

.No striping
// Alternative to stripes attributes is
// setting role "stripes-none" as [.stripes-none,cols="1,2"].
[stripes=none,cols="1,2"]
|===
| Name | Description

| Asciidoctor
| *Awesome* way to write documentation

| Micronaut
| Low resource usage and fast startup micro services
|===

.All rows
// Alternative to stripes attributes is
// setting role "stripes-all" as [.stripes-all,cols="1,2"].
[stripes=all,cols="1,2"]
|===
| Name | Description

| Asciidoctor
| *Awesome* way to write documentation

| Micronaut
| Low resource usage and fast startup micro services
|===

.Odd rows
// Alternative to stripes attributes is
// setting role "stripes-odd" as [.stripes-odd,cols="1,2"].
[stripes=odd,cols="1,2"]
|===
| Name | Description

| Asciidoctor
| *Awesome* way to write documentation

| Micronaut
| Low resource usage and fast startup micro services
|===

.Even rows (default)
// Alternative to stripes attributes is
// setting role "stripes-even" as [.stripes-even,cols="1,2"].
[stripes=even,cols="1,2"]
|===
| Name | Description

| Asciidoctor
| *Awesome* way to write documentation

| Micronaut
| Low resource usage and fast startup micro services
|===

When we convert this markup to HTML we get the following result:

Written with Asciidoctor 2.0.2.

Awesome Asciidoctor: Help On Syntax As HTML

With the release of Asciidoctor 2.0.0 we get nice help on the basic syntax of Asciidoc with the command-line option --help syntax. This gives us samples of the syntax in Asciidoc markup. As mentioned by Dan Allen on Twitter, we can pipe the syntax sample to Asciidoctor itself to get an HTML page:


This is very useful! We have the source and the output of Asciidoc markup with really useful samples.

Let's run Asciidoctor with --help syntax and convert it to HTML and open the HTML in our web browser: $ asciidoctor --help syntax | asciidoctor -o syntax.html - && open syntax.html.
We get the following result:

To get the syntax in an Asciidoc markup file we can run $ asciidoctor --help syntax > syntax.adoc.

Written with Asciidoctor 2.0.2.

Micronaut Mastery: Parse String Value With Kb/Mb/Gb To Number

Micronaut can convert String values defined as a number followed by (case-insensitive) KB/MB/GB to a number value in certain cases. The conversion service in Micronaut supports the @ReadableBytes annotation that we can apply to a method parameter. Micronaut will then parse the String value and convert it to a number. The value 1Kb is converted to 1024. We can use this for example in a configuration class or path variable in a controller.
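To make the conversion rule concrete, here is a small standalone sketch of the parsing logic (illustrative only; this is not Micronaut's actual implementation):

```java
public class ReadableBytesSketch {

    // Parse a value like "20KB", "1mb" or "3GB" (case-insensitive)
    // into a number of bytes. A plain number is returned as-is.
    public static long parse(final String value) {
        final String v = value.trim().toUpperCase();
        final long multiplier;
        if (v.endsWith("KB")) {
            multiplier = 1024L;
        } else if (v.endsWith("MB")) {
            multiplier = 1024L * 1024L;
        } else if (v.endsWith("GB")) {
            multiplier = 1024L * 1024L * 1024L;
        } else {
            return Long.parseLong(v);
        }
        return Long.parseLong(v.substring(0, v.length() - 2)) * multiplier;
    }

    public static void main(final String[] args) {
        System.out.println(parse("1Kb"));  // 1024
        System.out.println(parse("20KB")); // 20480
        System.out.println(parse("3GB"));  // 3221225472
    }
}
```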

In the following example we have a configuration class annotated with @ConfigurationProperties with the property maxFileSize. We use the @ReadableBytes annotation to support setting the value with a String value:

package mrhaki;

import io.micronaut.context.annotation.ConfigurationProperties;
import io.micronaut.core.convert.format.ReadableBytes;

@ConfigurationProperties(SampleConfig.SAMPLE_CONFIG_PREFIX)
public class SampleConfig {
    
    public static final String SAMPLE_CONFIG_PREFIX = "sample.config";
    
    private long maxFileSize;

    /**
     * Use @ReadableBytes for parameter {@code maxFileSize}
     * to convert a String value formatted as number followed
     * by "KB", "MB" or "GB" (case-insensitive).
     * 
     * @param maxFileSize Maximum file size
     */
    public void setMaxFileSize(@ReadableBytes long maxFileSize) {
        this.maxFileSize = maxFileSize;
    }

    public long getMaxFileSize() {
        return maxFileSize;
    }
    
}

Let's write a Spock specification to test the conversion of String values to numbers:

package mrhaki

import io.micronaut.context.ApplicationContext
import spock.lang.Specification;

class SampleConfigSpec extends Specification {

    void "set maxFileSize configuration property with KB/MB/GB format"() {
        given:
        final ApplicationContext context =
                ApplicationContext.run("sample.config.maxFileSize": maxFileSize)

        when:
        final SampleConfig sampleConfig = context.getBean(SampleConfig)

        then:
        sampleConfig.maxFileSize == result

        where:
        maxFileSize || result
        "20KB"      || 20_480
        "20kb"      || 20 * 1024
        "1MB"       || 1_048_576
        "1Mb"       || 1 * 1024 * 1024
        "3GB"       || 3L * 1024 * 1024 * 1024
        113         || 113
    }
}

In another example we use the conversion on a path variable parameter in a controller method:

package mrhaki;

import io.micronaut.core.convert.format.ReadableBytes;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;

@Controller
public class SampleController {
    
    @Get("/{size}")
    public long size(@ReadableBytes final long size) {
        return size;
    }
    
}

And with the following test we can see if the conversion is done correctly:

package mrhaki

import io.micronaut.context.ApplicationContext
import io.micronaut.http.client.HttpClient
import io.micronaut.runtime.server.EmbeddedServer
import spock.lang.AutoCleanup
import spock.lang.Shared
import spock.lang.Specification

class SampleControllerSpec extends Specification {

    @Shared
    @AutoCleanup
    private static EmbeddedServer server = ApplicationContext.run(EmbeddedServer)

    @Shared
    @AutoCleanup
    private static HttpClient httpClient = HttpClient.create(server.URL)

    void "return size converted from String value with unit KB/MB/GB"() {
        expect:
        httpClient.toBlocking().retrieve("/$size") == result

        where:
        size   || result
        "20KB" || "20480"
        "20kb" || (20 * 1024).toString()
        "1MB"  || 1_048_576.toString()
        "3GB"  || (3L * 1024 * 1024 * 1024).toString()
        113    || "113"

    }
}

Written with Micronaut 1.0.4.

March 22, 2019

Micronaut Mastery: Binding Request Parameters To POJO

Micronaut supports the RFC-6570 URI template specification to define URI variables in a path definition. The path definition can be the value of the @Controller annotation or of any of the routing annotations, for example @Get or @Post. We can define a path variable as {?binding*} to support binding of request parameters to all properties of an object type that is defined as a method argument with the name binding. We can even use the Bean Validation API (JSR 380) to validate the values of the request parameters if we add an implementation of this API to our class path.

In the following example controller we have the method items with method argument sorting of type Sorting. We want to map the request parameters ascending and field to the properties of the Sorting object. We only have to use the path variable {?sorting*} to make this happen. We also add the dependency io.micronaut.configuration:micronaut-hibernate-validator to our class path. If we use Gradle we can add compile("io.micronaut.configuration:micronaut-hibernate-validator") to our build file.

package mrhaki;

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.validation.Validated;

import javax.validation.Valid;
import javax.validation.constraints.Pattern;
import java.util.List;

@Controller("/sample")
@Validated // Enable validation of Sorting properties.
public class SampleController {
    
    private final SampleComponent sampleRepository;

    public SampleController(final SampleComponent sampleRepository) {
        this.sampleRepository = sampleRepository;
    }

    // Using the syntax {?sorting*} we can assign request parameters
    // to a POJO, where the request parameter name matches a property
    // name in the POJO. The path variable name must match the argument
    // name of our method, which is 'sorting' in our example.
    // The properties of the POJO can use the Validation API to 
    // define constraints and those will be validated if we use
    // @Valid for the method argument and @Validated at the class level.
    @Get("/{?sorting*}")
    public List<Item> items(@Valid final Sorting sorting) {
        return sampleRepository.allItems(sorting.getField(), sorting.getDirection());
    }
 
    private static class Sorting {
        
        private boolean ascending = true;
        
        @Pattern(regexp = "name|city", message = "Field must have value 'name' or 'city'.")
        private String field = "name";
        
        private String getDirection() {
            return ascending ? "ASC" : "DESC";
        }

        public boolean isAscending() {
            return ascending;
        }

        public void setAscending(final boolean ascending) {
            this.ascending = ascending;
        }

        public String getField() {
            return field;
        }

        public void setField(final String field) {
            this.field = field;
        }
    }
}

Let's write a test to check that the binding of the request parameters happens correctly. We use the Micronaut test support for Spock so we can use the @MicronautTest and @MockBean annotations. We add a dependency on io.micronaut.test:micronaut-test-spock to our build, which is testCompile("io.micronaut.test:micronaut-test-spock:1.0.2") if we use a Gradle build.

package mrhaki

import io.micronaut.http.HttpStatus
import io.micronaut.http.client.RxHttpClient
import io.micronaut.http.client.annotation.Client
import io.micronaut.http.client.exceptions.HttpClientResponseException
import io.micronaut.http.uri.UriTemplate
import io.micronaut.test.annotation.MicronautTest
import io.micronaut.test.annotation.MockBean
import spock.lang.Specification

import javax.inject.Inject

@MicronautTest
class SampleControllerSpec extends Specification {

    // Client to test the /sample endpoint.
    @Inject
    @Client("/sample")
    RxHttpClient httpClient

    // Will inject mock created by sampleRepository method.
    @Inject
    SampleComponent sampleRepository

    // Mock for SampleRepository to check method is
    // invoked with correct arguments.
    @MockBean(SampleRepository)
    SampleComponent sampleRepository() {
        return Mock(SampleComponent)
    }

    void "sorting request parameters are bound to Sorting object"() {
        given:
        // UriTemplate to expand field and ascending request parameters with values.
        // E.g. ?field=name&ascending=false.
        final requestURI = new UriTemplate("/{?field,ascending}").expand(field: paramField, ascending: paramAscending)

        when:
        httpClient.toBlocking().exchange(requestURI)

        then:
        1 * sampleRepository.allItems(sortField, sortDirection) >> []

        where:
        paramField | paramAscending | sortField | sortDirection
        null       | null           | "name"    | "ASC"
        null       | false          | "name"    | "DESC"
        null       | true           | "name"    | "ASC"
        "city"     | false          | "city"    | "DESC"
        "city"     | true           | "city"    | "ASC"
        "name"     | false          | "name"    | "DESC"
        "name"     | true           | "name"    | "ASC"
    }

    void "invalid sorting field should give error response"() {
        given:
        final requestURI = new UriTemplate("/{?field,ascending}").expand(field: "invalid")

        when:
        httpClient.toBlocking().exchange(requestURI)

        then:
        final HttpClientResponseException clientResponseException = thrown()
        clientResponseException.response.status == HttpStatus.BAD_REQUEST
        clientResponseException.message == "sorting.field: Field must have value 'name' or 'city'."
    }
}

Written with Micronaut 1.0.4.

March 4, 2019

Groovy Goodness: Use Expanded Variables in SQL GString Query

Working with a SQL database from Groovy code is very easy using the groovy.sql.Sql class. The class has several methods to execute a SQL query, but we have to take special care if we use methods from Sql that take a GString argument. Groovy will extract all variable expressions and use them as values for placeholders in a PreparedStatement constructed from the SQL query. If we have variable expressions that should not be extracted as parameters for a PreparedStatement, we must use the Sql.expand method. This method turns the variable expression into a groovy.sql.ExpandedVariable object. This object is not used as a parameter for a PreparedStatement query; its value is evaluated as a GString variable expression.

In the following sample we have a class that invokes several methods of an Sql object with a GString query value. We can see when to use Sql.expand and when it is not needed:

package mrhaki

import groovy.sql.*

class SampleDAO {
    private static final String TABLE_NAME = 'sample'
    private static final String COLUMN_ID = 'id'
    private static final String COLUMN_NAME = 'name'
    private static final String COLUMN_DESCRIPTION = 'description'

    private final Sql sql = 
        Sql.newInstance(
            'jdbc:h2:test', 'sa', 'sa', 'org.h2.Driver')

    Long create() {
        // We need to use Sql.expand() in our GString query.
        // If we don't use it, the GString variable expressions are interpreted
        // as placeholders in a SQL prepared statement, but we don't
        // want that here.
        final query = 
            """
            INSERT INTO ${Sql.expand(TABLE_NAME)} DEFAULT VALUES
            """

        final insertedKeys = sql.executeInsert(query)
        return insertedKeys[0][0]
    }

    void updateDescription(final Long id, final String description) {
        // In the following GString SQL we need
        // Sql.expand(), because we use executeUpdate
        // with only the GString argument.
        // Groovy will extract all variable expressions and
        // use them as the placeholders
        // for the SQL prepared statement.
        // So to make sure only description and id are 
        // placeholders for the prepared statement we use
        // Sql.expand() for the other variables.
        final query = 
            """
            UPDATE ${Sql.expand(TABLE_NAME)} 
            SET ${Sql.expand(COLUMN_DESCRIPTION)} = ${description}
            WHERE ${Sql.expand(COLUMN_ID)} = ${id}
            """
        sql.executeUpdate(query)
    }

    void updateName(final Long id, final String name) {
        // In the following GString SQL we don't need
        // Sql.expand(), because we use the executeUpdate
        // method with GString argument AND argument
        // with values for the placeholders.
        final query = 
            """
            UPDATE ${TABLE_NAME} 
            SET ${COLUMN_NAME} = :nameValue
            WHERE ${COLUMN_ID} = :idValue
            """
        sql.executeUpdate(query, nameValue: name, idValue: id)
    }
}

Written with Groovy 2.5.4.