March 22, 2019

Micronaut Mastery: Binding Request Parameters To POJO

Micronaut supports the RFC 6570 URI template specification to define URI variables in a path definition. The path definition can be a value of the @Controller annotation or any of the routing annotations, for example @Get or @Post. We can define a path variable as {?binding*} to support binding of request parameters to all properties of an object type that is defined as a method argument with the name binding. We can even use the Bean Validation API (JSR 380) to validate the values of the request parameters if we add an implementation of this API to our classpath.

In the following example controller we have the method items with the method argument sorting of type Sorting. We want to map the request parameters ascending and field to the properties of the Sorting object. We only have to use the path variable {?sorting*} to make this happen. For example a request to /sample?field=city&ascending=false populates sorting with field "city" and ascending false. We also add the dependency io.micronaut.configuration:micronaut-hibernate-validator to our classpath. If we use Gradle we can add compile("io.micronaut.configuration:micronaut-hibernate-validator") to our build file.
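
For reference, a minimal sketch of the relevant dependency declaration in the Gradle build file:

// File: build.gradle
dependencies {
    // JSR 380 implementation so Micronaut can validate
    // the bound request parameters.
    compile("io.micronaut.configuration:micronaut-hibernate-validator")
}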

package mrhaki;

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.validation.Validated;

import javax.validation.Valid;
import javax.validation.constraints.Pattern;
import java.util.List;

@Controller("/sample")
@Validated // Enable validation of Sorting properties.
public class SampleController {
    
    private final SampleComponent sampleRepository;

    public SampleController(final SampleComponent sampleRepository) {
        this.sampleRepository = sampleRepository;
    }

    // Using the syntax {?sorting*} we can assign request parameters
    // to a POJO, where the request parameter name matches a property
    // name in the POJO. The name must match the argument
    // name of our method, which is 'sorting' in our example.
    // The properties of the POJO can use the Validation API to 
    // define constraints and those will be validated if we use
    // @Valid for the method argument and @Validated at the class level.
    @Get("/{?sorting*}")
    public List<Item> items(@Valid final Sorting sorting) {
        return sampleRepository.allItems(sorting.getField(), sorting.getDirection());
    }
 
    private static class Sorting {
        
        private boolean ascending = true;
        
        @Pattern(regexp = "name|city", message = "Field must have value 'name' or 'city'.")
        private String field = "name";
        
        private String getDirection() {
            return ascending ? "ASC" : "DESC";
        }

        public boolean isAscending() {
            return ascending;
        }

        public void setAscending(final boolean ascending) {
            this.ascending = ascending;
        }

        public String getField() {
            return field;
        }

        public void setField(final String field) {
            this.field = field;
        }
    }
}

Let's write a test to check that the binding of the request parameters happens correctly. We use the Micronaut test support for Spock so we can use the @MicronautTest and @MockBean annotations. We add a dependency on io.micronaut.test:micronaut-test-spock to our build, which is testCompile("io.micronaut.test:micronaut-test-spock:1.0.2") if we use a Gradle build.
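
A minimal sketch of the test dependency declaration in the Gradle build file:

// File: build.gradle
dependencies {
    // Micronaut test support for Spock specifications.
    testCompile("io.micronaut.test:micronaut-test-spock:1.0.2")
}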

package mrhaki

import io.micronaut.http.HttpStatus
import io.micronaut.http.client.RxHttpClient
import io.micronaut.http.client.annotation.Client
import io.micronaut.http.client.exceptions.HttpClientResponseException
import io.micronaut.http.uri.UriTemplate
import io.micronaut.test.annotation.MicronautTest
import io.micronaut.test.annotation.MockBean
import spock.lang.Specification

import javax.inject.Inject

@MicronautTest
class SampleControllerSpec extends Specification {

    // Client to test the /sample endpoint.
    @Inject
    @Client("/sample")
    RxHttpClient httpClient

    // Will inject mock created by sampleRepository method.
    @Inject
    SampleComponent sampleRepository

    // Mock that replaces the SampleRepository bean (the
    // implementation of SampleComponent), so we can check the
    // method is invoked with the correct arguments.
    @MockBean(SampleRepository)
    SampleComponent sampleRepository() {
        return Mock(SampleComponent)
    }

    void "sorting request parameters are bound to Sorting object"() {
        given:
        // UriTemplate to expand field and ascending request parameters with values.
        // E.g. ?field=name&ascending=false.
        final requestURI = new UriTemplate("/{?field,ascending}").expand(field: paramField, ascending: paramAscending)

        when:
        httpClient.toBlocking().exchange(requestURI)

        then:
        1 * sampleRepository.allItems(sortField, sortDirection) >> []

        where:
        paramField | paramAscending | sortField | sortDirection
        null       | null           | "name"    | "ASC"
        null       | false          | "name"    | "DESC"
        null       | true           | "name"    | "ASC"
        "city"     | false          | "city"    | "DESC"
        "city"     | true           | "city"    | "ASC"
        "name"     | false          | "name"    | "DESC"
        "name"     | true           | "name"    | "ASC"
    }

    void "invalid sorting field should give error response"() {
        given:
        final requestURI = new UriTemplate("/{?field,ascending}").expand(field: "invalid")

        when:
        httpClient.toBlocking().exchange(requestURI)

        then:
        final HttpClientResponseException clientResponseException = thrown()
        clientResponseException.response.status == HttpStatus.BAD_REQUEST
        clientResponseException.message == "sorting.field: Field must have value 'name' or 'city'."
    }
}

Written with Micronaut 1.0.4.

March 4, 2019

Groovy Goodness: Use Expanded Variables in SQL GString Query

Working with a SQL database from Groovy code is very easy using the groovy.sql.Sql class. The class has several methods to execute a SQL query, but we have to take special care if we use methods from Sql that take a GString argument. Groovy will extract all variable expressions and use them as values for placeholders in a PreparedStatement constructed from the SQL query. If we have variable expressions that should not be extracted as parameters for a PreparedStatement we must use the Sql.expand method. This method wraps the variable expression in a groovy.sql.ExpandedVariable object. Such an object is not used as a parameter for the PreparedStatement query; its value is evaluated as an ordinary GString variable expression.
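
To see the difference, here is a minimal sketch, assuming an in-memory H2 database with a table sample:

import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:test', 'sa', 'sa', 'org.h2.Driver')
def table = 'sample'

// Without Sql.expand the variable expression is extracted as a
// parameter: the query becomes "SELECT * FROM ?", which fails
// because a table name cannot be a placeholder.
// sql.rows("SELECT * FROM ${table}")

// With Sql.expand the value is evaluated as a regular GString
// expression and the query becomes "SELECT * FROM sample".
sql.rows("SELECT * FROM ${Sql.expand(table)}")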

In the following sample we have a class that invokes several methods of an Sql object with a GString query value. We can see when to use Sql.expand and when it is not needed:

package mrhaki

import groovy.sql.*

class SampleDAO {
    private static final String TABLE_NAME = 'sample'
    private static final String COLUMN_ID = 'id'
    private static final String COLUMN_NAME = 'name'
    private static final String COLUMN_DESCRIPTION = 'description'

    private final Sql sql = 
        Sql.newInstance(
            'jdbc:h2:test', 'sa', 'sa', 'org.h2.Driver')

    Long create() {
        // We need to use Sql.expand() in our GString query.
        // If we don't use it the GString variable expressions are interpreted
        // as placeholders in a SQL prepared statement, but we don't
        // want that here.
        final query = 
            """
            INSERT INTO ${Sql.expand(TABLE_NAME)} DEFAULT VALUES
            """

        final keys = sql.executeInsert(query)
        return keys[0][0]
    }

    void updateDescription(final Long id, final String description) {
        // In the following GString SQL we need
        // Sql.expand(), because we use executeUpdate
        // with only the GString argument.
        // Groovy will extract all variable expressions and
        // use them as the placeholders
        // for the SQL prepared statement.
        // So to make sure only description and id are 
        // placeholders for the prepared statement we use
        // Sql.expand() for the other variables.
        final query = 
            """
            UPDATE ${Sql.expand(TABLE_NAME)} 
            SET ${Sql.expand(COLUMN_DESCRIPTION)} = ${description}
            WHERE ${Sql.expand(COLUMN_ID)} = ${id}
            """
        sql.executeUpdate(query)
    }

    void updateName(final Long id, final String name) {
        // In the following GString SQL we don't need
        // Sql.expand(), because we use the executeUpdate
        // method with GString argument AND argument
        // with values for the placeholders.
        final query = 
            """
            UPDATE ${TABLE_NAME} 
            SET ${COLUMN_NAME} = :nameValue
            WHERE ${COLUMN_ID} = :idValue
            """
        sql.executeUpdate(query, nameValue: name, idValue: id)
    }
}

Written with Groovy 2.5.4.

February 15, 2019

Spring Sweets: Group Loggers With Logical Name

Spring Boot 2.1 introduced log groups. A log group is a logical name for one or more loggers. We can define log groups in our application configuration. Then we can set the log level for the group, so all loggers in the group get the same log level. This makes it easy to change the log level for multiple loggers that belong together with a single setting. Spring Boot already provides two log groups by default: web and sql. In the following list we see which loggers are part of the default log groups:

  • web: org.springframework.core.codec, org.springframework.http, org.springframework.web, org.springframework.boot.actuate.endpoint.web, org.springframework.boot.web.servlet.ServletContextInitializerBeans
  • sql: org.springframework.jdbc.core, org.hibernate.SQL

To define our own log group we must add the key logging.group. followed by our log group name to our application configuration. Next we assign all loggers we want to be part of the group. Once we have defined our group we can set the log level using the key logging.level. followed by the group name.

In the following example configuration we define a new group controllers that consists of two loggers from different packages. We set the log level for this group to DEBUG. We also set the log level of the default group web to DEBUG:

# src/main/resources/application.properties

# Define a new log group controllers.
logging.group.controllers=mrhaki.hello.HelloController, mrhaki.sample.SampleController

# Set log level to DEBUG for group controllers.
# This means the log level for the loggers
# mrhaki.hello.HelloController and mrhaki.sample.SampleController
# are set to DEBUG.
logging.level.controllers=DEBUG

# Set log level for default group web to DEBUG.
logging.level.web=DEBUG

Written with Spring Boot 2.1.3.RELEASE.

February 4, 2019

Gradle Goodness: Only Show All Tasks In A Group

To get an overview of all Gradle tasks in our project we need to run the tasks task. Since Gradle 5.1 we can use the --group option followed by a group name. Gradle will then show all tasks belonging to the group and not the other tasks in the project.

Suppose we have a Gradle Java project and want to show the tasks that belong to the build group:

$ gradle tasks --group build
> Task :tasks

------------------------------------------------------------
Tasks runnable from root project - Sample
------------------------------------------------------------

Build tasks
-----------
assemble - Assembles the outputs of this project.
bootBuildInfo - Generates a META-INF/build-info.properties file.
bootJar - Assembles an executable jar archive containing the main classes and their dependencies.
build - Assembles and tests this project.
buildDependents - Assembles and tests this project and all projects that depend on it.
buildNeeded - Assembles and tests this project and all projects it depends on.
classes - Assembles main classes.
clean - Deletes the build directory.
generateGitProperties - Generate a git.properties file.
jar - Assembles a jar archive containing the main classes.
testClasses - Assembles test classes.

To see all tasks and more detail, run gradle tasks --all

To see more detail about a task, run gradle help --task <task>

Deprecated Gradle features were used in this build, making it incompatible with Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/5.1.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD SUCCESSFUL in 2s
1 actionable task: 1 executed

Written with Gradle 5.1.1.

January 29, 2019

Awesome Asciidoctor: Exclude Parts From Included Files

In a previous post we learned how to include parts of a document in the generated output. The included parts are defined using tags. The start of a tag is defined in a comment with the format tag::tagName[] and the end has the format end::tagName[]. Next we use the tags attribute of the include macro with the tag name as value. If we don't want to include a tag we must prefix it with an exclamation mark (!).

Suppose we have an external Java source file we want to include in our Asciidoctor document.

package mrhaki;

// tag::singletonAnnotation[]
@Singleton
// end::singletonAnnotation[]
public class Sample {
    public String greeting() {
        return "Hello Asciidoctor";
    }
}

In the following sample Asciidoctor document we include Sample.java, but we don't want to include the text enclosed with the singletonAnnotation tag. So we use tags=!singletonAnnotation with the include macro:

= Sample

To NOT include sections enclosed with tags we must use `tags=!<tagName>` in the `include` directive.

[source,java]
----
include::Sample.java[tags=!singletonAnnotation]
----

When we transform our Asciidoctor markup to HTML, the source listing in the result shows the Sample class without the @Singleton annotation.
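
For comparison, if we want to include only the part enclosed by the tag, we use the tag name without the exclamation mark:

[source,java]
----
include::Sample.java[tags=singletonAnnotation]
----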

Written with Asciidoctor 1.5.6.1.

November 14, 2018

Gradle Goodness: Generate Javadoc In HTML5

Since Java 9 we can specify that the Javadoc output must be generated in HTML 5 instead of the default HTML 4. We need to pass the option -html5 to the javadoc tool. To do this in Gradle we must add the option to the javadoc task configuration. We use the addBooleanOption method of the options property that is part of the javadoc task. We set the argument to html5 and the value to true.

In the following example we reconfigure the javadoc task to make sure the generated Javadoc output is in HTML 5:

// File: build.gradle
apply plugin: 'java'

javadoc {
    options.addBooleanOption('html5', true)
}

The boolean option we added to the options property is not part of the Gradle up-to-date check for the task. So if we change the key html5 to html4, because we want documentation in HTML 4, the task would still be seen as up to date, because Gradle doesn't keep track of the change. We can change this by adding a property to the task inputs that contains the output format; a minimal sketch of this direct fix is shown below. Let's also add a new extension to Javadoc tasks to define our own DSL to set the output format.
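
The sketch of the direct fix, without a plugin:

// File: build.gradle
apply plugin: 'java'

javadoc {
    // Register the output format as a task input so that
    // changing it invalidates the up-to-date status of the task.
    inputs.property('html5', true)
    options.addBooleanOption('html5', true)
}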

We need to create an extension class and a plugin to apply the extension to the Javadoc tasks. In the plugin we also add support for Gradle's up-to-date check based on the output format. In the following example we define the extension and plugin in our build file, but we could also place the classes in the buildSrc directory of our project.

// File: build.gradle
apply plugin: 'java'
apply plugin: JavadocPlugin

javadoc {
    // New DSL to configure the task
    // added by the JavadocPlugin.
    output {
        html5 = true
    }
}

/**
 * Plugin to add the {@link JavadocOutputOptions} extension
 * to the Javadoc tasks. 
 * <p>
 * Also make sure Gradle can check if the task needs
 * to rerun when the output format changes.
 */
class JavadocPlugin implements Plugin<Project> {

    void apply(Project project) {
        project.tasks.withType(Javadoc) { Javadoc task ->
            // Create new extension for Javadoc task with the name "output".
            // Users can set output format to HTML 5 as:
            // javadoc {
            //     output {
            //         html5 = true 
            //     }
            // }
            // or as HTML4:
            // javadoc {
            //     output {
            //         html4 = true 
            //     }
            // }
            JavadocOutputOptions outputOptions = 
                    task.extensions.create("output", JavadocOutputOptions)

            // After project evaluation we know what the
            // user has defined as output format using the 
            // "output" configuration block.
            project.afterEvaluate {
                // We need to make sure the up-to-date check
                // is triggered when the output option changes.
                // If the value is not changed the task is up-to-date.
                task.inputs.property("output.html5", outputOptions.html5)

                // We add the boolean option html4 and html5 
                // based on the user's value set via the
                // JavadocOutputOptions.
                task.options.addBooleanOption("html4", outputOptions.html4)
                task.options.addBooleanOption("html5", outputOptions.html5)
            }

        }
    }

}

/**
 * Extension for Javadoc tasks to define
 * if the output format must be HTML 4 or HTML 5.
 */
class JavadocOutputOptions {
    Boolean html4 = true
    Boolean html5 = !html4

    void setHtml4(boolean useHtml4) {
        html4 = useHtml4
        html5 = !html4
    }

    void setHtml5(boolean useHtml5) {
        html5 = useHtml5
        html4 = !html5
    }
}

Written with Gradle 4.10.2.

November 7, 2018

Gradle Goodness: Rerun Incremental Tasks At Specific Intervals

One of the most important features in Gradle is the support for incremental tasks. Incremental tasks have input and output properties that can be checked by Gradle. When the values of the properties haven't changed the task is marked as up to date by Gradle and it is not executed. This makes a build much faster. Input and output properties can be files, directories or plain object values. We can set a task input property with a date or date/time value to define when a task is up to date for a specific period. As long as the value of the input property hasn't changed (and of course also the other input and output property values) Gradle will not rerun the task and marks it as up to date. This is useful for example if a long running task (e.g. a large integration test suite) only needs to run once a day or at some other interval.

In the following example Gradle build file we define a new task Broadcast that will get content from a remote URL and save it in a file. In our case we want to save the latest messages from SDKMAN!. If you don't know SDKMAN! you should check it out! The Broadcast task has an incremental task output property, which is the output file of the task:

// File: build.gradle

task downloadBroadcastLatest(type: Broadcast) {
    outputFile = file("${buildDir}/broadcast.latest.txt")
}

class Broadcast extends DefaultTask {

    // URL with latest announcements of SDKMAN!
    private static final String API = "https://api.sdkman.io/2/broadcast/latest"

    @OutputFile
    File outputFile

    @TaskAction
    void downloadLatest() {
        // Download text from URL and save in File.
        logger.lifecycle("Downloading latest broadcast message from SDKMAN!.")
        outputFile.text = API.toURL().text
    }

}

We can run the task downloadBroadcastLatest and the contents of the URL are saved in the output file. When we run the task a second time the task action is executed again and the contents of the URL are fetched again and saved in the output file.

$ gradle downloadBroadcastLatest --console plain
> Task :downloadBroadcastLatest
Downloading latest broadcast message from SDKMAN!.

BUILD SUCCESSFUL in 1s
1 actionable task: 1 executed
$ gradle downloadBroadcastLatest --console plain
> Task :downloadBroadcastLatest
Downloading latest broadcast message from SDKMAN!.

BUILD SUCCESSFUL in 1s
1 actionable task: 1 executed
$

Suppose we don't want to access the remote URL for each task invocation. One time every hour is enough to get the latest messages from SDKMAN!. Let's add a new incremental task input property with the value of the current hour to the task downloadBroadcastLatest.

// File: build.gradle
...
task downloadBroadcastLatest(type: Broadcast) {
    // Add incremental input property, with value that changes only
    // every hour. Gradle will mark the task as up-to-date for
    // every invocation as long as the hour value hasn't changed.
    // For example to set the value so that the task is only
    // executed once a day we could use java.time.LocalDate.now().
    inputs.property 'check_once_per_hour', java.time.LocalDateTime.now().hour

    outputFile = file("${buildDir}/broadcast.latest.txt")
}
...
$ gradle downloadBroadcastLatest --console plain
> Task :downloadBroadcastLatest
Downloading latest broadcast message from SDKMAN!.

BUILD SUCCESSFUL in 1s
1 actionable task: 1 executed
$ gradle downloadBroadcastLatest --console plain
> Task :downloadBroadcastLatest UP-TO-DATE

BUILD SUCCESSFUL in 0s
1 actionable task: 1 up-to-date
$

Another option in our example is to add the incremental task input property to the source of our Broadcast task. We can do that, because we have written the task class ourselves. If we cannot change the source of a task the previous example is the way to add an incremental task input property to an existing task. The following code sample adds an input property to our task definition:

// File: build.gradle
...
class Broadcast extends DefaultTask {

    // URL with latest announcements of SDKMAN!
    private static final String API = "https://api.sdkman.io/2/broadcast/latest"

    @Input
    int checkOncePerHour = java.time.LocalDateTime.now().hour

    @OutputFile
    File outputFile

    @TaskAction
    void downloadLatest() {
        // Download text from URL and save in File.
        logger.lifecycle("Downloading latest broadcast message from SDKMAN!.")
        outputFile.text = API.toURL().text
    }

}
...

Finally, it is important to note that for this to work a task must have at least one incremental task output property. If an existing task doesn't have one, we can add outputs.upToDateWhen { true } to the task configuration so Gradle recognises the task as being incremental with output that is always up to date. In the following example we create a new task Show without an incremental task output property. In the task showBroadcastLatest we define that the task has an always up-to-date output:

// File: build.gradle
...
task showBroadcastLatest(type: Show) {
    inputs.property 'check_once_a_day', java.time.LocalDate.now()

    // The original task definition has no incremental task
    // output property, so we add one ourselves.
    outputs.upToDateWhen { true }

    inputFile = downloadBroadcastLatest.outputFile
}

class Show extends DefaultTask {

    @InputFile
    File inputFile

    @TaskAction
    void showContents() {
        println inputFile.text
    }

}
...

Written with Gradle 4.10.2.

October 3, 2018

Micronaut Mastery: Configuration Property Name Is Lowercased And Hyphen Separated

In Micronaut we can inject configuration properties in different ways into our beans. We can use for example the @Value annotation with a string value containing a placeholder for the configuration property name. If we don't want to use a placeholder we can also use the @Property annotation and set the name attribute to the configuration property name. We have to pay attention to the format of the configuration property name we use. If we refer to a configuration property name using @Value or @Property we must use lowercased and hyphen separated names (also known as kebab casing), even if the name of the configuration property is camel cased in the configuration file. For example if we have a configuration property sample.theAnswer in our application.properties file, we must use the name sample.the-answer to get the value.
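
A minimal sketch of this example in code (the bean AnswerBean and the value 42 are hypothetical):

// Assuming in src/main/resources/application.properties:
// sample.theAnswer=42
import io.micronaut.context.annotation.Property

import javax.inject.Singleton

@Singleton
class AnswerBean {
    // The camel cased key sample.theAnswer must be referenced
    // in kebab case, otherwise the value cannot be resolved.
    @Property(name = 'sample.the-answer')
    Integer theAnswer
}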

In the following Spock specification we see how to use this in code. The specification defines two beans that use the @Value and @Property annotations and we see that we need to use kebab casing for the configuration property names, even though the names are camel cased where we set the configuration property values:

package mrhaki

import groovy.transform.CompileStatic
import io.micronaut.context.ApplicationContext
import io.micronaut.context.annotation.Property
import io.micronaut.context.annotation.Value
import io.micronaut.context.exceptions.DependencyInjectionException
import spock.lang.Shared
import spock.lang.Specification

import javax.inject.Singleton

class ConfigurationPropertyNameSpec extends Specification {

    // Create application context with two configuration 
    // properties: reader.maxFileSize and reader.showProgress.
    @Shared
    private ApplicationContext context = 
            ApplicationContext.run('reader.maxFileSize': 1024, 
                                   'reader.showProgress': true)

    void "use kebab casing (hyphen-based) to get configuration property value"() {
        expect:
        with(context.getBean(FileReader)) {
            maxFileSize == 1024   
            showProgress == Boolean.TRUE
        }
    }

    void "using camel case to get configuration property should throw exception"() {
        when:
        context.getBean(InvalidFileReader).maxFileSize

        then:
        final dependencyException = thrown(DependencyInjectionException)
        dependencyException.message == """\
            |Failed to inject value for parameter [maxFileSize] of method [setMaxFileSize] of class: mrhaki.InvalidFileReader
            |
            |Message: Error resolving property value [\${reader.maxFileSize}]. Property doesn't exist
            |Path Taken: InvalidFileReader.setMaxFileSize([Integer maxFileSize])""".stripMargin()
    }
}

@CompileStatic
@Singleton
class FileReader {
    
    private Integer maxFileSize
    private Boolean showProgress
    
    // Configuration property names 
    // are normalized and 
    // stored lowercase hyphen separated (= kebab case).
    FileReader(
            @Property(name = 'reader.max-file-size') Integer maxFileSize,
            @Value('${reader.show-progress:false}') Boolean showProgress) {
        
        this.maxFileSize = maxFileSize
        this.showProgress = showProgress
    }
    
    Integer getMaxFileSize() {
        return maxFileSize
    }
    
    Boolean showProgress() {
        return showProgress
    }
}

@CompileStatic
@Singleton
class InvalidFileReader {
    // Invalid reference to property name,
    // because the names are normalized and
    // stored lowercase hyphen separated.
    @Value('${reader.maxFileSize}')
    Integer maxFileSize
}

Written with Micronaut 1.0.0.RC1.

October 2, 2018

Micronaut Mastery: Consuming Server-Sent Events (SSE)

Normally we would consume server-sent events (SSE) in a web browser, but we can also consume them in our code on the server. Micronaut has a low-level HTTP client with an SseClient interface that we can use to get server-sent events. The interface has an eventStream method with different arguments that returns a Publisher type of the Reactive Streams API. We can use the RxSseClient interface to get back the RxJava2 Flowable type instead of Publisher. We can also use Micronaut's declarative HTTP client, which we define using the @Client annotation and which supports server-sent events with the correct annotation attributes.

In our example we first create a controller in Micronaut to send out server-sent events. We must create a method that returns a Publisher type with Event objects. These Event objects can contain attributes like id and name, but also the actual object we want to send:

// File: src/main/java/mrhaki/ConferencesController.java
package mrhaki;

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.sse.Event;
import io.reactivex.Flowable;

import java.util.concurrent.TimeUnit;

@Controller("/conferences")
public class ConferencesController {

    private final ConferenceRepository repository;

    public ConferencesController(final ConferenceRepository repository) {
        this.repository = repository;
    }

    /**
     * Send each second a random Conference.
     * 
     * @return Server-sent events each second where the event is a randomly
     * selected Conference object from the repository.
     */
    @Get("/random")
    Flowable<Event<Conference>> events() {
        final Flowable<Long> tick = Flowable.interval(1, TimeUnit.SECONDS);
        final Flowable<Conference> randomConferences = repository.random().repeat();

        return tick.zipWith(randomConferences, this::createEvent);
    }

    /**
     * Create a server-sent event with id, name and the Conference data.
     * 
     * @param counter Counter used as id for event.
     * @param conference Conference data as payload for the event.
     * @return Event with id, name and Conference object.
     */
    private Event<Conference> createEvent(Long counter, final Conference conference) {
        return Event.of(conference)
                    .id(String.valueOf(counter))
                    .name("randomEvent");
    }

}

Notice how easy it is in Micronaut to use server-sent events. Let's add a declarative HTTP client that can consume the server-sent events. We must set the processes attribute of the @Get annotation to the value text/event-stream. This way Micronaut can create an implementation of this interface with the correct code to consume server-sent events:

// File: src/main/java/mrhaki/ConferencesSseClient.java
package mrhaki;

import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.client.annotation.Client;
import io.micronaut.http.sse.Event;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Flux;

@Client("/conferences")
interface ConferencesSseClient {

    /**
     * Return Publisher with SSE containing Conference data.
     * We must set the processes attribute with the value
     * text/event-stream so Micronaut can generate an implementation
     * to support server-sent events.
     * We could also return a Publisher implementation class
     * like Flowable or Flux; Micronaut will do the conversion.
     * 
     * @return Publisher with Event objects with Conference data.
     */
    @Get(value = "/random", processes = MediaType.TEXT_EVENT_STREAM)
    Publisher<Event<Conference>> randomEvents();

    /**
     * Here we use the Publisher implementation Flux. Also we don't
     * add the Event in the return type: Micronaut will leave out
     * the event metadata and we get the data that is part of
     * the event as an object.
     *
     * @return Flux with Conference data.
     */
    @Get(value = "/random", processes = MediaType.TEXT_EVENT_STREAM)
    Flux<Conference> randomConferences();

}

Next we create a Spock specification to test our controller with server-sent events. In the specification we use the low-level HTTP client and the declarative client:

// File: src/test/groovy/mrhaki/ConferencesControllerSpec.groovy
package mrhaki

import io.micronaut.context.ApplicationContext
import io.micronaut.http.client.sse.RxSseClient
import io.micronaut.http.client.sse.SseClient
import io.micronaut.http.sse.Event
import io.micronaut.runtime.server.EmbeddedServer
import io.reactivex.Flowable
import spock.lang.AutoCleanup
import spock.lang.Shared
import spock.lang.Specification

class ConferencesControllerSpec extends Specification {

    @Shared
    @AutoCleanup
    private EmbeddedServer server = ApplicationContext.run(EmbeddedServer)

    /**
     * Low-level client, with RxJava2 support, to interact
     * with the server that returns server-sent events.
     */
    @Shared
    @AutoCleanup
    private RxSseClient sseLowLevelClient =
            server.applicationContext
                  .createBean(RxSseClient, server.getURL())

    /**
     * Declarative client for interacting with the
     * server that sends server-sent events.
     */
    @Shared
    private ConferencesSseClient sseClient =
            server.applicationContext
                  .getBean(ConferencesSseClient)

    void "test event stream with low level SSE client"() {
        when:
        // Use eventStream method of RxSseClient to get SSE
        // and convert data in event to Conference objects by 
        // setting second argument to Conference.class.
        final List<Event<Conference>> result =
                sseLowLevelClient.eventStream("/conferences/random", Conference.class)
                                 .take(2)
                                 .toList()
                                 .blockingGet()
        
        then:
        result.name.every { name -> name == "randomEvent" }
        result.id == ["0", "1"]
        result.data.every { conference -> conference instanceof Conference }
    }

    void "test event stream with declarative SSE client"() {
        when:
        // Use declarative client (using @Client)
        // with SSE support.
        List<Event<Conference>> result =
                Flowable.fromPublisher(sseClient.randomEvents())
                        .take(2)
                        .toList()
                        .blockingGet();

        then:
        result.name.every { name -> name == "randomEvent" }
        result.id == ["0", "1"]
        result.data.every { conference -> conference instanceof Conference }
    }

    void "test conference stream with declarative SSE client"() {
        when:
        // Use declarative client (using @Client)
        // with built-in extraction of data in event.
        List<Conference> result =
                sseClient.randomConferences()
                         .take(2)
                         .collectList()
                         .block();

        then:
        result.id.every(Closure.IDENTITY) // Check every id property is set.
        result.every { conference -> conference instanceof Conference }
    }
}

Written with Micronaut 1.0.0.RC1.

September 30, 2018

Micronaut Mastery: Running Code On Startup

When our Micronaut application starts we can listen for the ServiceStartedEvent event and write code that needs to run when the event is fired. We can write a bean that implements the ApplicationEventListener interface with the type ServiceStartedEvent. Or we can use the @EventListener annotation on our method with code we want to run on startup. If the execution of the code can take a while we can also add the @Async annotation to the method, so Micronaut can execute the code on a separate thread.

In our example application we have a reactive repository for a Mongo database and we want to save some data in the database when our Micronaut application starts. First we write a bean that implements the ApplicationEventListener:

// File: src/main/java/mrhaki/DataLoader.java
package mrhaki;

import io.micronaut.context.annotation.Requires;
import io.micronaut.context.env.Environment;
import io.micronaut.context.event.ApplicationEventListener;
import io.micronaut.discovery.event.ServiceStartedEvent;
import io.micronaut.scheduling.annotation.Async;
import io.reactivex.Flowable;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.inject.Singleton;

@Singleton
@Requires(notEnv = Environment.TEST) // Don't load data in tests.
public class DataLoader implements ApplicationEventListener<ServiceStartedEvent> {

    private static final Logger log = LoggerFactory.getLogger(DataLoader.class);

    /**
     * Reactive repository for Mongo database to store
     * Conference objects with an id and name property.
     */
    private final ConferenceRepository repository;

    public DataLoader(final ConferenceRepository repository) {
        this.repository = repository;
    }

    @Async
    @Override
    public void onApplicationEvent(final ServiceStartedEvent event) {
        log.info("Loading data at startup");

        // Transform names to Conference objects and save them.
        Flowable.just("Gr8Conf", "Greach", "JavaLand", "JFall", "NextBuild")
                .map(name -> new Conference(name))
                .forEach(this::saveConference);
    }

    /**
     * Save conference in repository.
     * 
     * @param conference Conference to be saved.
     */
    private void saveConference(Conference conference) {
        repository
                .save(conference)
                .subscribe(
                        saved -> log.info("Saved conference {}.", saved),
                        throwable -> log.error("Error saving conference.", throwable));
    }
}

Alternatively we could have used the @EventListener annotation on a method with an argument of type ServiceStartedEvent:

// File: src/main/java/mrhaki/DataLoader.java
package mrhaki;

import io.micronaut.context.annotation.Requires;
import io.micronaut.context.env.Environment;
import io.micronaut.discovery.event.ServiceStartedEvent;
import io.micronaut.runtime.event.annotation.EventListener;
import io.micronaut.scheduling.annotation.Async;
import io.reactivex.Flowable;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.inject.Singleton;

@Singleton
@Requires(notEnv = Environment.TEST) // Don't load data in tests.
public class DataLoader {

    private static final Logger log = LoggerFactory.getLogger(DataLoader.class);

    /**
     * Reactive repository for Mongo database to store
     * Conference objects with an id and name property.
     */
    private final ConferenceRepository repository;

    public DataLoader(final ConferenceRepository repository) {
        this.repository = repository;
    }

    @EventListener
    @Async
    public void loadConferenceData(final ServiceStartedEvent event) {
        log.info("Loading data at startup");

        // Transform names to Conference objects and save them.
        Flowable.just("Gr8Conf", "Greach", "JavaLand", "JFall", "NextBuild")
                .map(name -> new Conference(name))
                .forEach(this::saveConference);
    }

    /**
     * Save conference in repository.
     * 
     * @param conference Conference to be saved.
     */
    private void saveConference(Conference conference) {
        repository
                .save(conference)
                .subscribe(
                        saved -> log.info("Saved conference {}.", saved),
                        throwable -> log.error("Error saving conference.", throwable));
    }
}

When we start our Micronaut application we can see in the log messages that our conference data is created:

22:58:17.343 [pool-1-thread-1] INFO  mrhaki.DataLoader - Loading data at startup
22:58:17.343 [main] INFO  io.micronaut.runtime.Micronaut - Startup completed in 1230ms. Server Running: http://localhost:9000
22:58:17.573 [Thread-11] INFO  mrhaki.DataLoader - Saved conference Conference{id=5bb134f505d4feefa74d19c7, name='JFall'}.
22:58:17.573 [Thread-8] INFO  mrhaki.DataLoader - Saved conference Conference{id=5bb134f505d4feefa74d19c3, name='Gr8Conf'}.
22:58:17.573 [Thread-10] INFO  mrhaki.DataLoader - Saved conference Conference{id=5bb134f505d4feefa74d19c5, name='Greach'}.
22:58:17.573 [Thread-9] INFO  mrhaki.DataLoader - Saved conference Conference{id=5bb134f505d4feefa74d19c8, name='NextBuild'}.
22:58:17.573 [Thread-6] INFO  mrhaki.DataLoader - Saved conference Conference{id=5bb134f505d4feefa74d19c6, name='JavaLand'}.

Written with Micronaut 1.0.0.RC1.