February 15, 2019

Spring Sweets: Group Loggers With Logical Name

Spring Boot 2.1 introduced log groups. A log group is a logical name for one or more loggers. We can define log groups in our application configuration and then set the log level for a group, so all loggers in the group get the same log level. This is very useful when we want to change the log level for multiple loggers that belong together with a single setting. Spring Boot already provides two log groups by default: web and sql. The following list shows which loggers are part of the default log groups:

  • web: org.springframework.core.codec, org.springframework.http, org.springframework.web, org.springframework.boot.actuate.endpoint.web, org.springframework.boot.web.servlet.ServletContextInitializerBeans
  • sql: org.springframework.jdbc.core, org.hibernate.SQL

To define our own log group we add the key logging.group. followed by our log group name to our application configuration, and assign all loggers we want to be part of the group. Once we have defined our group, we can set the log level using the group name prefixed with the configuration key logging.level..

In the following example configuration we define a new group controllers that consists of two loggers from different packages. We set the log level for this group to DEBUG. We also set the log level of the default group web to DEBUG:

# src/main/resources/application.properties

# Define a new log group controllers.
logging.group.controllers=mrhaki.hello.HelloController, mrhaki.sample.SampleController

# Set log level to DEBUG for group controllers.
# This means the log level for the loggers
# mrhaki.hello.HelloController and mrhaki.sample.SampleController
# are set to DEBUG.
logging.level.controllers=DEBUG

# Set log level for default group web to DEBUG.
logging.level.web=DEBUG
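
For projects that use YAML configuration, the same settings in src/main/resources/application.yml would look like this:

# src/main/resources/application.yml
logging:
  group:
    controllers: mrhaki.hello.HelloController, mrhaki.sample.SampleController
  level:
    controllers: DEBUG
    web: DEBUG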

Written with Spring Boot 2.1.3.RELEASE

February 4, 2019

Gradle Goodness: Only Show All Tasks In A Group

To get an overview of all Gradle tasks in our project we need to run the tasks task. Since Gradle 5.1 we can use the --group option followed by a group name. Gradle will then show all tasks belonging to the group and not the other tasks in the project.

Suppose we have a Gradle Java project and want to show the tasks that belong to the build group:

$ gradle tasks --group build
> Task :tasks

------------------------------------------------------------
Tasks runnable from root project - Sample
------------------------------------------------------------

Build tasks
-----------
assemble - Assembles the outputs of this project.
bootBuildInfo - Generates a META-INF/build-info.properties file.
bootJar - Assembles an executable jar archive containing the main classes and their dependencies.
build - Assembles and tests this project.
buildDependents - Assembles and tests this project and all projects that depend on it.
buildNeeded - Assembles and tests this project and all projects it depends on.
classes - Assembles main classes.
clean - Deletes the build directory.
generateGitProperties - Generate a git.properties file.
jar - Assembles a jar archive containing the main classes.
testClasses - Assembles test classes.

To see all tasks and more detail, run gradle tasks --all

To see more detail about a task, run gradle help --task <task>

Deprecated Gradle features were used in this build, making it incompatible with Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/5.1.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD SUCCESSFUL in 2s
1 actionable task: 1 executed

Written with Gradle 5.1.1.

January 29, 2019

Awesome Asciidoctor: Exclude Parts From Included Files

In a previous post we learned how to include parts of a document in the generated output. The included parts are defined using tags. The start of a tag is defined in a comment with the format tag::tagName[] and the end has the format end::tagName[]. Next we must use the tags attribute for the include macro followed by the tagName. If we don't want to include a tag we must prefix it with an exclamation mark (!).

Suppose we have an external Java source we want to include in our Asciidoctor document.

package mrhaki;

// tag::singletonAnnotation[]
@Singleton
// end::singletonAnnotation[]
public class Sample {
    public String greeting() {
        return "Hello Asciidoctor";
    }
}

In the following sample Asciidoctor document we include Sample.java, but we don't want to include the text enclosed by the singletonAnnotation tag. So we use tags=!singletonAnnotation with the include macro:

= Sample

To NOT include sections enclosed with tags we must use `tags=!<tagName>` in the `include` directive.

[source,java]
----
include::Sample.java[tags=!singletonAnnotation]
----

When we transform our Asciidoctor markup to HTML, the listing in the generated output should contain the Sample class without the @Singleton annotation (the tag directive comments themselves are also stripped):
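
package mrhaki;

public class Sample {
    public String greeting() {
        return "Hello Asciidoctor";
    }
}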

Written with Asciidoctor 1.5.6.1.

November 14, 2018

Gradle Goodness: Generate Javadoc In HTML5

Since Java 9 we can specify that the Javadoc output must be generated in HTML 5 instead of the default HTML 4. We need to pass the option -html5 to the javadoc tool. To do this in Gradle we must add the option to the javadoc task configuration. We use the addBooleanOption method of the options property that is part of the javadoc task. We set the argument to html5 and the value to true.

In the following example we reconfigure the javadoc task to make sure the generated Javadoc output is in HTML 5:

// File: build.gradle
apply plugin: 'java'

javadoc {
    options.addBooleanOption('html5', true)
}
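
To verify the result we can run the javadoc task; with Gradle's default layout the documentation is generated in build/docs/javadoc, and with the html5 option the generated pages should start with the HTML 5 doctype:

$ gradle javadoc
$ head -1 build/docs/javadoc/index.html
<!DOCTYPE HTML>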

The boolean option we added to the options property is not part of Gradle's up-to-date check for the task. So if we changed the key html5 to html4, because we want documentation in HTML 4, the task would still be seen as up to date, because Gradle doesn't keep track of the change. We can fix this by adding a property that contains the output format to the task's inputs. Let's also add a new extension to Javadoc tasks to define our own DSL for setting the output format.

We need to create an extension class and plugin to apply the extension to the Javadoc tasks. In the plugin we can also add support to help Gradle check to see if the task is up to date, based on the output format. In the following example we define an extension and plugin in our build file, but we could also place the classes in the buildSrc directory of our project.

// File: build.gradle
apply plugin: 'java'
apply plugin: JavadocPlugin

javadoc {
    // New DSL to configure the task
    // added by the JavadocPlugin.
    output {
        html5 = true
    }
}

/**
 * Plugin to add the {@link JavadocOutputOptions} extension
 * to the Javadoc tasks. 
 * <p>
 * Also make sure Gradle can check if the task needs
 * to rerun when the output format changes.
 */
class JavadocPlugin implements Plugin<Project> {

    void apply(Project project) {
        project.tasks.withType(Javadoc) { Javadoc task ->
            // Create new extension for Javadoc task with the name "output".
            // Users can set output format to HTML 5 as:
            // javadoc {
            //     output {
            //         html5 = true 
            //     }
            // }
            // or as HTML4:
            // javadoc {
            //     output {
            //         html4 = true 
            //     }
            // }
            JavadocOutputOptions outputOptions = 
                    task.extensions.create("output", JavadocOutputOptions)

            // After project evaluation we know what the
            // user has defined as output format using the 
            // "output" configuration block.
            project.afterEvaluate {
                // We need to make sure the up-to-date check
                // is triggered when the output option changes.
                // If the value is not changed the task is up-to-date.
                task.inputs.property("output.html5", outputOptions.html5)

                // We add the boolean option html4 and html5 
                // based on the user's value set via the
                // JavadocOutputOptions.
                task.options.addBooleanOption("html4", outputOptions.html4)
                task.options.addBooleanOption("html5", outputOptions.html5)
            }

        }
    }

}

/**
 * Extension for Javadoc tasks to define
 * if the output format must be HTML 4 or HTML 5.
 */
class JavadocOutputOptions {
    Boolean html4 = true
    Boolean html5 = !html4

    void setHtml4(boolean useHtml4) {
        html4 = useHtml4
        html5 = !html4
    }

    void setHtml5(boolean useHtml5) {
        html5 = useHtml5
        html4 = !html5
    }
}

Written with Gradle 4.10.2.

November 7, 2018

Gradle Goodness: Rerun Incremental Tasks At Specific Intervals

One of the most important features in Gradle is the support for incremental tasks. Incremental tasks have input and output properties that can be checked by Gradle. When the values of the properties haven't changed, the task can be marked as up to date by Gradle and is not executed. This makes a build much faster. Input and output properties can be files, directories or plain object values. We can set a task input property to a date or date/time value to define when a task is up to date for a specific period. As long as the value of the input property hasn't changed (and of course the other input and output property values haven't either), Gradle will not rerun the task and marks it as up to date. This is useful, for example, if a long-running task (e.g. a large integration test suite) only needs to run once a day or at some other interval.

In the following example Gradle build file we define a new task downloadBroadcastLatest of type Broadcast that gets content from a remote URL and saves it in a file. In our case we want to save the latest messages from SDKMAN!. If you don't know SDKMAN!, you should check it out. The Broadcast task type has an incremental task output property, which is the output file of the task:

// File: build.gradle

task downloadBroadcastLatest(type: Broadcast) {
    outputFile = file("${buildDir}/broadcast.latest.txt")
}

class Broadcast extends DefaultTask {

    // URL with latest announcements of SDKMAN!
    private static final String API = "https://api.sdkman.io/2/broadcast/latest"

    @OutputFile
    File outputFile

    @TaskAction
    void downloadLatest() {
        // Download text from URL and save in File.
        logger.lifecycle("Downloading latest broadcast message from SDKMAN!.")
        outputFile.text = API.toURL().text
    }

}

We can run the task downloadBroadcastLatest and the contents of the URL are saved in the output file. When we run the task a second time the task action is executed again: the contents of the URL are fetched again and saved in the output file.

$ gradle downloadBroadcastLatest --console plain
> Task :downloadBroadcastLatest
Downloading latest broadcast message from SDKMAN!.

BUILD SUCCESSFUL in 1s
1 actionable task: 1 executed
$ gradle downloadBroadcastLatest --console plain
> Task :downloadBroadcastLatest
Downloading latest broadcast message from SDKMAN!.

BUILD SUCCESSFUL in 1s
1 actionable task: 1 executed
$

Suppose we don't want to access the remote URL on every task invocation: once an hour is enough to get the latest messages from SDKMAN!. Let's add a new incremental task input property with the value of the current hour to the task downloadBroadcastLatest.

// File: build.gradle
...
task downloadBroadcastLatest(type: Broadcast) {
    // Add incremental input property, with value that changes only
    // every hour. Gradle will mark the task as up-to-date for
    // every invocation as long as the hour value hasn't changed.
    // For example to set the value so that the task is only
    // executed once a day we could use java.time.LocalDate.now().
    inputs.property 'check_once_per_hour', java.time.LocalDateTime.now().hour

    outputFile = file("${buildDir}/broadcast.latest.txt")
}
...
$ gradle downloadBroadcastLatest --console plain
> Task :downloadBroadcastLatest
Downloading latest broadcast message from SDKMAN!.

BUILD SUCCESSFUL in 1s
1 actionable task: 1 executed
$ gradle downloadBroadcastLatest --console plain
> Task :downloadBroadcastLatest UP-TO-DATE

BUILD SUCCESSFUL in 0s
1 actionable task: 1 up-to-date
$

Another option in our example is to add the incremental task input property to the source of our Broadcast task. We can do that because we have written the task class ourselves. If we cannot change the source of a task, the previous example is the way to add an incremental task input property to an existing task. The following code sample adds an input property to our task definition:

// File: build.gradle
...
class Broadcast extends DefaultTask {

    // URL with latest announcements of SDKMAN!
    private static final String API = "https://api.sdkman.io/2/broadcast/latest"

    @Input
    int checkOncePerHour = java.time.LocalDateTime.now().hour

    @OutputFile
    File outputFile

    @TaskAction
    void downloadLatest() {
        // Download text from URL and save in File.
        logger.lifecycle("Downloading latest broadcast message from SDKMAN!.")
        outputFile.text = API.toURL().text
    }

}
...

Finally, it is important to note that for this to work a task must have at least one incremental task output property. If an existing task doesn't have one, we can add outputs.upToDateWhen { true } to the task configuration, so Gradle recognises the task as incremental with output that is always up to date. In the following example we create a new task type Show without an incremental task output property. In the task showBroadcastLatest we define that the task has an always up-to-date output:

// File: build.gradle
...
task showBroadcastLatest(type: Show) {
    inputs.property 'check_once_a_day', java.time.LocalDate.now()

    // The original task definition has no incremental task
    // output property, so we add one ourselves.
    outputs.upToDateWhen { true }

    inputFile = downloadBroadcastLatest.outputFile
}

class Show extends DefaultTask {

    @InputFile
    File inputFile

    @TaskAction
    void showContents() {
        println inputFile.text
    }

}
...

Written with Gradle 4.10.2.

October 3, 2018

Micronaut Mastery: Configuration Property Name Is Lowercased And Hyphen Separated

In Micronaut we can inject configuration properties into our beans in different ways. We can for example use the @Value annotation with a string value that contains a placeholder for the configuration property name. If we don't want to use a placeholder we can also use the @Property annotation and set its name attribute to the configuration property name. We have to pay attention to the format of the configuration property name we use. If we refer to a configuration property name using @Value or @Property we must use lowercased and hyphen-separated names (also known as kebab casing), even if the name of the configuration property is camel cased in the configuration file. For example, if we have a configuration property sample.theAnswer in our application.properties file, we must use the name sample.the-answer to get the value.
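
As a minimal sketch (the class name is made up for this example), this is how we would inject the camel-cased configuration property sample.theAnswer using its kebab-cased name:

// File: src/main/java/mrhaki/Answer.java
package mrhaki;

import io.micronaut.context.annotation.Value;

import javax.inject.Singleton;

@Singleton
public class Answer {

    // The property is defined as sample.theAnswer in
    // application.properties, but we must reference
    // the normalized, kebab-cased name.
    @Value("${sample.the-answer}")
    private Integer theAnswer;

    public Integer getTheAnswer() {
        return theAnswer;
    }
}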

In the following Spock specification we see how to use it in code. The specification defines two beans that use the @Value and @Property annotations and we see that we need to use kebab casing for the configuration property names, even though we use camel casing to set the configuration property values:

package mrhaki

import groovy.transform.CompileStatic
import io.micronaut.context.ApplicationContext
import io.micronaut.context.annotation.Property
import io.micronaut.context.annotation.Value
import io.micronaut.context.exceptions.DependencyInjectionException
import spock.lang.Shared
import spock.lang.Specification

import javax.inject.Singleton

class ConfigurationPropertyNameSpec extends Specification {

    // Create application context with two configuration 
    // properties: reader.maxFileSize and reader.showProgress.
    @Shared
    private ApplicationContext context = 
            ApplicationContext.run('reader.maxFileSize': 1024, 
                                   'reader.showProgress': true)

    void "use kebab casing (hyphen-based) to get configuration property value"() {
        expect:
        with(context.getBean(FileReader)) {
            maxFileSize == 1024   
            showProgress == Boolean.TRUE
        }
    }

    void "using camel case to get configuration property should throw exception"() {
        when:
        context.getBean(InvalidFileReader).maxFileSize

        then:
        final dependencyException = thrown(DependencyInjectionException)
        dependencyException.message == """\
            |Failed to inject value for parameter [maxFileSize] of method [setMaxFileSize] of class: mrhaki.InvalidFileReader
            |
            |Message: Error resolving property value [\${reader.maxFileSize}]. Property doesn't exist
            |Path Taken: InvalidFileReader.setMaxFileSize([Integer maxFileSize])""".stripMargin()
    }
}

@CompileStatic
@Singleton
class FileReader {
    
    private Integer maxFileSize
    private Boolean showProgress
    
    // Configuration property names 
    // are normalized and 
    // stored lowercase hyphen separated (= kebab case).
    FileReader(
            @Property(name ='reader.max-file-size') Integer maxFileSize,
            @Value('${reader.show-progress:false}') Boolean showProgress) {
        
        this.maxFileSize = maxFileSize
        this.showProgress = showProgress
    }
    
    Integer getMaxFileSize() {
        return maxFileSize
    }
    
    Boolean showProgress() {
        return showProgress
    }
}

@CompileStatic
@Singleton
class InvalidFileReader {
    // Invalid reference to property name,
    // because the names are normalized and
    // stored lowercase hyphen separated.
    @Value('${reader.maxFileSize}')
    Integer maxFileSize
}

Written with Micronaut 1.0.0.RC1.

October 2, 2018

Micronaut Mastery: Consuming Server-Sent Events (SSE)

Normally we would consume server-sent events (SSE) in a web browser, but we can also consume them in our code on the server. Micronaut has a low-level HTTP client with an SseClient interface that we can use to get server-sent events. The interface has an eventStream method with different arguments that returns a Publisher type of the Reactive Streams API. We can use the RxSseClient interface to get back an RxJava 2 Flowable return type instead of a Publisher. We can also use Micronaut's declarative HTTP client, which we define using the @Client annotation and which supports server-sent events with the correct annotation attributes.

In our example we first create a controller in Micronaut to send out server-sent events. We must create a method that returns a Publisher of Event objects. These Event objects can contain attributes like id and name, as well as the actual object we want to send:

// File: src/main/java/mrhaki/ConferencesController.java
package mrhaki;

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.sse.Event;
import io.reactivex.Flowable;

import java.util.concurrent.TimeUnit;

@Controller("/conferences")
public class ConferencesController {

    private final ConferenceRepository repository;

    public ConferencesController(final ConferenceRepository repository) {
        this.repository = repository;
    }

    /**
     * Send each second a random Conference.
     * 
     * @return Server-sent events each second where the event is a randomly
     * selected Conference object from the repository.
     */
    @Get("/random")
    Flowable<Event<Conference>> events() {
        final Flowable<Long> tick = Flowable.interval(1, TimeUnit.SECONDS);
        final Flowable<Conference> randomConferences = repository.random().repeat();

        return tick.zipWith(randomConferences, this::createEvent);
    }

    /**
     * Create a server-sent event with id, name and the Conference data.
     * 
     * @param counter Counter used as id for event.
     * @param conference Conference data as payload for the event.
     * @return Event with id, name and Conference object.
     */
    private Event<Conference> createEvent(Long counter, final Conference conference) {
        return Event.of(conference)
                    .id(String.valueOf(counter))
                    .name("randomEvent");
    }

}
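
With the application running we could peek at the raw event stream using curl. The following output is illustrative: we assume the default port 8080, and the exact JSON payload depends on the Conference class:

$ curl -N http://localhost:8080/conferences/random
id: 0
event: randomEvent
data: {"name":"Gr8Conf"}

id: 1
event: randomEvent
data: {"name":"Greach"}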

Notice how easy it is in Micronaut to use server-sent events. Let's add a declarative HTTP client that can consume the server-sent events. We must set the processes attribute of the @Get annotation with the value text/event-stream. This way Micronaut can create an implementation of this interface with the correct code to consume server-sent events:

// File: src/main/java/mrhaki/ConferencesClient.java
package mrhaki;

import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.client.annotation.Client;
import io.micronaut.http.sse.Event;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Flux;

@Client("/conferences")
interface ConferencesSseClient {

    /**
     * Return Publisher with SSE containing Conference data.
     * We must set the processes attribute with the value
     * text/event-stream so Micronaut can generate an implementation
     * to support server-sent events.
     * We could also return Publisher implementation class
     * like Flowable or Flux, Micronaut will do the conversion.
     * 
     * @return Publisher with Event objects with Conference data.
     */
    @Get(value = "/random", processes = MediaType.TEXT_EVENT_STREAM)
    Publisher<Event<Conference>> randomEvents();

    /**
     * Here we use a Publisher implementation Flux. Also we don't
     * add the Event in the return type: Micronaut will leave out
     * the event metadata and we get the data that is part of
     * the event as object.
     *
     * @return Flux with Conference data.
     */
    @Get(value = "/random", processes = MediaType.TEXT_EVENT_STREAM)
    Flux<Conference> randomConferences();

}

Next we create a Spock specification to test our controller with server-sent events. In the specification we use the low-level HTTP client and the declarative client:

// File: src/test/groovy/mrhaki/ConferencesControllerSpec.groovy
package mrhaki

import io.micronaut.context.ApplicationContext
import io.micronaut.http.client.sse.RxSseClient
import io.micronaut.http.client.sse.SseClient
import io.micronaut.http.sse.Event
import io.micronaut.runtime.server.EmbeddedServer
import io.reactivex.Flowable
import spock.lang.AutoCleanup
import spock.lang.Shared
import spock.lang.Specification

class ConferencesControllerSpec extends Specification {

    @Shared
    @AutoCleanup
    private EmbeddedServer server = ApplicationContext.run(EmbeddedServer)

    /**
     * Low-level client to interact with the server
     * that returns server-sent events, with
     * RxJava 2 support.
     */
    @Shared
    @AutoCleanup
    private RxSseClient sseLowLevelClient =
            server.applicationContext
                  .createBean(RxSseClient, server.getURL())

    /**
     * Declarative client for interacting with
     * the server that sends server-sent events.
     */
    @Shared
    private ConferencesSseClient sseClient =
            server.applicationContext
                  .getBean(ConferencesSseClient)

    void "test event stream with low level SSE client"() {
        when:
        // Use eventStream method of RxSseClient to get SSE
        // and convert data in event to Conference objects by 
        // setting second argument to Conference.class.
        final List<Event<Conference>> result =
                sseLowLevelClient.eventStream("/conferences/random", Conference.class)
                                 .take(2)
                                 .toList()
                                 .blockingGet()
        
        then:
        result.name.every { name -> name == "randomEvent" }
        result.id == ["0", "1"]
        result.data.every { conference -> conference instanceof Conference }
    }

    void "test event stream with declarative SSE client"() {
        when:
        // Use declarative client (using @Client)
        // with SSE support.
        List<Event<Conference>> result =
                Flowable.fromPublisher(sseClient.randomEvents())
                        .take(2)
                        .toList()
                        .blockingGet();

        then:
        result.name.every { name -> name == "randomEvent" }
        result.id == ["0", "1"]
        result.data.every { conference -> conference instanceof Conference }
    }

    void "test conference stream with declarative SSE client"() {
        when:
        // Use declarative client (using @Client)
        // with built-in extraction of data in event.
        List<Conference> result =
                sseClient.randomConferences()
                         .take(2)
                         .collectList()
                         .block();

        then:
        result.id.every(Closure.IDENTITY) // Check every id property is set.
        result.every { conference -> conference instanceof Conference }
    }
}

Written with Micronaut 1.0.0.RC1.

September 30, 2018

Micronaut Mastery: Running Code On Startup

When our Micronaut application starts we can listen for the ServiceStartedEvent event and write code that needs to run when the event is fired. We can write a bean that implements the ApplicationEventListener interface with the type ServiceStartedEvent. Or we can use the @EventListener annotation on our method with code we want to run on startup. If the execution of the code can take a while we can also add the @Async annotation to the method, so Micronaut can execute the code on a separate thread.

In our example application we have a reactive repository for a Mongo database and we want to save some data in the database when our Micronaut application starts. First we write a bean that implements the ApplicationEventListener:

// File: src/main/java/mrhaki/Dataloader.java
package mrhaki;

import io.micronaut.context.annotation.Requires;
import io.micronaut.context.env.Environment;
import io.micronaut.context.event.ApplicationEventListener;
import io.micronaut.discovery.event.ServiceStartedEvent;
import io.micronaut.scheduling.annotation.Async;
import io.reactivex.Flowable;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.inject.Singleton;

@Singleton
@Requires(notEnv = Environment.TEST) // Don't load data in tests.
public class DataLoader implements ApplicationEventListener<ServiceStartedEvent> {

    private static final Logger log = LoggerFactory.getLogger(DataLoader.class);

    /**
     * Reactive repository for Mongo database to store
     * Conference objects with an id and name property.
     */
    private final ConferenceRepository repository;

    public DataLoader(final ConferenceRepository repository) {
        this.repository = repository;
    }

    @Async
    @Override
    public void onApplicationEvent(final ServiceStartedEvent event) {
        log.info("Loading data at startup");

        // Transform names to Conference objects and save them.
        Flowable.just("Gr8Conf", "Greach", "JavaLand", "JFall", "NextBuild")
                .map(name -> new Conference(name))
                .forEach(this::saveConference);
    }

    /**
     * Save conference in repository.
     * 
     * @param conference Conference to be saved.
     */
    private void saveConference(Conference conference) {
        repository
                .save(conference)
                .subscribe(
                        saved -> log.info("Saved conference {}.", saved),
                        throwable -> log.error("Error saving conference.", throwable));
    }
}

Alternatively we could have used the @EventListener annotation on a method with an argument of type ServiceStartedEvent:

// File: src/main/java/mrhaki/Dataloader.java
package mrhaki;

import io.micronaut.context.annotation.Requires;
import io.micronaut.context.env.Environment;
import io.micronaut.context.event.ApplicationEventListener;
import io.micronaut.discovery.event.ServiceStartedEvent;
import io.micronaut.runtime.event.annotation.EventListener;
import io.micronaut.scheduling.annotation.Async;
import io.reactivex.Flowable;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.inject.Singleton;

@Singleton
@Requires(notEnv = Environment.TEST) // Don't load data in tests.
public class DataLoader {

    private static final Logger log = LoggerFactory.getLogger(DataLoader.class);

    /**
     * Reactive repository for Mongo database to store
     * Conference objects with an id and name property.
     */
    private final ConferenceRepository repository;

    public DataLoader(final ConferenceRepository repository) {
        this.repository = repository;
    }

    @EventListener
    @Async
    public void loadConferenceData(final ServiceStartedEvent event) {
        log.info("Loading data at startup");

        // Transform names to Conference objects and save them.
        Flowable.just("Gr8Conf", "Greach", "JavaLand", "JFall", "NextBuild")
                .map(name -> new Conference(name))
                .forEach(this::saveConference);
    }

    /**
     * Save conference in repository.
     * 
     * @param conference Conference to be saved.
     */
    private void saveConference(Conference conference) {
        repository
                .save(conference)
                .subscribe(
                        saved -> log.info("Saved conference {}.", saved),
                        throwable -> log.error("Error saving conference.", throwable));
    }
}

When we start our Micronaut application we can see in the log messages that our conference data is created:

22:58:17.343 [pool-1-thread-1] INFO  mrhaki.DataLoader - Loading data at startup
22:58:17.343 [main] INFO  io.micronaut.runtime.Micronaut - Startup completed in 1230ms. Server Running: http://localhost:9000
22:58:17.573 [Thread-11] INFO  mrhaki.DataLoader - Saved conference Conference{id=5bb134f505d4feefa74d19c7, name='JFall'}.
22:58:17.573 [Thread-8] INFO  mrhaki.DataLoader - Saved conference Conference{id=5bb134f505d4feefa74d19c3, name='Gr8Conf'}.
22:58:17.573 [Thread-10] INFO  mrhaki.DataLoader - Saved conference Conference{id=5bb134f505d4feefa74d19c5, name='Greach'}.
22:58:17.573 [Thread-9] INFO  mrhaki.DataLoader - Saved conference Conference{id=5bb134f505d4feefa74d19c8, name='NextBuild'}.
22:58:17.573 [Thread-6] INFO  mrhaki.DataLoader - Saved conference Conference{id=5bb134f505d4feefa74d19c6, name='JavaLand'}.

Written with Micronaut 1.0.0.RC1.

September 4, 2018

Spring Sweets: Dockerize Spring Boot Application With Jib

Jib is an open-source Java library from Google for creating Docker images for Java applications. Jib can be used as a Maven or Gradle plugin in our Spring Boot project. One of the nice features of Jib is that it adds separate layers with our classes, resources and dependency libraries to the Docker image. This means that when only class files have changed, only the classes layer is rebuilt and the others remain the same. Therefore the creation of a Docker image with our Spring Boot application is also very fast (after the first creation). The Maven and Gradle plugins also have sensible defaults, like using the project name and version as image name, so we don't have to configure anything in our build tool. Jib does provide options to override the defaults, for example to change the JVM options passed to the application.

Let's see Jib in action for a simple Spring Boot application. In our example we use Gradle as build tool with the following Spring Boot application:

// File: src/main/java/mrhaki/spring/sample/SampleApplication.java
package mrhaki.spring.sample;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

@RestController
@SpringBootApplication
public class SampleApplication {

    public static void main(String[] args) {
        SpringApplication.run(SampleApplication.class, args);
    }

    @GetMapping("/message")
    Mono<String> message() {
        return Mono.just("Spring Boot sample application.");
    }
    
}

Next we add the Jib plugin to our Gradle build file:

// File: build.gradle
...
plugins {
    id 'com.google.cloud.tools.jib' version '0.9.10'
}
...

With the Gradle task jibDockerBuild we can create a Docker image for our local Docker daemon. Our project is called springboot-sample with version 1.0.0-SNAPSHOT, so we get the Docker image springboot-sample:1.0.0-SNAPSHOT:

$ ./gradlew jibDockerBuild
Tagging image with generated image reference springboot-sample:1.0.0-SNAPSHOT. If you'd like to specify a different tag, you can set the jib.to.image parameter in your build.gradle, or use the --image=<MY IMAGE> commandline flag.
warning: Base image 'gcr.io/distroless/java' does not use a specific image digest - build may not be reproducible

Containerizing application to Docker daemon as springboot-sample:1.0.0-SNAPSHOT...

Getting base image gcr.io/distroless/java...
Building dependencies layer...
Building resources layer...
Building classes layer...
Finalizing...
Loading to Docker daemon...

Container entrypoint set to [java, -cp, /app/resources/:/app/classes/:/app/libs/*, mrhaki.spring.sample.SampleApplication]

Built image to Docker daemon as springboot-sample:1.0.0-SNAPSHOT

BUILD SUCCESSFUL in 3s
3 actionable tasks: 1 executed, 2 up-to-date
$

Notice that the default base image our Docker image is built on is gcr.io/distroless/java, but we can change that in our Gradle build file via the jib configuration block.
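
For example, a minimal sketch (the base image name is only an example) to build on a different base image:

// File: build.gradle
...
jib {
    from {
        image = 'openjdk:8-jre-alpine'
    }
}
...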

Our image is available so we can run a Docker container based on our image and check the URL of our application:

$ docker run -d --name springboot-sample -p 8080:8080 springboot-sample:1.0.0-SNAPSHOT
5d288cbe4ed606760a51157734349135d4d4562072e1024f4585dff370ac6f99
$ curl -X GET http://localhost:8080/message
Spring Boot sample application.
$

In the following example we add some configuration for Jib in our Gradle build file:

// File: build.gradle
...
jib {
    to {
        image = "springboot-mrhaki:${project.version}"
    }

    container {
        // Set JVM options.
        jvmFlags = ['-XX:+UnlockExperimentalVMOptions', '-XX:+UseCGroupMemoryLimitForHeap', '-Dserver.port=9000']
        // Expose different port.
        ports = ['9000']
        // Add labels.
        labels = [maintainer: 'mrhaki']
    }

}
...

When we run the jibDockerBuild task a new Docker image is built:

$ ./gradlew jibDockerBuild
warning: Base image 'gcr.io/distroless/java' does not use a specific image digest - build may not be reproducible

Containerizing application to Docker daemon as springboot-mrhaki:1.0.0-SNAPSHOT...

Getting base image gcr.io/distroless/java...
Building dependencies layer...
Building resources layer...
Building classes layer...
Finalizing...
Loading to Docker daemon...

Container entrypoint set to [java, -XX:+UnlockExperimentalVMOptions, -XX:+UseCGroupMemoryLimitForHeap, -Dserver.port=9000, -cp, /app/resources/:/app/classes/:/app/libs/*, mrhaki.spring.sample.SampleApplication]

Built image to Docker daemon as springboot-mrhaki:1.0.0-SNAPSHOT

BUILD SUCCESSFUL in 3s
3 actionable tasks: 1 executed, 2 up-to-date
$ docker run -d --name springboot-mrhaki -p 9000:9000 springboot-mrhaki:1.0.0-SNAPSHOT
6966ff3b5ca8dae658d59e39c0f26c11d22af7e04e02ad423516572eb5e0e0bd
$ curl -X GET http://localhost:9000/message
Spring Boot sample application.
$

Jib also adds the jib task to deploy a Docker image to a registry instead of the local Docker daemon. Jib can use several command-line credential helpers to authenticate with registries, or we can set authentication information in the Gradle build file. Check Jib on GitHub for more documentation of the options and features.
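
As an illustrative sketch (the registry name and the Gradle property names are made up for this example, and real projects should prefer credential helpers), authentication for a target registry could be configured in the build file like this:

// File: build.gradle
...
jib {
    to {
        image = 'registry.example.com/mrhaki/springboot-sample:1.0.0-SNAPSHOT'
        auth {
            username = findProperty('registryUsername')
            password = findProperty('registryPassword')
        }
    }
}
...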

Written with Spring Boot 2.0.4.RELEASE and Jib Gradle plugin 0.9.10.

September 2, 2018

Awesome Asciidoctor: Document Attributes With Styling

Document attributes in Asciidoctor are very powerful. We can assign values to document attributes and reference the attribute names in our document enclosed between curly braces. Asciidoctor will fill in the value when the document is transformed. Instead of a plain value we can also use styling markup in the document attribute definition. We must use the passthrough macro and allow for quotes substitution.

In the following example document we define three document attributes: cl-added, cl-updated and cl-changed. We use the passthrough macro with quotes substitution to assign CSS classes:

= Attributes with styles
// Include contents of docinfo.html
// in HTML head with CSS style
// definitions for .label.added,
// .label.changed and .label.updated
// used in the document attributes
// cl-added, cl-changed and cl-updated.
:docinfo1:

// Document attributes with styling,
// using the passthrough macro
// and quotes substitution.
// We can use quotes or the short-hand q.
:cl-added: pass:quotes[[.label.added]#Added:#]
:cl-changed: pass:q[[.label.changed]#Changed:#]
:cl-updated: pass:q[[.label.updated]#Updated:#]


== Sample section

* {cl-added} Document attributes for document.
* {cl-changed} Definition of attributes to include
 more options.
* {cl-updated} New version of Asciidoctor.

Notice we need a file docinfo.html with the CSS style definitions:

<style>
.label {
    color: #fff;
    padding: .2em .6em .3em;
    font-weight: 700;
    border-radius: .25em;
    font-size: 90%;
}

.added {background-color: #007700;}
.changed {background-color: #088;}
.updated {background-color: #3344bb;}
</style>

When we run Asciidoctor to get HTML output, the attribute references are replaced by the styled labels in the generated document.
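
For example, the first list item should be rendered roughly like this (the exact markup can differ per Asciidoctor version):

<li>
<p><span class="label added">Added:</span> Document attributes for document.</p>
</li>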

Written with Asciidoctor 1.5.7.1.