Application Logging in Jmix
To monitor and understand the behavior of a running Jmix application, effective logging is essential. The Java ecosystem provides mature tools and methods for implementing application logging. This guide will demonstrate how to leverage this ecosystem within a Jmix application.
Requirements
If you want to implement this guide step by step, you will need the following:
- Download the sample project. You can download the completed sample project, which includes all the examples used in this guide. This allows you to explore the finished implementation and experiment with the functionality right away. Download and unzip the source repository, or clone it using git:

  git clone https://github.com/jmix-framework/jmix-application-logging-sample.git

- Alternatively, you can start with the base Jmix Petclinic project and follow the step-by-step instructions in this guide to implement the features yourself.
What We Are Going to Build
In this guide, we extend the Jmix Petclinic application by configuring custom log outputs, adjusting log levels, and adding context-sensitive MDC values to ensure that key information, such as the Pet ID, is consistently included in log entries. We enable SQL logging to gain insight into the database queries being executed, which is crucial for diagnosing performance issues and understanding how data is being retrieved. Finally, we integrate Jmix with a centralized log management solution (Elasticsearch and Kibana), making it easier to monitor and analyze logs from a unified interface.
Why Application Logging is Essential
Once an application is deployed, logging becomes one of the primary ways to gain insight into its operational state. Logs offer visibility into the application’s flow, making them essential for understanding behavior in production. They allow developers to identify and diagnose issues by capturing details about unexpected events or failures, along with the context needed to debug them.
In modern software systems, Observability refers to the ability to understand an application’s internal state by examining its outputs, such as logs, metrics, and traces. Observability extends beyond logging to include metrics and custom business events, which together create a fuller picture of an application’s performance and user interactions. Metrics track quantitative data (like response times or resource usage), while business events capture high-level actions within the system. Together with logging, these elements support monitoring and troubleshooting effectively.
In this guide, we’ll focus on logging as a foundational aspect of observability, exploring practical configurations and tools for effective monitoring in Jmix applications.
In the example below, two log statements are generated. When a Pet instance is successfully stored, an info log captures the pet’s details. If storing the Pet fails, an error log records information about the failure, aiding in troubleshooting and monitoring.
public Optional<Pet> savePet(Pet pet) {
    try {
        Pet savedPet = dataManager.save(pet);
        log.info("Pet {} was saved correctly", pet);
        return Optional.of(savedPet);
    } catch (Exception e) {
        log.error("Pet {} could not be saved", pet, e);
        return Optional.empty();
    }
}
Log messages are typically written either to standard output in the console or to a text file, allowing administrators and developers to access them for review. In production environments, where direct debugging is limited, logging becomes an essential diagnostic tool for understanding application behavior. The above code results in the following log statement:
2024-10-11T12:33:35.664+02:00 INFO 80773 --- [nio-8080-exec-3] io.jmix.petclinic.service.PetService : Pet io.jmix.petclinic.entity.pet.Pet-acf0839d-d96c-a286-9b0d-452548ebb70a [detached] was saved correctly
This log entry captures a successful save action, providing useful context about the Pet entity and where the action took place.
The Java Logging Ecosystem
Logging has always played a significant role in the Java ecosystem, just as in many other platforms. Several APIs and libraries provide mechanisms for logging in Java applications, helping separate the technical task of generating log data from the process of actually "writing a log message" to a file, console, or other destination.
When you call something like log.info, you are simply declaring that you want to log a message. The logging libraries then handle the rest: determining where the message should be sent (to a file, standard output, etc.), how it should be formatted, and under what circumstances the message should actually be logged (based on log levels or configurations). This separation allows for flexible, configurable logging behavior across different environments.
Jmix uses Logback as its standard logging library. This choice follows Spring Boot’s default behavior and provides stability, extensive configuration options, and seamless integration within Jmix applications. While alternatives exist, Logback’s tight integration with Spring Boot makes it the recommended choice for Jmix applications.
Although Jmix uses Logback as its logging library under the hood, when configuring loggers in your application code, you typically interact with the more abstract logging interface, SLF4J (Simple Logging Facade for Java), rather than Logback directly. SLF4J provides a set of abstract APIs for logging at different levels (DEBUG, INFO, WARN, etc.), allowing developers to work with logging in a flexible, generic way without being tied to any specific logging implementation.
Underneath, Logback implements the SLF4J APIs, handling the actual writing of log messages to files, consoles, or other logging destinations. As a developer, you don’t need to interact with Logback’s classes directly. This separation of concerns keeps your logging modular and makes it possible to swap out the underlying logging library later without changing every place where logging is used.
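To make the facade idea concrete: because application code compiles only against the org.slf4j API, replacing the backend is in principle a build-level change. The following Gradle sketch shows the standard Spring Boot approach of excluding the default Logback starter in favor of Log4j2. It is illustrative only; as noted above, staying with Logback is the recommended choice for Jmix applications:

configurations.all {
    // Remove the default Logback-based starter...
    exclude group: 'org.springframework.boot', module: 'spring-boot-starter-logging'
}
dependencies {
    // ...and plug in Log4j2 instead. All existing log.info(...) calls keep
    // working, because they only reference the SLF4J API.
    implementation 'org.springframework.boot:spring-boot-starter-log4j2'
}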
This guide will focus on leveraging this combination, with examples of configuring Logback in a Jmix application.
For more details on setting up and customizing logging in Spring Boot, including Logback configurations and integration with SLF4J, refer to the official Spring Boot documentation on logging.
Writing Log Messages in Jmix
Writing a log message in a Jmix application is straightforward, much like in any other Java application. Jmix, by default, is configured with SLF4J and Logback, which are also used internally for framework logging.
To log information within a class, let’s revisit the example from earlier and take a closer look at how the logger is configured and used. Below, we define a logger and use it to write log messages based on the success or failure of saving a Pet:
import io.jmix.core.DataManager;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

import java.util.Optional;

@Component("petclinic_PetService")
public class PetService {

    private static final Logger log = LoggerFactory.getLogger(PetService.class); (1)

    private final DataManager dataManager;

    public PetService(DataManager dataManager) {
        this.dataManager = dataManager;
    }

    public Optional<Pet> savePet(Pet pet) {
        try {
            Pet savedPet = dataManager.save(pet);
            log.info("Pet {} was saved correctly", pet); (2)
            return Optional.of(savedPet);
        } catch (Exception e) {
            log.error("Pet {} could not be saved", pet, e); (3)
            return Optional.empty();
        }
    }
}
(1) Defines the logger for this class via SLF4J.
(2) Logs an info message upon successful save, with variable data passed as placeholders ({}).
(3) Logs an error message if saving fails, including exception details.
Logging Levels
In any application, there is often a need to capture different kinds of events: sometimes errors that need to be addressed, and sometimes routine information that you might want to review later. To express these varying levels of importance, SLF4J provides log levels, which indicate how critical or routine a message is. These log levels are arranged in increasing order of severity:
- TRACE
- DEBUG
- INFO
- WARN
- ERROR
These levels act as rough categories for the type of event being logged, helping you understand whether something minor or significant has occurred. For example, INFO might be used for routine updates, while ERROR indicates something has gone wrong. You can filter logs by these levels to focus on specific types of events.
There is no strict rule on when to use each level; it’s up to you as the developer to decide what INFO, WARN, and ERROR mean within the context of your application. The goal is to use these categories in a way that best fits your specific use case.
For more information, see the official SLF4J documentation on logging levels.
Logging Configuration
In Spring Boot applications, you can adjust logging configurations in two main ways: using the application.properties file for simple configurations, or by creating a logback-spring.xml file for more advanced settings.

The application.properties file is useful for defining basic settings such as log levels and destinations, while the logback-spring.xml file offers more extensive options like defining appenders and formatting log output in greater detail.
application.properties
In Spring Boot, including Jmix applications, you can easily adjust logging levels and behaviors through the application.properties file. By default, Jmix adds a handful of useful logging-level settings for different parts of the framework to the default application.properties. You can modify these levels to increase or decrease verbosity depending on your needs.
logging.level.org.atmosphere = warn
# 'debug' level logs SQL generated by EclipseLink ORM
logging.level.eclipselink.logging.sql = debug
# 'debug' level logs data store operations
logging.level.io.jmix.core.datastore = info
# 'debug' level logs access control constraints
logging.level.io.jmix.core.AccessLogger = info
# 'debug' level logs all Jmix debug output
logging.level.io.jmix = info
logging.level.liquibase = warn
logging.level.io.jmix.petclinic = info
This configuration defines different logging levels for various components of the Jmix application and underlying libraries like EclipseLink and Liquibase. For example:
- The logging level for io.jmix is set to INFO, which means only important informational messages, warnings, and errors will be logged.
- The level for liquibase is set to WARN, so only warning and error messages are logged, reducing verbosity.
To change a logging level, simply update the corresponding line in application.properties, and the new level will take effect after restarting the application. This flexibility allows you to tailor the logging output for different environments, such as more detailed logging in development and more minimal logs in production.
You can configure logging levels for any package or class that is not already listed by simply adding a corresponding entry, such as logging.level.io.jmix.petclinic.service, to the application.properties file. You can also specify the logging level for individual classes directly.
For example:
logging.level.io.jmix.petclinic.service = debug
This configuration sets all classes in the service package of the sample application to log messages at the DEBUG level and above.
For more information on logging levels and configuration options, refer to the official Spring Boot documentation: Spring Boot Logging Levels Documentation.
Advanced Logback Configuration
While application.properties provides basic logging configuration, you can create a logback-spring.xml file in the src/main/resources directory for more advanced customization. This file allows you to define appenders, log formats, and manage more complex logging destinations.
A simple example configuration, anticipating the Logstash setup described later in this guide, might look like this:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>

    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>localhost:5044</destination>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>

    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="LOGSTASH"/>
    </root>

    <logger name="org.springframework.web" level="WARN"/>
</configuration>
Using a custom logback-spring.xml allows for greater flexibility in managing how logs are output and stored, offering more fine-tuned control compared to the application.properties file.
Environment Variables
In addition to configuring logging levels in application.properties, Spring Boot allows you to modify logging levels dynamically using environment variables. This approach is especially useful in containerized environments (e.g., Docker or Kubernetes), where configuration changes can be made at deployment time without modifying the source code.
To change a logging level using an environment variable, set a variable with the LOGGING_LEVEL_ prefix followed by the package or class name, with dots replaced by underscores:
$ docker run -e LOGGING_LEVEL_IO_JMIX_PETCLINIC_SERVICE=DEBUG my-jmix-app
In this example, the logging level for the io.jmix.petclinic.service package is set to DEBUG at deployment time. You can also combine this with deployment-specific configurations to ensure that different logging levels are applied based on the environment.
This dynamic approach is useful for scenarios where you want to adjust the logging verbosity without changing the application’s configuration files. It supports more flexibility, especially when managing multiple deployments or environments.
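The same relaxed binding applies in a Docker Compose file. A hypothetical sketch, where the service name and image are placeholders:

services:
  petclinic:
    image: my-jmix-app
    environment:
      # Equivalent to logging.level.io.jmix.petclinic.service = debug
      - LOGGING_LEVEL_IO_JMIX_PETCLINIC_SERVICE=DEBUG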
For more details on custom Logback configuration in Spring Boot, refer to the official Spring Boot documentation on custom log configuration.
Logging SQL Statements
Since Jmix applications interact heavily with the database, it is often important to understand exactly how the application communicates with the database. However, sometimes it’s not possible to get the full picture just by looking at the application source code. In such cases, enabling SQL logging can be very useful, especially when trying to diagnose performance issues, identify inefficient query execution, or verify that the correct database operations are being performed.
By default, Jmix sets the logging level for SQL logging to INFO in the application.properties file, which does not activate the EclipseLink SQL logs. However, you can easily adjust this to DEBUG by updating the logging level for eclipselink.logging.sql, allowing you to see detailed SQL queries and parameter bindings.
# 'debug' level logs SQL generated by EclipseLink ORM
logging.level.eclipselink.logging.sql = debug
With this configuration, SQL statements are generated and logged, allowing you to closely inspect the actual database queries being executed. This helps verify that the correct queries are being sent to the database, ensuring that your application is functioning as expected. It also provides insight into how data is fetched and processed:
2024-10-23T07:29:53.664+02:00 DEBUG 6166 --- [nio-8080-exec-8] eclipselink.logging.sql : <t 459632279, conn 712486116> SELECT ID, BIRTHDATE, CREATED_BY, CREATED_DATE, DELETED_BY, DELETED_DATE, IDENTIFICATION_NUMBER, LAST_MODIFIED_BY, LAST_MODIFIED_DATE, NAME, VERSION, OWNER_ID, TYPE_ID FROM PETCLINIC_PET WHERE ((ID = ?) AND (0=0))
bind => [098b43a9-e9a2-e6c7-be5d-10f650e3849b]
Logging SQL statements is especially useful when diagnosing performance issues in your application. For example, if you notice that the application is displaying data slowly in the UI, or if fetching large datasets from multiple entities is causing the UI to lag, it might be an indication of an N+1 query issue or improper use of lazy loading. In such cases, enabling SQL logging can help identify these problems, and optimization may be needed by adjusting the fetch strategy to avoid unnecessary queries.
Here’s an example where multiple queries are executed to load a Pet, its Owner, and Pet Type. The fact that three separate queries are executed suggests that optimization through the correct use of a fetch plan could improve performance:
2024-10-23T07:29:53.664+02:00 DEBUG 6166 --- [nio-8080-exec-8] eclipselink.logging.sql : <t 459632279, conn 712486116> SELECT ID, BIRTHDATE, CREATED_BY, CREATED_DATE, DELETED_BY, DELETED_DATE, IDENTIFICATION_NUMBER, LAST_MODIFIED_BY, LAST_MODIFIED_DATE, NAME, VERSION, OWNER_ID, TYPE_ID FROM PETCLINIC_PET WHERE ((ID = ?) AND (0=0))
bind => [098b43a9-e9a2-e6c7-be5d-10f650e3849b]
2024-10-23T07:29:53.665+02:00 DEBUG 6166 --- [nio-8080-exec-8] eclipselink.logging.sql : <t 459632279, conn 712486116> [0 ms] spent
2024-10-23T07:29:53.666+02:00 DEBUG 6166 --- [nio-8080-exec-8] eclipselink.logging.sql : <t 459632279, conn 2009192390> SELECT ID, ADDRESS, CITY, CREATED_BY, CREATED_DATE, DELETED_BY, DELETED_DATE, EMAIL, FIRST_NAME, LAST_MODIFIED_BY, LAST_MODIFIED_DATE, LAST_NAME, TELEPHONE, VERSION FROM PETCLINIC_OWNER WHERE ((ID = ?) AND (0=0))
bind => [c3bb4197-4189-c26a-2aa9-35c0ebb9faa4]
2024-10-23T07:29:53.667+02:00 DEBUG 6166 --- [nio-8080-exec-8] eclipselink.logging.sql : <t 459632279, conn 2009192390> [1 ms] spent
2024-10-23T07:29:53.668+02:00 DEBUG 6166 --- [nio-8080-exec-8] eclipselink.logging.sql : <t 459632279, conn 662306096> SELECT ID, COLOR, CREATED_BY, CREATED_DATE, DELETED_BY, DELETED_DATE, LAST_MODIFIED_BY, LAST_MODIFIED_DATE, NAME, VERSION FROM PETCLINIC_PET_TYPE WHERE ((ID = ?) AND (0=0))
bind => [1e2abb1f-5f77-865e-17fa-b67e85497523]
EclipseLink uses prepared statements to prevent SQL injection. The actual parameter values are shown in the bind => [...] line that follows each statement.
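To reduce the number of queries in cases like this, the related entities can be requested as part of a single fetch plan. The following is a minimal sketch using the Jmix DataManager fluent API; the petId variable and the attribute names owner and type are assumptions based on the queries above:

// Load the Pet together with its Owner and PetType in one fetch plan,
// instead of triggering a separate query per reference.
Pet pet = dataManager.load(Pet.class)
        .id(petId)
        .fetchPlan(fp -> fp.addFetchPlan(FetchPlan.BASE)  // io.jmix.core.FetchPlan
                .add("owner")
                .add("type"))
        .one();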
Context Information in Logs
As logging grows in an application, certain pieces of information—such as IDs of business objects or user details—tend to appear frequently in log messages. In the petclinic example, many log messages would likely contain the Pet ID to identify which pet the business operation is acting upon. To avoid repeating this information in every log statement, we can centralize it.
SLF4J provides a feature called MDC (Mapped Diagnostic Context), which allows you to set specific values once and have them automatically included in every log message. This simplifies logging by ensuring that important context information is consistently logged without needing to be manually added to each log statement. Additionally, MDC values are logged as dedicated key-value pairs, ensuring they always appear in the same format and location within the log messages. This consistency makes it much easier to filter and analyze logs, compared to manually adding context that may vary between log statements.
In the petclinic example, the PetService leverages the MDC context to track the Pet’s identification number before performing an update. The Pet ID is set in the MDC, and all log messages related to that operation automatically include it, without needing to manually add it each time.
PetService.java
import io.jmix.core.DataManager;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

import java.util.Optional;

@Component("petclinic_PetService")
public class PetService {

    private static final Logger log = LoggerFactory.getLogger(PetService.class);

    private final DataManager dataManager;

    public PetService(DataManager dataManager) {
        this.dataManager = dataManager;
    }

    public Optional<Pet> updatePet(Pet pet) {
        MDC.put("petId", pet.getIdentificationNumber()); (1)
        log.info("Updating Pet"); (2)
        try {
            Pet updatedPet = dataManager.save(pet);
            log.info("Pet updated successfully"); (3)
            return Optional.of(updatedPet);
        } catch (Exception e) {
            log.error("Failed to update Pet", e); (4)
            return Optional.empty();
        } finally {
            // Ensure MDC is cleared to avoid leaking context into unrelated log entries
            MDC.remove("petId"); (5)
        }
    }
}
(1) The Pet’s identification number is set into the MDC.
(2) A log message is recorded before updating the Pet.
(3) After the Pet is successfully updated, another log confirms the success.
(4) If an error occurs, an error log is created with exception details.
(5) The MDC is cleared at the end of the method to avoid context leakage.
The above code will produce the following log messages with the Pet ID and user automatically included using MDC:
2024-10-21T08:13:30.392+02:00 jmixUser:joy petId:088 INFO 30254 --- [nio-8080-exec-5] io.jmix.petclinic.service.PetService : Updating Pet
2024-10-21T08:13:30.407+02:00 jmixUser:joy petId:088 INFO 30254 --- [nio-8080-exec-5] io.jmix.petclinic.service.PetService : Pet updated successfully
In these log messages, the Pet ID (088) and jmixUser (joy) are consistently included at the beginning of the log line without any manual effort for each log statement.
By default, Jmix automatically writes the current username into the MDC under the key jmixUser.
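How the MDC values end up in the console line is a matter of the log pattern. One way to achieve output like the example above is to extend Spring Boot’s level pattern with Logback’s %X{key} conversion word; this is a sketch, and the sample project may configure it differently:

# Prepends the MDC values to the level in every log line
logging.pattern.level = jmixUser:%X{jmixUser} petId:%X{petId} %5p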
The MDC context allows you to set contextual data, like a Pet ID, at any point in your application (such as in a View-Controller). This data will automatically propagate across method calls and class boundaries, like when interacting with a Service. Once set, the Pet ID is available to all subsequent log messages in that flow, ensuring that consistent information is logged without needing to pass the ID explicitly between classes. This feature simplifies logging across different layers of your application.
Another advantage is that the MDC context is preserved when calling deeper framework-level code. This means that when you invoke a framework method, like saving an entity with Jmix’s DataStore or even deeper with EclipseLink, the MDC context remains intact. To see this in action, let’s activate the framework data storage logs and see how MDC values are also included there:
logging.level.eclipselink.logging.sql=debug
logging.level.io.jmix.core.datastore=debug
Here’s an example log output showing how the Pet ID (087) and jmixUser (joy) are included not only in the application-level log messages but also in the internal framework logs during the pet update operation:
2024-10-23T07:29:53.659+02:00 jmixUser:joy petId:087 INFO 6166 --- [nio-8080-exec-8] io.jmix.petclinic.service.PetService : Updating Pet
2024-10-23T07:29:53.660+02:00 jmixUser:joy petId:087 DEBUG 6166 --- [nio-8080-exec-8] i.jmix.core.datastore.AbstractDataStore : save: store=main, entities to save: [io.jmix.petclinic.entity.pet.Pet-098b43a9-e9a2-e6c7-be5d-10f650e3849b [detached]], entities to remove: []
2024-10-23T07:29:53.675+02:00 jmixUser:joy petId:087 DEBUG 6166 --- [nio-8080-exec-8] eclipselink.logging.sql : <t 459632279, conn 1441249747> UPDATE PETCLINIC_PET SET BIRTHDATE = ?, LAST_MODIFIED_DATE = ?, VERSION = ? WHERE ((ID = ?) AND (VERSION = ?))
bind => [1997-08-16, 2024-10-23T07:29:53.670+02:00, 3, 098b43a9-e9a2-e6c7-be5d-10f650e3849b, 2]
2024-10-23T07:29:53.683+02:00 jmixUser:joy petId:087 INFO 6166 --- [nio-8080-exec-8] io.jmix.petclinic.service.PetService : Pet updated successfully
In this log output, the Pet ID and jmixUser values are consistently included not only in your custom application logs (io.jmix.petclinic.service.PetService) but also in the framework’s internal logs, such as those from the Jmix AbstractDataStore and EclipseLink SQL operations. This allows you to easily link SQL queries or other framework-level operations to their functional context, even when the context might not be obvious from the logs themselves.
It is important to clear the MDC context after the operation is complete using MDC.remove(), typically in a finally block as shown above. Otherwise, the values leak into log messages of subsequent, unrelated operations processed by the same thread.
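As an alternative to the explicit finally block, SLF4J also offers MDC.putCloseable(), which removes the key automatically when a try-with-resources block exits. A minimal sketch:

// The returned MDCCloseable removes "petId" when the try block exits,
// even if the update throws an exception.
try (MDC.MDCCloseable ignored = MDC.putCloseable("petId", pet.getIdentificationNumber())) {
    log.info("Updating Pet");
    dataManager.save(pet);
}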
For more details on how MDC works, refer to the official Logback documentation: Logback Manual: MDC.
Centralized Logging
With centralized logging, all log data is collected and stored in one place, rather than scattered across individual servers or files. This makes it much easier to search, access, and analyze logs, no matter the size of your application. Even for smaller applications, centralized logging can be helpful because it allows you to quickly find specific log entries and troubleshoot issues more efficiently.
Centralized logging provides benefits like:
- Easy accessibility: Logs can be accessed through a web interface, making them searchable and easier to explore. This enables real-time troubleshooting and monitoring without requiring direct access to the servers.
- Collaboration: Centralized logging allows team members to share and link logs, which can help in debugging or reviewing incidents together.
- Correlating logs: Logs from multiple services can be aggregated in one place, making it easier to correlate events across different systems or services.
- Alerting: Many centralized logging solutions offer built-in alerting capabilities. This allows you to set up notifications for specific log messages, so you can be immediately notified when critical errors or issues occur.
- Enhanced observability: Centralized logging solutions often integrate with metrics collection systems, combining logs with performance metrics. This ties directly into the concept of observability, where logs, metrics, and other signals are used together to gain a more comprehensive view of your application’s performance and health.
There are many providers for centralized logging solutions, such as Datadog, New Relic, or self-hosted options. In this example, we will use the popular ELK Stack (Elasticsearch, Logstash, Kibana) to demonstrate how to integrate Jmix with a centralized logging solution.
Setting up the ELK Stack
To set up the ELK Stack, we will use Docker to run Elasticsearch, Logstash, and Kibana. This setup will allow us to collect, store, and visualize logs in real time. Start by creating a docker-compose.yml file in the root of your project and add the following configuration:
version: '3'
services:
  elasticsearch:
    # Elasticsearch is the database where log data is stored.
    # Logstash sends processed logs here, and it provides search and analytics capabilities.
    image: docker.elastic.co/elasticsearch/elasticsearch:8.10.2
    environment:
      - discovery.type=single-node # Running in single-node mode for development.
      - xpack.security.enabled=false # Disabling security for easier access in development.
    ports:
      - "9200:9200" # Exposing Elasticsearch's API on port 9200.
    volumes:
      - esdata:/usr/share/elasticsearch/data # Persisting data across container restarts.

  logstash:
    # Logstash collects logs from your application and forwards them to Elasticsearch.
    # The application's logging configuration (logback-spring.xml) defines Logstash as the destination for logs.
    # Logstash then ensures the logs are properly structured and stored in Elasticsearch.
    image: docker.elastic.co/logstash/logstash:8.10.2
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf # Logstash pipeline configuration.
    ports:
      - "5044:5044" # Listening on port 5044 for incoming logs from the application.
    depends_on:
      - elasticsearch # Ensuring Elasticsearch is ready before Logstash starts.

  kibana:
    # Kibana provides a web interface to visualize and explore log data stored in Elasticsearch.
    # You can create dashboards, filter logs, and search for specific data fields.
    image: docker.elastic.co/kibana/kibana:8.10.2
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200 # Kibana connects to Elasticsearch at this address.
    ports:
      - "5601:5601" # Kibana's web interface is accessible on port 5601.
    depends_on:
      - elasticsearch # Ensures Elasticsearch starts before Kibana.

volumes:
  esdata:
    driver: local # Persistent storage for Elasticsearch data.
This configuration starts three containers: Elasticsearch is responsible for storing the log data, Logstash receives logs from the Jmix application and forwards them to Elasticsearch for storage, and Kibana provides a web interface where you can visualize and search through the log data.
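The Compose file above mounts a logstash.conf pipeline definition from the project root. A minimal sketch of such a pipeline reads JSON lines from the TCP input and forwards them to Elasticsearch; the index name is an assumption, so adjust it to your conventions:

input {
  tcp {
    port => 5044          # Matches the port exposed in docker-compose.yml.
    codec => json_lines   # LogstashTcpSocketAppender sends JSON, one event per line.
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]   # Service name from the Compose network.
    index => "jmix-logs-%{+YYYY.MM.dd}"      # Hypothetical daily index naming.
  }
}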
You can start these services with the following command:
$ docker compose up
Once the services are running, Kibana will be accessible at http://localhost:5601, where you can explore and visualize logs in real time.
Configure Logging to Logstash
Next, we will configure the Jmix application to send logs to Logstash, which will forward them to Elasticsearch. This involves two steps: adding the necessary dependencies and modifying the logging configuration.
First, add the following dependencies to your build.gradle file:
dependencies {
    // ...
    implementation 'net.logstash.logback:logstash-logback-encoder:8.0'
    implementation 'ch.qos.logback:logback-classic:1.5.6'
}
These dependencies include the Logstash encoder and Logback classic, allowing us to define a Logstash appender in our logging configuration.
Next, modify the logback-spring.xml file to include a Logstash appender, which will send logs to the Logstash service:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>

    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>localhost:5044</destination> <!-- <1> -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>

    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="LOGSTASH"/> <!-- <2> -->
    </root>

    <logger name="org.springframework.web" level="WARN"/>
</configuration>

(1) Logs are sent to the Logstash TCP input listening on localhost:5044, matching the port exposed in the Docker Compose setup.
(2) The LOGSTASH appender is registered on the root logger, so all log output is forwarded in addition to the console.
With this setup, the LogstashTcpSocketAppender sends logs from the Jmix application to Logstash. This allows us to centralize and process logs through Elasticsearch and visualize them in Kibana.
Viewing Logs in Kibana
Once the ELK Stack is up and running, you can access Kibana at http://localhost:5601/app/logs. This web interface allows you to search, filter, and visualize logs sent from your Jmix application, drill down into specific events, correlate logs across services, and create dashboards for monitoring.
The MDC values are stored in the Elasticsearch index as dedicated fields, which makes it possible to easily search for them and display them as columns in Kibana. This allows you to filter logs by MDC values such as petId or jmixUser and see them directly in the log view, alongside the standard log data, making it easier to analyze logs based on the custom context from your application.
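For example, a KQL query in Kibana’s Discover view could narrow the log stream down to a single pet and user. This assumes the MDC keys are indexed as top-level fields, as produced by the LogstashEncoder:

petId : "088" and jmixUser : "joy"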
To learn more about using Kibana to search and analyze logs, refer to the official Kibana documentation: Kibana Discover Documentation.
With this setup, you can now efficiently monitor and analyze your application’s logs in a centralized location, making it easier to troubleshoot, optimize, and collaborate on any issues that arise.
Summary
This guide demonstrated how effective logging can be implemented in a Jmix application using the Java ecosystem. We explored basic logging concepts, how to use SLF4J and Logback to write log messages, and advanced features like MDC (Mapped Diagnostic Context) to automatically include contextual information, such as a Pet ID, across log messages.
We also looked at how logging levels can be customized for different environments, either through configuration files or dynamic environment variables. Additionally, we touched upon centralized logging solutions, like Elasticsearch, for managing and analyzing logs externally.
Logging is essential for observability and debugging in production environments. Properly configured logging ensures that administrators can track down issues without direct access to the running application, making it a core aspect of application maintenance and monitoring.