Docker includes multiple logging mechanisms to help you get information from running containers and services. These mechanisms are called logging drivers. Each Docker daemon has a default logging driver, which each container uses unless you configure it to use a different logging driver, or “log-driver” for short.
Whenever a new container is created, Docker uses the json-file log driver unless another log driver has been specified; this driver stores container logs as JSON-formatted files on the host. In addition to the logging drivers included with Docker, you can also implement and use logging driver plugins to integrate other logging tools.
Unlike data volumes, the Docker logging driver reads data directly from the container's stdout and stderr streams. The default configuration writes logs to a file on the host machine, but changing the logging driver lets you forward events to syslog, gelf, journald, and other endpoints.
You'll probably notice performance benefits because containers won't have to write to and read from log files. However, this approach has drawbacks as well: the docker logs command only works with a few drivers (such as json-file, local, and journald), most log drivers ship logs without parsing them, and containers can fail to start or stall when a remote logging endpoint (for example, a TCP server) is unavailable.
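For instance, with the default json-file driver you can follow a container's output using docker logs (the container name web and the nginx image below are only illustrative):

docker run -d --name web nginx
docker logs --tail 50 -f web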
Configure the default logging driver
You have two choices when configuring the logging driver -
1. Set up a default logging driver for all containers
2. Specify a logging driver for each container
In the first case, the default logging driver is json-file, but you have many other options such as logagent, syslog, fluentd, journald, splunk, etc. You can switch to another logging driver by editing the Docker daemon configuration file and changing the log-driver parameter, or you can keep the local files and ship them with your preferred log shipper.
# /etc/docker/daemon.json { "log-driver": "journald" } systemctl restart docker
Alternatively, you can configure a logging driver on a per-container basis. Because Docker applies the default logging driver when you start a new container, you need to specify a different driver at creation time by using the --log-driver and --log-opt parameters.
docker run --log-driver syslog --log-opt syslog-address=udp://syslog-server:514 \
  alpine echo hello world
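To check which driver a particular container is actually using, you can inspect its log configuration (my-container below is a placeholder for your container's name or ID):

docker inspect --format '{{.HostConfig.LogConfig.Type}}' my-container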
Where Are Docker Logs Stored By Default?
The logging driver enables you to choose how and where to ship your data. The default logging driver as mentioned above is a JSON file located on the local disk of your Docker host:
/var/lib/docker/containers/[container-id]/[container-id]-json.log
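Rather than assembling this path by hand, you can ask Docker for the exact log file location and tail it directly (my-container is a placeholder):

docker inspect --format '{{.LogPath}}' my-container
sudo tail -f "$(docker inspect --format '{{.LogPath}}' my-container)"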
If you use a logging driver other than json-file, local, or journald, you won't find any log files on your disk: Docker keeps no local copies and instead sends the logs across the network, which makes them vulnerable to network problems. In extreme cases, if the logging driver cannot deliver the logs, Docker may even block your container; whether this can happen depends on the delivery mode you choose.
Configure the delivery mode of log messages from container to log driver
Docker containers can write logs by using either the blocking or non-blocking delivery mode. The mode you choose will determine how the container prioritizes logging operations relative to its other tasks.
1. Direct / Blocking
Docker's default mode is blocking: every time a message needs to be delivered to the driver, the application is paused until delivery completes. This guarantees that every message reaches the driver, but it can add latency to your application: if the logging driver is busy, the container delays the application's other operations until the message has been delivered.
Depending on the logging driver you use, the latency differs. The default json-file driver writes logs very quickly since it writes to the local filesystem, so it's unlikely to block and cause latency. However, log drivers that need to open a connection to a remote server can block for longer periods and cause noticeable latency.
2. Non-Blocking
In non-blocking mode, the container writes its logs to an in-memory ring buffer, where they are kept until the logging driver is available to process them. Even if the driver is busy, the container can immediately hand off application output to the ring buffer and resume executing the application, so a high volume of logging activity won't affect the performance of the application running in the container. But there are downsides.
Non-blocking mode does not guarantee that the logging driver will log every event: if the buffer fills up, the oldest buffered messages are dropped before they can be shipped. You can use the max-buffer-size option to set the amount of RAM used by the ring buffer. The default for max-buffer-size is 1 MB, but if you have more RAM available, increasing the buffer size can improve the reliability of your container's logging.
Although blocking mode is Docker's default for new containers, you can set this to non-blocking mode by adding a log-opts item to Docker's daemon.json file.
# /etc/docker/daemon.json { "log-driver": "json-file", "log-opts": { "mode": "non-blocking" } }
Alternatively, you can set non-blocking mode on an individual container by using the --log-opt option in the command that creates the container:
docker run --log-opt mode=non-blocking alpine echo hello world
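If you also want to tune the ring buffer discussed above, the two options can be combined; the 4m value here is only an illustration:

docker run --log-opt mode=non-blocking --log-opt max-buffer-size=4m \
  alpine echo hello world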
Supported logging drivers -
The following logging drivers are supported: none, local, json-file, syslog, journald, gelf, fluentd, awslogs, splunk, etwlogs, and gcplogs.
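One quick way to see which log drivers your Docker daemon reports as available is to query its plugin list (the output format may vary between Docker versions):

docker info --format '{{.Plugins.Log}}'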
Limitations of logging drivers
Reading log information requires decompressing rotated log files, which causes a temporary increase in disk usage (until the log entries from the rotated files are read) and increased CPU usage while decompressing.
The capacity of the host storage where the Docker data directory resides determines the maximum size of the log file information.
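To keep log-related disk usage bounded when using the json-file driver, you can enable rotation and compression; the sizes below are only examples and should be adjusted to your environment:

# /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "compress": "true"
  }
}

systemctl restart docker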
With this I'll conclude the post here.
I hope you found this informative.
Thank you for reading!
*** Explore | Share | Grow ***