Cluster Level Logging in Kubernetes
Application and system logs can help you understand what is happening inside your cluster. The logs are useful for debugging problems and monitoring application and cluster activities.
The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.
However, the default functionality provided by a container engine or runtime is usually not enough for a complete logging solution. For example, if a container crashes, a pod is evicted, or a node dies, you will usually still want access to your application’s logs. Logs should therefore have separate storage and a lifecycle independent of nodes, pods, and containers.
This concept is called cluster-level logging. Cluster-level logging requires a separate backend store, inside or outside of your cluster.
To understand cluster-level logging in Kubernetes, you need to understand the following:
- Basic logging in Kubernetes
- Logging at the node level
- Cluster-level logging architectures
Basic logging in Kubernetes
First, you need to understand basic logging in Kubernetes, which writes data to the standard output stream. To demonstrate basic logging in Kubernetes, I will use a Pod with one container that writes some text to standard output once per second.
Following is the pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
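To see what this container will print, you can try its shell loop locally. This is a sketch of the same command from the pod spec above, limited to three iterations instead of running forever:

```shell
# The loop from the counter pod's args, run locally for 3 iterations.
i=0
while [ "$i" -lt 3 ]; do
  echo "$i: $(date)"   # one numbered, timestamped line per iteration
  i=$((i+1))
done
```

Once the pod is running in a cluster (for example via kubectl apply -f counter-pod.yaml, assuming that filename), you can read the same output with kubectl logs counter.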
Logging at the node level
Cluster-level logging architectures